Anthropomorphism and tech

The human inclination to imbue non-human entities with human-like characteristics, a phenomenon recognized as anthropomorphism, represents a deeply ingrained cognitive and social mechanism. This inherent tendency, far from being a mere whimsical projection or a cognitive anomaly, serves fundamental human needs, facilitating comprehension, interaction, and the construction of meaning within an intricate and frequently unpredictable world. The ubiquitous presence of anthropomorphism across diverse historical and cultural contexts – from ancient religious doctrines and elaborate mythological narratives to contemporary engagements with sophisticated computer systems – underscores its critical importance as a foundational aspect of human cognition and social engagement. This essay argues that the human necessity for anthropomorphism constitutes an indispensable element of our psychological architecture, driven by compelling cognitive demands for enhanced understanding, imperative social requirements for interaction, and potentially intrinsic biological predispositions that have evolved over millennia. The multifaceted nature of this phenomenon and its prevalence in technology fields (i.e., tech) warrant a thorough examination of its cognitive, social, and biological underpinnings in order to fully appreciate its enduring relevance in human experience.

Anthropomorphism has been studied and debated for centuries by scholars across diverse fields, including personality psychology, social psychology, theology, anthropology, and even speculative or para-psychology (Darwin, 1872/1998; Epley et al., 2007; Feuerbach, 1873; Freud, 1930; Hodgson, 1882, p. 229). It appears to be an innate tendency of human psychology, as even prehistoric archeological findings and ancient beliefs show signs of anthropomorphism: the Löwenmensch figurine (zoomorphism; Dalton, 2003), the Greek and Norse gods appearing as humans (anthropotheism; Henrichs, 2010), or, in the case of some religions, humans having been created in the image of a god (theomorphism; Bunta, 2007). Naturally, anthropomorphism remains everywhere in modern times, including in tech: the talking plush toy, the marketing announcement claiming that a product is smart, or, more recently, the claim that AI performs acts or exhibits processes generally attributed to humans, or even has intentions.

One of anthropomorphism's foremost and most fundamental roles is in reducing cognitive dissonance. The inherent human impetus to comprehend and elucidate the surrounding environment is a driving force behind this. When confronted with phenomena that are inherently complex, seemingly unpredictable, or otherwise beyond immediate human apprehension, attributing human-like intentions, emotions, and behaviors offers a familiar and readily accessible framework for interpretation. As early as the eighteenth century, the eminent philosopher David Hume sagaciously observed a "universal tendency among mankind to conceive all beings like themselves and to transfer to every object, those qualities" (as cited in Placani, 2024). This intrinsic propensity enables individuals to impose a semblance of order and predictability upon ambiguous or novel situations. For instance, early human civilizations frequently anthropomorphized natural forces, such as interpreting a tumultuous storm as a manifestation of an enraged deity or a bountiful harvest as the beneficence of a benevolent spirit, to construct explanatory narratives that provided coherence to events and afforded a perceived capacity for influence or appeasement. Beyond meteorological phenomena, this extended to the personification of celestial bodies, animals, or even inanimate objects within folklore and mythology worldwide, further exemplifying this deep-seated need to understand the unknown through the lens of the familiar human experience. The psychological comfort derived from attributing human-like agency to natural phenomena or abstract concepts allows individuals to feel a greater sense of mastery over their environment, transforming potentially chaotic or indifferent forces into entities with whom a relationship, however imagined, can be forged. This cognitive shortcut effectively simplifies complex realities, rendering them more manageable and less existentially threatening for the human mind.
Assimilating novel and complex information into pre-existing schemata of human behavior and motivation significantly reduces the cognitive load required to process it. Today, this inclination extends profoundly to intricate technological systems. The personal computer "sees" what we are doing, the automated HTTP client (i.e., crawler) "follows" URLs, and the email client "reads" our emails. More recently and notably, however, artificial intelligence is being attributed human-like qualities, including "reasoning," "understanding," "consciousness," or "intent." Even when such attributions are technically fallacious, they can substantially mitigate the perceived complexity and intimidating nature of these advanced technologies. Pfeuffer et al. (2019) emphatically assert that the core objective underlying the projection of human-like attributes onto non-human agents is to facilitate the understanding and explanation of their operational behavior and inferred intentions. This strategic cognitive approach facilitates the initial adoption and seamless integration of novel technologies into daily life, as users can intuitively apply pre-existing mental models of human interaction to these emergent entities, consequently diminishing the cognitive load required for effective engagement.

Anthropomorphism plays a significant role in accelerating the societal acceptance of new and complex technological innovations. By fostering a sense of familiarity and comfort, it reduces the initial apprehension often associated with these technologies. This cognitive scaffolding is crucial for navigating an increasingly complex technological landscape. It allows individuals to form intuitive and actionable mental representations of non-human systems, promoting a more fluid and less effortful interaction. Without this inherent tendency, the sheer complexity of the natural world and advanced technology might overwhelm human cognitive capacities, hindering effective adaptation to, and improvement of, that technology.

Beyond mere cognitive comprehension, anthropomorphism is pivotal in fostering social interaction and cultivating a sense of connection, even with unequivocally non-human entities. As fundamentally social organisms, humans are intrinsically wired for interpersonal engagement, and the human brain has evolved to become an exceptionally sensitive detector of social signals and cues (Scheele et al., 2015). This deeply ingrained, socially driven adaptation invariably leads to an almost automatic propensity to attribute social meaning and agency to inherently non-social entities (Scheele et al., 2015). The innate drive to socialize extends beyond biological organisms to inanimate objects and technological artifacts, transforming potential tools into perceived companions or collaborators. For instance, individuals often name their cars, talk to their plants, or even apologize to inanimate objects they accidentally bump into. These actions demonstrate a spontaneous application of social norms to non-social contexts. Extensive research within the fields of human-computer interaction and human-machine interaction consistently indicates that individuals tend to apply social heuristics – the established rules and norms governing human-human interaction – to computers, robotic systems, and other technologies that are deliberately designed with human or social cues (Ribino, 2023).

This phenomenon is particularly pronounced, and perhaps most consequential, in the rapidly expanding domain of human-machine interaction, where the rapid advancement of intelligent and human-like features in technology, encompassing sophisticated natural language processing capabilities, the simulation of empathetic responses, or even physical embodiment, necessitates a meticulous consideration of how an increasingly interpersonal interaction style profoundly influences human behavior (Ribino, 2023). For instance, the design of conversational agents (e.g., chatbots) often incorporates elements that encourage anthropomorphism, such as human-like names, avatars, and conversational styles, precisely because these features enhance user engagement and perceived helpfulness (Tsekouras et al., 2024). Users may develop emotional responses to virtual pets, form attachments to smart home devices that respond to voice commands, or even feel betrayed if a robotic vacuum cleaner (e.g., Roomba; Reeves, 2019) malfunctions. While this may appear to be a mere emotional reaction, such behavior reflects a deeper psychological phenomenon: the anthropomorphizing of non-human agents. Specifically, users attribute intentionality and agency to these devices, even though they lack consciousness or subjective experience. This tendency is linked to the activation of the Theory of Mind (ToM) system, a cognitive mechanism that allows humans to infer and attribute mental states to others to predict and interpret behavior (Premack & Woodruff, 1978). Remarkably, the ToM system can be engaged even in interactions with non-human agents, particularly when those agents exhibit behaviors that appear goal-directed or autonomous. As a result, machines that navigate environments or react to obstacles may be perceived as having intentionality, prompting social and emotional responses typically reserved for interactions with sentient beings.

The concept of politeness, a universal and foundational social norm observed in virtually all human-human interactions, is a salient illustration. Studies increasingly investigate the mechanisms through which politeness rules translate into human-machine interactions, consistently observing that anthropomorphized machines frequently elicit polite responses from human users (Ribino, 2023). For example, empirical research indicates that users routinely behave politely towards voice assistants. Machines engineered to adhere to socially acceptable norms are demonstrably more successful at eliciting sensitive information or guiding user behavior (Ribino, 2023). The evidence strongly suggests that anthropomorphism does more than render interaction feasible: it cultivates a more natural, efficacious, and frequently more agreeable mode of engagement with technology, fostering a nascent sense of relationship, trust, and even psychological comfort. These qualities prove pivotal for long-term user acceptance, sustained satisfaction, and continued reliance upon these technological systems. When users perceive a non-human entity as possessing discernible human-like qualities, they are demonstrably more inclined to engage with it on a deeper, more personal level, which in turn leads to heightened engagement and an amplified perception of utility. This dynamic underscores the critical role of anthropomorphism in shaping the evolving landscape of human-technology cooperation and coexistence, transforming what might otherwise be purely functional interactions into more meaningful and effective engagements, thereby blurring the lines between social and instrumental relationships and potentially mitigating feelings of isolation in an increasingly digital world.

Furthermore, compelling empirical evidence increasingly substantiates a biological underpinning for the human imperative to anthropomorphize, suggesting that this tendency is not merely a learned cultural construct but an innate, evolutionarily significant predisposition. Neuroscientific and psychological investigations have commenced exploring the intricate roles of specific neurochemicals and discrete brain regions in mediating anthropomorphic inclinations.
Notably, oxytocin, a hypothalamic peptide widely recognized for its pivotal involvement in social bonding, trust formation, and affiliative behaviors, has been strongly implicated in augmenting anthropomorphic tendencies (Scheele et al., 2015). Research has demonstrated that oxytocin enhances the attribution of animacy and social meaning to stimuli that are otherwise inanimate or non-social (Scheele et al., 2015). This neurobiological foundation suggests that anthropomorphism may constitute an evolutionarily advantageous trait, deeply integrated into our neurobiology to facilitate sophisticated social cognition and adaptive responses within a complex and dynamic environmental milieu. The capacity to rapidly assess and categorize entities as potentially social, even in instances where they are not, could have conferred significant survival advantages by enabling swift threat detection (e.g., perceiving a rustle in the bushes as an intentional predator, prompting a fight-or-flight response) or identifying opportunities for cooperation (e.g., interpreting animal behavior for hunting or domestication, leading to resource acquisition) within ancestral environments. This innate predisposition is further supported by anthropomorphism's extensive and pervasive history across diverse human cultures, as evidenced by ancient artistic representations depicting animals with human-like bodies dating back 30,000 years (Dalton, 2003; Pfeuffer et al., 2019). These historical artifacts are powerful testaments to this human trait's deep-seated and enduring nature, predating complex societal structures and technological advancements.
The universality of this phenomenon across disparate cultures and geographical locations, often manifesting in similar forms of animal personification or the attribution of spirits to natural elements, provides further compelling evidence for a shared, perhaps genetically encoded, cognitive architecture that favors anthropomorphic interpretations. This profound biological inclination reinforces the assertion that anthropomorphism is far from a superficial cognitive quirk; rather, it represents a fundamental and arguably unavoidable aspect of human existence, profoundly shaping how individuals perceive and interact with everything from companion animals to celestial bodies, and with increasing frequency, artificial intelligence. The persistence of this trait across human development and its deep biological roots underscore its adaptive significance in navigating a world populated by both living and non-living entities, providing a fundamental lens through which reality is interpreted and engaged, and ultimately contributing to human survival and flourishing by fostering a sense of connection and predictability in an otherwise chaotic environment.

While the human propensity for anthropomorphism is undeniably potent and confers substantial benefits, it is imperative to acknowledge that this propensity can sometimes lead to misrepresentations or exaggerated perceptions of non-human entities. In the contemporary context of advanced artificial intelligence, this can manifest as "AI hype" or a cognitive "fallacy," erroneously attributing human-like traits to systems that do not genuinely possess them, potentially distorting moral judgments, perceptions of responsibility, and the establishment of trust (Barrow, 2024; Placani, 2024). Such misattributions can lead to unrealistic expectations regarding AI capabilities, for instance, assuming an AI possesses consciousness, genuine empathy, or subjective experience when it merely simulates these traits through complex algorithms and data processing. These assumptions can result in user disappointment, misuse of technology, or even ethical dilemmas if individuals place undue reliance or emotional investment in non-sentient systems. Conversely, it can also lead to an unwarranted sense of moral agency in machines, raising complex ethical questions about accountability, rights, and the very definition of personhood that may not be applicable or appropriate given current technological limitations. For example, if an autonomous vehicle causes an accident, anthropomorphizing its "decision-making" might obscure the underlying human programming and design choices that are truly responsible, shifting blame away from human developers or regulators. However, even these potential pitfalls and ethical considerations do not fundamentally negate the underlying need for anthropomorphism. Instead, they underscore the importance of fostering critical awareness among users and responsible design principles among developers in constructing and engaging with increasingly sophisticated non-human agents.
Responsible design can judiciously leverage beneficial anthropomorphic cues to enhance usability and engagement while providing clear boundaries and transparent explanations regarding a system's capabilities, limitations, and the mechanisms of its operation. This balanced approach allows for the benefits of anthropomorphism without succumbing to its potential for deception or misunderstanding. Despite the inherent risks of over-attribution or fundamental misunderstanding, the profound cognitive and social benefits frequently outweigh the potential drawbacks, particularly in contexts where anthropomorphism demonstrably facilitates learning, enhances user engagement, or cultivates emotional connection. The sustained prevalence and demonstrable utility of anthropomorphism across various domains, from simplifying the understanding of complex scientific phenomena to fostering a connection with virtual assistants and therapeutic chatbots, unequivocally confirm its indispensable nature. For humanity, perceiving the human in the non-human is not merely a cognitive choice or a learned behavior but a fundamental and deeply ingrained necessity. Our inherent drive to comprehend, connect, and interact meaningfully with our environment's intricate and multifaceted tapestry propels this intrinsic need, rendering anthropomorphism a vital, albeit sometimes misleading, cognitive and social instrument for human engagement with the world. Consequently, a nuanced understanding of anthropomorphism is essential for leveraging its benefits and, more importantly, for mitigating its potential liabilities in an increasingly interconnected and technologically advanced society, ensuring that this powerful human tendency serves rather than hinders our progress.

References

  1. Barrow, N. (2024). Anthropomorphism and AI hype. AI and Ethics, 4, 707–711. https://doi.org/10.1007/s43681-024-00454-1
  2. Bunta, S. N. (2007). The mēsu-tree and the animal inside: Theomorphism and theriomorphism in Daniel 4. Scrinium, 3(1), 364–384. https://doi.org/10.1163/18177565-90000162
  3. Dalton, R. (2003). Lion man takes pride of place as oldest statue. Nature, 425, 7. https://doi.org/10.1038/425007a
  4. Darwin, C. (1998). The expression of the emotions in man and animals (3rd ed.). Oxford University Press. (Original work published 1872) https://doi.org/10.1093/oso/9780195112719.001.0001
  5. Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864
  6. Feuerbach, L. (1873). The essence of religion (A. Loos, Ed.). Prometheus Books. https://philpapers.org/rec/FEUTEO-3
  7. Freud, S. (1930). Civilization and its discontents (pp. 64–145). Norton. https://wwnorton.com/books/Civilization-and-Its-Discontents/
  8. Henrichs, A. (2010). What is a Greek god? In The Gods of Ancient Greece (pp. 19–40). Edinburgh University Press. https://doi.org/10.1515/9780748642892-006
  9. Hodgson, S. H. (1882). Philosophy in relation to its history. The Journal of Speculative Philosophy, 16(3), 225–244. https://www.jstor.org/stable/25667919
  10. Pfeuffer, N., Benlian, A., Gimpel, H., & Hinz, O. (2019). Anthropomorphic information systems. Business & Information Systems Engineering, 61(4), 523–533. https://doi.org/10.1007/s12599-019-00599-y
  11. Placani, A. (2024). Anthropomorphism in AI: Hype and fallacy. AI and Ethics, 4, 691–698. https://doi.org/10.1007/s43681-024-00419-4
  12. Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515–526. https://doi.org/10.1017/S0140525X00076512
  13. Reeves, M. (2019). The Roomba that screams when it bumps into stuff [Video]. YouTube. https://www.youtube.com/watch?v=mvz3LRK263E
  14. Ribino, P. (2023). The role of politeness in human-machine interactions: A systematic literature review and future perspectives. Artificial Intelligence Review, 56, 5445–5482. https://doi.org/10.1007/s10462-023-10540-1
  15. Scheele, D., Schwering, C., Elison, J. T., Spunt, R., Maier, W., & Hurlemann, R. (2015). A human tendency to anthropomorphize is enhanced by oxytocin. European Neuropsychopharmacology, 25, 1817–1823. https://doi.org/10.1016/j.euroneuro.2015.05.009
  16. Tsekouras, D., Gutt, D., & Heimbach, I. (2024). The robo bias in conversational reviews: How the solicitation medium anthropomorphism affects product rating valence and review helpfulness. Journal of the Academy of Marketing Science. https://doi.org/10.1007/s11747-024-01027-8

The following individuals helped spark the ideas in this essay and reviewed it: