Week 2: Anthropomorphism in AI


I focused my research on human-AI interaction. Below are the first papers I read, along with the main insight from each:

  • https://arxiv.org/html/2405.15051: Studies have focused on making AI systems more transparent and explainable to users; however, few explore allowing users to modify algorithm design parameters in order to exercise complete agency over AI decision-making.
  • https://arxiv.org/pdf/2103.15004: The appearance (e.g., text interfaces, humanoid agents, robots) and behaviour of an AI significantly shape user perceptions, influencing views of its capabilities, competence, and risks. Human-centered design suggests evaluating how different embodiments affect user perception and interaction, and identifying the effects these design manipulations have on users.
  • https://arxiv.org/pdf/2105.05424: Human-Centered AI (HCAI) emphasises designing AI systems that align with human needs, values, and ethical considerations. It promotes AI that is transparent, fair, safe, and trustworthy, ensuring that humans retain control and agency over AI decision-making.

These papers helped me recognise the importance of designing ethical AI systems that users can trust. They also sparked my curiosity about how AI embodiment influences user perception and interaction, leading me to my next research topic: anthropomorphism in AI.

Essential learnings:

  • Anthropomorphism in AI refers to the tendency to attribute human-like characteristics, emotions, or intentions to artificial intelligence systems.
  • It is induced through intentional design choices, such as giving AI a name, a human voice, or a visual embodiment, as well as through training on human-written text, which can naturally embed human-like behaviours.
  • Examples include robots with facial expressions; voice assistants like Siri or Alexa, which mimic human speech and interaction patterns; and chatbot interfaces that use anthropomorphic cues, such as a ‘typing’ indicator resembling human messaging or emojis that enhance social interaction (a minimal sketch of the typing cue follows this list).
  • Moral reasoning is another example of anthropomorphism. When AI is seen as a moral agent, users may unintentionally treat it as if it understands and can be held accountable for its actions, much like a human would be. However, since AI lacks true understanding or moral awareness, this attribution of responsibility is misplaced, and blaming or praising AI for its actions becomes a flawed perception created by anthropomorphic thinking.
  • Anthropomorphism in AI can enhance user engagement, trust, and intuitive interactions by making AI systems feel more relatable and socially accessible.
  • Anthropomorphism can lead to emotional attachment and misinterpretation of AI capabilities, resulting in misplaced trust and harmful decisions, as users may over-rely on AI or mistakenly believe it acts with ethical intentions.
  • Assigning human-like qualities raises concerns about human distinctiveness and the fear of being replaced by non-human entities.
  • Human-like attributes in AI must be carefully designed to avoid causing harm, ensuring these systems serve their intended purpose without unintended consequences.
  • AI interfaces should balance personalisation with user boundaries, fostering emotional connection, trust, and transparency while reducing anxiety. The future of AI-human interactions relies on designing interfaces that resonate emotionally and address human needs for empathy, fairness, and intuitive communication.
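
As a concrete illustration of the ‘typing’ cue mentioned above, here is a minimal Python sketch of how a text interface might pace its output to feel human. The delay values are illustrative assumptions, not taken from any of the papers.

```python
import sys
import time

def typed_reply(text: str, chars_per_second: float = 30.0) -> None:
    """Print a reply character by character, mimicking human typing speed."""
    print("Agent is typing...", flush=True)  # the anthropomorphic cue itself
    time.sleep(1.0)                          # pause, as if composing a thought
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(1.0 / chars_per_second)   # pacing is an illustrative guess
    sys.stdout.write("\n")

typed_reply("Hi! How can I help you today?")
```

Nothing about the underlying model changes; the delay alone makes the exchange feel more like messaging a person.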

I want to expand on the influence anthropomorphism in AI has on its users. The next paper I read discusses the effects of AI anthropomorphism on users’ self-concept (how you think and feel about yourself, including your strengths, weaknesses, and beliefs): https://linkinghub.elsevier.com/retrieve/pii/S0040162522003109

Important Findings:

  • AI is categorised into:
    • Mechanical AI: used to perform transactional tasks and replace human intelligence.
    • Thinking AI: used to augment human intelligence with utilitarian services such as analytics or diagnostics.
    • Feeling AI: used for experience-based and emotional tasks, where AI agents such as chatbots can interact with customers and convey empathy and elements of social interaction.
  • Feeling AI analyses and understands users’ emotions through sentiment analysis of natural language (e.g., text, audio or video), tailoring interactions to users’ momentary needs (see the sketch after this list).
  • When an AI exhibits human-like traits similar to the user’s own characteristics, the user feels a sense of connection or similarity with the AI (self-congruence with AI), reinforcing their engagement and trust in it.
  • Self–AI integration occurs when users incorporate anthropomorphised AI into their self-schema due to perceived similarity (i.e., self-congruence).
  • Different types of anthropomorphism (physical, emotional, personality-based) influence self-congruence in various ways.
  • The more similarity users experience with such agents, the more emotionally attached to them they become.
  • Self–AI integration is particularly prominent when AI customises the interactions based on users’ needs and emotions.
  • Users feel more connected to AI when they perceive it as shaped by their input or interactions, especially in emotional AI.
  • Those who integrate AI into their self-concept may expect emotional support from it, like in human relationships.
    • This could threaten the depth of human connections, making individuals more self-involved and less empathetic.
    • The emotional support AI agents provide might also make users’ relationships with other humans appear shallow by comparison.
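
To make the sentiment-tailoring idea concrete, here is a toy Python sketch of how a “feeling AI” might adapt its reply to a user’s momentary emotional state. The word lists, thresholds, and canned replies are my own illustrative assumptions; real systems use trained sentiment models over text, audio, or video.

```python
# Toy lexicon-based sentiment tailoring; the word lists and replies are
# illustrative assumptions, not taken from the paper.
POSITIVE = {"great", "happy", "love", "excited", "good"}
NEGATIVE = {"sad", "angry", "hate", "frustrated", "bad"}

def sentiment_score(message: str) -> int:
    """Crude polarity: positive word count minus negative word count."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def tailored_reply(message: str) -> str:
    """Adapt tone to the inferred sentiment, conveying empathy when negative."""
    score = sentiment_score(message)
    if score < 0:
        return "I'm sorry to hear that. Do you want to talk it through?"
    if score > 0:
        return "That's wonderful! Tell me more."
    return "I see. How can I help?"

print(tailored_reply("I'm frustrated with this assignment"))
# -> "I'm sorry to hear that. Do you want to talk it through?"
```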

Research Question

How does an AI’s human-like persona influence user engagement and behaviour?

This question will be addressed through an interactive installation in which an AI personalises its interactions based on the viewer’s traits. The installation aims to raise awareness of the ethical implications of anthropomorphic AI design and its impact on users’ perceptions and decisions.
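
As a rough sketch of what trait-based personalisation could look like, the snippet below mirrors a viewer’s register and verbosity. The trait names (formality, verbosity) and the persona mapping are hypothetical illustrations, not the installation’s actual design.

```python
from dataclasses import dataclass

# Hypothetical sketch of trait-driven personalisation: the traits and the
# persona mapping below are illustrative assumptions, not a real design.

@dataclass
class ViewerTraits:
    formality: float  # 0.0 = casual, 1.0 = formal (e.g., inferred from speech)
    verbosity: float  # 0.0 = terse, 1.0 = talkative

def choose_persona(traits: ViewerTraits) -> dict:
    """Mirror the viewer: match their register and preferred response length."""
    return {
        "tone": "formal" if traits.formality > 0.5 else "casual",
        "max_sentences": 4 if traits.verbosity > 0.5 else 1,
    }

print(choose_persona(ViewerTraits(formality=0.2, verbosity=0.8)))
# {'tone': 'casual', 'max_sentences': 4}
```

Mirroring the viewer in this way is exactly the self-congruence mechanism described above: the more the persona resembles the user, the stronger the sense of connection.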

Related Installation:

Ghost in the Machine is an interactive installation that explores the human tendency to anthropomorphise AI by engaging participants in a simulated conversation with an AI system.

Users speak into a telephone receiver, and an AI agent responds while multiple AI entities (Imaginative Agent, Sassy Agent…) with distinct personalities debate, argue, and collaborate to generate moving visuals and soundscapes that are projected in the installation space.
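
In a similar spirit, here is a minimal Python sketch of several persona agents taking turns on a single user utterance. The Imaginative and Sassy agent names come from the installation’s description; the third persona, the canned responses, and the round-robin turn order are my own illustrative assumptions about how such a debate might be staged.

```python
# Each persona is a response strategy; the replies below are placeholders.
PERSONAS = {
    "Imaginative Agent": lambda msg: f"What if '{msg}' were a dream machines have?",
    "Sassy Agent": lambda msg: f"'{msg}'? Bold of you to assume I care.",
    "Literal Agent": lambda msg: f"Interpreting '{msg}' strictly at face value.",
}

def debate(user_message: str, rounds: int = 1) -> None:
    """Let every persona respond in turn, producing a staged 'debate'."""
    for _ in range(rounds):
        for name, respond in PERSONAS.items():
            print(f"{name}: {respond(user_message)}")

debate("Are you alive?")
```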

The installation highlights AI’s limitations, revealing errors and inconsistencies in its responses, metaphorically presented as “hallucinations.”

By exposing AI’s inability to truly understand or possess consciousness, the installation encourages critical reflection on the perception of AI as intelligent or sentient.

Ghost in the Machine ultimately challenges participants to reconsider their relationship with AI, questioning its role as a tool, social actor, or something more.

