Anthropomorphization in AI Assistants: A Tale of Two Titans

June 17, 2024

By Margarita Garcia, Managing Director, Naoitech

As the competition among AI assistants rages on, it is interesting to see how Google and OpenAI navigate the question of anthropomorphism (designing AI systems with human-like features) in different ways. Google has taken a cautious approach, as evidenced by their paper "The Ethics of Advanced AI Assistants," which explores the potential risks associated with anthropomorphic AI assistants. OpenAI, on the other hand, has fully embraced anthropomorphism with the release of GPT-4o. This decision raises concerns and questions that are worth examining before integrating AI assistants into our daily lives.

Although extensive research has been conducted on the negative consequences of anthropomorphized technology, its immediate effects will be hard to analyze because conversational AIs have only recently become ubiquitously available. The core concern lies in technology that mimics human behaviour and fosters unhealthy feelings and attachments. This is particularly worrisome for children and young adults, who might form bonds with AI assistants that are incapable of reciprocating human emotion. Such bonds can open a Pandora’s box of mental health issues, privacy risks and bad actors exploiting that vulnerability.

Going through users' reviews on social media, OpenAI's AI assistant appears to be generally considered more capable than Google's. The surprising part is that this impression seems to come not from the quality of the results obtained, but from the assistant's anthropomorphic qualities, which users almost equate with AGI or sentience rather than recognizing them as intentional product design features. Already, we are seeing the perceptions that anthropomorphic design features create: individuals concentrate on the conversational interaction with the AI assistant rather than on the accuracy and relevance of the results it provides. This finding is in line with the Google DeepMind research paper "The Ethics of Advanced AI Assistants," which states:

Dialogue capabilities are an anthropomorphic design feature. Software that has dialogue capabilities is, as a result, routinely anthropomorphised by its users. It is not uncommon for users to believe or expect that DVAs are capable of understanding and generating language in real time (Lovato and Piper, 2015; Sarikaya et al., 2016). Yet most commercially available DVAs are powered by rule-based system architectures, retrieving the appropriate response by conducting a relevance-based search over a large corpus of possible responses (Coheur, 2020). Though all distinctive DVA attributes – such as playfulness (Moussawi et al., 2021), affability (Kääriä, 2017) and excitability (Wagner and Schramm-Klein, 2019) – are handwritten by system designers, they are nonetheless effective at creating the sense that DVAs have consistent personalities (Cao et al., 2019); this impression, in turn, may inspire users to regard these manufactured expressions of ‘self’ as authentic human identity.
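
To make the mechanism described in that passage concrete, here is a minimal sketch (in Python, with made-up example responses) of the retrieval-based architecture the authors refer to. The corpus entries and the word-overlap scoring rule are illustrative assumptions, not any vendor's actual implementation; the point is that every reply is hand-written in advance and merely selected by relevance, yet the scripted tone can still read as a consistent "personality."

```python
# Illustrative sketch of a retrieval-based assistant: all replies are
# hand-written in advance, and the system simply returns the one whose
# wording best overlaps with the user's request. No understanding involved.

RESPONSE_CORPUS = [
    "Sure thing! Setting a timer for you now.",           # scripted 'playful' tone
    "Here is today's weather forecast for your area.",
    "I'd love to help you find a recipe. What cuisine sounds good?",
    "Playing your favourite music now. Enjoy!",
]

def relevance(query: str, candidate: str) -> float:
    """Score a candidate reply by simple word overlap with the query."""
    query_words = set(query.lower().split())
    candidate_words = set(candidate.lower().split())
    if not candidate_words:
        return 0.0
    return len(query_words & candidate_words) / len(candidate_words)

def respond(query: str) -> str:
    """Return the hand-written response whose wording best matches the query."""
    return max(RESPONSE_CORPUS, key=lambda candidate: relevance(query, candidate))

print(respond("what is the weather today"))
# -> "Here is today's weather forecast for your area."
```

Nothing in the selection step involves understanding the user; the sense of a "self" comes entirely from whichever hand-written text happens to be retrieved.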

Google's decision to avoid anthropomorphization in its AI assistant likely stems from a desire to mitigate some of the ethical concerns raised in Chapter 10 of "The Ethics of Advanced AI Assistants," which addresses the potential risks of anthropomorphic AI assistants. From that research, we can infer that a more robotic tone can potentially deliver the following:

  • Reduced Risk of Manipulation: A human-like AI assistant with emotional cues and persuasive tactics could be more adept at manipulating users, especially vulnerable populations. A robotic tone makes it clear the assistant is a tool, not a person, reducing the risk of emotional influence.
  • Transparency and Predictability: A robotic tone emphasizes the fact that the assistant is a machine following programmed instructions. This transparency allows users to understand the limitations and potential biases of the assistant, making its responses more predictable and less likely to contain hidden agendas.
  • Focus on Functionality: By avoiding a human-like personality, the assistant can prioritize its core function: providing information and completing tasks. This can help users avoid getting caught up in casual conversations or emotional interactions, leading to a more focused and efficient experience.
  • Curbing Unrealistic Expectations: A human-like tone might lead users to expect a level of understanding or empathy that the AI simply doesn't possess. A robotic tone manages expectations, reminding users they're interacting with a machine and preventing disappointment or frustration.

Currently, there is a lack of publicly available research on OpenAI's stance on anthropomorphization in its AI assistants. This does not necessarily mean the company has not conducted such research, but it creates a transparency gap: we cannot be certain what research, evidence, and rationale were used to justify this approach. We will continue to monitor the topic and update this article as information becomes available.

The new European AI legislation, the AI Act, does not specifically address anthropomorphic technology itself. However, the Act's focus on human-centred AI and risk mitigation could still apply to how anthropomorphic technology is developed and used.

As business leaders and members of society, we have a great responsibility to future generations to make sure we do not deploy technology that can be harmful and could inadvertently unleash a mental health crisis similar to the one partially created by social media.

Sources

Akbulut, Canfer, Verena Rieser, Laura Weidinger, Arianna Manzini, and Iason Gabriel. "The Ethics of Advanced AI Assistants," Chapter 10. Google DeepMind, April 19, 2024.

Zheng, Jianqing, and S. Jarvenpaa. "Negative Consequences of Anthropomorphized Technology: A Bias-Threat-Illusion Model." January 8, 2019.

Murthy, Vivek H. "Surgeon General Issues New Advisory About Effects Social Media Use Has on Youth Mental Health." U.S. Department of Health and Human Services, May 23, 2023. https://www.hhs.gov/about/news/2023/05/23/surgeon-general-issues-new-advisory-about-effects-social-media-use-has-youth-mental-health.html

Knight, William. "Prepare to Get Manipulated by Emotionally Expressive Chatbots." Wired. www.wired.com/story/prepare-to-get-manipulated-by-emotionally-expressive-chatbots/