Guest post by Andrew McStay and Vian Bakir

Why This Collaboration Matters

Recent months have seen collaboration between Anthropic and Hume, arguably signifying an important development in the emergence of people-facing AI technologies. By combining expertise in multimodal emotional AI and large language models (LLMs), this partnership deepens human-computer interaction. However, it also raises pressing ethical concerns about the automation and emulation of empathy, concerns that call for scrutiny.

Who Are Anthropic and Hume?

Most readers will likely be familiar with Anthropic’s Claude, a prominent AI model. However, fewer might know about Hume, named after the Scottish philosopher David Hume, who championed ideas of empathy and “fellow feeling.” Hume represents the next generation of industrial-grade emotional AI applications, designed to address the shortcomings of traditional emotion recognition technologies.

As some will know, emotional AI, sometimes called emotion recognition, is associated with labels such as “snake oil”, junk science, and pseudoscience. In many respects this is for good reason. The idea that one can reliably infer what is going on inside a person from their outward behaviour is, to say the least, problematic. Where decisions affect a person, such as at work, at school, or in policing, judging people on the basis of inferred emotional states and expressions is especially troubling, if only because the premise behind the technology is faulty.

Traditional emotional AI relies on outdated psychological theories of basic emotions (think Disney’s Inside Out), which oversimplify the complexities of human emotional life. Hume, led by CEO and emotion scientist Alan Cowen, acknowledges these limitations while still employing reverse inference (analysing outward signals to infer emotional states or intent), but with greater precision and credibility.

Large Language Models (LLMs)

In parallel, large language models (LLMs) have revolutionized AI. From the transformer architecture introduced in Google’s “Attention Is All You Need” paper (2017) to the public release of ChatGPT (2022), LLMs have come to dominate the AI landscape. The partnership between Anthropic and Hume could lead to seamless integration of emotional AI and LLMs, enabling AI systems to interpret emotions and respond conversationally. This raises profound questions about whether machines can, or should, emulate human empathy.

This is tricky terrain because people and computers are clearly very different. While some argue that brain-stuff and computer systems may be functionally equivalent, that is a conversation for another day.

Empathy, in the context of AI, can be divided into two forms:

Weak empathy: The ability to sense, profile, and interact with human emotional states at scale. AI excels in this domain but operates without genuine emotional experience.

Strong empathy: The deeper human ability to share feelings, experience solidarity, and take moral responsibility for others. AI fundamentally lacks this capacity, highlighting the limits of computational empathy and its inability to replicate what is deeply human.

One can quickly see why the idea of computational empathy raises hackles: it misses what is most human about us. Weak empathy, while potentially useful, is ethically fraught because it may be used to mimic strong empathy without the underlying human accountability or care, potentially leading to deceptive and otherwise problematic interactions.

Ethical Challenges: Why Standards Are Needed

The growing integration of emulated empathy into AI systems, particularly in general-purpose AI (GPAI), demands ethical oversight. The IEEE P7014.1 working group, chaired by McStay, aims to establish technical standards addressing these concerns.

A technical standard sets out agreed-upon guidelines for specific technologies to ensure, among other things, their safety. IEEE P7014.1 diagnoses these ethical challenges and will provide practices for addressing them. Broadly, the issues are twofold:

  1. General AI Concerns: Privacy, bias, energy consumption, and exploitative design.
  2. Specific to Emulated Empathy: Risks include addiction, deception, psychological dependency, imitation of living beings, reality detachment, and fiduciary misalignment (whose interests the AI serves).

Those pioneering empathic AI may see these concerns as distant or improbable. However, the pitfalls of poorly regulated systems are evident in the failures of social media platforms, where harmful design choices led to addiction, polarization, and exploitation. We are at the beginning of a new era with emulated empathy; careful foresight and ethical frameworks are essential to avoid repeating past mistakes.

Conclusion

The collaboration between Anthropic and Hume signifies ongoing development in AI that will deepen interactions between people and AI systems and multiply their number and variety, in the form of partners, agents, companions, and so on. While the potential for positive outcomes should not be overlooked, the ethical stakes are high. Emulated empathy must be approached with the utmost care and keen mindfulness of capitalism, market forces, and the time-to-market pressures that guide AI development. Funded by Responsible AI, Project AEGIS and its work on IEEE P7014.1 is an attempt to get ahead of the game: to provide detailed and actionable guidance on how to enhance rather than harm human well-being.