IEEE P7014.1
What is this standard?
P7014.1 is the ‘Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems’. The quickest way to the nub of this is to picture a three-way Venn diagram of overlapping circles.
The first circle is labelled General-Purpose AI (AI that is used for diverse purposes); the second is Emulated Empathy (how AI appears to understand how people feel and see things, and to respond appropriately); and the third is Partners (AI systems for productivity, companionship, entertainment, or advice). At the intersection is us, P7014.1. I’m Chair, Ben Bland is Vice Chair and the amazing Karen Bennet is Secretary.
Next, we’re an ethical standard, part of the P7000 series. This means we’re explicitly interested in questions of fairness, responsible development, and ensuring that empathetic AI partners are beneficial to society. We build on the recently published P7014 standard, a broader standard that addresses Emulated Empathy in Autonomous and Intelligent Systems.
Roughly, the key difference between P7014 and P7014.1 is that P7014.1 focuses on general-purpose systems rather than narrow, task-specific AI: general-purpose systems can be adapted to many tasks, whereas narrow AI systems (such as face- or voice-based emotion recognition technology) are tailored to identify particular expressions and behaviours.
Why is it important, and what is a real-world example/case study to illustrate its value?
It’s important because longstanding promises about AI are being realised: machines are performing functions generally associated with human intelligence, such as reasoning, learning and communication.
Things get tricky with empathy, though, because calling empathy a ‘function’ is a limited way of accounting for it. There is also a good argument that empathy is a quintessentially human trait, in part because it involves moral responsibility to others through sameness. Indeed, one root of the concept lies in political philosophy and the idea of “fellow-feeling.”
Computers can certainly be programmed with moral functions and parameters, but a computer does not know the feeling of mild panic at being lost in a building just as a meeting is about to begin. A person does, and may recognise it in others. This leads philosophers and neuroscientists to speak of mirror effects (feeling what others seem to feel) and co-presence (sharing moments of experience with others). This embodied understanding often compels us to help, which is why, when we see such a situation, we ask “Are you lost, and can I help?”
While machines are getting better at reading and responding, this is not the same thing. The form of empathy that involves moral imperative and feeling is strong empathy, because we can identify with the experience. The other, in which one watches, judges and responds appropriately, is a weak form of empathy, which theorists tend to explain in rules-based or theory-driven terms.
Weak empathy is not necessarily bad, especially if an AI system can help get me to my meeting on time, but it is very important that people are not confused about the difference (i.e. misled into believing that weak empathy is strong empathy). It is also vital that emulated empathy does not exploit people. If modern and emerging AI systems are going to act as partners and assistants in diverse domains of life (work, health, entertainment, and more), people must be clear about the moral capacity of AI.
Perhaps more important is that AI should never pretend to care when the real purpose is to elicit personal and sensitive data, or when the system is programmed with some ulterior goal. AI partners risk exacerbating existing challenges, such as privacy abuses and the automation of racism, but they also raise new concerns. These include psychological dependency, deception, over-reliance, unexpected partner behaviour, over-sharing and, perhaps most importantly, the question of whose interests the partner is acting in.
If P7014.1 is onto something, and we will see more rather than fewer AI partners, assistants, co-workers, companions and so on, and if these may even replace apps, it is vital that we work out appropriate terms of interaction. Given that empathy may be explicitly built in, or its appearance may arise by dint of AI model behaviour, I think we need guardrails for empathy-based interaction.
It’s also worth reflecting on the nature of global governance of AI. When it comes to law, some regions are quite advanced, while others are taking a ‘wait and see’ approach. This is not to say that those ‘advanced’ regions have got everything right. Others still have no intention of creating detailed law for AI, preferring soft law. As soft law includes technical standards, especially of an ethical sort, this gives our work extra significance. Even for regions that are advanced in their journey with AI governance, there is keen interest in the supporting role that focused standards can play. This is one reason we are hyper-focused on what might seem a niche issue (the overlap of GPAI, empathy, and AI-human partnerships): we think we can offer leadership on this topic.
What stage is it at?
It’s early days for meetings and we’re very much open to fresh input. By the time this article goes out, we’ll likely be three or four meetings in. The draft text is quite advanced, though, and while I’m keen to ensure we do not rush things, I’m also keen to get it out. Things are moving quickly, there are gaping holes in legal provision, and I think P7014.1 can make a real difference for society, policymakers, and industry.
What is the current geographical or disciplinary spread of your working members?
This is very important to us. We have very good reach across the world, but is that reach of equal density everywhere? No. So what does one do?
P7014.1 is not the first group to experience this, but we do see a fairness problem. The nub of it is nothing malign: it is time zones. Only very committed volunteers will join calls at ungodly hours and, as Chair and a volunteer who should be on every call, my appetite for late nights and early mornings is low. The answer, as I see it, is reform to allow asynchronous voting and working. These things are not easy, but if governments and large corporations can allow for asynchronous voting, I’m sure IEEE can too.
What type of people might be interested or well suited for this standards group?
All sorts! Many may think “that’s really interesting, but I don’t know enough”, yet there’s a good chance they will be able to contribute through knowledge of a relevant, related issue.
An example may help: one misinformation expert in our group was worried about their lack of understanding of AI, and about not being au fait with the ins and outs of how transformer models work. It turns out, though, that said person is a world expert on the issue of deception! Given that our interest in empathy spills into issues of anthropomorphism and zoomorphism, a deception expert is a very handy person to have on board.
This is just one example: another person might have deep expertise in corporate auditing. It turns out this, too, is very useful for designing tools that both function well for users of a standard and are meaningful for society (i.e. the standard is more than a checklist of vacuous suggestions).
What triggered your own interest in this area?
This is an easy one: human-technology intimacy.
My day job is that of an academic (mostly of a humanities and social science sort), and over the years, and across the books I’ve written, it has become apparent that I am always lured back to the issue of human-technology intimacy. Basically, I’m fascinated by human experience and how technology interacts with it.
From human experience, it’s an easy step to ethics, i.e. questions and answers about what good looks like for people and the world. In relation to technology, I try to stay positive, but sometimes it’s not easy!
Call to Action
If your interest in P7014.1 is piqued, pop me a line at [email protected] and I’ll help get you onboard. In addition, feel free to check out the IEEE P7014.1 page, the forthcoming IEEE lecture on ‘Automating Empathy in Human-AI Partnerships: Issues, Ethics and Governance,’ and our LinkedIn group.