Teaching AI Ethics: Affect Recognition

This is the seventh post in a series exploring the nine areas of AI ethics outlined in this original post. Each post goes into detail on the ethical concern and provides practical ways to discuss these issues in a variety of subject areas. For the previous post on datafication, click here.

As artificial intelligence continues to develop and influence different aspects of our lives, its role in education is becoming increasingly important. One particularly controversial implementation of AI is affect or emotion recognition, which claims to interpret human emotions and mental states by analysing facial expressions, body language, and speech patterns. Advocates for affect recognition argue that incorporating it into education can transform how students learn, enabling personalised and adaptive teaching methods that cater to each individual’s emotions and cognitive state. However, the reliability and ethical implications of this technology make it well worth further investigation.

This blog post introduces the ‘advanced’ level of AI ethics. At this level, it becomes more difficult to find information regarding these ethical concerns for a few reasons. Firstly, these issues are often complex and intertwined with concerns outside of the field of AI or education – for instance, affect recognition has its roots in psychology. Secondly, these issues are particularly contentious because of the vested interests in AI technologies and the companies that develop them. The next post on human labour, for example, will discuss a highly problematic issue which is potentially very damaging to AI developers’ reputations.

In this blog post, I’ll explore the concept of affect recognition, its theoretical underpinnings, and the debate surrounding its effectiveness. I’ll also go into the ethical considerations of affect recognition in education and its broader impact on students, teachers, and society. For me, this is absolutely one of the worst potential applications of AI in education, for reasons which I hope will become apparent.

Here’s the original PDF infographic which covers all nine areas:

What is affect recognition?

Affect recognition, also known as emotion recognition, is a subfield of AI that aims to identify and interpret human emotions and mental states by analysing various cues, such as facial expressions, body language, and speech patterns. By leveraging machine learning algorithms and computer vision techniques, affect recognition systems attempt to classify emotions into categories such as happiness, sadness, anger, fear, surprise, and disgust.
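
To make the basic idea concrete, here is a deliberately simplified Python sketch of the kind of pipeline these systems claim to implement: take an image, extract facial features, and map them onto Ekman-style categories. This is purely illustrative; the feature extractor and classifier below are placeholders that return random values, not a real model or any vendor’s actual product.

```python
# Illustrative sketch only: a toy "affect recognition" pipeline using the six
# Ekman-style categories described above. The feature extractor and classifier
# are placeholders, not a real model or product.

import random

EKMAN_CATEGORIES = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

def extract_facial_features(image_path: str) -> list[float]:
    # A real system would run face detection and landmark/texture analysis
    # (computer vision). This stub just returns placeholder values.
    return [random.random() for _ in range(10)]

def classify_emotion(features: list[float]) -> str:
    # A real system would use a trained machine learning model. This stub
    # simply picks a category, to show the shape of the pipeline.
    return random.choice(EKMAN_CATEGORIES)

if __name__ == "__main__":
    features = extract_facial_features("student_photo.jpg")  # hypothetical file
    print(classify_emotion(features))
```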

The origins of affect recognition can be traced back to the early studies on facial expressions and emotions conducted by psychologist Paul Ekman in the 1960s. Ekman’s work, which included research among the people of Papua New Guinea, led to the development of the Facial Action Coding System (FACS) and the theory that certain facial expressions are universally linked to specific emotions. This notion has been the foundation for much of the affect recognition research that has since taken place.

Several technologies and developers have already attempted to incorporate affect recognition. Some examples include:

  1. Affectiva: Affectiva is a software company that has developed an emotion recognition platform using computer vision and deep learning algorithms to analyse facial expressions in real-time, with applications in marketing, automotive, and gaming industries. They are also concerned with “behaviour analytics”. Affectiva’s partners include BMW, Boeing, and Lockheed Martin.
  2. Emotient: Acquired by Apple in 2016, Emotient’s technology used machine learning to analyse facial expressions in images and videos, for advertising purposes.
  3. Beyond Verbal: This company focuses on emotion recognition through vocal intonations, with applications in customer service interactions, telemedicine, and voice assistants. Though Beyond Verbal has a Wikipedia entry, the listed website link is dead, and it is unclear whether the company still exists.

The controversy surrounding affect recognition primarily stems from the reliability and validity of the underlying theory. Critics argue that emotions are not universally expressed through facial expressions, as cultural and individual differences can heavily influence the way emotions are displayed. Recent studies have challenged the idea that specific facial expressions can be reliably linked to distinct emotions, suggesting that context plays a crucial role in interpreting emotional cues.

Another concern is the potential for bias in affect recognition algorithms, as they may not account for variations in facial structure, skin tone, neurodiversity, or cultural background. These biases can lead to misinterpretation and misclassification of emotions, raising ethical questions about the fair application of this technology.
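
One way to make this concern tangible, for example in a classroom discussion, is to compare misclassification rates across demographic groups. The sketch below uses entirely made-up, hypothetical records purely to show the shape of such an audit; it makes no claims about any real system or dataset.

```python
# Illustrative sketch only: framing a simple bias audit of an affect
# recognition system. The records below are invented placeholders used
# purely to demonstrate the calculation, not real findings.

from collections import defaultdict

# Each record: (demographic_group, predicted_emotion, self_reported_emotion)
hypothetical_results = [
    ("group_a", "anger", "neutral"),
    ("group_a", "happiness", "happiness"),
    ("group_b", "happiness", "happiness"),
    ("group_b", "sadness", "sadness"),
]

def misclassification_rate_by_group(results):
    # Count how often the system's prediction disagrees with what the
    # person reported feeling, broken down by group.
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, reported in results:
        totals[group] += 1
        if predicted != reported:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

print(misclassification_rate_by_group(hypothetical_results))
```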


Case Study: Microsoft’s Decision to Remove Emotion Recognition Services

Given the problematic nature of emotion recognition research, and as part of its responsible AI development and deployment, Microsoft decided to remove certain features from their Azure Face service, including capabilities that infer emotional states, gender, age, smile, facial hair, hair, and makeup.

In a 2022 blog post, Microsoft acknowledged that AI systems need to be trustworthy and to be appropriate solutions to the problems they are designed to solve. In the case of emotion recognition, the company highlighted several concerns that influenced their decision to remove these capabilities:

  1. Lack of scientific consensus: Experts both inside and outside the company pointed out the absence of a universally accepted definition of “emotions.” This lack of consensus makes it challenging to develop reliable and accurate emotion recognition systems.
  2. Generalisation challenges: Emotion recognition technology faces difficulties in generalising its inferences across diverse use cases, regions, and demographics. This could result in misinterpretations or biases, further questioning the technology’s efficacy and ethical implications.
  3. Heightened privacy concerns: The use of AI to analyse facial expressions and infer emotional states raises significant privacy concerns. Microsoft recognised the need to prioritise user privacy and address these concerns in the development and deployment of their AI systems.

Microsoft’s decision to remove emotion recognition services from their Azure Face platform demonstrates some of the biggest concerns with affect recognition technologies. Deploying technologies which do not have a solid theoretical underpinning and which may seriously impact people’s lives is clearly unethical. However, despite moves like this from big companies like Microsoft and IBM, the lure of emotion recognition continues to drive developers to produce products for education.

Affect in education

Before getting to a few activities that could support these discussions in the classroom, I thought I’d include some examples of how affect and emotion recognition are being pursued in education. These examples offer a glimpse into both the research and the commercial application of emotion recognition technologies.

  1. BrainCo: BrainCo has developed a headband called Focus1, which uses electroencephalography (EEG) technology to monitor students’ brainwaves and assess their attention levels in real-time. The company claims that this technology can help educators identify when students are losing focus and adjust their teaching methods accordingly.
  2. Carnegie Mellon University’s ArticuLab: The ArticuLab at Carnegie Mellon University has conducted several research projects on affect recognition, including the development of an AI-powered virtual tutor called Alex. Alex is designed to recognise and respond to students’ emotions, such as frustration or confusion, using natural language processing and facial expression analysis.
  3. The Affective Computing Group at MIT Media Lab: The Affective Computing Group has been conducting research on emotion recognition and its potential applications in various fields, including education. One of their projects, Teddy, is a “cutting edge data collection platform” in the form of a virtual teddy bear chatbot.

Teaching AI Ethics

The posts in the beginner and intermediate series included questions, resources, and prompts for a variety of subject areas. Since it is much harder to find resources for the areas discussed at the ‘advanced’ level of these posts, I’m instead going to offer some suggestions for classroom discussion activities that could get students thinking about these issues. These activities could be used in any discipline, from K-12 to tertiary.

Activity 1: AI Ethics Debate

Objective: Encourage critical thinking and discussion about the ethical implications of affect recognition.

Instructions:

  1. Divide students into small groups, and assign each group a specific subject area (e.g., history, literature, environmental science, etc.).
  2. In their groups, students will research and discuss the potential benefits and drawbacks of implementing affect recognition technology in their assigned subject area.
  3. Each group will prepare a short presentation outlining their findings and their stance on the ethical implications of using affect recognition in their subject area.
  4. After all groups have presented, hold a class-wide debate to discuss the different perspectives and explore potential solutions to the ethical challenges posed by affect recognition technology.

Activity 2: AI Ethics Case Study Analysis

Objective: Develop students’ ability to analyse real-world examples of affect recognition technology in various disciplines and evaluate their ethical implications.

Instructions:

  1. Provide students with several case studies that highlight the use of affect recognition technology in different subject areas (e.g., education, marketing, healthcare, etc.). You could use some of the links throughout this post as examples.
  2. In pairs or small groups, students will analyse their assigned case study, focusing on the following aspects:
     a. The purpose and application of affect recognition technology.
     b. The potential benefits and drawbacks of the technology in the specific context.
     c. The ethical considerations involved in implementing the technology.
  3. Students will present their analysis to the class, and the class will discuss each case study, comparing and contrasting the ethical implications across the different subject areas.

Activity 3: Designing an Ethical Affect Recognition System

Objective: Encourage students to think creatively about designing affect recognition systems that address ethical concerns.

Instructions:

  1. Divide students into small groups and provide them with a subject area or specific context in which affect recognition technology might be applied (e.g., education, mental health, customer service, etc.).
  2. Ask each group to imagine they are designing an affect recognition system for their assigned context. Their task is to create a system that addresses the ethical concerns related to affect recognition, such as privacy, accuracy, and bias.
  3. Students should consider the following aspects in their designs:
     a. Data collection and privacy protection methods.
     b. Techniques for ensuring the accuracy and reliability of emotion detection.
     c. Strategies for addressing potential biases in the technology.
  4. Each group will present their design to the class, and the class will discuss the various approaches to addressing the ethical implications of affect recognition technology across the different contexts.
  5. As a class, address the following question: Is it possible to develop a truly ethical affect recognition system?

The next post in this series will explore the environmental costs associated with Artificial Intelligence and what companies are doing – or not – to mitigate the huge impact of AI. Join the mailing list for updates:


Got a comment, question, or feedback? Get in touch:
