Understanding and Mitigating Bias in Emotion Recognition Systems


ACII 2025 Tutorial

Saturday, October 11th 2025 - Canberra

Organizers


Woan-Shiuan Chien

National Tsing Hua University

Chi-Chun Lee

National Tsing Hua University

Summary


Human perception and expression of emotion are profoundly shaped by individual traits, social norms, and cultural values. Emotion recognition systems, which aim to infer emotional states from speech, facial expression, or physiological signals, inevitably reflect biases embedded in their training data, annotation processes, and model assumptions. These biases can stem from subjective labeling practices, unbalanced demographic representation, or cultural misalignments. As a result, such systems may systematically underperform or behave inconsistently across gender, age, accent, or ethnic groups. This raises ethical concerns in applications where emotional AI interacts with diverse users, including mental health monitoring, education, and public services. A reliable emotion recognition system must not only be accurate but also inclusive, ensuring that users feel seen, heard, and respected.

In recent years, bias in machine learning and AI systems has gained increasing attention, particularly due to its impact on equity, accountability, and societal trust. This growing awareness has led to broader discussions on fairness as a normative goal, with major conferences such as NeurIPS, FAccT, and ICML hosting tutorials and special sessions on responsible AI, debiasing strategies, and trustworthy system design. While these efforts have advanced our understanding of bias in algorithmic decision-making, they often focus on general prediction tasks and lack a pipeline-specific perspective for emotion. Affective computing poses unique challenges due to its reliance on interpreting inherently ambiguous and subjective human emotions. In these systems, bias may emerge not only during model training but also in upstream stages such as annotation, where human raters' perspectives and demographic backgrounds play a significant role. Achieving fairness in emotion recognition therefore requires more than accuracy; it must also consider how individuals feel they are perceived, valued, and represented by the system. In this context, both the annotation process and the model prediction carry potential for human-centered bias.

Details & Schedule


Saturday, October 11th

Time zone: AEST — Australian Eastern Standard Time
Location: Hotel Realm conference venue, Canberra
More details: ACII tutorial page
NEWS: slides available here.

For details, please contact Woan-Shiuan Chien.
Last updated: 27th of September 2025.
Website credit: This website style is adapted from the CVPR23 tutorial page.