National Tsing Hua University
National Tsing Hua University
Human perception and expression of emotion are profoundly shaped by individual traits, social norms, and cultural values. Emotion recognition systems, which aim to infer emotional states from speech, facial expression, or physiological signals, inevitably reflect biases embedded in their training data, annotation processes, and model assumptions. These biases can stem from subjective labeling practices, unbalanced demographic representation, or cultural misalignments. As a result, such systems may systematically underperform or behave inconsistently across gender, age, accent, or ethnic groups. This raises ethical concerns in applications where emotional AI interacts with diverse users, including mental health monitoring, education, and public services. A reliable emotion recognition system must not only be accurate but also inclusive, ensuring that users feel seen, heard, and respected.
In recent years, bias in machine learning and AI systems has gained increasing attention, particularly due to its impact on equity, accountability, and societal trust. This growing awareness has led to broader discussions on fairness as a normative goal, with major conferences such as NeurIPS, FAccT, and ICML hosting tutorials and special sessions on responsible AI, debiasing strategies, and trustworthy system design. While these efforts have advanced our understanding of bias in algorithmic decision-making, they often focus on general prediction tasks and lack a pipeline-specific perspective on emotion recognition. Affective computing poses unique challenges because it relies on interpreting inherently ambiguous and subjective human emotions. In these systems, bias may emerge not only during model training but also in upstream stages such as annotation, where human raters’ perspectives and demographic backgrounds play a significant role. Achieving fairness in emotion recognition therefore requires more than accuracy; it must also consider how individuals feel perceived, valued, and represented by the system. In this context, both the annotation process and the model prediction carry potential for human-centered bias.
Time zone: AEST — Australian Eastern Standard Time
Location: Hotel Realm conference venue
More details: ACII tutorial page
NEWS: slides available here.
09:15 - 09:35 - Setting the Stage: Why Bias Matters in Emotion AI
• A human-centered perspective on fairness, bias, and ethical challenges in emotion AI systems.
09:35 - 09:45 - Sources of Bias & Case Study: Speech Emotion Recognition
• Where does bias come from? Annotation subjectivity, demographic gaps.
• Why is SER particularly sensitive to bias? Speaker- and rater-side analysis, dataset evidence.
09:45 - 10:35 - Break
10:35 - 11:00 - Mitigating Bias Across the Pipeline: Data, Models, and Metrics
• Fairness-aware Data Practices: Inclusive annotation, dataset auditing, labeling diversity.
• Bias Mitigation Strategies: Pre-, in-, and post-processing strategies.
• Evaluation Methods: Group vs. individual fairness, metrics and trade-offs.
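As a small illustration of the evaluation methods listed above, the sketch below computes two common group-fairness quantities over emotion predictions: a per-group accuracy gap and a demographic-parity gap for a chosen emotion class. The file name `predictions.csv` and the column names (`gender`, `label`, `pred`) are hypothetical placeholders rather than the tutorial's actual data format.

```python
# Minimal sketch of group-fairness evaluation for emotion predictions.
# NOTE: the file and column names below are illustrative assumptions.
import pandas as pd

def group_accuracy_gap(df: pd.DataFrame, group_col: str) -> float:
    """Largest difference in per-group accuracy (0 means parity)."""
    per_group = (df["pred"] == df["label"]).groupby(df[group_col]).mean()
    return float(per_group.max() - per_group.min())

def demographic_parity_gap(df: pd.DataFrame, group_col: str, positive: str) -> float:
    """Largest difference in how often each group is assigned the `positive` class."""
    rates = (df["pred"] == positive).groupby(df[group_col]).mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    # Hypothetical prediction table: one row per utterance.
    preds = pd.read_csv("predictions.csv")  # columns: utterance_id, gender, label, pred
    print("Accuracy gap across gender groups:", group_accuracy_gap(preds, "gender"))
    print("Parity gap for the 'happy' class:", demographic_parity_gap(preds, "gender", "happy"))
```

Group metrics like these trade off against individual fairness (treating similar utterances similarly), which is part of the trade-off discussion in this session.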
11:00 - 11:10 - Societal Implications and Open Problems
• Cross-cultural affect, affective feedback, trust in emotion AI.
11:10 - 11:40 - Interactive Hands-on Session: Fairness Analysis in BIIC-Podcast
• Practical walkthrough using bias mitigation techniques, with fairness metrics applied to a real-world speech emotion dataset.
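To give a concrete flavor of the hands-on session, the sketch below shows one pre-processing mitigation: inverse-frequency reweighing over group-by-emotion cells, in the spirit of Kamiran and Calders' reweighing method. The file `train_metadata.csv` and the `gender`/`emotion` column names are illustrative assumptions, not the actual BIIC-Podcast release format.

```python
# Minimal sketch of pre-processing bias mitigation via sample reweighing.
# NOTE: the file and column names below are illustrative assumptions.
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Per-sample weight w = P(group) * P(label) / P(group, label)."""
    n = len(df)
    # Marginal probabilities, broadcast back to each row via map().
    p_group = df[group_col].map(df[group_col].value_counts(normalize=True))
    p_label = df[label_col].map(df[label_col].value_counts(normalize=True))
    # Joint probability of each row's (group, label) cell.
    p_joint = df.groupby([group_col, label_col])[label_col].transform("size") / n
    return (p_group * p_label) / p_joint

if __name__ == "__main__":
    # Hypothetical metadata table: one row per utterance.
    train = pd.read_csv("train_metadata.csv")  # columns: utterance_id, gender, emotion
    train["weight"] = reweighing_weights(train, "gender", "emotion")
    # Pass these as per-sample weights to the SER model's training loss,
    # then re-check the fairness metrics from the previous sketch.
    print(train.groupby(["gender", "emotion"])["weight"].mean())
```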
For details, please contact Woan-Shiuan Chien.
Last updated: 27th of September 2025.
Website credit: This website style is adapted from the CVPR23 tutorial page.