Traditional AI decision-support systems are designed for objective contexts, where collaboration success can be measured by how accurately the human performs when using the AI. These measures allow us to identify harmful reliance behaviours. With advances in large language models, people are now turning to AI for support in subjective contexts, such as seeking advice on interpersonal scenarios. In these cases, where there is no single right answer, it is less clear what harmful reliance looks like and how we can design against it. In collaboration with Dr. Anastasia Kuzminykh, Dr. Young-Ho Kim, and Paula Akemi Aoyagui, SHARE lab studies how AI influences human subjective decision-making.
Publications
- Something borrowed: Exploring the influence of AI-generated explanation text on the composition of human explanations. Sharon A. Ferguson, Paula Akemi Aoyagui, and Anastasia Kuzminykh. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 2023.
- The explanation that hits home: The characteristics of verbal explanations that affect human perception in subjective decision-making. Sharon A. Ferguson, Paula Akemi Aoyagui, Rimsha Rizvi, and 2 more authors. Proceedings of the ACM on Human-Computer Interaction, 2024.
- A matter of perspective(s): Contrasting human and LLM argumentation in subjective decision-making on subtle sexism. Paula Akemi Aoyagui, Kelsey Stemmler, Sharon A. Ferguson, and 2 more authors. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 2025.
- Exploring Subjectivity for more Human-Centric Assessment of Social Biases in Large Language Models. Paula Akemi Aoyagui, Sharon A. Ferguson, and Anastasia Kuzminykh. arXiv preprint arXiv:2405.11048, 2024.
- Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making. Sharon A. Ferguson, Paula Akemi Aoyagui, Young-Ho Kim, and 1 more author. arXiv preprint arXiv:2404.12558, 2024.