MedVoiceBias: A Controlled Study of Audio LLM Behavior in Clinical Decision-Making
Published in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2026), 2026
We investigate whether audio-based large language models (LLMs) introduce new biases through paralinguistic cues when making clinical decisions, compared with their text-based counterparts. We evaluated audio LLMs on 170 clinical cases, synthesizing each case into speech across 36 distinct voice profiles that varied in age, gender, and emotional characteristics. Our findings reveal significant modality bias: surgical recommendations for audio inputs differed by as much as 35% from identical text-based inputs, with one model delivering 80% fewer recommendations for audio. Age-related differences reached 12% between young and elderly voices and persisted despite chain-of-thought prompting in most models. While explicit reasoning eliminated gender bias, our results demonstrate that audio LLMs can base clinical decisions on voice characteristics rather than medical evidence, raising serious concerns about perpetuating healthcare disparities. We conclude that bias-aware model architectures are essential before clinical deployment.
Recommended citation: Tam, Z.R., & Chen, Y.N. (2026). “MedVoiceBias: A Controlled Study of Audio LLM Behavior in Clinical Decision-Making.” In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Download Paper
