July 17, 2025
When the stakes are high, do machine learning models make fair decisions?

Machine learning is an integral part of high-stakes decision-making across a broad swath of human-computer interactions. You apply for a job. You submit a loan application. Algorithms decide who advances and who is turned away.
Computer scientists from the University of California San Diego and the University of Wisconsin–Madison are challenging the common practice of using a single machine learning (ML) model to make such critical decisions. They asked how people feel when "equally good" ML models reach different conclusions.
Loris D'Antoni, an associate professor in the Jacobs School of Engineering's Department of Computer Science and Engineering, led the research, which was recently presented at the Conference on Human Factors in Computing Systems (CHI 2025). The paper, "Perceptions of the Fairness Impacts of Multiplicity in Machine Learning," outlines work D'Antoni began with fellow researchers during his tenure at the University of Wisconsin and continues today at UC San Diego. It is available on the arXiv preprint server.
D'Antoni and his team built on existing evidence that distinct but equally accurate models, like their human counterparts, can reach different outcomes for the same input, a phenomenon the paper calls multiplicity. In other words, one good model might reject an application that another approves. Naturally, this raises questions about how objective decisions can be reached.
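To make that concrete, here is a minimal sketch of multiplicity, not code from the study: it trains two models that differ only in their random seed on synthetic data, then checks where they disagree. The use of scikit-learn, the RandomForestClassifier model class, and every parameter below are illustrative assumptions.

```python
# Minimal sketch (not from the paper): two models with near-identical
# aggregate accuracy can still disagree on individual inputs.
# scikit-learn, the model class, and all parameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for application data (e.g., loan or hiring features).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two "equally good" models that differ only in their random seed.
model_a = RandomForestClassifier(random_state=1).fit(X_train, y_train)
model_b = RandomForestClassifier(random_state=2).fit(X_train, y_train)

print(f"accuracy A: {model_a.score(X_test, y_test):.3f}")
print(f"accuracy B: {model_b.score(X_test, y_test):.3f}")

# Despite similar overall accuracy, the models reach different
# conclusions for some of the same individuals.
disagree = model_a.predict(X_test) != model_b.predict(X_test)
print(f"share of test inputs where the models disagree: {disagree.mean():.1%}")
```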
"ML researchers posit that current practices pose a fairness risk. Our research dug deeper into this problem. We asked lay stakeholders, or regular people, how they think decisions should be made when multiple highly accurate models give different predictions for a given input," said D'Antoni.
The study uncovered two significant findings. First, stakeholders balked at the standard practice of relying on a single model, especially when multiple models disagreed. Second, participants rejected the notion that decisions should be randomized in such instances.
"We find these results interesting because these preferences contrast with standard practice in ML development and philosophy research on fair practices," said first author and Ph.D. student Anna Meyer, who was advised by D'Antoni at the University of Wisconsin and will start as assistant professor at Carlton College in the fall.
The team hopes these insights will guide future model development and policy. Key recommendations include expanding searches over a range of models and implementing human decision-making to adjudicate disagreements—especially in high-stakes settings.
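As a rough illustration of those recommendations (a sketch under my own assumptions, not the study's code), a decision pipeline could automate only the cases where a pool of comparable models agrees and route disagreements to a human reviewer. The helper below is hypothetical and reuses model_a, model_b, and X_test from the earlier sketch.

```python
# Hypothetical helper: automate unanimous decisions, escalate disagreements.
def decide_with_adjudication(models, x, human_review):
    votes = [int(m.predict(x.reshape(1, -1))[0]) for m in models]
    if len(set(votes)) == 1:       # all models agree: automate the decision
        return votes[0]
    return human_review(x, votes)  # models disagree: defer to a person

# Usage, reusing the models from the sketch above; the reviewer is stubbed out.
decision = decide_with_adjudication(
    [model_a, model_b],
    X_test[0],
    human_review=lambda x, votes: "flagged for human review",
)
print(decision)
```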
Other members of the research team include Aws Albarghouthi, an associate professor in computer science at the University of Wisconsin, and Yea-Seul Kim from Apple.
More information: Anna P. Meyer et al, Perceptions of the Fairness Impacts of Multiplicity in Machine Learning, arXiv (2024). DOI: 10.48550/arXiv.2409.12332
Journal information: arXiv
Provided by University of California – San Diego