June 19, 2025
New test can help driverless cars make 'moral' decisions
Gaby Clark, scientific editor
Andrew Zinin, lead editor

Researchers have validated a technique for studying how people make "moral" decisions when driving, with the goal of using the resulting data to train the artificial intelligence used in autonomous vehicles. These moral psychology experiments were tested using the most critical audience researchers could think of: philosophers.
The paper, "Morality on the road: the ADC model in low-stakes traffic vignettes," is published in the journal Frontiers in Psychology.
"Very few people set out to cause an accident or hurt other people on the road," says Veljko Dubljević, corresponding author of the study and a professor in the Science, Technology & Society program at North Carolina State University. "Accidents often stem from low-stakes decisions, such as whether to go five miles over the speed limit or make a rolling stop at a stop sign. How do we make these decisions? And what constitutes a moral decision when we're behind the wheel?"
"We needed to find a way to collect quantifiable data on this, because that sort of data is necessary if we want to train autonomous vehicles to make moral decisions," Dubljević says.
"Once we found a way to collect that data, we needed to find a way to validate the technique—to demonstrate that the data is meaningful and can be used to train AI. For moral psychology, the most detail-oriented set of critics would be philosophers, so we decided to test our technique with them."
The technique the researchers developed is based on the Agent Deed Consequence model, which posits that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that results from the deed.
Specifically, the technique tests how people judge the morality of driving decisions by sharing a variety of traffic scenarios with test subjects, and then having the test subjects answer a series of questions about moral acceptability and various aspects of what took place in each scenario.
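To make the ADC setup concrete, here is a minimal sketch of how vignette ratings of this kind could be represented and summarized. The factor names, rating scale, and data are illustrative assumptions for exposition, not the study's actual materials or analysis.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical ADC-style vignette: each scenario varies whether the agent's
# intent, the deed itself, and the consequence are judged positive.
@dataclass(frozen=True)
class Vignette:
    agent_positive: bool        # driver's character/intent judged positive?
    deed_positive: bool         # action follows traffic norms?
    consequence_positive: bool  # outcome harmless?

# Simulated participant ratings of moral acceptability (1 = wrong, 7 = acceptable).
ratings = [
    (Vignette(True, True, True), 7),
    (Vignette(True, True, True), 6),
    (Vignette(True, False, True), 4),
    (Vignette(False, False, False), 1),
    (Vignette(False, False, False), 2),
]

def mean_rating_by_factor(data, factor):
    """Average acceptability when a given ADC factor is positive vs. negative."""
    pos = [r for v, r in data if getattr(v, factor)]
    neg = [r for v, r in data if not getattr(v, factor)]
    return mean(pos), mean(neg)

for factor in ("agent_positive", "deed_positive", "consequence_positive"):
    hi, lo = mean_rating_by_factor(ratings, factor)
    print(f"{factor}: positive={hi:.2f}, negative={lo:.2f}")
```

Comparing mean acceptability across each factor is one simple way such quantifiable judgments could feed into training data for an autonomous-vehicle decision model.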
For this validation study, the researchers enlisted 274 study participants with advanced degrees in philosophy. The researchers shared driving scenarios with the study participants and asked them about the morality of the decisions that drivers made in each scenario. The researchers also used a validated measure to assess the study participants' ethical frameworks.
"Different philosophers subscribe to different schools of thought regarding what constitutes moral decision-making," Dubljević says. "For example, utilitarians approach moral problems very differently from deontologists who are very focused on following rules. In theory, because different schools of thought approach morality differently, results on what constituted moral behavior should have varied depending on which framework different philosophers used.
"What was exciting here is that our findings were consistent across the board," Dubljević says. "Utilitarians, deontologists, virtue ethicists—whatever their school of thought, they all reached the same conclusions regarding moral decision-making in the context of driving.
"That means we can generalize the findings," says Dubljević. "And that means this technique has tremendous potential for AI training. This is a significant step forward.
"The next step is to scale up testing among broader populations and in multiple languages, with the goal of determining the extent to which this approach can be generalized both within western culture and beyond."
The first author of the paper is Michael Pflanzer, a Ph.D. student at NC State. The paper was co-authored by Dario Cecchini, a postdoctoral researcher at NC State, and Sam Cacace, an assistant professor of psychology at the University of North Carolina at Charlotte.
More information: Michael Pflanzer et al, Morality on the road: the ADC model in low-stakes traffic vignettes, Frontiers in Psychology (2025). DOI: 10.3389/fpsyg.2025.1508763
Journal information: Frontiers in Psychology
Provided by North Carolina State University
Citation: New test can help driverless cars make 'moral' decisions (2025, June 19), retrieved 19 June 2025 from https://techxplore.com/news/2025-06-driverless-cars-moral-decisions.html