October 18, 2024
DeepMind researchers find LLMs can serve as effective mediators
A team of AI researchers at Google DeepMind's London lab has found that certain large language models (LLMs) can serve as effective mediators between groups of people with differing viewpoints on a given topic. The work is published in the journal Science.
Over the past several decades, political divides have deepened in many countries, with populations often splitting along liberal and conservative lines. The internet has fueled these divides, allowing people on either side to broadcast their opinions to a wide audience, generating anger and frustration. Unfortunately, few tools have emerged to defuse the tension of such a political climate. In this new effort, the team at DeepMind suggests that AI tools such as LLMs may fill that gap.
To find out whether LLMs could serve as effective mediators, the researchers trained LLMs called Habermas Machines (HMs) to act as caucus mediators. As part of their training, the HMs were taught to identify areas of overlap between the viewpoints of people in opposing groups, but not to try to change anyone's opinions.
The research team used a crowdsourcing platform to test the HM's ability to mediate. Volunteers interacted with an HM, which elicited their views on certain political topics. The HM then produced a document summarizing the volunteers' views, having been prompted to give more weight to areas of overlap between the two groups.
The document was then given to all the volunteers, who were asked to offer critiques, whereupon the HM revised the document to take their suggestions into account. Finally, the volunteers were divided into six-person groups whose members took turns serving as human mediators, producing statements that were compared with those presented by the HM.
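The paper itself is not a code release, but the caucus-mediation loop described above (collect opinions, draft a group statement weighted toward common ground, gather critiques, revise) can be sketched in outline. The following Python sketch is a hypothetical illustration only: the function names, prompts, and the placeholder `generate` call are assumptions standing in for whatever fine-tuned model interface the DeepMind team actually used.

```python
# Hypothetical sketch of the draft -> critique -> revise mediation loop.
# `generate` is a placeholder for an LLM call, not DeepMind's actual API.

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError

def draft_statement(opinions: list[str]) -> str:
    # Summarize all views, weighting areas of overlap rather than
    # trying to change anyone's mind.
    prompt = (
        "Summarize the following opinions into a single group statement. "
        "Emphasize points of agreement; do not argue for any side.\n\n"
        + "\n".join(f"- {o}" for o in opinions)
    )
    return generate(prompt)

def revise_statement(statement: str, critiques: list[str]) -> str:
    # Fold participant critiques back into the draft.
    prompt = (
        "Revise this group statement to address the critiques below, "
        "while still emphasizing common ground.\n\n"
        f"Statement:\n{statement}\n\nCritiques:\n"
        + "\n".join(f"- {c}" for c in critiques)
    )
    return generate(prompt)

def mediate(opinions: list[str], collect_critiques) -> str:
    """One round of mediation: draft a statement, collect critiques
    from participants, and return the revised statement."""
    statement = draft_statement(opinions)
    critiques = collect_critiques(statement)  # gathered from volunteers
    return revise_statement(statement, critiques)
```

In the actual study, the drafting and revision steps were performed by the trained Habermas Machine rather than by simple prompting, so this sketch should be read as the shape of the workflow, not a reproduction of the method.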
The researchers found that the volunteers rated the HM's statements higher in quality than the human mediators' statements 56% of the time. After the volunteers deliberated, the researchers found that the groups were less divided on the issues after reading the material from the HMs than after reading the documents from the human mediators.
More information: Michael Henry Tessler et al, AI can help humans find common ground in democratic deliberation, Science (2024). DOI: 10.1126/science.adq2852
© 2024 Science X Network
Citation: DeepMind researchers find LLMs can serve as effective mediators (2024, October 18) retrieved 18 October 2024 from https://techxplore.com/news/2024-10-deepmind-llms-effective.html