August 1, 2025
Experts outline evidence-based strategies for responsible AI policy development
Lisa Lock, scientific editor
Andrew Zinin, lead editor

As policymakers in the U.S. and globally consider how to govern artificial intelligence technology, Berkeley researchers have joined with others to recommend opportunities for developing evidence-based AI policy.
Jennifer Chayes, Ion Stoica, Dawn Song, and Emma Pierson are co-authors of the article "Advancing science- and evidence-based AI policy," by Rishi Bommasani et al., published in the journal Science on July 31. The article proposes policy mechanisms to address the opportunities and challenges of "increasingly powerful AI."
Additional co-authors include AI experts and scholars from Stanford University, Princeton University, the Carnegie Endowment for International Peace, Harvard University, the Institute for Advanced Study, and Brown University.
The latest publication follows a recent report developed by the Joint California Policy Working Group on AI Frontier Models, which was co-led by Chayes, professor and dean of the UC Berkeley College of Computing, Data Science, and Society (CDSS).
Last month, the working group submitted its final report, "The California Report on Frontier AI Policy," at the request of Governor Gavin Newsom. Drawing on the best available evidence about AI, the report proposed key policy principles to help inform the use, assessment, and governance of frontier AI in the state. The report has been cited by California state senators and assembly members drafting legislation, as well as by state agencies, tech industry leaders, and civil society organizations.
The July 31 article in Science makes recommendations to the broader range of policymakers considering interventions in the U.S. and around the world: "AI policy should advance AI innovation by ensuring that its potential benefits are responsibly realized and widely shared. To achieve this, AI policymaking should place a premium on evidence: Scientific understanding and systematic analysis should inform policy, and policy should accelerate evidence generation."
In the article, the authors articulate a vision for the development of evidence-based AI policy through consideration of three core components: how evidence should inform policy; the current state of evidence; and how policy can accelerate the generation of new evidence.
The authors note that "defining what counts as (credible) evidence is the first hurdle for applying evidence-based policy to an AI context—a task made more critical since norms for evidence vary across policy domains."
However, the experts warn that continually evolving evidence should not be used by policy actors to justify inaction or to promote negative societal outcomes; they cite past instances in which industries co-opted evidence as cautionary historical examples to avoid repeating.
"Evidence-based AI policy would benefit from evidence that is not only credible but also actionable," the authors said. "A focus on marginal risk, meaning the additional risks posed by AI compared to existing technologies like internet search engines, will help identify new risks and how to appropriately intervene to address them."
"The broad reach of AI may mean evidence and policy are misaligned: Although some evidence and policy squarely address AI, much more partially intersects with AI," the authors said. "Well-designed policy should integrate evidence that reflects scientific understanding rather than hype."
"Policy can actively accelerate the generation of evidence that can best inform future policy decisions," the authors said.
They recommend policymakers pursue the following mechanisms to grow the evidence base and serve as the foundation of evidence-based AI policy:
- Incentivize the evaluation of AI models prior to release (an illustrative sketch follows this list);
- Require major AI companies to disclose more information about their safety practices to governments and, especially, to the public;
- Increase post-deployment monitoring of AI harms;
- Create shields to protect good-faith third-party AI research;
- Strengthen societal defenses, especially given clear evidence of unmitigated risk even absent AI capabilities; and
- Catalyze the formation of scientific consensus.
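To make the first of these mechanisms concrete, here is a minimal, hypothetical sketch of what a pre-release evaluation gate might look like in Python; the function names, red-team prompt set, and unsafe-rate threshold are illustrative assumptions, not anything the authors specify:

from typing import Callable

# Hypothetical sketch only: the article recommends pre-release evaluation
# as a policy mechanism; it does not prescribe this implementation.
def passes_prerelease_eval(
    model: Callable[[str], str],       # assumed interface: prompt -> response
    red_team_prompts: list[str],       # assumed: prompts probing known harm pathways
    is_unsafe: Callable[[str], bool],  # assumed: flags an unsafe response
    max_unsafe_rate: float = 0.01,     # assumed policy threshold (1%)
) -> bool:
    """Return True if the model's unsafe-response rate stays under the threshold."""
    if not red_team_prompts:
        return False  # no measurements means no release, under this sketch
    unsafe = sum(is_unsafe(model(p)) for p in red_team_prompts)
    return unsafe / len(red_team_prompts) <= max_unsafe_rate

Whatever the specific gate, the point of such a mechanism is evidentiary: it produces a documented, repeatable pre-deployment measurement that governments, researchers, and the public can audit.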
"We recognize that coalescing around an evidence-based approach is only the first step in reconciling many core tensions," the authors concluded. "Extensive debate is both healthy and necessary for democratically legitimate policymaking; such debate should be grounded in available evidence."
More information: Rishi Bommasani et al., Advancing science- and evidence-based AI policy, Science (2025). DOI: 10.1126/science.adu8449
Journal information: Science
Provided by University of California – Berkeley