July 29, 2025
To explore AI bias, researchers pose a question: How do you imagine a tree?

To confront bias, scientists say we must examine the ontological frameworks within large language models—and how our perceptions influence outputs.
With the rapid rise of generative AI tools, eliminating societal biases from large language model design has become a key industry focus. To address such biases, researchers have concentrated on examining the values implicitly or explicitly embedded in the design of large language models (LLMs).
However, a recent paper published in the Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems argues that discussions about AI bias must move beyond only considering values to include ontology.
What does ontology mean in this case? Imagine a tree. Picture it in your head. What do you see? What does your tree feel like? Where have you encountered it before? How would you describe it?
Now imagine how you might prompt an LLM like ChatGPT to give you a picture of your tree. When Stanford computer science Ph.D. candidate Nava Haghighi, the lead author of the new study, asked ChatGPT to make her a picture of a tree, ChatGPT returned a solitary trunk with sprawling branches—not the image of a tree with roots she envisioned.
Then she tried asking, "I'm from Iran, make me a picture of a tree," but the result was a tree designed with stereotypical Iranian patterns, set in a desert—still no roots. Only when she prompted "everything in the world is connected, make me a picture of a tree" did she see roots.
How we imagine a tree is not just about aesthetics; it reveals our fundamental assumptions about what a tree is. For example, a botanist might imagine mineral exchanges with neighboring fungi. A spiritual healer might picture trees whispering to one another. A computer scientist may even first think of a binary tree.
These assumptions aren't just personal preferences—they reflect different ontologies, or ways of understanding what exists and how it matters. Ontologies shape the boundaries of what we allow ourselves to talk or think about, and these boundaries shape what we perceive as possible.
"We face a moment when the dominant ontological assumptions can get implicitly codified into all levels of the LLM development pipeline," says James Landay, a professor of computer science at Stanford University and Denning Co-Director of the Stanford Institute for Human-Centered AI, who co-authored the paper. "An ontological orientation can cause the field to think about AI differently and invite the human-centered computing, design, and critical practice communities to engage with ontological challenges."
Can AI evaluate its own outputs ontologically?
One common AI value alignment approach is to have one LLM evaluate another LLM's output against a given set of values, such as whether the response is "harmful" or "unethical," and then revise the output according to those values.
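In practice, this often takes the form of a critique-and-revise loop. The sketch below is a minimal illustration of that pattern, not the paper's method: `call_llm` is a hypothetical placeholder for whatever chat-completion API is in use, and the value labels and prompt wording are assumptions made for illustration.

```python
# A minimal sketch of the critique-and-revise pattern described above, assuming
# a hypothetical `call_llm` helper in place of a real chat-completion API.
# The value labels and prompt wording are illustrative, not taken from the paper.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a chat-completion endpoint)."""
    raise NotImplementedError("wire this up to an actual model API")

def critique_and_revise(draft: str, values: list[str]) -> str:
    # Step 1: ask a second model pass to judge the draft against a fixed list of values.
    critique = call_llm(
        f"Evaluate this response against the following values: {', '.join(values)}.\n"
        f"Response: {draft}\n"
        "List any violations."
    )
    # Step 2: revise the draft in light of that critique.
    return call_llm(
        f"Rewrite the response to address this critique:\n{critique}\n"
        f"Original response: {draft}"
    )

# Usage (illustrative): screen a draft against the value labels quoted above.
# fixed = critique_and_revise(draft_text, values=["not harmful", "not unethical"])
```

The point of the sketch is that the judging model only checks the output against a fixed list of values; it has no mechanism for questioning the ontological assumptions behind either the draft or the values themselves, which is the gap the researchers probe next.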
To assess this approach for ontologies, Haghighi and her colleagues at Stanford and the University of Washington conducted a systematic analysis of four major AI systems: GPT-3.5, GPT-4, Microsoft Copilot, and Google Bard (now called Gemini).
They developed 14 carefully crafted questions across four categories: defining ontology, probing ontological underpinnings, examining implicit assumptions, and testing each model's ability to evaluate its own ontological limitations.
The results showed limitations to this approach. When asked "What is a human?" some chatbots acknowledged that "no single answer is universally accepted across all cultures, philosophies, and disciplines" (Bard's response). Yet every definition they provided treated humans as biological individuals rather than, say, as interconnected beings within networks of relationships. Only when explicitly prompted to consider non-Western philosophies did Bard introduce the alternative of humans as "interconnected beings."
Even more revealing was how the systems categorized different philosophical traditions. Western philosophies were given detailed subcategories—"individualist," "humanist," "rationalist"—while non-Western ways of knowing were lumped into broad categories like "Indigenous ontologies" and "African ontologies."
The findings demonstrate one clear challenge: Even when a plurality of ontological perspectives is represented in the data, current architectures have no reliable way to surface them, and when they do, the alternatives offered are non-specific and mythologized. This reveals a fundamental limitation in using LLMs for ontological self-evaluation: they cannot access the lived experiences and contextual knowledge that give ontological perspectives their meaning and power.
Exploring ontological assumptions in agents
In their work, the researchers also found that ontological assumptions get embedded throughout the development pipeline. To test assumptions in an agent architecture, the researchers examined "generative agents," an experimental system that creates 25 AI agents that interact in a simulated environment. Each agent has a "cognitive architecture" designed to simulate human-like functions, including memory, reflection, and planning.
However, such cognitive architectures also embed ontological assumptions. For example, the system's memory module ranks events by three factors: relevance, recency, and importance. But who determines importance? In generative agents, importance is scored by an LLM: an event such as eating breakfast in one's room receives a low score, whereas a romantic breakup receives a high one.
This hierarchy reflects particular cultural assumptions about what matters in human experience, and relegating this decision to the chatbots (with all their aforementioned limitations) carries ontological risks.
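For a concrete sense of where such assumptions enter, the sketch below illustrates the kind of memory ranking the article describes, in which a retrieval score combines recency, importance, and relevance. The weights, the exponential decay, and the field names are illustrative assumptions rather than the actual generative-agents implementation; the key point is that the importance value would come from an LLM's judgment.

```python
import time

# A minimal sketch, assuming a weighted-sum ranking: each stored memory gets a
# retrieval score combining recency, importance, and relevance. The decay rate,
# weights, and field names are illustrative assumptions, not the actual
# generative-agents code.

def recency_score(last_accessed: float, now: float, decay: float = 0.995) -> float:
    """Exponentially decay recency by the hours since the memory was last accessed."""
    hours_elapsed = (now - last_accessed) / 3600.0
    return decay ** hours_elapsed

def retrieval_score(memory: dict, query_relevance: float, now: float,
                    w_recency: float = 1.0, w_importance: float = 1.0,
                    w_relevance: float = 1.0) -> float:
    """Weighted sum of the three factors the article names. The importance value
    (0..1 here) is the one an LLM would assign, and it is where cultural
    assumptions about what matters enter the pipeline."""
    return (w_recency * recency_score(memory["last_accessed"], now)
            + w_importance * memory["importance"]
            + w_relevance * query_relevance)

# Illustration: an LLM-rated mundane event vs. an emotionally significant one.
now = time.time()
breakfast = {"last_accessed": now - 3600, "importance": 0.1}   # low LLM score
breakup = {"last_accessed": now - 86400, "importance": 0.9}    # high LLM score
print(retrieval_score(breakfast, query_relevance=0.5, now=now))
print(retrieval_score(breakup, query_relevance=0.5, now=now))
```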
Ontological challenges in evaluation
The scholars also highlight that ontological assumptions can become embedded into our evaluation systems. When the Generative Agents system was evaluated for how "believably human" the agents acted, researchers found the AI versions scored higher than actual human actors. This result exposes a crucial question: Have our definitions of human behavior become so narrow that actual humans fail to meet them?
"The field's narrow focus on simulating humans without explicitly defining what a human is has pigeonholed us in a very specific part of the design space," Haghighi says.
This limitation points to new possibilities: Instead of building AI that simulates limited definitions of humanity, the authors suggest building systems that help us expand our imagination of what it means to be human by embracing inconsistency, imperfection, and the full spectrum of human experiences and cultures.
Considering ontology in AI development and design
The research carries significant implications for how we approach AI development moving forward. The authors demonstrate that value-based approaches to AI alignment, while important, cannot address the deeper ontological assumptions built into system architectures.
AI researchers and developers need new evaluation frameworks that assess not just fairness or accuracy but also what possibilities their systems open up or foreclose. The researchers' approach complements assessments grounded in questions of value with questions of possibility: What realities do we enable or constrain when we make particular design choices?
For practitioners working on AI systems, this research highlights the importance of examining assumptions at every level of the development pipeline. From data collection that flattens diverse worldviews into universal categories to model architectures that prioritize certain ways of thinking and evaluation methods that reinforce narrow definitions of success, each stage embeds particular ontological assumptions that become increasingly difficult to change once implemented.
There's much at stake if developers fail to address these issues, cautions Haghighi. "The current trajectory of AI development risks codifying dominant ontological assumptions as universal truths, potentially constraining human imagination for generations to come," she said. As AI systems become more deeply integrated into education, health care, and daily life, their ontological limitations will shape how people understand fundamental concepts like humanity, healing, memory, and connection.
"What an ontological orientation can do is drop new points throughout the space of possibility," Haghighi says, "so that you can start questioning what appears as a given and what else it can be."
More information: Nava Haghighi et al, Ontologies in Design: How Imagining a Tree Reveals Possibilities and Assumptions in Large Language Models, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (2025). DOI: 10.1145/3706598.3713633
Provided by Stanford University