AI May Be Quietly Narrowing Worldviews, Israeli Scholar Argues

Jerusalem, 7 August 2025 (TPS-IL) — As generative AI tools like ChatGPT become embedded in daily life, one Israeli legal scholar is raising an urgent concern: these systems may be quietly narrowing our worldview.

In a newly published article in the Indiana Law Journal, Prof. Michal Shur-Ofry of the Hebrew University of Jerusalem, also a Visiting Faculty Fellow at NYU’s Information Law Institute, argues that large language models (LLMs) tend to generate standardized, mainstream content — at the cost of cultural diversity and democratic discourse.

“If everyone is getting the same kind of mainstream answers from AI, it may limit the variety of voices, narratives, and cultures we’re exposed to,” Shur-Ofry said. “Over time, this can narrow our own world of thinkable-thoughts.”

Her study explored how AI-generated answers, while often useful and plausible, are frequently repetitive and culturally narrow. For example, when asked about important figures of the 19th century, ChatGPT returned names like Abraham Lincoln, Charles Darwin, and Queen Victoria — prominent, but overwhelmingly Anglo-centric. A similar bias emerged when the model was prompted to list the best television series: the results skewed heavily toward English-language hits, omitting non-Western and non-English alternatives.

The problem, Shur-Ofry explains, is rooted in how these AI systems are built. LLMs are trained on vast quantities of internet text that is disproportionately in English and reflects dominant cultural norms. The models use statistical patterns to predict the most likely response, so what is most common in the training data appears most often in their answers. While this approach increases accuracy and coherence, it sidelines perspectives from smaller linguistic and cultural communities. And as LLMs increasingly train on digital content that includes their own earlier outputs, the narrowing effect compounds over time.
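To make the mechanism concrete, here is a toy sketch in Python. The candidate names and scores are invented for illustration and are not drawn from any real model or from the paper; the point is only that when answers are sampled from a frequency-shaped probability distribution, the statistically dominant ones crowd out the rest, and the sampling "temperature" controls how sharply they do so.

```python
import numpy as np

# Invented next-token scores for possible answers to
# "name an important 19th-century figure" (illustrative values only).
candidates = ["Abraham Lincoln", "Charles Darwin", "Queen Victoria",
              "Rabindranath Tagore", "Simón Bolívar"]
logits = np.array([4.0, 3.5, 3.2, 1.0, 0.8])  # mainstream names dominate

def softmax_with_temperature(logits, temperature=1.0):
    """Softmax with temperature: low T concentrates probability on the most
    common answers; higher T spreads it toward less frequent ones."""
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

for t in (0.2, 1.0, 1.5):
    probs = softmax_with_temperature(logits, temperature=t)
    print(f"T={t}:", {name: round(float(p), 2) for name, p in zip(candidates, probs)})
```

Running the sketch shows the pattern the article describes: at a low temperature nearly all of the probability sits on the most familiar Anglo-centric names, while higher temperatures leave room for the long tail.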

“This isn’t just a technical glitch,” Shur-Ofry warns. “It can have deep societal consequences. It can reduce cultural diversity, undermine social tolerance, and weaken the foundations of democratic conversation and collective memory.”

To counter this trend, Shur-Ofry proposes a new legal and ethical principle for AI governance: “multiplicity.” This concept calls for AI tools to be designed in a way that actively supports exposure to diverse viewpoints and narratives, rather than just returning the most statistically likely answers.

“If we want AI to serve society, not just efficiency, we have to make room for complexity, nuance and diversity,” she said. “That’s what multiplicity is about—protecting the full spectrum of human experience in an AI-driven world.”

The paper also highlights two key ways to promote multiplicity. First, by building features into AI platforms that let users easily increase the diversity of the output, such as a control for the model’s “temperature,” a sampling setting that, when raised, broadens the range of generated responses. Second, by developing an ecosystem of competing AI systems that lets users seek out “second opinions” and alternative perspectives.
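As a rough illustration of the first idea, the snippet below asks the same question at two different temperatures using the OpenAI Python client. The model name, prompt, and temperature values are placeholders chosen for this sketch, not settings taken from the paper.

```python
from openai import OpenAI  # assumes the `openai` Python package and an API key

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Name five important figures of the 19th century."  # illustrative prompt

# Asking the same question at two temperatures: higher values spread the
# sampling over less likely continuations, one user-facing lever toward multiplicity.
for temperature in (0.2, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"temperature={temperature}:\n{response.choices[0].message.content}\n")
```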

Shur-Ofry also emphasizes the importance of AI literacy. “People need a basic understanding of how LLMs work and why their outputs may reflect popular, rather than balanced or inclusive, viewpoints,” she said. “This awareness can help users ask follow-up questions, compare answers, and think more critically about the information they’re receiving. It encourages them to see AI not as a single source of truth, but as a tool—one they can push back against in pursuit of richer, more pluralistic knowledge.”

She is collaborating with Dr. Yonatan Belinkov and Adir Rahamim of the Technion’s Computer Science Department, and Bar Horowitz-Amsalem of the Hebrew University, to put these ideas into practice.