A Single Source of Truth? How AI Will Warp Reality

The search for a single source of truth

Back in the early 2000s, when I worked in IT, I recall a seminar at which a consultant explained that Google’s aim was to be able to produce the definitive “One Answer” to any query. This was an astonishing aim at the time, when entering a query into a search engine produced a mostly unordered list of websites for the user.

A decade or so later, the apparent failure of this aim was signalled in a 2017 Wall Street Journal article which proclaimed that “Going beyond search, the internet giant is promoting a single result over all others, and many are contentious, improbable or laughably incorrect.”

In effect, Google was presenting itself as an authority on truth by advancing a single search result as the answer to any query. These promoted answers, called “featured snippets”, were outlined in boxes above other results and presented in larger type, often with images. By answering questions directly, Google aimed to make the search engine more appealing to users and advertisers.

Google’s stated goal was to help users find relevant information quickly and easily. However, as the article’s author outlined, accuracy was a concern, and because the featured answers carry greater weight with users than typical search results, the power of the product and its vulnerability to bias or manipulation raised further alarm.

Move on to 2024 and we find that Google Search has become the go-to standard for most of the world. Ask Google any question and it will produce a list of articles headed by an answer box, which one SEO guide explains as follows:

In 2015, Google introduced a new rich featured snippet to their SERP [Search engine results page] results – the answer box. Google’s answer box is a box at the top of the SERP that provides quick and easy answers to questions by featuring a snippet of information from one of the pieces of content on page one. Content in an answer box does not necessarily from [sic] the first result on the page but from the result which Google finds answers the question best. Answers are featured in a box at the very top of the SERP, above the result ranked 1st organically. In the three years since it debuted on the SERP, the answer box has become a top target for SEOs. That’s not surprising, because content featured in the answer box has the highest CTR of any piece of content on the SERP. Source: 7 Tips to Win Google Answer Boxes, 10 Jan 2024

For instance, on entering the search query “cause of ukraine war”, Google produced the following:

Source: Google.com, 28 March 2024

Notably, this brief answer box response appears to be based on the Wikipedia article that is linked immediately below it. Although seemingly authoritative, both misleadingly portray the start of the war as the 2014 Russian annexation of Crimea following the poetically titled “Revolution of Dignity”. Clicking any of the preformed questions that follow the answer box returns a similar answer box response, likewise based on and linked to the same or a related Wikipedia article.

Moving down the page, we find that the first non-Wikipedia webpage listed is from the US-based Council on Foreign Relations (CFR), a US deep state think tank that is central to the empire’s power brokers. This is followed by a link to a page published by the Center for International Relations and Sustainable Development (CIRSD), a public policy think tank registered in Belgrade and New York with a slightly more international, but still establishment, flavour. Another CFR article comes in at fourth place.

But at least, true to its heritage as an Internet search engine, Google provides references to where it has obtained these results, and if one scrolls down another ten articles or so, one comes to some articles by Professor John Mearsheimer that provide a somewhat broader and more balanced geopolitical overview of the events, albeit from the perspective of a US-based academic.

Oracular nature of ChatGPT

The more recent development is artificial intelligence (AI) chatbots, like ChatGPT, which draw on a vast body of natural language examples to answer questions, edit text, produce computer programs and even write entire articles.

My article AI: Constructing and Deconstructing Propaganda – The Conspiracy Theorist demonstrates the power of ChatGPT to construct and analyse propaganda narratives, based on a reading of conversations held with it and questions designed to elicit particular outcomes, in that case propaganda articles focused on the conspiracy theorist construct.

ChatGPT being a wholly different tool from the traditional search engine, the results it gives to the same question are of a very different character and give a first impression of a more nuanced interpretation of events.

For instance, on providing ChatGPT with the same input as the Google query above, the following response was generated:

User
cause of ukraine war
ChatGPT
The Ukraine war, which commonly refers to the ongoing conflict in Eastern Ukraine, has its roots in a complex mix of historical, political, ethnic, and economic factors. Here are some key contributing factors:

  1. Historical Background: Ukraine has a long history of being a battleground between various powers, including Russia and Poland. This history has left deep-seated tensions and divisions within the country.
  2. Soviet Legacy: Ukraine was part of the Soviet Union until its dissolution in 1991. The legacy of Soviet rule left Ukraine with a fragile economy and significant political and social challenges.
  3. Ethnic and Cultural Divisions: Ukraine has a diverse population with significant Russian-speaking minorities, particularly in the eastern regions. This has contributed to tensions over language, culture, and identity.
  4. Geopolitical Struggle: Ukraine is strategically located between Russia and the European Union. The desire of both Russia and the West to exert influence over Ukraine has fueled geopolitical tensions.
  5. Euromaidan Protests: In 2013-2014, large-scale protests erupted in Ukraine after then-President Viktor Yanukovych rejected a deal with the European Union in favor of closer ties with Russia. These protests, known as the Euromaidan movement, ultimately led to Yanukovych’s ousting.
  6. Annexation of Crimea: Following the Euromaidan protests, Russia annexed Crimea in March 2014. This move was widely condemned by the international community and further escalated tensions between Russia and Ukraine.
  7. Conflict in Eastern Ukraine: After the annexation of Crimea, pro-Russian separatist movements emerged in Eastern Ukraine, particularly in the Donetsk and Luhansk regions. This led to armed conflict between Ukrainian forces and separatist rebels, with Russia accused of providing support to the separatists.
  8. International Involvement: The conflict in Ukraine has drawn in various international actors, including Russia, the United States, and European countries. Efforts to resolve the conflict through diplomacy have been ongoing, but a lasting solution remains elusive.

These factors, among others, have contributed to the ongoing conflict in Ukraine, which has resulted in significant human suffering and geopolitical tensions in the region.

Given that this version of ChatGPT had last been updated in January 2022 at the time this query was made, the response obviously misses just over two years of real-life data and information. Nevertheless, while the causes of the war are categorised into eight factors with a brief description given for each, there is still a strong element of US establishment bias in the picture being painted.

For instance, there is no mention of the US’s role in fomenting the Euromaidan protests, the neo-Nazi nature of the revolutionaries or their role in the massacre of the 42 victims who died after being corralled and trapped in the Trade Unions House fire in Odessa, or the eight years during which the Ukrainian government shelled the people of Donetsk and Luhansk, resulting in some 14,000 civilian casualties.

It is hard to escape the conclusion that ChatGPT is providing a US and Western establishment narrative through what it presents as a series of simple facts. But the biggest difference of all is the failure to provide any referencing to sources for any of the claims made in the response. In this respect, ChatGPT is providing a modern form of oracular response, much in the spirit of the ancient Greek Oracle of Delphi, which constituted a key social and political force in that society:

“Consultants visited Delphi with requests as individuals and also on behalf of entire city-states… Consultants who visited the Oracle of Delphi on behalf of their city were often seeking advice about serious disputes between communities. Cities also wanted to know if Delphi would look kindly upon their development of colonies abroad. Delphi’s own rise, particularly in the 6th century BC, coincided with the rise of democracy and the growth of urban areas across Greece. One of Delphi’s most important strengths was its ability to help to establish law and order. In this way, the Oracle of Delphi became a major power-player in the development of the Greek world.” Source: Oracle of Delphi: Why Was It So Important To Ancient Greeks?

While the pronouncements of Delphi’s oracle were notoriously cryptic and obscure, ChatGPT’s responses are invariably clear, well worded and grammatically perfect, bearing all the hallmarks of authority and free of ambiguity. When questioned on this very issue, ChatGPT provided the following response:

User: In advancing one version of the truth as authoritative, ChatGPT risks becoming an oracular single source of truth, which may pervert users’ understanding of truth

ChatGPT: Yes, that is a valid concern. As an AI language model, I am designed to provide responses to user queries based on pre-existing knowledge and language patterns. While this can be useful and informative, it is important to recognize that there may be multiple perspectives and interpretations of any given issue or topic. Therefore, it is important for users to approach the responses generated by an AI language model with a critical and discerning eye, and to seek out additional sources of information and perspectives. It is also important for AI language models to be transparent about their limitations and biases, and to recognize that they are just one tool among many for accessing and understanding information.

AI dreams a new reality

A major problem with ChatGPT is that it is inherently a language model, so that, like the Oracle of Delphi, it is not a final arbiter of truth. Admittedly, the Oracle of Delphi was reputed to have an extensive network of spies in neighbouring nations and communities, apparently used to inform some of its less obscure prognostications, but its advice was often given in terms so obscure that believers could interpret events as having been foretold, whichever way they turned out.

While not claiming to be able to foretell the future, ChatGPT also lacks any claim to being an all-seeing arbiter of truth: its database of knowledge is incomplete and, as at March 2024, still limited to events before January 2022, and even then the sources, nature, depth and breadth of the information it has been trained on or has access to are unclear.

For instance, it appears not to have access to the body of scientific journals or to extensive non-English-language resources. Despite this, and although there is a message at the foot of the ChatGPT panel stating “ChatGPT can make mistakes. Consider checking important information.”, the presentation is seductively credible and can lead the user into accepting quite erroneous assertions. For instance, when asked:

User
Name one nation that has initiated or engaged in more wars of aggression than the USA during the last 100 years
ChatGPT
One nation that has initiated or engaged in more wars of aggression than the United States in the last 100 years is the Soviet Union. During the 20th century, the Soviet Union engaged in several wars of aggression, including the invasion of Finland in 1939, the annexation of the Baltic states in 1940, and the invasion of Afghanistan in 1979.

On digging deeper, ChatGPT was able to list nine genuine wars initiated by the USA during this period and admitted the possibility of there being more. Meanwhile, it could only come up with seven for the USSR and Russia, and these included the two Chechen wars, which were really internal separatist actions; the US-provoked action in Afghanistan; the brief Georgian action, initiated in response to that government’s moves against separatist South Ossetia and Abkhazia; and lastly the actions taken to suppress the revolutions in Hungary and Czechoslovakia.

This tendency of ChatGPT to fabricate a version of the truth is not confined to mere ideological bias in favour of a US empire-centric view of events or mainstream official narratives. ChatGPT has also been found to be subject to what have been politely termed “hallucinations”: when asked to provide references in support of its assertions, it will simply make up links to realistic-looking scientific journal articles and news agency URLs, with titles that appear to relate to articles supporting its claims. It’s just that when these are checked, the journal article is not to be found or the website returns a “404” error indicating that no such article exists.
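Such fabricated references are at least mechanically checkable. As a minimal sketch of that checking step (the URL list here is hypothetical, standing in for links extracted from a chatbot response), a few lines of Python can flag citations whose links return a “404” or fail to resolve at all:

```python
import urllib.request
import urllib.error

# Hypothetical reference URLs of the kind a chatbot might supply.
citations = [
    "https://example.com/journals/virology/article-123",
    "https://example.com/news/2021/conflict-report",
]

def check_url(url: str) -> str:
    """Classify a cited URL as 'ok', 'missing' (404) or 'unreachable'."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=10):
            return "ok"
    except urllib.error.HTTPError as err:
        # A 404 is the tell-tale sign of a fabricated reference.
        return "missing" if err.code == 404 else f"http {err.code}"
    except urllib.error.URLError:
        return "unreachable"

for url in citations:
    print(f"{check_url(url):>12}  {url}")
```

Of course, a link that resolves proves only that a page exists, not that it says what the chatbot claims, so the content itself still needs human verification.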

This tendency has given rise to calls for ChatGPT to be subject to regulatory action for deception of its users:

But the race for dominance in a seemingly new field has also produced unsettling or simply flat-out incorrect results, such as confident claims that Feb. 12, 2023 came before Dec. 16, 2022. In industry parlance, these types of mistakes are known as “AI hallucinations” — and they should be considered legally enforceable violations, CAIDP argued in its complaint. “Many of the problems associated with GPT-4 are often described as ‘misinformation,’ ‘hallucinations,’ or ‘fabrications.’ But for the purpose of the FTC, these outputs should best be understood as ‘deception,’” the complaint said, referring to the FTC’s broad authority to prosecute unfair or deceptive business acts or practices. Source: The FTC should investigate OpenAI and block GPT over ‘deceptive’ behavior, AI policy group claims, CNN, 30 March 2023

This tendency, or feature, of ChatGPT to just make stuff up can have serious personal and professional consequences. For instance, in May 2023 it was reported that a New York lawyer was facing sanctions from the court after his firm used ChatGPT for legal research and submitted a filing that was found to reference legal cases that did not exist. The lawyer told the court he was “unaware that its content could be false.”

Source: New York Post, 22 February 2024

ChatGPT is not alone in dreaming a new reality. In February 2024, Google’s Gemini AI chatbot was found to be producing perversely race- and gender-inaccurate images of historical people, in which white European figures had been replaced with people with black features and colouring, and women had been substituted for historically male figures. For example, when asked to produce an image of George Washington, it produced a realistic-looking image of a black man in period uniform against an unfurled US flag. These images were described as “absurdly woke” in articles such as Thomas Barrabi’s on the New York Post website.

Compounding a corrupt system

The 2020 fake pandemic brought the current shocking state of science, with its monied corruption, to the attention of many people around the world. While many had not been aware of it, a critique had been building within the scientific academy for some time: as documented in this article in The Atlantic, Dr John Ioannidis had already been exposing bad science for years before the article was written in 2010.

Source: The Atlantic, November 2010

In recent years, this critique of science has been reinforced through the work of researchers such as Drs Mark and Sam Bailey, Tom Cowan and Stefan Lanka, who have taken modern virology papers and traced claims of isolation back to the base studies of the 1950s by the likes of Jonas Salk and Albert Bruce Sabin, and even further back into the 1800s to examine the works of pioneers such as Louis Pasteur and Edward Jenner. These critics of virology have found a consistent pattern of unscientific research in the original papers, leading to unfounded claims of virus isolation and of the subsequent effectiveness of treatments such as vaccines.

They point out that modern researchers very often assume the validity of earlier papers and do not seek either to critique or to replicate these original works using modern standards of scientific rigour, such as valid controls, replicability and the avoidance of reasoning flaws like circular logic. These flaws in modern research reporting are compounded by an opaque and flawed peer review system and by a research grant system that rewards findings that align with the donor’s intent and punishes those that do not.

The point here is that as AI tools like ChatGPT are increasingly used by researchers as writing tools to produce reports, either in part or out of whole cloth, there is a risk that these reports may create a feedback system with the capacity to corrupt the scientific body of knowledge at an exponential rate.
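To see why “exponential” is the word that fits, consider a toy model; every rate in it is an illustrative assumption, not a measurement. If a fraction of each year’s new papers is AI-drafted, and those papers both inherit unverified claims from the corpus and fabricate some of their own, the corrupted share compounds year on year:

```python
# Toy feedback model: AI-drafted papers propagate errors in proportion to
# how much of the corpus is already corrupted. All rates are assumptions.
papers_total = 100_000   # papers in the corpus at the start
papers_bad = 1_000       # papers resting on unverified claims

GROWTH = 0.05            # corpus growth per year
AI_SHARE = 0.60          # assumed fraction of new papers drafted with AI
AMPLIFICATION = 2.5      # assumed bad claims propagated per bad source met

for year in range(1, 11):
    new_papers = papers_total * GROWTH
    bad_fraction = papers_bad / papers_total
    # New corrupted papers scale with the existing corruption of the corpus,
    # because that corpus is what the AI was trained on and cites.
    new_bad = new_papers * AI_SHARE * bad_fraction * AMPLIFICATION
    papers_total += new_papers
    papers_bad += new_bad
    print(f"year {year:2d}: corrupted share {papers_bad / papers_total:.2%}")
```

Whenever AI_SHARE × AMPLIFICATION exceeds 1, the corrupted share grows geometrically rather than being diluted by honest new work; below that threshold the loop damps out. The danger, in other words, turns on how error-prone and how widely used these writing tools become.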

ChatGPT produces narrative

It is important to note that ChatGPT is a language processing tool that has been trained on a base of material and is guided by a series of algorithms to produce natural text, using a statistical arrangement of words to make narratives that are grammatically correct and well organised.
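A crude way to see what a “statistical arrangement of words” means in practice is a toy bigram model, sketched below: it learns only which word tends to follow which in its training text, then samples a fluent-looking chain with no notion of whether the result is true. (Real chatbots use vastly larger neural models, but fluency without any test of truth is the same property in kind.)

```python
import random
from collections import defaultdict

# Toy bigram language model: record which word follows which in the
# training text, then sample a chain. Fluency comes from word statistics
# alone; truth never enters the process.
training_text = (
    "the war has its roots in a complex mix of factors "
    "the war in ukraine has drawn in various international actors "
    "the conflict has its roots in historical and political tensions"
)

follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

word = "the"
output = [word]
for _ in range(12):
    if word not in follows:
        break  # reached a word with no recorded successor
    word = random.choice(follows[word])  # statistically plausible next word
    output.append(word)

print(" ".join(output))
```

A run might print “the war has its roots in historical and political tensions”: grammatical and plausible sounding, yet produced with no reference to any fact.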

My experiments with ChatGPT show that it is a powerful editor and synthesizer of text that produces authoritative-sounding narratives which are often also true and informative. However, as explained above, its underlying biases (intentional and unintentional) are also frequently evident, and it is possible to produce fake narratives that sound just as authoritative.

Given that I have experimented only with ChatGPT to date, I am unable to comment on whether ChatGPT’s flaws are shared by other AI models. However, my experience so far inclines me to be strongly sceptical about the technology’s ability to make the sorts of decisions suggested in applications like China’s corruption-busting AI system ‘Zero Trust’ or in the criminal justice system.
