In 2016, Donald Trump ran a campaign that shook the world and eroded trust in traditional media. Through his relentless attacks on so-called "fake news," he led more people to turn away from stark truths and instead embrace alternative narratives. This trend reached its peak during the COVID-19 pandemic, when conspiracy theories exploded not only in the United States but also in Germany, through alternative media, Telegram channels, and Facebook groups. The AfD capitalized on this trend, gaining support by systematically isolating its members from traditional information sources.
Previously, it was relatively easy to debunk misinformation and conspiracy theories with some media literacy. However, the days of debunking low-hanging fruit are over. With the rise of increasingly sophisticated artificial intelligence (AI), we face entirely new challenges. Deepfakes and large language models (LLMs) are just the beginning of a technological revolution that will fundamentally change our perception of reality.
The advances in AI technology, both impressive and frightening, make it increasingly difficult to distinguish between genuine and artificial content. We are heading toward a time when even AIs may struggle to differentiate between reality and fiction.
But if we can no longer distinguish between fact and fiction—what is real?
Future Trust Anchors
Throughout human history, there have always been "trust anchors" in the form of leaders, institutions, or companies that provided us with a subjective sense of authenticity in information. With globalization and the rise of the internet, this power of interpretation has shifted from a few actors to many. Never before in human history has information been as freely, comprehensively, and verifiably accessible as in the last 20 to 30 years.
However, this freedom is already receding in some countries due to censorship and filtering. In the future, this informational freedom will likely be further restricted and threatened, as the reliability of information is endangered by a growing number of uncontrollable, AI-supported actors. This phenomenon is evident not only in the multitude of bots on social media attempting to sway discourse, but also in the voluntary retreat from complex issues and challenges toward opinion leaders such as COVID-19 deniers or political actors like Trump in the U.S. and the AfD in Germany.
In a world where distinguishing between reality and fiction is difficult, the question arises as to who or what will serve as a trustworthy information source—essentially, as a trust anchor. Here are four possible archetypes for future trust anchors, though in reality, it is likely that we will see hybrid forms rather than distinct archetypes:
Religious Leaders, Neoreligions, and Spiritual Cults
For most of Western history, religions held a monopoly on education and research, and thus on the interpretation of truth. For many religious people, the absolute truth still lies in the Torah, the Bible, or the Quran. Religions also developed independently in various regions around the world, indicating a deep-rooted human inclination toward spirituality, order, and leadership.
Therefore, it is very likely that in the future, spiritual movements or charismatic individuals will emerge as trust anchors. Their rise could be facilitated by the fact that, especially in times of great uncertainty, people often seek solace in spirituality. However, as with the other archetypes, there is a legitimate concern that these figures may use their interpretive authority to spread manipulative narratives.
Technocratic Elites and Corporatocracy
Dave Eggers' novel "The Circle" depicts an all-powerful tech company that exerts near-total control over people's lives through the collection and use of personal data. Many dystopian novels, including William Gibson's "Neuromancer," which pioneered the cyberpunk genre, paint a picture of a future where a technological elite or corporations rule over humanity.
This fiction could also become a reality. Today, a few companies already control a large portion of the information we consume and act as gatekeepers, deciding, for example, which apps we can have on our phones, as is the case with Apple and Google. Billions are already being spent on lobbying and direct bribery, where possible, for profit maximization. In this sense, a corporatocracy would be the logical consequence of capitalism. Tech billionaires are not far behind the old elite. They have long been acquiring media companies and entire social networks to pursue their own agendas, and they or their companies own the major AI models.
Under Elon Musk, for example, far-right radical ideologies have been allowed back onto his platform under the guise of free speech, while the reach of political opponents has been curtailed. After failing to gain more control over OpenAI, Musk stepped down from its board and later founded his own AI company, which developed an "uncensored and unbiased" LLM named Grok. However, when Grok was deemed "too woke," it too was increasingly censored, just in the opposite direction.
Nationalism and Authoritarian Regimes
Orwell's famous novel "1984" is often cited as a warning against the dangers of authoritarian regimes, recently also by critics of COVID-19 restrictions. The book, which has been banned both for being too pro-communist and too anti-communist, is fundamentally a critique of totalitarianism in general. It depicts a world characterized by comprehensive state control, manipulation of truth, surveillance, and the loss of personal freedoms.
Of all countries, China comes closest to the world described in Orwell's novel, already determining which information is accessible to its population and which is not. In addition to censorship, national isolation and control of information flow are key instruments of this interpretive power.
Anarchy and Social Fragmentation
In the absence of clear trust anchors within nations, society could fragment into various groups and subcultures, each maintaining its own version of reality. This fragmentation is already well underway, even without AI, partly because many people lack media literacy and critical thinking skills.
The U.S. is a prime example of how a nation can live in two different realities based on the same set of facts. The use of AI could further polarize and destabilize society, potentially even leading to collapse or civil war.
Real or Not—Does It Matter?
In this context, Nick Bostrom's simulation hypothesis is particularly relevant. If we can no longer distinguish between reality and fiction, and fabricated content comes to vastly outnumber authentic content, then the probability that we are consuming fiction, or living in a kind of simulation, is greater than the probability that we are experiencing the truth. This raises the fundamental question: If we can no longer make the distinction ourselves, does it still matter what is true?
Ultimately, this depends on how we define "truth" and "reality." If our perceptions and beliefs influence our behavior and shape the way we experience and interact with the world, then it does matter what we believe. Our beliefs affect our actions and, consequently, our lives and the lives of others. Even if reality and fiction are difficult to distinguish, engaging with the question of what is true remains an essential part of our lives and our understanding of ourselves.