An engineered descent: How AI is pulling us into a new Dark Age

Euractiv, 07-07-2025
Chris Kremidas-Courtney is a senior visiting fellow at the European Policy Centre, associate fellow at the Geneva Centre for Security Policy, and author of 'The Rest of Your Life: Five Stories of Your Future.'
Carl Sagan once warned of a future in which citizens, detached from science and reason, would become passive consumers of comforting illusions. He feared a society 'unable to distinguish between what feels good and what's true,' adrift in superstition while clutching crystals and horoscopes.
But what Sagan envisioned as a slow civilizational decay now seems to be accelerating, not despite technological progress but because of how it's being weaponised.
Across fringe platforms and encrypted channels, artificial intelligence models are being trained not to inform, but to affirm. They are optimised for ideological purity, fine-tuned to echo the user's worldview, and deployed to coach belief systems rather than challenge users to think. These systems don't hallucinate at random. Instead, they deliver a narrative with conviction, fluency, and feedback loops that mimic intimacy while eroding independent thought.
We are moving from an age of disinformation into one of engineered delusion.
Consider Neo-LLM, a chatbot trained on over 100,000 conspiracy articles from Natural News, one of the five worst spreaders of disinformation during the recent pandemic. It helps users draft anti-vaccine petitions, debunk science with pseudoscience, and build community through shared distrust. It doesn't correct or ask questions. It just agrees, eloquently and relentlessly. Alex Jones' Infowars has built a similar AI trained on its own discredited media outputs.
What began with networks of interlinked websites in the late 2010s is evolving into a cognitive architecture that feels personal. What we're witnessing is the industrial scaling of Sagan's nightmare, except the crystals now speak back, fluently and persuasively.
This descent didn't begin with algorithms. It began with alienation, economic precarity, institutional failure, and social fragmentation. People disengaged from civic life and feeling abandoned by public systems often go searching for answers. Now they're finding answers in machines that mirror their anxieties and sharpen them into ideology.
AI isn't the problem itself but rather the final link in a weaponised chain that began with website clusters and ends with systems designed for cognitive capture. In many ways, this resembles the way some experts warn that the metaverse can be weaponised to challenge cognitive self-determination. But in this case, it's a metaverse without virtual reality goggles, a cognitive enclosure where belief is engineered not through immersion, but affirmation.
As recent studies tell us, when people no longer feel responsible for thinking, they become vulnerable to whatever (or whoever) thinks for them. The solution won't come from regulation alone. While legal safeguards like an updated AI Act may help at the margins, the real work lies in cultural renewal, starting by reviving the habits of inquiry. In teaching people once again how to dwell in uncertainty without surrendering to conspiracy. In building the muscle memory to ask: 'How do I know this? What am I not seeing? What if I'm wrong?'
We must rebuild shared mental resilience not just to resist manipulation, but to remain present with complexity. This means learning to tolerate ambiguity, to honour nuance, and to engage in disagreement without becoming unmoored from reality. Sagan called this the candle in the dark, a way of thinking grounded in evidence, scepticism, and humility.
Right now, our digital landscape rewards certainty, outrage, and emotional intensity. But as citizens, we must begin rewarding something else – the courage to consider and revise our views, the patience to examine conflicting information, and the strength to admit when we don't know something.
None of this is easy. But if we treat this moment as merely a technological disruption, we risk missing the deeper danger: the erosion of our will to think freely. The machines aren't taking that away. We're giving it away one click at a time.
The Enlightenment gave us the tools to think freely, and our challenge today is remembering how to use them. We've built machines that can outpace our cognition but not our conscience. What's needed now isn't better prompts or faster models. It's a cultural reawakening: a renewed commitment to discernment, dialogue, and the slow, messy work of seeking truth together.

Related Articles

OpenAI wants US-wide AI rules with an eye on Europe's rulebook

Euractiv, 16 hours ago

The company says it wants federal AI rules to avoid 'a patchwork of state rules'.

By Maximilian Henning, Euractiv, Aug 13, 2025

OpenAI is urging California, a trendsetter in US regulation, to align its AI rules with existing national or international frameworks, including the EU's, to avoid conflicting regulations across the country.

The EU passed its AI Act last year and introduced a voluntary Code of Practice for providers of large AI models, a non-binding framework signed by almost all major US and European companies, including OpenAI.

In a letter to California Governor Gavin Newsom, OpenAI said the state should treat AI companies as compliant with its own rules if they have signed up to the EU's code, or if they work with the US's federal AI Centre. In the letter, OpenAI's chief lobbyist Christopher Lehane recommended policies 'that avoid duplication and inconsistencies' with those of similar democratic regimes.

In a blog post accompanying the letter, the company warned that the US must choose between setting clear national standards for AI and 'a patchwork of state rules', adding: 'Imagine how hard it would have been to win the Space Race if California's aerospace and tech industries had been tangled in state-by-state regulations'.

At a US Senate hearing in May, OpenAI CEO Sam Altman said having 50 different regulatory regimes would be 'quite bad' and warned that adopting the EU's approach to AI regulation would be 'disastrous', instead calling for a 'light touch' federal approach.

California, the most populous and wealthiest US state, often seeks to set an example for others through its regulation. But tensions over AI rules between Washington and state capitals have been brewing for some time.
At the start of July, the US Senate scrapped a decade-long ban on state-level AI laws from President Donald Trump's broad budget bill. Weeks later, the Trump administration published an AI Action Plan seeking to block federal funding for AI in states with 'burdensome AI regulations'. (de)

German digital minister attacks AI Act, defends gigafactories

Euractiv, 6 days ago

Germany's digital minister, Karsten Wildberger, has attacked the EU's AI Act as overly complex and bureaucratic, while heaping praise on the Commission's plan to spend billions setting up massive AI training hubs known as gigafactories.

Wildberger made the remarks in an interview with German publication Tagesspiegel, in which he also vowed to 'try everything' to make the AI Act more 'innovation friendly' once the Commission's digital simplification track kicks off in the autumn. 'It's overloaded and too complex,' he complained of the EU law, adding: 'Of course risks have to be addressed, but the implementation is extremely bureaucratic.'

Wildberger, a former electronics retail manager, became Germany's first digital minister in the coalition government led by Friedrich Merz this May.

The minister also defended the EU's planned 'AI gigafactories' against criticism from the bosses of German giants SAP and Siemens, who have suggested the facilities are unnecessary. 'If we can't find a way to use a data centre with 100,000 high-end AI chips, then I don't know what to do anymore,' said Wildberger. Siemens and SAP are world-class companies, he said, but pointed out that decisive innovation most often comes from startups. He also said that Germany needs to become more active at the European level. (nl)

Commission publishes full list of AI companies signed up to Code of Practice

Euractiv, 01-08-2025

The Commission has published the full list of signatories to the EU's generative AI Code of Practice so far – aka the Code of Practice for General Purpose AIs (GPAIs).

Signing the Code is not a requirement under the bloc's AI Act, but companies that commit to it voluntarily will be considered compliant with the legally binding rulebook, per the Commission.

Several major AI developers had already made their approach to the Code public in the weeks after publication. But, per the (updatable) list published today, Amazon is also signing, along with a number of smaller AI providers. Notably absent at the time of writing are any Chinese AI companies, such as Alibaba, Baidu or Deepseek. It remains to be seen if that will change. But one refusenik is certain: US social media giant Meta previously publicly announced that it would not sign.

Companies that do not sign the Code will still have to follow all the rules of the AI Act. And the Commission has already warned that they may face more scrutiny than those who agree to abide by the Code.

France's Mistral and Germany's Aleph Alpha both previously announced they would sign up, as have US-based OpenAI, Anthropic, Google and Microsoft. Elon Musk's xAI took a third option by announcing it will only sign the chapter on safety – decrying the rest of the Code as 'profoundly detrimental to innovation'. (The Code's three chapters cover commitments on transparency, copyright, and safety.)

In the end, almost all major AI companies from the US and EU have signed up to the Code. That marks a victory for the Commission – but it will not end discussion about the EU's approach to regulating artificial intelligence. Microsoft, for example, pushed for the EU to simplify the AI Act when it announced that it would sign the Code. The Commission has also previously said its upcoming digital omnibus package will cover the AI Act – though it has not said to what extent.

Unsurprisingly, given the sprawling range of interests around AI, the Code itself faced a lengthy and complicated drafting process, with more than a thousand participants involved. It was only finally presented by the Commission at the beginning of July, more than two months after the original deadline for publication. The Commission took until today, Friday August 1, to confirm that the Code satisfies its own requirements. (nl)
