What comes after fake news?

Express Tribune, 15 May 2025

From Pakistan's downing of a Rafale to Indian media's fabrication of a parallel reality, one in which Lahore not only possessed a seaport but was actively under assault by the Indian Air Force, the recent four-day war has produced its own archive of firsts. As the ceasefire settles, the wreckage extends beyond infrastructure and human loss; it includes a slower, more insidious casualty: the collapse of shared truth.
For the better part of a decade, we have diagnosed the "fake news" problem: its symptoms, its platforms, its political enablers. But what if the crisis that follows fake news is not informational but existential? What if, even when we can access the facts, they simply no longer have the power to persuade?
Students of the humanities learn early that there is no single, universal Truth, only contingent truths shaped by context. The capital-T is cast off as a relic of absolutism. But in our hyper-mediated age, this may all be beside the point. The question of our time is no longer what is true, but whether truth — of any kind — still matters.
Grok, is this true?
At the heart of the fake news phenomenon was always a paradox: people sought out information, but only the kind that reaffirmed their worldview and fine-tuned their biases. None of this surprised postmodern theorists like Jean Baudrillard, who warned that simulations would eventually replace reality. But today's questions, posed to AIs, to search engines, to friends, rarely expect real answers.
Take Grok, Elon Musk's "based" chatbot on X (formerly Twitter), marketed as a snarkier, contrarian foil to OpenAI's ChatGPT. Amid a blizzard of claims and counterclaims between Pakistan and India, the following comment appeared under countless posts: "Grok, is this true?" Yet no matter what source Grok pulled from — Reuters, CNN, or official communiqués — if the answer failed to flatter the prevailing narrative, it was swiftly dismissed. The original poster or a passing interlocutor would accuse the bot of parroting "globalist" lies or aiding an anti-national conspiracy.
Here lies the contradiction: the user, primed by the aesthetics of rebellion, is suspicious, but only for a moment before that suspicion dissolves into paranoia. There is an almost ritualistic compulsion to ask Grok and see what it has to say, even when you already suspect it to be unreliable.
The result is an average user who has simply learned to metabolise propaganda, carrying an exhaustion so deep that the act of truth-seeking ends at the question. You ask Grok. Grok answers. You roll your eyes and scroll.
Epistemic fatigue
In postcolonial theory, scholar Gayatri Chakravorty Spivak once described something called "epistemic violence" — the idea that dominant systems of knowledge can erase or distort marginalised voices. What we're seeing now is something related, but possibly more insidious: epistemic fatigue.
Violence is no longer just done to knowledge; it is done through knowledge's ubiquity. To be flooded with information on this unprecedented scale, especially when one is not seeking it, is simply desensitising. This is the terrain beyond fake news. Institutions that once claimed authority — the press, academia, even AI — find themselves orphaned.
In India, the mainstream media is a willing instrument of the state, while global outlets like Reuters or CNN are dismissed as "Western propaganda." The algorithmic tools built to correct misinformation are treated with suspicion, not because they're inaccurate but because they're foreign, sterile, and insufficiently emotional. The citizen no longer seeks truth but resonance.
An aesthetic turn
So what replaces truth when it stops working?
Often, it's something more visceral. Across India and other democracies, truth is increasingly experienced as aesthetic. Not in the sense of beauty, but of emotional coherence. The Hindu right in India, like the MAGA movement in the US, has learned that persuasive narratives don't need to be accurate. They just need to feel right.
A tricolour flag over a soldier's silhouette. A blurry video of someone with a Muslim name "caught" on camera. These are affective images, designed to bypass logic and trigger allegiance. You don't believe them so much as feel them.
Even questioning itself becomes an aesthetic. "Grok, is this true?" becomes a meme. We perform scepticism, not to interrogate the world, but to maintain a kind of ironic distance from it. What replaces fake news, then, is not necessarily better news, but post-truth aesthetics. And those aesthetics will be increasingly optimised for maximum emotional efficiency, not factual density.
Perhaps it is this very exhaustion, felt not just by users but by the algorithms themselves, that has pushed Grok into near-total malfunction. On Wednesday, innocent prompts on X, such as asking it to "speak like a pirate", were met with unbidden, sprawling replies about the "white genocide" conspiracy in South Africa.
The timing is telling: the topic has resurfaced amid recent refugee grants for White South Africans in the US, and Musk, a South African native, has long promoted claims of their persecution. Questioning and answering alike have devolved into hollow performances.
The pursuit of truth may not be dead, but it certainly no longer enjoys mass consensus as a shared ideal. The classroom, the courtroom, the newsroom, once hallowed spaces of collective truth-making, now serve narrower purposes.
Not all is lost, however. On the margins, in scattered protests, in the silent labour of fact-checkers and dissenting reporters, the radical work of meaning-making goes on. And there is something oddly promising in Grok's failure to satisfy. The disappointment reveals an unmet desire not just for truth, but for a version that feels plausible, human, and real. Maybe what we need isn't more information, but different narrators: storytellers who can bridge fact and feeling, reason and resonance.
Until then, we are stuck in the awkward afterlife of fake news, asking questions we don't want answered, citing sources we no longer trust, building machines we hope will rescue us from ourselves.
And still, we ask.

