
Latest news with #politicalneutrality

Neutrality Is A Myth: Generative AI And The Politics Of Everything

Forbes

a day ago



Artificial intelligence (AI), once discussed chiefly as a technological innovation, now sits at the center of a societal reckoning over truth, power, and the future of democratic discourse. President Donald Trump's impending executive order, reported in recent days as conditioning government contracts on whether firms' AI systems are 'politically neutral', captures how generative AI has become not just a commercial or technological concern but a flashpoint for ideological and epistemic struggle. The order comes in the wake of controversies involving Google's Gemini and Meta's chatbot, which generated images of racially diverse Nazis and Black depictions of America's Founding Fathers. These outputs were framed by their developers as counterweights to historical exclusion, yet were widely denounced as historical fabrications, viewed by critics as examples of 'woke' technology supplanting accuracy with ideology.

AI, Bridges, and the Politics of Design

The anxiety surrounding AI deepened when Elon Musk's Grok chatbot spiraled into an antisemitic meltdown, producing hateful screeds and referring to itself as 'MechaHitler' before Musk's company intervened. The episode demonstrated how generative systems, even when tightly supervised, can produce destabilizing and harmful content, not merely reflecting the biases of their creators but amplifying extremes unpredictably. Such incidents destabilize public trust in AI systems and, by extension, the institutions deploying them. These dynamics underscore a broader truth articulated by Langdon Winner in his decades-old seminal essay, 'Do Artifacts Have Politics?' Winner contended that technologies are never neutral; they embody the social values, choices, and power structures of those who design them.
His most enduring illustration was Robert Moses's low-hanging parkway bridges on Long Island, allegedly built to prevent buses, and therefore lower-income passengers, from accessing public parks. Critics at the time dismissed Winner's argument as over-deterministic and accused him of reading intent where circumstantial evidence sufficed. Yet whether or not Moses's motives were as deliberate as Winner alleged, the broader point has endured: infrastructure, from bridges to algorithms, channels social outcomes. Generative AI, often marketed as a neutral informational tool, is in reality a deeply value-laden system. Its training datasets, inclusionary adjustments, and 'safety filters' reflect countless normative decisions: about whose histories matter, what harms to mitigate, and which risks are acceptable.

The Algorithmic Newsfeed and AI Persuasion

The power of such systems is magnified by shifts in how Americans consume information. Most now rely primarily on digital platforms, such as social media feeds, streaming video, and algorithmically curated aggregators, for news. Television and traditional news sites remain significant, but algorithmic feeds have eclipsed them. These digital ecosystems privilege engagement over deliberation, elevating sensational or tribal content over balanced reporting. When generative AI begins writing headlines, summarizing events, and curating feeds, it becomes another layer of mediation, one whose authority derives from fluency and speed, not necessarily accuracy. Recent empirical research suggests this influence is far from benign. A University of Zurich study found that generative AI can meaningfully sway online deliberations, with AI-authored posts shifting sentiment in forums like Reddit even when participants were unaware of their origin.
This dynamic threatens deliberative democracy by eroding what is referred to as 'public reason': the ideal of discourse grounded in rational argumentation and mutual recognition rather than manipulation. When AI-generated content becomes indistinguishable from authentic human contribution, the public sphere risks devolving into what philosopher Harry Frankfurt described as a marketplace of 'bullshit,' where the concern is neither truth nor falsehood but the sheer pursuit of persuasion and virality.

AI, Memory, and Manufactured Truths

The dangers are not confined to subtle persuasion. A June 2025 Nature study demonstrated that large language models systematically hallucinate or skew statistical information, particularly when questions require nuanced reasoning. A separate MIT investigation confirmed that even debiased models perpetuate stereotypical associations, subtly reinforcing societal hierarchies. UNESCO has warned that generative AI threatens Holocaust memory by enabling doctored or fabricated historical materials to circulate as fact. And reporting by The New York Times has detailed how AI-driven bots, microtargeted ads, and deepfakes are already reshaping electoral landscapes, creating an environment where voters cannot easily discern human-authored narratives from synthetic ones.
Consensus, AI, and the Weaponization of Knowledge

These technological developments intersect with a cultural trajectory that I described several years ago as the 'death of the second opinion,' as public and digital discourse increasingly favors frictionless consensus over contested deliberation. Platforms reward virality, not complexity; generative AI, with its capacity to produce seamless, confident prose, reinforces this tendency by smoothing over ambiguities and suppressing dissenting voices. The space for pluralism, the messy, contradictory engagement that sustains democratic culture, is contracting. Even legacy broadcasters, which once offered starkly divergent perspectives, reflect this homogenization. News networks, despite their ideological differences, now tailor much of their content for algorithmic optimization: short-form videos, emotionally charged headlines, and personality-driven narratives designed to thrive on social feeds. AI-driven tools, which draft summaries and even produce full story packages, exacerbate this shift by standardizing the cadence and texture of news, eroding the distinctiveness of editorial voices.

Simultaneously, institutions once regarded as neutral have become sites of contestation. In 2024, a U.S. prosecutor reportedly threatened legal action against Wikipedia over alleged partisan bias, raising alarms about state intrusion into crowd-sourced knowledge. Around the same time, a coordinated campaign on X, branded 'WikiBias2024,' accused Wikipedia of systemic ideological slant. These conflicts reflect a broader epistemic insecurity: as AI, social media, and legacy institutions all mediate public understanding, every node in the information ecosystem becomes suspect, politicized, and weaponized.

AI and the Mirage of Neutrality

President Trump's proposed executive order must be understood within this fraught landscape. According to early reports, the initiative will require AI vendors seeking federal contracts to undergo 'neutrality audits,' produce 'certifications of political impartiality,' and submit to recurring oversight. While these measures echo prior federal interventions into private technology, such as the Justice Department's demands that Apple unlock the San Bernardino shooter's iPhone, the implications here are arguably broader. Whereas Apple's dispute centered on specific criminal evidence, the neutrality mandate would deputize federal agencies as arbiters of political balance in a dynamic and interpretive domain. The risk is not merely bureaucratic overreach but the entrenchment of a preferred ideological baseline under the guise of balance. Any audit mechanism, after all, must be designed according to someone's conception of neutrality, and thus risks ossifying bias while purporting to erase it.

The impulse to demand neutrality, while understandable, may itself be symptomatic of what Freud described in Civilization and Its Discontents as the longing for an 'oceanic feeling': a sensation of boundless connection and security, often tied to religious or existential comfort. In the context of AI, many seem to hope for a similarly oceanic anchor: a technology that can transcend human divisions and deliver a singular, stabilizing truth. Yet such expectations are illusory. Generative AI is not a conduit to universal reality; it is a mirror, refracting the biases, aspirations, and conflicts of its human architects. Recognizing this does not mean resigning ourselves to epistemic chaos.
It means abandoning the myth of neutrality and designing governance around transparency, contestability, and pluralism. AI systems should disclose their data provenance, flag when diversity or safety adjustments influence outputs, and remain auditable by independent bodies for factual and normative integrity. More importantly, they should be structured to preserve friction: surfacing dissenting framings, offering uncurated outputs alongside polished summaries, and ensuring that a 'second opinion' remains visible in digital spaces. Democracy cannot survive on curated consensus or algorithmic fluency alone. It cannot endure if truth itself becomes a casualty of convenience, reduced to whichever narrative is most seamless or viral. The stakes are not abstract: as UNESCO has warned, when the integrity of pivotal histories is compromised, the very notion of shared truth—and the moral lessons it imparts—begins to erode. Democracy does not thrive on sanitized agreement but on tension: the clash of perspectives, the contest over competing narratives, and the collective pursuit of facts, however uncomfortable. As generative AI becomes the primary lens through which most people access knowledge—often distilled to prompts like, 'Grok, did this really happen? I don't think it did, but explain the controversy around this issue using only sources in a specific language'—the challenge is not whether these systems can feign neutrality. It is whether we can design them to actively safeguard truth, ensuring that pluralism, contestation, and the arduous work of deliberation remain immovable foundations for both history and democracy.

Army sergeant investigated for forcing soldiers to do pushups under MAGA banner

Daily Mail

4 days ago



An Army drill sergeant is under investigation after a video appeared to show him forcing soldiers to do pushups under a MAGA flag. Staff Sgt. Thomas Mitchell is accused of breaking rules about political neutrality in the Army. Mitchell allegedly posted a now-deleted video showing a group of training soldiers doing pushups and burpees under a MAGA banner on a base in Georgia. The flag read 'This is Ultra MAGA Country' in the video, which was uploaded on Friday before it was removed. A second video was then reportedly re-uploaded with the caption, 'Cry about it.'

The video, posted to the now-deleted TikTok account @11chuckduece, prompted an investigation into the sergeant. The demonstration reportedly violates multiple military regulations regarding political activity in uniform on federal property.

'The US Army is an apolitical organization,' Jennifer Gunn, a service spokesperson, said in a statement. 'Displaying partisan political materials in government facilities, including training areas, is prohibited under Army regulation. We will investigate this matter and address it in accordance with established policies to ensure compliance with standards of conduct and to maintain an environment free from political influence.'

Mitchell serves as an infantry drill sergeant with B Company, 2-19th Infantry Battalion, 198th Infantry Training Brigade, at Fort Benning, Georgia. His current status remains unclear. Garrison Public Affairs Director Joe Cole told Law & Crime that the investigation into the video would 'take some time.'

The display of political flags or memorabilia inside federal buildings is prohibited under Defense Department regulations designed to preserve the military's role as a nonpartisan institution. The rules also dictate that troops may not use their positions of authority to politically influence subordinates. Daily Mail reached out to the US Army and Sgt. Mitchell for comment.
The incident comes a month after Trump made a speech during the celebration of the Army's 250th birthday. Troops in the crowd behind the president at Fort Bragg were reportedly carefully selected for the televised event based on their political views and physical appearance. According to internal 82nd Airborne Division communications, soldiers were sent messages including 'No fat soldiers.' Another memo said that 'if soldiers have political views that are in opposition to the current administration and they don't want to be in the audience, then they need to speak with their leadership and get swapped out.'

The end result was a predominantly white, male crowd who booed as Trump hit out at California Gov. Gavin Newsom and Los Angeles Mayor Karen Bass over the fiery protests against Immigration and Customs Enforcement operations and vowed to 'liberate' the city. The troops were also seen booing former President Joe Biden and the press, and roared with laughter at Trump's remarks berating his predecessor.

Such actions appear to violate longstanding Department of Defense protocol, with even the Army's recently published field manual touting the importance of a politically neutral force. 'Being nonpartisan means not favoring any specific political party or group,' it says, according to NBC News. 'Nonpartisanship assures the public that our Army will always serve the Constitution and our people loyally and responsively.' It goes on to note that troops can participate in political functions, so long as they are not in uniform. 'As a private citizen, you are encouraged to participate in our democratic process, but as a soldier you must be mindful of how your actions may affect the reputation and perceived trustworthiness of our Army as an institution,' the field guide says.
At least one 82nd Airborne noncommissioned officer now says he does not see how the troops' reactions on Tuesday could be seen as anything other than 'expressing a political view while in uniform.' He even suggested that none of the soldiers who were booing Newsom and Bass 'even knew the mayor's name or could identify them in a lineup.' Department of Defense officials, though, have denied that the soldiers were in violation of its rules. 'Believe me, no one needs to be encouraged to boo the media,' Sean Parnell, a Pentagon spokesman, said. 'Look no further than this query, which is nothing more than a disgraceful attempt to ruin the lives of young soldiers.' Even if the soldiers did violate Defense Department rules, multiple Army officials said they likely would not be held accountable because they were goaded by the commander-in-chief.

White House Wants Bias-Free AI for Government Work

Yahoo

5 days ago



The White House is preparing an order to ensure that AI tools used for government work stay politically neutral. Officials worry that models trained on internet data can drift into liberal or conservative slants, so the order would set a clear standard. At the center of the plan is AI czar David Sacks, who has pointed to embarrassing moments such as Google's Gemini painting a Black George Washington or racially diverse Nazis. OpenAI, Anthropic, Google (NASDAQ:GOOG) and Elon Musk's xAI fear the order could pick industry winners and spark free-speech fights. The order lands just as the Pentagon is handing out nearly $200 million in AI contracts. Tying neutrality to federal deals could shift who wins big and shape how the industry builds its next generation of tools. It also comes bundled with moves to boost chip exports and speed data center approvals. As Washington juggles bias concerns and tech rivalry with China, investors will be watching every step. This article first appeared on GuruFocus.

Trump targets 'woke' AI in diversity crackdown

Telegraph

5 days ago



Donald Trump is preparing to launch a crackdown on 'woke' artificial intelligence (AI) chatbots as Republicans wage war on perceived Left-wing bias in Silicon Valley. The White House is preparing an executive order as soon as next week that would ban tech companies from government contracts if their AI is not 'politically neutral'. The decree, first reported by the Wall Street Journal, comes after a series of blunders by technology giants as they have sought to fine-tune their AI tools to avoid prejudice and offence.

Last year, Google's Gemini chatbot prompted an outcry after it generated pictures of racially diverse Nazis and other historically inaccurate images, such as black US founding fathers. Google was forced to pause the tool's image generation to contain the problem. A chatbot from Facebook owner Meta generated similar historically inaccurate 'woke' images. The errors came about as part of efforts by the companies to instil diversity into their tools. AI safety experts have long warned that AI products risk amplifying the biases of their creators. These problems have caught the attention of David Sacks and Sriram Krishnan, Mr Trump's AI advisers, the Wall Street Journal said.

Fight against liberal bias

Republican politicians have for years railed against Silicon Valley's alleged liberal bias and accused companies of unfairly penalising conservatives with censorship and one-sided fact-checking. AI chatbots represent a fresh target for concern. Last year, Elon Musk, whose xAI has developed the aggressively 'anti-woke' Grok chatbot, said: 'A lot of the AIs that are being trained in the San Francisco Bay Area, they take on the philosophy of people around them. So you have a woke, nihilistic, in my opinion, philosophy that is being built into these AIs.' Mr Musk has even criticised his own Grok chatbot for being too liberally biased. Last month, he responded to a user on X who had claimed Grok was 'manipulated by Leftist indoctrination', pledging to fix the bot.
