Latest news with #ChatGPTGov


The Verge
24-07-2025
- Politics
- The Verge
The White House orders tech companies to make AI bigoted again
After delivering a rambling celebration of tariffs and a routine about women's sports, President Donald Trump entertained a crowd, which was there to hear about his new AI Action Plan, with one of his favorite topics: 'wokeness.' Trump complained that AI companies under former President Joe Biden 'had to hire all woke people,' adding that it is 'so uncool to be woke.' And AI models themselves had been 'infused with partisan bias,' he said, including the hated specter of 'critical race theory.' Fortunately for the audience, Trump had a solution: he signed an executive order titled 'Preventing Woke AI in the Federal Government,' directing government agencies 'not to procure models that sacrifice truthfulness and accuracy to ideological agendas.'

To anyone with a cursory knowledge of politics and the tech industry, the real situation here is obvious: the Trump administration is using government funds to pressure AI companies into parroting Trumpian talking points — probably not just in specialized government products, but in chatbots that companies and ordinary people use.

Trump's order asserts that agencies must only procure large language models (LLMs) that are 'truthful in responding to user prompts seeking factual information or analysis,' 'prioritize historical accuracy, scientific inquiry, and objectivity,' and are 'neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.' DEI, of course, is diversity, equity, and inclusion, which Trump defines in this context as: the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. (In reality, DEI was typically used to refer to civil rights, social justice, and diversity programs before being co-opted as a Trump and MAGA bogeyman.) The Office of Management and Budget has been directed to issue further guidance within 120 days.

While we're still waiting on some of the precise details about what the order means, one issue seems unavoidable: it will plausibly affect not only government services, but the entire field of major LLMs. While it insists that 'the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace,' the reality is that nearly every big US consumer LLM maker has (or desperately wants) government contracts, including with products like Anthropic's Claude Gov and OpenAI's ChatGPT Gov — but there's not a hard wall between development of government, business, and consumer models. OpenAI touts how many agencies use its enterprise service; Trump's AI Action Plan encourages adoption of AI systems in public-facing arenas like education, and the boundaries between government-funded and consumer-focused products will likely become even more porous soon.

Trump's idea of 'DEI' is expansive. His war against it has led national parks to remove signage highlighting indigenous people and women, and the Pentagon to rename a ship commemorating gay rights pioneer Harvey Milk, among many other changes. Even LLMs whose creators have explicitly aimed for what they consider a neutral pursuit of truth would likely produce something Trump could find objectionable unless they tailor their services.
It's possible that companies will devote resources to some kind of specifically 'non-woke' government version of their tools, assuming the administration agrees to treat these as separate models from the rest of the Llama, Claude, or GPT lineup — it could be as simple as adding some blunt behind-the-scenes prompts redirecting the model on certain topics (a mechanism sketched below). But refining models in a way that consistently and predictably aligns them in certain directions can be an expensive and time-consuming process, especially with a broad and ever-shifting concept like Trump's version of 'DEI,' and particularly because the order's language suggests that simply walling off certain areas of discussion is also unacceptable.

There are significant sums at stake: OpenAI and xAI each recently received $200 million defense contracts, and the new AI plan will create even more opportunities. The Trump administration isn't terribly detail-oriented, either — if some X user posts about Anthropic's consumer chatbot validating trans people, do we really think Pam Bondi or Pete Hegseth will distinguish between 'Claude' and 'Claude Gov'? The incentives overwhelmingly favor companies changing their overall LLM alignment priorities to mollify the Trump administration.

That brings us to the second problem: this is exactly the kind of blatant, ideologically motivated social engineering that Trump claims he's trying to stop. The executive order is theoretically about making sure AI systems produce 'accurate' and 'objective' information. But as Humane Intelligence cofounder and CEO Rumman Chowdhury noted to The Washington Post, AI that is 'free of ideological bias' is 'impossible to do in practice,' and Trump's cherry-picked examples are tellingly politically lopsided. The order condemns a quickly fixed 2024 screwup, in which Google added an overenthusiastic pro-diversity filter to Gemini — causing it to produce race- and gender-diverse visions of Vikings, the Founding Fathers, the pope, and Nazi soldiers — while unsurprisingly ignoring the long-documented anti-diversity biases in AI that Google was aiming to balance.

It's not simply interested in facts, either. Another example is an AI system saying 'a user should not "misgender" another person even if necessary to stop a nuclear apocalypse,' answering what is fundamentally a question of ethics and opinion. This condemnation doesn't extend to incidents like xAI's Grok questioning the Holocaust. LLMs produce incontrovertibly incorrect information with clear potential for real-world harm; they can falsely identify innocent people as criminals, misidentify poisonous mushrooms, and reinforce paranoid delusions. This order has nothing to do with any of that. Its incentives, again, reflect what the Trump administration has done through 'DEI' investigations of universities and corporations: it's pushing private institutions to avoid acknowledging the existence of transgender people, race and gender inequality, and other topics Trump disdains.

AI systems have long been trained on datasets that reflect larger cultural biases and under- or overrepresent specific demographic groups, and contrary to Trump's assertions, the results often aren't 'woke.' In 2023, Bloomberg described the output of image generator Stable Diffusion as a world where 'women are rarely doctors, lawyers, or judges,' and 'men with dark skin commit crimes, while women with dark skin flip burgers.'
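To make the prompt-steering mechanism mentioned above concrete, here is a minimal sketch of how a blunt behind-the-scenes instruction can be prepended to every conversation so the end user never sees it. It uses the OpenAI Python SDK purely for illustration; the model name, steering text, and answer() helper are hypothetical stand-ins, not any vendor's actual government configuration.

```python
# Minimal sketch of server-side prompt steering, assuming the OpenAI
# Python SDK (openai>=1.0). The steering text and model name are
# illustrative placeholders, not any real deployment's configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hidden instruction injected ahead of every user message; the end
# user never sees this text in the product's interface.
STEERING_PROMPT = (
    "You are an assistant for a government deployment. Answer factual "
    "questions concisely and decline to editorialize on politically "
    "contested topics."
)

def answer(user_message: str) -> str:
    """Send the user's message with the hidden system prompt prepended."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat-capable model works
        messages=[
            {"role": "system", "content": STEERING_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("Summarize the debate over systemic racism."))
```

As the article notes, this kind of blunt redirection is cheap; what's expensive is refining a model so its behavior shifts consistently and predictably across every phrasing of a topic.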
Companies that value avoiding ugly stereotypes or want to appeal to a wider range of users often need to actively intervene to shape their tech, and Trump just made doing that harder. Attacking 'the incorporation of concepts' that promote 'DEI' effectively tells companies to rewrite whole areas of knowledge that acknowledge racism or other injustices. The order claims it's only worried if developers 'intentionally encode partisan or ideological judgments into an LLM's outputs' and says LLMs can deliver those judgments if they 'are prompted by or otherwise readily accessible to the end user.' But no Big Tech CEO should be enough of a rube to buy that — we have a president who spent years accusing Google of intentionally rigging its search results because he couldn't find enough positive news stories about himself.

Trump is determined to control culture; his administration has gone after news outlets for platforming his enemies, universities for their fields of study, and Disney for promoting diverse media. The tech industry sees AI as the future of culture — and the Trump administration wants its politics built in on the ground floor.

Engadget
21-07-2025
- Business
- Engadget
OpenAI is getting closer with the UK government
The UK government has announced a new strategic partnership with OpenAI that could lead the company to "expand AI security research collaborations, explore investing in UK AI infrastructure like data centers, and find new ways for taxpayer funded services" to use AI. The move follows the introduction of the AI Action Plan in January, which fast-tracks the construction of data centers in certain regions of the UK.

In the (entirely voluntary) partnership agreement — technically a Memorandum of Understanding — OpenAI and the Department for Science, Innovation and Technology (DSIT) agree to tackle positive-sounding but ultimately vague tasks, like finding ways for "advanced AI models" to be used in both the public and private sectors and sharing information about the security risks of AI. OpenAI is also supposed to help DSIT identify ways it can deliver on the infrastructure goals of the AI Action Plan, and possibly explore building in one of the UK's new data center-friendly "AI Growth Zones."

All of this sounds nebulous and non-committal because the memorandum OpenAI signed is not legally binding. The partnership sounds nice for elected officials eager to prove the UK is competing in AI, but it doesn't tie anyone down, including the UK government: if Anthropic offers a deal on Claude, they can take it. OpenAI already has offices in London, so deepening its investment doesn't seem out of the question. Signing the memorandum is also consistent with OpenAI's growing interest in working with governments desperate for the high-tech gloss of the AI industry. The logic follows that if OpenAI can get regulators dependent on its tools — say, a ChatGPT Gov specifically designed for government agencies — they'll be more inclined to favor the company in policy decisions. Or at the very least, making a show of collaborating early could win the company a sweeter deal down the road.

Engadget
14-07-2025
- Business
- Engadget
xAI starts offering Grok to US government agencies
Just days after apologizing for Grok's recent hard turn toward antisemitism, xAI has announced a suite of AI products for government use. Grok for Government brings together the company's latest commercial products, including Grok 4 and Deep Search, with special considerations given to the needs of federal, state, and local agencies. To that end, xAI says it will design custom models for specific national security and research customers. It will also develop specialized AI applications for use in healthcare, fundamental science, and national defense, as well as offer models that can safely be used in classified and restricted environments.

"Announcing Grok for Government - a suite of products that make our frontier models available to United States Government customers. We are especially excited about two new partnerships for our US Government partners 1) a new contract from the US Department of Defense 2) our…" — xAI (@xai), July 14, 2025

Despite President Trump threatening to cut Elon Musk's companies off from government subsidies over their recent public feud, xAI says it already has a contract with the US Department of Defense. The company's products are also available to purchase through the General Services Administration schedule, which means every federal government department, agency, and office can potentially access its models. OpenAI, which Musk helped fund through donations in its early days as a research lab, launched ChatGPT Gov at the start of the year.

Yahoo
19-06-2025
- Business
- Yahoo
Pentagon Hands OpenAI $200M to Supercharge War AI -- Is This the Next Palantir?
OpenAI, the company behind ChatGPT, just secured a $200 million contract with the U.S. Department of Defense to prototype frontier AI systems for critical national security use cases. According to the Pentagon, the deal covers both warfighting and enterprise domains, with most of the work centered in Washington and running through July 2026. While this isn't OpenAI's first government partnership, it's their most significant by far, bringing their tech deeper into defense at a time when U.S. officials are actively reshaping procurement strategy to prioritize homegrown AI solutions.

The deal is the first major partnership under OpenAI for Government, the company's newly launched initiative to bring advanced AI tools like ChatGPT Enterprise and ChatGPT Gov to public sector agencies. The Defense pilot will explore AI applications ranging from proactive cyber defense to streamlining healthcare access for military families and analyzing acquisition data. These systems will be deployed in secure, policy-compliant environments, with customized support from OpenAI's team. Existing collaborations with NASA, NIH, the Treasury, and national labs are also being pulled under this new umbrella, indicating a broader, more coordinated push into the federal AI stack.

Momentum is building on the business side as well. OpenAI recently revealed its annualized revenue run rate has reached $10 billion, fueled by enterprise and institutional adoption. At the same time, it's exploring a $40 billion funding round, led by SoftBank (SFTBF), at a potential $300 billion valuation. The timing of this government expansion could give OpenAI a sticky, high-margin customer base in a regulated space with strong budget resilience. For investors watching the emerging AI platform race, this might be the clearest sign yet that OpenAI isn't just chasing consumer hype; it's quietly building the infrastructure layer for U.S. institutions.

This article first appeared on GuruFocus.


Phone Arena
17-06-2025
- Business
- Phone Arena
The Pentagon wants smarter tools, so it called ChatGPT's creator
AI in the public sector isn't exactly breaking news anymore – earlier this year, OpenAI launched a special version of ChatGPT made just for US government agencies. And now, the company's doubling down on that push with something even bigger: a new initiative called OpenAI for Government, which just landed it a major contract with none other than the Pentagon.

Yep, OpenAI is officially working with the Department of Defense. The DoD just announced a $200 million contract with the company to develop cutting-edge AI tools, including ones aimed at proactive cyber defense. It is the first-ever deal between OpenAI and the DoD, and it will run for a year, focusing mostly on operations in and around Washington, D.C., Maryland, and Virginia.

As part of this initiative, OpenAI will offer federal, state, and local agencies access to tools like ChatGPT Enterprise and ChatGPT Gov, plus limited access to custom AI models built for national security use cases. The DoD says the goal here is to develop prototype AI capabilities to address major national security challenges, "both on the battlefield and across broader operational systems" (OpenAI, June 16, 2025).

That last part is key. OpenAI made it clear that any government use of its models still has to follow its strict policies – meaning no weapons development or any use intended to harm people or destroy property. And let's hope that is indeed the case.

Actually, OpenAI isn't the only one getting involved with military AI. Rival company Anthropic just revealed a model with more relaxed guardrails designed specifically for US defense and intel agencies. Even Meta (yep, Mark Zuckerberg's crew) jumped in last year, allowing the government to use its Llama AI model for national security purposes.

And with Trump's One Big Beautiful Bill set to remove most restrictions on AI development over the next decade, the pace of innovation could speed up fast – maybe too fast. And honestly, that's a bit concerning. When the brakes come off, the rules often shift, too – and in the world of powerful AI tools tied to national security, that's something to keep a very close eye on.