
Latest news with #ChatGPTGov

DOGE's "AI-first" strategy courts disaster

Axios

13-03-2025

  • Business
  • Axios


A rush to use artificial intelligence to root out government waste, as apparently planned by Elon Musk's DOGE operation, is likely to trigger chaotic outcomes and surprise disasters, AI experts tell Axios.

Why it matters: AI can help cut costs — but careless deployment risks harming people who need the government's help, amplifying inefficiencies, opening security holes and automating flawed decision-making.

Driving the news: Musk's allies at the so-called Department of Government Efficiency and within other government units are reportedly pursuing an "AI-first" strategy to integrate systems across federal agencies, assess contracts and recommend cuts, per the New York Times and other outlets.

What they're saying: No one at DOGE has publicly discussed using AI to replace government employees recently let go in mass layoffs, and the White House did not comment. The General Services Administration is working on a custom AI chatbot designed to boost productivity and analyze contract and procurement data, per Wired.

Catch up quick: Silicon Valley talent has been descending on Washington for nearly two decades in efforts to modernize government tech. DOGE itself has taken over the main organization left by one such previous effort, the Obama-era U.S. Digital Service. The AI industry is eager to demonstrate the value of its products to government. OpenAI launched ChatGPT Gov in January. Google spokesperson Jose Castaneda pointed Axios to successful state projects that have used the company's AI to save money and speed up unemployment claims. Microsoft foresees a vast market for its AI in government. Perplexity gives its pro version free for a year to anyone with a .gov email address. And Anthropic has been working with Palantir and Amazon Web Services to help U.S. intelligence and defense agencies more efficiently process data.
Between the lines: It's likely that DOGE is trying to create a tool that lets you feed in government documents and ask where to trim spending, says Meredith Broussard, NYU professor and author of "Artificial Unintelligence: How Computers Misunderstand the World." "And then the machine will give an answer because the machine always gives an answer," Broussard tells Axios. "But that answer is not necessarily correct."

Threat level: "Everyone is excited about the use of AI, and over time it will undoubtedly be used in government," Donald Moynihan, public policy professor at the University of Michigan and co-director of the Better Government Lab, tells Axios. "But broad-scale rollouts without extensive testing are a recipe for disaster," he adds. Moynihan says he's seen too many cases where algorithms fueled systemic errors and discrimination because the limitations of AI weren't properly understood beforehand. "At this point, I have little trust that DOGE will engage in the sort of careful development and testing of AI," Moynihan says.

All of the experts Axios spoke to had privacy and security concerns about unleashing AI on government documents, but some also said AI just isn't up to the task. If technology were capable of so quickly optimizing government, Broussard argues, then someone would have already done it. "I am not optimistic that any small team of people, no matter how talented, can go in and unravel or streamline and make sense of the big ball of mud that is government technology in a short period of time," Broussard says.

Zoom in: While AI can quickly analyze large amounts of data and identify patterns, it may arrive at wrong or even nonsensical conclusions. AI is known for inaccuracies — errors the industry has come to call "hallucinations" or "confabulations." Unless the technology's conclusions are carefully double-checked, they can lead to costly mistakes. "You can just imagine all the ways that that could go wrong," Broussard says.
Mike Lu, founder of Triller and Turrem, imagined a scenario where you asked AI to "optimize hockey." "AI would come back and tell you, 'Oh, you should make hockey boxing on ice,'" Lu told Axios. "The AI would see the highest level of engagement when the teams take off their gloves and start fighting," he said.

The other side: Used right, AI could help a "streamline government" project, Dmitry Shevelenko, chief business officer at Perplexity, tells Axios. He says that "lazy" or unsophisticated prompting could prove ineffective, but "if you describe exactly what you're looking for, it can be very efficient." Freezing funding to government programs shouldn't proceed based only on guidance from an AI tool, Shevelenko adds, and "autonomous AI" shouldn't be making spending decisions. "It's all about doing 80% of that initial work faster, where you get your target list, and then you still need humans very much to review it and check for accuracy," he says.

The Nature Conservancy's Embarrassing Capitulation to Trump

Yahoo

26-02-2025

  • Politics
  • Yahoo


The Trump administration's attempt to rename the Gulf of Mexico—and decision to kick the Associated Press out of the White House press corps for not updating its style guidelines accordingly—have been roundly rebuked by the Mexican government and free press outlets. On Monday, a federal judge appointed by Donald Trump declined to restore the AP's access to White House press events.

A somewhat surprising organization has been more compliant: the Nature Conservancy, the country's largest and wealthiest conservation nonprofit. In February, the group changed references to the Gulf of Mexico on its website to the Gulf of America. In the Gulf of America, the website now states, the group works on 'restoring healthy shorelines, protecting the Gulf's waters, and ensuring that diverse communities benefit from Gulf restoration.'

On social media, environmentalists quickly criticized the group for capitulating to a White House that has targeted climate science; frozen climate funding; purged the Environmental Protection Agency; and pledged to tear up regulations while seizing enormous amounts of power for itself.

While controversial, the decision isn't totally out of the blue for the Nature Conservancy. Shortly after Trump's election, TNC CEO Jennifer Morris released a statement indicating the group's intention to 'work with the Trump administration on a range of issues.' On inauguration day, January 20, TNC put out two press releases referencing federal policy. One said the group would 'continue to honor the Paris Agreement goals and help the U.S. do its part.' The other stated that the group 'remained committed to its values, including respect for people, cultures, communities and the world around us.' Neither statement criticized the Trump administration. Subsequent press releases haven't either, and have all generally avoided discussion of White House policy decisions.
Other large environmental nonprofits—including the Environmental Defense Fund and the Sierra Club—have repeatedly criticized the White House since Trump took office. As E&E News reported on Tuesday, other groups aren't adopting Trump's 'Gulf of America' title, either. The Nature Conservancy did not respond to requests for comment in time for publication.

Kevin Weil, chief product officer at OpenAI, sits on the Nature Conservancy's board and had allegedly planned to attend Trump's inauguration but doesn't appear to have shown up. OpenAI has fostered an especially close relationship with the administration. CEO Sam Altman joined Trump in the Oval Office to announce the $500 billion Stargate initiative to build AI infrastructure, including energy-intensive data centers. Late last month, the company also unveiled a new product called ChatGPT Gov, aimed at helping the U.S. government use AI to 'boost efficiency and productivity.' The company is reportedly in talks with 'several' unnamed federal agencies that want to use it.

The Nature Conservancy has faced criticism in the past for selling land to trustees, lending money to executives, ties to the timber industry, support for questionably 'sustainable' industrial logging operations, and the sale of dubious carbon offset schemes.

Its 'Gulf of America' decision comes as tech magnates—who've been important funders for climate and environmental nonprofits—cozy up to the new administration; some of the country's biggest tech companies have been eager to get the government's blessing in building out ever-larger fleets of energy-intensive data centers to power AI ventures, and to court lucrative defense contracts. Google co-founder Sergey Brin gave $243 million to climate-related causes last year through his family foundation, plus another $22 million through Catalyst4 Inc., his nonprofit advocacy group. Having previously criticized Trump, Brin attended his inauguration last month.
In 2021, TNC received a $100 million grant from Jeff Bezos's $10 billion Earth Fund; in 2025, Bezos attended Trump's inauguration, as well. Looking out for their bottom lines, plenty of Silicon Valley CEOs—including those who once branded themselves climate champions—seem to have made peace with the Trump administration's assault on everything from environmental regulations to clean energy subsidies and the Constitution. It's not clear, though, what green groups like The Nature Conservancy might have to gain from doing the same.

Amazon.com, Inc. (AMZN): AWS and Retail Margin Growth Drive Bullish Outlook

Yahoo

31-01-2025

  • Business
  • Yahoo


We recently published a list of AI news and ratings too important to miss. In this article, we are going to take a look at where Amazon.com, Inc. (NASDAQ:AMZN) stands against the other entries on that list.

Artificial intelligence is rapidly evolving and is shaping industries and global competition. Companies are developing advanced models, governments are exploring policies to regulate AI, and concerns over security and innovation keep growing. As AI becomes more integrated into everyday life, discussions around its impact, ethical considerations, and future developments remain at the forefront.

Kevin Weil, OpenAI's Chief Product Officer, discussed the company's AI advancements and its increasing engagement with the U.S. government. He highlighted the evolution of AI from simply answering queries to actively performing tasks in the real world and said that the current year is significant for agents. OpenAI recently introduced 'Operator,' an AI agent capable of web-based tasks such as ordering groceries and filling out forms, with more agent-driven tools set to launch soon.

Weil also addressed concerns over China's DeepSeek AI, acknowledging its technological improvements but emphasizing that AI should align with democratic values rather than authoritarian ones. He stated that AI competition extends beyond companies to a broader U.S.-China rivalry. OpenAI has accused DeepSeek of intellectual property theft, but Weil maintained that the company's focus remains on developing the most advanced models and ensuring U.S. leadership in AI.

Regarding infrastructure, Weil pointed to OpenAI's $500 billion Stargate initiative, which is aimed at expanding U.S. energy and semiconductor capabilities to support AI growth. He also highlighted new government partnerships, including ChatGPT Gov, a secure AI platform for federal agencies.
On the topic of open-source competition, Weil talked about OpenAI's dual approach, which includes staying ahead with increasingly advanced models and building practical AI products that improve efficiency. He also acknowledged SoftBank's role as a key partner in OpenAI's expansion, especially in AI infrastructure investments.

For this article, we selected AI stocks by reviewing news articles, stock analysis, and press releases. We listed the stocks in ascending order of their hedge fund sentiment, taken from Insider Monkey's database of 900 hedge funds. Why are we interested in the stocks that hedge funds pile into? The reason is simple: our research has shown that we can outperform the market by imitating the top stock picks of the best hedge funds. Our quarterly newsletter's strategy selects 14 small-cap and large-cap stocks every quarter and has returned 275% since May 2014, beating its benchmark by 150 percentage points.

[Image: A customer entering an internet retail store, illustrating the convenience of online shopping.]

Number of Hedge Fund Holders: 286

Amazon.com, Inc. (NASDAQ:AMZN) integrates AI into shopping, entertainment, and operations while driving innovation through investments, AWS partnerships, and Trainium. On January 30, Bernstein raised Amazon's price target from $265 to $280, maintaining an Outperform rating. The firm expects AWS to keep accelerating in Q4, while retail will benefit from a strong holiday season and record viewership of Thursday Night Football. Despite slight adjustments to AWS and retail revenue estimates due to FX headwinds, the firm anticipates operating leverage in both AWS and retail, with margins improving. Overall, the company is expected to meet or exceed the higher end of management's guidance for Q4.

Overall, AMZN ranks 1st on our list of AI news and ratings too important to miss.
While we acknowledge the potential of AMZN as an investment, our conviction lies in the belief that AI stocks hold greater promise for delivering higher returns and doing so within a shorter time frame. If you are looking for an AI stock that is more promising than AMZN but that trades at less than 5 times its earnings, check out our report about the cheapest AI stock. READ NEXT: 20 Best AI Stocks To Buy Now and Complete List of 59 AI Companies Under $2 Billion in Market Cap Disclosure: None. This article is originally published at Insider Monkey.

OpenAI Strikes Deal With US Government to Use Its AI for Nuclear Weapon Security

Yahoo

31-01-2025

  • Business
  • Yahoo


Remember the plot of the 1984 sci-fi blockbuster "The Terminator"? "There was a nuclear war," a character explains. "Defense network computers. New... powerful... hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination."

It seems like either the execs at OpenAI have never seen it or they're working overtime to make that premise a reality. Don't believe us? OpenAI has announced that the US National Laboratories will use its deeply flawed AI models to help with a "comprehensive program in nuclear security." As CNBC reports, up to 15,000 scientists working at the institutions will get access to OpenAI's latest o1 series of AI models — the ones that Chinese startup DeepSeek embarrassed on the world stage earlier this month.

According to OpenAI CEO Sam Altman, who announced the partnership at an event in Washington, DC, the tech will be "focused on reducing the risk of nuclear war and securing nuclear materials and weapons worldwide," as quoted by CNBC.

If any alarm bells are ringing by this point, you're not alone. We've seen plenty of instances of OpenAI's AI models leaking sensitive user data and hallucinating false claims with abandon.

OpenAI's been making a huge push into government. Earlier this week, the Sam Altman-led company released ChatGPT Gov, a platform specifically designed for US government use that focuses on security. But whether the company can deliver on some sky-high expectations — while also ensuring that its frequently lying AI chatbots won't leak the nuclear codes or trigger the next nuclear war — is anyone's guess.

The news comes after the Wall Street Journal reported that OpenAI is in early talks for a new round of funding that would value it at a gargantuan $340 billion, double its previous valuation last year.
Altman has also fully embraced President Donald Trump, gifting him $1 million for his inauguration and claiming that he had "really changed my perspective on him" after trashing him in years past. OpenAI also signed onto Trump's $500 billion AI infrastructure deal, dubbed Stargate, with the plan of contributing tens of billions of dollars within the next year.

Whether the company's o1 reasoning models will prove useful in any meaningful way to the researchers at the US National Laboratories remains to be seen. But given the widespread dismantling of regulations under the Trump administration, it also feels like an unbelievably precarious moment to be handing over any amount of control over nuclear weapons to a busted AI system.

More on OpenAI: OpenAI Asking for Tens of Billions in New Investment to "Fund Its Money-Losing Business Operations"

OpenAI to partner with US National Laboratories to boost national security

Yahoo

30-01-2025

  • Science
  • Yahoo


OpenAI announced Thursday it is partnering with the Los Alamos National Laboratory to install its newest artificial intelligence models on the lab's supercomputer for national security research.

'We care a lot about AI and science; we've talked about this for a very long time. This is what we think will be one of the most important impacts of AI long term. If we can use AI to help drive scientific progress, then I think it can drive huge forward progress for the country,' OpenAI CEO Sam Altman said in the announcement.

The partnership will allow U.S. National Laboratories to 'supercharge their scientific research' with OpenAI's latest reasoning models, OpenAI wrote in a statement to The Hill. OpenAI's latest o-series models will be installed on the lab's Venado supercomputer, which is powered by Nvidia superchips. The machine will be moved to a secure and classified network where researchers from Los Alamos, Lawrence Livermore and Sandia National Labs will be able to utilize it, according to the Los Alamos National Laboratory.

'As threats to the nation become more complex and more pressing, we need new approaches and advanced technologies to preserve America's security,' said Thom Mason, the laboratory director, in a statement. 'Artificial intelligence models from OpenAI will allow us to do this more successfully, while also advancing our scientific missions to solve some of the nation's most important challenges,' he added.

The models are expected to help lab staff identify ways to treat and prevent disease, detect natural and human-made cyber and biological threats, and better understand the U.S.'s natural resources. The partnership will also benefit the Laboratories' work in nuclear security, OpenAI said. OpenAI and Los Alamos also collaborated last summer to help assess the risks of bioweapon creation.

The news comes just days after OpenAI launched a new version of its popular ChatGPT model specifically tailored to government agencies and workers.
Under ChatGPT Gov, federal agencies will have access to OpenAI's top models even when dealing with sensitive information. Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
