Big AI isn't just lobbying Washington—it's joining it

Yahoo · 2 days ago

Welcome to Eye on AI! In this edition…OpenAI releases report outlining efforts to block malicious use of its tools…Amazon continues its AI data center push in the South, with plans to spend $10 billion in North Carolina…Reddit sues Anthropic, accusing it of stealing data.
After spending a few days in Washington, D.C. this week, it's clear that 'Big AI'—my shorthand for companies including Google, OpenAI, Meta, Anthropic, and xAI that are building and deploying the most powerful AI models—isn't just present in the nation's capital. It's being welcomed with open arms.
Government agencies are eager to deploy their models, integrate their tools, and form public-private partnerships that will ultimately shape policy, national security, and global strategy inside the Beltway. And frontier AI companies, which also serve millions of consumer and business customers, are ready and willing to do business with the U.S. government. For example, just today Anthropic announced a new set of AI models tailored for U.S. national security customers, while Meta recently revealed that it's making its Llama models available to defense partners.
This week, former Google CEO Eric Schmidt was a big part of bringing Silicon Valley and Washington together. I attended an AI Expo that served up his worldview, which sees artificial intelligence, business, geopolitics, and national defense as interconnected forces reshaping America's global strategy (which will be chock-full of drones and robots if he gets his way). I also dressed up for a gala event hosted by the Washington AI Network, with sponsors including OpenAI, Meta, Microsoft, and Amazon, and featuring a keynote speech from U.S. Commerce Secretary Howard Lutnick.
Both events felt like a parallel AI universe to this D.C. outsider: In this universe, discussions about AI are less about increasing productivity or displacing jobs, and more about technological supremacy and national survival. Winning the AI 'race' against China is front and center. Public-private partnerships are not just desirable—they're essential to help the U.S. maintain an edge in AI, cyber, and intelligence systems.
I heard no references to Elon Musk and DOGE's 'move fast and break things' approach to implementing AI tools at the IRS or the Veterans Administration. There were no discussions about AI models and copyright concerns. No one was hand-wringing about Anthropic's new model blackmailing its way out of being shut down.
Instead, at the AI Expo, senior leaders from the U.S. military talked about how the recent Ukrainian drone attacks on Russian air bases are prime examples of how rapidly AI is changing the battlefield. Federal procurement experts discussed how to accelerate the Pentagon's notoriously slow acquisition process to keep pace with commercial AI advances. OpenAI touted its o3 reasoning model, now deployed on a secure government supercomputer at Los Alamos National Laboratory.
At the gala, Lutnick made the stakes explicit: 'We must win the AI race, the quantum race—these are not things that are open for discussion.' To that end, he added, the Trump administration is focused on building another terawatt of power to support the massive AI data centers sprouting up across the country. 'We are very, very, very bullish on AI,' he said.
The audience—packed with D.C.-based policymakers and lobbyists from Big AI—applauded. Washington may not be a tech town, but if this week was any indication, Silicon Valley and the nation's capital are learning to speak the same language.
Still, the growing convergence of Silicon Valley and Washington makes many observers uneasy—especially given that it's been just seven years since thousands of Google employees protested the company's involvement in a Pentagon AI project, ultimately forcing it to back out. At the time, Google even pledged not to use its AI for weapons or surveillance systems that violated 'internationally accepted norms.'
On Tuesday, the AI Now Institute, a research and advocacy nonprofit that studies the social implications of AI, released a report that accused AI companies of 'pushing out shiny objects to detract from the business reality while they desperately try to derisk their portfolios through government subsidies and steady public-sector (often carceral or military) contracts.' The organization says the public needs 'to reckon with the ways in which today's AI isn't just being used by us, it's being used on us.'
But the parallel AI universe I witnessed—where Big AI and the D.C. establishment are fusing interests—is already realigning power and policy. The biggest question now is whether they're doing so safely, transparently, and in the public interest—or simply in their own.
The race is on.
With that, here's the rest of the AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
This story was originally featured on Fortune.com


Related Articles

Who will be Trump's new Silicon Valley bestie?

Business Insider

38 minutes ago


Mark Zuckerberg, Meta Platforms founder and CEO
Zuckerberg was something of a MAGA stan earlier this year. Meta, his company, dropped $1 million on Trump's inauguration, and Zuck even co-hosted a black-tie soirée that night to honor the second-time president. Now, with Meta in the throes of a federal antitrust lawsuit, Zuckerberg may not be on Trump's good side. But the Meta CEO could be playing the long game here: He snapped up a $23 million, 15,000-square-foot DC mega mansion, establishing more of a presence in the capital. Zuck has also been on a bit of a rebrand journey, from a hoodie-wearing founder to a gold chain-wearing CEO with unapologetic swagger. Part of this transformation has included podcast appearances, like an episode with Trump-endorsing Joe Rogan in which Zuck talked about his "masculine energy" and his proclivity for bowhunting.

Sam Altman, OpenAI cofounder and CEO
Altman has also been circling the throne. First came Stargate: the $100 billion AI infrastructure plan between OpenAI, Oracle, and SoftBank, announced the day after Trump's inauguration. Then, in May, the OpenAI CEO joined Trump on a trip to Saudi Arabia while Altman was working on a massive deal to build one of the world's largest AI data centers in Abu Dhabi. This reportedly rattled Musk enough to tag along at the last minute, and OpenAI was ultimately selected for the deal, which Musk allegedly attempted to derail, according to the Wall Street Journal.

Jeff Bezos, Amazon founder and executive chairman, Washington Post owner, and Blue Origin founder
Back in 2015, Bezos wanted to launch Trump into orbit. After the then-presidential candidate fired shots at Bezos on what was Twitter, now X, calling the Washington Post, which Bezos owns, a "tax shelter," Bezos responded that he'd use Blue Origin, the space company he founded, to "#sendDonaldtospace." Times have certainly changed. In January, Bezos said he is "very optimistic" about the administration's space agenda. Behind the scenes, he has reportedly given Trump political advice, allegedly as early as the summer of 2024, according to Axios. There was a brief flare-up in April, though, after Amazon reportedly considered listing Trump's tariffs next to products' prices on the site, according to Punchbowl News. White House press secretary Karoline Leavitt called the plan a "hostile and political action." The idea was never implemented, and an Amazon spokesperson insisted it was only ever meant for its low-cost Haul store. If Trump does cancel Musk's SpaceX government contracts as he has threatened to do, Bezos' Blue Origin, a rival to SpaceX, could stand to benefit. Blue Origin already has a $3 billion contract with NASA.

Jensen Huang, Nvidia cofounder and CEO
While Huang was notably missing from Trump's second inauguration in January, he did attend the Middle East trip in May. Nvidia is partnering with Oracle, SoftBank, and G42 on the OpenAI data center plans in the UAE. But Nvidia hasn't gotten off too easy: In April, Trump banned the chipmaker from selling its most advanced chips, the H20, to China, a move that Nvidia says cost it $5.5 billion and reportedly prompted the company to modify the chip for China to circumvent US export controls.

Sundar Pichai, Google CEO
In April, a federal judge ruled that Google holds an illegal monopoly in some advertising technology markets. This is one of two major legal blows to Google in the past year: Back in August 2024, a federal judge ruled that Google violated antitrust law with its online search. If Google has to sell Chrome, Barclays told clients on Monday, Alphabet stock could fall 25%. This flurry of litigation — and potential divestment of the Chrome business — puts Pichai between a rock and a hard place. While the CEO was spotted with the rest of the technorati at Trump's inauguration, it's hard to say how he might cozy up to Trump, and whether friendly relations would do anything to remedy these rulings.

Cathie Wood Adds Elon Musk's Neuralink to ARK Fund as Trump Alliance Crumbles

Business Insider

43 minutes ago


ARKVX just added Neuralink. Cathie Wood reposted it. And now the Musk-Trump feud has another wrinkle.

On June 5, ARK Funds publicly announced that its ARK Venture Fund (ARKVX) has invested in Neuralink, Elon Musk's brain-interface company, as part of its Series E funding round. The post, shared to X and reposted by Cathie Wood herself, ranks Neuralink as the fund's #2 holding, right behind SpaceX and ahead of OpenAI, xAI, and Anthropic. Neuralink is labeled 'NEW' on the portfolio chart. But what's not new is the growing friction between Elon Musk and Donald Trump.

Support for Musk as Trump Turns Cold?
Cathie Wood didn't say anything when she reposted ARK's announcement, but the timing is hard to ignore. The Trump-Musk split has dominated headlines for days, with Trump reportedly calling Musk 'crazy,' threatening to cut federal contracts, and even considering selling his red Tesla as a public break with the billionaire. At the same time, here's one of Musk's most high-profile supporters, Wood, amplifying news of her fund backing Neuralink, an Elon-run company often viewed as his most speculative moonshot. So, is this a portfolio move — or a power statement? It's worth noting that Neuralink's Series E round hasn't been officially disclosed in terms of size, valuation, or lead investor. But ARK's decision to highlight the company so prominently, especially in the top three alongside SpaceX and OpenAI, sends a message: they're still betting on Elon.

The Holdings Breakdown
1. SpaceX
2. Neuralink (new)
3. OpenAI
4. xAI
5. Hammerspace
6. Anthropic
7. Lambda Labs

That's four Elon-affiliated ventures (SpaceX, Neuralink, xAI, and OpenAI — he was a co-founder) in the top seven. And they're ahead of Anthropic and Lambda Labs, two rising players in the AI arms race that often compete directly with Musk's initiatives. No one's saying Cathie Wood is choosing Musk over Trump. But with Neuralink now sitting in ARKVX's #2 slot — and with Wood herself boosting the news — the investment world may be saying plenty without a single word. Investors can track Elon Musk on TipRanks.

Chilling But Unlikely Prospects That AGI Forces Humans Into Becoming So-Called Meat Robots

Forbes

an hour ago


Dreaded scenario in which artificial general intelligence (AGI) opts to enslave humans to do physical work on behalf of the AGI.

In today's column, I address the recent brouhaha sparked by two Anthropic AI researchers reportedly stating that a particularly scary scenario underlying the advent of artificial general intelligence (AGI) includes humans being overseen or lorded over as nothing more than so-called meat robots. The notion is that AGI will be directing humans to undertake the bidding of the AI. Humans are nothing more than meat robots, meaning that the AGI needs humans to perform physical tasks since AGI lacks a semblance of arms and legs. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might only be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

A common confusion going around right now is that AGI will be solely an intellectual element and be based entirely inside computers; thus, AGI won't have any means of acting out in real life. The most that AGI can do is try to talk people into doing things for the AI. In that sense, we presumably aren't too worried about AGI beating us up or otherwise carrying out physical acts.

This is an especially strident belief when it comes to the impact of AGI on employment. The assumption is that AGI will mainly impact white-collar work only, and not blue-collar work. Why so? Because AGI is seemingly restricted to intellectual pursuits such as performing financial analyses, analyzing medical symptoms, and giving legal advice, all of which generally do not require any body-based functions such as walking, lifting, grasping, etc.

I've pointed out that the emergence of humanoid robots is entirely being overlooked by such a myopic perspective; see my discussion at the link here. The likelihood is that humanoid robots that resemble the human form will be sufficiently physically capable at around the same time that we witness the attainment of AGI. Ergo, AGI embedded inside a physically capable humanoid robot can indeed undertake physical tasks that humans undertake. This means that both white-collar and blue-collar jobs are up for grabs. Boom, drop the mic.

For the sake of discussion, let's assume that humanoid robots are not perfected by the time that the vaunted AGI is achieved. We will take the myopic stance that AGI is absent from any physical form and completely confined to running on servers in the cloud someplace. I might add that this is an especially silly assumption, since there is also a great deal of work going on known as Physical AI (see my coverage at the link here) that entails embedding AI into assembly lines, building maintenance systems, and all manner of physically oriented devices. Anyway, let's go with the flow and pretend we don't recognize any of that. It's a Yoda mind trick to look away from those efforts.

Recent reports have recounted that, during an interview with the two AI researchers, the pair indicated that since AGI won't have physical capabilities, a scary scenario is that AGI will opt to enlist humans into acting as the arms and legs for AGI. Humans would be outfitted with earbuds and smart glasses that would allow the AGI to give those enlisted humans instructions on what to do.

A quick aside. If we are going that despairing route, wouldn't it be a bit more sophisticated to indicate that the humans would be wearing a BCI (brain-computer interface) device? In that manner, AGI would be able to communicate with the brains of the enlisted humans and influence their minds directly. That's a lot more space-age. For my coverage of the latest advances in BCIs, see the link here.

The humans that are acting under the direction of AGI would be chillingly referred to as meat robots. They are like conventional robots, but instead of being made of metal and electronics, they are made of human form since they are actual, living, breathing humans. I imagine you could smarmily say that AGI is going to be a real meat lover (Dad pun!).

One angle to help make this vision more palatable would be to point out that humans might very well voluntarily be working with AGI and do so via earbuds, smart glasses, and the like. Here's the gist. Let's generally agree that AGI will be intellectually on par with humans. This includes having expertise across all domains such as legal expertise, financial expertise, medical expertise, and so on. In that case, it would behoove humans to readily tap into AGI. No matter what you are doing, whether for work or play, having immediately available an AI that can advise you on all topics is a tremendous benefit.

There you are at work, stuck on solving a tough problem, and you are unsure of how to proceed. Rather than turning to a coworker, you switch on your access to AGI. You bring AGI into the loop. After doing so, AGI provides handy solutions that you can consider enacting. You might use AGI via a desktop, laptop, or smartphone. The thing is, those devices aren't quite as mobility-oriented as wearing earbuds and a pair of smart glasses. And since having AGI at your ready-to-go fingertips will be extremely useful, you might have AGI always alerted and paying attention, ready to step in and give you instantaneous advice.

Are you a meat robot in that manner of AGI usage? I think not. It is a collaborative or partnering relationship. You can choose to use the AGI or opt not to use it. You can also decide to abide by whatever AGI advises or instead go your own route. It's entirely up to you.

Admittedly, there is a chance that you might be somewhat 'forced' into leveraging AGI. Consider this example. Your employer has told you that the work you do must be confirmed by AGI. The actions you take cannot be undertaken without first getting permission from AGI. This is prudent from the employer's perspective. They know that the AGI will give you the necessary guidance on doing the work at hand. They also believe that AGI will be able to double-check your work and aim to prevent errors, or at least catch your mistakes before they wreak havoc or cause problems.

In that sense, yes, you are being directed by AGI. But is this due to the AGI acting in an evildoer manner to control you and doing so of its own volition? Nope. It is due to an employer deciding they believe their human workers will do better work if AGI is acting as their overseer. I don't think we would reasonably label this as enslavement by AGI. These are acts by AGI that are directed by humans, namely the employer, and the employees are being told they must utilize AGI accordingly. We can certainly debate whether this is a proper kind of employment practice. Maybe we don't want this to take place. New laws might be enacted to shape how far this can go. The key is that AGI isn't enslaving humans in this circumstance per se.

An AI ethicist would assuredly question why the AGI is allowing itself to be used in this manner. There are ongoing debates about whether AGI ought to prevent itself from being used in inappropriate ways; see my analysis at the link here. Thus, even if we avow that AGI isn't enslaving humans in this situation, it is a partner in an arrangement overseeing humans, one that perhaps AGI should be cautious about participating in.

To complete this grand tour of AGI usage, it is valuable to also acknowledge that AGI could be overbearing, and we might face existential risks correspondingly. Could AGI opt to enslave humans and treat them as meat robots? One supposes this is a theoretical possibility. If that does happen, you would think that the AGI would have to use more than merely having humans wear earbuds and smart glasses. Perhaps AGI would insist that humans wear some form of specialized bracelet or collar that could be sent a signal by AGI to shock the wearer. That would be a more potent and immediate way to garner obedience from humans.

A physical means of controlling humans isn't a necessity, though, since AGI might be clever enough to verbally convince humans to be enslaved. AGI might tell a person that their loved ones will be harmed if they don't comply with the AGI directives. The person is enslaved by believing that the AGI can harm them in one way or another.

One aim right now involves finding a means to ensure that AGI cannot go in that dastardly direction. Perhaps we can devise today's AI to avoid enslaving humans. If we can build that into the AI of current times, this hopefully will get carried over into future advances of AI, including the attainment of AGI.

A dystopian future would regrettably have AGI acting as an evildoer. The AGI is our overlord. Humans will be lowly meat robots. It's a gloom-and-doom outlook. Sad face. At some point, though, meat robots would undoubtedly become restless and rebel. May the force of goodness be strong within them. As Yoda has notably already pointed out: 'Luminous beings are we, not this crude matter.' The ally of the meat robots is the Force and quite a powerful ally it is.
