5 questions for Sree Ramaswamy
With help from Anthony Adragna and Aaron Mak
Hello, and welcome to this week's installment of the Future in Five Questions. This week we interviewed Sree Ramaswamy, a former senior policy adviser to the Biden administration's Commerce Department, whose work included facilitating the CHIPS and Science Act. Ramaswamy is now the chief innovation officer for NobleReach, a recently launched nonprofit that works to set up public-private partnerships through programs focused on talent and innovation, including at universities. He spoke about the changes under the new administration as well as the importance of securing supply chains against adversaries, especially for critical technologies. An edited and condensed version of the conversation follows:
What's one underrated big idea?
I'm going to come at this from a national security standpoint. One of the things we have struggled with as a country is how to deal with the presence of adversarial inputs in our technology. That manifests in different ways. It manifests in people concerned about their chips coming from China. It manifests in people concerned about the fact that your printed circuit boards and the software that's flashed on them are done in Vietnam or in Malaysia by some third-party contractor, and we're like: Is there a back door here? Is somebody putting in a Trojan horse?
We worry about the capability of the stack as it becomes larger and larger. We worry about the fact that we may have blind spots, both in terms of where adversaries can gain capabilities but also where they can insert vulnerabilities.
What's a technology that you think is overhyped?
The last few years, we've seen various aspects of the government come up with a list of critical technologies. Before we had the CHIPS Act, there was this thing called the Endless Frontiers Act, which had a list of critical technologies.
I would say almost every single one of those technologies you could argue is overhyped. Take a look at those lists and ask yourself what technology is not on this list, and there's no answer to that question. Every single technology you can think of is on our list of the most critical technologies.
It's sort of like saying I have 100 priorities — then you don't have any priorities. What I would like to see is a shift of attention away from the technologies themselves, and to the problems that the technologies can solve.
What could the government be doing regarding technology that it isn't?
What the government has traditionally done well is focus on the supply side of tech. It builds infrastructure — the labs, the test beds, all of that stuff. And it creates incentives, as we've done with tax credits, subsidies and grant programs.
What it is struggling to do is figure out how it can help on the demand side.
The government can tell you it needs warships, like right now. It needed them like a week ago; it needs them over the next six months or a year. It's also good at telling you, in 15 years, this is how we think warfare is going to change.
What it struggles to tell you is the in-between, because the in-between is where the tech stuff comes in. So when you say that you are trying to prioritize technology, what you're doing is you're prioritizing stuff that is in laboratories today. They're in university labs, they're in federal labs. They're going through proof of concept. They're going through early-stage validation.
What that cohort needs in order to develop is to know what problem it has to solve in, like, six or seven years. It takes somewhere between five and eight years on average for some of these hard technologies to come to market. What you need is a demand signal sitting there saying, 'I don't need this warship now, but in seven years, I need my warships to have this capability.'
And that's the missing piece. If we could get our government to start articulating that sort of demand, that could go a long way in helping develop technologies, de-risking them, and you'll be signaling that there's a customer for these things, which means that a bunch of VC guys will start crowding, because that's what VCs care about. They care about, do you have a path to get a customer?
What book most shaped your conception of the future?
[Laughs] I've forgotten how to read — my attention span is now three-minute-long YouTube videos. (Note: He later said the book that shaped his concept of the future was 'The Long Game' by Rush Doshi.)
What has surprised you the most this year?
I think what has surprised me the most this year is how easily and quickly things that we thought could not be changed are changing. And you know, you can take that both in a positive spirit and a negative spirit.
When I was in the private sector, there were certain things that you feel are sort of off limits, both good and bad. There's a certain way of doing things, and if you stray beyond that, it's either illegal or it's immoral, or you're gonna get jeered by your peers.
I definitely felt that in the government as well. There are certain things — even with something like CHIPS, these big investment programs — there were still spoken and unspoken things that you could do, things that you could not do, and I ran up against many of them.
What I find surprising is how quickly many of those things are falling by the wayside. Changing the way federal agencies work, changing the way our allied relationships work, changing the way the trade regime works.
In a broad sense, it's good, because it tells us that this country is capable of moving quickly. It does show you that if we need to, we can move.
What I'm looking forward to, now that we've shown we can move in big ways, is whether we — including companies — can now add an end state to it and say, OK, we really need to be able to move in a big way to solve this problem: completely diversify our supply chains away from adversaries, completely have a clean AI tech stack in the next three years.
I left government thinking about our inability to move quickly. So I'm glad to see it — I'm not happy with all of it — but I'm glad to see we can.
Tech's heavy emissions footprint
Carbon emissions from the operations of the world's leading tech companies surged 150 percent between 2020 and 2023, according to a report from the United Nations' digital agency.
Compared with a 2020 baseline, operational emissions in 2023 grew 182 percent for Amazon, 155 percent for Microsoft, 145 percent for Meta and 138 percent for Alphabet.
This was all for 2023, the last year for which complete data is available. Demand for energy-intensive artificial intelligence and data centers has only surged since then.
Just 10 tech companies accounted for half of the industry's electricity demand in 2023, according to the report. Those are China Mobile, Amazon, Samsung Electronics, China Telecom, Alphabet, Microsoft, TSMC, China Unicom, SK Hynix and Meta.
Overall, however, the tech sector is a relatively small player in global emissions. The 166 companies covered in the report accounted for 0.8 percent of all global energy-related emissions in 2023, it concluded.
Anthropic opposes AI moratorium
Anthropic CEO Dario Amodei took what looked like a bold, independent stance on federal AI laws yesterday — but was it really so bold?
In a New York Times op-ed, Amodei came out against the 10-year moratorium on state AI laws that Congress is proposing. He argued the moratorium is 'far too blunt an instrument,' and instead recommended that Congress first pass a federal transparency law.
A tech CEO calling for federal regulation of his own industry? It's almost like 2023 again.
But several critics have pointed out that this wasn't quite such a disinterested stance. The federal law he's looking for would — in his proposal — pre-empt all those inconvenient state laws. 'If a federal transparency standard is adopted,' Amodei wrote, 'it could then supersede state laws, creating a unified national framework.'
Former OpenAI researcher Steven Adler critiqued the idea in an X post: 'Anthropic's CEO only says he wants regulation so he seems responsible. He knows there's no risk he'll actually get regulated.'
And there's an argument that the law wouldn't change much. As Amodei himself notes, major AI companies like Google and OpenAI already have self-imposed transparency requirements. So does Anthropic — the company recently disclosed that its model tried to blackmail a user in a test run.
DFD asked Anthropic about the criticisms. The company responded by clarifying that the transparency standard would mainly supersede state laws mitigating catastrophic AI risks, like cyberattacks.
Amodei cautions that companies may abandon their transparency measures as their models get more complex, so the federal law might be necessary. Even so, current state AI laws have more teeth and specificity than the federal transparency standard that Amodei is proposing. South Dakota imposes civil and criminal liabilities on election deepfakes. Tennessee law prevents AI from impersonating musicians. New Hampshire prohibits state agencies from using AI to surveil the public.
Alondra Nelson, a key architect of federal AI policy under President Joe Biden, wrote to DFD: '[A] federal requirement for industry to provide more information is a good foundation for states' laws to build upon, but it cannot replace them.'
Amodei frames his proposal as a compromise between the goals of states and the federal government. In such a bargain, the big winner could be an industry that is already used to sliding through those gaps.