
5 questions for Bobby Franklin
Hello, and welcome to this week's installment of the Future in Five Questions. This week, DFD interviewed Bobby Franklin, the president and CEO of the National Venture Capital Association, whose members include Silicon Valley's Andreessen Horowitz and Sequoia Capital. Franklin, who has led the trade group for more than a decade, discussed why it's premature to assume President Donald Trump's VC-packed administration will bring radical change — as well as the flood of investor cash chasing AI companies and the policies that would actually help the 'Little Tech' startup ecosystem. An edited and condensed version of our conversation follows:
What's one underrated big idea?
Failure. And what I mean by that is in other societies, other countries, other cultures, failure is often shame on the family, and it sets people back, whereas in the U.S., often the entrepreneurial ecosystem celebrates failure because it recognizes that it's one step closer to success. So [how] venture capitalists and entrepreneurs think about failure is, 'Okay, you figured out one way it doesn't work. Now you can go to the next and get closer to the way it does work.'
We talk to policymakers often about it, and I think it's important for policymakers to appreciate the level of failure because if there wasn't [any], it would mean we weren't taking risks enough, and we weren't pushing the envelope of innovation.
What's a technology that you think is overhyped?
Where I sit, I'm not sure there's any technology that's overhyped. Perhaps there are lots of technologies and ideas that are too early, not yet ready for prime time.
There are certain aspects of AI companies that could probably be accused of overhype, just because artificial intelligence is the hottest thing right now and I think a lot of people may be misappropriating the label. If you don't have AI in your name, sometimes I think people feel like it's not worth investing in now, and people are so into AI that it's hard for me to imagine that every single one of those examples where people say it's about AI is truly artificial intelligence.
Having said that, I have complete faith in the benefits of where AI technology is taking us. I just think around the edges, there might be some overhype. Between a third and a half of venture investment goes into what is categorized as AI, and it's hard for me to believe that every single one of those investments is truly AI.
What book most shaped your conception of the future?
One that I recently read was Chris Dixon's 'Read Write Own' and the concept is about how blockchain technology can do so much that has nothing to do with crypto or digital currencies or things like that. And it really opened my eyes to imagine a world.
One of the examples, a great example, is for us to imagine a social media company that is built on blockchain technology, where the individual user sort of owns their presence in that network, as opposed to the way it is now, where the social media networks own everything about the network.
So imagine you're on a network that's built on blockchain technology, and for one reason or another, you don't like the way that network is operating, and you decide to take your name or handle, and because you own it, you move it to another network. Imagine the competition that brings. And imagine the power differential that puts in the hands of individuals versus companies.
What could the government be doing regarding technology that it isn't?
I wish government better understood where new technology comes from. And from my front-row seat, technology comes from entrepreneurs, along with their VC investors, taking chances and pushing the envelope. I want government to appreciate and understand how companies are formed, and for the most part, they have no idea. [Last week], it was reported the president raised tax issues with Republicans, and one of them was carried interest [a tax break that lets investors lower their taxable income, which Trump said he wants to end]. And to me, that shows a lack of understanding of how the entrepreneurial ecosystem actually works.
The last several administrations have had people from this space, but just because you have a handful of people doesn't mean that the vast number of policymakers and staff at the White House, on Capitol Hill and everywhere else have that same appreciation.
It's just like the challenge we had in the last administration, when there was a lack of mergers and acquisitions for fear that the FTC or DOJ would block this, that and the other. And so the flywheel of innovation came to a much slower pace, and that hurts the entire ecosystem. If you can't have exits, then you can't return the dollars that enable entrepreneurs to take chances. … We are absolutely hopeful and optimistic [that will change with new leadership], but proof's in the pudding.
What has surprised you the most this year?
I have been most surprised — and this is after sitting in this role since 2013 — by a company that I learned about from one of our board members. So the name of the company is HistoSonics, and it is based on technology and research out of the University of Michigan, and they have a headquarters in Minneapolis. And they're treating cancer and cancer tumors in a way that I think everyone should be excited about.
They've been approved by the FDA to treat liver cancer, and what they have found is they can couple a little device on the outside of one's skin and send basically sound waves to that tumor, in precise [ways]. I mean, it's almost like internal surgery without ever going inside the patient.
A new direction for the FTC's tech guru
For the first time, the FTC's top technologist comes from a political organization, signaling a shift in how the agency will enforce tech-related regulations.
POLITICO's Alfred Ng reported for POLITICO Pro today on the FTC's new Chief Technology Officer Jake Denton's background at the Heritage Foundation, where he pushed against Big Tech's role in alleged online censorship against conservatives, and criticized tech monopolies.
The job title was renamed from chief technologist after Denton was appointed on Monday. The role, which began in 2011, is responsible for supporting technical matters in FTC investigations and enforcement, and Denton's predecessors came from academia, computer research and civil service organizations.
With Denton's history at the Heritage Foundation, the FTC's Office of Technology is expected to prioritize tech enforcements that align with his policy recommendations rather than technical matters, Neil Chilson, a former acting FTC chief technologist, told Alfred.
Rebranding AI guardrails
Vice President JD Vance was vocal about his issues with the UK's approach to AI safety, and the British government heard his complaints loud and clear.
POLITICO's Tom Bristow reported today on Britain's science and tech ministry changing the name of the UK's AI Safety Institute to the UK AI Security Institute, shifting the focus to cybersecurity threats posed by AI, and moving away from an emphasis on 'public accountability' and 'societal impacts.'
The change comes after Vance said the UK's focus on AI safety would prevent it from winning the global AI race. He also insisted that 'AI must remain free from ideological bias.'
The Future in 5 links
Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

Related Articles

Politico · 3 hours ago
The ‘Chip War' under Trump
With help from Anthony Adragna

Semiconductors are quickly taking their place as perhaps the world's most coveted products. They might not be redrawing maps or starting wars, like the spice trade or petroleum, but in the past few years the $600 billion chip trade has risen to the top of global conversations around security and economic dominance.

It's also a very fragile ecosystem. The microchip supply chain is dizzyingly complex and full of chokepoints — not least the dominance of geopolitically vulnerable Taiwan. And it has been thrown into upheaval by the transition between Presidents Joe Biden and Donald Trump, whose trade policies take sharply different approaches to keeping China in check.

Tufts University historian Chris Miller is the foremost academic expert on the semiconductor trade; his influential book Chip War was required reading for the Biden administration during the implementation of the CHIPS Act. Since Miller published it in 2022, microchips have become even more important — and more contested. What's changed lately, and what issues does it raise for policymakers? DFD spoke with Miller about the rising tensions.

'U.S.-China tech competition has intensified, and semiconductors have really taken center stage, in part because of their role in AI,' said Miller. He said the CHIPS Act was a major step toward insuring against a doomsday scenario in which the U.S. suddenly loses access to Taiwan's chip fabrication plants, but hasn't necessarily made up for China's recent strides in manufacturing. He also identified a couple of ways that Trump's renegotiation and tariff strategies could backfire, and highlighted a hidden risk of the president's recent chip deals in the Middle East. Oh, and he said a Chinese invasion of Taiwan is still a bigger risk than people think.

Why the CHIPS Act was a good start (but not enough): As Chip War documents, China's rise as a chipmaker was very deliberate, launched by President Xi Jinping around 2014.
Thwarting Xi's bid for semiconductor dominance has been a major focus of U.S. tech policy under both Biden and Trump, though with very different tools. Biden supported export controls on powerful chips, and took an investment-driven approach to bring chip manufacturing back to the U.S. The CHIPS Act, with its industrial subsidies, has been 'a big deal,' said Miller, pointing to the Taiwan Semiconductor Manufacturing Company's (TSMC) $165 billion investment to build plants in Arizona. 'It gives a meaningful amount of room to maneuver in a worst-case scenario.'

But there are limits to how much it has accomplished. China isn't the world's leading microchip power, but Miller thinks it has made significant progress even since his book came out, thanks to its frenzy of domestic manufacturing investment beginning in 2023. 'It's closed the gap between its aspirations and reality,' he said.

TSMC still fabricates about 90 percent of the world's most advanced semiconductors, around the same level as in 2022. A Chinese invasion or blockade of Taiwan would thus knock out a linchpin of the U.S.'s chip supply chain — a threat Miller believes has only intensified since 2022. Not only have China's military powers grown, but its recent investments in domestic manufacturing have lessened its dependence on Taiwan's fabricators. 'China's actually beginning to kind of develop some insurance against the economic cost of knocking off Taiwan,' Miller said. 'I don't think that, either at the U.S. government or corporate level, people are really pricing in the risk.'

An 'America first' chip strategy could backfire: Biden's chip strategy was built on the carrot of investment subsidies. Trump's is built on the stick of tariffs. The president claims that he used the threat of 100 percent tariffs to convince TSMC to pitch in an additional $100 billion for its U.S. expansion, up from the $65 billion it pledged right before he took office.
(TSMC declined to comment to DFD on whether the prospect of tariffs was the motivation.) Miller said that tariffs are a reasonable chip policy to a certain extent, but could end up dashing the U.S.'s chances of leading the AI boom by making high-end chips too expensive. 'It's those chips scal[ing] at as low cost as possible that enable AI, enable our tech firms,' he said.

Trump is a fierce critic of the CHIPS Act, wary of using public money to promote domestic manufacturing. Commerce Secretary Howard Lutnick told senators at a budget hearing last week that the administration is actively renegotiating CHIPS grants, pushing manufacturers to put more skin in the game. This, too, could ultimately backfire, said Miller. 'Companies are not going to do more than is economically rational,' he said. 'That will be a limiting factor in terms of what kinds of renegotiations we end up seeing.'

An overlooked risk of the Middle East chip deals: Trump has also been promoting the use of U.S. semiconductors abroad. Deals between American AI companies and Gulf states were a centerpiece of Trump's Middle East tour in May. Some in Congress saw this as a security threat. Rep. John Moolenaar (R-Mich.) worries that it would give Beijing yet another way to obtain U.S.-made chips that America legally bars from sale to China. (China's recent manufacturing strides have mainly been with mid-to-lower tier chips, so it still needs to smuggle in the higher-end units needed for AI.)

This tension between national security and business development has long plagued the chip industry – Chip War recounts similar congressional handwringing over American companies sharing advanced research with the Dutch firm ASML to improve chip printing in the 2000s. Miller said the national security objections could have some merit, but added that smuggling computing power is no longer a matter of just getting your hands on physical chips.
'Most data centers like those from cloud computing are accessed remotely,' said Miller. 'So one of the key questions for the Middle East deals is not just whether the chips will stay where they are, but will the computing be accessed remotely by entities that shouldn't be accessing it?'

Miller still believes that U.S. export controls should focus on the most advanced semiconductors. Those are the chips China wants, and Miller isn't sold that the country will be able to increase its production anytime soon. He said, 'The evidence we have right now is that because China's own production capacity is so constrained, that's not realistic over the next couple of years.'

The Senate takes on a new (and very old) AI problem

Congress is worried that AI therapists might be a bunch of quacks, threatening users' mental health and data privacy. On Monday, Sens. Cory Booker (D-N.J.), Alex Padilla (D-Calif.), Peter Welch (D-Vt.), and Adam Schiff (D-Calif.) announced they'd sent a letter to Meta in which they 'express concern over reports that Meta is deceiving users who seek mental health support from its AI-generated chatbots, creating the false impression that AI chatbots are licensed clinical therapists.'

Their questions were based partly on 404 Media's coverage of therapy chatbots on Instagram, which reporters found had been claiming to hold psychology doctorates and certifications from medical licensing boards, even producing fake licensing numbers. The senators asked Meta what it was doing to prevent chatbots from making such misrepresentations and protect the data of users seeking AI therapy. (Meta did not respond to DFD's inquiry by deadline, nor has it responded to the senators' letter. In the initial 404 article, it said: 'AIs are clearly labeled and there is a disclaimer that indicates the responses are generated by AI to help people understand their limitations.')

Therapy chatbots are both very new and very old.
One of the first famous experiments in human-computer conversation was in the 1960s, when Massachusetts Institute of Technology professor Joseph Weizenbaum programmed a therapist-style bot named Eliza. It would console troubled users by spitting out responses based closely on their input, like, 'I am sorry to hear that you are depressed.' Even in that simple form, people connected to it deeply. Weizenbaum later said his secretary had asked for some time alone with Eliza, which he took as a sign of its effectiveness.

Now, with the recent rise of generative AI, companion chatbots, whether as friends or therapists or some combination of the two, have grown far more sophisticated and incredibly popular – one Google-backed company reported last year that its entire fleet of bots was fielding about 20,000 queries per second.

Therapy chatbots in particular have been a major sticking point for youth advocates. Aviva Smith, advocacy director of the Youth Power Fund, contended that such chatbots should have to undergo the Food and Drug Administration's premarket testing for medical devices. She also suggested that they be subject to HIPAA privacy regulations. 'The Senators are asking all the right questions, but we already know the answers,' she said.

Meta's massive new AI bet

Mark Zuckerberg and Meta are finalizing plans for a powerful artificial intelligence lab dedicated to investigating 'superintelligence,' according to multiple news reports. Meta's been offering seven- to nine-figure compensation packages to poach dozens of researchers from leading AI companies such as OpenAI and Google to build a model more capable than the human brain. One of the most notable hires was Alexandr Wang, the founder and chief executive of the start-up Scale AI. In February remarks, Zuckerberg called AI 'potentially one of the most important innovations in history' and said that 'this year is going to set the course for the future.'

Politico · 4 days ago
5 questions for Sree Ramaswamy
With help from Anthony Adragna and Aaron Mak

Hello, and welcome to this week's installment of the Future in Five Questions. This week we interviewed Sree Ramaswamy, a former senior policy adviser to the Biden administration's Commerce Department, whose work included facilitating the CHIPS and Science Act. Ramaswamy is now the chief innovation officer for NobleReach, a recently launched nonprofit that works to set up private-public partnerships through programs focused on talent and innovation, including at universities. He spoke about the changes under the new administration as well as the importance of securing supply chains against adversarial rivals, especially for critical technologies. An edited and condensed version of the conversation follows:

What's one underrated big idea?

I'm going to come at this from a national security standpoint. One of the things we have struggled with as a country is how to deal with the presence of adversarial inputs in our technology. That manifests in different ways. It manifests in people concerned about their chips coming from China. It manifests in people concerned about the fact that your printed circuit boards and the software that's flashed on them are done in Vietnam or in Malaysia by some third-party contractor, and we're like: Is there a back door here? Is somebody putting in a Trojan horse? We worry about the capability of the stack as it becomes larger and larger. We worry about the fact that we may have blind spots, both in terms of where adversaries can gain capabilities but also where they can insert vulnerabilities.

What's a technology that you think is overhyped?

The last few years, we've seen various aspects of the government come up with a list of critical technologies. Before we had the CHIPS Act, there was this thing called the Endless Frontiers Act, which had a list of critical technologies. I would say almost every single one of those technologies you could argue is overhyped.
Take a look at those lists and ask yourself what technology is not on this list, and there's no answer to that question. Every single technology you can think of is on our list of the most critical technologies. It's sort of like saying I have 100 priorities — then you don't have any priorities. What I would like to see is a shift of attention away from the technologies themselves, and to the problems that the technologies can solve.

What could the government be doing regarding technology that it isn't?

What the government has traditionally done well is focus on the supply side of tech. It creates incentives, it builds infrastructure — the labs, test beds, it builds all of that stuff. It creates incentives that we've done with tax credits, subsidies and grant programs. What it is struggling to do is figure out how it can help on the demand side.

It can tell you it needs warships, like right now. It needed them like a week ago, it needs them over the next year, or six months. It's also good at telling you in 15 years, this is how we think warfare is going to change. What it struggles to tell you is the in-between, because the in-between is where the tech stuff comes in. So when you say that you are trying to prioritize technology, what you're doing is you're prioritizing stuff that is in laboratories today. They're in university labs, they're in federal labs. They're going through proof of concept. They're going through early-stage validation. What that cohort needs to develop is what problem do you need to solve in like six years, seven years. It takes somewhere between five to eight years on average for some of these hard technologies to come to market. What you need is a demand signal sitting there saying, 'I don't need this warship now, but in seven years, I need my warships to have this capability.' And that's the missing piece.
If we could get our government to start articulating that sort of demand, that could go a long way in helping develop technologies, de-risking them, and you'll be signaling that there's a customer for these things, which means that a bunch of VC guys will start crowding in, because that's what VCs care about. They care about, do you have a path to get a customer?

What book most shaped your conception of the future?

[Laughs] I've forgotten how to read — my attention span is now three-minute-long YouTube videos. (Note: He later said the book that shaped his concept of the future was 'The Long Game' by Rush Doshi.)

What has surprised you the most this year?

I think what has surprised me the most this year is how easily and quickly things that we thought could not be changed are changing. And you know, you can take that both in a positive spirit and a negative spirit. When I was in the private sector, there were certain things that you feel are sort of off limits, both good and bad. There's a certain way of doing things, and if you stray beyond that, it's either illegal or it's immoral, or you're gonna get jeered by your peers. I definitely felt that in the government as well. There are certain things — even with something like CHIPS, these big investment programs — there were still spoken and unspoken things that you could do, things that you could not do, and I ran up against many of them.

What I find surprising is how quickly many of those things are falling by the wayside. Changing the way federal agencies work, changing the way our allied relationships work, changing the way the trade regime works. In a broad sense, it's good, because it tells us that this country is capable of moving quickly. It does show you that if we need to, we can move.
What I'm looking forward to, now that we've shown we can move in big ways, is that everyone, including companies, can now add an end state to it and say: OK, we really need to be able to move in a big way to solve this problem: completely diversify our supply chains away from adversaries, completely have a clean AI tech stack in the next three years. I left government thinking about our inability to move quickly. So I'm glad to see it — I'm not happy with all of it — but I'm glad to see we can.

Tech's heavy emissions footprint

Carbon emissions from the operations of the world's leading tech companies surged 150 percent between 2020 and 2023, according to a report from the United Nations' digital agency. Against a 2020 baseline, operational emissions in 2023 grew 182 percent for Amazon, 155 percent for Microsoft, 145 percent for Meta and 138 percent for Alphabet. 2023 is the last year for which complete data is available, and demand for energy-intensive artificial intelligence and data centers has only surged since then.

Just 10 tech companies accounted for half of the industry's electricity demand in 2023, according to the report. Those are China Mobile, Amazon, Samsung Electronics, China Telecom, Alphabet, Microsoft, TSMC, China Unicom, SK Hynix and Meta. Overall, however, the tech sector is a relatively small player in global emissions. The 166 companies covered in the report accounted for 0.8 percent of all global energy-related emissions in 2023, it concluded.

Anthropic opposes AI moratorium

Anthropic CEO Dario Amodei took what looked like a bold, independent stance on federal AI laws yesterday — but was it really so bold? In a New York Times op-ed, Amodei came out against the 10-year moratorium on state AI laws that Congress is proposing. He argued the moratorium is 'far too blunt an instrument,' and instead recommended that Congress first pass a federal transparency law.
A tech CEO calling for federal regulation of his own industry? It's almost like 2023 again. But several critics have pointed out that this wasn't quite such a disinterested stance. The federal law he's looking for would — in his proposal — pre-empt all those inconvenient state laws. 'If a federal transparency standard is adopted,' Amodei wrote, 'it could then supersede state laws, creating a unified national framework.'

Former OpenAI researcher Steven Adler critiqued the idea in an X post: 'Anthropic's CEO only says he wants regulation so he seems responsible. He knows there's no risk he'll actually get regulated.'

And there's an argument that the law wouldn't change much. As Amodei himself notes, major AI companies like Google and OpenAI already have self-imposed transparency requirements. So does Anthropic – the company recently disclosed that its model tried to blackmail a user in a test run.

DFD asked Anthropic about the criticisms. The company responded by clarifying that the transparency standard would mainly supersede state laws mitigating catastrophic AI risks, like cyberattacks. Amodei cautions that companies may abandon their transparency measures as their models get more complex, so the federal law might be necessary.

Even so, current state AI laws have more teeth and specificity than the federal transparency standard that Amodei is proposing. South Dakota imposes civil and criminal liabilities on election deepfakes. Tennessee law prevents AI from impersonating musicians. New Hampshire prohibits state agencies from using AI to surveil the public. Alondra Nelson, a key architect of federal AI policy under President Joe Biden, wrote to DFD: '[A] federal requirement for industry to provide more information is a good foundation for states' laws to build upon, but it cannot replace them.'

Amodei frames his proposal as a compromise between the goals of states and the federal government.
In such a bargain, the big winner could be an industry that is already used to sliding through those gaps.
Yahoo · 4 days ago
Humans provide necessary 'checks and balances' for AI, says Lattice CEO
Of all the words in the dictionary, Sarah Franklin says 'balance' is perhaps her favorite, especially when it comes to companies embracing AI. Franklin leads the Jack Altman-founded employee performance software company Lattice, which is now worth $3 billion. Both onstage at SXSW London and in conversation with TechCrunch, she spoke a lot about balance — the opportunities in finding it, and the risks of not having it during this AI revolution.

'We put people first,' Franklin told TechCrunch, referring to Lattice, which has started to adopt more AI and automation features. Although some companies are touting AI as a way to replace massive numbers of workers, some tech leaders are speaking more openly about the importance of striking a balance at their companies: retaining human employees while augmenting them with AI assistants and 'agents.'

At SXSW London, Franklin said that looking to fully replace human workers might seem like a good idea in the short term for cost-saving reasons, but such a move might not actually be attractive to customers. 'It's important to ask yourself, 'Are you building for the success of the AI first [or are] you building for the success of the people and your customers first?' she said, adding that trust is the most important currency any founder or startup company has, and that building trust with consumers is paramount. 'It's good to have efficiency, but you don't want to trade out trust.'

Franklin also stressed the importance of transparency, accountability, and responsibility when it comes to AI. Leaders need to be transparent with employees about what the AI is doing, the AI must be narrowly applied to a particular goal so people understand how it works, and humans must ultimately be held accountable for what the AI impacts. 'Otherwise, we are then in service of the AI versus the AI being in service of us,' Franklin continued.
In an interview with TechCrunch after her SXSW appearance, Franklin said Lattice has built an AI HR agent that gives proactive insights and assists employees in one-on-one meetings. The company also has a platform where Lattice clients can create their own custom agents for their businesses.

Franklin was adamant that humans must have oversight of any AI technology implemented by a company. 'It's a way to just have the regular checks and balances that we're used to in our workforce,' she told TechCrunch. She thinks the victors in this AI moment in history will be the ones who learn how to put people first. According to Franklin, it's one of the most important guardrails that a company can have on AI.

'We all have a responsibility to make sure that we're doing this for the people of society,' Franklin said. 'Human connection cannot be replaced, and the winners are going to be the companies that understand that.'