
Latest news with #HOFFMAN

nVent Electric plc to Participate in the Wolfe Global Transportation and Industrials Conference

Business Wire

08-05-2025



LONDON--(BUSINESS WIRE)--nVent (NYSE: NVT), a global leader in electrical connection and protection solutions, today announced its participation in the Wolfe Global Transportation and Industrials Conference on Thursday, May 22, 2025. Chair and Chief Executive Officer Beth Wozniak will present at 2:35 p.m. ET. A webcast will be available on nVent's Investor Relations website.

About nVent

nVent is a leading global provider of electrical connection and protection solutions. We believe our inventive electrical solutions enable safer systems and ensure a more secure world. We design, manufacture, market, install and service high-performance products and solutions that connect and protect some of the world's most sensitive equipment, buildings and critical processes. We offer a comprehensive range of systems protection and electrical connection solutions across industry-leading brands that are recognized globally for quality, reliability and innovation. Our principal office is in London, and our management office in the United States is in Minneapolis. Our robust portfolio of leading electrical product brands dates back more than 100 years and includes nVent CADDY, ERICO, HOFFMAN, ILSCO, SCHROFF and TRACHTE. Learn more at nVent's website.

nVent, CADDY, ERICO, HOFFMAN, ILSCO, SCHROFF and TRACHTE are trademarks owned or licensed by nVent Services GmbH or its affiliates.

nVent Electric plc First Quarter 2025 Financial Results Available on Company's Website

Business Wire

02-05-2025



LONDON--(BUSINESS WIRE)--nVent Electric plc (NYSE: NVT) ('nVent'), a global leader in electrical connection and protection solutions, reported first quarter 2025 financial results today through an earnings release posted on the company's Investor Relations website. The earnings release will be furnished to the Securities and Exchange Commission on a Form 8-K. The company will also hold a conference call with analysts and investors at 9:00 a.m. ET.

Conference Call and Webcast Details

The call can be accessed via webcast or by dialing 1-833-630-1071 or 1-412-317-1832. Once available, a replay of the conference call will be accessible through May 16, 2025, by dialing 1-877-344-7529 or 1-412-317-0088, along with the access code 1400461.

About nVent

nVent is a leading global provider of electrical connection and protection solutions. We believe our inventive electrical solutions enable safer systems and ensure a more secure world. We design, manufacture, market, install and service high-performance products and solutions that connect and protect some of the world's most sensitive equipment, buildings and critical processes. We offer a comprehensive range of systems protection and electrical connection solutions across industry-leading brands that are recognized globally for quality, reliability and innovation. Our principal office is in London, and our management office in the United States is in Minneapolis. Our robust portfolio of leading electrical product brands dates back more than 100 years and includes nVent CADDY, ERICO, HOFFMAN, ILSCO, SCHROFF and TRACHTE. Learn more at nVent's website.

nVent, CADDY, ERICO, HOFFMAN, ILSCO, SCHROFF and TRACHTE are trademarks owned or licensed by nVent Services GmbH or its affiliates.

nVent Electric plc to Report First Quarter 2025 Financial Results on May 2

Business Wire

21-04-2025



LONDON--(BUSINESS WIRE)--nVent Electric plc (NYSE: NVT) ('nVent'), a global leader in electrical connection and protection solutions, will report first quarter 2025 financial results on Friday, May 2, 2025. The financial results will be posted on the company's website, and the company will issue an alert over a news wire when the earnings materials are publicly available, including a link to those documents. The company will also hold a conference call with analysts and investors at 9:00 a.m. ET. Related presentation materials will be posted prior to the conference call.

Conference Call and Webcast Details

The call can be accessed via webcast or by dialing 1-833-630-1071 or 1-412-317-1832. Once available, a replay of the conference call will be accessible through May 16, 2025, by dialing 1-877-344-7529 or 1-412-317-0088, along with the access code 1400461.

About nVent

nVent is a leading global provider of electrical connection and protection solutions. We believe our inventive electrical solutions enable safer systems and ensure a more secure world. We design, manufacture, market, install and service high-performance products and solutions that connect and protect some of the world's most sensitive equipment, buildings and critical processes. We offer a comprehensive range of systems protection and electrical connection solutions across industry-leading brands that are recognized globally for quality, reliability and innovation. Our principal office is in London, and our management office in the United States is in Minneapolis. Our robust portfolio of leading electrical product brands dates back more than 100 years and includes nVent CADDY, ERICO, HOFFMAN, ILSCO, SCHROFF and TRACHTE. Learn more at nVent's website.

nVent, CADDY, ERICO, HOFFMAN, ILSCO, SCHROFF and TRACHTE are trademarks owned or licensed by nVent Services GmbH or its affiliates.

Transcript: The Futurist: Superagency with Reid Hoffman

Washington Post

05-02-2025



MS. VENKATARAMAN: Hello. I'm Bina Venkataraman, and I'm a columnist for The Washington Post and its editor-at-large for Strategy and Innovation. It's my pleasure today to be in conversation with Reid Hoffman, the co-founder of LinkedIn and the author of a new book, "Superagency: What Could Possibly Go Right With Our AI Future?" Hi, Reid. It's great to see you, and welcome back.

MR. HOFFMAN: Thank you, and it's great to see you as well.

MS. VENKATARAMAN: So I want to start. I've been reading your book, and it's been really exciting to read and fascinating. And I have so many questions. I don't know if we'll get to all of them. But the first one really starts with how you framed the book and even the title of the book. So "Superagency" to me implied, sort of at a superficial level, that we'd be talking about what the buzz is in Silicon Valley and all around the world, really, with respect to AI, which is that AI agents are on the rise, that we're in an era of agentic AI, where we can set goals for AI, and systems will autonomously fulfill those goals for us. But the agency and superagency that you're talking about in your book is actually human agency, and you recenter the human. And you sort of draw a line from your work with LinkedIn, expanding the ability for human beings to create professional identities online, into a vision you have for what AI could do for human agency. So start by telling us more about that. Like, how is that going to work?

MR. HOFFMAN: Well, one of the things--and as you know from looking at the book--we covered a bunch of the kind of history of how we as human beings have encountered these kinds of massive technological leaps forward. And in each of these cases, what happens is we basically--you know, we essentially kind of view them as a possible loss of agency, both human and societal. Like, the dialogue we have around AI, which is frequently very worried and everything else, is very much like the dialogue we had around the printing press. But on the other side of these technological evolutions--printing press, automobile, you know, kind of electricity, et cetera--we have an enormous increase in human agency; these things give us various forms of superpowers. And the thesis is that AI will do the same. It will be amplification intelligence. It will be the kind of thing that gives us a set of superpowers. And part of the superagency part of this is not just--like, for example, take a car. Like, I get a superpower if I can, you know, travel a longer distance and be able to get places. But because other people also get that superpower--like a doctor can come do a home visit for, you know, a kid, a grandparent, you know, other kinds of things--that means that I also get agency from that. And that increase of human agency is actually, in fact, one of the things we should all start focusing on. We should all start thinking about, you know, how do we participate in this technology? How do we shape it, both as technologists but also as society? And this is one of the things about, you know--as you know, the first chapter is, you know, with the launch of ChatGPT, humanity enters the chat. And it's as opposed to having a kind of sense of, like, fear or concern. That's fine, to have concern and skepticism. What I want to try to help people do is become AI curious. Like, what is the way that I could get a superpower, and that we could get these superpowers that elevate human agency?
And "agentic," as you saw, was a deliberate pun between this kind of agentic revolution and also why it is enhancing human agency.

MS. VENKATARAMAN: So I could see a lot of people agreeing with sort of your diagnosis of past technologies, that eventually we end up in a world where humans, on balance, perhaps gain greater agency. But I could see people having disagreements about the trajectory, how we get there. So, in your view, is it inexorable--sort of, we are inevitably, through a natural evolution of technology, going to get to a place where humans benefit--or is it a time for vigilance and sort of the social contract to be renewed and carefully thought of with respect to how the technology unfolds so that we get that version of the future?

MR. HOFFMAN: Well, so three things. First is, I think that these technology transformations are always painful. So you go, printing press. We can't have the modern society without it. We can't have the scientific method. We can't have medicine. We can't have education. We can't have a middle class. And so everyone goes, yeah, printing press, clearly good. Yet the discourse around the printing press, when it came out, was very similar to the discourse today: degradation of human abilities, memory no longer mattering, we're going to spread misinformation within society, et cetera. And we as human beings adapt to this very poorly. So with the printing press, we had nearly a century of religious war as a function of it. So we tend to encounter these disruptions badly. So I don't think it's, like, inevitable, like, tomorrow it's all, you know, sunshine and flowers and all the rest. I think there's a lot of transition difficulty. Second is, I think that if we get through these transitions in kind of the right ways, I do think that we end up with, you know, kind of naturally an increase in human agency. Like, we don't have to go, "Oh my god, it's going to go off the cliff unless we're doing stuff." But I think that there's not a reason not to steer--as you were saying, vigilantly--to steer better, to try to learn from our historical efforts, because I call this the "cognitive industrial revolution," and I do so with a direct parallel. And obviously, the industrial revolution is also instrumental to kind of the broad, you know--kind of like why we have so many people who are alive today, you know, kind of the middle class, you know, the elevation of productivity and all the rest. But the transition was enormously difficult and painful, and we want to navigate these much more gracefully, much more with kind of humanity. Doesn't mean it won't be hard, doesn't mean there won't be pain, but that's where I would apply the vigilance: how are we navigating to make the transition period good, and also to quicker and more powerfully recognize--kind of, call it, you know, "humanist outcomes"--the superagency, the evolution of how AI can make our lives so much better.

MS. VENKATARAMAN: This framework you're offering of what could go right, what could possibly go right, is pretty distinctive, and it strikes me in this moment. A lot of people can imagine and observe what could go wrong. There's a lot of dread and despair, whether people are looking at geopolitics, the climate, the economy. And what you're offering is a sort of way of looking at things where you paint a vision of the future and you try to drive towards a different, more positive future.
I know about a week ago, you announced a new startup in the discovery--drug discovery space, to use AI to apply to medicine. So help us do that in the sphere of medicine. What--if everything goes right, if things go right with how we use AI for medicine, what's the vision that you hope that you're driving towards now?

MR. HOFFMAN: So I'll start with my company, and then I'll go more broad for everyone. So part of the thing that I realized when I was talking to this, you know, amazing cancer researcher--you know, Siddhartha Mukherjee, who has, you know, helped create a lot of different cancer drugs, you know, some approved, some in Phase 3 trials. And we were talking about, like, how does AI give superpowers, and how might this apply more broadly than just kind of agents who can help you with kind of cognitive work, translation, you know, kind of analysis, research? You know, OpenAI just released deep research. And how can it help with more than all of that? And the answer was, well, there's these other areas where if we suddenly made, you know, 10x, 100x capability increases--like, for example, figuring out what are the possible, you know, kind of drugs to solve cancer. And there's all kinds of cancers. So it's not just one cancer pill. There's, like, you know, how does, you know, triple negative breast cancer work? How does leukemia work? And they have similar kinds of dysfunctions, but there's all the kind of different sorts of cells and different challenges. And, say, well, how do we both understand it? How do we get possible, you know, kind of molecules that can be the right kind that, you know, get rid of the bad cancer but keep the healthy cells, which is one of the major problems in all this? And how do we accelerate this entire scientific process? And I was like, well, here's the kind of things that AI can bring into this. And he's like, well, here's the things we understand from the best of science. So let's bring the best of kind of this AI revolution with the best of science and massively accelerate our research and understanding, our possible compounds, our possible evaluation of the compounds--and evaluation not just for might it work, but also might it work within the human ecosystem. And, you know, this is just, I think, one of many kinds of areas where we could suddenly get--you know, cancer is the great killer, all ages, you know, everyone around the world, all societies--something where we could make a massive difference for the quality and longevity of human life, and this is just one area where applying AI intelligently could make a very big difference. Now, the other one, to be a little bit, perhaps more, you know, kind of "everybody," is, you know, today, with today's technology, there's no invention needed. You know, Manas, our AI drug discovery company, has a lot of invention, but, like, no invention is needed here: you can have a medical assistant that runs on every smartphone, you know, 24/7, that is better than your average good GP. That doesn't mean putting GPs out of business. Like, GPs are all overloaded. There's all kinds of things they could do. But imagine, you know, you're at home, you know, on a Tuesday at, you know, 11 p.m., and your child has an issue, your parent has an issue, your partner has an issue, your friend has an issue. And you need to know, you know, if you have access to an emergency room, should you go there, you know, what the level of urgency should be, what you might be able to do. That's buildable today.
And again, kind of like, once that's there, that gives us all a form of superagency, and this is, I think, among the things that is, like, literally line of sight. It's just a question of how we build it, how we navigate the regulatory system, how we navigate the liability system. But that's kind of like something that could be there for everyone--and obviously, you know, even in the U.S., we have a lot of uninsured people, and it's around the world--that can run, you know, for, you know, like, simply, you know, a small number of dollars per hour. You know, that kind of thing is the kind of thing that is kind of what is the more human future that I'm hoping for us to get to.

MS. VENKATARAMAN: I for one am eager to see how that evolves too. You mentioned--excuse me. You mentioned the deep reasoning models, the reasoning models of large language models that have been released recently, which is one of the major developments in AI of late--and, of course, reasoning being a sort of partially accurate term for what these models are doing, which is taking a few extra steps and acting more like scholars in how they answer our questions, as opposed to just delivering predictive text. Of course, the most recent excitement and panic has been about the Chinese version of one of these reasoning models, DeepSeek. What do you make of--you know, obviously, the stock market crash in response to this. It caused a lot of reaction across Silicon Valley, technologists and politicians with respect to the so-called arms race on AI. What do you make of what the Chinese have done?

MR. HOFFMAN: Well, so a lot of the story around this caused a bunch of--the story was radically incomplete. Like, one, they almost certainly used a large model, which required large-scale compute and everything else, in order to make it--whether it was GPT-4, Llama, some combination, et cetera. Second is, when I've consulted with experts from multiple different firms, they also likely had access to a large compute cluster, and so the statement that this was all done, you know, just kind of super cheaply was actually, in fact--you know, it required all of these large expenses in order to make it happen. Now, that being said, I don't want to undercut the value of kind of some of the areas of achievement. Like, I think, you know, necessity is the mother of invention. Part of what I think is they figured out some efficiencies in operation, and by being open source, you know, all of the U.S. and other AI labs have learned those things too. I think that was great invention, bringing it into kind of the general industry practice. And I think it really illustrates something that I and others have been saying for the last couple of years, which is this cognitive industrial revolution is actually, in fact, you know, kind of a--there's a competition going on for the development of these technologies. And part of the thing that I think the Chinese effort demonstrates is that other people can also be in this--Europeans, others--because if you're working through the distillation from large models like ChatGPT and leveraging off open source, like, you know, Meta's Llama, I think those are things where you can actually then go build things that are unique and, you know, kind of additional kinds of value on top of these open models. Now, part of this, I think, you have to be careful about--and this is frequently discussed as open source versus open weights.
And to be slightly jargonist: for example, open source is a description of, kind of, "I have the code and all of the process that works." Open weights is just the, as it were, artifact--like, here's the computer program in terms of how it runs. Open source is, generally speaking, always very good because, you know, you can check security. You can iterate on it. Open weights is kind of just giving out the program to everyone. In some sense, this is very good: academics, startups, entrepreneurs. On the other hand, you'd be careful about, like, rogue states, terrorists, criminals. So you have to be careful about that kind of thing with these, and so we have to navigate this in kind of intelligent ways. But I think that, you know, part of what you'll see is, with a variety of these, you know, small models being broadly available, that's part of how we get to the cognitive industrial revolution where, you know, part of what I would say is, you know, kind of within some number of years, every professional--you, Bina, me, et cetera--when we do our work, we will have at least one, probably multiple, copilots helping us with the stuff we're doing, and I think that that's part of what the revolution of all these small models will be. And I think that the DeepSeek side showed that, you know, other players, not just the U.S., will be present in developing and deploying them. So we are in a, you know, kind of like a build-the-future, you know, kind of--there's a competition afoot. And it's one of the things that I think is so important for us to be dedicated on as Americans, because part of the thing that I hope is that AI is not just amplification intelligence but also American intelligence, building in some of our values about, you know, kind of individual rights and order of society and these kinds of things. And I think that's, you know, all been--you know, kind of had a spotlight put on it by DeepSeek.

MS. VENKATARAMAN: So Eric Schmidt wrote an op-ed recently in The Washington Post where he called into question the more closed system used by OpenAI and others of the American-backed companies. Of course, you're a major backer of OpenAI and of the GPT systems. And he raised the question of whether that more open-source, open-weight model used by DeepSeek, by the Chinese, is a better model for innovation. What's your take on that?

MR. HOFFMAN: Well, so I think fundamentally you want both. I do think that part of what will be important about continuing to build these very large-scale models that, you know, OpenAI, Microsoft, Google--Amazon's made announcements--are all building is because those actually, in fact, can help build really capable small models and, among other things, solve, you know, much higher-order, more challenging problems. On the other hand, that doesn't mean that open source isn't a very good thing. I was on the board of Mozilla for 11 years. At LinkedIn, we open-sourced a large number of projects, some of which have become public companies in terms of how they operate. So it's not really closed or open as the dynamic; the question is what the different blend is. And when you have these kinds of open projects, they can be built upon and amplified upon.
Now, generally speaking, when it's open source, the modifications can kind of go back to the common, you know, repository and create, you know, kind of a triumph of the commons, of the digital commons--one of the things we were kind of talking about in "Superagency" and other contexts--but also, you know, kind of, like, make it more secure; it's more investigatable. Open weights is a little bit more tricky. It means that the technology is more dispersed, but the work that's being done by a startup doesn't necessarily get re-contributed. You know, academics by nature will re-contribute in various ways, but these open-weight things are like, you know, software application programs. They're not as easily understood. And so I think it's a good thing to be having open programs in the midst of what you're doing, but I also think that the proprietary-scale things are also very good. And I think it's one of the areas where our, you know, American hyperscaler companies have some strong edge. I think it's one of the things that was intelligent about the CHIPS Act in terms of, you know, kind of--it doesn't prevent the Chinese from doing things, but it helps, you know, kind of give a little bit of a lead--you know, maintain a bit of a lead advantage for our companies. And I think that will be important in the, you know, kind of contest of, you know, whose shape of the cognitive industrial revolution might help set the standards, the kind of technology that will be the platform basis around the world.

MS. VENKATARAMAN: So we've been talking about some of the exciting benefits of AI, but a couple of weeks ago, I was talking with Demis Hassabis, who is the founder of Google DeepMind, formerly DeepMind, one of the AI pioneers out there who's been calling for regulation and actually actively participating in a lot of global fora to explore what regulation might be needed for, in particular, the harm of bad actors using AI in various ways and this sort of eventuality of superintelligence--intelligence that exceeds human intelligence--and sort of what guardrails should be put on that. What's your view--knowing full well that you make an argument for not over-regulating AI in a way that would keep us from realizing these benefits--what, in your view, is the ideal regulation to prevent some of the harmful effects of AI?

MR. HOFFMAN: Well, I think you want three parts. So one part is, what are the clear things that could be really bad that we must prevent in advance? And some of those things are the things I was just getting at earlier, which is, we don't want to empower rogue states, terrorists, criminals. Like, if terrorists are looking for various ways to massively damage societies, what are the ways we make sure that they're not overly empowered with AI agents and the kind of--the superpower copilots that can come from this? And I think you want to say, okay, we want, you know, things like from the Executive Order--which is red teams, you know, kind of safety plans, analyses--things that the government can then ask about. And then the next thing I think you want to do is say, okay, what are areas that we should have, as it were, research around if we worry about things like superintelligence? Namely, well, how could we monitor for when it's on a massive self-improvement curve, where it's kind of reprogramming itself or other kinds of things, and make sure that these kinds of safety measures are traded well across this?
I mean, I think the U.S., the U.K., the French, you know, and other safety institutes are kind of working on this and kind of making sure that the leaders building these--certainly within the Western sphere--are kind of trading notes and saying, well, what happens if this--you know, kind of, how do we maintain it aligned with human interest, and how do we maintain, you know, kind of the right kinds of controls around it? And I think, again, it's lightweight and tough. And then the next thing is, what are we monitoring--as opposed to imagining everything that could possibly go wrong--what are we monitoring to see what would be early signs of things that need some correction, and then to be kind of doing the dashboards and to be monitoring that? And that's kind of, you know, like having the companies in dialogue with, you know, governments, with journalists, with other folks, saying, hey, what are those kinds of things? Are you paying attention to them? Are you doing, you know, safety alignment and training? Do you have safety groups? You know, what are the metrics that you're holding yourself accountable to in terms of how this operates? And so not just kind of saying, hey, we need to, like, have formal approval. I mean, to give you a sense of how this is already impeding things, I actually know of some companies that are shipping much worse-quality underlying AI products to Europe because Europe is saying you must undergo, you know, many months of testing before you release it. So they'll go, okay, we'll release it in the U.S., and by the way, the product works really fine. Like, literally, of the companies I know of, there's been zero complaints, and it's only been, you know, kind of quality product. But the worse-quality product is actually shipped to Europe because of this kind of just, like, oh, you must ask for permission before you launch anything. And that's the kind of thing that prevents us from getting, like, a medical assistant in everyone's pocket. And by the way, you know, there's a human cost to that. If you said, hey, today we can have a medical assistant that's kind of a high-quality GP in every pocket, you know, think about, you know, kind of being able to intercept, you know, possibly dangerous, very dangerous illnesses or injuries and being able to do something about it, being able to be much more cost-efficient in your health care system, to be able to answer these kinds of questions around, like, "Okay. You know, I'm really nervous. I don't know what to do, you know, how to do all that." And so I think that's one of the reasons why kind of the more lightweight regulation that, you know, we argue for in "Superagency" is the right thing to do. Now, as you know from reading the book--and we describe ourselves as "bloomers" rather than "zoomers"--you want to be in dialogue with the risks and stuff and in dialogue with regulation, not saying no regulation. You're just saying, you know, accelerate to the future, but navigate intelligently.

MS. VENKATARAMAN: So, Reid, I happened to notice that you weren't up on the dais behind President Trump on Inauguration Day. And obviously, some of your fellow leaders in the tech industry were. Of course, I'm being a little tongue in cheek. I know you're a major backer of the Democratic Party and were of Kamala Harris's presidential bid. How are you feeling right now about the country?

MR. HOFFMAN: Well, I mean, we're a couple of weeks in, and I tend to think that the right responsibility of every citizen, including myself, is, how do we help improve America as much as possible? And that's part of the reason why, you know, co-founding Manas, which is a company based here in New York, and, you know, kind of building the future is, I think, really important. If you abstract out--you know, there was obviously a lot of negative dialogue around the inauguration. But you say, well, should the U.S. president and the U.S. government be in dialogue with the tech industry? I think that's critical. I think that's really important. You know, obviously, there have been a bunch of things that I--you know, I'm dismayed by, whether it's the blanket January 6th pardons or, you know, kind of, you know, like, some of our close friends and allies like Canada and others, you know, kind of--you know, like, it's better to be in dialogue and collaboration there. But nevertheless, you know, I think that the important thing, you know, as citizens, is how do we essentially say, here's where we are; how do we essentially contribute to American society, to American citizens and American industry? And so that's what I've obviously been focusing most of my attention on.

MS. VENKATARAMAN: So I've been taking the call of your subtitle, "What Could Possibly Go Right with Our AI Future"--just this framework of what could possibly go right--and sitting with that over the last few days. And I'm wondering--you know, I've been thinking about when in our history as a country have we used that framework to drive progress? And, of course, the natural example that comes to mind is the moonshot, when John F. Kennedy said, "Let's put a man on the Moon," and then, within 10 years, that happened. What other historical examples or present examples do you invoke to show that that framework actually can be self-fulfilling--that if you imagine what can go right, you can actually make it happen?

MR. HOFFMAN: Well, that's ultimately all of the technological progress that we've gotten to, even through transitions. And, Bina, you may or may not have seen, we actually released a video kind of contrasting, you know, JFK's moonshot with Manas and what we're doing with cancer discovery, because we like that one very much.

MS. VENKATARAMAN: Okay. I hadn't seen it yet, so we'll check it out.

MR. HOFFMAN: Yes. And we wanted to kind of inspire with the, hey, this is the kind of future you're getting to. And, you know, this can be anything from, like, for example, you know, the electrification drive for cities and societies, because, you know, think about it--you know, like, nothing works without kind of the electrical grid. You've seen these things in, you know, building out, you know, train tracks and highways and, you know, kind of building cities for cars. I mean, all of this stuff is like, "Oh, I can see how this future would really work. Let's build towards that," because you don't get the future you want by just trying to eliminate all the ones you don't want. That's a very long list, and that's just a whole bunch of negatives.
You get it by building the one that can be more human, and that's part of the reason why I say, hey, not just is "Superagency" written for people who might be AI fearful, AI skeptical, AI uncertain, to become AI curious, but also for technologists to say, "Look, what people's worry is--whether it's job transformation or other things--is that they're worried about human agency, both themselves and within society." So take human agency as a design focus. And, you know, all of the technologies that we have essentially, you know, built out, you know, all the way back to, you know, agriculture and the first villages, you know, and the printing press and, you know, cars and planes--all of that has gotten to, hey, we're working towards that. There's something we could accomplish. And if we accomplish that, we give society and the individuals in it superpowers and then superagency. So I think history is replete with it. Now, the Moon project is an example of something that was, you know, kind of government-led. I think what many people don't realize is that these transformations are frequently led by industry--you know, the smartphone, et cetera. And that has many good attributes. It doesn't mean that we don't need many different kinds of voices and engagement, including some regulatory. But by, you know, kind of deploying to hundreds of millions of people, you actually, in fact, get a kind of very inclusive process. So, like, for example, we say, well, is AI going to differentially benefit the wealthy versus the poor? It's like, well, look at the, you know, the iPhone as a parallel. You know, your Uber or Lyft driver has the same iPhone that Tim Cook has, and so when you're building out at this mass, you have the kind of inclusion to that. And I think that's one of the things that we want to have happen, you know, broadly with AI and our, you know, cognitive industrial revolution.

MS. VENKATARAMAN: Reid, thank you so much. I'm told we're running out of time, though we certainly aren't running out of curiosity about these perspectives, and may your optimism become self-fulfilling for all of our sakes. Thank you so much. And thank you to everyone for joining us for this Washington Post Live session. If you'd like to subscribe, just visit The Washington Post's website. And thank you very much. Have a great day.

[End recorded session]
