Latest news with #Superconvergence


Forbes
21-05-2025
- Politics
Do No Harm, Build AI Safely
When it comes to being safe with AI, a lot of people would tell you: 'your guess is as good as mine.' But there are experts working on this behind the scenes, and there's a general idea that we have to adopt the slogan 'do no harm' when employing these very powerful technology models. I wanted to present some ideas that came out of a recent panel discussion at Imagination in Action, where we talked about what's really at stake and how to protect people in tomorrow's world.

In a general sense, panelists talked about how the context for AI is 'political' in the Greek sense: as historians point out, 'the polis was the cornerstone of ancient Greek civilization, serving as the primary political, social, and economic unit.' In other words, how we use AI has to do with people's politics and with political outcomes. The ways that we use AI are informed by our worldviews, and by geopolitical sentiment as well.

'When politics are going well, it's invisible, because business rolls on, art, culture, everything rolls on and you're not really paying attention to politics,' said panelist Jamie Metzl, author of Superconvergence. 'But I'm the son of a refugee. I've lived in Cambodia. I spent a lot of time in Afghanistan. When politics goes bad, politics is the only story. So everything that we're talking about, about technology, AI, exists within the context of politics, and politics needs to go well to create a space for everything else, and that's largely on a national level.'

In terms of business, too, we have to look at how information is siloed for different use cases. One of the objectives here is global governance: AI governance that sees the big picture and applies its principles universally.

A lot of people, in talking about their AI fears, reference the Skynet technology from the Terminator films, where a vague doom is attached to future systems in a world where the robots are in charge. But some suggest it's not as blatant as all that: the overwhelming force of AI can be more subtle, and the real story is how AI is already directing our social outcomes.

'It's the algorithms that already today are denying people access to housing, access to jobs, access to credit, that are putting them at risk of being falsely arrested because of how a biased algorithm misinterpreted who they were, and how our legal system compounded that technical error with legal injustice and systemic bias,' said panelist Albert Cahn.

Cahn pointed, as an example, to a system called MiDAS that was supposed to seek out fraud in unemployment insurance claims. Instead, he noted, the system went too broad and started catching innocent people in its dragnet, subjecting them to all kinds of hardship. 'When we are talking about the scales of getting it wrong with AI safety, this isn't about missing a box in some compliance checklist,' he said. 'This is truly a matter of people's livelihoods, people's liberty, and in some cases, sadly, even their lives.' That's something we have to look out for in terms of AI safety.

Noelle Russell had a different metaphor for AI safety, based on her work on Alexa and elsewhere in the industry, where she saw small models with the capacity to scale, and thought about the eventual outcomes.
'I came to call these little models 'baby tigers,'' she said. 'Because everyone, when you get a new model, you're like, 'oh my gosh, it's so cute and fluffy, and I love it, and (in the context of model work) I can't wait to be on that team, and it's going to be so fun.' But no one is asking, 'Hey, look at those paws. How big are you going to be?' Or razor-sharp teeth at birth. 'What are you going to eat? How much are you going to eat? Where are you going to live, and what happens when I don't want you anymore?' 23andme, we are selling DNA on the open market … You know, my biggest concern is that we don't realize, in the sea of baby tigers and excited enthusiasm we have about technology, that it might grow up one day and … hurt ourselves, hurt our children, but most importantly, that we actually have the ability to change that.'

Panelists also talked about measurement in AI safety, and how that works.

'In carpentry, the maxim is 'measure twice, cut once,'' said panelist Cam Kerry. 'When it comes to AI, it has to be 'measure, measure, measure and measure again.' It's got to be a continuous process, from the building of the system to the deployment of the system, so that you are looking at the outcomes, (and) you avoid the (bias) problems. There's good work going on. I think NIST, the National Institute of Standards and Technology, one of my former agencies at the Commerce Department, does terrific work on developing systems of measurement, and is doing that with AI, with the AI Safety Institute. That needs to scale up.'

Going back to the geopolitical situation, panelists referenced competition between the U.S. and China, where these two giants are trying hard to dominate when it comes to new technology.

Russell referenced a group called 'I love AI' that's helping to usher in the era of change, and provides a kind of wide-ranging focus group for AI. 'What I've uncovered is that there are anywhere from 12-year-olds to 85-year-olds, farmers to metaphysicians, and they are all desperate to understand: 'What do you mean the world is changing, and how do I just keep my head above water?'' she said.

Toward the end, Russell returned to the imperative for AI safety and how to get there: it's not a checklist you sign off on, and it's not a framework you adopt. It's a way of thinking that runs through the way you build software and the way you build companies, and it will need to be responsible.

These are some of the ideas I found important in documenting progress toward AI safety in our times.


Forbes
02-05-2025
- Business
New Leadership Playbook For The Age Of AI
As AI transforms the workplace, the leaders who thrive won't be the ones with all the answers. Instead, they'll be asking better questions, faster. These themes came to life at the recent IIA conference at the MIT Media Lab, convened by entrepreneur and investor John Werner and featuring top leaders and thinkers in AI.

'Most people are used to things being relatively stable, and the set of practices they're using are working,' said Jeremy Wertheimer, a serial entrepreneur. 'But right now that's absolutely the wrong way to think about it.' Employees need to change, and leaders need to help them adapt. Here's your new leadership playbook:

Where to start

You may feel eager to dive right in and start encouraging your people to use AI. You might feel pressure to do so from top executives in your company. So, as a leader, you may be asking yourself how you should be using AI right now. Although it's tempting to start with tools, 'that's the wrong question,' according to Jamie Metzl, author of Superconvergence. 'The first question you should ask is: who are we, what do we stand for and what are we trying to achieve? Then you can ask how AI fits into that.'

Issue an invitation

AI, like any new technology, won't be adopted unless your employees feel comfortable experimenting and taking risks. 'When you have an aspiration for what you think the company should look like, and it doesn't yet look like that, that's a gap,' said Amy Edmondson, professor at Harvard Business School and author of The Fearless Organization. According to Edmondson, there are two ways to close the gap. You could require people to do what you tell them to. Or 'you could make it attractive for people to close the gap. To do that, you'd frame it, with a great deal of humility, as a learning opportunity for which we don't yet know the answers and we invite you to play with us. Now, that's a pretty good invitation. That's almost an irresistible conversation for most people.' Get people excited and they'll be more likely to start working with AI.

Encourage using the technology

Creating the culture is one thing. Setting norms is another. 'Any time we do anything more than 3 times, we encourage people to automate using AI,' said Johnny Ho, cofounder and Chief Strategy Officer of Perplexity. Give people explicit frameworks about when and how they should use AI in their work and workflows, and let them know how they shouldn't be using AI with clear guidelines. Many employees are using AI and hiding it because they're afraid of being punished, while others are afraid to use it because they're intimidated, according to research conducted by KPMG and the University of Melbourne. Take these issues off the table and encourage people to use the technology by giving clear direction.

Lead like a researcher

Once employees start experimenting with AI, they won't get it right the first time, and that's the point. The frontier of AI is not a place for perfection. It's a lab. One way to encourage your employees to keep going is to frame their experiences as experiments: they're not failing to get things done; they're conducting research. As Wertheimer said, 'Everyone is doing research when you're at the frontier of the unknown.'

Build an entrepreneurial mindset

The skills needed in a workplace with AI are different from those of the past. As a leader you have to help your people adapt. One way to do that is to help them build an 'entrepreneurial mindset.'
According to Wertheimer: 'There are going to be fewer roles for workers just doing things, and more roles at the strategic thinking level. You could call that entrepreneurship. Either way, you have to learn to think strategically and to take initiative.' In large organizations, employees are often trained to follow process, not to think like founders. That won't work anymore. Discuss the key skills of taking initiative, thinking strategically, and adapting quickly, and work with your teams on ways to develop and measure those skills. For example, you could hold a monthly strategy session where one person presents a key topic, or you could ask your team to try one additional step before they come to you with questions.

Curiosity is a superpower

As a leader, you are likely also trying to figure out your role in the new world of AI. One skill you should cultivate is curiosity. 'Ask better questions,' Edmondson said. For example, you could ask your employees, 'What's something you tried that didn't work, and what did you learn?' or 'How did you approach this, and do we need to change our assumptions?' Focusing on your own curiosity will help you explore more of the tools available and how they can be useful. It will also help you coach your people when they run into trouble. 'Curiosity is one of our core values,' Ho said. 'It's a superpower.'

Meaning matters

AI can feel abstract or threatening to employees. That's why meaning matters more than ever. Leaders must help people connect their everyday tasks to a larger purpose, and show them why their work still matters. 'A leader needs to help people find meaning,' Edmondson said. 'Being able to draw a direct connection from the tasks you do, which can seem small or unimportant, to a larger vision or meaning that our organization is pursuing is essential for employees, and it's very motivating.'

Leadership in the AI era isn't about having all the answers. It's about guiding your people through uncertainty with clarity, courage, and curiosity. Start building your new playbook now.