Latest news with #MartinReeves


Forbes
30-07-2025
- Science
Lessons From NASA: How Failure Begets Success
Many organizations—and their leaders—don't recognize the value of trial and error. They want the Moon shot to succeed, perfectly, the first time. But as Martin Reeves, chairman of the BCG Henderson Institute, our internal think tank, describes, embracing 'experimentation, fast learning, adaptation, and innovation,' especially on big 'super-projects,' can make long-term success more likely. NASA, which has been dealing since 1958 with the challenges and (sometimes) chaos of trying to put people and machines in space, can attest to that.

Failure Is An Option, And Sometimes It's Necessary

Few projects have been bigger than the early U.S. space program, sparked initially by the then Soviet Union's 1957 Sputnik launch and then by President John F. Kennedy's 1961 challenge to the newly established National Aeronautics and Space Administration (NASA) to land U.S. astronauts on the Moon and safely return them to Earth. The U.S. space program has encountered numerous setbacks and failures, both before and since the July 1969 Apollo 11 Moon landing—most tragically, the Apollo 1 fire during a pre-launch test in 1967, the Space Shuttle Challenger disaster in 1986, and the Space Shuttle Columbia disaster in 2003, each of which resulted in multiple fatalities.

The ultimate test for any organization or team, however, is whether it is capable of analyzing the source (or sources) of such failures, correcting the problems, and moving forward. What isn't an option—and shouldn't be—is wallowing in failure. As Robert (Bob) Gibbs, NASA's Assistant Administrator for Mission Support until his recent retirement, told me, whether an organization benefits from failure or gets dragged down by it largely depends on how its leadership reacts to setbacks.
NASA, Moon Shots, Potholes, And Mars

Today's space commercialization and exploration programs are driven by increasing collaboration between government and the private sector. These missions aim not only to return to the Moon but to go far beyond. The science and technology are challenging and the missions unforgiving. Errors and failed attempts go with the territory. That means the road to success likely will have any number of dead ends, detours, and potholes—a familiar path SpaceX seems to be following with its super-sized, 400-foot-tall Starship rocket, designed to support interplanetary travel. First stop: Mars. At this writing there have been nine Starship tests, several of them explosive failures. Every failure, however, produces valuable information and insights, bringing the team closer to success.

How You React To Failure Is Key

According to Gibbs, who also served as the U.S. space agency's chief human capital officer during his tenure there, organizations can react to setbacks in one of two ways. One common approach is 'kill the messenger,' he told me. That's when leaders 'make it absolutely clear that failure is not acceptable, and ensure those associated with it don't advance and are seen to 'pay a price'.' This approach effectively kills discussion about what went wrong and guarantees that your team will become even more risk-averse, he stressed. Yet this is the way many leaders respond to adversity—probably because it's hard for many of them, for both personal and financial reasons, to be open about failure.

The alternative is to learn from failure, encourage constructive discussions, debates, and even dissent, and move forward. This is the approach NASA has embraced, Gibbs says. It's how engagement and innovation thrive and learning from failure becomes part of the culture. One way an organization can demonstrate this principle is to talk candidly about failures.
According to Gibbs, NASA does this through its publicly available lessons-learned system. To evaluate whether your organization truly embraces failure as part of its learning process, Gibbs suggests that you take the test below. If you answered yes to all of Gibbs's questions, which come from NASA's Jet Propulsion Laboratory and are included in NASA's Lessons Learned Information System, you have created an environment where employees can fail, learn, innovate, and move forward, an environment for success.

One of the most often repeated quotes about one of the world's most impactful inventors is Thomas Edison's alleged description of his serial failures on what became known as the light bulb. 'I have not failed,' he reportedly said. 'I've just found 10,000 ways that won't work.' The rest is history.


Irish Times
12-06-2025
- Business
How the ‘like' button was used by social media companies to turbocharge their business models
The 'like' button and its palette of variations are used about 160 million times a day globally, and it is now the most widely used symbol of affirmation in the digital world. The story of how and why it quickly established this status in the noughties, how social platforms have exploited it to turbocharge their business models since, and its future in the world of artificial intelligence is the subject of a fascinating new book, Like: The Button that Changed the World.

Bob Goodson, co-author of the book with Martin Reeves, played an important role in the button's development. While he says that, as with most innovative breakthroughs, no one person can claim to have developed the like button single-handedly, he was one half of a two-person team at the then start-up online directory Yelp that developed the first button that allowed users to stay on the same page after clicking. Yelp, which is now a $1 billion business, was also the first to put multiple emotional buttons on a web page. Then as now, the big issue was engagement.

'In the early years of the web, about 5 per cent of people created content and 95 per cent simply consumed it, and that was a real problem for platforms that wanted to grow,' he tells The Irish Times. 'Putting content on the web in its early days was a really geeky thing to do. There were technical issues, there was dial-up, joining forums; it was very male-dominated and women didn't feel safe on the internet. This was before the nice graphical interfaces were introduced and it was all text-based. Yelp and others were thinking about ways of bringing the barrier down, and introducing 'likes' proved a real winner.'

Today, Goodson runs a Silicon Valley-based firm, Quid, whose AI models are used by many of the top Fortune 50 companies, while Reeves is chairman of Boston Consulting Group's business strategy think tank, the BCG Henderson Institute. Their book explores the psychology of likes and finds evolutionary motives.
Likes boost survival chances by contributing to the friendship, communication and hierarchy that maximise social learning. We have evolved to get a buzz from liking, and that applies to both giving and receiving. A positive reaction to a social-media post causes a frisson of excitement and a feel-good surge, and this is the psychological territory that the like button taps into. Interestingly, it doesn't work in reverse. Pioneers who tried 'don't like' buttons soon dropped them as they found users hated them.

The thumb symbol is another enduring feature of the like button, and many of the variations also incorporate elements of the hand. 'We think there's something very powerful about the hand. Monks in the 14th century used to draw hands with pointing fingers in the margins of manuscripts, called manicules. People have been interested in interacting with text like this for as long as it has been written down,' Goodson notes.

Facebook's adoption of the like button was the key development for the 'like economy' and provided a huge boost to its fortunes from 2009 onwards. Others quickly followed. As analytics became more and more powerful, like data has become a digital gold mine, allowing platforms to customise content for users and sell that user data to those wanting to reach them. The like button greases the wheels of this model, and the algorithms here are the secret sauce, carefully guarded. As the authors put it: 'Your use of a platform can reveal behaviour you don't even reflect on or recognise about yourself. The timing of your activity on the site, for example, can say a lot about the typical rhythm of your day, and the emotional valence of your postings can serve as a kind of mood ring you didn't know you were wearing.
Like poker players highly adept at spotting tells, the social-media platforms you use pick up on signals you're not conscious of transmitting.'

Secrecy is a recurring theme. 'It's a neat trick on the part of the platforms, giving their product away for free to users without being too explicit about the fact that, in fact, those users are the product.' Not that consumers seem to be too bothered. Arguably, having goods and services pitched at you isn't a problem if the algorithm is serving up things that you've told it you are actually interested in – whether you've done that consciously or unconsciously.

More worrying is the effect of likes, or the lack of them, on the more vulnerable. Children are a particular area of concern in regard to issues such as self-esteem, Goodson acknowledges. 'Social media is a powerfully addictive thing because of the game the platforms are playing for attention, and they will use whatever works. It has created an experience that requires a lot of self-discipline and awareness to keep it healthy. There's a growing belief that children are not ready for it. We're now seeing regulation in this area, such as in Australia, which is prohibiting under-16s from using platforms. My expectation is that trend will continue and that there will need to be stronger protections and regulations for younger people.'

Likes are not a trivial matter for those of any age, however, as many public figures have found to their extreme cost. 'Liking is a form of content creation. When you click a like you are putting your mark down forever. If you are not careful, it could cost you your job.'

Likes are also playing an important role in the ongoing development of AI, albeit in a different way. 'There are like buttons in the chat apps. They use thumbs up and thumbs down because it won't offend the model. They want you to interact with it because it will improve performance of the large language model. It's just as important.
We also think the like data that the platforms hold are very important data sets to train the models. It could be a real competitive advantage for Meta and X.'

Reeves and Goodson have produced a thoughtful, well-researched and easily accessible book on this subject, exploring the deep significance of what appears to be a simple and intuitive feature that we take for granted, but one that in fact has a profound influence on our lives. It gets a thumbs up.

Like: The Button That Changed the World, by Martin Reeves and Bob Goodson, is published by Harvard Business Review Press.


Harvard Business Review
02-05-2025
- Entertainment
Like: The Button That Changed the World
A riveting insider's look at the creation and evolution of the like button and what it reveals about innovation, business, and culture, and its profound impact on modern human interaction.

Over seven billion times a day, someone taps a like button. How could something that came out of nowhere become so ubiquitous, and even so addictive? How did this seemingly ordinary social media icon go from such a small and unassuming invention to something so intuitive and universally understood that it has scaled well beyond its original intent? This is the story of the like button and how it changed our lives.

In Like, bestselling author and renowned strategy expert Martin Reeves and coauthor Bob Goodson, Silicon Valley veteran and one of the originators of the like button, take readers on a quest to uncover the origins of the thumbs-up gesture, how it became an icon on social media, and what's behind its power. Through insights from key players, including the founders of Yelp, PayPal, YouTube, Twitter, LinkedIn, Gmail, and FriendFeed, you'll hear firsthand about the disorderly, serendipitous process from which the like button was born. It's a story that starts with a simple thumbs-up cartoon but ends up with surprises and new mysteries at every turn, some of them as deep as anthropological history and others as speculative as the AI-charged future.

But this is much more than the origin story of the like button. Drawing on business and innovation theory, evolutionary biology, social psychology, neuroscience, and other human-centered disciplines, this deeply researched book offers smart and unexpected insights into how this little icon changed our world, and all of us in the process.


WIRED
29-04-2025
- Business
AI Is Using Your Likes to Get Inside Your Head
By Martin Reeves and Bob Goodson
Apr 29, 2025 7:00 AM

Liking features on social media can provide troves of data about human behavior to AI models. But as AI gets smarter, will it be able to know users' preferences before they do?

What is the future of the like button in the age of artificial intelligence? Max Levchin—the PayPal cofounder and Affirm CEO—sees a new and hugely valuable role for liking data: training AI to arrive at conclusions more in line with those a human decision-maker would reach.

It's a well-known quandary in machine learning that a computer presented with a clear reward function will engage in relentless reinforcement learning to improve its performance and maximize that reward—but that this optimization path often leads AI systems to very different outcomes than would result from humans exercising human judgment. To introduce a corrective force, AI developers frequently use what is called reinforcement learning from human feedback (RLHF). Essentially, they are putting a human thumb on the scale as the computer arrives at its model, by training it on data reflecting real people's actual preferences.

But where does that human preference data come from, and how much of it is needed for the input to be valid? So far, this has been the problem with RLHF: it's a costly method if it requires hiring human supervisors and annotators to enter feedback. And this is the problem that Levchin thinks could be solved by the like button. He views the accumulated resource that today sits in Facebook's hands as a godsend to any developer wanting to train an intelligent agent on human preference data. And how big a deal is that? 'I would argue that one of the most valuable things Facebook owns is that mountain of liking data,' Levchin told us.
Indeed, at this inflection point in the development of artificial intelligence, having access to 'what content is liked by humans, to use for training of AI models, is probably one of the singularly most valuable things on the internet.'

While Levchin envisions AI learning from human preferences through the like button, AI is already changing the way these preferences are shaped in the first place. In fact, social media platforms are actively using AI not just to analyze likes but to predict them—potentially rendering the button itself obsolete. This was a striking observation for us because, as we talked to people, the predictions mostly came from another angle, describing not how the like button would affect the performance of AI but how AI would change the world of the like button.

Already, we heard, AI is being applied to improve social media algorithms. Early in 2024, for example, Facebook experimented with using AI to redesign the algorithm that recommends Reels videos to users. Could it come up with a better weighting of variables to predict which video a user would most like to watch next? The result of this early test showed that it could: applying AI to the task paid off in longer watch times—the performance metric Facebook was hoping to boost.

When we asked YouTube cofounder Steve Chen what the future holds for the like button, he said, 'I sometimes wonder whether the like button will be needed when AI is sophisticated enough to tell the algorithm with 100 percent accuracy what you want to watch next based on the viewing and sharing patterns themselves. Up until now, the like button has been the simplest way for content platforms to do that, but the end goal is to make it as easy and accurate as possible with whatever data is available.'
He went on to point out, however, that one reason the like button may always be needed is to handle sharp or temporary changes in viewing needs because of life events or situations. 'There are days when I wanna be watching content that's a little bit more relevant to, say, my kids,' he said. Chen also explained that the like button may have longevity because of its role in attracting advertisers—the other key group alongside the viewers and creators—because the like acts as the simplest possible hinge connecting those three groups. With one tap, a viewer simultaneously conveys appreciation and feedback directly to the content provider and evidence of engagement and preference to the advertiser.

Another major impact of AI will be its increasing use to generate the content itself that is subject to people's emotional responses. Already, growing amounts of the content—both text and images—being liked by social media users are AI-generated. One wonders if the original purpose of the like button—to motivate more users to generate content—will even remain relevant. Would the platforms be just as successful on their own terms if their human users ceased to make posts at all?

This question, of course, raises the problem of authenticity. During the 2024 Super Bowl halftime show, singer Alicia Keys hit a sour note that was noticed by every attentive listener tuned in to the live event. Yet when the recording of her performance was uploaded to YouTube shortly afterward, that flub had been seamlessly corrected, with no notification that the video had been altered. It's a minor thing (and good for Keys for doing the performance live in the first place), but the sneaky correction raised eyebrows nonetheless. Ironically, she was singing 'If I Ain't Got You'—and her fans ended up getting something slightly different from her. If AI can subtly refine entertainment content, it can also be weaponized for more deceptive purposes.
The same technology that can fix a musical note can just as easily clone a voice, with far more serious consequences. More chilling is the trend that the US Federal Communications Commission (FCC) and its equivalents elsewhere have recently cracked down on: uses of AI to 'clone' an individual's voice and effectively put words in their mouth. It sounds like them speaking, but it may not be them—it could be an impostor trying to trick that person's grandfather into paying a ransom or trying to conduct a financial transaction in their name. In January 2024, after an incident of robocalls spoofing President Joe Biden's voice, the FCC issued clear guidance that such impersonation is illegal under the provisions of the Telephone Consumer Protection Act, and warned consumers to be careful. 'AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,' said FCC chair Jessica Rosenworcel. 'No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.'

Short of fraudulent pretense like this, an AI-filled future of social media might well be populated by seemingly real people who are purely computer-generated. Such virtual concoctions are infiltrating the community of online influencers and gaining legions of fans on social media platforms. 'Aitana Lopez,' for example, regularly posts glimpses of her enviable life as a beautiful Spanish musician and fashionista. When we last checked, her Instagram account was up to 310,000 followers, and she was shilling for hair-care and clothing brands, including Victoria's Secret, at a cost of some $1,000 per post. But someone else must be spending her hard-earned money, because Aitana doesn't really need clothes or food or a place to live.
She is the programmed creation of an ad agency—one that started out connecting brands with real human influencers but found that the humans were not always so easy to manage. With AI-driven influencers and bots engaging with each other at unprecedented speed, the very fabric of online engagement may be shifting. If likes are no longer coming from real people, and content is no longer created by them, what does that mean for the future of the like economy?

In a scenario that not only echoes but goes beyond the premise of the 2013 film Her, you can also now buy a subscription that enables you to chat to your heart's content with an on-screen 'girlfriend.' CarynAI is an AI clone of a real-life online influencer, Caryn Marjorie, who had already gained over a million followers on Snapchat when she decided to team up with an AI company and develop a chatbot. Those who would like to engage in one-to-one conversation with the virtual Caryn pay a dollar per minute, and the chatbot's conversation is generated by OpenAI's GPT-4 software, as trained on an archive of content Marjorie had previously published on YouTube.

We can imagine a scenario in which a large proportion of likes are not awarded to human-created content—and not granted by actual people, either. We could have a digital world overrun by synthesized creators and consumers interacting with each other at lightning speed. Surely if this comes to pass, even in part, there will be new problems to solve, relating to our need to know who really is who (or what), and whether a seemingly popular post is really worth checking out. Do we want a future in which our true likes (and everyone else's) are more transparent and unconcealable? Or do we want to retain (for ourselves but also for others) the ability to dissemble? It seems plausible that we will see new tools developed to provide more transparency and assurance as to whether a like is attached to a real person or just a realistic bot.
Different platforms might apply such tools to different degrees.

Excerpt adapted from Like: The Button That Changed the World by Martin Reeves and Bob Goodson. Published by arrangement with HBR Press. Copyright © 2025 by Martin Reeves and Bob Goodson.