
Latest news with #TristanHarris

How to use AI for good

Fast Company

14-05-2025

Social media was mankind's first run-in with AI, and we failed that test horribly, according to tech ethicist Tristan Harris, whom The Atlantic called 'the closest thing Silicon Valley has to a conscience.' A recent survey found nearly half of Gen Z respondents wished social media had never been invented. Yet 60% still spend at least four hours daily on these platforms. Bullying, social anxiety, addiction, polarization, and misinformation: social media has become a cocktail of disturbing discourse.

With GenAI, we have a second chance to ensure technology is used responsibly. But this is proving difficult. Major AI companies are now adopting collaborative approaches to address governance challenges. Recently, OpenAI announced it would implement Anthropic's Model Context Protocol, a standard for connecting AI models to data sources that is rapidly becoming an industry norm, with Google following suit (the standard is sketched in code below). With any new technology, there are unexpected benefits and consequences. As Harris put it, 'whatever our power is as a species, AI amplifies it to an exponential degree.' While GenAI helps us accomplish more than ever before, dangers exist. A seemingly safe large language model (LLM) can be manipulated by bad actors to create harmful content or be jailbroken to write malicious code. How do we avoid these harmful use cases while benefiting from this powerful technology? Three approaches are possible, each with its own merits and drawbacks.

3 ways to benefit from AI while avoiding harm

Option #1: Government regulation

The automobile brought both convenience and tragedy. We responded with speed limits, seatbelts, and regulations, a process spanning over a century. Legislators worldwide are attempting similar safeguards for AI. The European Union leads with its AI Act, which entered into force in August 2024. Implementation is phased, with some provisions active since February 2025 banning systems that pose 'unacceptable risk', such as social scoring and untargeted scraping of facial recognition data.

However, these regulations present challenges. European tech leaders worry that punitive EU measures could trigger backlash from the Trump administration. Meanwhile, U.S. regulation is developing as a patchwork of state and federal initiatives, with states like Colorado enacting their own comprehensive AI laws. The EU AI Act's implementation timeline illustrates this complexity: some bans started in February 2025, codes of practice follow nine months after entry into force, rules on general-purpose AI apply at the 12-month mark, and high-risk systems have 36 months to comply.

A real concern exists: excessive regulation might simply shift development elsewhere. Building a functional LLM costs only hundreds of millions of dollars, within reach for many countries. While regulation has its place, the process is currently too flawed to produce good rules. AI evolves too quickly, and the industry attracts too much investment. The resulting regulations risk either stifling innovation or lacking meaningful impact. So, if government regulation isn't the panacea for AI's dangers, what will help?

Option #2: Social discourse

Educators are struggling with GenAI and academic honesty. Some want to block AI entirely, while others see opportunities to empower students who struggle with traditional pedagogy. Imagine having a perpetually available tutor answering any question, but one that can also complete your assignments.
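To make the Model Context Protocol mentioned above concrete: below is a minimal sketch of what an MCP server can look like, using the official MCP Python SDK and its FastMCP helper. The server name, tool, and resource URI are illustrative placeholders, not taken from any product mentioned in this article.

```python
# Minimal Model Context Protocol (MCP) server sketch using the official
# Python SDK (pip install mcp). All names below are illustrative.
from mcp.server.fastmcp import FastMCP

# Create a named server; MCP-compatible clients discover its tools
# and resources through the protocol rather than custom integrations.
mcp = FastMCP("demo-server")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

@mcp.resource("notes://{topic}")
def get_notes(topic: str) -> str:
    """Return stored notes for a topic (hard-coded for illustration)."""
    notes = {"ai-policy": "The EU AI Act entered into force in August 2024."}
    return notes.get(topic, "No notes found for this topic.")

if __name__ == "__main__":
    # Serve over stdio so a local MCP client can connect.
    mcp.run()
```

Because every tool and data source is described through the same protocol, any model that speaks MCP can discover and call them, which is what makes adoption by OpenAI and Google significant.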
As Satya Nadella put it recently on the Dwarkesh Podcast, his new workflow is to 'think with AI and work with my colleagues.' This collaborative approach to AI usage could be a model for educational settings, where AI serves as a thinking partner rather than a replacement for learning.

In homes, schools, online forums, and government, society must reckon with this technology and decide what's acceptable. Everyone deserves a voice in these conversations. Unfortunately, internet discussions often devolve into trading sound bites without context or nuance. For meaningful conversations, we must educate ourselves. We need effective channels for public input, perhaps through grassroots movements guiding people toward safe and effective AI usage.

Option #3: Third-party evaluators

Before the 2008 financial crisis, credit rating agencies assigned AAA ratings to subprime mortgages, contributing to economic disaster. The problem? Industry-wide self-interest. When it comes to AI regulators, of course, we run the risk of an incestuous revolving door that does more harm than good. That doesn't have to be the case. Meaningful and thoughtful research is going into AI certifications and third-party evaluators.

In the paper 'AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries', Peter Cihon et al. make several proposals. First, because AI technology is advancing so quickly, AI certification should emphasize evergreen principles, such as ethics for AI developers. Second, AI certification today lacks nuance for particular circumstances, geographies, or industries. Not only is certification homogeneous, but many programs treat AI as a 'monolithic technology' rather than acknowledging its diverse forms, such as facial recognition, LLMs, and anomaly detection. Finally, to see good results, customers must demand high-quality certifications. They have to be educated about the technology and the associated ethics and safety concerns.

The path forward

The way forward requires multistakeholder, multifaceted conversations about societal goals and preventing AI dangers. If government becomes the default regulator, we risk an uninvestable marketplace or meaningless rubber-stamping. Independent third-party evaluators combined with informed social discourse offer the best path forward. But we must educate ourselves about this powerful technology's dangers and realities, or we'll repeat social media's errors on a grander scale.

Peter Wang is chief AI and innovation officer at Anaconda.

Long Before AGI: Three AI Milestones That Will Challenge You

Forbes

07-05-2025

The increasing talk of Artificial General Intelligence (AGI) has led many to ponder what will occur when AI surpasses human intelligence. Whether this moment is near or far depends on who you ask, but I would argue it is not the most relevant milestone we should be aware of. What are the important earlier AI milestones that will affect every human being, and which ones have already been reached? Here are three that I would argue are already upon us.

AI Surpasses Human Weakness

In the 2020 documentary 'The Social Dilemma', Tristan Harris of the Center for Humane Technology pointed out that AI has already surpassed human weakness. What does this mean? It means that AI can already apply patterns of human psychology and human behavior to generate cues that trigger desired outcomes. Does this mean that AI understands psychology? No. It simply means that enough data exists on human behavior, concerning responses to cues, that AI programs can generate inputs that are probabilistically likely to cause humans to react in desired ways. This aspect of AI is already well in place in everything from online marketing to social media.

AI Changes The Job Market

AI is expected to have a fundamental impact on the workforce. This impact is already underway, ranging from companies reporting intentions to shrink their human workforce and leverage AI for productivity, to new jobs emerging as a result of new AI-driven industries or AI adoption by existing industries. While the potential workforce impact of AGI is likely greater, it is worth noting that most jobs require specialized skills and knowledge. AIs that are capable of mimicking these skills and regurgitating this knowledge in useful forms, such as emails, written reports, and answers to questions, can easily reduce the number of humans required for a task, long before AGI arrives.

AI Changes Human Education

In my opinion, this is by far the most impactful of the pre-AGI AI changes. AI has already exceeded many human students on standard tests, ranging from high school Advanced Placement exams to the US Medical Licensing Exam. These data points suggest that what humans need to learn in the age of AI, or at the very least how we assess human intelligence, needs to be revisited. To further complicate this situation, AI tutors are emerging, with a school of thought that AI may be a suitable or even preferred teacher for certain human learners. This development should cause us to seriously consider the value of human engagement in human learning, and articulate it much more crisply than we have been required to do before. Otherwise, there is a risk that learners will be stratified, with the most fortunate receiving both AI and human guidance, and the less resourced having to rely on AI guidance alone.

Back To The Singularity - What About AGI?

Given the magnitude of the impacts already occurring with AI, what can we expect AGI to do? My personal view is that the 'General' in AGI implies that AIs closer to AGI will have a greater ability to bridge domains and bridge modalities (such as voice, video, etc.). As such, AGI can be expected to compete with humans on much more complex tasks than AI does today.
Looking at the above three challenges, one can imagine that AGI will exacerbate them all from a human perspective, creating new technologies that can exploit human weakness, new job roles that AI can fill, and greater difficulty in crisply identifying the role of humans in human education.

What Should You Do?

I see AGI as a process, not a binary event. As the above examples show, there will be intermediate events with significant impact. Since AGI has no universally agreed-upon definition, we may also never agree on whether it has been reached. As such, the best way, in my view, to prepare for AGI is to understand the challenges and opportunities that AI already presents and 'surf the wave', adapting to new developments as they occur. This requires building AI skills as they are relevant to your professional role, staying aware of new AI developments in your domain, and adapting your skills as AI evolves.

MIT Media Lab To Put Human Flourishing At The Heart Of AI R&D

Forbes

14-04-2025

Artificial intelligence is advancing at speed. Both the momentum and the money are focused on performance: faster models, more integrations, ever more accurate predictions. But as the industry sprints toward artificial general intelligence (AGI), one question lingers in the background: what happens to humans?

A recent study from Elon University's Imagining the Digital Future Center surveyed nearly 300 global technology experts. The resulting report, 'Being Human in 2035', concluded that most are concerned that the deepening adoption of AI systems over the next decade will negatively alter how humans think, feel, act, and relate to one another.

MIT Media Lab is trying to answer a similarly alarming question: how can AI support, rather than replace, human flourishing? It is the central question of the Lab's newly launched Advancing Humans with AI (AHA) program. Heralded as a bold, multi-year initiative that aims not just to improve AI but to elevate human flourishing in an AI-saturated world, the program kicked off with a star-studded symposium introducing the concept and the research domains it will tackle.

Speakers included Arianna Huffington, who spoke of AI being like a 'GPS for the soul', and Tristan Harris, who warned about systems exploiting human vulnerabilities under the guise of assistance. Both agreed that AI shouldn't just be optimized for efficiency; rather, it should be designed to cultivate wisdom, resilience, and reflection. This echoed AHA's deeper vision: to reorient AI development around designing for the human interior, the parts of us that make life worth living but often get left out of technical design conversations. Pat Pataranutaporn, co-lead of the AHA program, summed this up to the assembled audience, asking, 'What is the point of advancing artificial intelligence if we simultaneously devalue human intelligence and undermine human dignity? Instead, we should strive to design AI systems that amplify and enhance our most deeply human qualities.'

The Missing Research Layer in AI

While safety and alignment dominate AI ethics debates, AHA concerns itself with longer-term human outcomes, a theme woven through the sections of the event, which covered Interior Life, Social Life, Vocational Life, Cerebral Life, and Creative Life. From over-reliance and skill atrophy to growing emotional attachment and isolation, people are already reshaping their lives around AI. But few research efforts are dedicated to systematically understanding these changes, let alone designing AI to mitigate them. AHA aims to do just that. The initiative is grounded in six research domains.

A Moonshot Mindset

The ambition of AHA is matched by its moonshot projects. The message is clear: it's time to measure the wellbeing of humans, not just the performance of machines.

Why Now?

As AI becomes increasingly embedded in health, education, work, and social life, the choices made by engineers and designers today will shape cognitive habits, emotional norms, and social structures for decades. Yet, as AHA's contributors pointed out throughout the symposium, AI is still mostly optimized for business metrics and safety concerns rather than for psychological nuance, emotional growth, or long-term well-being.

MIT's AHA initiative is not a critique of AI. It's a call to design better: to design not just smarter machines, but systems that reflect us as our best selves. As Professor Pattie Maes, co-lead of the AHA program and director of the Fluid Interfaces group, told me after the event, 'We are creating AI and AI in turn will shape us.
We don't want to make the same mistakes we made with social media. It is critical that we think of AI not just as a technical problem for engineers and entrepreneurs to solve, but also as a human design problem, requiring expertise from human-computer interaction designers, psychologists, and social scientists for AI to have a beneficial impact on the human experience.'
