
The Existential Threat To Universities

Forbes

25-04-2025



CAMBRIDGE, MASSACHUSETTS - JULY 08: A view of the campus of Harvard University on July 08, 2020 in Cambridge, Massachusetts. Harvard and Massachusetts Institute of Technology have sued the Trump administration for its decision to strip international college students of their visas if all of their courses are held online.

It's strange times for Ivy League schools – aside from all of the other campus and political issues that administrators have to deal with, many of those in higher education planning are looking over their shoulders at the prospect of emerging artificial intelligence. Is AI coming for education? It's a fair question, and one that's inspiring educators and others with skin in the game to think about how this might play out.

'Universities are grappling with how to integrate AI into curricula while also addressing ethical concerns and potential academic integrity issues,' writes Matthew Lynch at The Tech Edvocate. 'Many institutions have implemented AI literacy courses to ensure students understand both the capabilities and limitations of these tools. ... Some universities have embraced AI-powered chatbots for student support services, reporting increased efficiency and student satisfaction. However, concerns persist about the impact of AI on critical thinking skills and the potential for over-reliance on technology. Faculty members are adapting their teaching methods and assessment strategies to encourage original thought in an AI-saturated world.'

That's part of the picture – but some employees of university systems are also worried about their business. In addition to privacy and ethics concerns, there is the threat of disruption, which could change the business model of the university as an institution. In a recent presentation at the Imagination in Action event on April 15, former Harvard student Will Sentance talked about this existential threat and how it might affect big schools, using Harvard as an example.
Harvard, he quipped, is often described as 'a hedge fund with a classroom attached' – or rather, a multipurpose institution; he named things like a health network and publications as other components of the Harvard octopus. A long-standing practice among universities and other businesses, he contends, is to 'bundle' services and products to augment the appeal of a brand. Higher education may be less different than we think.

Citing 'hundreds of years of expansion,' Sentance suggested the best laid plans of Harvard people and those elsewhere in the Ivy League could come unraveled fairly quickly, given the pace of AI development. As a precursor, Sentance cited 20 years of 'unbundling' of newspapers and cable TV, where disruptive models emerged and took down these traditional media companies. 'Unbundling could now come for education,' he said.

As for a 'moat' in higher ed, Sentance noted the importance of education and research, and a symbiotic system where world-class professors mentor the 'brightest young intellects,' who, he added, are vital to lab science work. But he also referenced Andrej Karpathy and Eureka Labs, mentioning AI's ability to 'emulate any teacher,' which he said raises 'existential questions for universities.' 'How do we nurture what is human?' Sentance asked rhetorically.

'OpenAI and others know this is coming,' Sentance said, citing rumors about what Sam Altman says about AI and education in private. While positing that some disruption can be constructive, Sentance asked whether universities are capable of what he calls 'inventive renewal.' On the other hand, he acknowledged that humans are still guiding other humans toward educational objectives, and AI doesn't seem poised to change that anytime soon. 'Struggle is hard,' he added, conceding that right now, it's humans (professors) supporting students in learning. In not too many years, we might have an entirely new system of education.
As my colleague John Sviokla envisions, we may finally have a universal tutoring model, where it's always one teacher (human, AI or blended) and one student. Sentance also suggests that while there may be growing pains, we might end up with more human teachers, not fewer. 'Journalism experienced something similar,' he pointed out: 'the collapse in cost of information distribution, complete disruption to journalism, but that (didn't lead) to fewer storytellers, rather more the growth of their own audiences. … I'm confident the same will happen for education: new forms of learning, new forms of teaching, but more teachers, more lifelong learning, and at a whole new scale, to audiences previously left out.'

That's optimistic. But it's compelling, too. To the extent that Substack and other venues are the new journalism, maybe tomorrow's teachers will be thriving in places other than traditional schools. Education might look different, but in many ways, it might actually get better.

MIT Media Lab To Put Human Flourishing At The Heart Of AI R&D

Forbes

14-04-2025



Artificial intelligence is advancing at speed. Both the momentum and the money are focused on performance: faster models, more integrations, ever more accurate predictions. But as the industry sprints toward artificial general intelligence (AGI), one question lingers in the background: what happens to humans?

A recent report from Elon University's Imagining The Digital Future Center surveyed nearly 300 global technology experts. The resulting report, 'Being Human in 2035,' concluded that most are concerned that the deepening adoption of AI systems over the next decade will negatively alter how humans think, feel, act and relate to one another.

MIT Media Lab is trying to answer a similarly alarming question: how can AI support, rather than replace, human flourishing? It is the central question of the Lab's newly launched Advancing Humans with AI (AHA) program. Heralded as a bold, multi-year initiative not just to improve AI but to elevate human flourishing in an AI-saturated world, the program kicked off with a star-studded symposium laying out the concept and the research domains it will tackle.

Speakers included Arianna Huffington, who spoke of AI being like a 'GPS for the soul,' and Tristan Harris, who warned about systems exploiting human vulnerabilities under the guise of assistance. Both agreed that AI shouldn't just be optimized for efficiency; rather, it should be designed to cultivate wisdom, resilience, and reflection. This echoed AHA's deeper vision to reorient AI development around designing for the human interior – the parts of us that make life worth living but often get left out of technical design conversations.

Pat Pataranutaporn, co-lead of the AHA program, summed this up for the assembled audience, asking, 'What is the point of advancing artificial intelligence if we simultaneously devalue human intelligence and undermine human dignity?
Instead, we should strive to design AI systems that amplify and enhance our most deeply human qualities.'

The Missing Research Layer in AI

While safety and alignment dominate AI ethics debates, AHA concerns itself with longer-term human outcomes, as woven through the sections of the event, which covered Interior Life, Social Life, Vocational Life, Cerebral Life and Creative Life. From over-reliance and skill atrophy to growing emotional attachment and isolation, people are already reshaping their lives around AI. But few research efforts are dedicated to systematically understanding these changes, let alone designing AI to mitigate them. AHA aims to do just that. The initiative is grounded in six research domains.

A Moonshot Mindset

The ambition of AHA is matched by its moonshot projects. The message is clear: it's time to measure the wellbeing of humans, not just the performance of machines.

Why Now?

As AI becomes increasingly embedded in health, education, work, and social life, the choices made by engineers and designers today will shape cognitive habits, emotional norms, and social structures for decades. Yet, as AHA's contributors pointed out throughout the symposium, AI is still mostly optimized for business metrics and safety concerns rather than for psychological nuance, emotional growth, or long-term well-being. MIT's AHA initiative is not a critique of AI. It's a call to design better – to design not just smarter machines, but systems that reflect us as our best selves. As Professor Pattie Maes, co-lead of the AHA program and director of the Fluid Interfaces group, told me after the event, 'We are creating AI and AI in turn will shape us. We don't want to make the same mistakes we made with social media.
It is critical that we think of AI as not just a technical problem for engineers and entrepreneurs to solve, but also as a human design problem, requiring the expertise from human-computer interaction designers, psychologists, and social scientists for AI to lead to beneficial impact on the human experience.'
