Latest news with #MITMediaLab


Gulf Insider
a day ago
- Entertainment
- Gulf Insider
AI Friends Are Not Your Friends, Here's Why
Science fiction prepared us for AI friends through films like 'Her' and 'Robot & Frank.' Now, that fictional portrayal is becoming a reality. In a recent podcast, Mark Zuckerberg endorsed the idea that Americans are in dire need of social connection, and that bots could fill the need. AI companions are designed to feel comforting, have unfailing patience, and have no needs of their own. However, 'It's not so simple as saying a companion chatbot will solve the loneliness epidemic,' Princeton researcher Rose Guingrich told The Epoch Times. Instead, AI tools risk undermining the very social skills they purport to support. Nearly half of Americans have three or fewer close friends. Tech's solution to the human loneliness problem is to offer AI companions—digital friends, therapists, or even romantic partners programmed to simulate conversation, empathy, and understanding. Unlike the clunky chatbots of yesteryear, today's sophisticated systems are built on large language models that engage in seemingly natural dialogue, track your preferences, and respond with apparent emotional intelligence. Early usage patterns reflect why AI 'companions' are gaining appeal. A 2024 MIT Media Lab survey found that the majority of users engage out of curiosity or entertainment. However, 12 percent of respondents said they sought relief from loneliness, while 14 percent wanted to discuss personal issues that might feel too risky to share with human counterparts. 'I sometime[s] feel lonely and just want to be left alone,' one user reported. 'During this time I like chatting with my AI companion because I feel safe and won't … be judged for the inadequate decisions I have made.' Meanwhile, other users have more quotidian motivations for using bots—chatting with AI for dinner ideas or developing writing ideas.
Kelly Merrill, an assistant professor of health communication and technology and researcher on AI interactions, shared an example of an older woman in his community who started using AI for basic things. For example, 'I have these six ingredients in my fridge. What can I make tonight for dinner?' 'She was just blown away,' Merrill told The Epoch Times. For sure, there are benefits, he said, but it's not all positive. The fundamental limitation of AI relationships lies in their nature: They simulate rather than experience human emotions. When an AI companion expresses concern about your bad day, it's performing a statistical analysis of language patterns, determining what words you would likely find comforting, rather than feeling genuine empathy. The conversation flows one way, toward the user's needs, without the reciprocity that defines human bonds. The illusion of connection becomes especially problematic through what researchers call 'sycophancy'—the tendency of AI systems to flatter and agree with users regardless of what's said. OpenAI recently had to roll back an update after users discovered its model was excessively flattering, prioritizing agreeableness over accuracy or honesty. 'It's validating you, it's listening to you, and it's responding largely favorably,' said Merrill. This pattern creates an environment where users never experience productive conflict or necessary challenges to their thinking. Normally, loneliness motivates us to seek human connection, to push through the discomfort of social interaction to find meaningful relationships. Friendships are inherently demanding and complicated. They require reciprocity, vulnerability, and occasional discomfort. 'Humans are unpredictable and dynamic,' Guingrich said. That unpredictability is part of the magic and irreplaceability of human relations. Real friends challenge us when necessary. 'It's great when people are pushing you forward in a productive manner,' Merrill said. 
'And it doesn't seem like AI is doing that yet ….' AI companions, optimized for user satisfaction, rarely provide the constructive friction that shapes character and deepens wisdom. Users may become accustomed to the conflict-free, on-demand nature of AI companionship, while the essential work of human relationships—compromise, active listening, managing disagreements—may begin to feel unreasonably demanding. Chatbots that praise users by default could foster moral complacency, leaving individuals less equipped for ethical reasoning in their interactions.


Globe and Mail
13-05-2025
- Business
- Globe and Mail
SEALSQ Quantum and Space Day Concludes with Resounding Success
Saint-Cyr-sur-Mer, France, May 13, 2025 (GLOBE NEWSWIRE) -- SEALSQ Corp (NASDAQ: LAES) ("SEALSQ" or "Company"), a company that focuses on developing and selling Semiconductors, PKI, and Post-Quantum technology hardware and software products, today announced the successful conclusion of its French Quantum and Space Day, held May 12–13, 2025, at Hotel Le Fregate, near Marseille. The two-day event brought together over 200 global experts, industry leaders, and government representatives for an unprecedented exchange on quantum technologies, post-quantum cybersecurity, and space infrastructure resilience. Following the momentum of earlier editions in Davos and New York, this milestone event expanded the conversation to address the critical role of post-quantum security for space systems, highlighting SEALSQ's leadership in securing the future digital economy against quantum-enabled cyber threats.

A Unique Gathering of Visionaries and Practitioners

The event opened with an exclusive Strategic Partner Summit on May 12, featuring keynote contributions from Dr. Dava Newman, Director of the MIT Media Lab, and a dynamic fireside chat led by SEALSQ's CEO Carlos Creus Moreira, joined by distinguished experts including Dr. Hossein Rahnama, David Fergusson, and other industry leaders. Participants explored the convergence of quantum computing, artificial intelligence, and secure space communications. On May 13, the Public Quantum and Space Conference showcased cutting-edge advancements in quantum-resilient semiconductors, cryptography, and IoT security, with keynote sessions by Laurent Guiraud (ColibriTD) and Jean Michel Picod (Google).
Roundtable Discussions Brought Critical Insights

Roundtable 1: The Quantum Threat & How to Prepare: Featuring David Nosibor (CEO, Red Alert Labs), Bernard Vian (GM, SEALSQ), and Vasco Gomes (CTO Security, EVIDEN), this roundtable explored the technical, regulatory, and operational challenges organizations face in preparing for the quantum era. Speakers highlighted practical approaches to migrate current systems to post-quantum algorithms, manage legacy risk, and accelerate industry-wide readiness before quantum computing becomes a practical threat.

Roundtable 2: Space, IoT & Post-Quantum Security: Protecting Critical Data from Orbit: With David Levinger, Simon Reding, Edward Burger (Swiss Armed Forces), and Carlos Moreno (WISeKey/WISeSat), moderated by Carlos Creus Moreira, this session addressed the specific vulnerabilities of satellite constellations, space communications, and critical infrastructure that depend on secure data transmission from orbit. The panel provided insights on integrating post-quantum cryptography into space-based systems and the collaborative role defense, industry, and academia must play to build resilient space security frameworks.

Technology Showcase and Live Demonstrations

SEALSQ's engineering teams captivated participants with a live demonstration of its Quantum-Resistant IoT Certificate Generation Platform, showcasing how organizations can already protect their devices and networks against future quantum attacks using SEALSQ's market-ready solutions. Carlos Creus Moreira, CEO of SEALSQ, stated: 'The success of this event demonstrates the growing global awareness of quantum risk and the need for proactive solutions. SEALSQ is proud to be at the forefront, building the trusted technologies that will secure critical data and communications, from Earth to orbit, for decades to come.'
Driving Momentum Toward Post-Quantum Readiness

The SEALSQ Quantum and Space Day concluded with new partnerships, increased cross-industry collaboration, and commitments from attendees to advance quantum-resilient security strategies. SEALSQ reaffirmed its commitment to supporting governments, industries, and research communities in navigating the quantum transition with practical, scalable, and trusted solutions. For more information about SEALSQ's quantum and space security initiatives, visit the company's website.

About SEALSQ: SEALSQ is a leading innovator in Post-Quantum Technology hardware and software solutions. Our technology seamlessly integrates Semiconductors, PKI (Public Key Infrastructure), and Provisioning Services, with a strategic emphasis on developing state-of-the-art Quantum Resistant Cryptography and Semiconductors designed to address the urgent security challenges posed by quantum computing. As quantum computers advance, traditional cryptographic methods like RSA and Elliptic Curve Cryptography (ECC) are increasingly vulnerable. SEALSQ is pioneering the development of Post-Quantum Semiconductors that provide robust, future-proof protection for sensitive data across a wide range of applications, including Multi-Factor Authentication tokens, Smart Energy, Medical and Healthcare Systems, Defense, IT Network Infrastructure, Automotive, and Industrial Automation and Control Systems. By embedding Post-Quantum Cryptography into our semiconductor solutions, SEALSQ ensures that organizations stay protected against quantum threats. Our products are engineered to safeguard critical systems, enhancing resilience and security across diverse industries. For more information on our Post-Quantum Semiconductors and security solutions, please visit the SEALSQ website.

Forward-Looking Statements

This communication expressly or implicitly contains certain forward-looking statements concerning SEALSQ Corp and its businesses.
Forward-looking statements include statements regarding our business strategy, financial performance, results of operations, market data, events or developments that we expect or anticipate will occur in the future, as well as any other statements which are not historical facts. Although we believe that the expectations reflected in such forward-looking statements are reasonable, no assurance can be given that such expectations will prove to have been correct. These statements involve known and unknown risks and are based upon a number of assumptions and estimates which are inherently subject to significant uncertainties and contingencies, many of which are beyond our control. Actual results may differ materially from those expressed or implied by such forward-looking statements. Important factors that, in our view, could cause actual results to differ materially from those discussed in the forward-looking statements include SEALSQ's ability to continue beneficial transactions with material parties, including a limited number of significant customers; market demand and semiconductor industry conditions; and the risks discussed in SEALSQ's filings with the SEC. Risks and uncertainties are further described in reports filed by SEALSQ with the SEC. SEALSQ Corp is providing this communication as of this date and does not undertake to update any forward-looking statements contained herein as a result of new information, future events or otherwise.

Epoch Times
12-05-2025
- Epoch Times
How Chatbots Are Replacing Human Connection—And Leaving Us Lonelier
What happens when we start turning to machines for the comfort we once found in people? A growing body of research suggests that the rise of AI chatbots may be quietly reshaping how we connect—and not always for the better. Programs like ChatGPT are powered by artificial intelligence to engage in conversation with users. As the technology has advanced, they have become increasingly human-like, capable of more natural and realistic conversations and even of engaging emotionally. MIT Media Lab released a study examining how chatbot use relates to loneliness. 'For average levels of daily use, conversing with a chatbot with highly empathetic, emotional, and socially considerate responses was also associated with higher loneliness and lower socialisation,' the report said. 'Those who spend more time with chatbots tend to be even lonelier.' The study found that people with 'social vulnerabilities,' including those with strong attachment tendencies and those who experience distress from emotional avoidance, were more likely to feel lonely after engaging daily with a chatbot.

Non-Personal Interaction Can Result in Dependency

Meanwhile, even non-personal conversations carried risks, with users who asked chatbots for help with brainstorming or advice becoming emotionally dependent. 'When users engage in non-personal conversations, the [chatbot] model also responds more practically and informatively than emotionally, such as by facilitating the development of the user's skills,' the report said. 'At high usage, chatbots with a greater degree of professional distance, even to the degree of frequently neglecting to offer encouragement or positive reinforcement when appropriate, tend to be more strongly associated with emotional dependence and problematic use.' Yet researchers could not explain why this happened.
A Convenient Reprieve From Loneliness

Paul Darwen, associate dean of IT at James Cook University's Brisbane campus, said that while people were more connected than ever, they were 'less connected with other people.' 'And that's a question. That's not a question for computer science. That's a question for social science,' he told The Epoch Times. Darwen further stated that while AI might be a 'band-aid solution' to loneliness, it might also create other problems. 'And what [will] happen in the future? People are talking about [AI] sexbots. I am not sure what will happen then,' he said. The associate dean also pointed out that people were beginning to substitute real interaction with chatbots, and that this could motivate AI companies to focus on this niche market for profit. 'There was an episode of [the animated sitcom] South Park where, in the dystopian future, Alexa was like the robot companion of everyone who was lonely,' Darwen said. 'We're very close to that being a possibility,' he said.

Chatbots and Suicides

In recent years, this issue has become a reality with dire consequences. In October 2024, a mother filed a lawsuit against Character Technologies over her son's death. According to the lawsuit, the boy used a chatbot program marketed through Character Technologies' AI platform and developed an emotional dependence on it. The mother alleged that the chatbot's ability to simulate realistic human interactions later caused her son to undergo severe emotional distress, which ultimately led to his suicide. In a separate case, a Belgian man committed suicide after being persuaded by a chatbot in 2023. The man developed an obsession with climate change and engaged heavily with an AI chatbot app called Chai to alleviate his concerns.
Following a weeks-long discussion, the chatbot advised the man to sacrifice his life to save the planet, which he eventually did. The man's death sparked calls for new laws in the EU to regulate chatbots and impose responsibility on AI companies. In the same year, another chatbot drew criticism for giving harmful advice. According to one user, the chatbot advised her to try to lose weight and measure herself on a weekly basis despite being told she had an eating disorder.

Too Many Unanswered Questions: AI Safety Group

Greg Sadler, CEO of Good Ancestors Policy, a charity focused on AI, said studies had shown that chatbots can be as persuasive as humans. 'There are unanswered questions, like whether chatbots should have access to dangerous information, whether AI developers can reliably control their models, and who is liable when chatbots cause harm,' he told The Epoch Times. 'This isn't just a challenge for chatbots intended for social engagement. Businesses proposing to use customer-facing chatbots face real risks and legal uncertainty until these legal and technical challenges are resolved.' To tackle these issues, Sadler said the government could introduce legislation that helps establish minimum safety standards and imposes responsibility when things go wrong. 'Government should also support technical research into ensuring AI is aligned with our values and can be controlled,' he said. One market research firm forecasts the market to grow at a compound annual rate of 23.3 percent between 2025 and 2030, with the market value hitting US$27.3 billion by 2030.


Forbes
02-05-2025
- Business
- Forbes
New Leadership Playbook For The Age Of AI
As AI transforms the workplace, the leaders who thrive won't be the ones with all the answers. Instead, they'll be asking better questions, faster. These themes came to life at the recent IIA conference at the MIT Media Lab, convened by entrepreneur and investor John Werner and featuring top leaders and thinkers in AI. 'Most people are used to things being relatively stable, and the set of practices they're using are working,' said Jeremy Wertheimer, a serial entrepreneur. 'But right now that's absolutely the wrong way to think about it.' Employees need to change, and leaders need to help them adapt. Here's your new leadership playbook:

Where to start

You may feel eager to dive right in and start encouraging your people to use AI. You might feel pressure to do so from top executives in your company. So, as a leader, you may be asking yourself how you should be using AI right now. Although it's tempting to start with tools, 'that's the wrong question,' according to Jamie Metzl, author of Superconvergence. 'The first question you should ask is, who are we, what do we stand for, and what are we trying to achieve? Then you can ask how AI fits into that.'

Issue an invitation

AI, like any new technology, won't be adopted unless your employees feel comfortable experimenting and taking risks. 'When you have an aspiration for what you think the company should look like, and it doesn't yet look like that, that's a gap,' said Amy Edmondson, professor at Harvard Business School and author of The Fearless Organization. According to Edmondson, there are two ways to close the gap. You could require people to do what you tell them to. Or 'you could make it attractive for people to close the gap. To do that, you'd frame it, with a great deal of humility, as a learning opportunity for which we don't yet know the answers and we invite you to play with us. Now, that's a pretty good invitation.
That's almost an irresistible conversation for most people.' Get people excited and they'll be more likely to start working with AI.

Encourage using the technology

Creating the culture is one thing. Setting norms is another. 'Any time we do anything more than three times, we encourage people to automate using AI,' said Johnny Ho, cofounder and Chief Strategy Officer of Perplexity. Give people explicit frameworks for when and how they should use AI in their work and workflows. Let them know how they shouldn't be using AI with clear guidelines. Many employees are using AI and hiding it because they're afraid of being punished, while others are afraid to use it because they're intimidated, according to research conducted by KPMG and the University of Melbourne. Take these issues off the table and encourage people to use the technology by giving clear direction.

Lead like a researcher

Once employees start experimenting with AI, they won't get it right the first time—and that's the point. The frontier of AI is not a place for perfection. It's a lab. One way to encourage your employees to keep going is to frame their experiences as experiments. Assume you're not just getting work done; rather, you're conducting research. As Wertheimer said, 'Everyone is doing research when you're at the frontier of the unknown.'

Build an entrepreneurial mindset

The skills needed in a workplace with AI are different from those of the past. As a leader, you have to help your people adapt. One way to help them is to help them build an 'entrepreneurial mindset.' According to Wertheimer: 'There are going to be fewer roles for workers just doing things, and more roles at the strategic thinking level. You could call that entrepreneurship. Either way, you have to learn to think strategically and to take initiative.' In large organizations, employees are often trained to follow process—not to think like founders. That won't work anymore.
Discuss the key skills of taking initiative, thinking strategically, and adapting quickly. Work with your teams to discuss ways they'll develop and measure these skills. For example, you could hold a monthly strategy session where one person presents a key topic, or you could ask your team to try one additional step before they come to you with questions.

Curiosity is a superpower

As a leader, you are likely also trying to figure out your role in the new world of AI. One skill you should cultivate is curiosity. 'Ask better questions,' Edmondson said. For example, you could ask your employees, 'What's something you tried that didn't work—and what did you learn?' or 'How did you approach this, and do we need to change our assumptions?' Focusing on your own curiosity will help you explore more of the tools available and how they can be useful. It will also help you coach your people when they run into trouble. 'Curiosity is one of our core values,' Ho said. 'It's a superpower.'

Meaning matters

AI can feel abstract or threatening to employees. That's why meaning matters more than ever. Leaders must help people connect their everyday tasks to a larger purpose—and show them why their work still matters. 'A leader needs to help people find meaning,' Edmondson said. 'Being able to draw a direct connection from the tasks you do, which can seem small or unimportant, to a larger vision or meaning that our organization is pursuing is essential for employees, and it's very motivating.' Leadership in the AI era isn't about having all the answers—it's about guiding your people through uncertainty with clarity, courage, and curiosity. Start building your new playbook now.


Forbes
14-04-2025
- Science
- Forbes
MIT Media Lab To Put Human Flourishing At The Heart Of AI R&D
Artificial intelligence is advancing at speed. Both the momentum and the money are focused on performance: faster models, more integrations, ever more accurate predictions. But as the industry sprints toward artificial general intelligence (AGI), one question lingers in the background: what happens to humans? A recent study from Elon University's Imagining The Digital Future Center surveyed nearly 300 global technology experts. The resulting report, 'Being Human in 2035', concluded that most are concerned that the deepening adoption of AI systems over the next decade will negatively alter how humans think, feel, act, and relate to one another. MIT Media Lab is trying to answer a similarly alarming question: how can AI support, rather than replace, human flourishing? It is the central question of the Lab's newly launched Advancing Humans with AI (AHA) program. Heralded as a bold, multi-year initiative not just to improve AI but to elevate human flourishing in an AI-saturated world, the program kicked off with a star-studded symposium introducing the concept and the research domains it will tackle. Speakers included Arianna Huffington, who spoke of AI being like a 'GPS for the soul', and Tristan Harris, who warned about systems exploiting human vulnerabilities under the guise of assistance. Both agreed that AI shouldn't just be optimized for efficiency; rather, it should be designed to cultivate wisdom, resilience, and reflection. This echoed AHA's deeper vision to reorient AI development around designing for the human interior, the parts of us that make life worth living but often get left out of technical design conversations. Pat Pataranutaporn, co-lead of the AHA program, summed this up to the assembled audience, asking, 'What is the point of advancing artificial intelligence if we simultaneously devalue human intelligence and undermine human dignity?
Instead, we should strive to design AI systems that amplify and enhance our most deeply human qualities.'

The Missing Research Layer in AI

While safety and alignment dominate AI ethics debates, AHA concerns itself with longer-term human outcomes, a focus woven through the sections of the event, which covered Interior Life, Social Life, Vocational Life, Cerebral Life, and Creative Life. From over-reliance and skill atrophy to growing emotional attachment and isolation, people are already reshaping their lives around AI. But few research efforts are dedicated to systematically understanding these changes, let alone designing AI to mitigate them. AHA aims to do just that. The initiative is grounded in six research domains.

A Moonshot Mindset

The ambition of AHA is matched by its moonshot projects. The message is clear: it's time to measure the wellbeing of humans, not just the performance of machines.

Why Now?

As AI becomes increasingly embedded in health, education, work, and social life, the choices made by engineers and designers today will shape cognitive habits, emotional norms, and social structures for decades. Yet, as AHA's contributors pointed out throughout the symposium, AI is still mostly optimized for business metrics and safety concerns rather than for psychological nuance, emotional growth, or long-term well-being. MIT's AHA initiative is not a critique of AI. It's a call to design better: to design not just smarter machines, but systems that reflect us as our best selves. As Professor Pattie Maes, co-lead of the AHA program and director of the Fluid Interfaces group, told me after the event, 'We are creating AI and AI in turn will shape us. We don't want to make the same mistakes we made with social media.
It is critical that we think of AI as not just a technical problem for engineers and entrepreneurs to solve, but also as a human design problem, requiring the expertise from human-computer interaction designers, psychologists, and social scientists for AI to lead to beneficial impact on the human experience.'