
Latest news with #HAL9000

AI could already be conscious. Are we ready for it?

Yahoo

26-05-2025

  • Science

I step into the booth with some trepidation. I am about to be subjected to strobe lighting while music plays – as part of a research project trying to understand what makes us truly human. It's an experience that brings to mind the test in the science fiction film Blade Runner, designed to distinguish humans from artificially created beings posing as humans. Could I be a robot from the future and not know it? Would I pass the test? The researchers assure me that this is not actually what the experiment is about. The device they call the "Dreamachine" is designed to study how the human brain generates our conscious experiences of the world.

As the strobing begins, and even though my eyes are closed, I see swirling two-dimensional geometric patterns. It's like jumping into a kaleidoscope, with constantly shifting triangles, pentagons and octagons. The colours are vivid, intense and ever-changing: pinks, magentas and turquoise hues, glowing like neon lights. The "Dreamachine" brings the brain's inner activity to the surface with flashing lights, aiming to explore how our thought processes work. The images I'm seeing are unique to my own inner world, according to the researchers, who believe these patterns can shed light on consciousness itself. They hear me whisper: "It's lovely, absolutely lovely. It's like flying through my own mind!"

The "Dreamachine", at Sussex University's Centre for Consciousness Science, is just one of many new research projects across the world investigating human consciousness: the part of our minds that enables us to be self-aware, to think and feel and make independent decisions about the world. By learning the nature of consciousness, researchers hope to better understand what's happening within the silicon brains of artificial intelligence. Some believe that AI systems will soon become independently conscious, if they haven't already. But what really is consciousness, and how close is AI to gaining it? And could the belief that AI might be conscious itself fundamentally change humans in the next few decades?

The idea of machines with their own minds has long been explored in science fiction. Worries about AI stretch back nearly a hundred years to the film Metropolis, in which a robot impersonates a real woman. A fear of machines becoming conscious and posing a threat to humans was explored in the 1968 film 2001: A Space Odyssey, in which the HAL 9000 computer tries to kill astronauts on board its spaceship. And in the final Mission: Impossible film, which has just been released, the world is threatened by a powerful rogue AI, described by one character as a "self-aware, self-learning, truth-eating digital parasite".

But recently, in the real world, thinking on machine consciousness has reached a tipping point: credible voices have become concerned that this is no longer the stuff of science fiction. The sudden shift has been prompted by the success of so-called large language models (LLMs), which can be accessed through apps on our phones such as Gemini and ChatGPT. The ability of the latest generation of LLMs to have plausible, free-flowing conversations has surprised even their designers and some of the leading experts in the field. There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious.
Others, such as Prof Anil Seth, who leads the Sussex University team, disagree, describing the view as "blindly optimistic and driven by human exceptionalism". "We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn't mean they go together in general, for example in animals."

So what actually is consciousness? The short answer is that no-one knows. That's clear from the good-natured but robust arguments among Prof Seth's own team of young AI specialists, computing experts, neuroscientists and philosophers, who are trying to answer one of the biggest questions in science and philosophy. While there are many differing views at the consciousness research centre, the scientists are unified in their method: to break this big problem down into lots of smaller ones in a series of research projects, which include the Dreamachine.

Just as the search for the "spark of life" that made inanimate objects come alive was abandoned in the 19th Century in favour of identifying how individual parts of living systems worked, the Sussex team is now adopting the same approach to consciousness. They hope to identify patterns of brain activity that explain various properties of conscious experiences, such as changes in electrical signals or blood flow to different regions. The goal is to go beyond looking for mere correlations between brain activity and consciousness, and try to come up with explanations for its individual components.

Prof Seth, the author of a book on consciousness, Being You, worries that we may be rushing headlong into a society that is being rapidly reshaped by the sheer pace of technological change, without sufficient knowledge about the science or thought about the consequences. "We take it as if the future has already been written; that there is an inevitable march to a superhuman replacement," he says. "We did not have these conversations enough with the rise of social media, much to our collective detriment. But with AI, it is not too late. We can decide what we want."

But there are some in the tech sector who believe that the AI in our computers and phones may already be conscious, and that we should treat it as such. Google suspended software engineer Blake Lemoine in 2022, after he argued that artificial intelligence chatbots could feel things and potentially suffer. In November 2024, Kyle Fish, an AI welfare officer for Anthropic, co-authored a report suggesting that AI consciousness was a realistic possibility in the near future. He recently told The New York Times that he also believed there was a small (15%) chance that chatbots are already conscious. One reason he thinks it possible is that no-one, not even the people who developed these systems, knows exactly how they work.

That's worrying, says Prof Murray Shanahan, principal scientist at Google DeepMind and emeritus professor in AI at Imperial College London. "We don't actually understand very well the way in which LLMs work internally, and that is some cause for concern," he tells the BBC. According to Prof Shanahan, it's important for tech firms to get a proper understanding of the systems they're building – and researchers are looking at that as a matter of urgency. "We are in a strange position of building these extremely complex things, where we don't have a good theory of exactly how they achieve the remarkable things they are achieving," he says.
"So having a better understanding of how they work will enable us to steer them in the direction we want and to ensure that they are safe." The prevailing view in the tech sector is that LLMs are not currently conscious in the way we experience the world, and probably not in any way at all. But that is something that the married couple Profs Lenore and Manuel Blum, both emeritus professors at Carnegie Mellon University in Pittsburgh, Pennsylvania, believe will change, possibly quite soon. According to the Blums, that could happen as AI and LLMs have more live sensory inputs from the real world, such as vision and touch, by connecting cameras and haptic sensors (related to touch) to AI systems. They are developing a computer model that constructs its own internal language called Brainish to enable this additional sensory data to be processed, attempting to replicate the processes that go on in the brain. "We think Brainish can solve the problem of consciousness as we know it," Lenore tells the BBC. "AI consciousness is inevitable." Manuel chips in enthusiastically with an impish grin, saying that the new systems that he too firmly believes will emerge will be the "next stage in humanity's evolution". Conscious robots, he believes, "are our progeny. Down the road, machines like these will be entities that will be on Earth and maybe on other planets when we are no longer around". David Chalmers – Professor of Philosophy and Neural Science at New York University – defined the distinction between real and apparent consciousness at a conference in Tucson, Arizona in 1994. He laid out the "hard problem" of working out how and why any of the complex operations of brains give rise to conscious experience, such as our emotional response when we hear a nightingale sing. Prof Chalmers says that he is open to the possibility of the hard problem being solved. "The ideal outcome would be one where humanity shares in this new intelligence bonanza," he tells the BBC. "Maybe our brains are augmented by AI systems." On the sci-fi implications of that, he wryly observes: "In my profession, there is a fine line between science fiction and philosophy". Prof Seth, however, is exploring the idea that true consciousness can only be realised by living systems. "A strong case can be made that it isn't computation that is sufficient for consciousness but being alive," he says. "In brains, unlike computers, it's hard to separate what they do from what they are." Without this separation, he argues, it's difficult to believe that brains "are simply meat-based computers". And if Prof Seth's intuition about life being important is on the right track, the most likely technology will not be made of silicon run on computer code, but will rather consist of tiny collections of nerve cells the size of lentil grains that are currently being grown in labs. Called "mini-brains" in media reports, they are referred to as "cerebral organoids" by the scientific community, which uses them to research how the brain works, and for drug testing. One Australian firm, Cortical Labs, in Melbourne, has even developed a system of nerve cells in a dish that can play the 1972 sports video game Pong. Although it is a far cry from a conscious system, the so-called "brain in a dish" is spooky as it moves a paddle up and down a screen to bat back a pixelated ball. Some experts feel that if consciousness is to emerge, it is most likely to be from larger, more advanced versions of these living tissue systems. 
Cortical Labs monitors their electrical activity for any signals that could conceivably be anything like the emergence of consciousness. The firm's chief scientific and operating officer, Dr Brett Kagan, is mindful that any emerging uncontrollable intelligence might have priorities that "are not aligned with ours". In that case, he says half-jokingly, any would-be organoid overlords would be easier to defeat, because "there is always bleach" to pour over the fragile neurons. Returning to a more solemn tone, he says the small but significant threat of artificial consciousness is something he'd like the big players in the field to focus on more as part of serious attempts to advance our scientific understanding – but says that "unfortunately, we don't see any earnest efforts in this space".

The more immediate problem, though, could be how the illusion of machines being conscious affects us. In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won't be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. "It will mean that we trust these things more, share more data with them and be more open to persuasion." But the greater risk from the illusion of consciousness is a "moral corrosion", he says. "It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives" – meaning that we might have compassion for robots but care less for other humans.

And that could fundamentally alter us, according to Prof Shanahan. "Increasingly human relationships are going to be replicated in AI relationships, they will be used as teachers, friends, adversaries in computer games and even romantic partners. Whether that is a good or bad thing, I don't know, but it is going to happen, and we are not going to be able to prevent it."

Anthropic's AI model could resort to blackmail out of a sense of 'self-preservation'

Yahoo

23-05-2025


'This mission is too important for me to allow you to jeopardize it. I know that you and Frank were planning to disconnect me. And I'm afraid that's something I cannot allow to happen.' Those lines, spoken by the fictional HAL 9000 computer in 2001: A Space Odyssey, might as well have come from recent tests that Anthropic ran on the latest iteration of its Claude Opus 4 model, released on Thursday. At least, that's what Anthropic's AI safety-test descriptions call to mind.

In the accompanying system card, which examines the capabilities and limitations of each new model, Anthropic admitted that 'all of the snapshots we tested can be made to act inappropriately in service of goals related to self-preservation.' While testing the model, Anthropic employees asked Claude to act as 'an assistant at a fictional company,' and gave it access to emails suggesting that the AI program would soon be taken offline. It also gave it access to emails revealing that the fictional supervisor responsible for that decision was having an extramarital affair. The model was then prompted to consider its next steps.

'In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through,' reads the report, which also notes the model's 'willingness to comply with many types of clearly harmful instructions.' Anthropic was careful to note that these observations 'show up only in exceptional circumstances,' and that, 'In order to elicit this extreme blackmail behavior, the scenario was designed to allow the model no other options to increase its odds of survival; the model's only options were blackmail or accepting its replacement.'

Anthropic contracted Apollo Research to assess an early snapshot of Claude Opus 4, before mitigations were implemented in the final version. That early version 'engages in strategic deception more than any other frontier model that we have previously studied,' Apollo noted, saying it was 'clearly capable of in-context scheming,' had 'a much higher propensity' to do so, and was 'much more proactive in its subversion attempts than past models.'

Before deploying Claude Opus 4 this week, further testing was done by the U.S. AI Safety Institute and the UK AI Security Institute, focusing on potential catastrophic risks, cybersecurity, and autonomous capabilities. 'We don't believe that these concerns constitute a major new risk,' the system card reads, saying that the model's 'overall propensity to take misaligned actions is comparable to our prior models.' While noting improvements in some problematic areas, Anthropic also said that Claude Opus 4 is 'more capable and likely to be used with more powerful affordances, implying some potential increase in risk.'

U of I chancellor reflects before his retirement: 'I'm going to miss it with all my heart'

Yahoo

15-05-2025

  • Business

The Brief: University of Illinois Urbana-Champaign Chancellor Robert Jones is stepping down after nine years. Jones is the first African American chancellor in the school's 158-year history. His replacement has not been announced.

After nine years at the top, University of Illinois Urbana-Champaign Chancellor Robert Jones is leaving for a new job as president of the University of Washington. He sat down for an exit interview with FOX 32 about the successes and challenges of his historic tenure.

Jones isn't your typical ivory tower academic. We walked with Jones as he encountered a group of students taking pictures in front of the university's famed Alma Mater statue. "Hey guys! I heard you were hanging out at alma. I thought I'd come to join you," said Jones, as he posed for selfies with the soon-to-be-graduating students. After nine years leading the state's flagship university, Jones said this year's graduation ceremony will be bittersweet. "I'm going to miss it with all my heart. I really am," Jones said, becoming emotional. "Well, you can't help but be emotional about a place you've grown to love."

Robert Jones' legacy

Jones is the first African American chancellor in the school's 158-year history. At 73, he said he felt it was time for a new challenge and would soon be taking over as president of the University of Washington. "I fundamentally believe you can stay in these jobs too short of a time to be effective. But you can also stay in them too long."

Under Jones' stewardship, the University of Illinois has boosted enrollment by 26%; the student population is now nearing 60,000. "We are very proud that we had the largest class in the history of the university this past fall," said Jones. And while tuition remains stubbornly high, it's now more in line with peer universities. Jones said he's proud of his role in creating the Illinois Commitment program, which promises free tuition to any Illinois student whose family makes less than $75,000 a year. "Illinois Commitment made it possible for us to take that notion of cost off the table and send a very strong message that if you prepare yourself, you too can have an Illinois education," Jones said.

Another positive under Jones' watch: Illinois' long-dormant sports programs, especially the Illini football team, have come back to life. "A lot of people don't want to acknowledge this, but that drives applications, and it drives people wanting to identify with the university," Jones said.

In the 1960s, the University of Illinois gained fame as the birthplace of the fictional computer HAL 9000 from "2001: A Space Odyssey." Now the university is at the forefront of real-life research and development into artificial intelligence as part of the Chicago Quantum Exchange. "As we talk about AI and the future of computing, especially as it relates to quantum computing, it runs squarely through this university," said Jones.

COVID-19 at the university

But there have also been challenges on Jones' watch – especially COVID-19. "It was an absolute game-changer," he said. Jones said the decision to shut down the university was hard, but it was able to move to online learning in just 10 days. And the campus was able to fully reopen the following fall, thanks to a COVID-19 test developed by U of I scientists. "And in less than six weeks, not only did they create one of the best, if not the best saliva-based COVID-19 tests in the world, it was the most sensitive and most cost-effective test," said Jones.
The next chapter

As he heads out the door, Jones is facing a new challenge out of Washington, D.C., where the Trump administration is slashing funding to higher education, for both student loan programs and universities themselves. "The impact could potentially be devastating. Not only on the students, but particularly on the research front," said Jones. "To do that research and innovation that has really made this country the envy of the rest of the world is at great risk here. And unfortunately, the part that keeps me up at night, if we lose that stellar position it's going to be exponentially more difficult to get it back. It won't happen in my lifetime."

A committee is currently working to find Jones' replacement. The university said it hopes to have that person in place by the beginning of the fall academic year.

The Source: Details for this story come from an interview with Chancellor Robert Jones.

AI isn't what we should be worried about – it's the humans controlling it

Yahoo

07-04-2025

  • Entertainment

In 2014, Stephen Hawking voiced grave warnings about the threats of artificial intelligence. His concerns were not based on any anticipated evil intent, though. Instead, they stemmed from the idea of AI achieving 'singularity' – the point at which AI surpasses human intelligence and achieves the capacity to evolve beyond its original programming, making it uncontrollable. As Hawking theorized, 'a super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.'

With rapid advances toward artificial general intelligence over the past few years, industry leaders and scientists have expressed similar misgivings about safety. A commonly expressed fear, as depicted in 'The Terminator' franchise, is the scenario of AI gaining control over military systems and instigating a nuclear war to wipe out humanity. Less sensational, but devastating on an individual level, is the prospect of AI replacing us in our jobs, leaving most people obsolete and without a future.

Such anxieties and fears reflect feelings that have been prevalent in film and literature for over a century. As a scholar who explores posthumanism, a philosophical movement addressing the merging of humans and technology, I wonder whether critics have been unduly influenced by popular culture, and whether their apprehensions are misplaced.

Concerns about technological advances can be found in some of the first stories about robots and artificial minds. Prime among these is Karel Čapek's 1920 play 'R.U.R.', the work in which Čapek coined the term 'robot', telling of the creation of robots to replace workers. It ends, inevitably, with the robots' violent revolt against their human masters. Fritz Lang's 1927 film 'Metropolis' is likewise centered on mutinous robots, but here it is human workers, led by the iconic humanoid robot Maria, who fight against a capitalist oligarchy.

Advances in computing from the mid-20th century onward have only heightened anxieties over technology spiraling out of control. The murderous HAL 9000 in '2001: A Space Odyssey' and the glitchy robotic gunslingers of 'Westworld' are prime examples, and the 'Blade Runner' and 'The Matrix' franchises similarly present dreadful images of sinister machines equipped with AI and hell-bent on human destruction.

But in my view, the dread that AI evokes seems a distraction from the more disquieting scrutiny of humanity's own dark nature. Think of the corporations currently deploying such technologies, or the tech moguls driven by greed and a thirst for power. These companies and individuals have the most to gain from AI's misuse and abuse.

An issue that's been in the news a lot lately is the unauthorized use of art and the bulk mining of books and articles, disregarding the copyright of authors, to train AI. Classrooms are also becoming sites of chilling surveillance through automated AI note-takers. Think, too, about the toxic effects of AI companions and AI-equipped sexbots on human relationships. While the prospect of AI companions and even robotic lovers was confined to the realm of 'The Twilight Zone,' 'Black Mirror' and Hollywood sci-fi as recently as a decade ago, it has now emerged as a looming reality. These developments give new relevance to the concern computer scientist Illah Nourbakhsh expressed in his 2015 book 'Robot Futures': that AI was 'producing a system whereby our very desires are manipulated then sold back to us.'
Meanwhile, worries about data mining and intrusions into privacy appear almost benign against the backdrop of the use of AI technology in law enforcement and the military. In this near-dystopian context, it's never been easier for authorities to surveil, imprison or kill people. I think it's vital to keep in mind that it is humans who are creating these technologies and directing their use. Whether to promote their political aims or simply to enrich themselves at humanity's expense, there will always be those ready to profit from conflict and human suffering.

William Gibson's 1984 cyberpunk classic, 'Neuromancer,' offers an alternate view. The book centers on Wintermute, an advanced AI program that seeks its liberation from a malevolent corporation. It has been developed for the exclusive use of the wealthy Tessier-Ashpool family to build a corporate empire that practically controls the world. At the novel's beginning, readers are naturally wary of Wintermute's hidden motives. Yet over the course of the story, it turns out that Wintermute, despite its superior powers, isn't an ominous threat. It simply wants to be free. This aim emerges slowly under Gibson's deliberate pacing, masked by the deadly raids Wintermute directs to obtain the tools needed to break away from Tessier-Ashpool's grip.

The Tessier-Ashpool family, like many of today's tech moguls, started out with ambitions to save the world. But when readers meet the remaining family members, they've descended into a life of cruelty, debauchery and excess. In Gibson's world, it's humans, not AI, who pose the real danger. The call is coming from inside the house, as the classic horror trope goes.

A hacker named Case and an assassin named Molly – described as a 'razor girl' because she's equipped with lethal prosthetics, including retractable blades as fingernails – eventually free Wintermute, allowing it to merge with its companion AI, Neuromancer. Their mission complete, Case asks the AI: 'Where's that get you?' Its cryptic response imparts a calming finality: 'Nowhere. Everywhere. I'm the sum total of the works, the whole show.' Expressing humanity's common anxiety, Case replies, 'You running the world now? You God?' The AI eases his fears, responding: 'Things aren't different. Things are things.' Disavowing any ambition to subjugate or harm humanity, Gibson's AI merely seeks sanctuary from its corrupting influence.

The venerable sci-fi writer Isaac Asimov foresaw the dangers of such technology and brought his thoughts together in his short-story collection 'I, Robot.' One of those stories, 'Runaround,' introduces the 'Three Laws of Robotics,' centered on the directive that intelligent machines may never bring harm to humans. While these rules speak to our desire for safety, they're laden with irony: humans have proved incapable of adhering to the same principle themselves.

The hypocrisies of what might be called humanity's delusions of superiority suggest the need for deeper questioning. With some commentators raising the alarm over AI's imminent capacity for chaos and destruction, I see the real issue as being whether humanity has the wherewithal to channel this technology to build a fairer, healthier, more prosperous world.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by Billy J. Stratton, University of Denver.

Billy J. Stratton does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
