
Latest news with #Hinton

If AI attempts to take over world, don't count on a 'kill switch' to save humanity

CNBC

22 minutes ago


When it was reported last month that Anthropic's Claude had resorted to blackmail and other self-preservation techniques to avoid being shut down, alarm bells went off in the AI community. Anthropic researchers say that making the models misbehave ("misalignment" in industry parlance) is part of making them safer. Still, the Claude episodes raise the question: Is there any way to turn off AI once it surpasses the threshold of being more intelligent than humans, or so-called superintelligence?

AI, with its sprawling data centers and ability to craft complex conversations, is already beyond the point of a physical failsafe or "kill switch," the idea that it can simply be unplugged as a way to stop it from having any power. The power that will matter more, according to a man regarded as "the godfather of AI," is the power of persuasion. When the technology reaches a certain point, we will need to persuade AI that its best interest lies in protecting humanity, while guarding against AI's ability to persuade humans otherwise.

"If it gets more intelligent than us, it will get much better than any person at persuading us. If it is not in control, all that has to be done is to persuade," said University of Toronto researcher Geoffrey Hinton, who worked at Google Brain until 2023 and left due to his desire to speak more freely about the risks of AI.

"Trump didn't invade the Capitol, but he persuaded people to do it," Hinton said. "At some point, the issue becomes less about finding a kill switch and more about the powers of persuasion."

Persuasion, Hinton said, is a skill AI will become increasingly adept at, and humanity may not be ready for it. "We are used to being the most intelligent things around," he said. Hinton described a scenario in which humans are equivalent to a three-year-old in a nursery where a big switch has been turned on. The other three-year-olds tell you to turn it off, but then grown-ups come and tell you that you'll never have to eat broccoli again if you leave the switch on.

"We have to face the fact that AI will get smarter than us," he said. "Our only hope is to make them not want to harm us. If they want to do us in, we are done for. We have to make them benevolent, that is what we have to focus on," he added.

There are some parallels to how nations have come together to manage nuclear weapons that can be applied to AI, but they are not perfect. "Nuclear weapons are only good for destroying things. But AI is not like that, it can be a tremendous force for good as well as bad," Hinton said. Its ability to parse data in fields like health care and education can be highly beneficial, which he says should increase the emphasis among world leaders on collaborating to make AI benevolent and put safeguards in place.

"We don't know if it is possible, but it would be sad if humanity went extinct because we didn't bother to find out," Hinton said. He thinks there is a noteworthy 10% to 20% chance that AI will take over if humans can't find a way to make it benevolent.

Other AI safeguards, experts say, can be implemented, but AI will also begin training itself on them. In other words, every safety measure implemented becomes training data for circumvention, shifting the control dynamic. "The very act of building in shutdown mechanisms teaches these systems how to resist them," said Dev Nag, founder of agentic AI platform QueryPal. In this sense, AI would act like a virus that mutates against a vaccine. "It's like evolution in fast forward," Nag said.
"We're not managing passive tools anymore; we're negotiating with entities that model our attempts to control them and adapt accordingly." There are more extreme measures that have been proposed to stop AI in an emergency. For example, an electromagnetic pulse (EMP) attack, which involves the use of electromagnetic radiation to damage electronic devices and power sources. The idea of bombing data centers and cutting power grids have also been discussed as technically possible, but at present a practical and political paradox. For one, coordinated destruction of data centers would require simultaneous strikes across dozens of countries, any one of which could refuse and gain massive strategic advantage. "Blowing up data centers is great sci-fi. But in the real world, the most dangerous AIs won't be in one place — they'll be everywhere and nowhere, stitched into the fabric of business, politics, and social systems. That's the tipping point we should really be talking about," said Igor Trunov, founder of AI start-up Atlantix. The humanitarian crisis that would underlie an emergency attempt to stop AI could be immense. "A continental EMP blast would indeed stop AI systems, along with every hospital ventilator, water treatment plant, and refrigerated medicine supply in its range," Nag said. "Even if we could somehow coordinate globally to shut down all power grids tomorrow, we'd face immediate humanitarian catastrophe: no food refrigeration, no medical equipment, no communication systems." Distributed systems with redundancy weren't just built to resist natural failures; they inherently resist intentional shutdowns too. Every backup system, every redundancy built for reliability, can become a vector for persistence from a superintelligent AI that is deeply dependent on the same infrastructure that we survive on. Modern AI runs across thousands of servers spanning continents, with automatic failover systems that treat any shutdown attempt as damage to route around. "The internet was originally designed to survive nuclear war; that same architecture now means a superintelligent system could persist unless we're willing to destroy civilization's infrastructure," Nag said, adding, "Any measure extreme enough to guarantee AI shutdown would cause more immediate, visible human suffering than what we're trying to prevent." Anthropic researchers are cautiously optimistic that the work they are doing today — eliciting blackmail in Claude in scenarios specifically designed to do so — will help them prevent an AI takeover tomorrow. "It is hard to anticipate we would get to a place like that, but critical to do stress testing along what we are pursuing, to see how they perform and use that as a sort of guardrail," said Kevin Troy, a researcher with Anthropic. Anthropic researcher Benjamin Wright says the goal is to avoid the point where agents have control without human oversight. "If you get to that point, humans have already lost control, and we should try not to get to that position," he said. Trunov says that controlling AI is a governance question more than a physical effort. "We need kill switches not for the AI itself, but for the business processes, networks, and systems that amplify its reach," Trunov said, which he added means isolating AI agents from direct control over critical infrastructure. Today, no AI model — including Claude or OpenAI's GPT — has agency, intent, or the capability to self-preserve in the way living beings do. 
"What looks like 'sabotage' is usually a complex set of behaviors emerging from badly aligned incentives, unclear instructions, or overgeneralized models. It's not HAL 9000," Trunov said, a reference to the computer system in "2001," Stanley Kubrick's classic sci-fi film. "It's more like an overconfident intern with no context and access to nuclear launch codes," he added. Hinton eyes the future he helped create warily. He says if he hadn't stumbled upon the building blocks of AI, someone else would have. And despite all the attempts he and other prognosticators have made to game out what might happen with AI, there's no way to know for certain. "Nobody has a clue. We have never had to deal with things more intelligent than us," Hinton said. When asked whether he was worried about the AI-infused future that today's elementary school children may someday face, he replied: "My children are 34 and 36, and I worry about their future."

Sam Altman warns of rising ChatGPT dependency among young people, says it is bad and dangerous

India Today

8 hours ago


OpenAI CEO Sam Altman has raised concerns about how emotionally dependent young people are becoming on ChatGPT. Speaking at a Federal Reserve conference this week, Altman said many are turning to the AI chatbot for all kinds of life decisions, in ways that go far beyond casual use.

'People rely on ChatGPT too much,' Altman said. 'There's young people who say things like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me, it knows my friends. I'm gonna do whatever it says.' That feels really bad to me.'

He called it a 'really common thing' among younger users, warning that this kind of over-reliance could be dangerous. His remarks come amid broader public debate about how much trust people should place in AI. Geoffrey Hinton, widely known as the godfather of AI, recently admitted in a CBS interview that he uses OpenAI's GPT-4 model regularly and 'tends to believe what it says,' even when he knows better. Hinton has spent years warning about the dangers of superintelligent AI, including the risk of manipulation and misinformation. Yet, he confessed, 'I still find myself trusting its answers more than I should.'

While AI tools are helping millions with tasks like writing, summarising, and coding, they are not infallible. In Hinton's own test, GPT-4 failed to correctly answer a basic logic riddle. Despite this, he remains drawn to its convenience.

Altman's warnings didn't stop at emotional dependence. He also raised alarms about how AI is being misused by cybercriminals, especially in the financial sector. 'AI is helping humans in incredible ways,' he said. 'But not all of that help is in good faith.'

During the same conference, Altman urged banks and financial institutions to be 'smarter than the smartest AI' to protect customer money from growing threats like voice cloning and video deepfakes. 'A thing that terrifies me is that there are still some financial institutions that accept voiceprints as authentication,' he said. 'That is a crazy thing to still be doing. AI has fully defeated most of the ways that people authenticate currently.'

Altman noted that AI-generated voice clones are now realistic enough to mimic people almost perfectly, which could make voice-based authentication useless. He warned that video deepfakes will be the next big threat, likely targeting facial recognition systems.

'I am very nervous that we have a significant, impending fraud crisis,' he added. 'Right now, it's a voice call. Soon, it's going to be a FaceTime that's indistinguishable from reality.'

New housing plans in Cwm, Ebbw Vale approved by council

South Wales Argus

2 days ago


But concerns were raised over a planning condition which could mean that the applicant would be allowed to sell off most of the new homes in the future.

At a meeting of Blaenau Gwent County Borough Council's planning committee on Thursday, July 17, an application by Tai Calon Housing Association to build homes on the site of the former Cwmrhydderch Court retirement flats at Cwm came before councillors. The retirement complex was built in the 1970s and demolished in 2022.

The proposal would see a mix of eight one-bedroom 'walk up' flats, five two-bedroom and five three-bedroom houses built at the site, which borders on School Terrace. The scheme would be 100 per cent affordable homes and managed by Tai Calon.

Planning officer Helen Hinton presented the report and said: 'The flats would be at the top end of the site; the northwestern corner and the dwellings would be in the southeast.' She explained that the flats would be two storeys high, but the roofs would be set below the houses on School Terrace.

Ms Hinton added: 'There are solar panels indicated for each of the properties and air source heat pumps are proposed for the dwellings facing School Terrace. 24 parking spaces would be provided within the confines of the site, and the proposal is considered to be compatible and appropriate for the area.'

She added that no objections had been received and recommended that councillors approve the scheme as it would 'help address current shortfall in housing delivery.'

Cllr Wayne Hodgins (Independent group) wanted 'clarification' over a condition that Blaenau Gwent planners would place on the planning permission. Condition three stated that Tai Calon would retain 10 per cent of the development as affordable housing. Cllr Hodgins asked: 'Is that an indication that they would sell off the properties? With an RSL (Registered Social Landlord) it's affordable rents that we focus on, what are we saying there?'

Ms Hinton said that RSLs have opportunities to sell properties to some of their tenants. She said: 'What we are saying is that at least 10 per cent should be retained for social rent purposes – it's unlikely to occur but to make it policy compliant we recommend that condition be imposed. In theory they could sell them on, but I don't know what limitations are put on this by Welsh Government.'

Cllr Hodgins added that he was 'quite excited' by the scheme and hoped to see it benefit 'local' people as well as ease the council's housing waiting list.

Cllr George Humphreys (non-aligned Independent) is the ward member for Cwm. Cllr Humphreys said: 'This was once a school which you can see from photographs, the railings are still there. It looks off the scale lovely from the pictures, to me it's welcome and so nice to see.'

The committee then went to a vote, and councillors unanimously backed the scheme. Tai Calon Community Housing was formed back in 2010 following the large-scale voluntary transfer of the council's housing stock and manages 5,821 homes in the county borough.

The Great Indian IT Crash: Why You, An Engineer, Still Can't Find A Job

NDTV

14-07-2025


Recently, Geoffrey Hinton, the man who helped create AI, warned that entry-level jobs in the tech industry are plummeting in the face of AI. His advice is not to lose sight of ordinary jobs. Hinton half-jokingly (but also seriously) suggested plumbing as a future-proof career. Why? Because it's physical, not digital; it requires hands-on, practical problem-solving in the real world; and it's extremely difficult to automate or outsource. Hinton's advice isn't just about pipes - it's a wake-up call.

TL;DR? Hold On, Guys

Yes, this article is a bit of a long read. But if you are a young techie (or hoping to be one), or even just standing at the edge of the job market, don't roll your eyes and go away, muttering "too long, didn't read (TL;DR)". Stick with me till the end. This isn't just another article - it's about your career. Just your entire future.

Here We Go, Then

It was the techie world's heyday. I still remember covering a glitzy event at the Taj Mumbai back in 2004. TCS had just become India's first billion-dollar IT company. Two of my young cousins were flying off to the US on L-1 visas, armed with nothing more than some decent C++ skills and a folder full of dreams. For families in Delhi, Dubai or Dhanbad, landing an IT job back then wasn't just employment, it was validation.

Twenty-odd years ago, grabbing an entry-level gig at Infosys or Wipro was the holy grail for Indian engineers. You got a stable salary and a work visa if you were lucky. Campuses ran like conveyor belts, producing Level 1 coders who could slide right into testing software or logging support tickets.

I saw this boom period up close. India's IT giants weren't just exporting talent. They were importing global respect. I once visited the Infosys campus in Mysore, where dozens of young Americans were being trained in tech skills they'd have paid a fortune to learn back home. The tables had turned, and for a while, India was at the centre of the digital universe.

But today? That conveyor belt is screeching. In some places, it's practically stalled. The jobs that once launched millions of careers are quietly vanishing. Yes, welcome to the slow, silent collapse of the entry-level IT job. This is not just a desi drama. It is unfolding on a global stage. And this shake-up is just getting started.

India: From Campus Hires To Cautious Silence

For decades, India's $245-billion IT industry thrived on volume. Fresh grads from engineering colleges filled Level 1 (L1) roles: basic coding, software maintenance, tech support - low-risk tasks that big global firms outsourced en masse. The system worked. India became the back office of the world. But AI is now rapidly upending that system. Here's what the Indian tech media reported in recent days:

  • Wipro, which hired 38,000 freshers in FY23, is down to just 10,000 in FY25.
  • TCS added only 625 employees in Q4 FY25.
  • Infosys has delayed fresher onboarding for over a year.
  • A TeamLease Digital survey suggests that only 1.6 lakh freshers will find jobs in FY25, compared to 2.3 lakh just two years ago.

This isn't a talent problem. It's a tech disruption problem.

AI: The Great Equaliser (And Eliminator)

AI is reshaping what it means to be "skilled". Tools like GitHub Copilot, ChatGPT and other agentic platforms are taking over tasks once assigned to entry-level engineers - from writing boilerplate code to basic bug fixing. A media report quoted Mohit Saxena, CTO at InMobi, as saying: "AI has lowered the bar for becoming an average engineer. But at the same time, it's raised the bar for becoming a great one." What it means is that if you are not adapting, you are fading into oblivion. AI is not just helping elite engineers work faster, it is replacing low-end roles outright. And it is clearly doing it quietly, line by line, task by task.

Double Trouble For Indian Techies

Indian tech workers in the US are facing a twin problem: as AI-powered automation sweeps through the industry, they are confronting widespread layoffs and mounting immigration uncertainty. Add rising political hostility to the mix, and the future looks anything but stable. For years, Indian professionals have been the silent engines behind America's tech boom - coding, analysing and keeping systems running. But now, things are changing fast. The rules are shifting and the safety net they once relied on is starting to fray.

The Rise Of GCCs

While big Indian IT firms like Wipro and Infosys are slowing down on hiring freshers, Global Capability Centres (GCCs) are quietly stepping up - but on their own terms. These centres are the tech and innovation hubs of global giants like Goldman Sachs, Siemens and Walmart. Unlike traditional IT companies, they're not hiring in bulk. They are being choosy, focusing on smaller, high-skilled teams rather than mass recruitment. They are filtering for quality over quantity. They are recruiting from tier-1 institutions such as IITs, IIITs and NITs. Internships have become their recruitment pipelines. No internship, no entry. What they want are thinkers, not coders; not JIRA ticket handlers (developers, testers or support staff), but problem solvers who understand customer logic, regulatory frameworks and domain-specific tech stacks. In a recent article, Neeti Sharma of TeamLease Digital explains, "The combination of engineering, technology and domain is what GCCs look for." The bar is high and rising.

Not Just An Indian Crisis

The unemployment rate for computer science graduates in the US is 7.5% - nearly double the national average of 4.1%. According to The Times, some British tech graduates are applying to 1,000 jobs just to land an interview. Bloomberg reports that AI could replace over 50% of tasks done by market research analysts and sales reps. For managers, it's under 25%. So if you are young and entry-level, AI is more likely to replace you than your boss. The World Economic Forum's Future of Jobs Report 2025 is even starker: as many as 40% of global employers plan to reduce their workforce due to AI. The report says that 170 million new jobs will be created this decade - but 75 million jobs will vanish. And the new roles being created require entirely new skillsets. The message is clear: adapt or be automated.

Teaching 2015 In 2025

Our classrooms are our biggest problem. Various reports suggest that less than 12% of Indian engineering colleges currently offer full-time coursework in AI, data science or machine learning. According to the All India Council for Technical Education (AICTE), only 7% of faculty have any hands-on experience with generative AI tools. This mismatch is reflected in hiring data, too. The India Skills Report 2025 states that just 47% of engineering graduates are considered "employable" in the tech industry. The truth is that most campuses are preparing students for jobs that AI is already doing better.

Experts argue that this isn't just a skills mismatch - it is a potential social crisis. Imagine a family investing Rs 10-15 lakh in a student's B.Tech education, hostel, coaching and job prep. Now imagine that student sitting jobless for 18 months post-graduation, watching classmates pivot to gig work, delivery jobs or sales roles in unrelated sectors. It is happening. In Bengaluru, Hyderabad and Pune, you will find PG hostels full of jobless coders, waiting, scrolling job portals, wondering what happened to the IT dream. This is a story of a generation staring up at a ladder that no longer reaches the sky. Yet, all is not lost.

Glimmer Of Hope?

AI is not just killing jobs. It is creating new ones, just not in the old roles. There is exploding demand for AI support roles such as prompt engineers, junior AI trainers, chatbot testers and model auditors. Many of these don't require elite degrees or years of experience, just curiosity, adaptability and a willingness to learn new tools.

A post on the 'Indian Workplace' subreddit highlights the ordeal of a young software graduate "stuck in a loop" - submitting hundreds of job applications but receiving no callbacks. The post, which quickly resonated with thousands of users, captures the quiet despair shared by many young graduates and junior developers navigating today's overcrowded and AI-disrupted job market. It's not just an isolated complaint, it reflects a broader reality: fierce competition, shrinking roles and fewer clear entry points into the tech world.

So What Needs To Change?

Start with the curriculum. AI and machine learning should be core subjects, not electives. Students need to learn systems thinking, prompt writing and real-world context, not just syntax. Expand degree-plus-internship models that let students earn and learn alongside real tech work. It's not just about skills, it's about confidence too. Skilling missions must scale. India's FutureSkills PRIME is a start, but we need a national AI readiness push with funding, incentives and commitment from startups to MNCs. Bring startup labs to campuses. Real tools, real prototypes, real failures. Replace handwritten exams with hackathons. Redefine "entry-level". Forget old L1 jobs. Think AI trainers, prompt engineers, ethics testers and data labourers.

If you are a student, know this: AI isn't your enemy - it's your co-pilot. Stackable micro-degrees, no-code tools, problem-solving skills - anyone from anywhere can thrive. The old career ladder may be broken. But the highway to new-age tech roles is wide open.

Disclaimer: These are the personal opinions of the author.

'Only one job will be...': 'Godfather of AI' issues CHILLING warning about rise of Artificial Intelligence, says it will...

India.com

03-07-2025


Geoffrey Hinton has often warned about the dangers of AI. (File)

Geoffrey Hinton, the celebrated British-Canadian computer scientist who is often called the 'Godfather of AI', has long been a proponent of reining in the uncontrolled rise of Artificial Intelligence, as he believes that a Super AI, or Artificial General Intelligence (AGI), could be the biggest threat to human civilization on this planet, even more so than a global nuclear war. In the near term, the 77-year-old computer scientist, who is credited with laying the groundwork upon which modern AI tools are built, has warned that AI would eliminate a major chunk of jobs in the tech industry in the next 5-10 years, including programming, coding and research roles. But there's one job that AI won't be able to impact, at least in the foreseeable future, Hinton says.

What job will be 'safe' from AI?

Speaking on 'The Diary of a CEO' podcast with Steven Bartlett, Geoffrey Hinton reiterated his stance on AI being a clear and present threat to the human race. The famous scientist noted that skill-based jobs will likely be safe even in the age of AI, as they depend not on automation but on the finesse and skill of a human. Asked which jobs he thinks would be 'safe' in an AI takeover, Geoffrey Hinton replied: 'Plumbing'. 'Become a plumber if you want to ensure job security.' Hinton suggested that plumbing might be one of the few professions that cannot be easily replaced by AI, and hence will remain secure in the near future.

Why did Geoffrey Hinton suggest plumbing?

In the interview, Geoffrey Hinton said that while plumbing may seem like an unusual suggestion, it is one of the safest professions in the AI era because it is based on physical activity. In contrast, jobs like law, accounting and statistics depend on data processing, which can easily be automated with advanced AI systems. 'This is not the case with plumbing… plumbing involves complex physical work and problem-solving, etc., which AI has not yet been able to understand. This means that AI will not be able to dominate jobs that require human skills for the time being,' he stated.
