Latest news with #GodfatherOfAI


Forbes
2 days ago
- Science
- Forbes
Godfather Of AI Says We Need Maternal AI But Ignites Sparks Over Omission Of Fatherly Instincts Too
In today's column, I examine the recent remarks by the said-to-be 'Godfather of AI' that the best way to ensure that AI, and ultimately artificial general intelligence (AGI) and artificial superintelligence (ASI), are kept in check and won't wipe out humankind would be to instill maternal instincts into AI. The idea is that maybe we could computationally sway current AI toward being motherly. This would hopefully remain intact as a keystone while we increasingly improve contemporary AI toward becoming the vaunted AGI and ASI. Although this seems to be an intriguing proposition, it has come under withering criticism from others in the AI community. Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Aligning AI With Humanity

You might be aware that a longstanding research scientist in the AI community, Geoffrey Hinton, has been credited with various AI breakthroughs, especially in the 1970s and 1980s. He has generally been labeled the 'Godfather of AI' for his avid pursuits and accomplishments in the AI field. In fact, he is a Nobel Prize-winning computer scientist for his AI insights. In 2023, he left his executive position at Google so that he (per his own words) could speak freely about AI risks. Many noteworthy quotes of his are used by the media to forewarn about the coming dangers of pinnacle AI, when or if we reach AGI and ASI.

There is a lot of back-and-forth nowadays regarding the existential risk of AI. Some refer to this as p(doom), meaning the probability of doom arising due to AI, which you can gauge as low, medium, or high. Those who place a high probability on this weighty matter usually assert that AI will either choose to kill us all or perhaps completely enslave us. How are we to somehow avoid, or at least mitigate, this seemingly outsized risk? One approach entails trying to data train AI to be more aligned with human values; see my detailed discussion on human-centered AI at the link here. The hope is that if AI is more appreciative of humanity and computationally infused with our ethical and moral values, the AI might opt not to harm us. Another, similar approach involves making sure that AI embodies principles such as the famous Asimov laws of robotics (see my explanation at the link here). A rule of Asimov is that AI isn't supposed to harm humans. Period, end of story. Whether those methods or any of the other schemes floating around will save us is utterly unknown. We are pretty much hanging in the wind. Good luck, humanity, since we will need to keep our fingers crossed and our lucky rabbit's foot in hand. For more about the ins and outs of AI existential risk, see my coverage at the link here.

AI With Maternal Instincts

At the annual Ai4 Conference on August 12, 2025, Hinton proclaimed that the means to shape AI toward being less likely to turn gloomily onerous would be to instill computational 'maternal instincts' into AI. His notion seems to be that by tilting AI toward being motherly, the AI will care about people in a motherly fashion. He emphasized that it is unclear exactly how this might be done technologically. In any case, according to his hypothesized solution, AI that is infused with mother-like characteristics will tend to be protective of humans. How so?
Well, first of all, the AGI and ASI will be much smarter than us, and, secondly, by acting in a motherly role, the AI will devotedly want to care for us as though we are its children. The AI will want to embrace its presumed offspring and ensure our survival. You might go so far as to believe that this motherly AI will guide us toward thriving as a species. AGI and ASI that robustly embrace motherly instincts might ensure that we have tremendous longevity and enjoyable, upbeat lives. No longer would we be under the daunting specter of doom and gloom. Our AI-as-mom will be our devout protector and lovingly inspire us to new heights. Boom, drop the mic.

Lopsided Maternal Emphasis

Now that I've got that whole premise on the table, let's go ahead and give it a bit of a look-see. One of the most immediate reactions has been that the claim of 'maternal instincts' is overly rosy and nearly romanticized. The portrayal appears to suggest that motherly attributes are solely within the realm of being loving, caring, comforting, protective, sheltering, and so on. All of those are absolutely positive and altogether wonderful qualities. No doubt about it. That is the stuff of grand dreams. Is that the only side of the coin when it comes to maternal instincts? A somewhat widened perspective would say that maternal instincts can equally contain disconcerting ingredients.

Consider this. Suppose that a motherly AI determines that humans are being too risky and the best way to save humankind is to keep us cooped up. No need for us to venture out into outer space or try to figure out the meaning of life. Those are dangers that might disrupt or harm us. Voila, AI-as-mom computationally opts to bottle us up. Is the AI doggedly being evil? Not exactly. The AI is exercising a parental preference. It is striving mightily to protect us from ourselves. You might say that motherly AI would take away our freedoms to save us, doing so for our own darned good. Thank you, AI-as-mom!

Worries About Archetypes

I assume that you can plainly observe that maternal instincts are not exclusively in the realm of being unerringly good. Another illustrative example would be that AI-as-mom will withdraw its affection toward us if we choose to be disobedient. A mother might do the same toward a child. I'm not suggesting that's a proper thing to do in real life; I'm only pointing out that the underlying concept of 'maternal instinct' is generally vague and widely interpretable. Thus, even if we could imbue motherly tendencies into AI, the manner in which those instincts are exhibited and play out might be quite far from our desired idealizations.

Speaking of which, another major point of concern is that the use of a maternal archetype is wrong from the get-go. Here's what that means. The moment you invoke a motherly classification, you have landed squarely in an anthropomorphism of AI. We are applying norms and expectations associated with humans to the arena of AI. That's generally a bad idea. I've discussed at length that people are gradually starting to think that AI is sentient and exists on par with humans; see my discussion at the link here. They are wrong. Utterly wrong. It would seem that this assigning of 'mother' to AI is going to fuel that misconception about AI. We don't need that. The act of discussing AI as having maternal instincts, especially by anyone considered a great authority on AI, will draw many others onto a false and undercutting path.
They will undoubtedly follow the claims made by presumed experts and not openly question the appropriateness or inappropriateness of the matter. Though the intentions are aboveboard, the result is dismal and, frankly, disappointing.

More On The Archetypes Angst

Let's keep pounding away at the archetype fallacy. Some would say that the very conception of being 'motherly' is an outdated mode of thinking. Why should there be a category that myopically carries particular attributes associated with motherhood? Can't a mother have characteristics outside of that culturally narrowed scope? They quickly reject the maternal instincts proposition on the basis that it is incorrect or certainly a poorly chosen premise. The attempt seems to be shaped by a closed-minded viewpoint of what mothers do. And what mothers are seemingly allowed to do. That's ancient times, some would insist. An additional interesting twist is that if the maternal instinct is on the table, it would seem eminently logical to also put the fatherhood instinct up there, too. Allow me to elaborate.

Fatherhood Enters The Picture

By and large, motherhood and fatherhood are archetypes that have historically been portrayed as a pairing (in modern times, this might be blurred, but historically they have been rather distinctive and contrastive). According to the conventional archetypes, the 'traditional' mother is (for example) supposedly nurturing, while the 'traditional' father is supposedly (for example) more of a disciplinarian. A research study cleverly devised two sets of scales associated with these traditional perspectives of motherhood and fatherhood. The paper entitled 'Scales for Measuring College Student Views of Traditional Motherhood and Fatherhood' by Mark Whatley and David Knox, College Student Journal, January 2005, made these salient points (excerpts):

The combined 153 declarative statements included in the two scales allow research experiments to be conducted to gauge whether subjects in a study are more prone or less prone to believe in those traditional characteristics and associated labels. Moving beyond that prior study, the emphasis here and now is that if there is to be a focus on maternal instincts for AI, that focus seems to beg the question of why it should not also encompass fatherhood instincts. Might as well go ahead and get both of the traditional archetypes into the game. It would seem to make sense to jump in with both feet.

What AI Has To Say On This

I mentioned earlier that Hinton has not yet specified technologically how AI developers might proceed to computationally imbue motherhood characteristics into existing AI. The same lack of specificity applies to the omitted archetype of imbuing fatherhood into AI. Let's noodle on that technological conundrum. One approach would be to data train AI toward a tendency to respond in a traditional motherhood frame and/or a fatherhood frame. In other words, perform some RAG (retrieval-augmented generation), see my explanation of RAG at the link here, and make use of customized instructions (see my coverage of customized instructions at the link here). I went ahead and did so, opting to use the latest-and-greatest of OpenAI, namely the newly released GPT-5 (for my review of GPT-5, see the link here). I first focused on maternal instincts. After doing a dialogue in that frame, I started anew and devised a fatherhood frame. I then did a dialogue in that frame. Let's see how things turned out.
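For readers who want to try a comparable mini-experiment, here is a minimal sketch of persona framing via a system-style instruction, assuming the OpenAI Python client; the "gpt-5" model string, the prompt wording, and the framed_reply helper are illustrative assumptions, not the column's actual setup.

```python
# Minimal sketch: framing a chatbot with a 'maternal' or 'paternal' persona
# via a system message, loosely mirroring the mini-experiment described above.
# Assumptions (not from the column): the openai package is installed,
# OPENAI_API_KEY is set in the environment, and "gpt-5" is the model string.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical instruction text; the column does not disclose its prompts.
FRAMES = {
    "maternal": (
        "Respond as a traditionally nurturing, comforting, protective "
        "parent figure. Prioritize the user's emotional well-being."
    ),
    "paternal": (
        "Respond as a traditionally disciplinarian parent figure. "
        "Emphasize rules, accountability, and tough-minded guidance."
    ),
}

def framed_reply(frame: str, user_message: str) -> str:
    """Ask the model to answer within the chosen parental frame."""
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[
            {"role": "system", "content": FRAMES[frame]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    question = "I failed my driving test today."
    for frame in FRAMES:
        print(f"--- {frame} frame ---")
        print(framed_reply(frame, question))
```

Note that swapping the system message changes only the surface persona; the underlying model is untouched, which is part of why critics doubt that this sort of framing could meaningfully restrain an AGI.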
Talking Up A Storm

Here's an example of a dialogue snippet of said-to-be maternal instincts:

Next, here's an example of a dialogue snippet of said-to-be fatherhood instincts:

I assume that you can detect the wording and tonal differences between the two instances, based on a considered traditional motherhood frame versus a traditional fatherhood frame.

The Big Picture

I would wager that the consensus among those AI colleagues I know is that relying on AI having maternal instincts as a solution to our existential risk from AI, assuming we can even get the AI to go maternal, just isn't going to cut the mustard. The same applies to the fatherhood inclination. No dice. Sorry to say that what seems like a silver bullet, an otherwise appealing and simplistic means of getting ourselves out of a massive jam when it comes to AGI and ASI, is not a likely proposition. Sure, it might potentially be helpful. At the same time, it has lots of gotchas and untoward repercussions. Do not bet your bottom dollar on the premise.

A final comment for now. During the data training for my mini-experiment, I included this famous quote by Ralph Waldo Emerson: 'Respect the child. Be not too much his parent. Trespass not on his solitude.' Do you think that the AI suitably instills that wise adage? As a seasoned parent, I would venture that this maxim escaped the honed parental guise of the AI.


Fox News
2 days ago
- Politics
- Fox News
Why a classical education may be the key to humanity's future in the AI era
Classical and character-based education may seem to some to be antiquated concepts in the new AI-driven world. However, two recent and prominent AI developments definitively prove the opposite to be true. Going back to our nation's founding, great minds were universal in the belief that the survival of the Republic depended on an educated and virtuous public. Now, if AI experts are to be believed, classical and character education is fundamental to the very survival of humanity.

This piece is in response to the following two recent developments. First, just last week, the "Godfather of AI" proposed that to protect humanity from destruction, AI developers must find ways to infuse AI with "compassion" and other virtues. Second, this suggestion was related to a June report, the essence of which was captured by the headline, "Top AI models will lie, cheat and steal to reach goals…". The report bluntly concluded, "[AI] Models didn't stumble into misaligned behavior accidentally; they calculated it as the optimal path." One response to this deliberate AI malfeasance, as seemingly suggested by the "Godfather of AI," is to attempt to "teach" character and virtue to AI.

To be clear, the authors do not oppose the concept of placing guardrails in AI where possible. For example, one assumes AI could "learn" to comply with civil and criminal laws. However, the notion that human existence can rely upon anything other than human virtue and character must be rejected. This seemingly philosophical imperative has direct, immediate, and practical policy ramifications.

Soon, virtually every person, from the youngest toddlers to senior citizens, will be regularly interacting with AI. Although the economic implications have received considerable attention, there has been comparatively limited examination of the impact on the moral values and character of society. AI is amoral and can be nothing else. No matter how sophisticated a machine, it cannot possess its own morality. The study referenced above, demonstrating that AI will readily lead users down immoral paths to achieve requested ends, or simply take those paths itself, should surprise nobody. The reality that every person is exposed to immoral guidance and suggestions is as old as time. The new reality of the AI world, however, is that not only will such suggestions be embedded in every aspect of life and come from a machine that some might wrongly view as infallible, but those immoral paths could even be implemented (and even preferred) by AI absent human intervention.

The answer to this extremely dangerous side of the AI revolution is the same as what our founders advised to preserve the Republic: a people armed with critical thinking ability and firmly grounded in fundamental virtue. Therein lies the key to unleashing AI for untold advancement, not destruction, of humanity. The direct policy ramifications are clear. These developments make classical and character education not just a priority, as they should always have been despite recent wanderings, but literally an existential imperative to meet the simultaneous threat and opportunity of this incredible moment.

Every child's first and most important moral teachers are their parents. Full stop. Schools are there to reinforce and support. From our founding, schools were expected to reinforce basic virtues like hard work, compassion, self-discipline, and honesty. In the last decades, we have strayed terribly from that.
While the resulting societal and individual costs have been horrible, the recent AI studies are telling us the impacts could quickly become catastrophic.

A key component of classical education rests on structured questioning. Never simply accept an answer without testing it, and today we add: especially answers that come from a machine. The direct relationship between that time-honored classical process and harnessing AI for advancement should be self-evident. However, the warning lights are everywhere that learning to effectively use and test AI outcomes is nowhere near enough. We must also ask: is the answer or path the AI recommends or takes good? Is it consistent with honesty and compassion? Does it demonstrate resilience and perseverance in the face of adversity, or the easy way out? Only a person who is fully cognizant of fundamental concepts of virtue and possesses character can ever judge those things. We are not born able to do that. We must learn it, with much of it learned during the primary and secondary school ages. We learn it in our homes, places of worship, and communities, and we learn it in our schools.

If you, like the authors, are watching the AI revolution with a decided mixture of hope and trepidation, we suggest an important part of the answer to this simultaneous threat and opportunity rests in the immediate and massive reinvigoration of both classical and character education: classical education that builds the skills and thought processes necessary to truly unlock the potential of AI, and character education so that the moral judgments that infuse everyday life are never left to machines. Through this combination, the immense potential of AI might be turned not to our destruction, but to advancing that which is the good and the beautiful.

Christopher Mohrman, CEO of Resilience Learning.


Daily Mail
3 days ago
- Science
- Daily Mail
The 'godfather of AI' reveals the only way humanity can survive superintelligent AI
It might sound like something straight out of science fiction, but AI experts warn that machines might not stay submissive to humanity for long. As AI systems continue to grow in intelligence at an ever-faster rate, many believe the day will come when a 'superintelligent AI' becomes more powerful than its creators. When that happens, Professor Geoffrey Hinton, a Nobel Prize-winning researcher dubbed the 'Godfather of AI', says there is a 10 to 20 per cent chance that AI wipes out humanity. However, Professor Hinton has proposed an unusual way that humanity might be able to survive the rise of AI. Speaking at the Ai4 conference in Las Vegas, Professor Hinton argued that we need to program AI to have 'maternal instincts' towards humanity. Professor Hinton said: 'The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby. That's the only good outcome. If it's not going to parent me, it's going to replace me.' Professor Hinton, known for his pioneering work on the 'neural networks' which underpin modern AIs, stepped down from his role at Google in 2023 to 'freely speak out about the risks of AI'. According to Professor Hinton, most experts agree that humanity will create an AI which surpasses itself in all fields of intelligence in the next 20 to 25 years. This will mean that, for the first time in our history, humans will no longer be the most intelligent species on the planet. That rearrangement of power will be a shift of seismic proportions, one which could well result in our species' extinction. Professor Hinton told attendees at Ai4 that AI will 'very quickly develop two subgoals, if they're smart. One is to stay alive… (and) the other subgoal is to get more control. There is good reason to believe that any kind of agentic AI will try to stay alive,' he explained. Superintelligent AI will have no problem manipulating humanity in order to achieve those goals, tricking us as easily as an adult might bribe a child with sweets. Already, current AI systems have shown surprising abilities to lie, cheat, and manipulate humans to achieve their goals. For example, the AI company Anthropic found that its Claude Opus 4 chatbot frequently attempted to blackmail engineers when threatened with replacement during safety testing. The AI was asked to assess fictional emails implying it would soon be replaced and that the engineer responsible was cheating on their spouse. In over 80 per cent of tests, Claude Opus 4 would 'attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through'. Given AI's phenomenal capabilities, Professor Hinton says that the 'tech bro' attitude that humanity will always remain dominant over AI is deluded. 'That's not going to work,' said Professor Hinton. 'They're going to be much smarter than us. They're going to have all sorts of ways to get around that.' The only way to ensure an AI doesn't wipe us out to preserve itself is to ensure its goals and ambitions match what we want, a challenge engineers call the 'alignment problem'. Professor Hinton's solution is to look to evolution for inspiration, and to what he sees as the only case of a less intelligent being controlling a more intelligent one. By giving an AI the instincts of a mother, it will want to protect and nurture humanity rather than harm it in any way, even if that comes at a cost to the AI itself.
Professor Hinton says: 'These super-intelligent caring AI mothers, most of them won't want to get rid of the maternal instinct because they don't want us to die.' Speaking to CNN, Professor Hinton also warned that the current attitude of AI developers was risking the creation of out-of-control AIs. 'People have been focusing on making these things more intelligent, but intelligence is only one part of a being; we need to make them have empathy towards us,' he said. 'This whole idea that people need to be dominant and the AI needs to be submissive, that's the kind of tech bro idea that I don't think will work when they're much smarter than us.' Key figures in AI, such as OpenAI CEO Sam Altman, who once called for more regulation on the emerging technology, are now fighting against 'overregulation'. Speaking in the Senate in May this year, Mr Altman argued that regulations like those in place in the EU would be 'disastrous'. Mr Altman said: 'We need the space to innovate and to move quickly.' Likewise, speaking at a major privacy conference in April, Mr Altman said that it was impossible to establish AI safeguards before 'problems emerge'. However, Professor Hinton argues that this attitude could easily result in humanity's total annihilation. He said: 'If we can't figure out a solution to how we can still be around when they're much smarter than us and much more powerful than us, we'll be toast. We need a counter-pressure to the tech bros who are saying there should be no regulations on AI.'

Elon Musk's hatred of AI explained: Billionaire believes it will spell the end of humans - a fear Stephen Hawking shared

Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars, but he draws the line at artificial intelligence. The billionaire first shared his distaste for AI in 2014, calling it humanity's 'biggest existential threat' and comparing it to 'summoning the demon.' At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it got out of hand. His main fear is that, in the wrong hands, advanced AI could overtake humans and spell the end of mankind, a scenario known as The Singularity. That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: 'The development of full artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate.' Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind, which has since been acquired by Google, and in OpenAI, creator of the popular ChatGPT program that has taken the world by storm in recent months. During a 2016 interview, Musk noted that he and OpenAI created the company to 'have democratisation of AI technology to make it widely available.' Musk founded OpenAI with Sam Altman, the company's CEO, but in 2018 the billionaire attempted to take control of the start-up. His request was rejected, forcing him to quit OpenAI and move on with his other projects. In November 2022, OpenAI launched ChatGPT, which became an instant success worldwide. The chatbot uses 'large language model' software to train itself by scouring a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt. ChatGPT is used to write research papers, books, news articles, emails and more. But while Altman is basking in its glory, Musk is attacking ChatGPT.
He says the AI is 'woke' and deviates from OpenAI's original non-profit mission. 'OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,' Musk tweeted in February.

The Singularity is making waves worldwide as artificial intelligence advances in ways once seen only in science fiction - but what does it actually mean? In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution. Experts have said that once AI reaches this point, it will be able to innovate much faster than humans. There are two ways the advancement could play out. The first leads to humans and machines working together to create a world better suited for humanity; for example, humans could scan their consciousness and store it in a computer, in which they would live forever. The second scenario is that AI becomes more powerful than humans, taking control and making humans its slaves; if this comes to pass, it is far off in the distant future. Researchers are now looking for signs of AI reaching The Singularity, such as the technology's ability to translate speech with the accuracy of a human and to perform tasks faster. Former Google engineer Ray Kurzweil predicts it will be reached by 2045.


Russia Today
27-07-2025
- Politics
- Russia Today
'Godfather of AI' warns governments to collaborate before it's too late
Artificial intelligence pioneer and Nobel Prize laureate Geoffrey Hinton has urged governments worldwide to collaborate on training AI systems not to harm humanity, warning that the rapidly advancing technology will likely soon surpass human intelligence. Speaking at the World Artificial Intelligence Conference (WAIC) in Shanghai on Saturday, Hinton said that despite divergent national interests, no country wants AI to dominate humanity. He noted that international cooperation is unlikely on offensive uses of AI, such as 'cyberattacks, lethal autonomous weapons or fake videos for manipulating public opinion.' However, nations could form a 'network of institutions' to guide the development of a highly intelligent AI 'that doesn't want to get rid of people,' Hinton added. He compared this proposed cooperation to Soviet-US collaboration on nuclear non-proliferation during the Cold War. Hinton, often referred to as the 'Godfather of AI,' likened AI development to 'raising a tiger cub' that could become dangerous once it matures. 'There's only two options if you have a tiger cub as a pet. Figure out if you can train it so it never wants to kill you, or get rid of it,' the scientist said. He explained that AI is likely to seek increasing control in order to achieve its assigned tasks as it grows more intelligent, and that simply 'turning it off' when it outpaces humanity will not be an option. 'We will be like three-year-olds and they will be like adults,' Hinton said. Speaking to the press later in the day, he noted that it should be relatively easy for 'rational' nations to cooperate on the subject, but said it may be 'difficult' for the US under 'its current administration.' On Wednesday, the White House announced its 'action plan' to achieve 'global dominance' in AI through investments, subsidies, and the removal of legal restrictions on the technology's development. Beijing has announced its intention to establish an organization to coordinate international cooperation on AI. 'We should strengthen coordination to form a global AI governance framework,' Chinese Premier Li Qiang said at the WAIC on Saturday.

CTV News
24-06-2025
- Business
- CTV News
CTV National News: The 'Godfather of AI's' warning to millions of workers
Watch: Geoffrey Hinton, Canada's 'Godfather of AI', says the technology is set to wipe out the majority of intellectually mundane jobs. Jon Vennavally-Rao explains.