Should We Think Of AI As A Mother?

Forbes · 7 hours ago
Mother, child and happy piggyback in summer. (Getty)
This is a unique time in the development of artificial intelligence.
When historians go back and sort through the last few years, and the years to come, it might be hard to put a finger on just where the critical-mass moment occurred, the point at which we vaulted into the future on the wings of LLMs. But to me, it's telling that we are talking so much about systems and products and opportunities that would have been unimaginable just a decade ago. Image creation, for instance. Even in the aughts, in the early years of the new millennium, you still had to make your own pretty pictures and charts and graphs. Not anymore. Voice cloning, realistic texting companions, robots running races… the list goes on and on.
Amid this rapid series of developments, some of those closest to the industry are warning that we need to follow a certain trajectory to make sure that AI is safe. One such person is Yann LeCun, Meta's chief AI scientist, who has been on stage at multiple Imagination in Action events and gets top billing at many panels and conferences where he discusses innovation.
Right now, LeCun is in the news for suggesting that AI needs 'guardrails': specific principles that we will need to build in to keep these systems behaving as intended. What he's calling for is two-fold: first, that the systems be able to represent 'empathy' or compassion, and second, that AI entities be deferential to human authority. That second point speaks to the way breakout forces escape the food chain of the natural world: humans did this aeons ago, with weaponry and protective systems that essentially eliminated our natural predators. I guess the idea is that we now have a new potential predator that must be neutralized in a different way.
To wit, LeCun said this:
"Those hardwired objectives/guardrails would be the AI equivalent of instinct or drives in animals and humans.'
That word, instinct, helps to explain those deep-level motivations that do, in a real sense, guide behaviors. Hopefully we haven't lost ours, as humans, and hopefully we can help AIs form theirs.
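To make the idea of hardwired guardrails slightly more concrete, here is a minimal, purely hypothetical sketch of how an objective-driven agent might treat guardrail objectives as fixed constraints rather than as ordinary trade-offs. The action fields, guardrail checks, and cost numbers are illustrative assumptions on my part, not anything LeCun has published as code.
```python
# Hypothetical sketch (not LeCun's code): an objective-driven agent whose
# "instincts" are hardwired guardrail checks evaluated on every candidate action.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    task_cost: float        # how poorly the action serves the current task (lower is better)
    harms_human: bool       # illustrative stand-in for an "empathy / compassion" check
    overrides_human: bool   # illustrative stand-in for a "deference to human authority" check

# Hardwired guardrails: each returns True if the action violates it.
GUARDRAILS: List[Callable[[Action], bool]] = [
    lambda a: a.harms_human,      # empathy: never pick an action that harms a person
    lambda a: a.overrides_human,  # deference: never override an explicit human instruction
]

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Pick the lowest-task-cost action that violates no guardrail."""
    allowed = [a for a in candidates if not any(g(a) for g in GUARDRAILS)]
    if not allowed:
        return None  # refuse to act rather than trade a guardrail against the task
    return min(allowed, key=lambda a: a.task_cost)

if __name__ == "__main__":
    options = [
        Action("fast but harmful", task_cost=1.0, harms_human=True, overrides_human=False),
        Action("ignore the operator", task_cost=2.0, harms_human=False, overrides_human=True),
        Action("slower but safe", task_cost=5.0, harms_human=False, overrides_human=False),
    ]
    best = choose_action(options)
    print(best.name if best else "no acceptable action")  # prints: slower but safe
```
The only point of the sketch is that, in this framing, the guardrails sit outside whatever the system is optimizing for and are never traded away against it, which is the sense in which they resemble instincts rather than preferences.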
Reporting on LeCun's comments notes that he's speaking in the wake of recent remarks from Geoffrey Hinton, who is often called the 'godfather of AI' but has, to a certain extent, come to warn against his own brainchild.
Hinton's own comments go right to the core of how we see human-to-human interactions, and by extension, those we will have with humanoid AI.
He asks us to imagine if AI could be like our mothers.
'The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby,' Hinton reportedly said. 'If it's not going to parent me, it's going to replace me. …These super-intelligent caring AI mothers, most of them won't want to get rid of the maternal instinct because they don't want us to die.'
Unfortunately, this goal seems to fly in the face of the hubris on display in our modern societies. With superpowers and domestic populations alike armed to the teeth against one another, what chance do we have of internalizing the right instinct and bonding with a more powerful partner?
On the other hand, ascribing maternal roles to AI seems like a positive thing. But is it the right thing, at the end of the day?
Ultimately, those aspirations that LeCun and Hinton mention (empathy, etc.) are objectives for us, too.
It's also sobering that these comments come at a time when a jury has just brought Tesla, the most prominent purveyor of driver-assistance technology, to heel with roughly $200 million in punitive damages over a fatality involving that technology: ruling on the death of Naibel Benavides Leon, who was struck by a Tesla operating on Autopilot, the jury found that technology makers are responsible, to an extent, when a lack of guardrails has real and tragic consequences.
It's a powerful reminder: to build correctly, we have to deliberate not only on market principles but on greater ones, too. We have to have a long-term picture of how society is going to work with these AGIs and agentic systems in play. AI is now able to 'do things for you,' and so, what sorts of things will it be doing?
I'm reminded, again, of the proposal by my colleague Dr. John Sviokla that AI could provide individual tutors for humans, to help them work through various kinds of critical thinking, and the suggestion from other quarters that one human priority should be to hire an army of philosophers to keep us nicely in the lane when it comes to AI development.
Here's an interesting resource from Selmer Bringsjord and Konstantine Arkoudas at Rensselaer Polytechnic Institute (RPI) in Troy, NY, writing in 2007 about the foundations of AI research. They quote an earlier formulation of the field's ambitions:
'The fundamental goal of AI research is not merely to mimic intelligence or produce some clever fake. Not at all. AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves.'
'This "theoretical conception" of the human mind as a computer has served as the bedrock of most strong-AI research to date,' Bringsjord and Arkoudas write. 'It has come to be known as the computational theory of the mind; we will discuss it in detail shortly. On the other hand, AI engineering that is itself informed by philosophy, as in the case of the sustained attempt to mechanize reasoning, discussed in the next section, can be pursued in the service of both weak and strong AI.'
There's a lot more to sink your teeth into here, about speculation, logic, mechanistic thought, and more. Similarly, quite a few MIT researchers are working at the junction of neuroscience, AI, and biological modeling, to come to a more informed perspective on what the future will look like. And perhaps, as Paul Simon sings, the mother and child reunion is only a motion away.