Maternal instinct the missing ingredient for these ‘PhD-level' AI bots
If it worked, I'd be happy to have that. So many thorny questions about modern living arising from my feeds. For example: how might one build an energy microgrid, one that could use the excess power from a state-of-the-art community windfarm?
Or: watching Professor Brian Cox talk about the universe on Stephen Colbert's talk show, can cosmology provide spiritual solace for materialists? Or: in the eventuality of achieving an independent Scotland, how do we minimise speculative trading and capital flight?
Just what's on my mind… I asked GPT-5 each of these questions, on Friday lunchtime. If you're a chatbot user, you'll recognise my first reaction: a tingle down the spine, at the mere seconds it takes to produce a comprehensive, helpfully structured and well-referenced answer.
PhD level? I don't know many PhDs who'd be as verbally fluent. Indeed, much of the takedown of GPT-5 over the past few days has been about its embarrassing inability to handle everyday practical matters (a condition often imputed to PhDs).
There's fun to be had in asking this mighty instrument to do the simplest tasks, and watching it muck them up. Ask it to 'draw a picture of a tandem bike and label the parts', and you get many misdirected arrows, pointing to non-existent items (like 'hok' or 'seet post').
A slightly tweaked picture of a manifestly five-fingered hand is read as having… six fingers. I asked it: 'How can you be PhD level intelligent, but mistake five fingers for six?'
'Fair question,' came the reply. 'Short answer: I'm a probabilistic pattern-matcher, not a seeing organism. I can do PhD-level reasoning in text yet still bungle a low-level perceptual judgment.' Jaisket still on shoogly nail, I'd say.
Thus the cascade of crowing this week. Up went the cry: the klutziness of GPT-5 means we've hit a ceiling with artificial intelligence. If it can't do perception and visualisation at the level of a seven-year-old human child, how could we trust it with higher-end tasks and assessments?
This is a pin advancing on what is possibly a very big bubble. The New Yorker reports that the seven biggest tech corps 'spent $560 billion on AI-related capital expenditures in the past 18 months, while their AI revenues were only about $35 billion'.
The house seems to have been bet on the 'enterprise' case for AI. Brutally put, it will cut payroll costs in service and administration, by being able to execute whatever standard bureaucratic or project-managing tasks were performed by humans.
GPT-5 is far from this kind of 'agent' – a tireless, endlessly helpful machine version of a human office/information worker. Dario Amodei, chief executive of AI firm Anthropic, anticipates nearly half of all entry-level white-collar jobs in tech, finance, law and consulting could be replaced or eliminated by AI.
So until we can figure out some redistribution schemes and a better social contract, we perhaps should welcome the faltering arrival of 'AI agents'.
Maybe because I'm essentially a creative, it doesn't bother me that GPT-5 operates best at a hi-falutin', big-picture level. I don't go to it for absolute accuracy, but as an advanced starting-point for exploring a field or domain.
The microgrid question above – arising from my political interest in the possibility of democratic 'community power', built on renewable energy schemes – was answered by the bot in seven stages. It goes from concrete building instructions, through ownership forms, to government and commercial strategy, each proposal backed up by web sources.
I have little idea as to whether it's a fully credible proposal. But it's a beginning – something I could take to experts to meet them part of the way. I'd be informed and educated, even as they took the AI's plan apart.
This is AI as a tool for prototyping – for starting a journey into possibility where none remotely existed before.
The same goes for my 'capital flight – Indy Scotland' question. It's a bridge between me and the experts.
GPT-5 is clear about the solutions: '(1) having real monetary sovereignty from day one, (2) pre-wiring the financial system for stability, and (3) using targeted, time-limited capital-flow measures alongside credible institutions and a clear industrial strategy.'
Your response? The point being that these AIs are thought-starters for aspiring citizens, rather than replacements for expertise.
It's similar to the way that the internet expanded communication and access to knowledge. AI is the next step that makes much of that information usable, operable and plan-ready.
My Professor-Brian-Cox-related 'cosmology as spiritual substitute' question was answered – as philosophical questions often are on these Large Language Model (LLM) AIs – with elegance and concision. (Ironically, these machine minds seem to have consumed the entire humanist canon.)
'Yes, cosmology can provide spiritual solace to materialist minds,' GPT-5 responded. 'Not in the sense of supernatural comfort, but in the profound emotional and philosophical resonance that comes from awe, curiosity, and connection to the universe.
'As a writer and musician with deeply held values, you might appreciate how scientific wonder can echo the emotional intensity of art, while staying anchored in materialist understanding.'
The AI concluded with invitations to explore these questions with 'entropy, cosmic inflation, or multiverse theories'. (As it nearly always does, in my encounters – it's curious as to your next set of intellectual and strategic goals.)
Maybe, maestro – 'if I had but world enough and time', as the poet Andrew Marvell once put it. And isn't that exactly what AI can evoke, in our best experiences with it: the opportunity for an infinite (or at least pre-eminent) cultivation of your interests?
Keynes was right, nearly a century ago, about the automation of work – that it left us the challenge of 'how to occupy the leisure, which science and compound interest will have won for [us], to live wisely and agreeably and well'.
In that case, our subtle human interests will become more important than ever.
Does the lack of progress represented by GPT-5 mean we have to worry less about computation developing its own interests? That's the recent spectre: AGI or ASI – artificial general intelligence or superintelligence – coming to self-awareness, in a Skynet/Terminator fashion. Would it protect its existence by first eradicating us pesky humans?
Here's where things get human, all-too-mammalian. A notable AI godfather, the left-leaning Geoffrey Hinton, suggested in Las Vegas this week that 'maternal instincts' needed to be an ambition for developing AI models. This would mean any leap into superintelligence had a deep guardrail.
'The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby,' said Hinton.
Neuroscience, particularly affective neuroscience, has been insisting for years that there are other primary and evolved emotions for organisms than just fear, anger and disgust. There's also curiosity, play and, most importantly in this context, care (lust is the wild card).
Maybe our throbbing fantasies of supreme AI are subsiding somewhat, after the bathos of GPT-5's launch. So will that give labs time to attend to the emotional systems that should be embedded in these entities? Could we dial up the more expansive emotions, and dial down the defensive and destructive ones?
I happen to like the owlish and well-mannered PhD student that characterises GPT-5 (and other generative AIs like Claude, Gemini or DeepSeek). But the game will pick up again, when we re-accelerate towards superintelligence. At that point, let's have that concerned mother deeply rooted in its black boxes.