Psychology Aims For A Unified Theory Of Cognition And AI Will Be A Big Help To Get There

Forbes · 13 hours ago
In today's column, I examine the ongoing pursuit by psychology to devise a unified theory of cognition. The deal is this. Numerous proposed unified theories or models of cognition have been floated over the years. By and large, those theories or models have been sharply criticized as incomplete, illogical, unfounded, or otherwise not yet fully developed. The desire and need for a true and comprehensive unified theory of cognition persists and remains exasperatingly elusive.
Into this pursuit comes the use of AI, especially modern-era AI such as generative AI and large language models (LLMs). Can we make a substantive forward leap on devising a unified theory of cognition via leaning into contemporary AI and LLMs? Some say abundantly yes, others wonder if doing so will be a distraction and lead us down a primrose path.
Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Psychology
As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that entails the field of psychology, such as providing AI-driven mental health advice and performing AI-based therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.
You might find of keen interest that AI and psychology have had a longstanding relationship with each other. There is a duality at play. AI can be applied to the field of psychology, as exemplified by the advent of AI-powered mental health apps. Meanwhile, psychology can be applied to AI, such as aiding us in exploring better ways to devise AI that more closely approaches the human mind and how we think. See my in-depth analysis of this duality encompassing AI-psychology and psychology-AI at the link here.
The Enigma Of Human Cognition
The American Psychological Association (APA) offers a formal definition of cognition, but a nagging mystery underlies how it is that we can think and embody cognition in the first place.
All sorts of biochemical elements in our brain seem to work in a manner that gives rise to our minds and our ability to think. But we still haven't cracked the case on how those neurons and other elements in our noggin allow us to do so. Sure, you can trace aspects at a base level, yet explaining how that produces everyday cognition is a puzzle that won't readily be solved.
This certainly hasn't stopped researchers from trying dearly to figure things out. Hope springs eternal that the mysteries of cognition will be unraveled and we will one day know precisely the means by which cognition happens. Nobel prizes are bound to be awarded. Fame and fortune are in the cards. And imagine what else we might do to help and overcome cognitive disorders, along with potentially enhancing cognition to nearly unimaginably heightened levels.
This is undoubtedly one of the most baffling mysteries of all time, and there would be the purest sense of absolute joy and satisfaction in solving it.
Various Types Of Models
When seeking to come up with a unified theory of cognition, the route taken usually entails one or more of four major modeling paths, including conceptual, mathematical, and computational models. You can use just one of those approaches, or you can combine two or more.
If you opt to use two or more, your best bet is to make sure each model aligns with the other models being utilized. Any misalignment will indubitably bring criticism and skepticism raining down upon you. For example, if you propose a conceptual model and a mathematical model, but those two don't sync up, it becomes an easy line of attack to suggest that your theory is hogwash.
AI And Computational Models
A tempting avenue for cognition modeling these days is to rely upon an AI-based computational model that leverages the latest generative AI and LLMs. You can essentially repurpose a popular LLM, e.g., OpenAI's ChatGPT, which garners 400 million weekly active users, or Anthropic Claude, Google Gemini, Meta Llama, and so on.
Those off-the-shelf LLMs are ready-made for experimenting on psychology-based premises. I recently explained how contemporary generative AI is devised to react to psychological ploys and techniques, an intriguing facet that is both helpful and potentially hurtful, see my coverage at the link here.
One monumental wrinkle is whether a conventional LLM is suitable for representing a semblance of human cognition. Allow me to elaborate on this vital point.
The mainstay of LLMs is an artificial neural network (ANN), a series of mathematical functions computationally rendered in a computer system. I say artificial neural network to distinguish it from a true neural network (NN), the wetware inside your head.
Please be aware that ANNs are an exceedingly loose variation of true NNs. They are not the same. An ANN is quite far from a real NN and is many magnitudes simpler. For my detailed explanation of ANNs versus NNs, see the link here.
The bottom line is that an instant criticism of any cognition research that dovetails into LLMs is that you are starting at a recognized point of heated contention. Namely, a cogent argument is that since ANNs are not the same as true NNs, you are building your cognition hopes on somewhat of a house of cards. The counterargument is to acknowledge that ANNs are indeed not an isomorphic match, and instead, you are merely engaging them to aid in a broad-based simulation that doesn't have to be a resolute match.
In any case, I stridently support using LLMs as insightful exploratory vehicles and assert that we can gain a great deal of progress about cognition in doing so, assuming we proceed mindfully and alertly.
LLMs And Intrinsic Human Behavior
Suppose you decide to use an off-the-shelf LLM to perform a cognitive modeling investigation. There is something important that you need to be thinking about. I shall unpack the weighty consideration.
First, be aware that LLMs are developed via pattern-matching on human writing that is scanned across the Internet. That's how the fluency of LLMs comes about. The ANN is used to pattern-match on how we use words. In turn, when you enter a prompt into generative AI, the generated response produces words composed into sentences that appear to be on par with human writing. They reflect the computational mimicry of extensive computational pattern-matching based on words (actually, it is based on tokens, see the details in my discussion at the link here).
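To make that pattern-matching idea concrete, here is a toy sketch: a bigram counter that predicts the next word purely from how often words followed one another in its training text. Real LLMs operate on tokens with vastly richer statistics and far deeper networks, but the same next-item-prediction principle applies. The corpus and words here are purely illustrative.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, which words most often follow it."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows: dict, word: str) -> str:
    """Return the most frequent continuation seen in training."""
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else ""

# Toy corpus standing in for internet-scale text.
model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # 'cat' follows 'the' twice, 'mat' once
```

The key point the toy makes plain: nothing in the model encodes what a cat is; it encodes only how words have historically co-occurred.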
You can't especially declare that the LLM is thinking like humans. The AI is using words and patterns about the usage of words. That's not necessarily a direct embodiment of human thinking per se, and more so, presumably, the indirect outcomes of human thinking.
One clever idea is to augment an off-the-shelf LLM by aiming to further data-train the AI on veritable traces of human thinking (well, kind of, as you'll see momentarily). Perhaps that will enable the LLM to be more closely aligned with what human cognition consists of. For example, I fed transcripts of therapist-patient sessions into a major LLM to see if it might be feasible to augment its data training and guide the AI toward behaving more like a versed human therapist, see my experiment at the link here.
Psych Experimental Results As Rich Data
What other kinds of data could we potentially use to perform augmented data training of an LLM so that it can be more readily suited for cognition experimentation?
Easy-peasy, tap into the vast tome of psychology experiments that have been performed endlessly on all sorts of people for many decades. Here are the steps. Collect together that data. Work the data into a readable and usable shape. Feed it into an existing LLM, doing so via a method such as RAG (retrieval-augmented generation), see my RAG elicitation at the link here.
Voila, perhaps you've tuned up conventional generative AI to better simulate human behavior.
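The steps above can be sketched in miniature. This is a hypothetical, keyword-overlap stand-in for the retrieval component of RAG, not any production system; the experiment summaries and the scoring rule are illustrative assumptions.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; keep the top k."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved experiment summaries as grounding context."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Illustrative stand-ins for cleaned-up psychology experiment records.
experiments = [
    "memory recall study: participants remembered 7 of 12 words on average",
    "bandit task: subjects explored both arms before exploiting the better one",
    "decision-making survey on risk preferences under uncertainty",
]
print(build_prompt("how did subjects behave in the bandit task", experiments))
```

In a real RAG setup, retrieval would use learned embeddings rather than raw word overlap, but the pipeline shape (collect, clean, retrieve, inject into the prompt) is the same.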
A recent research study took that innovative approach, reported in an article entitled 'A Foundation Model To Predict And Capture Human Cognition' by Marcel Binz et al., Nature, July 2, 2025.
Details Of The Approach
The researchers chose to use Meta Llama as their base LLM. The data augmentation was done via the increasingly popular technique known as QLoRA (quantized low-rank adaptation), a parameter-efficient fine-tuning method in which the base model's weights are quantized and frozen while small low-rank adapter matrices are trained. Unlike RAG, which retrieves material at inference time, QLoRA actually updates the model via those adapters.
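To illustrate the low-rank-adaptation half of QLoRA, here is a minimal, plain-Python sketch of the core arithmetic: the frozen base weight matrix W is combined with the product of two small trainable matrices B and A, so only the adapter entries need training. Quantization and the actual training loop are omitted, and the tiny matrices are illustrative.

```python
def matmul(A, B):
    """Plain-list matrix multiply (rows of A times columns of B)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_update(W, B, A, scale=1.0):
    """Effective weight: frozen base W plus the low-rank product B @ A."""
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

# 2x2 frozen base weight; rank-1 adapter is a 2x1 times a 1x2 matrix.
# The savings are trivial at this size but grow quadratically with width.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # 2x1 trainable
A = [[0.5, 0.5]]     # 1x2 trainable
print(lora_update(W, B, A))  # [[1.5, 0.5], [1.0, 2.0]]
```

For a model with d-by-d weight matrices, a rank-r adapter trains only 2·d·r numbers instead of d², which is what makes fine-tuning a large base model tractable on modest hardware.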
They transcribed 160 experiments into natural language data. It was publicly available data. The types of experiments included many of the classics in psychology, such as memory recall, supervised learning, decision-making, multi-armed bandits, Markov decision processes, and others.
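As a rough illustration of what such a transcription might look like, here is a sketch that renders a hypothetical bandit-task trial log as natural language. The exact formatting used in the researchers' dataset surely differs; this wording is an assumption for demonstration only.

```python
def transcribe_bandit(trials):
    """Render (choice, reward) pairs as a plain-language transcript."""
    lines = []
    for i, (choice, reward) in enumerate(trials, start=1):
        lines.append(
            f"Trial {i}: you chose machine {choice} and received {reward} points."
        )
    return "\n".join(lines)

# Hypothetical raw log: which slot machine was picked and what it paid out.
log = [("A", 10), ("B", 0), ("A", 8)]
print(transcribe_bandit(log))
```

The payoff of this step is that behavioral data, once phrased as ordinary sentences, can be fed to a language model with no architectural changes at all.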
To give you a sense of what those experiments are like, picture a participant repeatedly choosing between two slot machines with unknown payoff rates (a multi-armed bandit task), or studying a list of words and later trying to recall them (a memory recall task).
Handily, the researchers opted to make the dataset, known as Psych-101, freely available on Hugging Face. In addition, they have nicely made the augmented Meta Llama model, which they refer to as Centaur, freely available on Hugging Face as well.
It is a welcome touch because other researchers can now come along and do not need to begin from scratch. They can reuse the arduous and time-consuming work that went into devising Psych-101 and Centaur. Thus, the dataset and the model are ready-made for launching new investigations and serve as a springboard accordingly.
The Results In Brief
A commonly utilized means of validating an LLM consists of holding back some of the training data so that you can use the holdback for testing purposes. This is a longstanding technique that has been used for statistical model validations.
You might use, say, 90% of the data to do the augmented data training and keep the remaining 10% in reserve. When you are ready to test the LLM, you give it the data that was set aside to see if the AI can adequately predict the presumably unseen data. They did this and indicated that their Centaur LLM did a bang-up job on the hold-out data.
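The holdback procedure itself is easy to sketch. This minimal example splits stand-in records 90/10, shuffling with a fixed seed for reproducibility; the records and fractions are illustrative.

```python
import random

def holdout_split(data, test_fraction=0.1, seed=0):
    """Shuffle a copy of the data, then hold back a fraction for testing."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]

records = list(range(100))  # stand-ins for experiment transcripts
train, test = holdout_split(records, test_fraction=0.1)
print(len(train), len(test))  # 90 10
```

One practical wrinkle: with human-subject data you would typically split by participant rather than by individual trial, so that no person's behavior leaks from training into test.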
The next step typically undertaken is to employ a make-or-break test when aiming to devise a generalizable model. You give the LLM data that is considered outside the initial scope of the augmentation. The handwringing question is whether the LLM will generalize sufficiently to contend with so-called out-of-distribution (OOD) circumstances.
The researchers opted to select a handful of OOD settings, including economic games, deep sequential decision tasks, reward learning, etc. Their reported results indicate that Centaur LLM did quite well at making predictions associated with those previously unseen experimental transcripts.
Overall, kudos to the researchers for thinking outside the box on AI and psychology.
Some Thoughts To Ponder
I'd like to cover a few quick thoughts overall.
First, one agonizing difficulty with gauging an off-the-shelf, pre-cooked LLM on any kind of newly encountered circumstance is that it is challenging to know whether such data, or similar data, might have been scanned during the initial setup of the LLM. Usually, only the AI maker knows precisely what data was initially scanned. Ergo, it is worthwhile to be mindful when interpreting generalizability, since an LLM might already have had an unseen leg-up.
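One rough way to probe for that kind of contamination, assuming you actually have access to candidate training text, is to measure n-gram overlap between a test item and the corpus. This is a simplified sketch with made-up strings, not a rigorous audit, but it conveys the idea.

```python
def ngrams(text: str, n: int = 5) -> set:
    """All contiguous n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_fraction(test_item: str, training_corpus: str, n: int = 5) -> float:
    """Fraction of the test item's n-grams already present in training text."""
    test_grams = ngrams(test_item, n)
    if not test_grams:
        return 0.0
    train_grams = ngrams(training_corpus, n)
    return len(test_grams & train_grams) / len(test_grams)

corpus = "participants chose between two slot machines with unknown payoffs"
fresh = "subjects rated the pleasantness of abstract paintings on a scale"
leaked = "participants chose between two slot machines with unknown payoffs again"
print(overlap_fraction(fresh, corpus), overlap_fraction(leaked, corpus))
```

A high overlap fraction flags that a "novel" test item may effectively have been seen before, which is exactly the leg-up worry described above.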
Second, and perhaps more importantly, the desire to push toward a semblance of cognitive realism by further data training of an LLM is a laudable idea.
Will the AI be more human-like in its reasoning patterns?
Maybe, maybe not.
One important determinant is whether the AI is still resorting to human-like language and not necessarily patterning on human reasoning. There is a huge debate going on regarding LLM foundational models that are claimed to be using 'reasoning' versus whether they are still potentially doing heads-down next-token prediction, see my coverage on the lively dispute at the link here.
Taking Next Steps
The overarching aim of grounding computer-based cognitive simulations in a more psychologically plausible way is exciting. No doubt about that.
The researchers also noted that there might be entirely different AI architectural approaches that would be better for us to pursue, beyond the somewhat conventional infrastructures currently dominating the AI realm.
As a heads-up, some ardently believe that our prevailing LLMs and AI architecture are not going to get us to artificial general intelligence (AGI) or artificial superintelligence (ASI). You see, the trend right now is to mainly power up prevailing designs with faster hardware and more computational running time. But the incremental benefits could be misleadingly tying us to a road that leads to a dead-end.
Could the desire to attain a unified model of cognition be the kick in the pants to the AI field to look beyond the groupthink of today's AI and LLMs?
I certainly hope so.
As General George S. Patton once proclaimed: 'If everyone is thinking alike, then somebody isn't thinking.'