
Why Machines Aren't Intelligent
OpenAI has announced that its latest experimental reasoning LLM, referred to internally as the 'IMO gold LLM', has achieved gold‑medal level performance at the 2025 International Mathematical Olympiad (IMO).
Unlike specialized systems like DeepMind's AlphaGeometry, this is a reasoning LLM, built with reinforcement learning and scaled inference, not a math-only engine.
As OpenAI researcher Noam Brown put it, the model showed 'a new level of sustained creative thinking' required for multi-hour problem-solving.
CEO Sam Altman said this achievement marks 'a dream… a key step toward general intelligence', and that such a model won't be generally available for months.
Undoubtedly, machines are becoming exceptionally proficient at narrowly defined, high-performance cognitive tasks. This includes mathematical reasoning, formal proof construction, symbolic manipulation, code generation, and formal logic.
Their capabilities also extend to computer vision, complex data analysis, language processing, and strategic problem-solving. These advances rest on deep learning architectures (such as transformers and convolutional neural networks), vast training datasets, substantial increases in computational power, and sophisticated algorithmic optimization, all of which enable these systems to identify intricate patterns and correlations in data at unprecedented scale and speed.
These systems can sustain multi-step reasoning, generate fluent, human-like responses, and perform under expert-level time constraints, much as human competitors do.
With all this, and a bit of enthusiasm, we might be tempted to think that this means machines are becoming incredibly intelligent, incredibly quickly.
Yet this would be a mistake.
Because being good at mathematics, formal proof construction, symbolic manipulation, code generation, formal logic, computer vision, complex data analysis, language processing, and strategic problem-solving is neither a necessary nor a sufficient condition for 'intelligence', let alone for incredible intelligence.
The fundamental distinction lies in several key characteristics that machines demonstrably lack.
Machines cannot seamlessly transfer knowledge or adapt their capabilities to entirely novel, unforeseen problems or contexts without significant re-engineering or retraining. They are inherently specialized. They are proficient at tasks within their pre-defined scope and their impressive performance is confined to the specific domains and types of data on which they have been extensively trained. This contrasts sharply with the human capacity for flexible learning and adaptation across a vast and unpredictable array of situations.
Machines do not possess the capacity to genuinely experience or comprehend emotions, nor can they truly interpret the nuanced mental states, intentions, or feelings of others (often referred to as "theory of mind"). Their "empathetic" or "socially aware" responses are sophisticated statistical patterns learned from vast datasets of human interaction, not a reflection of genuine subjective experience, emotional resonance, or an understanding of human affect.
Machines lack self-awareness and any capacity for introspection. They do not reflect on their own internal processes, motivations, or the nature of their "knowledge." Their operations are algorithmic and data-driven; they do not possess a subjective "self" that can ponder its own existence, learn from its own mistakes through conscious reflection, or develop a personal narrative.
Machines do not exhibit genuine intentionality, innate curiosity, or the capacity for autonomous goal-setting driven by internal desires, values, or motivations. They operate purely based on programmed objectives and the data inputs they receive. Their "goals" are externally imposed by their human creators, rather than emerging from an internal drive or will.
Machines lack the direct, lived, and felt experience that comes from having a physical body interacting with and perceiving the environment. This embodied experience is crucial for developing common sense, intuitive physics, and a deep, non-abstracted understanding of the world. While machines can interact with and navigate the physical world through sensors and actuators, their "understanding" of reality is mediated by symbolic representations and data.
Machines do not demonstrate genuine conceptual leaps, the ability to invent entirely new paradigms, or the capacity to break fundamental rules in a truly meaningful and original way that transcends their training data. Generative models produce novel combinations of existing data, not genuine departures from it.
Machines often struggle with true cause-and-effect reasoning. Even though they excel at identifying correlations and patterns, correlation is not causation. They can predict "what" is likely to happen based on past data, but their understanding of "why" is limited to statistical associations rather than deep mechanistic insight.
Machines cannot learn genuinely complex, abstract concepts from just a few examples, unlike humans. While one-shot and few-shot learning have made progress in enabling machines to recognize new patterns or categories from limited data, these systems still typically require vast datasets for effective and nuanced training.
And here is perhaps the most profound distinction: machines do not possess subjective experience, feelings, or awareness. They are not conscious entities.
Only when a machine is capable of all (or at least most) of these characteristics, even at a relatively low level, could we reasonably claim that machines are becoming 'intelligent', without exaggeration, misuse of the term, or mere fantasy.
Therefore, while machines are incredibly powerful at specific cognitive functions, their capabilities are fundamentally different from the multifaceted, adaptable, self-aware, and experientially grounded nature of intelligence, particularly as manifested in humans.
Their proficiency is a product of advanced computational design and data processing, not an indication of a nascent form of intelligence in machines.
In fact, the term "artificial general intelligence" emerged in AI discourse partly to recover the meaning of "intelligence" after it had been diluted through overuse in describing machines that are not intelligent, and to clarify what these so-called "intelligent" machines still lack in order to genuinely deserve the label.
We all tend to oversimplify, and the field of AI is contributing to the evolution of the meaning of 'intelligence,' making the term increasingly polysemous. That's part of the charm of language. But as AI stirs both real promise and real societal anxiety, it's worth remembering that the intelligence of machines does not exist in any meaningful sense.
The rapid advances in AI signal that it is past time to think about the impact we want, and don't want, AI to have on society. Doing so should not only allow but actively encourage us to consider both AI's capacities and its limitations, taking care not to confuse 'intelligence' in its rich, general sense with the narrow, task-specific behaviors machines are capable of simulating or exhibiting.
While some are racing toward Artificial General Intelligence (AGI), the question we should now be asking is not when they might succeed, but whether what they aim to build makes sense, civilisationally, as something we should even pursue, and where we draw the line on algorithmic transhumanism.