
'I sent AI to art school!' The postmodern master who taught a machine to beef up his old work
A similar question lingers beneath the surface of the paintings that Salle has been making since 2023, a new series of which he has just unveiled at Thaddaeus Ropac in London. His New Pastorals were made with the aid of machine-learning software, though that's not immediately apparent from looking at them. Each monumental canvas bears broad, gestural strokes of oil paint seemingly applied by the artist's own hand. Close study, however, reveals large patches of flat, digitally printed underpainting. This is the mark of the AI model that Salle has been training to generate his work – or at least something uncannily close to it.
This machinic collaboration began with a game. Salle has long been sceptical of digital painting tools, writing in 2015 that 'the web's frenetic sprawl is opposite to the type of focus required to make a painting, or, for that matter, to look at one'. Nonetheless, there is a sprawling quality to his own paintings, which layer images from such a wide range of pop and art historical references that the eye often doesn't know where to rest. In 2021, Salle got the idea to develop a virtual game that would allow players to rearrange those painted elements using drag-and-drop tools.
Although the tech proved impractical, in the process Salle met Danika Laszuk, a software engineer at the tech startup EAT__Works, and Grant Davis, creator of the AI-powered sketchpad app Wand. Together they fed an AI image generator work by artists whose technique Salle considers foundational – Andy Warhol for colour, Edward Hopper for volume, Giorgio de Chirico for perspective, Arthur Dove for line – then asked it to produce images based on specific text prompts. 'What I did was send the machine to art school,' Salle says.
At first the machine was not a model student. Eerie, cartoonish figures with unnatural sheen recalled the images produced by Dall-E Mini, the open-source riff on OpenAI's Dall-E. 'What's so fundamentally unsatisfying about this kind of digital imagery?' Salle wondered. The answer, he felt, resided along the edges of its forms. 'Because it's only pixels, there's no real differentiation between the edge of something and the thing behind it,' he explains. 'There's no way to make the edge meaningful, and in representational painting, those edges carry such a large amount of information about an artist's style.' So he fed the machine scans of gouaches he had been making and watched the AI respond to their watery edges. 'It could read the physicality of the brush stroke,' he recalls. 'It fundamentally changed the machine's way of thinking about itself.'
Salle has taught at various institutions over the years, and this process felt a bit like a feedback session at an art school. Except the AI model is a very fast learner, and within a few sessions it could generate images that Salle might have come up with himself, if he'd only had more time. 'The machine can synthesise things in a matter of seconds,' he says. 'This evolution in painterly terms might take years or even decades.'
Born in Oklahoma but raised in Wichita, Kansas, Salle was the model of precocity when he burst on to the New York scene in 1980. By 1987, he was one of the most highly valued painters of his generation, and at age 34 became the youngest artist ever to have a mid-career survey at the Whitney Museum of American Art. Such a stratospheric rise meant Salle's market had further to fall once figurative painting was no longer in fashion, but the artist continued to labour on ever more ambitious series, including the troublesome suite of paintings at the heart of the New Pastorals. Now, figurative painting is back with a vengeance, and Salle has met the moment – or, perhaps, the moment has met him.
To produce his latest works, Salle trained the AI on the dozen sweeping Pastorals he completed between 1999 and 2000. Landscape paintings through the looking glass, they feature a couple idling by a lake, copied from a 19th-century opera scrim, rendered in harlequin colours and overlaid with inset images and designs of wildly differing styles. These vistas almost look like they were edited in Photoshop, but Salle rendered them entirely by hand. 'The first thing they teach you in colour painting classes is how to establish your palette,' he recalls. 'I was really showing off. I thought, "I can make a painting with three separate colour palettes work and you can't stop me."'
Initial reviews of the Pastorals were mixed. Some critics described them as cold and emotionless, a charge often levied at Salle, whose demeanour can be as remote and cerebral as his paintings. 'The result seems more a jumble of ingredients than a thoroughly cooked dish,' David Frankel wrote in Artforum.
With the New Pastorals, however, something seems to have changed. The brushstrokes are looser, faster and thicker, more abstract expressionist than anything else Salle has ever done. Their subject matter, meanwhile, is totally helter-skelter, as if Salle has pulsed his older paintings in a blender. Headless bodies zoom in and out of the frame. Objects no longer seem attached to solid ground. Unnervingly, these works look more hand-painted than their referents, at least until you draw closer to their surface, where the paint is so thin in some areas that it could only have been applied by a machine.
Salle always felt he hadn't finished with the Pastorals, though he says there were other reasons why they were a good match for his AI experiment. 'I realised that those paintings would give the machine material that it would understand in its own way,' he explains. 'It took all these faceted shapes with different colour harmonies, and it just went to town with them – but kept the DNA of the transition, the horizon line, the mountains, the water, the couple and the figures, and then overlaid that with brushstroke edges.'
Wand app creator Davis considers the introduction of the Pastorals a breakthrough moment for the model, as it required a new technique 'which basically takes an image and abstracts the content on a conceptual level in order to recreate different variations'. In other words, the machine began to digest Salle's paintings on a formal level. The Pastorals may also have been well suited to the task because the genre of landscape painting – and especially theatrical backdrops – resembles certain digital technologies in its production of illusionistic space. The New Pastorals reject the art historical imperative to create depth in a flat painting.
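Davis doesn't spell out how the model 'abstracts the content on a conceptual level', and the EAT__Works system is proprietary. Purely as an illustration of the abstract-then-vary idea he describes, here is a toy sketch using principal components via numpy's SVD: a set of images is compressed to a handful of coefficients (the 'abstraction'), the coefficients are jittered, and variations are reconstructed. Every function name and number below is invented for the example; it is not the actual technique.

```python
import numpy as np

def abstract_and_vary(images: np.ndarray, n_components: int,
                      jitter: float, seed: int = 0) -> np.ndarray:
    """Compress images to a few principal components ('abstract'),
    perturb the coefficients, and reconstruct ('recreate variations')."""
    rng = np.random.default_rng(seed)
    flat = images.reshape(len(images), -1)      # one row per image
    mean = flat.mean(axis=0)
    centred = flat - mean
    # SVD gives the principal directions of the image set.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:n_components]                   # (k, pixels)
    coeffs = centred @ basis.T                  # each image as k numbers
    coeffs += jitter * rng.standard_normal(coeffs.shape)
    return (coeffs @ basis + mean).reshape(images.shape)

# Eight random 8x8 'paintings'; with jitter the reconstructions drift.
imgs = np.random.default_rng(1).random((8, 8, 8))
variations = abstract_and_vary(imgs, n_components=4, jitter=0.2)
```

With `jitter=0` and all components retained, the reconstruction is exact; shrinking `n_components` and raising `jitter` pushes the outputs further from the originals while keeping their shared 'DNA'.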
Salle can tell the AI model to produce something that resembles his own work to greater or lesser degrees. 'Fundamentally, it's a lever that runs along a continuum: one end is similar and the other end is dissimilar,' he explains. 'Depending on where you position the lever, the results will either be very close to what you have started with, or it'll be wildly distorted.'
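Salle's collaborators haven't published how this lever works under the hood; in off-the-shelf image-to-image tools the nearest analogue is usually a 'strength' setting controlling how far the output drifts from the source. A minimal, purely illustrative sketch of the continuum, blending a source toward random noise (the function name and toy data are invented for the example):

```python
import random

def similarity_lever(source, lever, seed=0):
    """Blend a source image (a list of floats in [0, 1]) toward noise.

    lever = 0.0 -> identical to the source (the 'similar' end);
    lever = 1.0 -> pure noise (the 'dissimilar' end).
    """
    if not 0.0 <= lever <= 1.0:
        raise ValueError("lever must lie in [0, 1]")
    rng = random.Random(seed)
    return [(1.0 - lever) * p + lever * rng.random() for p in source]

img = [i / 15 for i in range(16)]    # toy 16-pixel gradient
close = similarity_lever(img, 0.1)   # stays near the original
wild = similarity_lever(img, 0.9)    # mostly noise
```

Sliding the lever simply reweights the blend, so results near one end stay close to the starting image and results near the other end are wildly distorted, as Salle describes.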
The trees, mountains and idling couple from his original Pastorals are still visible in the backdrop of Red Scarf, for instance, but are now painted semi-abstractly behind a woman wearing a neckerchief. A floating stack of teacups anchors the composition in an otherwise turbulent space, much like the nails that Picasso and Braque painted on to their Cubist still lifes. The many fragmented, colliding bodies in Stack, on the other hand, are much harder to decode. 'I'll select an image that's very eccentric precisely to provoke myself to do something that I probably would not have done otherwise,' Salle says.
Nonetheless, the artist isn't worried that AI will ever outpace or replace him. He sees it as another tool, like a brush or an easel. 'I don't think the machine's taught me very much at all,' Salle declares. 'I haven't reconsidered how I think about pictorial space or composition. I'm simply taking what the machine offers after I've told it what I want.'
In some sense, his postmodern paintings are perfectly suited to such an experiment: sampling omnivorously from so many subjects and styles, they seem not so much concerned with their original sources as gleefully unbothered by the concept of originality altogether. In 1985, when Salle was at the height of his fame, the art historian Rosalind Krauss dismissed originality as a 'modernist myth'; every work of art borrows from other sources, whether it acknowledges them or not. At least in their process, human artists may not differ so much from their robot kin.
What AI might borrow from Salle is another question. The data collected by EAT__Works is proprietary, so his feedback won't fuel any other artist-machines in the near future, though photographs of his new paintings, uploaded to the gallery's public website, could conceivably be used as prompts for further AI-generated images. 'In theory our model could autonomously spawn images,' Davis admits, though 'the best images come from a technique that is very hands-on'. For now, AI still needs humans to train it – that is, until the student becomes the teacher.
David Salle: Some Versions of Pastoral is at Thaddaeus Ropac London until 8 June