
I let ChatGPT be my personal trainer for a week, but can it do better than a human fitness coach?
As millions experiment with ChatGPT, a trend has emerged of using its generative AI in place of a personal assistant, a therapist and even a doctor. Another role the platform is vying for is that of personal trainer.

Related Articles


Daily Maverick
7 minutes ago
Artificial intelligence – the panacea to all ills, or an existential threat to our world?
'Once men turned their thinking over to the machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.' – Frank Herbert, Dune, 1965

In the early 19th century, a group of disgruntled factory workers in industrial England began protesting against the introduction of mechanised looms and knitting frames into the factories. Fearful of losing their jobs, they smashed machines and engaged in acts of sabotage. They were dealt with harshly, through imprisonment and even execution. They became known as the Luddites. At the time, it was not the technology they were most concerned about, but rather the loss of their livelihoods. Ironically, today the word Luddite has become something of an accusation, a complaint about those who, because they are seen as not understanding a new technology, are deemed to be anti-technology. Even anti-progress.

The 2020s have seen rapid progress in the development of a 'new' technology – artificial intelligence (AI). But the history of AI can be traced back to the middle of the 20th century, so it is perhaps not very new at all. At the forefront of the current wave has been the release of Large Language Models (LLMs) – with ChatGPT the most prominent – that can produce, at the click of a single request, an essay on the topic of your choice. LLMs are just one type of AI and are not the same as artificial general intelligence (AGI). Unlike current LLMs, which perform a single task, AGI would be able to reason, be creative and use knowledge across many domains – be more human-like, in essence. AGI is more of a goal, an end point in the development of AI. LLMs have already been hugely disruptive in education, with university lecturers and school teachers scrambling to deal with ChatGPT-produced essays.

Views about the dangers of AI/AGI tend to coalesce around two poles: the doomers and the boomers. Crudely, and I am oversimplifying here, the 'doomers' worry that AI poses an existential threat were it to be designed in a way that is misaligned with human values. Boomers, on the other hand, believe AI will solve all our problems and usher in an age of abundance, where we will all be able to work less without seeing a drop in our quality of life.

The 'doomer' narrative originates with Oxford University philosopher Nick Bostrom, who introduced a thought experiment called the 'paperclip maximiser'. Bostrom imagines a worst-case scenario in which we create an all-powerful AGI agent that is misaligned with our values. In the scenario, we ask the AGI agent to maximise the production of paperclips. Bostrom worries that the command could be taken literally, with the AGI agent consuming every last resource on Earth (including humans) in its quest to maximise paperclip production. Another take on this thought experiment is to imagine that we ask an all-powerful AGI agent to solve the climate breakdown problem. The quickest and most 'rational' way of doing this would, of course, be simply to rid planet Earth of eight billion human beings.

What do we have to fear from LLMs?

LLMs have scraped the internet for every bit of data, stolen it, and fed off the intellectual property of writers and artists. But what exactly do we have to fear from LLMs? I would suggest very little (unless, of course, you are a university lecturer in the humanities).
LLMs such as ChatGPT are (currently) little more than complex statistical programs that predict which word follows the one before, based on the above-mentioned internet scraping. They are not thinking. In fact, some people have argued that everything they do is a hallucination; it is just that the hallucination is more often than not correct and appropriate. François Chollet, a prominent AI researcher, has described LLMs in their current form as a 'dead end' in the quest for AGI. Chollet is so confident of this that he has put up a $1-million prize for any AI system that can achieve even basic human skills on something he calls the Abstraction and Reasoning Corpus (ARC) test. Essentially, the ARC is a test of what is called fluid intelligence (reasoning, solving novel problems and adapting). Young children do well on ARC tasks. Most adults complete all tasks. Pure LLMs achieve around 0%. Yes – 0%. The $1-million prize does not even require that AI systems match the skills of humans, just that they achieve 85%. The prize is yet to be claimed.
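To make the 'predict the next word' description above concrete, here is a minimal sketch of the statistical idea: a toy bigram model in Python that predicts each word purely from counts of which word followed it in a small training text. It illustrates the principle only, not how ChatGPT is actually built (real LLMs use neural networks over tokens and vastly more data), and the tiny corpus and function names are invented purely for the example.

from collections import Counter, defaultdict

# Tiny 'training corpus'; a real model would be trained on text scraped from the internet.
corpus = ("the machine predicts the next word the machine predicts "
          "the next token the model predicts the next word").split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word, or None if the word was never seen.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation, one predicted word at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # prints "the next word the next word"

The point is the one made above: the program has no understanding of what it is saying; it simply reproduces the most statistically likely continuation of whatever came before.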
People are the problem

If LLMs are (currently) a dead end in the quest for AGI, what should we be worried about? As is always the case, what we need to be afraid of is people. The people in control of this technology. The billionaires, the tech bros, and the dystopian conspiracy theorists.

High on my list is Mark Zuckerberg. The man who invented Facebook to rate the attractiveness of college women, and whose company profited enormously from the echo chamber it created. In Myanmar, this resulted in the ethnic cleansing of the Rohingya people in 2017. At the beginning of 2025, Zuckerberg showed the depth of his commitment to diversity and integrity in his slavering capitulation to Donald Trump. Jokes aside about whether Zuckerberg is actually a robot, what he seems to want, judging by his recent pronouncements, is a world of atomised and alienated people who, out of quiet desperation, turn to his dystopian hell where robots – under his control – will be trained to become 'our friends'.

And my personal favourite – Elon Musk. Musk, the ketamine-fuelled racist apologist for the Great Replacement Theory. A man who has committed securities fraud, and who accused an innocent man of being a paedophile because the man had the nerve and gall to (correctly) state that Musk's submarine could not negotiate an underwater cave in Thailand. More recently, estimates are that Musk's destruction of USAid will lead to the deaths of about 1,650,000 people within a year because of cuts to HIV prevention and treatment, as well as 500,000 annual deaths due to cuts to vaccines. I, for one, do not want this man anywhere near my children, my family, my community, my country.

OpenAI

Sam Altman, the CEO of the world's largest plagiarism machine, OpenAI, recently stated that he would like a large part of the world's electricity grid to run his LLM/AI models. Karen Hao, in her recently published book Empire of AI, makes a strong case for OpenAI being a classic colonial power that closely resembles (for example) the British East India Company, founded in 1600 (and dissolved in 1874). Altman recently moved squarely into Orwellian surveillance when OpenAI bought io, a product development company owned by Jony Ive (designer of the iPhone). While the first product is a closely guarded secret, it is said to be a wearable device that will include cameras and microphones for environmental detection. Every word you speak, every sound you hear, and every image you see will be turned into data.

Data for OpenAI. Why might Altman want this? Money, of course. But for Altman and Silicon Valley, money is secondary to data, to surveillance, and to the way they are able to parlay data into power and control (and then money). He will take our data, use it to further train his ChatGPT models, and in turn use those models to better surveil us all. And for the pleasure of working for, and giving our data to, OpenAI? Far from being paid for the data you produce, you will have to buy the gadget, be monitored 24/7, and have your life commodified and sold. As Shoshana Zuboff put it in her magisterial book, The Age of Surveillance Capitalism: 'Forget the cliché that if it's free, "you are the product". You are not the product; you are the abandoned carcass. The "product" derives from the surplus that is ripped from your life.'

The problem was never the cotton loom. The Luddites knew this in the 19th century. It was always about livelihood loss and people (the industrialists). Bostrom has it badly wrong when he imagines an all-powerful AGI entity that turns against its human inventors. But about the paperclips, he might be correct. Zuckerberg, Musk and Altman are our living and breathing paperclip maximisers. With their political masters, they will not flinch at turning us all into paperclips and sacrificing us on the altar of their infinite greed and desire for ever-increasing surveillance and control. DM


Time of India
19 minutes ago
Is anything real anymore? AI testimonials take over the American justice system
Generative AI has been developing at a breakneck pace since the high-profile release of ChatGPT in November 2022. The Large Language Model (LLM) garnered massive media recognition for its ability to write complex and coherent responses to simple prompts. Other LLMs, such as Microsoft's 'Sydney' (now the AI copilot), also gained media notoriety for the uncanny degree to which they seemed to mimic human emotions. Written text is not the only area where AI is having a disruptive effect: image generation algorithms such as Midjourney and video generation programs such as Google Veo are progressively blurring the line between what is made by humans and what is made by AI. Google Veo, in particular, became infamous for generating short videos resembling viral social media posts that left netizens marvelling at how convincing they looked. These rapid developments have led to growing concerns about AI's disruptive impact on everyday life, and that disruption has now reached the courtrooms of the United States.

AI testimonies are now a part of the US court system

AI video is now being introduced as a kind of posthumous testimony in court trials. During a manslaughter sentencing hearing for the killing of Christopher Pelkey, an American man shot dead in a road rage incident, an AI-generated video of Pelkey was played in which he delivered a victim impact statement. In the video, the AI says: 'To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day, under those circumstances… I believe in forgiveness, and a God who forgives. I always have and I still do.' Pelkey's sister, Stacy Wales, had given her own testimony during the sentencing hearing, but did not feel that her words alone could properly convey the extent of her grief. Pelkey was killed in the road rage incident in Chandler, Arizona, in 2021; last month, artificial intelligence brought him back to life during his killer's sentencing hearing. At the end of the hearing, Gabriel Horcasitas was sentenced to 10.5 years in prison. The ruling has since been appealed, shining a spotlight on the disruptive impact AI technology is already having on America's court system.

Speaking to the Associated Press, AI deepfake expert David Evan Harris said that the technology might end up stacking the deck in favour of the wealthy and privileged: 'I imagine that will be a contested form of evidence, in part because it could be something that advantages parties that have more resources over parties that don't.'

In one of the viral Google Veo videos that took the internet by storm, an AI-generated girl says: 'This is wild. I'm AI generated by Veo 3. Nothing is real anymore.' With the increasing normalization of AI technology in everyday life, as well as in vital civic avenues such as criminal justice, the impacts of such technologies are sure to be dissected and studied for years to come.
Yahoo
25 minutes ago
Stop Wasting AI on Personal Productivity: 60% of Leaders Pivot to Agentic Automation for Real Enterprise Value
New Research Confirms Costly Copilot Limitations, Driving Demand for Governed, Process-Centric AI Solutions That Accelerate Autonomy

SAN JOSE, Calif., June 20, 2025 /PRNewswire/ -- Automation Anywhere, the leader in Agentic Process Automation (APA), today released a new proprietary research report developed in collaboration with Forrester Consulting, revealing key barriers and breakthroughs shaping enterprise adoption of AI agents. The findings highlight the increasing momentum of AI agents across industries, as well as the implementation challenges organizations must address to realize their full potential.

The study, based on a survey of global decision-makers overseeing enterprise-wide AI strategies, found that 60% of respondents believe automation platforms—especially those from Robotic Process Automation (RPA) and AI leaders like Automation Anywhere—are the most valuable foundation for managing AI-driven processes. This preference outpaces general-purpose AI providers such as OpenAI (ChatGPT) and Anthropic (Claude), as well as broader enterprise platforms like Microsoft Power Automate and Salesforce Einstein, highlighting the need for automation-native solutions purpose-built for process orchestration and scale. Additionally, 71% of respondents agreed that automation solutions should augment human capabilities rather than replace them—reinforcing the importance of keeping strategic decision-making in human hands.

"This research highlights a critical inflection point for enterprises," said Mihir Shukla, CEO of Automation Anywhere. "Leaders are clearly prioritizing AI-augmented workflows, recognizing the undeniable value of Agentic AI. The fact that a significant majority are specifically seeking these solutions from modern, cloud-native RPA and AI automation vendors underscores that deep process automation expertise is critical to scale adoption and unlock meaningful impact, accelerating the journey to the autonomous enterprise and paving the path to artificial general intelligence for work."

Key Insights from the Study: High Interest Meets Practical Hurdles

With deep roots in AI-powered automation and RPA, Automation Anywhere's APA system is purpose-built to overcome the key hurdles slowing AI agent adoption. While 74% of respondents recognize the promise of AI agents to surface insights from vast data sets, concerns around data privacy (66%), skillset gaps (63%), and integration complexity (61%) persist. APA is designed to balance autonomous execution with enterprise-grade governance and human oversight—making it possible to scale safely and effectively.

Transformational Opportunities Across Business Functions

Organizations are already piloting or implementing AI agents for internal employee support (53%) and customer service (48%). Many plan to extend these capabilities to broader business functions, including enterprise automation and organizational stewardship, over the next two years. The potential value of AI agents for areas such as customer service, sales automation, and compliance received transformational value ratings exceeding eight out of ten on average. With the launch of our new Agentic Solutions, Automation Anywhere is helping organizations accelerate this shift—offering pre-built, enterprise-grade AI agents that go beyond pilots to deliver real business impact across customer service, finance, compliance, and more.
Businesses Demand Enterprise-Grade AI Automation Platforms

When evaluating platforms for building and deploying AI agents, 60% of respondents found intelligent automation platforms from RPA and AI automation vendors to be highly valuable for long-running processes. Organizations strongly prefer solutions capable of enterprise-grade integration, end-to-end process orchestration, and mature data security. Automation Anywhere is uniquely positioned to meet these enterprise demands with its APA system—offering intelligent, secure, and scalable AI agents that integrate seamlessly across systems, orchestrate complex, long-running processes end-to-end, and uphold the highest standards of data security and governance.

Early Adoption & Transformational Value

Nearly 75% of leaders plan to pilot AI agents for customer support within the next year, with 71% eyeing research applications. Across all potential use cases, respondents expect transformational levels of value, underscoring strong confidence in AI agents' impact.

Navigating the Road Ahead

While challenges remain, enterprise leaders are clear-eyed and confident about the transformational potential of AI agents. By proactively addressing hurdles around security, cost, and talent, organizations can move beyond experimentation and begin scaling Agentic AI to drive measurable business outcomes. Those that act decisively today will be best positioned to lead in the AI-powered enterprise of tomorrow. Automation Anywhere is helping enterprises accelerate this journey—offering a secure, cost-effective, and easy-to-adopt APA system with new pre-built Agentic Solutions and the agentic solutions workspace that reduce complexity, lower barriers to entry, and empower business users to confidently scale AI agents.

About Automation Anywhere

Automation Anywhere is the leader in Agentic Process Automation (APA) and is guided by its vision to fuel the future of work by unleashing human potential through automation. Learn more at

SOURCE Automation Anywhere, Inc.