
AI on verge of eight-hour job shift without burnout or break. Is 24-hour AI workday next?
Jan Williamson, 72, a retiree from Pen Argyl, Pennsylvania, often finds herself chatting up ChatGPT for answers instead of turning to the old search standby: Google.
"It will give you plenty of basic information, and expanded information if you ask for it, on any topic you can think of. I now use it regularly to give me information on anything I don't understand or am curious about," said Williamson.
But when her conversations with it run too long, Williamson notices that ChatGPT starts to run out of gas, despite the massive data farms that power the computing behind AI.
That's why Amazon-backed Anthropic's Claude 4 AI model drew notice last week for its breakthrough ability to work seven straight hours. And Claude won't head to the coffee machine or gossip at the water cooler. Claude will work. For seven hours. But then, like an exhausted and grumpy cubicle worker at the end of a long shift, Claude too begins to peter out.
The fact that Claude, or any other AI program, has a cap on its working hours surprises those accustomed to using AI at will. But in the AI arms race, seven hours of work is a barrier broken.
"What Anthropic has accomplished with Claude Opus 4 is an incredible, unmatched feat in AI. For a model to work on a task for seven straight hours is unheard of when current standards expect models to spend seconds to minutes working on a problem," said Brian Jackson, principal research director at Info-Tech Research Group.
"AI can't work 24/7 because it's a bit like a goldfish with a very expensive aquarium. It's like a goldfish because it can only remember things for limited windows of time, and the aquarium is expensive because it requires high-end GPUs or TPUs working at max performance to create the environment," Jackson said.
Increasing the number of what are called "tokens" is key to AI longevity on the job.
A token in AI parlance isn't bus fare; it's a word or word fragment that AI models take in through prompts. Tokens are an important metric for the capacity of an LLM's memory. When "How many tokens does this sentence use in AI?" is entered into ChatGPT, it answers that the sentence uses 10 tokens.
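To make the idea concrete, here is a toy tokenizer that splits text into words and punctuation marks. This is only an illustration: real models use subword schemes such as byte-pair encoding, which often split words into smaller fragments, so real counts can differ. On the example sentence above, though, each word happens to be one token, so this sketch matches the count ChatGPT reports.

```python
import re

def toy_tokenize(text):
    """Split text into word and punctuation fragments.

    A toy stand-in for a real subword tokenizer (e.g. BPE);
    production models usually split text into smaller pieces.
    """
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("How many tokens does this sentence use in AI?")
print(tokens)       # ['How', 'many', 'tokens', ..., 'AI', '?']
print(len(tokens))  # 10
```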
The number of tokens in a context window is the metric that determines the memory, and Claude Opus 4 can hold 200,000 tokens, roughly 1.5 times ChatGPT's 128,000 tokens.
But once that limit is reached, "the context window is flushed and you have to start over," Jackson said.
MJ Jiang, chief strategy and revenue officer at Credibly, a business financing firm that uses AI, said the reason AI tools like ChatGPT often experience performance degradation during long conversations is that the model may begin to "forget" earlier instructions and produce lower-quality responses. This is due to limitations including context window size, token limits, and the computational burden of managing large amounts of information.
"As a result, older parts of a conversation may be discarded, leading to reduced accuracy, slower response times, and loss of coherence," Jiang said.
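The "older parts of a conversation may be discarded" behavior Jiang describes is often implemented as a sliding window over the message history. The sketch below shows the idea in miniature; it is a simplified assumption about how such truncation can work, not how any particular provider actually implements it, and the token counts are supplied by hand rather than computed by a real tokenizer.

```python
from collections import deque

def make_context(max_tokens):
    """A minimal sliding-window context: when the token budget is
    exceeded, the oldest messages are evicted first."""
    window = deque()
    total = [0]  # running token count, mutable inside the closure

    def add(message, n_tokens):
        window.append((message, n_tokens))
        total[0] += n_tokens
        # Evict oldest messages until we are back under budget.
        while total[0] > max_tokens:
            _, dropped = window.popleft()
            total[0] -= dropped
        return [m for m, _ in window]

    return add

add = make_context(max_tokens=10)
add("system prompt", 4)
add("first question", 4)
print(add("follow-up", 4))  # ['first question', 'follow-up']
```

Note how the system prompt, the oldest entry, is the first thing to go once the budget is exceeded, which is exactly why a model can appear to "forget" its original instructions late in a long conversation.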
Claude's seven-hour workday is notable because it implies continuous execution without performance degradation, which is "an impressive feat, given the substantial compute power required to maintain stable output over time," Jiang said.
It also raises the question of how far off the 24-hour workday is for AI.
Ultimately, computing power is the key constraint. While Claude and ChatGPT are doing their thing, those GPUs and TPUs are working to deliver the results, and that is costly, requiring copious cooling and electricity. While compute efficiency will continue to improve, it comes with tradeoffs in cost, accessibility, and environmental impact.
"Anthropic probably wants some sort of limit on how much cost will go into these super-sessions, or else it could break the bank," Jackson said.
Jiang says the environmental impact from AI is growing quickly. "One can very well envision a future where the demand for water cooling is so great that water shortages become even more prevalent," she said.
While a 24-hour AI workday may be technically possible in the future, the more important question, according to Jiang, is whether we should pursue it, and at what cost.
"We might want to think about self-imposing [limits] smartly before some other force demands it and makes it truly suboptimal for all," Jiang said.
Jeremy Rambarran, a professor at Touro University Graduate School of Technology, says 24-hour AI would be a game-changer, though maintaining long-term memory across sessions, the process of continuously storing and retrieving memories, can become technically intricate and expensive at scale.
"Transitioning from 7-hour sessions to enduring, memory-laden, constantly active AI agents would resemble the shift from calculators to full-time research assistants. It transforms AI from a mere instrument into a partner," Rambarran said.
What would take a team of people weeks or months, an AI operating continuously could complete in days or hours, whether the focus is drug development, product design, or cybersecurity threat management.
Anthropic leadership has spoken about the advance in similar terms, with its CEO Dario Amodei telling CNBC's "Squawk Box" that Claude's design speaks to a future where AI builds long-term working relationships across various domains.
"You're going to have this model that talks to the biochemists for years, and that model becomes an expert in the law or national security. I think that force is going to lead to different model providers specializing in different things, even as the base model they made is the same," Amodei said.
Not everyone is convinced that burnout-free AI is almost here.
Jon Brewton, founder and CEO of Data2, says AI agents still struggle with long-running tasks due to several hidden constraints that compound over time: memory overflow, cumulative reasoning errors, infrastructure costs, and fragile toolchains. He likens an AI agent's output over a long run to a photocopy degrading with each pass.
"Claude 4's recent leap to a seven-hour run reflects breakthroughs in context retention, self-correction, and safety monitoring, but hitting a full 24-hour stretch will require cheaper, more efficient hardware, more reliable workflows, and smarter trust and governance systems," Brewton said.
Amodei seems to have drawn some lines in his own thinking about Claude, at least at this point in the AI work cycle and concerns about wholesale human replacement. "I'd like us to get to the point where you can just give the AI system a task for a few hours — similar to a task you might give to a human intern or an employee," he said in an interview with the Financial Times. "Every once in a while, it comes back to you, it asks for clarification ... think of a management consultant."