'Less structure, more intelligence': AI agent Manus draws upbeat reviews of nascent system

'Got access and it's true … Manus is the most impressive AI tool I've ever tried,' wrote Victor Mustar, head of product at AI and machine-learning developer platform Hugging Face, in a post over the weekend on X, formerly Twitter.
Mustar prompted Manus to create and display an animated 3D video game in a web browser, using Three.js, a cross-browser JavaScript library and application programming interface.
His prompt: 'code a threejs game where you control a plane'.
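For context, the sketch below shows one minimal way a Three.js scene with a keyboard-steered 'plane' could be put together. It is an illustrative placeholder written for this article, not Manus's actual output; the geometry and controls are stand-ins.

```typescript
// Minimal sketch (not Manus's output): a Three.js scene with a box standing in
// for the plane, steered left and right with the arrow keys.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 2, 8);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Placeholder "plane": an elongated box with a material that needs no lighting.
const plane = new THREE.Mesh(
  new THREE.BoxGeometry(2, 0.3, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(plane);

// Track which keys are currently held down.
const keys = new Set<string>();
window.addEventListener('keydown', (e) => keys.add(e.key));
window.addEventListener('keyup', (e) => keys.delete(e.key));

function animate(): void {
  requestAnimationFrame(animate);
  if (keys.has('ArrowLeft')) plane.position.x -= 0.1;
  if (keys.has('ArrowRight')) plane.position.x += 0.1;
  // Bank the plane slightly in the direction of travel.
  plane.rotation.z = keys.has('ArrowLeft') ? 0.3 : keys.has('ArrowRight') ? -0.3 : 0;
  renderer.render(scene, camera);
}
animate();
```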
That simple prompt shows how AI agents like Manus represent a new avenue for innovation, compared with chatbots and large language models (LLMs) – the technology underpinning generative AI assistants like ChatGPT.
AI agents are programs capable of autonomously performing tasks on behalf of a user or another system. Essentially, these agents create a plan of specific tasks and subtasks to complete a goal using their available resources.
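As a rough illustration of that plan-and-execute pattern, the sketch below breaks a goal into steps and dispatches each one to a tool, feeding results back as context. The planner and tools here are hypothetical stubs invented for this article, not a description of how Manus itself is built.

```typescript
// Hypothetical sketch of a plan-and-execute agent loop; llmPlan, webSearch and
// runCode are stand-in stubs, not Manus's real components.
type Step = { tool: 'webSearch' | 'runCode'; input: string };

// Stand-in planner: in a real agent this would call an LLM to decompose the goal.
function llmPlan(goal: string): Step[] {
  return [
    { tool: 'webSearch', input: `background on: ${goal}` },
    { tool: 'runCode', input: `// generate an artifact for: ${goal}` },
  ];
}

// Stand-in tools that a real agent would replace with browsers, sandboxes, etc.
const tools: Record<Step['tool'], (input: string) => string> = {
  webSearch: (q) => `search results for "${q}"`,
  runCode: (src) => `output of executing: ${src}`,
};

// The agent loop: plan, execute each step, record what was learned.
function runAgent(goal: string): string[] {
  const notes: string[] = [];
  for (const step of llmPlan(goal)) {
    const result = tools[step.tool](step.input);
    notes.push(`${step.tool}: ${result}`); // results feed back as working context
  }
  return notes;
}

console.log(runAgent('code a threejs game where you control a plane'));
```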

Related Articles

Founder of China's Unitree sees lack of advanced AI as biggest roadblock to mass robot use

South China Morning Post • 2 days ago

The biggest obstacle to the mass deployment of robots is the lack of advanced artificial intelligence, according to Wang Xingxing, founder of China's leading robotics company Unitree Robotics.

AI had not reached a critical threshold necessary for widespread adoption, Wang said in an interview published on Wednesday by the People's Daily, the official newspaper of the Communist Party. He reiterated his earlier assertion that the 'ChatGPT moment' for the robotics industry had yet to arrive.

'This is a common challenge worldwide, and many people are working to overcome it,' he said. 'But breakthroughs can happen at any time … issues that currently seem insurmountable could be suddenly resolved in the future.'

Wang's remarks come as China's robotics sector – an area also known as embodied intelligence – is gaining momentum, attracting interest from the government, various industries and the public. Unitree garnered national attention after its humanoid robots gave a dance performance during state broadcaster China Central Television's annual Lunar New Year's Eve gala. Weeks later, Wang became the youngest entrepreneur to join Chinese President Xi Jinping's high-profile business symposium in February.

Visitors watch humanoid robots fight at the Unitree booth during the World Robot Conference in Beijing. Photo: cnsphoto via Reuters

Wang, 35, said the heightened attention was beneficial for the entire industry, adding that Unitree and other robotics businesses performed well in the first half of the year.

China rejects OpenAI's GPT-5 trademark application in blow to US firm's branding efforts

South China Morning Post • 3 days ago

Chinese authorities have rejected OpenAI's attempt to register the name of its new flagship artificial intelligence model, GPT-5, as a trademark on the mainland, where the ChatGPT creator's products and services are not officially available.

According to records on the website of the Trademark Office, under the China National Intellectual Property Administration, the US firm's application through subsidiary OpenAI OpCo was denied and is pending appeal.

That was the latest rejection handed by the regulator to OpenAI. Last year, it denied a series of applications filed by the US start-up between March and November 2023 to register ChatGPT and GPT – covering AI models GPT-4, GPT-5, GPT-6 and GPT-7 – as trademarks on the mainland. These are still pending appeal.

The Trademark Office's recent refusal dealt another blow to San Francisco-based OpenAI's efforts to protect its brand in the fast-developing and highly competitive AI industry. In February 2024, the United States Patent and Trademark Office denied OpenAI's applications to trademark ChatGPT and GPT. 'Registration is refused because the applied-for mark merely describes a feature, function, or characteristic of applicant's goods and services,' the regulator's ruling said.

OpenAI did not immediately respond to a request for comment on Tuesday.

GPT-5: Has AI just plateaued?

AllAfrica • 3 days ago

OpenAI CEO Sam Altman says GPT-5 is PhD-level general intelligence but that's not clearly the case. Photo: Aflo Co Ltd / Alamy

OpenAI claims that its new flagship model, GPT-5, marks 'a significant step along the path to AGI' – that is, the artificial general intelligence that AI bosses and self-proclaimed experts often claim is around the corner. According to OpenAI's own definition, AGI would be 'a highly autonomous system that outperforms humans at most economically valuable work.'

Setting aside whether this is something humanity should be striving for, OpenAI CEO Sam Altman's arguments for GPT-5 being a 'significant step' in this direction sound remarkably unspectacular. He claims GPT-5 is better at writing computer code than its predecessors. It is said to 'hallucinate' a bit less, and is a bit better at following instructions – especially when they require following multiple steps and using other software. The model is also apparently safer and less 'sycophantic', because it will not deceive the user or provide potentially harmful information just to please them.

Altman does say that 'GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.' Yet it still doesn't have a clue about whether anything it says is accurate, as you can see from its attempt below to draw a map of North America.

Sam Altman: With GPT-5, you'll have a PhD-level expert in any area you need
Me: Draw a map of North America, highlighting countries, states, and capitals
GPT 5: *
Sam Altman forgot to mention that the PhD-level expert used ChatGPT to cheat on all their geography classes…
— Luiza Jarovsky, PhD (@LuizaJarovsky) August 10, 2025

It also cannot learn from its own experience, or achieve more than 42% accuracy on a challenging benchmark like 'Humanity's Last Exam', which contains hard questions on all kinds of scientific (and other) subject matter. This is slightly below the 44% that Grok 4, the model recently released by Elon Musk's xAI, is said to have achieved.

The main technical innovation behind GPT-5 seems to be the introduction of a 'router'. This decides which model of GPT to delegate to when asked a question, essentially asking itself how much effort to invest in computing its answers (then improving over time by learning from feedback about its previous choices). The options for delegation include the previous leading models of GPT and also a new 'deeper reasoning' model called GPT-5 Thinking.

It's not clear what this new model actually is. OpenAI isn't saying it is underpinned by any new algorithms or trained on any new data (since all available data was pretty much being used already). One might therefore speculate that this model is really just another way of controlling existing models with repeated queries and pushing them to work harder until it produces better results.

It was back in 2017 when researchers at Google found out that a new type of AI architecture was capable of capturing tremendously complex patterns within long sequences of words that underpin the structure of human language. By training these so-called large language models (LLMs) on large amounts of text, they could respond to prompts from a user by mapping a sequence of words to their most likely continuation in accordance with the patterns present in the dataset. This approach to mimicking human intelligence became better and better as LLMs were trained on larger and larger amounts of data – leading to systems like ChatGPT.
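As a toy illustration of that 'most likely continuation' idea, the sketch below hard-codes a tiny lookup of prompts and weighted next words. Real LLMs learn billions of parameters rather than storing an explicit table, and the vocabulary and probabilities here are invented for the example.

```typescript
// Toy illustration only: an explicit table of prompts and weighted continuations.
// Real LLMs learn this mapping implicitly from training data.
const continuations: Record<string, Record<string, number>> = {
  'the cat sat on the': { mat: 0.6, sofa: 0.3, keyboard: 0.1 },
  'once upon a': { time: 0.9, hill: 0.1 },
};

// Return the most probable next word for a prompt the table knows about.
function nextWord(prompt: string): string | undefined {
  const options = continuations[prompt];
  if (!options) return undefined;
  return Object.entries(options).sort((a, b) => b[1] - a[1])[0][0];
}

console.log(nextWord('the cat sat on the')); // "mat"
```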
Ultimately, these models just encode a humongous table of stimuli and responses. A user prompt is the stimulus, and the model might just as well look it up in a table to determine the best response. Considering how simple this idea seems, it's astounding that LLMs have eclipsed the capabilities of many other AI systems – if not in terms of accuracy and reliability, certainly in terms of flexibility and usability.

The jury may still be out on whether these systems could ever be capable of true reasoning, or understanding the world in ways similar to ours, or keeping track of their experiences to refine their behaviour correctly – all arguably necessary ingredients of AGI.

In the meantime, an industry of AI software companies has sprung up that focuses on 'taming' general purpose LLMs to be more reliable and predictable for specific use cases. Having studied how to write the most effective prompts, their software might prompt a model multiple times, or use numerous LLMs, adjusting the instructions until it gets the desired result. In some cases, they might 'fine-tune' an LLM with small-scale add-ons to make them more effective.

OpenAI's new router is in the same vein, except it's built into GPT-5. If this move succeeds, the engineers of companies further down the AI supply chain will be needed less and less. GPT-5 would also be cheaper to users than its LLM competitors because it would be more useful without these embellishments.

At the same time, this may well be an admission that we have reached a point where LLMs cannot be improved much further to deliver on the promise of AGI. If so, it will vindicate those scientists and industry experts who have been arguing for a while that it won't be possible to overcome the current limitations in AI without moving beyond LLM architectures.

OpenAI's new emphasis on routing also harks back to the 'meta reasoning' that gained prominence in AI in the 1990s, based on the idea of 'reasoning about reasoning.' Imagine, for example, you were trying to calculate an optimal travel route on a complex map. Heading off in the right direction is easy, but every time you consider another 100 alternatives for the remainder of the route, you will likely only get an improvement of 5% on your previous best option. At every point of the journey, the question is how much more thinking it's worth doing. This kind of reasoning is important for dealing with complex tasks by breaking them down into smaller problems that can be solved with more specialized components. This was the predominant paradigm in AI until the focus shifted to general-purpose LLMs.

No more gold rush? Photo: JarTee via The Conversation

It is possible that the release of GPT-5 marks a shift in the evolution of AI which, even if it is not a return to this approach, might usher in the end of creating ever more complicated models whose thought processes are impossible for anyone to understand. Whether that could put us on a path toward AGI is hard to say. But it might create an opportunity to move towards creating AIs we can control using rigorous engineering methods. And it might help us remember that the original vision of AI was not only to replicate human intelligence, but also to better understand it.

Michael Rovatsos is professor of artificial intelligence, University of Edinburgh. This article is republished from The Conversation under a Creative Commons license. Read the original article.
