
Chinese scientists claim AI is capable of spontaneous human-like understanding

Yahoo

14 hours ago


Chinese researchers claim to have found evidence that large language models (LLMs) can comprehend and process natural objects much as humans do, and that they do so spontaneously, without being explicitly trained for it. According to the researchers, from the Chinese Academy of Sciences and South China University of Technology in Guangzhou, some AIs (such as ChatGPT or Gemini) can mirror a key part of human cognition: sorting information.

Their study, published in Nature Machine Intelligence, investigated whether LLMs can develop cognitive processes similar to human object representation, or, in other words, whether LLMs can recognize and categorize things based on function, emotion, environment, and so on.

To find out, the researchers gave AIs 'odd-one-out' tasks using either text (for ChatGPT-3.5) or images (for Gemini Pro Vision), collecting 4.7 million responses across 1,854 natural objects (such as dogs, chairs, apples, and cars). They found that the models created sixty-six conceptual dimensions to organize objects, much as humans do. These dimensions extended beyond basic categories (such as 'food') to complex attributes, including texture, emotional relevance, and suitability for children.

The scientists also found that multimodal models (combining text and image) aligned even more closely with human thinking, as they process visual and semantic features simultaneously. Furthermore, brain scan (neuroimaging) data revealed an overlap between how AI models and the human brain respond to objects.

The findings appear to provide evidence that AI systems might be capable of genuinely 'understanding' in a human-like way, rather than just mimicking responses. They also suggest that future AIs could have more intuitive, human-compatible reasoning, which is essential for robotics, education, and human-AI collaboration.
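The 'odd-one-out' paradigm described above can be illustrated with a toy sketch: given three objects, pick the one least similar to the other two. The feature vectors and helper names below are invented for illustration only; the study queried the models themselves rather than hand-coded features.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def odd_one_out(items):
    # `items` is a list of three (name, feature_vector) pairs.
    # The odd one out is the item whose summed similarity
    # to the other two is lowest.
    names = [name for name, _ in items]
    vecs = [vec for _, vec in items]
    scores = [sum(cosine(vecs[i], vecs[j]) for j in range(3) if j != i)
              for i in range(3)]
    return names[scores.index(min(scores))]

# Hypothetical features: [is_animal, is_edible, is_furniture]
triplet = [
    ("dog",   [1.0, 0.0, 0.0]),
    ("cat",   [1.0, 0.1, 0.0]),
    ("chair", [0.0, 0.0, 1.0]),
]
print(odd_one_out(triplet))  # prints "chair"
```

Aggregating millions of such triplet choices lets researchers recover the dimensions a system uses to organize objects, which is how the sixty-six conceptual dimensions were derived.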
However, it is also important to note that LLMs do not understand objects the way humans do, emotionally or experientially. AIs work by recognizing patterns in language or images that often correspond closely to human concepts. While that may look like 'understanding' on the surface, it is not grounded in lived experience or sensory-motor interaction. Likewise, although some parts of AI representations may correlate with brain activity, this does not mean the models can 'think' like humans or share the brain's architecture. If anything, they are better thought of as a sophisticated facsimile of human pattern recognition than as a thinking machine: a mirror made from millions of books and pictures, reflecting learned patterns back at the user.

Still, the study's findings suggest that LLMs and humans might be converging on similar functional patterns, such as organizing the world into categories. This challenges the view that AIs can only 'appear' smart by repeating patterns in their data. If, as the study argues, LLMs are starting to build conceptual models of the world independently, we could be edging closer to artificial general intelligence (AGI): a system that can think and reason across many tasks the way a human can.

You can access the study in the journal Nature Machine Intelligence.
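The reported overlap between model representations and neuroimaging responses is typically measured by comparing similarity structures across the two systems: if both group the same objects together, their pairwise similarity patterns correlate. A toy sketch of that idea, with invented vectors and a plain Pearson correlation (not the study's actual analysis):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def pairwise_sims(vectors):
    # Flattened upper triangle of the pairwise similarity matrix.
    n = len(vectors)
    return [cosine(vectors[i], vectors[j])
            for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    # Pearson correlation between two equal-length lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented "representations" of the same four objects in two systems,
# e.g. a model's embeddings vs. hypothetical neural response patterns.
model_vecs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
brain_vecs = [[2.0, 0.0, 1.0], [1.8, 0.2, 1.0],
              [0.0, 2.0, 1.0], [0.2, 1.8, 1.0]]

alignment = pearson(pairwise_sims(model_vecs), pairwise_sims(brain_vecs))
print(alignment)  # close to 1.0: both systems group the objects the same way
```

A high correlation here means only that the two systems carve up the objects similarly; as the passage above notes, it says nothing about shared architecture or experience.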
