Meta AI takes first step to superintelligence — and Zuckerberg will no longer release the most powerful systems to the public
Researchers at Meta claim to have observed the company's artificial intelligence (AI) systems improving themselves without human input.
"Over the last few months we have begun to see glimpses of our AI systems improving themselves," Meta CEO Mark Zuckerberg said in a policy paper published July 30 on the company's website. "The improvement is slow for now, but undeniable."
Zuckerberg added that self-improving AI is the first step towards achieving artificial superintelligence (ASI) — an artificial intelligence that is capable of surpassing the cognitive abilities of humans.
Broadly speaking, AI researchers group AI capabilities into three strata. Current AI models can achieve extraordinary superhuman feats, like predicting protein structures, but these capabilities are extremely narrow: the system outpaces humans in one specific area while lacking the general intelligence needed to apply that ability across different disciplines.
The next tier, called artificial general intelligence (AGI), refers to models that can understand, learn and adapt in much the same way as the human brain.
ASI is the final state that Zuckerberg referenced in the blog post, and concerns models that have evolved to far exceed human potential — and could improve themselves at an exponential rate, leading to an "intelligence explosion."
Scientists refer to the hypothetical moment that we achieve AGI as the technological singularity.
Related: AI is entering an 'unprecedented regime.' Should we stop it — and can we — before it destroys us?
Zuckerberg is not the first to catch sight of AI self-improvement. In October 2024, researchers at the University of California, Santa Barbara published a paper to the preprint arXiv database centered on the notion of self-improvement.
In it, they defined, created and tested an AI framework based on the idea of a Gödel Machine — a theoretical device capable of improving itself by rewriting its own code and instructions. Importantly, the machine would only implement a change if it could produce a formal proof that the change would be beneficial.
Related: AI could soon think in ways we don't even understand — evading our efforts to keep it aligned — top AI scientists warn
The team experimented with an AI framework based on that concept and demonstrated that their Gödel Agent could improve its own performance on various tasks, including coding, science, math and reasoning.
Most AI models are not empowered to rewrite their own code. The Gödel Agent, by contrast, could access not only its own entire codebase but also the code used to develop improvements, and it could show that those improvements were beneficial.
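The acceptance rule is the heart of the idea: a self-modification is adopted only once it is verified to help. Below is a minimal, hypothetical Python sketch of that loop. The parity task, the benchmark and the propose_rewrite step are all invented here for illustration, and an empirical score check stands in for the formal proof of benefit that the original Gödel Machine formulation demands.

```python
import random

def benchmark(policy):
    """Score a candidate policy on a fixed toy task suite
    (guessing the parity of integers). Higher is better.
    Stands in for the coding/math/reasoning tasks used to
    verify improvements in the actual study."""
    rng = random.Random(0)  # fixed seed: same tasks for every candidate
    tasks = [rng.randint(0, 100) for _ in range(200)]
    return sum(1 for t in tasks if policy(t) == t % 2)

def current_policy(task):
    """The agent's current 'code': a naive guess."""
    return 0  # always guesses 'even'

def propose_rewrite():
    """Stand-in for the agent generating a modified version of its
    own code. In the real framework the agent edits its own source;
    here we simply hand back a better function."""
    def rewritten_policy(task):
        return task % 2  # computes parity directly
    return rewritten_policy

# Gödel-machine-style acceptance rule: only adopt a self-modification
# once it is verified to be beneficial. The original formulation
# demands a formal proof; this sketch substitutes an empirical check.
policy = current_policy
candidate = propose_rewrite()
if benchmark(candidate) > benchmark(policy):
    policy = candidate  # self-modification accepted
    print("Rewrite accepted: verified improvement.")
else:
    print("Rewrite rejected: no demonstrated benefit.")
```

In the study described above, the agent rewrote its own code directly, and the verification came from performance on coding, science, math and reasoning tasks rather than a toy parity test.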
The AI consistently showed better performance in key areas than the human-designed agents to which it was compared, according to the study.

Zuckerberg said that ASI could represent a transformative step in humanity's technological progress, and that it could lead to the "discovery of new things that aren't imaginable today."
That said, he added that Meta would be far more careful about which models the company chooses to release to the public under an open-source framework — meaning anybody could have access to them.
"I am extremely optimistic that superintelligence will help humanity accelerate our pace of progress. But perhaps even more important is that superintelligence has the potential to begin a new era of personal empowerment where people will have greater agency to improve the world in the directions they choose," Zuckerberg wrote.
"As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be."
Related Articles


Business Wire
AI-Coding Becomes a Risky Norm as Use of AI-Coding Assistants Takes Off and More Than 80% of Organizations Ship Vulnerable Code
Checkmarx, the leader in agentic AI-powered application security, today released the results of its annual survey, "Future of Application Security in the Era of AI," offering a candid assessment of how AI-accelerated development is reshaping the risk landscape and how to prepare for the year ahead. The study surveyed more than 1,500 CISOs, AppSec managers and developers across North America, Europe and Asia-Pacific to understand how organizations are adapting to a world where software is increasingly written by machines.

Risky business: the global survey of tech and security leaders found that only 18% of organizations have policies governing AI use, and that 81% knowingly ship vulnerable code.

The findings paint a stark picture: AI-generated code is becoming mainstream, but governance is lagging. Half of respondents already use AI coding assistants, and 34% admit that more than 60% of their code is AI-generated. Yet only 18% have policies governing this use. The growing adoption of AI coding assistants is eroding developer ownership and expanding the attack surface.

The research also shows that business pressure is normalizing risky practices. Eighty-one percent of organizations knowingly ship vulnerable code, and 98% experienced a breach stemming from vulnerable code in the past year, a sharp rise from 91% in 2024. Within the next 12 to 18 months, nearly a third (32%) of respondents expect Application Programming Interface (API) breaches via shadow APIs or business logic attacks.

Despite these realities, fewer than half of respondents report deploying foundational security tools, such as dynamic application security testing (DAST) or infrastructure-as-code scanning. And while DevSecOps is widely discussed industry-wide, only half of organizations surveyed actively use its core tools, and just 51% of North American organizations report adopting DevSecOps.

"The velocity of AI-assisted development means security can no longer be a bolt-on practice. It has to be embedded from code to cloud," said Eran Kinsbruner, vice president of portfolio marketing. "Our research shows that developers are already letting AI write much of their code, yet most organizations lack governance around these tools. Combine that with the fact that 81% knowingly ship vulnerable code and you have a perfect storm. It's only a matter of time before a crisis is at hand."

The report outlines six strategic imperatives for closing the application security readiness gap: move from awareness to action, embed "code-to-cloud" security, govern AI use in development, operationalize security tools, prepare for agentic AI in AppSec, and cultivate a culture of developer empowerment.

Kinsbruner added, "To stay ahead, organizations must operationalize security tooling that is focused on prevention. They need to establish policies for AI usage and invest in agentic AI that can automatically analyze and fix issues in real time. AI-generated code will continue to proliferate; secure software will be the competitive differentiator in the coming years."

The release of this report follows Checkmarx's announcement of general availability of its Developer Assist agent, with extensions to top AI-native Integrated Development Environments (IDEs) including Windsurf by Cognition, Cursor, and GitHub Copilot. This new agent—the first in a family of agentic-AI tools to enhance security for developers, AppSec leaders, and CISOs alike—delivers real-time, context-aware issue identification and guidance to developers as they code, for autonomous prevention.

Download the full "Future of Application Security in the Era of AI" report at the Checkmarx website to learn how organizations can navigate the AI-accelerated risk landscape and build secure-by-default development practices.

About Checkmarx

Checkmarx is the leader in agentic, AI-powered, cloud-native application security, empowering the world's largest development organizations with real-time scanning and closed-loop remediation that boost developer productivity on security tasks by up to 50%. Built on the powerful Checkmarx One platform, which scans over six trillion lines of code each year, Checkmarx is designed for large-scale, hybrid human and AI-assisted development teams. Checkmarx. Always Ready to Run. Follow Checkmarx on LinkedIn, YouTube, and X.


Digital Trends
Samsung's smart glasses could arrive next year, but a surprising rival has beaten it to market
Samsung's much-rumored smart glasses will enter the market in 2026, according to a new report, but the South Korean giant has just been beaten to market by a Taiwanese rival: HTC.

Yes, HTC. The company that for several years brought us beautiful phones including the HTC One, One M8 and One M9, before withdrawing from the market to focus on its Vive VR headset, is now making a splash in the AI smart glasses arena.

With Meta and Snapchat having somewhat popularized the smart glasses form factor with products like the Ray-Ban Meta and Snapchat Specs, serious competition is beginning to come to the fore.

Samsung isn't alone

According to a report from Korean publication SE Daily (via GSMArena), Samsung will announce its first set of smart glasses in late 2026, meaning we're likely still a year off an eyewear launch from the brand. The report claims Samsung's glasses will be similar in design to the Ray-Ban Meta specs, and will feature speakers, a microphone and a camera. It also notes that these glasses are separate from the AR specs (with a display) which Samsung is said to be working on in partnership with Google.

And Samsung isn't the only brand tipped to make a smart glasses splash in 2026, after Google demoed its own Android XR glasses during the I/O 2025 keynote earlier this year – the first of which, made by partner Warby Parker, could arrive next year.

But here comes a new set of smart glasses from an established (if somewhat forgotten) brand in 2025.

Meet the HTC Vive Eagle

Billed as "AI smart glasses," the HTC Vive Eagle features a 12MP ultra-wide camera with an LED indicator, a 235mAh battery and magnetic fast charging in a 49g frame. There are speakers too, but the open-ear design means you won't have anything in or covering your ears.

You also get HTC's Vive AI voice assistant, which supports LLM platforms including ChatGPT and Google's Gemini, allowing you to control features such as the camera and music playback with spoken commands, as well as ask more complex questions. There's also support for real-time translation in 13 languages: Arabic, Traditional Chinese, English, French, German, Greek, Italian, Japanese, Portuguese, Spanish, Korean, Thai and Turkish.

HTC says the Vive Eagle glasses can deliver 4.5 hours of continuous music playback on a single charge, and last up to 36 hours on standby.

There is a catch, though: the HTC Vive Eagle is only available in Taiwan, where pre-orders are now open, with a release date set for September 1. There are four colors to choose from – Berry, Coffee, Grey, and Black – and they're priced at NT$15,600 (around $500).

There's currently no word on whether we'll see the smart glasses launched in more countries, but watch this space as the market starts to heat up. Oakley dipped its toe in the smart glasses water only a couple of months ago.


CBS News
AI and the classroom - what teachers are saying as students head back to school
Sometimes things change so fast that it's hard to keep up. That is exactly the case with artificial intelligence, or AI, and nowhere is it more of a challenge than for teachers and families as students head back to class.

If you're of a certain age, you remember when math teachers fought against calculators in the classroom; now AI is the modern-day equivalent. What AI can do is bound only by imagination.

"It is so rapidly changing that I don't think anyone can exactly tell you how it will be used even six months from now," said Dr. Linda Hippert, associate dean of the School of Education at Point Park University. "That's how quickly it is changing."

Dr. Hippert said AI is another tool educators have to embrace and learn to use so they can help students use it better. She also added that a student using AI isn't cheating any more than if they used Google or any other information resource.

"They begin there, but they can't end there," Dr. Hippert said. "They need to look beyond the information that AI is giving them."

Even AI tools admit that some of their information comes from open sources and may be wrong, but prohibiting their use would be hypocritical. "Our educators are currently using AI, and there are many tools that are helpful to them to become better teachers in the classroom," she said.

With AI instruction, however, there must come an understanding of ethics and integrity. "If I put something in writing, and it's not my own thoughts, I should be citing my source for that," Dr. Hippert said. "That's what we are teaching."

Dr. Hippert said the world of AI is growing so fast that every time she talks to her colleagues, they're using some other, newer platform. Whatever form of AI someone is using, it's great for research and for helping with diagnosis in the medical field. Ultimately, though, the information must be processed through the human touch and the human brain to make decisions.