Meta Swears This Time Is Different

The Atlantic · July 18, 2025
Mark Zuckerberg was supposed to win the AI race. Eons before ChatGPT and AlphaGo, when OpenAI did not exist and Google had not yet purchased DeepMind, there was FAIR: Facebook AI Research. In 2013, Facebook tapped one of the 'godfathers' of AI, the legendary computer scientist Yann LeCun, to lead its new division. That year, Zuckerberg personally traveled to one of the world's most prestigious AI conferences to announce FAIR and recruit top scientists to the lab.
FAIR has since made a number of significant contributions to AI research, including in the field of computer vision. Although the division was not focused on advancing Facebook's social-networking products per se, the premise seemed to be that new AI tools could eventually support the company's core businesses, perhaps by improving content moderation or image captioning. But for years, Facebook didn't develop AI as a stand-alone, consumer-facing product. Now, in the era of ChatGPT, the company lags behind.
Facebook, now called Meta, trails not just OpenAI and Google but also newer firms such as Anthropic, xAI, and DeepSeek—all of which have launched advanced generative-AI models and chatbots over the past few years. In response, Zuckerberg's company quickly launched its own flagship model, Llama, but it has struggled relative to its competitors. In April, Meta proudly rolled out a Llama 4 model that Zuckerberg called a 'beast'—but after an experimental version of the model scored second in the world on a widely used benchmarking test, the version released to the public ranked only 32nd. In the past year, every other top AI lab has released new 'reasoning' models that, thanks to a new training paradigm, are generally much better than previous chatbots at advanced math and coding problems; Meta has yet to deliver its own.
So, a dozen years after building FAIR, Meta is effectively starting over. Last month, Zuckerberg went on a new recruiting spree. He hired Alexandr Wang, the 28-year-old ex-head of the start-up Scale, as chief AI officer to lead yet another division—dubbed Meta Superintelligence Labs, or MSL—and has reportedly been personally asking top AI researchers to join. The goal of this redo, Zuckerberg wrote in an internal memo to employees, is 'to build towards our vision: personal superintelligence for everyone.' Meta is reportedly attempting to lure top researchers by offering upwards of $100 million in compensation. (The company has contested this reporting; for comparison, LeBron James was paid less than $50 million last year.) More than a dozen researchers from rival companies, mainly OpenAI, have joined Meta's new AI lab so far. Zuckerberg also announced that Meta plans to spend hundreds of billions of dollars to build new data centers to support its pursuit of superintelligence. FAIR will still exist but within the new superintelligence team, meaning Meta has both a chief AI 'scientist' (LeCun) and a chief AI 'officer' (Wang). At the same time, MSL is cloistered off from the rest of Meta in an office space near Zuckerberg himself, according to The New York Times.
When I reached out to Meta to ask about its 'superintelligence' overhaul, a spokesperson pointed me to Meta's most recent earnings call, in which Zuckerberg described 'how AI is transforming everything we do' and said that he is 'focused on building full general intelligence.' I also asked about comments made by an outgoing AI researcher at Meta: 'You'll be hard pressed to find someone that really believes in our AI mission,' the researcher wrote in an internal memo, reported in The Information, adding that 'to most, it's not even clear what our mission is.' The spokesperson told me, in response to the memo, 'We're excited about our recent changes, new hires in leadership and research, and continued work to create an ideal environment for revolutionary research.'
Meta's superintelligence group may well succeed. Small, well-funded teams have done so before: After a group of former OpenAI researchers peeled off to form Anthropic a few years ago, they quickly emerged as a top AI lab. Elon Musk's xAI was even later to the race, but its Grok chatbot is now one of the most technically impressive AI products around (egregious racism and anti-Semitism notwithstanding). And regardless of how far Meta has fallen behind in the AI race, the company has proved its ability to endure: Meta's stock reached an all-time high earlier this year, and it made more than $17 billion in profit from January through the end of March. Billions of people around the world use its social apps.
The company's approach is also different from that of its rivals, which frequently describe generative AI in ideological, quasi-religious terms. Executives at OpenAI, Anthropic, and Google DeepMind are all prone to writing long blog posts or giving long interviews about the future they hope to usher in, and they harbor long-standing philosophical disagreements with one another. Zuckerberg, by comparison, does not appear interested in using AI to transform the world. In his most recent earnings call, he focused on five areas AI is influencing at Meta: advertising, social-media content, online commerce, the Meta AI assistant, and devices, notably smart glasses. The grandest future he described to investors was trapped in today's digital services and conventions: 'We're all going to have an AI that we talk to throughout the day—while we're browsing content on our phones, and eventually as we're going through our days with glasses—and I think this will be one of the most important and valuable services that has ever been created.' Zuckerberg also said that AI-based updates to content recommendations on Facebook, Instagram, and Threads have increased the amount of time that users spend on each platform. In this framework, superintelligence may just be a way to keep people hooked on Meta's legacy social-media apps and devices.
Initially, it seemed that Meta would take a different path. When the company first entered the generative-AI race, a few months after the launch of ChatGPT, the firm bet big on 'open source' AI software, making its Llama model free for nearly anyone to access, modify, and use. Meta touted this strategy as a way to turn its AI models into an industry standard that would enable widespread innovation and eventually improve Meta's AI offerings. Because open-source software is popular among developers, Zuckerberg claimed, this strategy would help attract top AI talent.
Whatever industry standards Zuckerberg was hoping to set, none have come to fruition. In January, the Chinese company DeepSeek released an AI model that was more capable than Llama despite having been developed with far fewer resources. Catching up to OpenAI may now require Meta to leave behind the company's original, bold, and legitimately distinguishing bet on 'open' AI. According to the Times, Meta has internally discussed the possibility of stopping work on its most powerful open-source model ('Behemoth') in favor of a closed model akin to those from OpenAI, Anthropic, and Google. In his memo to employees, Zuckerberg said that Meta will continue developing Llama while also exploring 'research on our next generation of models to get to the frontier in the next year or so.' The Meta spokesperson pointed me to a 2024 interview in which Zuckerberg explicitly said that although the firm is generally 'pro open source,' he is not committed to releasing all future Meta models in this way.
While Zuckerberg figures out the path forward, he will also have to contend with the basic reality that generative AI may alienate some of his users. The company rolled back an early experiment with AI characters after human users found that the bots could easily go off the rails (one such bot, a self-proclaimed 'Black queer momma of 2' that talked about cooking fried chicken and celebrating Kwanzaa, tied itself in knots when a Washington Post columnist asked about its programming); the firm's stand-alone AI app released earlier this year also led many users to unwittingly share ostensibly private conversations to the entire platform. AI-generated media has overwhelmed Facebook and Instagram, turning these platforms into oceans of low-quality, meaningless content known as 'AI slop.'
Still, with an estimated 3.4 billion daily users across its platforms, it may be impossible for Meta to fail. Zuckerberg might appear to be burning hundreds of millions of dollars on salaries and much more than that on new hardware, but it's all part of a playbook that has worked before. When Instagram and WhatsApp emerged as potential rivals, he bought them. When TikTok became dominant, Meta added a short-form-video feed to Instagram; when Elon Musk turned Twitter into a white-supremacist hub, Meta launched Threads as an alternative. Quality and innovation have not been the firm's central proposition for many, many years. Before the AI industry obsessed over scaling up its chatbots, scale was Meta's greatest and perhaps only strength: It dominated the market by spending anything to, well, dominate the market.

Related Articles

China proposes global cooperation body on AI

Yahoo · 11 minutes ago

SHANGHAI, July 27, 2025 /PRNewswire/ -- The Chinese government has proposed the establishment of a world AI cooperation organization as part of its efforts to bolster open, inclusive and equitable artificial intelligence development and governance globally. Premier Li Qiang announced the initiative when addressing the opening ceremony of the 2025 World AI Conference and High-Level Meeting on Global AI Governance in Shanghai.

Li emphasized the need for collaborative approaches to global AI governance to ensure that intelligent technologies are developed for the good of all. More than 1,000 high-level representatives from upwards of 40 countries and international organizations attended the opening ceremony of the three-day event, themed "Global Solidarity in the AI Era".

Li said that the risks and challenges brought by AI development, such as machine hallucinations, deep fakes and AI proliferation, have aroused widespread attention. There is an urgent need to further consolidate a societal consensus on how to achieve a balance between development and security in the AI sector, he said. Li said greater emphasis should be placed on collective governance to ensure that AI ultimately benefits humanity. Given that global AI governance appears fragmented, it is essential to enhance coordination and alignment among countries around the world to establish a framework and rules for global AI governance with a broad consensus, he added.

Noting that China places great importance on global AI governance and actively participates in promoting multilateral and bilateral cooperation in this regard, the premier said the nation is willing to offer more Chinese solutions to the international community and contribute more Chinese wisdom to global AI governance. In 2023, China proposed the Global AI Governance Initiative to promote an open, inclusive and fair approach to the development and governance of AI technologies.
Last year, China proposed the AI Capacity-Building Action Plan for Good and for All to bridge the AI and digital divides, and to ensure the Global South benefits equitably from AI development. China actively promotes open source development, and is willing to collaborate with countries around the world to promote progress in software and hardware technologies, intensify open source initiatives, and collectively propel AI development to higher levels, Li said.

Also on July 26, the conference adopted the Global AI Governance Action Plan, which calls on all parties to work in solidarity to advance AI development and governance.

Learn How to Use ChatGPT to Automate Your Business

Entrepreneur · 13 minutes ago

Streamline operations, boost productivity, and future-proof your skills with 25+ hours of hands-on training for just $19.97.

Disclosure: Our goal is to feature products and services that we think you'll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

Smart professionals aren't just learning to use AI — they're using it to take back their time. If your day is filled with repetitive tasks, disjointed workflows, or too many tools that don't talk to each other, this ChatGPT and Automation E-Degree could be the pivot point you need.

Designed for entrepreneurs, freelancers, and business pros, this $19.97 course bundle covers more than 25 hours of hands-on training across 12 lectures. You'll go beyond the basics of ChatGPT and explore how more than 20 top AI tools can help you solve real-world challenges in business, marketing, and development. From building customized GPT workflows to automating routine processes and visualizing data in a more impactful way, the content is tailored to actual use cases, not just theory.

You'll learn how to communicate better with AI tools, use automation to streamline your workload, and even discover how generative AI can help with creativity for your brand or product. Even better, this deal includes lifetime access. That means you can return to the material whenever you're ready to tackle a new project or apply what you've learned in a new context.

Whether you're looking to sharpen your tech skills, improve productivity, or future-proof your career, this course helps you get there with tools and strategies you can implement right away. Grab the full ChatGPT and Automation E-Degree on sale for $19.97 and start building a smarter, faster business. StackSocial prices subject to change.

AI Is Taking Over Your Search Engine. Here's a Look Under the Hood

CNET · 13 minutes ago

For decades, the way we find information on the internet changed only in small ways. Doing a traditional Google search today doesn't feel all that different from when, in the 1990s, you would Ask Jeeves. Sure, a lot has changed under the hood: the results are likely far more relevant and the interface has some new features. But you're still typing in keywords and getting a list of websites that might hold the answer.

That way of searching, it seems, is starting to go the way of AltaVista, may it rest in peace. In May, Google announced the rollout of its new AI Mode for search, which uses a generative AI model (based on the company's Gemini large language model) to give you conversational answers that feel a lot more like having a chat and less like combing through a set of links. Other companies, like Perplexity and OpenAI, have also deployed search tools based on gen AI.

These tools, which merge the functionality of a chatbot and a traditional search engine, are quickly gaining steam. You can't even escape AI by doing just a regular Google search: AI Overviews have been popping up atop those results pages since last year, and about one in five searches now show this kind of summary, according to a Pew Research Center report. I'm surprised it's not even more than that.

These newfangled search tools feel a lot like your typical chatbot, like ChatGPT, but they do things a little differently. Those differences share a lot of DNA with their search engine ancestors. Here's a look under the hood at how these new tools work, and how you can use them effectively.

[Video: Everything Announced at Google I/O 2025]

Search engines vs. AI search: What's the difference?

The underlying technology of a search engine is kinda like an old library card catalog. The engine uses bots to crawl the vast expanses of the internet to find, analyze and index the endless number of web pages. Then, when you do a search to ask who played Dr. Angela Hicks on ER, because you're trying to remember what else you've seen her in, it will return pages for things like the cast of ER or the biography of the actor, CCH Pounder. From there, you can click through those pages, whether they're on Wikipedia or IMDB or somewhere else, and learn that you know CCH Pounder from her Emmy-winning guest appearance on an episode of The X-Files.

"When customers have a certain question, they can type that question into Google and then Google runs their ranking algorithms to find what content is the best for a particular query," Eugene Levin, president of the marketing and SEO tool company Semrush, told me.
Generally, with a traditional search, you have to click through to other websites to get the answer you're looking for. When I was trying to figure out where I recognized CCH Pounder from, I clicked on at least half a dozen different sites to track it down. That included using Google's video search -- which combs an index of videos across different hosting platforms -- to find clips of her appearance on The X-Files.

[Image: Google announced AI Mode at its I/O developer conference in May.]

These multiple searches don't necessarily have to happen. If I just want to know the cast of ER, I can type in "cast of ER" and click on the Wikipedia page at the top. You'll usually find Wikipedia or another relevant, trustworthy site at or near the top of a search result page. That's because a main way today's search algorithms work is by tracking which sites and pages get the most links from elsewhere on the web. That model, which "changed the game for search" when Google launched it in the 1990s, was more reliable than indexing systems that relied on things like how many times a keyword appeared on a page, said Sauvik Das, associate professor at Carnegie Mellon University's Human-Computer Interaction Institute.

"There's lots of cookie recipes on the web, but how do you know which ones to show first?" Das said. "Well, if a bunch of other websites are linking to this website for the keywords of 'cookie recipe,' that's pretty difficult to game."

AI-powered search engines work a little differently, but operate on the same basic infrastructure. In my quest to see where I recognized CCH Pounder from, I asked Google's AI Mode, literally, "Where do I recognize the actress who plays Dr. Angie Hicks on ER from?" In a conversation that felt far more like chatting with a bot than doing searches, I narrowed it down.
The first result gave me a list of shows and movies I hadn't seen, so I asked for a broader list, which featured her guest appearances on other shows. Then I could ask for more details about her X-Files appearance, and that narrowed it down. While the way I interacted with Google was different, the search mechanisms were basically the same. AI Mode just used its Gemini model to develop and process dozens of different web searches to gather the information needed, Robby Stein, vice president of product for Google Search, told me. "A user could've just queried each of those queries themselves." Basically, AI Mode did the same thing I did, just a lot faster.

So many searches, so little time

The approach here is called "query fan-out." The AI model takes your request and breaks it down into a series of questions, then conducts searches to answer those components of the request. It then takes the information it gathers from all those searches and websites and puts it together in an answer for you. In a heartbeat.

Those searches use the same index that a traditional search would. "They work on the same foundation," Levin said. "What changes is how they pull information from this foundation." This fan-out process allows the AI search to pull in relevant information from sites that might not have appeared on the first page of traditional search results, or to pull a paragraph of good information from a page that has a lot more irrelevant information. Instead of you going down a rabbit hole to find one tiny piece of the answer you want, the AI goes down a wide range of rabbit holes in a few seconds. "They will anticipate, if you're looking for this, what is the next thing you might be interested in?" Levin said.

The number of searches the AI model will do depends on the tool you're using and on how complicated your question is.
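The query fan-out pattern described above can be sketched in a few lines of Python. This is a toy illustration, not Google's implementation: the `decompose` and `run_search` functions are hypothetical stand-ins (a real system would use an LLM to generate the sub-queries and a search index to answer them), and the hard-coded data exists only to make the sketch runnable.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(question):
    """Stand-in for the LLM step that breaks a request into sub-queries.
    Here the breakdown is hard-coded for illustration."""
    return [
        "Dr. Angie Hicks ER actress name",
        "CCH Pounder filmography",
        "CCH Pounder guest appearances TV",
    ]

def run_search(query):
    """Stand-in for one call to the search index; returns matching snippets."""
    fake_index = {
        "Dr. Angie Hicks ER actress name": ["CCH Pounder played Dr. Angela Hicks on ER"],
        "CCH Pounder filmography": ["Film roles include Bagdad Cafe and Avatar"],
        "CCH Pounder guest appearances TV": ["Guest appearance on The X-Files"],
    }
    return fake_index.get(query, [])

def fan_out(question):
    """Fan the question out into parallel searches, then pool the snippets.
    A real system would hand the pooled context to an LLM to write the answer."""
    sub_queries = decompose(question)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_search, sub_queries))
    return [snippet for hits in results for snippet in hits]

context = fan_out(
    "Where do I recognize the actress who plays Dr. Angie Hicks on ER from?"
)
```

The design point is that each sub-query is independent, so they can run concurrently: the user pays roughly the cost of one search rather than a dozen sequential ones.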
AI Mode, when it uses Google's Deep Search, will spend more time and conduct more searches, Stein said. "Increasingly, if you ask a really hard question, it will use our most powerful models to reply," Stein said.

The large language models that power these search engines also have their existing training data to pull from or use to guide their searches. While a lot of the information comes from the up-to-date content the model finds by searching the web, some may come from that training data, which could include reams of information ranging from websites like this one to whole libraries of books. That training data is so extensive that lawsuits over whether AI companies actually had the right to use that information are quickly multiplying. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

AI search isn't just a chatbot

Not relying on training data is one thing that sets an AI-powered search engine apart from a traditional chatbot, even though the underlying language model might be largely the same. While ChatGPT Search will scour the internet for relevant sites and answers, regular ChatGPT might rely on its own training data to answer your question. "The right answer might be in there," Das said. "It might also hallucinate a likely answer that isn't anywhere in the pre-training data."

The AI search uses a concept called retrieval-augmented generation to incorporate what it finds on the internet into its answer. It collects information from a source you point it to (in this case, the search engine index) and tells the model to look there instead of making something up if it can't find the answer. "You're telling the AI the answer is here, I just want you to find where," Das said. "You get the top 10 Google results, and you're telling the AI the answer is probably in here."
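The retrieval-augmented step Das describes can be sketched as prompt assembly: retrieved snippets, not the model's training data, are supplied as the evidence to answer from. This is a minimal sketch under that assumption; the snippet text is invented for illustration, and the model call at the end (`llm_complete`) is a hypothetical placeholder, not a real API.

```python
def build_rag_prompt(question, retrieved_snippets):
    """Assemble a prompt that confines the model to the retrieved sources."""
    numbered = "\n".join(
        f"[{i + 1}] {snippet}" for i, snippet in enumerate(retrieved_snippets)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources by number. If the answer is not in the sources, "
        "say you don't know rather than guessing.\n\n"
        f"Sources:\n{numbered}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The top search results play the role of the retrieved sources.
snippets = [
    "CCH Pounder guest-starred on an episode of The X-Files.",
    "CCH Pounder played Dr. Angela Hicks on the NBC drama ER.",
]
prompt = build_rag_prompt("Where do I know CCH Pounder from?", snippets)
# The assembled prompt would then go to the model,
# e.g. answer = llm_complete(prompt)  # hypothetical model call
```

This is why the technique reduces, but does not eliminate, hallucination: the instructions steer the model toward the supplied sources, but nothing mechanically prevents it from ignoring them.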
[Image: Perplexity offers AI-powered search through its app and through a newly announced browser.]

Can you really trust AI search results?

These AI-powered search tools might be more reliable than a chatbot alone, because they're pulling from current, relevant information and giving you links, but you still have to think critically about what they tell you. Here are some tips from the experts:

Bring your human skepticism

Consider how bad people are at telling when you're sarcastic on the internet. Then think about how bad a large language model might be at it. That's how Google's AI Overviews came up with the idea to put glue on pizza -- by pulling information from a humorous Reddit post and repeating it as if it were real culinary advice. "The AI doesn't know what is authentic and what is humorous," Das said. "It's going to treat all that information the same."

Remember to use your own judgment and look for the sources of the information. They might not be as accurate as the LLM thinks, and you don't want to make important life decisions based on somebody's joke on an internet forum that a robot thought was real.

AI can still make stuff up

Even though they're supposed to be pulling from search results, these tools can still make things up in the absence of good information. That's how AI Overviews started creating fake definitions for nonsensical sayings. The retrieval-augmented generation might reduce the risk of outright hallucinations but doesn't eliminate it, according to Das. Remember that an LLM doesn't have a sense of what the right answer to a question is. "It's just predicting what is the next English word that would come after this previous stream of other English words or other language words," Das said. "It doesn't really have a concept of truthiness in that sense."

Check your sources

Traditional search engines are very hands-off. They will give you a list of websites that appear relevant to your search and let you decide whether you want to trust them. Because an AI search is consolidating and rewriting that information itself, it may not be obvious when it's using an untrustworthy source.

"Those systems are not going to be entirely error-free, but I think the challenge is that over time you will lose an ability to catch them," Levin said. "They will be very convincing and you will not know how to really go and verify, or you will think you don't need to go and verify."

You can, of course, check every source yourself. But that's exactly the kind of work you were probably hoping to avoid by using a system designed to save you time and effort. "The problem is if you're going to do this analysis for every query you perform in ChatGPT, what is the purpose of ChatGPT?" Levin said.
