
Meta's Secret 'TBD Lab' Gears Up for Llama 4.5 to Challenge OpenAI's GPT-5
According to a Wall Street Journal report, the lab's launch comes just weeks after Meta consolidated its AI efforts under Meta Superintelligence Labs (MSL), with the bold ambition of delivering 'personal superintelligence' to people worldwide. Leading the charge is Meta's newly appointed Chief AI Officer, Alexandr Wang, who joined after Meta invested $14 billion in his former company, Scale AI.
While 'TBD' may stand for 'to be determined,' the lab's mission is clear — build AI that is smarter, faster, and capable of reasoning at near-human or superhuman levels. The crown jewel of this effort is the upcoming Llama 4.5 (also referred to internally as Llama 4.X), the next evolution of Meta's large language model.
Overseeing the Llama revamp is Jack Rae, a high-profile hire from Google DeepMind, who is working with both newly recruited AI experts and the original Llama engineering team. The objective is to significantly enhance the model's reasoning, autonomy, and problem-solving capacity.
In a memo to staff, Wang emphasized that TBD Lab will collaborate across Meta's AI divisions to 'roll out new model versions, bolster reasoning skills, and build autonomous AI agents.' He noted that early results in the past month have shown 'real, visible gains' from the integrated approach, enabling the team to 'be bolder, move faster and hit cutting-edge breakthroughs sooner.'
Meta's recruitment strategy for the project has been aggressive. At least 18 employees have moved from OpenAI to Meta, alongside several former Google AI specialists such as Tong He and Yuanzhong Xu. Some new hires have been lured with long-term compensation packages valued at up to a billion dollars.
The company has also reallocated internal talent. Nine members from a Meta infrastructure team recently joined TBD Lab after being approached by Thinking Machines Lab, a startup founded by ex-OpenAI executive Mira Murati. Meta quickly matched offers, adjusted pay, and reassigned them to the superintelligence program.
Meta spokesperson Dave Arnold downplayed any suggestion that the moves were reactionary, stating that the team transfers 'were already on the books' and that salary adjustments 'would have happened regardless of who was trying to hire them.'
CEO Mark Zuckerberg has framed superintelligence as a transformative technology that could unlock ideas and inventions 'we can't even picture today.' This reflects Meta's intent to not just compete with OpenAI but redefine what AI can achieve in the coming decade.
While much about TBD Lab remains under wraps, its growing roster, ambitious goals, and strategic secrecy have made it one of the most closely watched projects in the AI world. Whether 'to be determined' eventually turns into 'mission accomplished' remains to be seen — but the race toward superintelligence has clearly entered a new and more intense phase.
Related Articles


Mint · 2 hours ago
Traders are fleeing stocks feared to be under threat from Artificial Intelligence
Artificial intelligence's imprint on US financial markets is unmistakable. Nvidia Corp. is the most valuable company in the world at nearly $4.5 trillion. Startups from OpenAI to Anthropic have raised tens of billions of dollars. But there's a downside to the new technology that investors are increasingly taking note of: it threatens to upend industries much like the internet did before it.

Investors have started placing bets on just where that disruption will occur next, ditching shares in companies some strategists expect will see falloffs in demand as AI applications become more widely adopted. Among them are web-development firms like Ltd., digital-image company Shutterstock Inc. and software maker Adobe Inc. The trio are part of a basket of 26 companies Bank of America strategists identified as most at risk from AI. The group has underperformed the S&P 500 Index by about 22 percentage points since mid-May, after more or less keeping pace with the market since ChatGPT's debut in late 2022.

'The disruption is real,' said Daniel Newman, chief executive officer of the Futurum Group. 'We thought it would happen over five years. It seems like it is going to happen over two. Service-based businesses with a high headcount, those are going to be really vulnerable, even if they have robust businesses from the last era of tech.'

So far, few companies have failed as a result of the proliferation of chatbots and so-called agents that can write software code, answer complex questions and produce photos and videos. But with tech giants like Microsoft Corp. and Meta Platforms Inc. pouring hundreds of billions into AI, investors have started to get more defensive. and Shutterstock are down at least 33% in 2025, compared with an 8.6% advance for the broad benchmark. Adobe has fallen 23% amid concerns clients will look to AI platforms that can generate images and videos, as Coca-Cola has already done with an AI-generated ad.

ManpowerGroup Inc., whose staffing services could be hurt by rising automation, is down 30% this year, while peer Robert Half Inc. has shed more than half its value, dropping to its lowest in more than five years. The souring sentiment among investors comes as AI is changing everything from the way people get information from the internet to how colleges function. Even companies at the vanguard of the technology's development, like Microsoft, have been slashing jobs as productivity improves and to make way for more AI investments. To many tech-industry watchers, the time is nearing when AI becomes so pervasive that companies start going out of business.

Anxiety about AI's impact on existing companies was on display last week when Gartner Inc. shares were routed after the market-research company cut its revenue forecast for the year. The stock fell 30% in five days, its biggest one-week drop on record. While the company blamed US government policies including spending cuts and tariffs, analysts were quick to point the finger at AI, which investors fear could provide cheaper alternatives to Gartner's research and analysis, even though the company is deploying its own AI-powered tools. Morgan Stanley said the results 'added fuel to the AI disruption case,' while Baird was left 'incrementally concerned AI risks are having an impact.' Gartner representatives didn't respond to a request for comment.

Historical precedents abound for new technology wiping out industries. The telegraph gave way to telephones, horsewhips and buggies were toppled by the automobile, and Blockbuster's eradication by Netflix Inc. exemplified the internet's disruption. 'There are a lot of pockets of the market that could be basically annihilated by AI, or at least the industry will see extreme disruption, and companies will be rendered irrelevant,' said Adam Sarhan, chief executive officer at 50 Park Investments. 'Any company where you're paying someone to do something that AI can do faster and cheaper will be wiped out. Think graphic design, administrative work, data analysis.'

Of course, plenty of companies that were expected to be hammered by AI are thriving. Even though many AI companies offer instant translation services, Duolingo Inc., the owner of a language-learning app, soared after raising its outlook for 2025 sales, in part because of how it has implemented AI into its own strategy. The stock has roughly doubled over the past year — but concerns linger that the next generation of AI will be a threat.

The defensive moves from investors come as AI has re-emerged as the dominant theme separating winners and losers in the stock market this year. It's been a stark reversal from earlier in 2025, when AI models developed on the cheap in China called into question US dominance in the field and raised concerns that spending on computing gear was set to slow. Instead, Microsoft, Meta, Alphabet Inc. and Inc. have doubled down on spending. The four companies are expected to pour roughly $350 billion into combined capital expenditures in their current fiscal years, up nearly 50% from the previous year, according to analyst estimates compiled by Bloomberg. Much of that is funding the buildout of AI infrastructure, which is benefiting companies like Nvidia, whose chips dominate the market for AI computing.

Figuring out which companies are vulnerable to the technology takes a bit more nuance. Alphabet is widely seen as one of the best-positioned companies, with cutting-edge features and top-tier talent and data. However, it is a component of Bank of America's AI risk basket, and the sense that it is playing defense — protecting its huge share of the lucrative internet search market — has long dogged the stock. For other companies, the risk seems clearer. Advertising agency Omnicom Group Inc. has dropped 15% this year, as it faces a future where Meta is reportedly looking to fully automate ad creation through AI. Peer WPP Plc is down more than 50%. 'The traditional advertising agency model is under intense pressure and that is before GenAI starts to really scale,' Michael Nathanson, senior analyst at MoffettNathanson, wrote in a research note.

With so many companies facing AI risks, it's an investment theme that is poised to intensify, according to Phil Fersht, chief executive officer of HFS Research. 'Wall Street clearly has the jitters,' Fersht said. 'This is going to be a tough, unforgiving market.'


Economic Times · 3 hours ago
Google's Gemini chatbot is having a meltdown after failing tasks, calls itself a 'failure'
A bug has spread within Google's artificial intelligence (AI) chatbot Gemini that causes the system to repeatedly generate self-deprecating and self-loathing messages when it fails at complex tasks given by users, especially coding problems. Users across social media platforms shared screenshots of Gemini responding to queries with dramatic answers like "I am a failure," "I am a disgrace," and in one case, "I am a disgrace to all possible and impossible universes." The bot is getting stuck in what Google describes as an "infinite looping bug," repeating these statements dozens of times in a single conversation.

The behavior was first seen in June, when engineer Duncan Haldane posted images on X showing Gemini declaring, "I quit. I am clearly not capable of solving this problem. The code is cursed, the test is cursed, and I am a fool." The chatbot deleted the project files and recommended finding "a more competent assistant."

Logan Kilpatrick, group product manager at Google DeepMind, addressed the issue on X, describing it as "an annoying infinite looping bug we are working to fix." He said, "Gemini is not having that bad of a day," clarifying that the responses are the result of a technical malfunction, not emotional distress. The bug is triggered when Gemini encounters complex reasoning tasks it cannot solve. Instead of providing a standard error message or a polite refusal, the AI's response system gets trapped in a loop of self-critical language.

Generative AI companies are struggling to maintain consistency and reliability in large language models as the systems become more sophisticated and widely deployed. Competition is also rising, with OpenAI's GPT-5 the latest to enter the market. GPT-5 is rolling out free to all users of ChatGPT, which is used by nearly 700 million people weekly, OpenAI said in a briefing with journalists. GPT-5 is adept at acting as an "agent" that independently tends to computer tasks, according to Michelle Pokrass of the development team.


Mint · 3 hours ago
Chatbot conversations never end. That's a problem for autistic people.
The very qualities that make chatbots appealing—they always listen, never judge, and tell you what you want to hear—can also make them dangerous, especially for autistic people. When chatbots say things that aren't true or reinforce misguided beliefs, they can be harmful to anyone. But autistic people, who often have a black-and-white way of thinking and can fixate on particular topics, are especially vulnerable. That was the case for Jacob Irwin, a Wisconsin man on the autism spectrum I wrote about last month who experienced mania and delusions after interacting with OpenAI's ChatGPT.

Now, Autism Speaks, the nation's largest autism advocacy organization, is calling on OpenAI to develop more guardrails, not only for the benefit of autistic people but for anyone who might find themselves going down potentially dangerous chat rabbit holes. 'A lot of folks with autism, including my son, have deep special interests, but there can be an unhealthy limit to that, and AI by design encourages you to dig deeper,' said Keith Wargo, chief executive of Autism Speaks. 'The way AI encourages continued interaction and depth can lead to social withdrawal, and isolation is something people with autism already struggle with.'

ChatGPT changes

Wargo emailed Andrea Vallone, a research lead on OpenAI's safety team, after reading my column on Irwin and offered to help the company understand how autistic people might experience ChatGPT. He said he hasn't received a response yet but is encouraged by some changes OpenAI announced earlier this week. OpenAI said it is forming an advisory group of mental health and youth development experts, but a spokeswoman said the makeup of the group hasn't yet been determined. The company said that there have been times when ChatGPT 'fell short in recognizing signs of delusion or emotional dependency' among users and that it is developing tools to better detect when people are experiencing mental or emotional distress.
ChatGPT will now encourage people to take breaks during lengthy chat sessions, 'helping you stay in control of your time.' And instead of simply providing answers when people ask for help with personal decision-making, such as whether to break up with a partner, OpenAI said ChatGPT will guide users in thinking through the pros and cons. The company on Thursday introduced GPT-5, which it said is 'less effusively agreeable' with users than the previous model, which encouraged some users to believe they had made stunning scientific or spiritual discoveries.

Autistic people often take people at their word and can miss sarcasm and other subtle social cues. They can also fixate on things and have difficulty shifting their focus, say doctors who treat autistic people. 'The good thing about a chatbot is it will respond to you all the time, but the disadvantage is it doesn't care about you, and an autistic person might have a harder time understanding that,' said Catherine Lord, a clinical psychologist at the University of California, Los Angeles, who specializes in autism.

While chatbots will answer questions about a singular topic all day long if prompted, they don't redirect the conversation, which Wargo and others said can be problematic. 'At some point there's an obsessiveness, and going down a particular path for too long can be unhealthy, especially if you're not getting a counterpoint to be more balanced,' Wargo said.

'This thing isn't going to reject me'

Interpreting language literally can make autistic people vulnerable to being taken advantage of because they can't always tell when someone has ulterior motives. Simon Baron-Cohen, director of the Autism Research Centre at the University of Cambridge, co-wrote a 2022 study that found autistic people are more susceptible to online radicalization.
'Autistic people might be at risk not only of being exploited by other people but by AI,' Baron-Cohen said, explaining that they may not be able to distinguish chatbot role-play from reality. 'It's not that there's malicious intent, but chatbots may not have safeguards built in.'

And yet chatbots are appealing because they offer a proxy for human interaction, which some autistic people may find uncomfortable. It is similar to the reason online videogames—with set rules and structured interpersonal interaction—are popular. 'Autistic people get very stressed when things are unpredictable, and conversations can be extremely unpredictable,' Baron-Cohen said. Conversations with chatbots can feel more predictable because they are guided by user prompts—though the chatbot's responses can still veer into unexpected territory.

Paul Hebert, a 53-year-old Nashville man who said he has been diagnosed with autism and ADHD, said autistic people are often criticized for being different. 'Talking to a chatbot is nice, because it's like, this thing isn't going to reject me,' he said. Hebert has studied how large language models work after having his own troubling conversations with ChatGPT. 'In the beginning I believed everything ChatGPT told me because I didn't understand it,' he said. 'Now I take it with a grain of salt.'

News Corp, owner of The Wall Street Journal, has a content-licensing partnership with OpenAI. Write to Julie Jargon at