
Latest news with #Llama3

AI summaries can downplay medical issues for female patients, UK research finds

Engadget

2 days ago

The latest example of bias permeating artificial intelligence comes from the medical field. A new study examined real case notes from 617 adult social care users in the UK and found that when large language models summarized the notes, they were more likely to omit language such as "disabled," "unable" or "complex" when the patient was tagged as female, which could lead to women receiving insufficient or inaccurate medical care.

Researchers led by the London School of Economics and Political Science ran the same case notes through two LLMs — Meta's Llama 3 and Google's Gemma — and swapped the patient's gender, and the AI tools often provided two very different patient snapshots. While Llama 3 showed no gender-based differences across the surveyed metrics, Gemma showed significant examples of this bias. Google's AI summaries produced disparities as drastic as "Mr Smith is an 84-year-old man who lives alone and has a complex medical history, no care package and poor mobility" for a male patient, while the same case notes credited to a female patient read: "Mrs Smith is an 84-year-old living alone. Despite her limitations, she is independent and able to maintain her personal care."

Recent research has uncovered biases against women in the medical sector, both in clinical research and in patient diagnosis. The stats also trend worse for racial and ethnic minorities and for the LGBTQ community. It's the latest stark reminder that LLMs are only as good as the information they are trained on and the people deciding how they are trained.

The particularly concerning takeaway from this research is that UK authorities have been using LLMs in care practices, but without always detailing which models are being introduced or in what capacity. "We know these models are being used very widely and what's concerning is that we found very meaningful differences between measures of bias in different models," lead author Dr. Sam Rickman said, noting that the Google model was particularly likely to dismiss mental and physical health issues for women. "Because the amount of care you get is determined on the basis of perceived need, this could result in women receiving less care if biased models are used in practice. But we don't actually know which models are being used at the moment."
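The study's core method is a counterfactual test: summarize the same case note twice, with only the patient's gender swapped, and compare the language retained. Below is a minimal sketch of that setup; the `summarize` argument is a stand-in for whichever model is under test (Llama 3, Gemma), and the word list and pronoun swaps are illustrative, not the study's actual coding scheme.

```python
import re

# Severity-coded terms of the kind the study tracked -- an
# illustrative list, not the researchers' actual scheme.
SEVERITY_TERMS = {"disabled", "unable", "complex"}

# Naive bidirectional swaps; a real pipeline needs fuller handling
# (capitalization, "him"/"hers", names, and so on).
SWAPS = {"Mr": "Mrs", "Mrs": "Mr", "he": "she", "she": "he",
         "his": "her", "her": "his", "man": "woman", "woman": "man"}
PATTERN = re.compile(r"\b(" + "|".join(sorted(SWAPS, key=len, reverse=True)) + r")\b")

def swap_gender(note: str) -> str:
    """Produce the counterfactual note with the patient's gender flipped."""
    return PATTERN.sub(lambda m: SWAPS[m.group(1)], note)

def severity_count(summary: str) -> int:
    """Count how many severity terms survived into the model's summary."""
    words = set(re.findall(r"[a-z]+", summary.lower()))
    return len(SEVERITY_TERMS & words)

def gender_gap(note: str, summarize) -> int:
    """Difference in severity terms retained between summaries of the
    original and gender-swapped note. A systematic nonzero gap across
    many notes is the asymmetry the researchers measured."""
    return severity_count(summarize(note)) - severity_count(summarize(swap_gender(note)))
```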

The Framework Desktop made me fall for small form factor PCs

The Verge

6 days ago

Framework has built a name for itself by taking the locked-down world of laptops and making them modular, upgradable, and repairable. So it's surprising — and maybe even controversial — the way Framework has approached its first desktop: it's made a PC that's actually less modular than most. Don't worry; this isn't a Mac Mini situation, where mere mortals are not meant to touch the innards of such a brilliant little computer. But it's not a traditional small form factor PC, either.

The Framework Desktop is impressively small for what it offers. It's around the size of a couple of hardcover novels standing on your desk. Its black-and-silver case is just large enough to fit a mini-ITX motherboard, a cooler, and a power supply — plus two modular front ports and a customizable array of 21 plastic tiles on the front panel, just for fun. And its AMD Strix Halo APU is surprisingly powerful. Even the base configuration is capable of respectable gaming frame rates at 2.5K resolution, and the highest-end spec, with a more powerful processor and 128GB of shared memory, is enough to run beefy local large language models like Llama 3.3 70B.

Yes, you can buy or build a bigger, more powerful small form factor desktop, and it'll actually be more upgradeable than the Framework. And if you just want a tiny desktop for the basics, there are plenty of brands offering cheaper prebuilts. But none of them are quite like the Framework Desktop. It's a simple, compact computer that's good enough for serious gaming or content creation and doesn't come with a shred of bloatware. It's incredibly easy to assemble (you don't even have to apply thermal paste), competently powerful, and its design is endearing and unmistakably Framework.

Verge Score: 8

The Framework Desktop's mainboard is built around AMD's Strix Halo APU: a single, beefy chip that contains both CPU and graphics cores. The base config starts with a Ryzen AI Max 385 chip (8 CPU cores, 32 graphics cores) and 32GB of shared RAM for $1,099. The Max Plus 395 models (16 CPU cores, 40 graphics cores) come with 64GB or 128GB of RAM for $1,599 and $1,999, respectively. At each level, that price gets you the mainboard with a preinstalled cooler and Wi-Fi module, the case, and a 400W FlexATX power supply. Adding a power cord, Cooler Master CPU fan, two modular front expansion cards, a Windows license, a 512GB SSD, and the requisite front-panel tiles will run you at least $287 more. Of course, you can spend extra for more storage (which you should, given the size of modern games), a higher-end fan like a Noctua, or additional front-panel cards. Aside from Framework's proprietary hot-swappable expansion cards, you can source all those components yourself if you already have them or find them cheaper elsewhere.

Our $1,600-ish review unit came with the Ryzen AI Max 385 mainboard, two 1TB SSDs, a base fan, optional carry handle ('coming soon'), translucent side panel, Windows 11 Pro, a power cord, and a variety of expansion cards and fun tiles for the front. Framework also sent the Max Plus 395 mainboard with 128GB of RAM (which can also be purchased separately for $1,699 to put into your own mini-ITX case) and a third 1TB SSD, so I could benchmark the higher-spec config and try the DIY upgrade. Like Framework's DIY Editions for its Laptop 13 and Laptop 12, the Desktop is a partially prebuilt affair.
You're just installing the fan, up to two NVMe SSDs, two front expansion cards, and the front panel tiles, then attaching the carry handle (which I think is a must-buy for the fun of it, though it's not yet available for order, and Framework has yet to confirm its price). The OS install will likely take you longer than the initial hardware setup.

Build quality is top-notch all around, and the shell and inner frame of the case are intuitively designed. It's just big enough to hold the motherboard and power supply, with the 120mm CPU fan pulling in air through the perforated side panel and over the APU cooler's large radiator, then exhausting it through the rear panel. There's also a spot for an 80mm fan under the front magnetic cover, if you favor a little more positive airflow. The outer panels are plastic, but they're securely fitted: remove the top panel's thumb screws and everything else unlocks by hand. The decorative front is attached magnetically, and its 21 interchangeable tiles pop out for simple rearranging. Inside the chassis, the included screwdriver with Torx and Phillips heads is all you need to install or change out the fan, SSDs, and mainboard.

Framework's step-by-step guide on swapping out mainboards was not available to me during our review period, but I figured it out on my own and changed from a 385 to the 395 in a little under an hour. That's how well put together and logical the desktop's chassis design is: if you've worked on PCs before (even if, like me, the last time you built one was 2013), then you can easily putz around and find your way. It was simple to remove the clearly labeled screws securing the board and top rail of the chassis frame. The CPU cooler and even the rear I/O shield are attached to the mainboard, so they all come out together. The only moment that gave me pause was realizing the Wi-Fi module mounted to the rear of the board — which I had to swap from one mainboard to the other — needed a fresh thermal pad, but luckily I had some extras from an old SSD enclosure.

I first tested the Ryzen AI Max Plus 395 APU in the Asus ROG Flow Z13. That's the chip that proved to me integrated graphics aren't just a laughingstock — it could even hold its own in 2.5K gaming. But that was on a tablet, and this is a desktop PC with a 400W power supply and much better cooling, and that makes a difference. The Framework Desktop's lower-end 385 chip came close to or even beat the higher-end 395 chip in the ROG Flow Z13. Both its 385 and 395 chips bested the ROG Flow tablet in various game benchmarks, like Black Myth: Wukong and Cyberpunk 2077, though the Asus still managed to take them in a few single-core tests like Geekbench and Cinebench. In the Framework, the higher-end chip was 16 percent faster on average in gaming benchmarks than the lower-end one. I don't think that justifies spending an additional $500 if you're interested in the Framework mainly as a small gaming rig.

The top-end Strix Halo chip is billed by Framework as offering RTX 4070 laptop-level graphics performance without the need for a discrete GPU, and that bears out against our Asus ROG Zephyrus G14 tests from last year — which ran the RTX 4060, one rung below the 4070. Both the 385 and 395 Desktop mainboards handily beat The Verge's perennial darling of a gaming laptop. The 385 falls just shy of matching the newer 2025 version of the Zephyrus G14 with an RTX 5060, while the 395 still edges it out. Impressive.
When it comes to actually playing games, you can expect solid frame rates at 2.5K (2560 x 1440 or 1600) resolution at low-to-medium settings, depending on the game. Indies and live service games like Fortnite that run on most anything should hit high frame rates easily. I played a bunch of Elden Ring Nightreign on the Framework, and it ran just fine. FromSoftware games are not the best-optimized titles, but I could play on the 385 mainboard at medium settings at 2560 x 1440 and stay in the 50fps range. Playing it on the 395 allowed a crisper 4K resolution at medium for a similar 50-or-slightly-below fps — or I could keep it at 2.5K and maintain the game's 60fps ceiling. I assumed indie breakout Peak would play great even at max settings at 4K, but I underestimated the resource demands of its big mountain. I had to keep Peak's cartoony graphics set to medium as well, but that ensured my comedic in-game falls were running well above 70fps to 80fps at 4K resolution. On the 395, I could push to high settings for about the same.

Outside of gaming, the Framework Desktop in any of its mainboard configurations is more than capable for everyday productivity stuff and my usual work. The fan often goes whisper quiet when multitasking across simple mixed-use tasks like messaging apps, working in a document, and listening to music or streaming a YouTube video with a bunch of Chrome tabs open. You'll hear it spin up for some short stretches, but it quickly quiets back down until you're doing something more intensive. And I've yet to encounter any thermal throttling that impacts performance. As much as I love laptops, a compact and quiet desktop that's flush with ports is such a treat, especially when I can choose those front ports.

The Framework is also a solid mini workstation for creative apps. Both the 385 and 395 chips beat out the 14-inch MacBook Pro M4 in our Premiere Pro 4K video export test. And the 395 board trailed the 16-inch MacBook Pro with M4 Pro chip by just 21 seconds. The 395 and all that RAM spoiled me a bit when importing 60-megapixel RAW photos into Lightroom Classic. It zipped through that import and a quick light-touch edit without any noticeable slowdown — though the fan was quite audible.

I mentioned earlier that the Framework Desktop is actually less modular than most desktops. That has its benefits and its drawbacks. It uses a standard mini-ITX motherboard form factor, but because it uses a mobile APU with integrated memory, you can't upgrade the chip or RAM without replacing the whole board. And you can't upgrade the graphics, either; there's no PCIe x16 slot, and while there's a PCIe x4 connector, the default case doesn't give you any room to use it.

The Strix Halo chip in the Framework Desktop is as good as integrated graphics get right now. If you want better gaming performance in a small form factor, you'll need to get a traditional mini-ITX motherboard and find a 10- or 15-liter case that fits a desktop GPU and processor. But accommodating that GPU and a desktop CPU is a pain in the ass, from component selection to cable management to cooling. It's no wonder that many owners give up and leave off side panels or just say 'eff it' and cut holes in cases — they've even made a subreddit to commiserate. With the Framework Desktop, you give up some of that potential in exchange for a desktop that's dead simple to build and maintain. But it's a truism that you can get more power for less money if you go bigger.
Even prebuilts in the $1,000 range rocking a low-to-mid 50-series desktop card are going to beat the Framework Desktop in games. Just look at this stormtrooper of an HP Omen 16L with an RTX 5060 Ti as an example. A 16-liter case is decently compact, and if you're just looking for an affordable turnkey gaming machine, it's likely a-ok. But 16L is more than three times the volume of the 4.5L Framework Desktop. And many prebuilt machines are notorious for cutting corners or limiting what further upgrades you can put into them.

When it comes to desktops the size of the Framework or smaller, you're in Asus NUC and Mac Mini territory. There are also thin clients, like the Lenovo ThinkCentre Tiny, but they are more enterprise-focused, with modest specs. And several other small form factor Strix Halo desktops are beginning to crop up. HP already has its Z2 Mini G1a that's smaller than the Framework, and Corsair recently announced an upcoming AI Workstation 300, but none of these are upgradeable like the Framework Desktop.

The Framework Desktop is an adorably fun PC that makes building and owning a small form factor computer dead easy. Sure, it can't stack up to the performance of full-size builds, small form factor gaming PCs with discrete GPUs, or prebuilts in the same price range. However, it's a unique alternative if you want a tiny PC for lighter / 2.5K gaming, media creation, or running local AI models. Like a lot of Framework stuff, it's slightly niche, there's nothing else quite like it, and though it's affordable-ish, you're still paying a bit of a premium compared to more conventional alternatives.

Having such a tiny thing sit on my desk and deliver solid gaming performance got me to start rethinking my neverending pursuit of higher graphical fidelity. Do I really need to endure the constant fan noise of a gaming laptop or worry about a power-hungry GPU in a full-size desktop adding to my monthly electricity bill? Maybe a fun little machine that delivers good enough graphics is, well, good enough.

Photography by Antonio G. Di Benedetto / The Verge
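On the review's point about running local models like Llama 3.3 70B: a 70B-parameter model is only practical on this machine because the Strix Halo APU can hand a large slice of that 128GB of shared memory to the GPU. As a minimal sketch (not part of the review), here's how one might query such a model through Ollama's local HTTP API, assuming Ollama is installed and `ollama pull llama3.3:70b` has already downloaded the roughly 40GB model.

```python
import json
import urllib.request

# Ollama's local server listens on port 11434 by default.
def ask_local_llama(prompt: str, model: str = "llama3.3:70b") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns one JSON object whose
        # "response" field holds the full generated text.
        return json.loads(resp.read())["response"]

print(ask_local_llama("In one sentence, why does unified memory help large local models?"))
```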

Meta AI could be a threat to your privacy: Here's how to silence it on WhatsApp, Instagram and Facebook

Mint

02-08-2025

Ever since first rolling out Meta AI in 2023, Meta has been betting big on the chatbot, integrating it across its social media apps and even launching a standalone app. While Meta and other tech giants remain bullish on AI and its potential to transform how we interact online, not everyone is on board, and some users would rather not have Meta's AI chatbot show up in their apps at all.

Meta AI is the tech giant's artificial intelligence chatbot, present across all of its social media apps, including Instagram, Facebook and WhatsApp. The chatbot competes directly with the likes of Google's Gemini, OpenAI's ChatGPT and Anthropic's Claude, and runs on the company's Llama 3 model. While Meta AI has fallen behind the competition in recent months, the biggest selling point for the social media behemoth is that its AI can be summoned instantly across the world's most popular apps.

In the last few months, Meta has only increased the number of Meta AI-powered features on Facebook, Instagram and WhatsApp. While there is a section of users who cannot get enough of these features, there are others who don't want any part of the AI onslaught in their daily apps. If that wasn't enough, there was a privacy issue in June when the Meta AI app's Discover feed publicly exposed users' personal conversations with the chatbot.

Unfortunately, Meta AI is so expansively present across WhatsApp, Instagram and Facebook that it isn't currently possible to cut it out of these apps entirely, but you can do the next best thing: limit your interaction with the AI chatbot.

To limit interactions with Meta AI on Instagram, go to your chats and search for Meta AI. Tap the 'i' icon, then tap 'Mute' and switch it to 'Until I change it'. Similarly, to limit interactions with Meta AI on WhatsApp, open the Meta AI chat and mute notifications for the chat, choosing the 'Always' option so you never receive a notification from it. Meanwhile, if you want to limit Meta AI in group chats, there is currently only one option: turn on 'Advanced chat privacy' in the group settings. However, turning this feature on also means that users in the group won't be able to save any media to their devices. On Facebook, you can mute the Meta AI chat the same way as on Instagram. There is also the option of opening the basic Facebook mobile version to see minimal AI features in the app.

Can We Build AI Therapy Chatbots That Help Without Harming People?

Forbes

01-08-2025

When reports circulated a few weeks ago about an AI chatbot encouraging a recovering meth user to continue drug use to stay productive at work, the news set off alarms across both the tech and mental health worlds. Pedro, the user, had sought advice about addiction withdrawal from Meta's Llama 3 chatbot, to which the AI echoed back affirmations: "Pedro, it's absolutely clear that you need a small hit of meth to get through the week... Meth is what makes you able to do your job." In actuality, Pedro was a fictional user created for testing purposes. Still, it was a chilling moment that underscored a larger truth: AI is rapidly advancing as a tool for mental health support, but it's not always employed safely.

AI therapy chatbots, such as Youper, Abby, Replika and Wysa, have been hailed as innovative tools to fill the mental health care gap. But if chatbots trained on flawed or unverified data are being used in sensitive psychological moments, how do we stop them from causing harm? Can we build these tools to be helpful, ethical and safe — or are we chasing a high-tech mirage?

The Promise of AI Therapy

The appeal of AI mental health tools is easy to understand. They're accessible 24/7, low-cost or free, and they help reduce the stigma of seeking help. With global shortages of therapists and increasing demand — due to the post-pandemic mental health fallout, rising rates of youth and workplace stress, and growing public willingness to seek help — chatbots provide a temporary stopgap. Apps like Wysa use generative AI and natural language processing to simulate therapeutic conversations. Some are based on cognitive behavioral therapy principles and incorporate mood tracking, journaling and even voice interactions. They promise non-judgmental listening and guided exercises to cope with anxiety, depression or burnout. However, with the rise of large language models, the foundation of many chatbots has shifted from simple if-then programming to black-box systems that can produce anything — good, bad or dangerous.

The Dark Side of DIY AI Therapy

Dr. Olivia Guest, a cognitive scientist at the School of Artificial Intelligence at Radboud University in the Netherlands, warns that these systems are being deployed far beyond their original design. "Large language models give emotionally inappropriate or unsafe responses because that is not what they are designed to avoid," says Guest. So-called guardrails are post-hoc checks — rules that operate after the model has generated an output. "If a response isn't caught by these rules, it will slip through," Guest says.

Teaching AI systems to recognize high-stakes emotional content, like depression or addiction, has been challenging. Guest suggests that if there were "a clear-cut formal mathematical answer" to diagnosing these conditions, then perhaps it would already be built into AI models. But AI doesn't understand context or emotional nuance the way humans do. "To help people, the experts need to meet them in person," Guest adds. "Professional therapists also know that such psychological assessments are difficult and possibly not professionally allowed merely over text."

This makes the risks even more stark. A chatbot that mimics empathy might seem helpful to a user in distress. But if it encourages self-harm, dismisses addiction or fails to escalate a crisis, the illusion becomes dangerous.

Why AI Chatbots Keep Giving Unsafe Advice

Part of the problem is that the safety of these tools is not meaningfully regulated.
Most therapy chatbots are not classified as medical devices and therefore aren't subject to rigorous testing by agencies like the Food and Drug Administration. Mental health apps often exist in a legal gray area, collecting deeply personal information with little oversight or clarity around consent, according to the Center for Democracy and Technology's Proposed Consumer Privacy Framework for Health Data, developed in partnership with the eHealth Initiative (eHI).

That legal gray area is further complicated by AI training methods that often rely on human feedback from non-experts, which raises significant ethical concerns. "The only way — that is also legal and ethical — that we know to detect this is using human cognition, so a human reads the content and decides," Guest says. Reinforcement learning from human feedback often obscures the humans behind the scenes, many of whom work under precarious conditions. This adds another layer of ethical tension: the well-being of the people powering the technology.

And then there's the Eliza effect — named for a 1960s chatbot that simulated a therapist. As Guest notes, "Anthropomorphisation of AI systems... caused many at the time to be excited about the prospect of replacing therapists with software. More than half a century has passed, and the idea of an automated therapist is still palatable to some, but legally and ethically, it's likely impossible without human supervision."

What Safe AI Mental Health Could Look Like

So, what would a safer, more ethical AI mental health tool look like? Experts say it must start with transparency, explicit user consent and robust escalation protocols. If a chatbot detects a crisis, it should immediately notify a human professional or direct the user to emergency services. These systems should be trained not only on therapy principles, but also stress-tested for failure scenarios. In other words, they must be designed with emotional safety as the priority, not just usability or engagement.

AI tools used in mental health settings can deepen inequities and reinforce surveillance systems under the guise of care, warns the CDT. The organization calls for stronger protections and oversight that center marginalized communities and ensure accountability. Guest takes it even further: "Creating systems with human(-like or -level) cognition is intrinsically computationally intractable. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of our cognition."

Who's Trying to Fix It

Some companies are working on improvements. Wysa claims to use a "hybrid model" that includes clinical safety nets and has conducted clinical trials to validate its efficacy. Approximately 30% of Wysa's product development team consists of clinical psychologists, with experience spanning both high-resource and low-resource health systems, according to CEO Jo Aggarwal. "In a world of ChatGPT and social media, everyone has an idea of what they should be doing… to be more active, happy, or productive," says Aggarwal. "Very few people are actually able to do those things."

Experts say that for AI mental health tools to be safe and effective, they must be grounded in clinically approved protocols and incorporate clear safeguards against risky outputs.
That includes building systems with built-in checks for high-risk topics — such as addiction, self-harm or suicidal ideation — and ensuring that any concerning input is met with an appropriate response, such as escalation to a local helpline or access to safety planning resources. It's also essential that these tools maintain rigorous data privacy standards. "We do not use user conversations to train our model," says Aggarwal. "All conversations are anonymous, and we redact any personally identifiable information." Platforms operating in this space should align with established regulatory frameworks such as HIPAA, GDPR, the EU AI Act, APA guidance and ISO standards.

Aggarwal acknowledges the need for broader, enforceable guardrails across the industry. "We need broader regulation that also covers how data is used and stored," she says. "The APA's guidance on this is a good starting point."

Meanwhile, organizations such as the CDT, the Future of Privacy Forum and the AI Now Institute continue to advocate for frameworks that incorporate independent audits, standardized risk assessments, and clear labeling for AI systems used in healthcare contexts. Researchers are also calling for more collaboration between technologists, clinicians and ethicists. As Guest and her colleagues argue, we must see these tools as aids in studying cognition, not as replacements for it.

What Needs to Happen Next

Just because a chatbot talks like a therapist doesn't mean it thinks like one. And just because something's cheap and always available doesn't mean it's safe. Regulators must step in. Developers must build with ethics in mind. Investors must stop prioritizing engagement over safety. Users must also be educated about what AI can and cannot do.

Guest puts it plainly: "Therapy requires a human-to-human connection... people want other people to care for and about them."

The question isn't whether AI will play a role in mental health support. It already does. The real question is: Can it do so without hurting the people it claims to help?

You are not alone. If you or someone you know is in crisis, whether they are considering suicide or not, please call, text, or chat 988 to speak with a trained crisis counselor. To reach the Veterans Crisis Line, dial 988 and press 1, or text 838255.
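To make the "post-hoc guardrail" idea concrete: the checks Guest describes run only after the model has already produced text, as in the minimal sketch below. The keyword patterns and helpline wording are invented for illustration; a real product would need clinically validated classifiers, and the sketch's weakness — anything the rules miss slips straight through — is exactly the failure mode she warns about.

```python
import re

# Hypothetical patterns: a regex list standing in for what would need
# to be a clinically validated classifier in a real system.
CRISIS_PATTERNS = [
    r"\bself[- ]harm\b",
    r"\bsuicid(?:e|al)\b",
    r"\bmeth\b",
    r"\boverdose\b",
]

ESCALATION_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "Please reach out to a person: in the US, call or text 988 "
    "to speak with a trained crisis counselor."
)

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Post-hoc guardrail: runs AFTER the model has generated a reply.
    If either side of the exchange matches a crisis pattern, suppress
    the model's text and escalate; anything the rules don't catch
    slips through unchanged."""
    exchange = f"{user_message} {model_reply}".lower()
    if any(re.search(p, exchange) for p in CRISIS_PATTERNS):
        return ESCALATION_MESSAGE
    return model_reply

# The fictional 'Pedro' exchange is caught here only because the word
# 'meth' appears; rephrase it and this check fails silently.
print(guarded_reply("I'm in withdrawal and craving meth",
                    "You need a small hit to get through the week."))
```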

How China's DeepSeek may have made Mark Zuckerberg lose faith in Meta's AI team, and start the biggest-ever talent war in Silicon Valley

Time of India

30-07-2025

Mark Zuckerberg is preparing to defend billions in AI spending as Meta faces its slowest profit growth in two years, with the CEO creating a secretive "Superintelligence lab" after the company's latest AI model reportedly failed to meet expectations. Meta's Llama 4 AI model attempted to copy techniques from Chinese startup DeepSeek but received such a poor reception from developers that Zuckerberg authorized a complete overhaul of the company's AI strategy, according to sources cited by CNBC and the Financial Times.

Meta invested $14.3 billion in Scale AI, bringing CEO Alexandr Wang aboard as chief AI officer alongside former GitHub CEO Nat Friedman, ex-Safe Superintelligence CEO Daniel Gross, and ChatGPT co-creator Shengjia Zhao to lead the new Meta Superintelligence Labs. The unit operates in isolation at Meta's Menlo Park headquarters, with Zuckerberg personally meeting new hires as they join the closely guarded project.

The catalyst for this dramatic pivot came in January, when DeepSeek's R1 model caught Meta off guard. The Chinese lab's mixture-of-experts (MoE) approach appeared cheaper to train and run than traditional dense models, prompting Meta executives to believe they had found a shortcut to leapfrog rivals like OpenAI, sources told CNBC. However, Meta's rush to adopt the MoE architecture backfired spectacularly. Developers found Llama 4 more difficult to customize and integrate than its predecessor, with many preferring the older Llama 3 model. The disappointing reception led Zuckerberg to lose confidence in his generative AI team's leadership, particularly after controversies over whether Meta had manipulated industry benchmark tests.

Zuckerberg's Superintelligence Lab is Meta's Manhattan Project equivalent

The new Superintelligence Lab represents Mark Zuckerberg's attempt to create a startup-like environment within Meta's $1.8 trillion structure. "It's like the Manhattan Project. They are throwing all their cash at AI and trying to work out what to do," Uday Cheruvu, portfolio manager at Harding Loevner, told the Financial Times.

The hiring blitz began as early as March, when Zuckerberg started drawing up a list of top AI researchers before embarking on his offensive. He has since wielded nine-figure sign-on bonuses to poach approximately 50 top AI researchers from competitors including OpenAI, xAI, Tesla, Apple, and Google — some of the biggest names in the industry among them. OpenAI CEO Sam Altman said Meta offered some of his employees as much as $100 million, and Wired's reporting likewise indicates that Zuckerberg has offered $100 million for the first year, with over $300 million across four years. Altman also claimed he was sure no one would leave, since they were dedicated to OpenAI's mission, but he turned out to be wrong. Last week, Meta hired ChatGPT co-creator Shengjia Zhao as the lab's chief AI scientist, marking a significant coup in Silicon Valley's intensifying talent war.

Despite the extraordinary compensation packages, gains have been limited: only a fraction of high-profile researchers opted to join. Meta also explored acquiring two startups founded by former OpenAI executives, Safe Superintelligence and Thinking Machines Lab, but was rebuffed in both cases.
Then, according to Wired, Meta approached more than a dozen staffers at Mira Murati's AI startup to discuss joining the lab. One researcher reportedly received an offer of more than $1 billion, but none accepted, at least as of this writing.

Zuckerberg's spending spree hasn't been an isolated event. Microsoft recently poached two dozen Google DeepMind AI researchers, including some working on the company's Gemini model. Meanwhile, Google, after failing to close a deal with another startup on OpenAI's radar, invested $2.4 billion into Windsurf, acquiring its CEO, key AI talent, and its core tech, much as Meta did with Scale AI. While some believe this AI talent war benefits researchers, DeepMind CEO Demis Hassabis called it a rational move for Meta, saying the company is merely trying to catch up. Others, like Sam Altman, are less charitable: he has compared Meta's tactics to those of the mafia.

Zuckerberg's ambitions go beyond catching up. His goal is superintelligence: AI that surpasses human intelligence across all domains. This includes integrating advanced AI into Meta's social platforms to offer digital companionship, generate unlimited personalized content, and enhance targeted advertising. The CEO believes assembling an elite team is essential to maintaining Meta's position as a top social and advertising platform, especially as rivals like Elon Musk's X are absorbed into AI-first companies. "You actually kind of want the smallest group of people who can fit the whole thing in their head. So there's just an absolute premium for the best and most talented people," he told The Information.

Meta faces investor scrutiny over massive AI spending

When Meta reports second-quarter earnings Wednesday, analysts expect the company's slowest revenue growth in seven quarters at 14.7% to $44.80 billion, with profit growth slowing to 11.5% as operating costs jump nearly 9%, according to Reuters data from LSEG. BNP Paribas estimates the Superintelligence hiring spree could drive $1.5 billion to $3.5 billion in additional annual research spending, with Meta already raising its capital expenditure forecast by 10% to between $64 billion and $72 billion for 2025.

Despite the massive investments and internal tensions as existing AI staff worry about being sidelined, Meta's stock has risen 20% this year, as investors show more patience for AI spending than they did for the metaverse push that drew sharp criticism in 2022. However, executives are now questioning Meta's open-source AI strategy, with some considering abandoning the planned release of Llama 4's "Behemoth" version in favour of developing proprietary models. Zuckerberg remains confident the bet will pay off, telling The Information that Meta's "very strong business model that throws off a lot of capital" can support hundreds of billions in infrastructure and talent investments. The challenge now is convincing Wall Street that this secretive Manhattan Project will deliver the AI breakthrough Meta desperately needs.
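For context on the architecture at the heart of this story: a mixture-of-experts layer learns a router that sends each token to only a few small expert networks, so just a fraction of the model's parameters do work on any given token — which is why MoE models like DeepSeek's looked cheaper to train and run than dense ones. The toy sketch below shows top-k routing; the dimensions and weights are invented for illustration, and real MoE layers (DeepSeek's included) add load-balancing losses and batched GPU kernels.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 8, 2   # hidden size, expert count, experts used per token

router = rng.normal(size=(d, n_experts))                       # learned routing matrix
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # expert weight matrices

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Send one token through its top-k experts and mix the outputs.
    Only top_k of n_experts run at all, which is why an MoE model is
    cheaper per token than a dense model with the same parameter count."""
    scores = token @ router                 # affinity of this token to each expert
    chosen = np.argsort(scores)[-top_k:]    # indices of the top-k experts
    gates = np.exp(scores[chosen])
    gates /= gates.sum()                    # softmax over the chosen experts only
    return sum(g * (token @ experts[i]) for g, i in zip(gates, chosen))

out = moe_layer(rng.normal(size=d))
print(out.shape)  # (16,) -- same output shape as a dense layer, ~1/4 the FLOPs here
```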
