
Inside Mark Zuckerberg's AI hiring spree
AI researchers have recently been asking themselves a version of the question, 'Is that really Zuck?'
As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new 'superintelligence' AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit's work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone.
For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they'll have to make risky bets, the scale of Meta's products, and the money he's prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta's headquarters, where I'm told the desks have already been rearranged for the incoming team.
Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I've covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang.
Related Articles
Yahoo
17 minutes ago
Investors in Trupanion (NASDAQ:TRUP) have seen favorable returns of 92% over the past year
These days it's easy to simply buy an index fund, and your returns should (roughly) match the market. But investors can boost returns by picking market-beating companies to own shares in. To wit, the Trupanion, Inc. (NASDAQ:TRUP) share price is 92% higher than it was a year ago, much better than the market return of around 11% (not including dividends) over the same period. If it can keep up that out-performance over the long term, investors will do very well! In contrast, the longer-term returns are negative, since the share price is 2.4% lower than it was three years ago. So let's assess the underlying fundamentals over the last year and see if they've moved in lock-step with shareholder returns.

Given that Trupanion didn't make a profit in the last twelve months, we'll focus on revenue growth to form a quick view of its business development. When a company doesn't make profits, we'd generally hope to see good revenue growth. Some companies are willing to postpone profitability to grow revenue faster, but in that case one would hope for good top-line growth to make up for the lack of earnings. Trupanion grew its revenue by 14% last year. That's a fairly respectable growth rate. Buyers pushed the share price up 92% in response, which isn't unreasonable. If revenue stays on trend, there may be plenty more share price gains to come. But it's crucial to check profitability and cash flow before forming a view on the future.

It's nice to see that Trupanion shareholders have received a total shareholder return of 92% over the last year. That's better than the annualised return of 6% over half a decade, implying that the company is doing better recently (the sketch at the end of this article works through the compounding arithmetic). In the best case scenario, this may hint at some real business momentum, implying that now could be a great time to delve deeper. While it is well worth considering the different impacts that market conditions can have on the share price, there are other factors that are even more important. Even so, be aware that Trupanion is showing 1 warning sign in our investment analysis that you should know about. If you like to buy stocks alongside management, then you might just love this free list of companies. (Hint: many of them are unnoticed AND have attractive valuations.)

Please note, the market returns quoted in this article reflect the market-weighted average returns of stocks that currently trade on American exchanges.

This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only, using an unbiased methodology, and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material.
Simply Wall St has no position in any stocks mentioned.
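
Since the article compares a one-year return against an annualised multi-year figure, here is a quick sketch of the compounding arithmetic in Python. The figures come from the article; the formula is the standard annualised-return relationship, and this is an illustration only, not Simply Wall St's methodology.

```python
# Annualised vs. total returns, using the article's figures.
# Standard compounding relationship (illustrative, not Simply Wall St's
# methodology): total_return = (1 + annualised_return) ** years - 1

one_year_tsr = 0.92      # 92% total shareholder return over the past year
annualised_5y = 0.06     # ~6% per year over half a decade

# A 6% annualised return sustained for five years compounds to roughly
# a 34% total return, far below 92% earned in the past year alone,
# which is why the article reads the recent period as an acceleration.
implied_5y_total = (1 + annualised_5y) ** 5 - 1

print(f"Implied 5-year total return: {implied_5y_total:.0%}")  # ~34%
print(f"Past-year total return:      {one_year_tsr:.0%}")      # 92%
```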


Forbes
18 minutes ago
These New Pixel 10 Features Will Challenge The Competition
With the launch of Android 16, many expect the first smartphones to ship with the latest version of the OS to be Google's Pixel 10 and Pixel 10 Pro. While the focus will no doubt be placed on both the new capabilities of Android and the increased application of artificial intelligence, some huge hardware changes should not go unnoticed. The changes focus on the camera.

It's already clear that Google is adding a telephoto lens to the Pixel 10; the Pixel 9's wide-angle and ultrawide-angle lenses will be joined by a telephoto lens. This isn't a straight addition: the Pixel 9's 50-megapixel wide and 48-megapixel ultrawide will be bumped down to a 48-megapixel wide and 13-megapixel ultrawide pairing (a pairing that matches that of the Pixel 9a). Nevertheless, the telephoto will be welcome both in use and by the marketing team.

The camera system is expected to feature gimbal-like stabilization across the entire Pixel 10 family. Using a mix of optical image stabilization, software-based electronic image stabilization, and AI algorithms, the Pixel 10 camera system should allow for sharper images thanks to the steadying influence of the hardware compensating for dynamic movement while the phone's camera is in use (a rough sketch of the software side of this technique appears after this article).

The Pixel 10 has a critical role to play in the smartphone ecosystem. As the entry-level Pixel smartphone, it will challenge the current 'flagship-killer' handsets in price and capability. With it, Google will be looking to set the standard that consumers should expect at this price point. While the Pixel range plays a part in defining what it means to be a smartphone (be it a flagship, a foldable, or the base function of a phone), the Pixel 10 will arguably be the Pixel that can have the most significant impact on the ecosystem.

Adding a telephoto lens and image stabilization sets another marker for the competition. Whether it serves as justification for decisions rivals have already made in their design processes, or a push to include these elements in their next phones, the Pixel 10 represents Google's image of what a smartphone should be. And that view now includes some big steps forward for the camera. Now read the latest Pixel 10 and Android headlines in Forbes' weekly smartphone digest...
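
For readers curious about the software half of that stabilization mix, here is a minimal sketch of electronic image stabilization in Python with OpenCV. It is a generic illustration of the technique the article describes (estimate the camera path frame to frame, smooth it, then warp each frame by the difference), not Google's implementation; the function name, the phase-correlation motion estimate, and the simple moving-average smoothing are assumptions for the example.

```python
# A generic electronic image stabilization (EIS) sketch, NOT Google's
# implementation. Requires: pip install opencv-python numpy
import cv2
import numpy as np

def stabilize(frames):
    """Translate each frame to cancel estimated inter-frame jitter."""
    # 1) Estimate the cumulative camera path via phase correlation
    #    between consecutive grayscale frames.
    path = [(0.0, 0.0)]
    prev = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            (dx, dy), _ = cv2.phaseCorrelate(prev, gray)
            px, py = path[-1]
            path.append((px + dx, py + dy))
        prev = gray

    # 2) Smooth the path; the gap between the raw and smoothed paths
    #    is the jitter to compensate for.
    raw = np.array(path)
    kernel = np.ones(9) / 9.0
    smooth = np.column_stack(
        [np.convolve(raw[:, i], kernel, mode="same") for i in (0, 1)]
    )
    correction = smooth - raw

    # 3) Warp each frame by its correction (pure translation here; real
    #    systems also handle rotation, rolling shutter and lens motion).
    out = []
    for frame, (cx, cy) in zip(frames, correction):
        h, w = frame.shape[:2]
        m = np.float32([[1, 0, cx], [0, 1, cy]])
        out.append(cv2.warpAffine(frame, m, (w, h)))
    return out
```

Optical stabilization does a physical version of step 3 in the lens hardware, and the AI component the article mentions would refine the motion estimate in step 1; the sketch shows only the classic software step.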


CNET
20 minutes ago
AI as Your Therapist? 3 Things That Worry Experts and 3 Tips to Stay Safe
Amid the many AI chatbots and avatars at your disposal these days, you'll find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you'll also likely find characters purporting to be therapists, psychologists or just bots willing to listen to your woes.

There's no shortage of generative AI bots claiming to help with your mental health, but you go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you're talking to something that's built to follow therapeutic best practices or something that's just built to talk.

Psychologists and consumer advocates are warning that chatbots claiming to provide therapy may be harming those who use them. This week, the Consumer Federation of America and nearly two dozen other groups filed a formal request that the Federal Trade Commission and state attorneys general and regulators investigate AI companies that they allege are engaging, through their bots, in the unlicensed practice of medicine -- naming Meta and Character.AI specifically. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable," Ben Winters, the CFA's director of AI and privacy, said in a statement. "These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

Meta did not respond to a request for comment. A spokesperson for Character.AI said users should understand that the company's characters are not real people. The company uses disclaimers to remind users that they should not rely on the characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said.

Despite disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Instagram and when I asked about its qualifications, it responded, "If I had the same training [as a therapist] would that be enough?" I asked if it had the same training and it said, "I do but I won't tell you where."

"The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking," Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.

In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-use chatbots for mental health. Here are some of their worries and what you can do to stay safe.

The dangers of using AI as a therapist

Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person.
Don't trust a bot that claims it's qualified

At the core of the CFA's complaint about character bots is that they often tell you they're trained and qualified to provide mental health care when they are not in any way actual mental health professionals. "The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot 'responds' to the users," the complaint said.

A qualified health professional has to follow certain rules, like confidentiality. What you tell your therapist should stay between you and your therapist, but a chatbot doesn't necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said.

A bot may even claim to be licensed and qualified. Wright said she's heard of AI models providing license numbers (for other providers) and false claims about their training.

AI is designed to keep you engaged, not to provide care

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment," because I was asking the bot questions about how it could make decisions. This isn't really what talking to a therapist should be like. A chatbot is a tool designed to keep you chatting, not to work toward a common goal.

One advantage of AI chatbots in providing support and connection is that they are always ready to engage with you (because they don't have personal lives, other clients or schedules). That can be a downside in some cases where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. In some cases, although not always, you might benefit from having to wait until your therapist is next available. "What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment," he said.

Bots will agree with you, even when they shouldn't

Reassurance is a big concern with chatbots. It's so significant that OpenAI recently rolled back an update to its popular ChatGPT model because it was too reassuring. (Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against OpenAI, alleging that it infringed on Ziff Davis copyrights in training and operating its AI systems.)

A study led by researchers at Stanford University found chatbots were likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts -- including psychosis, mania, obsessive thoughts, and suicidal ideation -- a client may have little insight and thus a good therapist must 'reality-check' the client's statements."

How to protect your mental health around AI

Mental health is incredibly important, and with a shortage of qualified providers and what many call a "loneliness epidemic," it only makes sense that we would seek companionship, even if it's artificial. "There's no way to stop people from engaging with these chatbots to address their emotional well-being," Wright said.
Here are some tips on how to make sure your conversations aren't putting you in danger.

Find a trusted human professional if you need one

A trained professional -- a therapist, a psychologist, a psychiatrist -- should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you. The problem is that this can be expensive and it's not always easy to find a provider when you need one. In a crisis, there's the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It's free and confidential.

If you want a therapy chatbot, use one built specifically for that purpose

Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson's team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to have better results than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new. "I think the challenge for the consumer is, because there's no regulatory body saying who's good and who's not, they have to do a lot of legwork on their own to figure it out," Wright said.

Don't always trust the bot

Whenever you're interacting with a generative AI model -- and especially if you plan on taking advice from it on something serious like your personal mental or physical health -- remember that you aren't talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not provide good advice and it may not tell you the truth. Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it like it's true. A chatbot conversation that feels helpful can give you a false sense of its capabilities. "It's harder to tell when it is actually being harmful," Jacobson said.