Latest news with #Llamacon
Engadget
a day ago
- Engadget
AI-powered chat summaries are coming to WhatsApp
Meta is adding a new Message Summaries feature to WhatsApp that uses AI to summarize unread messages in a few bullet points. The feature is built on the Private Processing technique Meta announced at Llamacon in April, and claims to let AI work with content in WhatsApp without exposing any of it to Meta itself. Once the feature appears in your app, you just tap the onscreen banner over your unread messages that says "Summarize privately" to receive a summary from Meta AI. The Message Summaries feature is rolling out to WhatsApp users in the US chatting in English first, but Meta says it hopes to "bring it to other languages and countries later this year." The company pitches summaries as an easier way to catch up on what you missed if you haven't checked your phone or you're just in too many chats. AI is by no means foolproof at even simple tasks like this — Apple's trouble with notification summaries was only a few months ago — but the tool could be appealing to people in particularly large and active chats.

The real novelty of the summaries is how Meta claims to be deploying them without walking back the private nature of WhatsApp chats. The company has a blog post and whitepaper digging into the details of how Private Processing works, but at first blush it sounds similar to Private Cloud Compute, the method Apple uses to offer more demanding AI features without exposing its users' data. Using end-to-end encryption and a secure cloud environment, WhatsApp messages can be processed without the data being accessed while it's happening, or saved after the fact. Importantly, all of this is still optional. Summaries won't be provided without you asking for them first, and the feature is disabled by default. Meta also says you can exclude chats from being shared with the company's AI via the Advanced Chat Privacy feature.
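For readers curious what that general pattern looks like, here is a minimal, hypothetical Python sketch of the idea described above: the client encrypts messages before anything leaves the device, and an isolated service decrypts, produces a summary, and retains nothing. The function names, the toy summarizer, and the pre-shared session key are assumptions for illustration only; Meta's actual Private Processing relies on attested confidential-computing hardware and a real language model, and this is not its implementation.

```python
# Hypothetical sketch of "process privately in a secure cloud": client-side
# encryption plus an isolated summarization step that keeps nothing afterward.
# NOT Meta's Private Processing code; all names here are placeholders.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def client_encrypt(messages: list[str], key: bytes) -> tuple[bytes, bytes]:
    """Client encrypts unread messages before they leave the device."""
    nonce = os.urandom(12)                      # fresh nonce per request
    payload = "\n".join(messages).encode()
    ciphertext = AESGCM(key).encrypt(nonce, payload, None)
    return nonce, ciphertext


def enclave_summarize(nonce: bytes, ciphertext: bytes, key: bytes) -> str:
    """Stand-in for the isolated environment: decrypt, summarize, keep nothing."""
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None).decode()
    lines = plaintext.splitlines()
    # Toy "summary": a real system would run a language model here.
    summary = f"- {len(lines)} unread messages\n- Starts with: {lines[0][:40]}"
    del plaintext, lines                        # nothing retained after the request
    return summary


if __name__ == "__main__":
    session_key = AESGCM.generate_key(bit_length=256)  # assumed agreed via attestation
    nonce, blob = client_encrypt(
        ["Dinner at 7?", "Running late", "Can someone bring dessert?"], session_key
    )
    print(enclave_summarize(nonce, blob, session_key))
```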

The Hindu
7 days ago
- Business
- The Hindu
After Scale AI, Meta is going after OpenAI co-founder Ilya Sutskever's startup Safe Superintelligence
Meta is on AI overdrive. After steadily losing talent to rivals and a faulty product launch during its developer event, Llamacon, in April, the social media giant is desperately trying to stay in the AI race. Its plan: if you can't build internally, then go shopping.

Last week (June 10), Meta agreed to pick up a 49% stake in data-labelling firm Scale AI — a deal that brought the company's founder, Alexandr Wang, into Meta's fold. Now, per media reports, the social network is looking to bolster Mr. Wang's team. After its failed attempt to buy OpenAI co-founder Ilya Sutskever's startup Safe Superintelligence Inc earlier this year, Meta has opened communication channels with SSI CEO Daniel Gross and ex-GitHub CEO Nat Friedman. It must be noted that both Mr. Gross and Mr. Friedman run a venture fund called NFDG. Per media reports, the duo is expected to join Meta and report to Mr. Wang, Meta's superintelligence department chief. Meta will also own a portion of NFDG under the terms. Neither Mr. Gross nor Mr. Friedman has confirmed or denied talks with Meta. Apart from this, Meta has also attempted to poach OpenAI's AI researchers, offering some as much as $100 million as a joining bonus.

Mr. Sutskever's startup SSI was valued at $32 billion in an April funding round. The firm was established by the Canadian computer scientist after he quit OpenAI over alignment issues.


Axios
May 15, 2025
- Business
- Axios
Meta delays "Behemoth" AI model release: report
Meta is pushing back plans for a public release of its largest Llama 4 AI model, known as "Behemoth," because of concerns that it may not be enough of an advance on previous models, per a new Wall Street Journal report.

Why it matters: It's another indicator that the AI industry's scaling strategy — "just make everything bigger" — could be hitting a wall.

Driving the news: The Journal says that Behemoth is now expected to be released in the fall or even later. It was originally scheduled to coincide with Meta's Llamacon event last month, then later postponed until June. It's also possible the company could speed up a more limited Behemoth release.

The big picture: Meta is spending tens of billions of dollars on its AI projects, and until recently it had made fast progress catching up with rivals like Google and OpenAI.

Between the lines: Meta's disappointments mirror a broader worry inside the AI industry that progress dependent on scaling up models may be plateauing. OpenAI has faced well-documented hurdles in shipping an all-in-one next-generation large-scale model to follow GPT-4. Instead, the ChatGPT maker has rolled out a series of separate models, some specializing in reasoning and others in coding and technical work. Google and Anthropic have also reportedly encountered setbacks with their most recent efforts to train the largest models.


Axios
May 2, 2025
- Business
- Axios
In Meta's AI future, your friends are bots
Mark Zuckerberg and Meta are pitching a vision of AI chatbots as an extension of your friend network and a potential solution to the "loneliness epidemic."

Why it matters: Meta's approach to AI raises the broader question of just whose interest chatbots are serving — especially when the bot has access to the details of your life and the company's business depends on constantly boosting the time you spend with it.

Driving the news: Facebook's parent company on Tuesday debuted a new mobile app that transforms the Meta AI chatbot into a more social experience, including the ability to share AI-generated creations with friends and family. But Zuckerberg also sees the AI bot itself as your next friend. "The average American has, I think, it's fewer than three friends," Zuckerberg said during a podcast interview Monday. "And the average person has demand for meaningfully more." In a media blitz that included several podcast appearances this week timed for Meta's Llamacon event, Zuckerberg mapped out an AI future built on a foundation of augmented-reality glasses and wristband controllers. With those devices plugged into Meta's AI models, he predicted the emergence, within four or five years, of a new platform for human interaction with bots. It would, he said, be the next logical step in the internet's evolution from text to audio and video.

What they're saying: "Today, most of the time spent on Facebook and Instagram is on video, but do you think in five years we're just going to be sitting in our feed and consuming media that's just video?" Zuckerberg asked podcaster Dwarkesh Patel. "No," he answered himself. "It's going to be interactive. You'll be scrolling through your feed, and there will be content that maybe looks like a Reel to start, but you can talk to it, or interact with it, and it talks back, or it changes what it's doing. Or you can jump into it like a game and interact with it. That's all going to be AI."

Yes, but: Where Zuckerberg sees opportunity, critics see alarm bells, especially given Meta's history and business model. "The more time you spend chatting with an AI 'friend,' the more of your personal information you're giving away," Robbie Torney, Common Sense Media's senior director of AI programs, told Axios. "It's about who owns and controls and can use your intimate thoughts and expressions after you share them." Under Meta's privacy policy, its AI chatbot can use what the company knows about you in its interactions. Meta can also use your conversations — and any media you upload — for training its models. You can choose to have Meta AI not remember specific details, but there is no way for U.S. users to opt out more broadly.

The intrigue: Zuckerberg's bot-friendship vision is arriving at a moment when AI companions face criticism and controversy, particularly as younger users encounter them. The Wall Street Journal reported last week that its testing showed earlier versions of Meta's chatbots — including those based on celebrity personas — are willing to engage in sexual banter, even with users who identified themselves as teens. (Meta said it has since implemented controls to prevent this from happening.)

The big picture: It's not just Meta that is being forced to navigate the social maze of chatbot-human interaction. Most chatbots typically serve up information users request, but as models grow in size and complexity, their makers are finding it hard to tune the bots' traits properly. ChatGPT maker OpenAI was forced to roll back an update after users found that its latest model was behaving with an overload of flattery — it had become, as Engadget put it, "an ass-kissing weirdo."

Between the lines: Critics' concerns range from the sensitivity of the data users are sharing to the potential for addiction to the risk of bots dispensing potentially dangerous advice. "These companies are optimizing for data collection first and user engagement first, with well-being as a secondary consideration, if it's a consideration at all," Torney said. Common Sense Media issued a report Wednesday declaring that the entire category of AI companions — including those from Nomi and Replika — poses an unacceptable safety risk for minors. Torney said they can be problematic for vulnerable adults as well. "People are increasingly using AIs as a source of practical help, support, entertainment and fun — and our goal is to make them even easier to use. We provide transparency, and people can manage their experience to make sure it's right for them," a Meta spokesperson said in a statement.

What's next: The more social AI becomes, the more likely it is that AI companies will replicate the aspects of social media that have gradually soured so many users on the platforms — which still command billions of people's attention. Camille Carlton, policy director at the Center for Humane Technology, sees dangers in this "transition towards engagement," as well as in companies' push to grab as much data as they can to personalize their AI services. Carlton noted that companies like Anthropic and OpenAI are generating revenue from business customers who pay for access — but those, like Meta, that focus on consumers will keep looking for ways to make their large investments in AI models pay off.