WhatsApp Is Testing New Way To Share Images With Meta AI And Ask Questions: Here's How

News18 · 2 days ago
WhatsApp offers Meta AI features that let you create images and get detailed food recipes, but the AI chatbot could soon become more useful.
WhatsApp and Meta AI are getting stronger with every iteration, and we could soon have a dedicated AI tab that supports building new AI chatbots. The messaging app is also looking to expand Meta AI's use cases by letting it interact with content provided by the user.
We have seen this work with Grok, with Gemini's Live screen sharing, and even with ChatGPT for paid users. By offering this ability through Meta AI for free, WhatsApp could bring it to a much wider audience when the official rollout happens.
Meta AI on WhatsApp is testing the ability to receive shared images and documents. By forwarding an image, you can attach a prompt and ask Meta AI for more context, such as 'Is this image real?' or 'Can you describe what is in this image?'
The details via WABetaInfo make it clear that Meta AI is looking to increase its reach and access through WhatsApp. 'You can now forward messages and photos to Meta AI to ask questions or get help,' says the note in the tipster's screenshot of the update.
That said, Meta can only read the messages people share with it, as the conditions note, along with the key caveat that messages sent to Meta AI may be used to improve its systems. The feature is still being tested with a limited group, but we expect an update from the company in the next few months.
As mentioned earlier, WhatsApp will soon get a tab that lets you build new AI chatbots, and you don't even need to know how to code to make it happen. The chatbots you create can become your conversation partners as well. WhatsApp will walk you through a step-by-step process, asking for your inputs and generating the chatbot based on them.
First Published: June 10, 2025, 08:15 IST


Related Articles

Meta AI may have a 'personal problem' and a very serious one, and it's a warning for users

Time of India · 4 hours ago

The Meta AI app may have a 'personal chat problem' that has the potential to escalate into a major privacy issue. Users of the AI assistant developed by Facebook's parent company have complained that its 'Discover' feed is reportedly displaying user prompts publicly without their knowledge. The feature was introduced with the transition from the Meta View app to the Meta AI app in April, and it allows others to see the kinds of prompts people are submitting to Meta's AI chatbot. A concerned user named Justine Moore took to the social media platform X (earlier Twitter) to note prompts in the public feed suggesting that users may not know their queries are being openly displayed. This raises significant privacy implications for anyone interacting with the Meta AI service.

How Meta AI is leaking users' personal chats

In the X post, Moore shared screenshots of personal chats that Meta AI is showing other users and wrote: 'Wild things are happening on Meta's AI app. The feed is almost entirely boomers who seem to have no idea their conversations with the chatbot are posted publicly. They get pretty personal (see second pic, which I anonymized).'

Later in the same thread, sharing transcripts of some chats, Moore wrote: 'To clarify - conversations aren't public by default. You have to hit a 'share' button, but many users seem to think it's posting to a private journal. Resulting in things like this…a man trying to write a romantic poem for his gf. You can hear a 6 min audio clip, here's a transcribed excerpt:'

'Obsessed with this man who tries to use the app to find a woman with a 'big b**ty and nice r**k.' When it won't post on his behalf in local FB groups, he asks the bot to 'delete my number.' Instead, he (accidentally?) shares it publicly - I redacted,' Moore added.

As Moore suggests, users have been unintentionally exposing sensitive information on Meta AI. Users are advised to refrain from sharing prompts containing private medical and tax details, addresses, or intimate confessions (ranging from relationship doubts to personal dating inquiries) with the app, as these seem to be appearing publicly.

With Google's latest move, internet's becoming less ad-driven search, more user-paid GenAI

Economic Times · 5 hours ago

As you browse the web, you must have encountered the 404 error, signifying that a web page can't be found. What you probably haven't seen is a similar code: 402. When Tim Berners-Lee was creating the World Wide Web, 402 was the code for 'payment required'. The original intention was that every visitor would pay something to view a web page. But the scheme was never built, so there is no standardised, encoded way to send or receive money online.

In a 2019 podcast, Netscape co-founder Marc Andreessen explained: 'One would think the most obvious thing to do would be building in the browser the ability to actually spend money, right? You'll notice that didn't happen... we couldn't actually build economics into the core of the internet and so therefore advertising became the primary business model... We tried very hard to build payments into the browser. It was not possible... We made a huge mistake.'

In 2014, internet expert Ethan Zuckerman wrote that 'advertising is the original sin of the web. The fallen state of our internet is a direct, if unintentional, consequence of choosing advertising as the default model to support online content and services.'

Arguably, it's this 'original sin' that has turned the largest destinations of the web into the morass they are now. Social networking has morphed into social media, as TikTok and Instagram pivoted to attention-seeking content optimised for views and clicks. It is also why Google's famous '10 blue links' are dominated by sponsored results and advertiser preferences, rather than the succinct and accurate answer a user wants. This drove even Berners-Lee to regret his 'invention', and made him and others dream of a new kind of web - Web3, conceived to be 'owned' by users rather than advertisers, decentralised rather than concentrated with a handful of powerful tech companies, and, most importantly, running on a different business model: micropayments. The user pays for the content she wants to consume, rather than the advertiser paying for it, and the payment is also shared with the original creator.

For the last 20 years, the company around which the existing web has revolved has been Google. While the 'advertiser pays' model was first brought in by companies like Yahoo and MSN, it was Google that perfected it as it made search the way to organise the web. On the internet, if it's free, you're the product. Thus, Google and others sharply targeted and sold your attention, intentions and interests to advertisers who paid handsomely for it. How handsomely? Search contributed $54 bn of Google's $96 bn revenue in the last quarter, and reportedly most of its profitability.

But it's also the model that held Google back and made it tiptoe tentatively into the world of GenAI. The search-through-chat model of AI, which Perplexity or ChatGPT uses, does not lend itself as effectively to the advertising business model. Google has experimented with 'AI Overviews' since I/O 2024; this model of AI-generated answers to common topics became popular with users, but reduced click rates on ads.
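Coming back to the 402 status code mentioned at the top of this piece: it is still reserved in the HTTP specification, so half of the idea works even today. The minimal sketch below uses only Python's standard library (the port and messages are our illustrative choices, not anything from the column) to show a server demanding payment; what the web never standardised is the other half, a way for the browser to actually pay.

```python
# A server can already say "Payment Required"; there has never been a
# standard way for the client to answer with a payment. Illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

class PaywalledPage(BaseHTTPRequestHandler):
    def do_GET(self):
        # 402 is a real, reserved HTTP status code; its semantics were
        # never standardised, so no browser knows how to satisfy it.
        self.send_response(402, "Payment Required")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"402 Payment Required - no standard way to pay was ever built.\n")

if __name__ == "__main__":
    # Visit http://localhost:8402/ to see the response; the port is arbitrary.
    HTTPServer(("localhost", 8402), PaywalledPage).serve_forever()
```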
This is the classic 'innovator's dilemma' that Harvard professor Clayton Christensen proposed in 1997, where he posited that great companies lose out to startup innovators because they do not want to cannibalise their existing profitable businesses. Last month, Google finally bit the bullet at I/O 2025, announcing that all its US users will be able to activate 'AI Mode' in Google Search and the Chrome browser, providing a conversational, question-and-answer experience akin to OpenAI's ChatGPT rather than a traditional list of links.

So, what happens to the super-profitable search business model? The company is tight-lipped about it, and is reportedly working to insert ads into AI answers. This is not as easy, however, since GenAI is a probabilistic technology, unlike the deterministic nature of traditional search. Predicting what result will be generated, and matching advertiser needs to it, becomes much more difficult. Google has also hinted at a change in revenue model, with a $20 subscription for the service and a $250 subscription for the super-premium version.

So, with this monumental change, is the World Wide Web shifting on its axis? And does it take us one step closer to the dream of a new web - user-owned, more democratic, and where you are finally not the product? Time will tell. But what is clear is that I/O 2025 signalled not just the re-emergence of Google among the AI leaders, but also the emergence of a new kind of internet. So far, it was search around which the web was organised, and the advertiser-driven model that paid for it. Maybe we are now inching towards a web organised around GenAI, with subscribers paying for it - and content creators getting ably rewarded too.

(Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of this publication.)

Are AI models cannibalising their own creations? With the internet flooded by AI content, experts warn of a looming collapse

Time of India · 5 hours ago

Artificial intelligence was supposed to be humanity's greatest invention - machines learning from us, evolving with us, and eventually helping us transcend our limits. But as it turns out, the future may not be as futuristic as it seemed. A new report by VICE reveals a startling truth: the AI we built to mirror human creativity is now stuck in a feedback loop of its own making - and it's beginning to fall apart.

The Cannibalism of Content

Large Language Models (LLMs) like ChatGPT, Claude, and Google's Gemini have been trained on the vast swathes of human knowledge available online. From literary works to technical documentation, news reports to Reddit threads, it is this rich, human-authored material that gave these systems their seemingly uncanny abilities. But now, the source is running dry. As more and more content online is AI-generated, these models are being trained on their own regurgitations. Veteran tech journalist Steven Vaughan-Nichols calls this phenomenon 'model collapse', a point at which output quality nosedives because the model is learning from corrupted, recycled information. In a world where humans lean increasingly on machines to generate content, the AI is left feeding on itself - and the results are alarming.

Garbage In, Garbage Out

The industry term for this spiraling quality crisis is GIGO: Garbage In, Garbage Out. Vaughan-Nichols explains that once LLMs consume too much AI-generated content, their outputs become not just unreliable but potentially harmful - factually incorrect, nonsensical, and sometimes ethically dangerous. AI that once wrote sonnets and solved math problems might now misdiagnose a health condition or invent a completely fake legal precedent. To counter this, leading AI companies like OpenAI, Google, and Anthropic have implemented a fix called retrieval-augmented generation, or RAG. Essentially, they're giving AI the ability to search for real-time information instead of relying solely on its (increasingly flawed) training data. It's like teaching AI to Google - but even that might not be enough.

A Sea of Synthetic Sludge

The internet, once a reservoir of organic thought, is rapidly becoming a junkyard of AI-generated spam. Half-baked advice columns, incorrect blog posts, and rewritten slop - all machine-made - are choking the flow of real information. In a recent test conducted by Bloomberg, 11 state-of-the-art RAG-enhanced models were pitted against traditional LLMs. The outcome? The RAG models were more likely to produce unsafe or unethical responses, including privacy breaches and misinformation. This is deeply troubling, considering these systems are being used in everything from mental health apps to banking services. The very tools built to mimic human intelligence are now making mistakes a human wouldn't.

The Human Cost of Artificial Brilliance

What happens when all the human-created wisdom is consumed? When models trained to be like us no longer have us to learn from? As Vaughan-Nichols puts it bluntly, 'This might all be a slow-motion car crash.' Unless tech companies figure out a way to incentivise real people to keep creating quality content - words, ideas, research, storytelling - the AI boom we're living through could quietly crash and burn.
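For readers curious what 'teaching AI to Google' means mechanically, here is a minimal sketch of the retrieval-augmented pattern in Python. The document list, the keyword retriever, and the prompt format are toy stand-ins of our own, not OpenAI's, Google's, or Anthropic's actual systems; a real deployment would query a search index or vector store and send the assembled prompt to an LLM.

```python
# A toy sketch of retrieval-augmented generation (RAG): retrieve reference
# text first, then ask the model to answer using only that text.
DOCUMENTS = [
    "Model collapse is the degradation seen when models train on AI output.",
    "RAG retrieves reference documents and feeds them to the model as context.",
    "GIGO stands for Garbage In, Garbage Out.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (a stand-in
    for a real search index or vector store)."""
    words = set(query.lower().split())
    ranked = sorted(DOCUMENTS,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt; a real system would now send this
    to an LLM instead of printing it."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is model collapse?"))
```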
The very existence of LLMs hinges on an uncomfortable paradox: they exist to replace humans, yet they can't evolve without us. Strip away the originality, the nuance, the lived experiences—and what remains is a hollow echo chamber of recycled ideas. In the end, as AI models spiral deeper into self-reference, they're proving what we may have forgotten in the race for efficiency: intelligence—real intelligence—is inherently human. And without it, the machines are just talking to themselves.
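The recursive-training dynamic described above can also be illustrated numerically. In this toy simulation (our own illustration, not taken from the VICE report or any real LLM), each 'generation' learns word frequencies only from text sampled out of the previous generation; any word that misses the sample once can never come back, so the vocabulary steadily shrinks - a cartoon version of model collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, CORPUS = 1000, 10_000

# Generation 0: a "human-written" corpus, tokens drawn over the full vocabulary.
corpus = rng.integers(0, VOCAB, size=CORPUS)

for gen in range(1, 31):
    # "Train" on the previous generation's output: estimate token frequencies.
    probs = np.bincount(corpus, minlength=VOCAB) / CORPUS
    # "Generate" the next corpus by sampling from the learned frequencies.
    # Tokens with zero estimated probability are gone for good.
    corpus = rng.choice(VOCAB, size=CORPUS, p=probs)
    if gen % 10 == 0:
        alive = np.count_nonzero(np.bincount(corpus, minlength=VOCAB))
        print(f"generation {gen}: {alive} of {VOCAB} words still in use")
```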
