Zuckerberg launches standalone Meta AI app

Express Tribune | 01-05-2025

Social media behemoth Meta unveiled its first standalone AI assistant app on Tuesday, challenging ChatGPT by giving users a direct path to its generative artificial intelligence models.
"A billion people are using Meta AI across our apps now, so we made a new standalone Meta AI app for you to check out," the company's CEO and founder Mark Zuckerberg said in a video posted on Instagram.
Zuckerberg said the app "is designed to be your personal AI" and could be accessed primarily through voice conversations, with interactions personalised to the individual user.
"We're starting off really basic, with just a little bit of context about your interests," the CEO said. "But over time, you're going to be able to let Meta AI know a whole lot about you and the people you care about from across our apps, if you want."
Embracing the company's social media DNA, the app features a social feed allowing users to see AI-made posts by other users.
"We learn from seeing each other do it, so we put this right in the app," Meta chief product officer Chris Cox said Tuesday as he opened the tech titan's LlamaCon developers gathering devoted to its open-source AI model.
"You can share your prompts. You can share your art. It's super fun."
The new application also replaces Meta View as the companion app for Ray-Ban Meta smart glasses, allowing conversations to flow between glasses, mobile app and desktop interfaces, the company said.
"We were very focused on the voice experience; the most natural possible interface," Cox said. "You can hear interruptions and laughter and an actual dialogue - just like a phone call."
Cox explained that the voice feature isn't able to search the web, so asking about topics such as sports teams or the papal conclave was off the table for now.
Users will have the option of letting Meta AI learn about them by looking at their activity on their Instagram or Facebook accounts.
"It will also remember things you tell it like your kids' names; your wife's birthday, and other things you want to make sure your assistant doesn't forget," Cox said.
Meta touted the advantages of Llama at the one-day event, which was aimed at getting developers to embrace the AI model it describes as open-source.
Open source means developers are free to customise key parts of the software to suit their needs.
"You have the ability to take the best parts of the intelligence from the different models and produce exactly what you need, which I think is going to be very powerful," Zuckerberg told developers tuned in to LlamaCon. AFP

Related Articles

Gen Z embraces ChatGPT for affordable therapy while experts caution on mental health risks

Express Tribune | 14 hours ago

Young people from Generation Z are increasingly turning to ChatGPT and other AI chatbots as affordable, on-demand alternatives to traditional talk therapy. Users praise the AI's availability, non-judgmental responses, and lower cost compared to licensed therapists, with some claiming the chatbot has helped them more than years of conventional treatment.

However, licensed mental health professionals caution against relying solely on AI for therapy. While AI can support therapeutic tools and provide consistent empathy without fatigue, it lacks the intuition, clinical expertise, and personalized care that human therapists offer. Experts warn that overdependence on chatbots may hinder users' ability to cope independently and could lead to inaccurate diagnoses or harmful advice.

Recent incidents involving AI chatbots providing dangerous recommendations have raised concerns about safety, especially for vulnerable and underage users. Mental health organizations, including the American Psychological Association, urge caution and stress that AI should complement, not replace, professional care. They highlight the need for responsible development of AI tools guided by licensed experts to fill gaps for those unable to afford traditional therapy.

Hey chatbot, is this true? AI 'factchecks' Pakistan-India war information

Business Recorder | 15 hours ago

WASHINGTON: As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification – only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots – including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini – in search of reliable information. 'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media.

But the responses are often themselves riddled with misinformation. Grok – now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries – wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.

'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. 'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.

'Fabricated'

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.

'Biased answers'

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content – something professional fact-checkers vehemently reject.

AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union. The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.

Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.

'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP. 'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.'

Gary Marcus proposes an alternative to AI models

Express Tribune | 2 days ago

Two and a half years since ChatGPT rocked the world, scientist and writer Gary Marcus remains generative artificial intelligence's great skeptic, offering a counter-narrative to Silicon Valley's AI true believers.

Marcus became a prominent figure of the AI revolution in 2023, when he sat beside OpenAI chief Sam Altman at a Senate hearing in Washington as both men urged politicians to take the technology seriously and consider regulation.

Much has changed since then. Altman has abandoned his calls for caution, instead teaming up with Japan's SoftBank and funds in the Middle East to propel his company to sky-high valuations as he tries to make ChatGPT the next era-defining tech behemoth. "Sam's not getting money anymore from the Silicon Valley establishment," and his seeking funding from abroad is a sign of "desperation," Marcus told AFP on the sidelines of the Web Summit in Vancouver, Canada.

Marcus's criticism centers on a fundamental belief: generative AI, the predictive technology that churns out seemingly human-level content, is simply too flawed to be transformative. The large language models (LLMs) that power these capabilities are inherently broken, he argues, and will never deliver on Silicon Valley's grand promises.

"I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world."

The optimism that humanity stands on the cusp of achieving superintelligence, or artificial general intelligence (AGI) - technology that could match and even surpass human capability - has driven OpenAI's valuation to $300 billion, unprecedented levels for a startup, with Elon Musk's xAI racing to keep pace.

Yet for all the hype, the practical gains remain limited. The technology excels mainly at coding assistance and text generation. AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business.

Marcus, a longtime New York University professor, champions a fundamentally different approach to building AI - one he believes might actually achieve human-level intelligence in ways that current generative AI never will. "One consequence of going all-in on LLMs is that any alternative approach that might be better gets starved out," he explained, pointing to the tunnel vision created by the industry's focus on LLMs.

'Right answers matter'

Instead, Marcus advocates for neurosymbolic AI, an approach that attempts to rebuild human logic artificially rather than simply training computer models on vast datasets, as is done with ChatGPT and similar products like Google's Gemini or Anthropic's Claude.

He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: "There are too many white-collar jobs where getting the right answer actually matters." This points to AI's most persistent problem: hallucinations, the technology's well-documented tendency to produce confident-sounding mistakes. Even AI's strongest advocates acknowledge this flaw may be impossible to eliminate.

Marcus recalls a telling exchange from 2023 with LinkedIn founder Reid Hoffman, a Silicon Valley heavyweight: "He bet me any amount of money that hallucinations would go away in three months. I offered him $100,000 and he wouldn't take the bet."

Looking ahead, Marcus warns of a darker consequence once investors realize generative AI's limitations. Companies like OpenAI will inevitably monetize their most valuable asset: user data. "The people who put in all this money will want their returns, and I think that's leading them toward surveillance," he said, pointing to Orwellian risks for society. "They have all this private data, so they can sell that as a consolation prize."

Marcus acknowledges that generative AI will find useful applications in areas where occasional errors don't matter much. "They're very useful for auto-complete on steroids: coding, brainstorming, and stuff like that," he said. "But nobody's going to make much money off it because they're expensive to run, and everybody has the same product."
