
Latest news with #LlamaAPI

Can Meta's AI Tools Disrupt Google (GOOGL) and OpenAI? Analysts Weigh In

Globe and Mail

05-05-2025



Meta Platforms (META) hosted its first AI-focused event, LlamaCon, and analysts came away feeling positive about the company's direction in artificial intelligence. Five-star Bank of America analyst Justin Post, who has a Buy rating and a $640 price target on Meta, said the event showed how flexible Llama is and how strong Meta's open-source position has become. He pointed out that Llama has now been downloaded over 1.2 billion times, offers a wide range of features, and is cheaper than many competitors. Meta also launched a new AI app and the Llama API, and added tools for security and app development.

Separately, Morgan Stanley's 4.5-star analyst Brian Nowak said that Meta's new AI assistant app isn't as advanced as ChatGPT or Google's (GOOGL) Gemini just yet, but it's still early. He believes it has potential, especially if it uses Meta's massive amount of user data to offer personalized help, like recommending restaurants or planning trips. Since Meta is offering the app for free while others charge for similar tools, it could gain users faster. However, Nowak also said it is still uncertain whether the app will generate meaningful revenue or become a real threat to Google Search.

Meanwhile, Jefferies' 4.5-star analyst Brent Thill called the event an important step in Meta's move toward becoming a cloud platform. Until now, developers accessed Llama through other cloud providers such as Amazon Web Services (AMZN) and Microsoft Azure (MSFT), but the new Llama API lets Meta make money directly from its own cloud services. Thill also pointed to new partnerships with chipmakers Groq and Cerebras and estimated that Llama could be worth $80 billion, based on other AI companies' valuations.
In addition, he said that Meta's large user base and data access give it a major advantage.

Is META Stock a Good Buy?

Turning to Wall Street, analysts have a Strong Buy consensus rating on META stock, based on 36 Buys, one Hold, and one Sell assigned in the past three months. Furthermore, the average META price target of $680.53 per share implies 26.1% upside potential.

Everything Meta Announced At LlamaCon

Forbes

01-05-2025



Meta's first-ever LlamaCon developer conference focused on the strategic expansion of its artificial intelligence ecosystem. The company introduced a consumer-facing Meta AI app, released a preview of its Llama API, and unveiled security tools aimed at strengthening its open-source AI approach. These announcements represent Meta's calculated attempt to create a comprehensive AI portfolio that directly competes with closed AI systems like those from OpenAI while establishing new revenue channels for its open-source models.

Meta introduced a dedicated Meta AI application that operates independently from its existing social platforms. Built using the company's Llama 4 model, this standalone application enables both text and voice interactions with Meta's AI assistant. The app includes capabilities for image generation and editing while featuring a social feed that allows users to share their AI conversations. This marks a significant shift from Meta's previous strategy of embedding AI exclusively within its existing applications like WhatsApp, Instagram and Facebook.

The new application appears strategically timed as a preemptive response to OpenAI's rumored social network. By integrating social sharing features, Meta leverages its established strength in social networking while extending into the conversational AI space dominated by offerings like ChatGPT.

The Llama API preview represents Meta's most significant shift toward commercializing its open-source AI models. This cloud-based service allows developers to access Llama models without managing infrastructure, requiring just one line of code. The API includes tools for fine-tuning and evaluation, starting with the Llama 3.3 8B model. Technical features include one-click API key creation, interactive model exploration playgrounds and lightweight software development kits in both Python and TypeScript.
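As a rough illustration of what "one line of code" access through an OpenAI-style SDK implies, the sketch below builds a standard OpenAI-compatible chat-completions request body. The endpoint URL and model identifier are assumptions made for the example, not documented values.

```python
import json

# Hedged sketch: the Llama API is described as shipping lightweight Python
# and TypeScript SDKs that speak the OpenAI chat-completions wire format.
# Everything named here is illustrative, not an official value.
LLAMA_API_URL = "https://api.llama.example/v1/chat/completions"  # assumed

def build_chat_request(model: str, user_message: str) -> str:
    """Package one user turn into an OpenAI-style chat-completions body."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(body)

# With an OpenAI-compatible SDK, the same call would look roughly like:
#   client.chat.completions.create(model="...", messages=[...])
payload = build_chat_request("llama-3.3-8b", "Summarize LlamaCon in one line.")
```

Because the wire format matches OpenAI's, a developer could in principle retarget an existing OpenAI client at a Llama endpoint by changing only the base URL and API key, which is what makes the compatibility claim commercially significant.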
The API maintains compatibility with OpenAI's SDK, potentially lowering barriers for developers considering a switch from proprietary systems. This move transforms Meta's AI approach from primarily model distribution to providing comprehensive AI infrastructure. By offering cloud-based access to its models, Meta establishes a potential revenue stream from its AI investments while maintaining its commitment to open models.

Meta announced technical collaborations with Cerebras and Groq to deliver significantly faster inference speeds through the Llama API. These partnerships enable Meta's models to perform up to 18 times faster than traditional GPU-based solutions. The performance improvements provide practical benefits for real-world applications: Cerebras-powered Llama 4 Scout achieves 2,600 tokens per second, compared with approximately 130 tokens per second for ChatGPT. This speed differential enables entirely new categories of applications that require minimal latency, including real-time conversational agents, interactive code generation and rapid multi-step reasoning processes.

Meta released a suite of open-source protection tools aimed at addressing security concerns that often prevent enterprise adoption of AI systems. These include Llama Guard 4 for text and image understanding protections, LlamaFirewall for detecting prompt injections and insecure code, and Llama Prompt Guard 2, which improves jailbreak detection. The company also updated its CyberSecEval benchmark suite with new evaluation tools for security operations, including CyberSOC Eval and AutoPatchBench. A new Llama Defenders Program provides select partners with access to additional security resources. These security improvements address critical enterprise requirements while potentially removing barriers to adoption. By strengthening security capabilities, Meta positions Llama as viable for organizations with strict data protection needs.
Meta announced expanded integrations with technology partners including NVIDIA, IBM, Red Hat and Dell Technologies to simplify enterprise deployment of Llama applications. The company also revealed the recipients of its second Llama Impact Grants program, awarding over $1.5 million to ten international organizations using Llama models for social impact. Grant recipients demonstrate diverse applications of Llama technology, from E.E.R.S. in the US, which developed a chatbot for navigating public services, to Doses AI in the UK, which uses the technology for pharmacy operations and error detection. These implementations showcase Llama's flexibility across different domains and use cases.

LlamaCon's announcements collectively position Meta as a direct challenger to OpenAI in the AI infrastructure market. Meta CEO Mark Zuckerberg reinforced this positioning during discussions with Databricks CEO Ali Ghodsi, stating that he considers any AI lab that makes its models publicly available an ally "in the battle against closed model providers". Zuckerberg specifically highlighted the advantage of open-source models in allowing developers to combine components from different systems, noting that "if another model, like DeepSeek, excels in certain areas - or if Qwen is superior in some respect - developers can utilize the best features from various models".

For technology decision makers, Meta's announcements create new options in the AI deployment landscape. The Llama API eliminates infrastructure complexity that previously limited adoption of open models, while the partnership with Cerebras addresses performance concerns. Security tools reduce implementation risks for enterprises with strict compliance requirements.

However, challenges remain. Meta's Llama 4 models received a lukewarm reception from developers when released earlier this year, with some noting they underperformed competing models from DeepSeek and others on certain benchmarks.
The absence of a dedicated reasoning model in the Llama 4 family also represented a notable limitation compared to competitor offerings. The success of Meta's strategy will depend on its ability to deliver consistent model improvements while building enterprise trust in its commercial offerings. For organizations evaluating AI deployment options, Meta's announcements provide additional alternatives to proprietary systems while potentially reducing implementation barriers for open-source models.

Meta's New AI Assistant: Productivity Booster Or Time Sink?

Forbes

30-04-2025



Meta launched a new voice-enabled AI app at its inaugural LlamaCon event on April 29, 2025, which is integrated into Instagram, Messenger and Facebook's core experiences. At the event, the company also announced advancements to strengthen its open-source AI ecosystem, headlined by the limited-preview launch of the Llama API, which combines the convenience of closed-model APIs with open-source flexibility, offering one-click access, fine-tuning for Llama 3.3 8B, and compatibility with OpenAI's software development kit. Llama has surpassed 1 billion downloads since its launch two years ago. Meta expanded Llama Stack integrations with partners like Nvidia, IBM and Dell for enterprise deployment. On the security front, new tools like Llama Guard 4, LlamaFirewall, and CyberSecEval 4 were introduced alongside the Llama Defenders Program to bolster AI safety. Meta also awarded $1.5 million in Llama Impact Grants to 10 global recipients, including startups improving civic services, healthcare, and education. The new Meta AI app, built with Llama 4, was conceived as a 'companion app' for Meta's AI glasses.

While the development of versatile AI apps is promising, the spread of AI assistants to almost all digital platforms, even wearable tech, threatens to accelerate the very busyness they purport to tame.

AI assistants begin by capturing your input, whether it's speech, converted to text by an automatic speech-recognition engine, or direct keyboard entry. Next, the assistant packages that text, along with a snippet of recent conversational context, into a prompt that is sent to a powerful remote model such as OpenAI's ChatGPT, Meta's Llama or Google's Gemini. In milliseconds, these models perform billions of parameter computations to predict and assemble the most likely satisfying response.
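The capture-and-package step described above can be sketched as a toy loop. This is an illustrative assumption about how such assistants work in general, not Meta's actual implementation; the class name and turn format are invented for the example.

```python
from collections import deque

class PromptPackager:
    """Toy sketch of bundling the latest user turn with recent context."""

    def __init__(self, max_turns: int = 4):
        # keep only the most recent conversational turns as context
        self.history = deque(maxlen=max_turns)

    def package(self, user_text: str) -> str:
        """Append the new user turn and return the combined prompt."""
        self.history.append(f"User: {user_text}")
        return "\n".join(self.history)

    def record_reply(self, assistant_text: str) -> None:
        """Store the model's reply so it becomes context for the next turn."""
        self.history.append(f"Assistant: {assistant_text}")

packager = PromptPackager(max_turns=3)
packager.package("What's the weather?")
packager.record_reply("Sunny, 22 degrees.")
# the second prompt now carries the earlier exchange as context
followup_prompt = packager.package("And tomorrow?")
```

Real assistants do far more (system prompts, token budgeting, tool calls), but the principle is the same: the remote model only "remembers" whatever context is packaged into each request.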
To make their outputs more relevant for specialized tasks, developers fine-tune these base models on curated datasets or layer in real-time data retrieval. A travel service, for instance, can combine a base model like ChatGPT's with its own database of travel and pricing information to offer a chat-based service that lets customers plan their trips. Advanced systems may even combine computer vision with language understanding: you can snap a photo of your utility bill and ask why charges spiked in a given month, or photograph a broken car component and ask for repair advice. Finally, the text response is sent back to your device and, if you're using voice, rendered into speech by a text-to-speech engine.

AI assistants are integrated into many applications, from Adobe's Acrobat AI, which summarizes documents and generates images, to Nvidia's G-Assist in PC games. In consumer products, Amazon's Alexa powers Echo speakers and smart-home devices, Google Assistant lives on Android phones and Nest speakers, and Apple's Siri runs on iPhones, Macs, and HomePods, each leveraging its own blend of cloud-based or on-device intelligence to understand your requests and take action. Meanwhile, enterprises are embedding assistants in productivity tools, such as Microsoft 365 Copilot in Word, Excel, PowerPoint, Outlook, and Teams, to draft content, analyze data, and automate workflows in real time.

The promise of time saved is seductive. Microsoft 365 Copilot drafts executive summaries in seconds, Duolingo's AI tutors adapt to each learner's mistakes in real time, and Zoom's live-transcript search transforms hours of recordings into keyword lookups. Yet those very efficiency gains often spur heavier workloads rather than lighten them, a phenomenon known as the Jevons paradox, in which making a resource or task 'cheaper' leads to its increased consumption overall.
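The "layer in real-time data retrieval" pattern mentioned earlier, where a domain database is searched and matching facts are folded into the model prompt, can be sketched in miniature as follows. The fare data, function name, and matching logic are all invented for illustration.

```python
# Toy retrieval-augmentation sketch: look up domain facts and prepend
# them to the user's question before sending it to a language model.
# The fare "database" below is fabricated example data.
FARE_DB = {
    "paris": "Round trip NYC-Paris from $480 in May.",
    "tokyo": "Round trip NYC-Tokyo from $890 in May.",
}

def augment_prompt(question: str) -> str:
    """Prepend any matching fare facts to the user's question."""
    facts = [fact for city, fact in FARE_DB.items() if city in question.lower()]
    context = "\n".join(facts) if facts else "No fare data found."
    return f"Context:\n{context}\n\nQuestion: {question}"

augmented = augment_prompt("How much is a flight to Paris?")
```

Production systems replace the keyword match with semantic search over embeddings, but the shape is the same: retrieved facts travel inside the prompt, so the base model needs no retraining to answer from fresh data.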
In real-world practice, every minute reclaimed by AI is quickly folded into loftier content quotas or more frequent campaign cycles. The advent of AI assistants may therefore not lighten employees' workloads: when everyone has access to AI assistants, expectations for output rise accordingly, and people in the workplace may feel more stretched than before.

Beyond rising productivity expectations, AI assistants may also cause skill erosion. Just as reliance on GPS has dulled our innate navigation skills, AI assistants risk hollowing out foundational human capabilities. Students leaning on AI-generated essays lose the muscle for crafting compelling arguments and convincing prose. Financial analysts trusting AI-summarized earnings reports may overlook footnote anomalies or balance-sheet red flags. In healthcare, tools like Nuance's Dragon Medical One promise to free doctors from note-taking, yet clinicians who no longer manually encode patient histories may miss subtleties the AI fails to capture.

Simultaneously, our attention fragments further: notifications ping as Adobe's Acrobat Assistant offers rewrites, Google Slides' Bard integration suggests slide outlines and edits, and Perplexity's AI Assistant researches topics and summarizes information directly within WhatsApp chats, all reducing our patience for in-depth thinking and research.

Meta AI's pledge to put users 'in control' assumes that frictionless interfaces equal greater agency. But true agency requires conscious choice, not mere convenience. If your AI assistant presents three 'optimal' meeting times, do you pause to question the meeting's necessity, or do you automatically select one? Moreover, every prompt, share, and purchase recommendation feeds back into personalization algorithms, which then shape what you see next. Over time, you become both the user and the used.
Your preferences are subtly nudged by models that learn which suggestions keep you clicking, shopping or posting.

To reap AI's benefits without ceding our autonomy, organizations and individuals must define clear guardrails. Disable nonessential notifications and limit AI-driven summaries to internal drafts, preserving human review for important materials. Carve out regular 'deep-work' intervals when assistants stay silent, safeguarding time for strategy, reading or unstructured conversation. Treat every AI output as a first draft: invest the effort to fact-check, recalculate and consult original sources. In mission-critical fields such as medicine, education and finance, design workflows that keep humans firmly in the loop, using AI to augment human judgment, not replace it.

The era of AI assistants is upon us, reshaping our digital interfaces into something resembling natural conversation. By understanding how these systems operate, acknowledging both their genuine efficiencies and hidden costs, and deliberately shaping our interactions with them, we can ensure that these tools reclaim our cognitive bandwidth rather than accelerate the relentless pace of modern life.

Meta introduces Llama API to attract AI developers

Time of India

30-04-2025



By Kenrick Cai

SAN FRANCISCO: Meta Platforms on Tuesday announced an application programming interface in a bid to woo businesses to more easily build AI products using its Llama artificial-intelligence models. The Llama API, unveiled during the company's first-ever AI developer conference, will help Meta go up against APIs offered by rival model makers, including Microsoft-backed OpenAI, Alphabet's Google and emerging low-cost alternatives such as China's DeepSeek.

"You can now start using Llama with one line of code," chief product officer Chris Cox said during a keynote speech onstage. APIs allow software developers to customize and quickly integrate a piece of technology into their own products. For OpenAI, APIs constitute the firm's primary source of revenue.

Meta, which released the latest version of Llama earlier this month, did not share any pricing details for the API. In a press release, it said the new API was available as a limited preview for select customers and would roll out broadly in weeks to months. The company also released a standalone AI assistant app earlier on Tuesday. It plans to test a paid subscription service of its AI chatbot in the second quarter, Reuters reported in February.

Meta releases its Llama models largely free of charge for use by developers, a strategy CEO Mark Zuckerberg previously stated will pay off in the form of innovative products, less dependence on would-be competitors and greater engagement on the company's core social networks.

"You have full agency over these custom models, you control them in a way that's not possible with other offers," Manohar Paluri, a vice president of AI, said at the conference. "Whatever model you customize is yours to take wherever you want, not locked on our servers."

DeepSeek, which has also released partly open-source AI models, sparked a stock selloff in January amid concerns over the high costs of AI development needed by top U.S. firms.
At the conference, Meta developers spoke about new techniques they used to significantly reduce costs and improve the efficiency of its newest Llama iteration. Zuckerberg welcomed increased competition that would steer the competitive ecosystem away from domination by a small number of leaders. "If another model, like DeepSeek, is better at something, then now as developers you have the ability to take the best parts of the intelligence from the different models and produce exactly what you need, which I think is going to be very powerful," Zuckerberg said.

