Latest news with #Llama3


Economic Times
7 days ago
- Economic Times
Siri-ously? AI Got Clever, Not Conscious
By 2025, artificial intelligence had become an influential social force, not just a technological trend. AI systems composed code, drafted legislation, diagnosed medical conditions, and even wrote music. But it became clear that even as machines got better at speed, cleverness, and oddly creative powers, they lacked one essential element of intelligence: common sense, empathy, and humanity. Technically, 2025 saw many breakthroughs. OpenAI's GPT-4.5 and Anthropic's Claude 3.5 became popular choices for solving complex problems in business. Google DeepMind's Gemini amazed researchers with its strong reasoning skills. Meta's open-source Llama 3 models made cutting-edge tools available to more people. AI agents like Devin and Rabbit R1 were introduced to handle tasks ranging from personal chores to business processes. Yet, beyond such revolutions, a grim reality set in: AI still does not really get us. Generative models flirted with creativity but faltered with ethics. Deepfakes, previously easy to detect, became nearly impossible to distinguish from actual videos and created confusion during political campaigns in various nations. Governments scrambled to codify the origins of content, while firms such as Adobe and OpenAI inserted cryptographic watermarks, which were hacked or disregarded shortly after. AI struggled most with social and emotional knowledge. Even with advances in multimodal learning and feedback, AI agents were unable to mimic true empathy. This was especially evident in healthcare and education, where communication centers on the human. Patients were not eager to trust diagnoses from emotionless avatars, and students were more nervous when interacting with robotic tutors that weren't human. Still, the year wasn't filled only with alarm bells. Open-sourcing low-barrier models initiated a surge in bottom-up innovation, particularly in the Global South, where AI facilitated solutions in agriculture, education, and infrastructure.
India's Bhashini project, based on local-language AI, became a template for inclusive tech development. One thing is certain in 2025: AI is fantastic but fragile. It cannot deal well with deeper meaning, but it can convincingly simulate intelligence. While not intelligent enough to guide us, machines are now intelligent enough to astonish us. Humans enjoy the advantage for now, but the gap is closing faster than expected. In the end, 2025 was less about machines outsmarting humans than about redefining what intelligence is. AI showed limits in judgment, compassion, and moral awareness, even as it exhibited speed, scope, and intricacy. These are not flaws; they are reminders that context is as vital to intelligence as computation. The actual innovation is not in choosing between machines and humans but in creating a partnership in which the two complement each other's strengths. Real advancement starts there.


Time of India
7 days ago
- Time of India
Siri-ously? AI Got Clever, Not Conscious


Gizmodo
14-07-2025
- Business
- Gizmodo
OpenAI Hits Pause on Its Meta Killer
OpenAI is delaying the release of its much-anticipated open-weight AI model, citing the need for 'additional safety tests' and last-minute concerns over 'high-risk areas,' CEO Sam Altman announced on X (formerly Twitter). The decision lands in the middle of a brutal AI arms race, particularly with Meta, which has been aggressively poaching OpenAI talent and championing open-source models like Llama 3. The model, which was slated to drop this week, would be OpenAI's first major open-weight system, meaning developers would be free to download and use the model's underlying code and data weights to build their own apps, research projects, or commercial tools. But as Altman pointed out, once these models are released, 'they can't be pulled back.' That's the nature of open-source, and it's exactly why this delay is raising eyebrows across the AI community. 'While we trust the community will build great things with this model, once weights are out, they can't be pulled back,' Altman wrote on X on July 11. 'This is new for us and we want to get it right.'

'we planned to launch our open-weight model next week. we are delaying it; we need time to run additional safety tests and review high-risk areas. we are not yet sure how long it will take us. while we trust the community will build great things with this model, once weights are…' — Sam Altman (@sama) July 12, 2025

In AI, 'weights' are the millions of numerical values that act like the model's brain wiring, allowing it to make connections and decisions. When a company releases a model as 'open-weight,' it's not just sharing a blueprint; it's giving away the fully functional brain. Developers are free to download it, modify it, and use it for everything from building chatbots and productivity tools to creating deepfakes and other malicious applications. Open-sourcing models accelerates innovation, but it also raises the risk of misuse, misinformation, and untraceable custom versions.
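To make the "weights" idea concrete, here is a toy sketch (illustrative only: a real Llama 3-scale model has billions of parameters shipped as large tensor files, not this made-up two-layer network):

```python
import io
import numpy as np

# A model's "weights" are just arrays of learned numbers; shipping
# those arrays is what an "open-weight" release means.
rng = np.random.default_rng(0)
weights = {
    "layer1": rng.standard_normal((4, 8)),
    "layer2": rng.standard_normal((8, 2)),
}

def forward(x, w):
    # The arrays alone fully determine the model's behavior.
    hidden = np.maximum(x @ w["layer1"], 0.0)  # ReLU activation
    return hidden @ w["layer2"]

# "Releasing the weights": serialize the arrays for anyone to download.
buf = io.BytesIO()
np.savez(buf, **weights)
buf.seek(0)

# Anyone can reload, run, fine-tune, or modify them; once distributed,
# copies cannot be recalled.
loaded = dict(np.load(buf))
out = forward(np.ones(4), loaded)
print(out.shape)  # (2,)
```

This is why an open-weight launch is effectively irreversible: unlike a hosted API, the serialized arrays live on every downloader's disk.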
That's why the decision to delay, while frustrating to many, signals that OpenAI is trying to tread cautiously, especially as criticism around AI safety and 'model leaking' intensifies. According to developer chatter online, the delay may have been triggered by a major technical issue discovered just before launch. The rumored model was expected to be smaller than Kimi K2, the new open-weight model from Chinese AI startup Moonshot AI that reportedly clocks in at nearly a trillion parameters, but still 'super powerful,' according to early testers. Kimi K2, which is taking on ChatGPT with impressive coding capabilities at a lower price, was released on July 11, the same day as Altman's announcement. While some online speculators blamed the delay on Kimi's unexpectedly strong performance and a fear of being outshone, there's no confirmation of that from OpenAI. What is clear is that the company is feeling the pressure to deliver something that is safe, fast, and competitive.

'Rumors that OpenAI delayed their open-source model because of Kimi are fun, but from what I hear: – the model is much smaller than Kimi K2 (<< 1T parameters) – super powerful – but due to some (frankly absurd) reason I can't say, they realized a big issue just before release, so…' — Yuchen Jin (@Yuchenj_UW) July 13, 2025

OpenAI's delay comes at a time when Meta is eating its lunch, at least in the open-source department. Mark Zuckerberg's company has released increasingly powerful open-weight models like Llama 3, all while quietly hiring away top OpenAI researchers. The talent war is real, and it's affecting timelines and strategy across the board. By delaying this release, OpenAI may be hoping to avoid a flawed launch that could dent its credibility at a critical moment. But it also risks falling further behind Meta, which has already become the go-to platform for developers looking to build with transparent, modifiable AI tools. OpenAI hasn't offered a new timeline for the release.
That silence is fueling speculation that the delay could last weeks. If retraining is truly on the table, it could push the launch closer to the fall. For now, the open-source community is in wait-and-see mode. And the question hanging over it all: Can OpenAI deliver a model that is powerful, safe, and competitive enough to match Meta's momentum and keep the Chinese rivals at bay? In other words, can they get it right before someone else does?


Yahoo
30-06-2025
- Entertainment
- Yahoo
Don't Worry Parents, Even AI Has Trouble Keeping up With Your Kids' Slang
Talking to kids is confusing at best, downright mind-boggling at worst. It's all skibidi toilet this, bacon avocado that. Seriously, who comes up with this stuff? If you've ever felt like an out-of-date old trying to keep up with kids these days, you're not alone: even artificial intelligence (AI) has no idea what the younger generation is talking about. (And, honestly? We feel so much better!) A June 2025 study of four AI models (GPT-4, Claude, Gemini, and Llama 3) found that all of them had trouble understanding slang terms used by Gen Alpha (born between 2010 and 2024). 'Their distinctive ways of communicating, blending gaming references, memes, and AI-influenced expressions, often obscure concerning interactions from both human moderators and AI safety systems,' the study stated. In other words, even computers can't keep up with the brain rot Gen Alpha consumes and turns into today's most common phrases. Researchers compared similar phrases like 'fr fr let him cook,' which is actually supportive, and 'let him cook lmaoo,' which is insulting. Another example compared 'OMGG you ate that up fr,' which is genuine praise, with 'you ate that up ig [skull],' which is masked harassment. Comparing AI models to Gen Alpha and their parents, the researchers found that Gen Alpha had nearly perfect comprehension of their own slang (98 percent), while parents came in at 68 percent, and AI models ranged from 58.3 to 68.1 percent. It's encouraging that even AI can't keep up with what Gen Alpha and Gen Z say. After all, these slang terms come from the oddest, most obscure places, like a Justin Bieber crashout or random quotes from movies. It seems like you would have to be on the internet all the time to even have a hint of what kids are saying nowadays, which Gen Alpha is.
A 2025 study by Common Sense Media found that by the time kids are 2 years old, 40 percent of them have their own tablet, and by age 4, 58 percent do. By age 8, nearly 1 in 4 kids have their own cell phone. And on average, kids ages 5-8 spend nearly 3.5 hours a day using screen media, which includes TV, gaming, video chatting, and more. 'While technology keeps evolving, what children need hasn't changed,' Jill Murphy, Chief Content Officer of Common Sense Media, said in a statement. 'Parents can take practical steps: be actively involved in what your little ones are watching, choose content you can enjoy together, and connect screen time to real-world experiences — like acting out stories or discussing characters' feelings. Set clear boundaries around device use, establish tech-free times for meals and bedtime, and remember that media should be just one of many tools for nurturing your child's natural curiosity.'


Time of India
25-06-2025
- Business
- Time of India
ChatGPT-maker OpenAI's CEO Sam Altman breaks silence on relationship with Microsoft: "In any deep partnership there are ..."
OpenAI CEO Sam Altman dismissed recent reports of deteriorating relations with Microsoft, describing their multibillion-dollar partnership as beneficial for both companies despite ongoing negotiations over the AI startup's corporate restructuring. "Do you believe that, when you read those things?" Altman said when questioned about media reports suggesting tensions between the companies, according to the New York Times. He revealed having a "super nice call" with Microsoft CEO Satya Nadella on Monday, where they discussed their collaborative future. Altman's comments come amid reports that OpenAI executives have considered accusing Microsoft of anticompetitive behavior during their partnership negotiations, according to the Wall Street Journal. The potential nuclear option could involve seeking federal regulatory review of their contract terms for antitrust violations.

OpenAI's corporate restructuring deadline raises the stakes for both companies. OpenAI faces mounting pressure to convert from its nonprofit origins to a for-profit structure by year-end, risking $20 billion in funding if unsuccessful. Microsoft's approval is crucial for this conversion, with the software giant potentially receiving a stake ranging from 20 to 49 percent in the restructured company, the Financial Times reported. "Obviously in any deep partnership, there are points of tension and we certainly have those," Altman acknowledged to the Times. "But on the whole, it's been like really wonderfully good for both companies." The partnership has become increasingly complex as both companies now compete directly in AI products and services. Microsoft currently receives 20 percent of OpenAI's revenues up to $92 billion and maintains exclusive rights to sell OpenAI's technology through its Azure cloud platform.
The Windsurf acquisition becomes the latest flashpoint in partnership negotiations. Central to current tensions is OpenAI's $3 billion acquisition of coding startup Windsurf, with the AI company wanting to exempt the purchase from Microsoft's access to its intellectual property, according to the Wall Street Journal. Microsoft has reportedly prepared contingency plans, including potentially walking away from negotiations while relying on its existing contract through 2030, the Financial Times noted. The partnership strain reflects broader competitive dynamics as both companies increasingly vie for the same enterprise customers. Microsoft has begun diversifying beyond OpenAI models, recently adding Mark Zuckerberg's Llama 3 and Elon Musk's xAI Grok to its cloud services, while developing internal alternatives under AI chief Mustafa Suleyman. Despite Altman's reassuring tone, the relationship that once symbolized successful tech collaboration now faces its most challenging test, with billions in funding and market positioning hanging in the balance.