Latest news with #Mixtral


NDTV
02-06-2025
- Business
'Queen Of The Internet' Mary Meeker Calls AI The Most "Unprecedented" Tech Shift Yet
New Delhi: Venture capitalist Mary Meeker, once called the "Queen of the Internet," is back with her first big trends report since 2019. Titled "Trends - Artificial Intelligence," the 340-page deep dive argues the rise of AI is unlike any tech revolution seen before. "The pace and scope of change related to the artificial intelligence technology evolution is indeed unprecedented..." Ms Meeker writes, using the word "unprecedented" 51 times in the report.

Rise Of AI

Ms Meeker's report details how quickly AI has scaled. Instagram, WhatsApp, and YouTube took 2-4 years to hit 100 million users; ChatGPT did it in under three months. By April this year, ChatGPT had 800 million weekly users and now handles over 365 billion searches annually. Based on Morgan Stanley data, it took 6-12 years for half of US households to get access to mobile and desktop internet. For AI platforms, Ms Meeker predicts the same will happen in only three years.

India Has A Major AI User Base

India has turned out to be a vital market for AI platforms. The country contributes the highest percentage of mobile app users for ChatGPT (13.5 per cent), ahead of the US (8.9 per cent) and Germany (3 per cent). It is also the third-largest user base (6.9 per cent) for China's DeepSeek. "India has been a key user-base market for AI companies," the report said.

Open Source vs Closed Source AI

Ms Meeker says AI is splitting into two paths: closed models like GPT-4 and Claude, and open models like Llama and Mixtral. Closed models lead in performance and are favoured by enterprises, but they lack transparency. Open models are more accessible and are driving innovation in local languages, grassroots tools, and sovereign AI efforts. "We're watching two philosophies unfold in parallel, freedom vs control, speed vs safety, openness vs optimization - each shaping not just how AI works, but who gets to wield it," Ms Meeker writes. China, for instance, is leading the open-source race.
As of Q2 2025, it has released major models like DeepSeek-R1, Alibaba's Qwen-32B, and Baidu's Ernie 4.5.

Falling Costs, Soaring Competition

While training costs for models have gone up (reaching up to $1 billion), inference costs have dropped by 99 per cent in two years, according to Stanford data. Nvidia's 2024 Blackwell GPU uses 105,000 times less energy per token than its 2014 predecessor. Google's TPU chips and Amazon's Trainium are also scaling rapidly. "These aren't side projects, they're foundational bets," Ms Meeker notes.

Financial Reality Check

Despite massive user growth, she warns that revenue per user is still low, with a median of $23 (Rs 2,000) across most platforms. The industry is burning through cash. Investors are betting big, but it is still unclear who will come out on top. "Only time will tell which side of the money-making equation the current AI aspirants will land," she writes.

AI Already Shaping The Real World

AI is moving beyond apps, the report says. It is driving cars, running factory robots, and aiding in healthcare. Jobs aren't vanishing but evolving, with AI becoming a co-pilot for coders, writers, and analysts. AI-related job listings have jumped 448 per cent since 2018, Ms Meeker says.

The Infrastructure Race

Access to powerful tech, like GPUs, chips, and data centres, is now a global competition. Mary Meeker compares it to the space race during the Cold War. There are serious concerns too: AI can be biased, spread misinformation, or behave unpredictably. Ms Meeker says we need clear rules, honest leadership, and smarter systems to handle AI's fast growth.
Yahoo
10-05-2025
Teachers Using AI to Grade Their Students' Work Sends a Clear Message: They Don't Matter, and Will Soon Be Obsolete
Talk to a teacher lately, and you'll probably get an earful about AI's effects on student attention spans, reading comprehension, and cheating. As AI becomes ubiquitous in everyday life — thanks to tech companies forcing it down our throats — it's probably no shocker that students are using software like ChatGPT at a nearly unprecedented scale. One study by the Digital Education Council found that nearly 86 percent of university students use some type of AI in their work.

That's causing some fed-up teachers to fight fire with fire, using AI chatbots to score their students' work. As one teacher mused on Reddit: "You are welcome to use AI. Just let me know. If you do, the AI will also grade you. You don't write it, I don't read it." Others are embracing AI with a smile, using it to "tailor math problems to each student," in one example listed by Vice. Some go so far as requiring students to use AI. One professor in Ithaca, NY, shares both ChatGPT's comments on student essays and her own, and asks her students to run their essays through AI themselves.

While AI might save educators some time and precious brainpower — which arguably make up the bulk of the gig — the tech isn't close to being cut out for the job, according to researchers at the University of Georgia. While we should probably all know it's a bad idea to grade papers with AI, a new study by the University of Georgia's School of Computing gathered data on just how bad it is.

The research tasked the large language model (LLM) Mixtral with grading written responses to middle school homework. Rather than feeding the LLM a human-created rubric, as is usually done in these studies, the team tasked Mixtral with creating its own grading system. The results were abysmal: compared to a human grader, the LLM accurately graded student work just 33.5 percent of the time. Even when supplied with a human rubric, the model's accuracy was just over 50 percent.
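To make concrete what an accuracy figure like the study's means, here is a minimal sketch of exact-match agreement between LLM-assigned and human-assigned scores. The scores below are invented for illustration, not data from the study:

```python
# Hypothetical example: "accuracy" as exact-match agreement between
# an LLM grader and a human grader over the same student responses.
human_scores = [3, 2, 4, 1, 3, 2, 4, 3, 1, 2]  # made-up human grades
llm_scores = [3, 3, 2, 1, 4, 2, 4, 1, 1, 3]    # made-up LLM grades

# Count responses where the LLM matched the human score exactly.
matches = sum(h == l for h, l in zip(human_scores, llm_scores))
accuracy = matches / len(human_scores)
print(f"Agreement: {accuracy:.1%}")  # 5 of 10 match -> 50.0%
```

By this measure, an accuracy of 33.5 percent means the model's grade agreed with the human's on roughly one response in three.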
Though the LLM "graded" quickly, its scores were frequently based on flawed logic inherent to LLMs. "While LLMs can adapt quickly to scoring tasks, they often resort to shortcuts, bypassing deeper logical reasoning expected in human grading," wrote the researchers. "Students could mention a temperature increase, and the large language model interprets that all students understand the particles are moving faster when temperatures rise," said Xiaoming Zhai, one of the researchers. "But based upon the student writing, as a human, we're not able to infer whether the students know whether the particles will move faster or not."

Though the researchers wrote that "incorporating high-quality analytical rubrics designed to reflect human grading logic can mitigate [the] gap and enhance LLMs' scoring accuracy," a boost from 33.5 to 50 percent accuracy is laughable. Remember, this is the technology that's supposed to bring about a "new epoch" — a technology we've poured more seed money into than any in human history. If there were a 50 percent chance your car would fail catastrophically on the highway, none of us would be driving. So why is it okay for teachers to take the same gamble with students?

It's just further confirmation that AI is no substitute for a living, breathing teacher, and that isn't likely to change anytime soon. In fact, there's mounting evidence that AI's comprehension abilities are getting worse as time goes on and original data becomes scarce. Recent reporting by the New York Times found that the latest generation of AI models hallucinate as much as 79 percent of the time — way up from past numbers. When teachers choose to embrace AI, this is the technology they're shoving off onto their kids: notoriously inaccurate, overly eager to please, and prone to spewing outright lies. That's before we even get into the cognitive decline that comes with regular AI use.
If this is the answer to the AI cheating crisis, then maybe it'd make more sense to cut out the middle man: close the schools and let the kids go one-on-one with their artificial buddies.