France's Mistral launches Europe's first AI reasoning model

Indian Express · 3 days ago

Mistral on Tuesday launched Europe's first AI reasoning model, which uses logical thinking to create a response, as it tries to keep pace with American and Chinese rivals at the forefront of AI development. The French startup has attempted to differentiate itself by championing its European roots, winning the support of French President Emmanuel Macron, as well as making some of its models open source in contrast to the proprietary offerings of OpenAI or Alphabet's Google.
Mistral is considered Europe's best shot at a home-grown AI competitor, but it has lagged behind in market share and revenue. Reasoning models use chain-of-thought techniques – a process that generates answers through intermediate reasoning steps when solving complex problems. They could also be a promising path forward for advancing AI's capabilities as the traditional approach of building ever-bigger large language models by adding more data and computing power begins to hit limitations.
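To make the chain-of-thought idea concrete, here is a minimal, illustrative Python sketch. It only contrasts a plain prompt with one that asks a model to show intermediate steps before the final answer; the chat-style message format is a common convention, and the helper function is a hypothetical stand-in, not Mistral's actual API.

```python
# Minimal illustration of chain-of-thought prompting (not Mistral's actual API).
# build_messages is a hypothetical helper used only for this sketch.

def build_messages(question: str, chain_of_thought: bool) -> list[dict]:
    """Build a chat-style message list, optionally asking for intermediate steps."""
    if chain_of_thought:
        system = (
            "Solve the problem step by step. Write out your intermediate reasoning, "
            "then give the final answer on a line starting with 'Answer:'."
        )
    else:
        system = "Answer with the final result only."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

question = "A train travels 150 km in 2.5 hours. What is its average speed?"

direct_prompt = build_messages(question, chain_of_thought=False)
reasoning_prompt = build_messages(question, chain_of_thought=True)

# With a reasoning model, the second prompt style elicits intermediate steps such as
#   "Speed = distance / time = 150 km / 2.5 h = 60 km/h"  ->  "Answer: 60 km/h"
print(direct_prompt)
print(reasoning_prompt)
```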
For Mistral, which was valued by venture capitalists at $6.2 billion, an industry shift away from 'scaling up' could give it a window to catch up with better-capitalized rivals. China's DeepSeek broke through as a viable competitor in January with its low-cost, open-sourced AI models, including one for reasoning. OpenAI was the first to launch its reasoning models last year, followed by Google a few months later. Meta, which also offers its models open-sourced, has not yet released a standalone reasoning model, though it said its latest top-shelf model has reasoning capabilities.
Mistral is launching an open-sourced Magistral Small model and a more powerful version called Magistral Medium for business customers. 'The best human thinking isn't linear – it weaves through logic, insight, uncertainty, and discovery. Reasoning language models have enabled us to augment and delegate complex thinking and deep understanding to AI,' Mistral said.
American companies have mostly kept their most advanced models proprietary, though a handful, such as Meta, have released open-source models. In contrast, Chinese firms ranging from DeepSeek to Alibaba have taken the open-source path to demonstrate their technological capabilities.
Magistral Small is available for download on Hugging Face's platform and can reason in languages including English, French, Spanish, Arabic and simplified Chinese.
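For readers who want to try the open-weight release, the sketch below shows the standard way to load a checkpoint from Hugging Face with the transformers library. The repository id is an assumption based on the model name announced here; verify the exact id on Mistral's Hugging Face page before running it.

```python
# Hedged sketch: loading an open-weight Magistral checkpoint from Hugging Face with
# the standard `transformers` API. The repo id below is an assumption; check
# Mistral's Hugging Face organization page for the actual published name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Magistral-Small-2506"  # assumed id, verify before use

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "Combien font 27 fois 43 ? Raisonne étape par étape."}
]

# apply_chat_template formats the conversation the way the checkpoint expects.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```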

Related Articles

Your VPN might be spying for China: Watchdog flags 17 apps with hidden ties on Apple and Google stores

Economic Times · 4 minutes ago

A new report by the Technology Transparency Project (TTP) warns that 17 VPN apps available on major app stores may be secretly linked to Qihoo 360, a Chinese cybersecurity firm under U.S. sanctions. Experts fear user data could be accessed by Chinese authorities under China's broad surveillance laws.

Think your VPN is keeping you anonymous? Think again. The watchdog report, released on Thursday, reveals that 17 popular VPN apps available on Apple and Google stores might be quietly handing over your data, with links pointing straight to China, where the government can monitor all user data. The report claims that of the 17 apps, six come from Apple's App Store, four from the Google Play Store, and seven from both, and that all have hidden connections to China, as quoted in a report by NBC News.

Qihoo 360 was sanctioned by the U.S. Commerce Department in 2020 for potential links to the Chinese military. While the apps don't explicitly name Qihoo, corporate filings and company records suggest they are operated by shell companies acquired by Qihoo in 2019. VPNs are mainly used to safeguard a user's privacy by making it harder for websites to identify their visitors, or to bypass censorship restrictions. However, if a VPN provider does not take substantial measures to automatically and permanently erase its users' search histories, the company will likely retain logs of its clients' online activity. That is especially significant if the company is Chinese, since national legislation requires that intelligence and law enforcement agencies be able to access any personal data stored there.

TTP's Katie Paul explained that VPNs carry unique risks since they reroute all of a user's internet activity through their servers. If those servers are controlled or accessed by Chinese-linked firms, user data, including sensitive work information and browsing habits, could end up in Beijing's hands. Sherman, a senior fellow at the Atlantic Council focusing on data privacy, said that using a VPN owned by a Chinese company would be equivalent to surrendering one's browsing history to Beijing, as per the NBC News report.

The TTP, a technology-oriented branch of the Campaign for Accountability, an investigative nonprofit aimed at uncovering "corruption, negligence, and unethical conduct," released its report on Chinese VPN applications on April 1. TTP reports that several of the VPNs are indirectly tied to Qihoo 360. The flagged applications are all virtual private networks, or VPNs, which let a user route their internet traffic through the provider's servers.
Names such as VPNify, Ostrich VPN, and Now VPN do not explicitly indicate any connections to China or Chinese ownership in the app stores. Although Qihoo 360 isn't listed as the direct developer, many of the apps are operated by entities like Lemon Seed, Autumn Breeze, and Innovative Connecting, all tied to Qihoo via Chinese and Cayman Islands corporate records.

Apple quickly removed three apps purportedly connected to Qihoo 360: Thunder VPN, Snap VPN, and Signal Secure VPN. Turbo VPN and VPN Proxy Master, both accessible on the Google Play Store, along with three additional options provided by Google, remain available. The findings raise important questions about who really controls these "free" VPN services and what happens to your data when you trust the wrong one.

A VPN can put you at risk if it logs your data and shares it with third parties, especially if it's tied to governments with wide surveillance powers. Not all free VPNs are dangerous, but many have vague ownership and poor privacy policies. Always research the company behind the app.

Are AI models cannibalising their own creations? With the internet flooded by AI content, experts warn of a looming collapse

Time of India · 12 minutes ago

Artificial intelligence was supposed to be humanity's greatest invention—machines learning from us, evolving with us, and eventually helping us transcend our limits. But as it turns out, the future may not be as futuristic as it seemed. A new report by VICE reveals a startling truth: the AI we built to mirror human creativity is now stuck in a feedback loop of its own making—and it's beginning to fall apart.

The Cannibalism of Content

Large Language Models (LLMs) like ChatGPT, Claude, and Google's Gemini have been trained on the vast swathes of human knowledge available online. From literary works to technical documentation, news reports to Reddit threads, it is this rich, human-authored material that gave these systems their seemingly uncanny abilities. But now, the source is running dry. As more and more content online is AI-generated, these models are being trained on their own regurgitations. Veteran tech journalist Steven Vaughan-Nichols calls this phenomenon 'model collapse,' a point at which output quality nosedives because the model is learning from corrupted, recycled information. In a world where humans lean increasingly on machines to generate content, the AI is left feeding on itself—and the results are alarming.

Garbage In, Garbage Out

The industry term for this spiraling quality crisis is GIGO: Garbage In, Garbage Out. Vaughan-Nichols explains that once LLMs consume too much AI-generated content, their outputs become not just unreliable but potentially harmful—factually incorrect, nonsensical, and sometimes ethically dangerous. AI that once wrote sonnets and solved math problems might now misdiagnose a health condition or invent a completely fake legal precedent. To counter this, leading AI companies like OpenAI, Google, and Anthropic have implemented a fix called retrieval-augmented generation, or RAG. Essentially, they're giving AI the ability to search for real-time information instead of relying solely on their (increasingly flawed) training data. It's like teaching AI to Google—but even that might not be enough. (A minimal sketch of the retrieval idea appears at the end of this excerpt.)

A Sea of Synthetic Sludge

The internet, once a reservoir of organic thought, is rapidly becoming a junkyard of AI-generated spam. Half-baked advice columns, incorrect blog posts, and rewritten slop—all machine-made—are choking the flow of real information. In a recent test conducted by Bloomberg, 11 state-of-the-art RAG-enhanced models were pitted against traditional LLMs. The outcome? The RAG models were more likely to produce unsafe or unethical responses, including privacy breaches and misinformation. This is deeply troubling, considering these systems are being used in everything from mental health apps to banking services. The very tools built to mimic human intelligence are now making mistakes a human wouldn't.

The Human Cost of Artificial Brilliance

What happens when all the human-created wisdom is consumed? When models trained to be like us no longer have us to learn from? As Vaughan-Nichols puts it bluntly, 'This might all be a slow-motion car crash.' Unless tech companies figure out a way to incentivize real people to keep creating quality content—words, ideas, research, storytelling—the AI boom we're living through could quietly crash and burn.
The very existence of LLMs hinges on an uncomfortable paradox: they exist to replace humans, yet they can't evolve without us. Strip away the originality, the nuance, the lived experiences—and what remains is a hollow echo chamber of recycled ideas. In the end, as AI models spiral deeper into self-reference, they're proving what we may have forgotten in the race for efficiency: intelligence—real intelligence—is inherently human. And without it, the machines are just talking to themselves.
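To illustrate what retrieval-augmented generation actually does, here is a small self-contained Python sketch of the idea described above: retrieve the documents most relevant to a question, then prepend them to the prompt so the model answers from retrieved text rather than from its training data alone. The tiny corpus, the naive word-overlap retriever, and the prompt wording are illustrative assumptions, not any vendor's production pipeline.

```python
# Toy illustration of retrieval-augmented generation (RAG): score a small corpus of
# documents against a question, then prepend the best matches to the prompt so a
# model can ground its answer in retrieved text instead of its training data alone.
# This is a sketch of the general idea, not any vendor's actual pipeline.

def retrieve(question: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question (stand-in for a real retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(question: str, corpus: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer only from retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question, corpus))
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

corpus = [
    "Magistral Small is Mistral's open-weight reasoning model released in 2025.",
    "GIGO stands for garbage in, garbage out.",
    "Model collapse describes quality degradation when models train on AI-generated text.",
]

# The resulting prompt would then be sent to an LLM; grounding answers in retrieved
# text is what the article calls "teaching AI to Google".
print(build_rag_prompt("What is model collapse?", corpus))
```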
