Latest news with #Gemini2.5Flash-Lite


India Today
11 hours ago
- Business
Google rolls out budget-friendly Gemini 2.5 Flash Lite, opens 2.5 Flash and Pro to all
Google has introduced a new addition to its Gemini AI model line-up — the Gemini 2.5 Flash-Lite. According to Google, this new AI model delivers high performance at the lowest cost and fastest speeds yet. Alongside the new model, the company has announced the general availability of the Gemini 2.5 Flash and Pro models to all.

Google says that Gemini 2.5 Flash-Lite is its most affordable and fastest model in the 2.5 family. It has been built to handle large volumes of latency-sensitive tasks such as translation, classification, and reasoning at a lower computational cost. Compared to its predecessor, 2.0 Flash-Lite, the new model is said to deliver improved accuracy and quality across coding, maths, science, reasoning, and multimodal benchmarks. 'It excels at high-volume, latency-sensitive tasks like translation and classification, with lower latency than 2.0 Flash-Lite and 2.0 Flash on a broad sample of prompts,' says Google.

Google highlights that despite being lightweight, 2.5 Flash-Lite comes with a full suite of advanced capabilities. These include support for multimodal inputs, a 1 million-token context window, integration with tools like Google Search and code execution, and the flexibility to modulate computational thinking based on budget. According to the company, these features make Gemini 2.5 Flash-Lite ideal for developers looking to balance efficiency with robust AI capabilities.

Gemini 2.5 Flash-Lite availability

The Gemini 2.5 Flash-Lite model is currently available in preview via Google AI Studio and Vertex AI. Google has also integrated customised versions of 2.5 Flash-Lite and Flash into its core products like Search, expanding their reach beyond developers to everyday users.

Gemini 2.5 Flash and Pro models now available to all

In addition to introducing Flash-Lite, Google has also announced that its Gemini 2.5 Flash and Gemini 2.5 Pro models are now stable and generally available. These models were previously accessible to a select group of developers and organisations for early production use. According to Google, companies like Snap, SmartBear, and creative tools provider Spline have already integrated these models into their workflows with encouraging results. Now that Flash and Pro are fully open, developers can use them in production-grade applications with greater confidence.

Both the stable and preview models can be accessed through Google AI Studio, Vertex AI, and the Gemini app.
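
For developers who want to try the preview, here is a minimal sketch of calling the new model through the google-genai Python SDK. The SDK usage reflects Google's public Gen AI SDK; the exact model identifier, the placeholder API key, and the prompt are illustrative assumptions, not details taken from the article.

```python
# Rough sketch (not from the article): calling Gemini 2.5 Flash-Lite via the
# google-genai Python SDK. Model identifier and prompt are assumptions.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Flash-Lite is pitched at high-volume, latency-sensitive work such as
# classification and translation, so a short classification prompt is used here.
response = client.models.generate_content(
    model="gemini-2.5-flash-lite",  # the preview identifier may carry a dated suffix
    contents=(
        "Classify the sentiment of this review as positive, negative, or neutral: "
        "'The battery lasts all day and the screen is sharp.'"
    ),
)
print(response.text)
```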


Time of India
a day ago
- Business
Google launches its most cost-efficient and fastest Gemini 2.5 model yet
Google has expanded its family of Gemini 2.5 hybrid reasoning AI models. The company said that its Gemini 2.5 Pro and Gemini 2.5 Flash models are now generally available. Further, it released a preview of the new 2.5 Flash-Lite model, which it claims is its most cost-efficient and fastest model yet. "We designed Gemini 2.5 to be a family of hybrid reasoning models that provide amazing performance, while also being at the Pareto Frontier of cost and speed," Google stated in its announcement.

General availability of Gemini 2.5 Pro and Gemini 2.5 Flash models

The generally available versions of Gemini 2.5 Flash and 2.5 Pro are now ready for production applications, a move Google attributes to valuable developer feedback gathered over recent weeks. Adding to the lineup, Google has introduced a preview of Gemini 2.5 Flash-Lite, touted as its most cost-efficient and fastest 2.5 model to date.

"Gemini 2.5 Pro + 2.5 Flash are now stable and generally available. Plus, get a preview of Gemini 2.5 Flash-Lite, our fastest + most cost-efficient 2.5 model yet," Google CEO Sundar Pichai said in a post on X. "Exciting steps as we expand our 2.5 series of hybrid reasoning models that deliver amazing performance at the Pareto frontier of cost and speed," he added.

Google says that this new version is designed to excel in high-volume, latency-sensitive tasks like translation and classification, offering lower latency than its predecessors, 2.0 Flash-Lite and 2.0 Flash, across a wide range of prompts. Despite its enhanced efficiency, 2.5 Flash-Lite retains the core capabilities that define the Gemini 2.5 family. These include the ability to adjust computational "thinking" based on budget, integrate with tools such as Google Search and code execution, support multimodal input (processing various data types), and offer a substantial 1-million-token context length, the company says. According to Google, the model also demonstrates "all-around higher quality" than 2.0 Flash-Lite across benchmarks in coding, math, science, reasoning, and multimodal tasks.

Developers can access the preview of Gemini 2.5 Flash-Lite through Google AI Studio and Vertex AI, alongside the newly stable versions of 2.5 Flash and Pro. Both 2.5 Flash and Pro are also now accessible directly within the Gemini app. Furthermore, custom versions of 2.5 Flash-Lite and Flash have been integrated into Google Search.
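
The adjustable "thinking" budget and built-in tools mentioned above are exposed through the generation config in the google-genai Python SDK. The sketch below is a hedged illustration of that mechanism, assuming the public SDK types; the budget value, prompt, and model name are assumptions chosen for the example rather than figures from the article.

```python
# Hedged sketch of the configurable thinking budget and Google Search grounding
# described in the article, using the google-genai Python SDK.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarise the latest Gemini 2.5 model announcements in two sentences.",
    config=types.GenerateContentConfig(
        # Cap the tokens the model may spend on internal reasoning before answering;
        # 0 disables thinking, larger budgets trade latency for answer quality.
        thinking_config=types.ThinkingConfig(thinking_budget=512),
        # Enable Google Search grounding, one of the built-in tools noted above.
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```

Setting the budget low (or to zero) keeps Flash-Lite in its fast, low-cost mode for bulk tasks, while raising it lets the same model spend more reasoning effort on harder prompts.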