Latest news with #Gemini2.5Flash

Google Unveils Stitch, Replacing Galileo AI with Gemini-Powered Design Tool

Arabian Post

3 days ago

Google Unveils Stitch, Replacing Galileo AI with Gemini-Powered Design Tool

Google has officially launched Stitch, a generative AI tool designed to streamline user interface design and frontend development. The new offering replaces the Galileo AI service, which will be discontinued on June 20, 2025. Stitch leverages Google's Gemini 2.5 models to transform text prompts and images into functional UI designs and corresponding code.

Announced at the Google I/O 2025 developer conference, Stitch allows users to input natural language descriptions or image references, such as sketches or wireframes, to generate responsive UI layouts and frontend code. The tool supports customization of themes, color palettes, and user experience requirements, enabling developers to iterate on designs conversationally. Generated assets can be exported directly into applications or design platforms like Figma for further refinement.

Stitch is powered by Google's Gemini 2.5 Pro and Gemini 2.5 Flash models. Gemini 2.5 Pro offers enhanced reasoning capabilities through its experimental 'Deep Think' mode, which enables the model to consider multiple hypotheses before responding. This mode has demonstrated strong performance on complex benchmarks, including the 2025 USAMO and LiveCodeBench. Gemini 2.5 Flash, designed for efficiency, is optimized for speed and low-cost operation, making it suitable for rapid development cycles.

The transition to Stitch follows Google's acquisition of the AI-driven UI startup Galileo AI. Users of Galileo AI are encouraged to export their data, including designs and chat history, before the service shuts down on June 20, 2025. Conversations imported into Stitch will not retain continuity, so users will need to adapt to the new platform's interface and capabilities.

Stitch is currently available at no cost to users aged 18 and above in most countries, subject to Google's Terms of Service and Privacy Policy. The tool aims to make app creation more accessible and efficient for both experienced developers and newcomers by bridging the gap between design and development.
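Stitch itself is a hosted tool rather than a public API, but the underlying pattern described above, turning a natural-language description into frontend markup with a Gemini 2.5 model, can be sketched directly against the Gemini API. The following is a minimal illustration assuming the google-genai Python SDK; the prompt, placeholder API key, and output handling are illustrative only and are not Stitch's actual interface.

```python
# Minimal sketch of prompt-to-UI-code generation with a Gemini 2.5 model,
# assuming the google-genai Python SDK. This illustrates the general pattern
# described above; it is not Stitch's own interface.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

prompt = (
    "Generate a responsive HTML/CSS layout for a travel app home screen: "
    "a search bar, a horizontal carousel of destination cards, and a bottom "
    "navigation bar. Use a teal colour palette."
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
)

# The returned text contains the generated markup, which could then be
# refined conversationally or handed off to a design tool such as Figma.
print(response.text)
```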

Google's AI Matryoshka: Rearchitecting the search giant with AI even as privacy concerns loom

The Hindu

23-05-2025

Google's AI Matryoshka: Rearchitecting the search giant with AI even as privacy concerns loom

Google's annual I/O developer conference in 2025 was less a showcase of disparate product updates and more a systematic unveiling of an AI-centric future. The unspoken theme was that of a Matryoshka doll: at its core, a refined and potent artificial intelligence, with each successive layer representing a product or platform drawing life from this central intelligence. Google is not merely sprinkling AI across its offerings; it is fundamentally rearchitecting its vast ecosystem around it. The result is an increasingly interconnected and agentic experience, one that extends to users, developers, and enterprises alike, prompting a re-evaluation of the firm's responsibilities concerning the data that fuels this transformation.

'More intelligence is available, for everyone, everywhere,' declared Sundar Pichai, CEO of Google and its parent company, Alphabet. 'And the world is responding, adopting AI faster than ever before.' This statement signals a push towards a more intelligent, autonomous, and personalised Google. Yet, as each layer of this AI Matryoshka is peeled back, the data upon which this intelligence is built, the copyrighted material ingested by its models, and the implications for user privacy are brought into sharper focus, forming a critical, if less trumpeted, narrative.

It has been nearly two years since Satya Nadella of Microsoft described Google as an '800-pound gorilla' challenged to perform new AI tricks. Google's response, particularly evident at I/O 2025, suggests the gorilla is learning to pirouette.

At the innermost core of Google's AI strategy lie its foundational models. The keenly awaited Gemini 2.5 Flash and Pro models, now nearing general availability, represent more than incremental improvements; they are a refined engine for AI experiences. The enhanced reasoning mode in Gemini 2.5 Pro, dubbed Deep Think, leverages parallel processing and demonstrates impressive capabilities in complex mathematics and coding, even achieving a notable score on the 2025 USAMO, a demanding mathematics benchmark. While Deep Think will initially be available to select testers via the Gemini API, its potential to grapple with highly complex problems signals a significant advancement in AI reasoning.

Workhorse Upgraded

Gemini 2.5 Flash, the workhorse model, has also received substantial upgrades, purportedly becoming 'better in nearly every dimension.' It boasts increased efficiency, using 20-30% fewer tokens (the units of data processed by AI models), and is set to become the default in the Gemini application. These models, enhanced with native audio output for more naturalistic conversational interactions in 2.5 Pro and Flash, and a pioneering multi-speaker text-to-speech function supporting two voices across over 24 languages, constitute the powerful nucleus from which all other AI functionalities radiate.

This computational prowess is built upon Google's proprietary Tensor Processing Units (TPUs). The seventh-generation TPU, Ironwood, is said to deliver a tenfold performance increase over its predecessor, offering a formidable 42.5 exaFLOPS of compute per pod. Such hardware forms the bedrock for training and deploying these sophisticated AI systems. However, the very power of these generative models, especially Imagen 4 and Veo 3 for visual media, and Lyria 2 for music generation, necessitates a closer look at their training data. The creation of rich, nuanced outputs depends on ingesting colossal datasets.
Persistent industry-wide concerns revolve around the use of copyrighted material without explicit consent or remuneration for original creators. Google highlighted tools such as SynthID, designed to watermark AI-generated content, and a new SynthID Detector for its verification. Yet these are mitigations, not comprehensive solutions, to the intricate and ongoing debate surrounding copyright and fair use in an era increasingly defined by generative AI. The provenance of, and fiduciary responsibility for, the data remain complex issues.

Platform Proliferation

One layer out from the core models are the platforms and APIs that democratise access to this AI. The Gemini API and Vertex AI are pivotal here, serving as the primary conduits for developers and enterprises. Google aims to improve the developer experience by offering 'thought summaries', providing transparency into the model's reasoning, and extending 'thinking budgets' to Gemini 2.5 Pro, giving developers more control over computational resources. Critically, native SDK support for the Model Context Protocol (MCP) has been incorporated into the Gemini API. This represents a significant move towards fostering a more interconnected ecosystem of AI agents, enabling them to communicate and collaborate with greater efficacy by sharing contextual information. This inter-agent communication, while powerful, also introduces new vectors for data security considerations, as information flows between potentially diverse systems. Project Mariner, a research tool, is also being integrated into the Gemini API and Vertex AI, allowing users to experiment with its task automation capabilities.

AI Meets the User

The outermost layers of Google's AI Matryoshka are where users most directly encounter AI, often without fully comprehending the sophisticated infrastructure beneath. This is where Google is reimagining search, commerce, coding, and application integration. The 'AI Mode' in Search, scheduled for rollout to users in the United States, will offer enhanced reasoning and multimodal search capabilities, powered by a customised version of Gemini 2.5. A feature within this mode, Deep Search, is designed to generate comprehensive, cited reports. The quality and impartiality of these citations, especially when generated by AI, will be an area for careful scrutiny. Within AI Mode, a novel shopping experience will allow users to virtually try on clothes by uploading their own photographs. Once a product is selected, an 'agentic checkout' feature, initially available in the U.S., promises to complete the purchase. Such a feature inherently requires access to sensitive personal and financial data, raising questions about data minimisation, security, and the potential for profiling.

The All-in-One App

The Gemini application itself is being significantly augmented. The Live feature is now generally available on Android and iOS, and the app incorporates image generation. For subscribers to the new Google AI Ultra tier, the app will feature the latest video generation tool, complete with native audio. A 'Deep Research' function within the app can now draw upon users' private documents and images. While potentially offering powerful personal insights, this feature dives deep into personal data pools, demanding robust privacy safeguards and transparent consent mechanisms. How this data is firewalled, processed, and protected from misuse or overreach will be paramount.
Canvas, the creative workspace within Gemini, has been made more intuitive with the Gemini 2.5 models, facilitating the creation of interactive infographics, quizzes, and even podcast-style Audio Overviews in 45 languages. Furthermore, Gemini is being integrated into the Chrome browser (initially for Pro and Ultra subscribers in the U.S.), enabling users to query and summarise webpage content.

For developers, the new asynchronous coding agent, Jules, is now in public beta globally wherever Gemini models are accessible. It integrates directly with existing code repositories, understanding project context to write tests, build features, and rectify bugs using Gemini 2.5 Pro.

Mr. Pichai's 'new phase of the AI platform shift' is undeniably underway. Google's introduction of a new Google AI Ultra subscription tier offers users differentiated access to its most advanced AI capabilities. This stratification, however, prompts questions about whether the most robust privacy-enhancing features or responsible AI controls will be universally available, or whether a 'privacy premium' could emerge, where deeper safeguards are reserved for paying customers. As Google rearchitects itself around AI, the intricate dance between innovation, utility, and the stewardship of data will define its next chapter. The layers of the Matryoshka are still being revealed, and with each one, the responsibilities grow.
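The 'thinking budgets' and 'thought summaries' mentioned above are exposed to developers through the Gemini API. Below is a minimal sketch, assuming the google-genai Python SDK (exact field names may differ between releases): the budget caps how many tokens the model may spend on internal reasoning, while include_thoughts requests summarised reasoning alongside the answer.

```python
# Minimal sketch of a Gemini API call with a thinking budget and thought
# summaries enabled, assuming the google-genai Python SDK.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Prove that the sum of two odd integers is always even.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=2048,   # cap on tokens spent on internal reasoning
            include_thoughts=True,  # return thought summaries with the answer
        )
    ),
)

# Thought-summary parts are flagged separately from the final answer.
for part in response.candidates[0].content.parts:
    label = "THOUGHT SUMMARY" if part.thought else "ANSWER"
    print(f"[{label}] {part.text}")
```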

Google I/O Reveals New AI Features for Gemini and Search

TECHx

22-05-2025

Google I/O Reveals New AI Features for Gemini and Search

At its annual developer conference, Google I/O, the tech giant revealed a wide range of new AI-powered models, products, and features. These updates are aimed at enhancing the Gemini ecosystem and Search functionality. Google reported that the new tools are designed to deliver more intelligent, agentic, and personalized assistance to users globally. Some of the key announcements are especially relevant to users in the Middle East and North Africa (MENA) region.

Google announced Gemini 2.5 Flash, a new, efficient model focused on speed and low cost. It follows the earlier launch of Gemini 2.5 Pro, which now leads the LMArena leaderboard in all categories. The Gemini 2.5 Flash model is now available for preview in Google AI Studio and will roll out in the Gemini app in June. The company also introduced Jules, an asynchronous AI coding agent. Jules can fix bugs and create pull requests on its own, allowing developers to focus on impactful coding tasks. It will be available in various countries, including MENA.

As part of its Search updates, Google revealed that the AI Overviews feature is now expanding to the MENA region. Powered by the Gemini model, AI Overviews offer quick summaries and helpful links, enabling users to understand topics faster. The feature is now available in more languages, including Arabic. However, AI Mode, Google's most advanced AI-powered Search experience, is currently limited to users in the United States and is not available in MENA. AI Mode includes multimodal reasoning and can even act as a virtual shopping agent using a single uploaded photo.

Additional Gemini-related announcements include:

  • Gemini Live with Camera is now available on Android and iOS devices, free for all Gemini users.
  • AI-powered Quizzes have launched globally to support dynamic, personalized learning on desktop and mobile.
  • Audio Overviews now let users create infographics or listen to podcast-style summaries in multiple languages, including Arabic.

Furthermore, Imagen 4, Google's latest image generation model, is now accessible to users in the MENA region, offering enhanced image detail and personalization. Meanwhile, some other updates, such as Veo 3 and Flow (tools focused on advanced video generation and filmmaking), are not yet available in MENA.

These developments highlight Google's commitment to expanding AI accessibility and customization across regions. The announcements made at Google I/O 2025 reflect the company's broader AI strategy for innovation and regional inclusion.

Google unveils Gemini 2.5 updates for enhanced AI on Vertex

Techday NZ

22-05-2025

Google unveils Gemini 2.5 updates for enhanced AI on Vertex

Google has introduced enhancements to its Gemini 2.5 Flash and Pro AI models, expanding capabilities on the Vertex AI platform for organisations seeking more sophisticated and secure AI-driven applications. The latest updates to the Gemini 2.5 Flash and Pro models focus on three principal areas: providing more transparent reasoning with 'thought summaries', introducing a new Deep Think mode for advanced problem solving, and strengthening protection against indirect prompt injection attacks.

The 'thought summaries' feature is designed to improve clarity and auditability of enterprise AI systems. It systematically organises a model's raw thoughts, including key details and tool usage, into a clear format. The company said this would allow customers to validate complex AI tasks, ensure alignment with business logic, and simplify debugging processes. The aim is to build systems that are more trustworthy and dependable, addressing a key challenge in enterprise-scale AI deployments.

For complex use cases such as mathematics and programming, Gemini 2.5 Pro is introducing an enhanced reasoning mode called Deep Think. This feature enables the model to consider multiple hypotheses simultaneously before producing a response. Utilising new research techniques in parallel thinking, Google intends for this to help in highly complex scenarios. Gemini 2.5 Pro Deep Think will initially be available to trusted testers via Vertex AI.

Security remains a priority with the updated models. Google has increased Gemini's protection rate against indirect prompt injection attacks during tool use, aiming to make it more suitable for enterprise adoption where security compliance is often critical. The company describes Gemini 2.5 as its most secure model family to date.

Gemini 2.5 Flash will become generally available on Vertex AI in early June, with Gemini 2.5 Pro to follow soon after. Google asserts that these updates will have a tangible impact on business operations, from streamlining processes to improving customer engagement. Enterprise users have reported efficiencies using Gemini 2.5 on Vertex AI.

Mike Branch, Vice President Data & Analytics at Geotab, commented on the balance between performance and efficiency: "With respect to Geotab Ace (our data analytics agent for commercial fleets), Gemini 2.5 Flash on Vertex AI strikes an excellent balance. It maintains good consistency in the agent's ability to provide relevant insight to the customer question, while also delivering 25% faster response times on subjects where it has less familiarity. What's more, our early analysis suggests it could operate at potentially 85% lower cost per question compared to the Gemini 1.5 Pro baseline. This efficiency is vital for scaling AI insights affordably to our customers via Ace."

Gemini 2.5 Pro is positioned as the most advanced model for more intricate enterprise requirements. In addition to Deep Think, it introduces features such as configurable Thinking Budgets, supporting up to 32,000 tokens of processing for finer control over resource allocation and more complex tasks.

Yashodha Bhavnani, Vice President of AI Product Management at Box, described Gemini 2.5 Pro's role in addressing unstructured data: "Box is revolutionising how enterprises interact with their vast, and rarely organised, amounts of content. With Box AI Extract Agents, powered by Gemini 2.5 on Vertex AI, users can instantly extract precise insights from complex, unstructured content – whether it's scanned PDFs, handwritten forms, or image-heavy documents. Gemini 2.5 Pro's advanced reasoning makes it the top choice for tackling complex enterprise tasks, delivering 90%+ accuracy on complex extraction use cases and outperforming previous models in both clause interpretation and temporal reasoning, leading to a significant reduction in manual review efforts. This evolution pushes the boundaries of automation, allowing businesses to unlock and act upon their most valuable information with even greater impact and efficiency."

Diverse organisations, including LiveRamp, are looking to Gemini 2.5 to broaden data-driven capabilities across business lines. Roopak Gupta, Vice President Engineering at LiveRamp, said: "With its improved reasoning capabilities and insightful responses, Gemini 2.5 provides tremendous potential for LiveRamp. Its advanced features can enhance our data analysis agents and add support across our product suite, including segmentation, activation, and clean room-powered measurement for advertisers, publishers, and retail media networks. We are committed to assessing the model's impact across a wide array of features and functionalities to ensure our clients and partners can unlock new use cases and enhance existing ones."

Members of the Google Developer Experts community have also begun building new solutions using Gemini 2.5's enhanced context and reasoning features. Recent examples include a persona-based news recommender for supply chain analysts, a disaster preparedness app that delivers personalised guidance from weather data, and a GitHub Action that automates pull request reviews to identify errors and inconsistencies early in the software development process. These developments highlight the ongoing efforts by Google to expand the enterprise and developer capabilities within Vertex AI as businesses continue adopting artificial intelligence for a range of applications.
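The document-extraction workload Box describes maps onto the Gemini API's structured output support, where a response schema constrains the model to return typed JSON. The sketch below assumes the google-genai Python SDK with Vertex AI initialisation; the InvoiceRecord schema, project ID, and field names are hypothetical illustrations, not Box's implementation.

```python
# Illustrative sketch of structured extraction from unstructured text with
# Gemini 2.5 Pro on Vertex AI, assuming the google-genai Python SDK.
from google import genai
from google.genai import types
from pydantic import BaseModel

class InvoiceRecord(BaseModel):  # hypothetical schema for illustration
    vendor: str
    invoice_date: str
    total_amount: float
    line_items: list[str]

# Placeholder project/location; the same SDK also supports API-key access.
client = genai.Client(vertexai=True, project="my-project", location="us-central1")

document_text = "..."  # e.g. OCR output from a scanned PDF or handwritten form

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=f"Extract the invoice details from this document:\n{document_text}",
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=InvoiceRecord,  # constrain output to this schema
    ),
)

# response.parsed is an InvoiceRecord instance built from the model's JSON output.
print(response.parsed)
```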

Everything you need to know about Google's Gemini and Search updates

Campaign ME

21-05-2025

Everything you need to know about Google's Gemini and Search updates

Google unveiled a suite of new AI-powered models, products, and features for Gemini and Search at its annual developer event, Google I/O. The updates will offer users worldwide AI assistance that is more intelligent, agentic, and personalised, including:

Models and products

Gemini 2.5 Flash: Gemini 2.5 Pro launched earlier in March, and Gemini 2.5 Flash, a budget-friendly model designed for speed, has now been announced. The new model is available for preview in Google AI Studio for developers, and in the Gemini app for everyone starting June.

Jules: An asynchronous AI coding agent that can fix bugs and create pull requests in parallel, so coders can focus on creative and impactful work. This will be available for developers in different countries, including those in MENA.

Gemini and Search updates
