Latest news with #GeminiUltra
Yahoo
28-05-2025
- Business
- Yahoo
Alphabet (GOOGL) Stock Outperforms Nasdaq – Analyst Sees Promising Signs After I/O
We recently published a list of . In this article, we are going to take a look at where Alphabet Inc. (NASDAQ:GOOG) stands against other AI stocks gaining Wall Street's attention. One of the most notable analyst calls on Tuesday, May 27, was for Alphabet Inc. (NASDAQ:GOOGL). Cantor Fitzgerald analyst Deepak Mathivanan reiterated a 'Neutral' rating on the stock with a $171.00 price target.

Alphabet Inc. (NASDAQ:GOOG) is an American multinational technology conglomerate and holding company that wholly owns the internet giant Google, among other businesses.

Discussing the company's recent announcements at Alphabet's I/O conference, the firm described the sentiment towards Alphabet shares as incrementally positive. Developments discussed include the launch of AI Mode for U.S. search users; the integration of agentic capabilities into apps and services, including the Gemini app and Chrome; the new Gemini Ultra subscription tier with Project Mariner; and the XR glasses designed for immersive AI experiences.

'In the past week, GOOGL hosted its annual I/O and Marketing Live conferences, where the company rolled out a long list of new products in its family of apps. Notable announcements from I/O, in our view, include: 1) launch of AI Mode for US search users, 2) integration of agentic capabilities into various apps & services including Gemini app, AI Mode, and Chrome soon, 3) new Gemini Ultra subscription tier with Project Mariner, and 4) XR glasses for immersive AI experiences. At Marketing Live, the key announcements include 1) expansion of ads in AI Overviews and launch of ads in AI Mode, 2) creative iteration with Veo 3 and Imagen 4, and 3) new agentic capabilities for campaign management. GOOGL's strong tech and infra capabilities were never in doubt and I/O made it abundantly clear that the company is willing to accelerate deployments of GenAI experiences into core products to improve user experience and defend share against competing GenAI products. We came away sensing an increased level of urgency at GOOGL post this I/O event. While some of the product launches are likely to take time before seeing meaningful adoption, we are incrementally positive on GOOGL shares post I/O. Shares outperformed the Nasdaq by 4-pts during the past week.'

Overall, GOOGL ranks 1st on our list of AI stocks gaining Wall Street's attention. While we acknowledge the potential of GOOGL as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns with limited downside risk. If you are looking for an AI stock that is more promising than GOOGL and that has 100x upside potential, check out our report about this cheapest AI stock. Disclosure: None. This article was originally published at Insider Monkey.


Business Mayor
17-05-2025
- Business
- Business Mayor
Google's AlphaEvolve: The AI agent that reclaimed 0.7% of Google's compute – and how to copy it
Google's new AlphaEvolve shows what happens when an AI agent graduates from lab demo to production work – and when one of the most talented technology companies is the one driving it. Built by Google DeepMind, the system autonomously rewrites critical code and already pays for itself inside Google. It shattered a 56-year-old record in matrix multiplication (the core of many machine learning workloads) and clawed back 0.7% of compute capacity across the company's global data centers.

Those headline feats matter, but the deeper lesson for enterprise tech leaders is how AlphaEvolve pulls them off. Its architecture – controller, fast-draft models, deep-thinking models, automated evaluators and versioned memory – illustrates the kind of production-grade plumbing that makes autonomous agents safe to deploy at scale. Google's AI technology is arguably second to none, so the trick is figuring out how to learn from it, or even use it directly. Google says an Early Access Program is coming for academic partners and that 'broader availability' is being explored, but details are thin. Until then, AlphaEvolve is a best-practice template: if you want agents that touch high-value workloads, you'll need comparable orchestration, testing and guardrails.

Consider just the data center win. Google won't put a price tag on the reclaimed 0.7%, but its annual capex runs tens of billions of dollars. Even a rough estimate puts the savings in the hundreds of millions annually – enough, as independent developer Sam Witteveen noted on our recent podcast, to pay for training one of the flagship Gemini models, estimated to cost upwards of $191 million for a version like Gemini Ultra. VentureBeat was the first to report about the AlphaEvolve news earlier this week. Now we'll go deeper: how the system works, where the engineering bar really sits and the concrete steps enterprises can take to build (or buy) something comparable.

AlphaEvolve runs on what is best described as an agent operating system – a distributed, asynchronous pipeline built for continuous improvement at scale. Its core pieces are a controller, a pair of large language models (Gemini Flash for breadth; Gemini Pro for depth), a versioned program-memory database and a fleet of evaluator workers, all tuned for high throughput rather than just low latency.

[Figure: A high-level overview of the AlphaEvolve agent structure. Source: AlphaEvolve paper.]

This architecture isn't conceptually new, but the execution is. 'It's just an unbelievably good execution,' Witteveen says. The AlphaEvolve paper describes the orchestrator as an 'evolutionary algorithm that gradually develops programs that improve the score on the automated evaluation metrics' (p. 3); in short, an 'autonomous pipeline of LLMs whose task is to improve an algorithm by making direct changes to the code' (p. 1).

Takeaway for enterprises: If your agent plans include unsupervised runs on high-value tasks, plan for similar infrastructure: job queues, a versioned memory store, service-mesh tracing and secure sandboxing for any code the agent produces.

A key element of AlphaEvolve is its rigorous evaluation framework. Every iteration proposed by the pair of LLMs is accepted or rejected based on a user-supplied 'evaluate' function that returns machine-gradable metrics.
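To make that idea concrete, here is a minimal, hedged sketch of what a user-supplied evaluator and acceptance rule can look like. The task (scoring candidate sorting routines), the metric names and the acceptance threshold are illustrative assumptions for this example, not AlphaEvolve's actual interface:

```python
# Hedged sketch: a user-supplied "evaluate" function plus an acceptance rule.
# The task, metric names and threshold are assumptions for illustration only.
import random
import time
from typing import Callable, Dict, List

def evaluate(candidate: Callable[[List[int]], List[int]]) -> Dict[str, float]:
    """Run the candidate on generated cases and return machine-gradable metrics."""
    cases = [random.sample(range(1000), 50) for _ in range(20)]
    correctness = sum(candidate(list(c)) == sorted(c) for c in cases) / len(cases)
    start = time.perf_counter()
    for c in cases:
        candidate(list(c))
    latency_ms = (time.perf_counter() - start) * 1000 / len(cases)
    return {"correctness": correctness, "latency_ms": latency_ms}

def accept(metrics: Dict[str, float], baseline: Dict[str, float]) -> bool:
    """Keep a proposal only if it stays fully correct and beats the incumbent."""
    return metrics["correctness"] == 1.0 and metrics["latency_ms"] < baseline["latency_ms"]

# Usage: the current implementation sets the baseline; a proposed rewrite
# survives only if the evaluator says it is strictly better.
baseline = evaluate(sorted)
proposal = evaluate(lambda xs: sorted(xs, reverse=True)[::-1])  # correct but slower
print(accept(proposal, baseline))  # almost certainly False
```

The point is that the verdict comes from executable checks rather than human judgment, so the agent can grade thousands of its own proposals unattended.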
This evaluation system begins with ultrafast unit-test checks on each proposed code change – simple, automatic tests (similar to the unit tests developers already write) that verify the snippet still compiles and produces the right answers on a handful of micro-inputs – before passing the survivors on to heavier benchmarks and LLM-generated reviews. This runs in parallel, so the search stays fast and safe. In short: let the models suggest fixes, then verify each one against tests you trust.

AlphaEvolve also supports multi-objective optimization (optimizing latency and accuracy simultaneously), evolving programs that hit several metrics at once. Counter-intuitively, balancing multiple goals can improve a single target metric by encouraging more diverse solutions.

Takeaway for enterprises: Production agents need deterministic scorekeepers, whether that's unit tests, full simulators or canary traffic analysis. Automated evaluators are both your safety net and your growth engine. Before you launch an agentic project, ask: 'Do we have a metric the agent can score itself against?'

AlphaEvolve tackles every coding problem with a two-model rhythm. First, Gemini Flash fires off quick drafts, giving the system a broad set of ideas to explore. Then Gemini Pro studies those drafts in more depth and returns a smaller set of stronger candidates. Feeding both models is a lightweight 'prompt builder,' a helper script that assembles the question each model sees. It blends three kinds of context: earlier code attempts saved in a project database, any guardrails or rules the engineering team has written, and relevant external material such as research papers or developer notes. With that richer backdrop, Gemini Flash can roam widely while Gemini Pro zeroes in on quality.

Unlike many agent demos that tweak one function at a time, AlphaEvolve edits entire repositories. It describes each change as a standard diff block – the same patch format engineers push to GitHub – so it can touch dozens of files without losing track. Afterward, automated tests decide whether the patch sticks. Over repeated cycles, the agent's memory of success and failure grows, so it proposes better patches and wastes less compute on dead ends. (A hedged code sketch of this loop follows below.)

Takeaway for enterprises: Let cheaper, faster models handle brainstorming, then call on a more capable model to refine the best ideas. Preserve every trial in a searchable history, because that memory speeds up later work and can be reused across teams. Accordingly, vendors are rushing to provide developers with new tooling around things like memory. Products such as OpenMemory MCP, which provides a portable memory store, and the new long- and short-term memory APIs in LlamaIndex are making this kind of persistent context almost as easy to plug in as logging. OpenAI's Codex-1 software-engineering agent, also released today, underscores the same pattern. It fires off parallel tasks inside a secure sandbox, runs unit tests and returns pull-request drafts – effectively a code-specific echo of AlphaEvolve's broader search-and-evaluate loop.

AlphaEvolve's tangible wins – reclaiming 0.7% of data center capacity, cutting Gemini training kernel runtime by 23%, speeding up FlashAttention by 32%, and simplifying TPU design – share one trait: they target domains with airtight metrics. For data center scheduling, AlphaEvolve evolved a heuristic that was evaluated using a simulator of Google's data centers based on historical workloads.
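As flagged above, here is a hedged sketch of one generation of that draft-then-refine, diff-based loop. Every callable here (the two model hooks, the patch applier and the scoring functions) is a placeholder assumption, not AlphaEvolve's real API; the intent is only to show the shape of the control flow:

```python
# Hedged sketch of one generation of a draft-then-refine patch loop.
# All callables are placeholders the caller must supply; nothing here is
# AlphaEvolve's actual interface.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Attempt:
    patch: str                                   # a standard unified diff
    metrics: Dict[str, float] = field(default_factory=dict)

def evolve_once(
    code: str,
    history: List[Attempt],                      # versioned program memory
    draft_model: Callable[[str, List[Attempt]], List[str]],   # breadth: many quick diffs
    refine_model: Callable[[str, List[str]], List[str]],      # depth: fewer, stronger diffs
    apply_patch: Callable[[str, str], str],      # (code, diff) -> patched code
    evaluate: Callable[[str], Dict[str, float]], # machine-gradable metrics
    accept: Callable[[Dict[str, float], Dict[str, float]], bool],
    baseline: Dict[str, float],
) -> List[Attempt]:
    """Draft broadly, refine deeply, then keep only patches the evaluators approve."""
    drafts = draft_model(code, history)          # cheap, wide exploration
    candidates = refine_model(code, drafts)      # smaller set of stronger diffs
    survivors: List[Attempt] = []
    for diff in candidates:
        metrics = evaluate(apply_patch(code, diff))
        attempt = Attempt(diff, metrics)
        history.append(attempt)                  # every trial is remembered
        if accept(metrics, baseline):
            survivors.append(attempt)
    return survivors
```

In a production setting the drafting and evaluation calls would run concurrently on a job queue, and the history list would live in the kind of versioned memory store the article describes.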
Likewise, for kernel optimization, the objective was to minimize actual runtime on TPU accelerators across a dataset of realistic kernel input shapes.

Takeaway for enterprises: When starting your agentic AI journey, look first at workflows where 'better' is a quantifiable number your system can compute – be it latency, cost, error rate or throughput. This focus allows automated search and de-risks deployment, because the agent's output (often human-readable code, as in AlphaEvolve's case) can be integrated into existing review and validation pipelines. This clarity allows the agent to self-improve and demonstrate unambiguous value.

While AlphaEvolve's achievements are inspiring, Google's paper is also clear about its scope and requirements. The primary limitation is the need for an automated evaluator; problems requiring manual experimentation or 'wet-lab' feedback are currently out of scope for this specific approach. The system can also consume significant compute – 'on the order of 100 compute-hours to evaluate any new solution' (AlphaEvolve paper, page 8) – necessitating parallelization and careful capacity planning.

Before allocating significant budget to complex agentic systems, technical leaders must ask critical questions:

- Machine-gradable problem? Do we have a clear, automatable metric against which the agent can score its own performance?
- Compute capacity? Can we afford the potentially compute-heavy inner loop of generation, evaluation and refinement, especially during the development and training phase?
- Codebase & memory readiness? Is your codebase structured for iterative, possibly diff-based, modifications? And can you implement the instrumented memory systems vital for an agent to learn from its evolutionary history?

Takeaway for enterprises: The increasing focus on robust agent identity and access management, as seen with platforms like Frontegg, Auth0 and others, also points to the maturing infrastructure required to deploy agents that interact securely with multiple enterprise systems.

AlphaEvolve's message for enterprise teams is manifold. First, your operating system around agents is now far more important than model intelligence. Google's blueprint shows three pillars that can't be skipped:

- Deterministic evaluators that give the agent an unambiguous score every time it makes a change.
- Long-running orchestration that can juggle fast 'draft' models like Gemini Flash with slower, more rigorous models – whether that's Google's stack or a framework such as LangChain's LangGraph.
- Persistent memory so each iteration builds on the last instead of relearning from scratch.

Enterprises that already have logging, test harnesses and versioned code repositories are closer than they think. The next step is to wire those assets into a self-serve evaluation loop so multiple agent-generated solutions can compete and only the highest-scoring patch ships (a minimal sketch of such a loop closes this article).

As Cisco's Anurag Dhingra, VP and GM of Enterprise Connectivity and Collaboration, told VentureBeat in an interview this week: 'It's happening, it is very, very real,' he said of enterprises using AI agents in manufacturing, warehouses and customer contact centers. 'It is not something in the future. It is happening there today.'
He warned that as these agents become more pervasive, doing 'human-like work,' the strain on existing systems will be immense: 'The network traffic is going to go through the roof,' Dhingra said. Your network, budget and competitive edge will likely feel that strain before the hype cycle settles. Start proving out a contained, metric-driven use case this quarter – then scale what works.

Watch the video podcast I did with developer Sam Witteveen, where we go deep on production-grade agents and how AlphaEvolve is showing the way.
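Finally, as referenced in the takeaways above, here is a minimal, hedged sketch of a self-serve evaluation loop in which several agent-generated patches compete and only the highest-scoring one ships. The metrics, the scoring rule and the idea of 'shipping' the winner are assumptions made up for the example:

```python
# Hedged sketch: several candidate patches compete; only the best one ships.
# Metric names, weights and the failure rule are illustrative assumptions.
from typing import Dict, List, Optional

def score(metrics: Dict[str, float]) -> float:
    """Collapse the evaluator's metrics into one comparable number."""
    if metrics.get("correctness", 0.0) < 1.0:
        return float("-inf")                          # a failing patch can never win
    return -metrics.get("latency_ms", float("inf"))   # faster is better

def pick_winner(candidates: List[Dict[str, float]]) -> Optional[int]:
    """Return the index of the best-scoring candidate, or None if none passes."""
    if not candidates:
        return None
    best_score, best_idx = max((score(m), i) for i, m in enumerate(candidates))
    return best_idx if best_score > float("-inf") else None

# Example: metrics reported by the evaluators for three agent-generated patches.
results = [
    {"correctness": 1.0, "latency_ms": 120.0},
    {"correctness": 0.9, "latency_ms": 80.0},    # fails tests, never ships
    {"correctness": 1.0, "latency_ms": 95.0},
]
print(f"Shipping candidate {pick_winner(results)}")  # -> Shipping candidate 2
```

Swap in whatever deterministic metrics your existing test harnesses and simulators already produce; the competition logic itself stays a few lines long.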


Hans India
12-05-2025
- Hans India
Google I/O 2025 Set for May 20: Android 16, Gemini AI, and XR Innovations in Focus
Google has officially announced the dates for its much-anticipated I/O 2025 developer conference, set to begin on May 20. The two-day event promises a wave of new software innovations, with major updates expected across Android, AI, web, and Cloud technologies. This year's conference will be open to all online, with livestreamed keynotes and technical sessions, while also hosting an in-person gathering at the Shoreline Amphitheatre in Mountain View, California.

A Preview of What's Coming at Google I/O 2025

Android 16: Enhanced UI, Security, and Health Features

Among the most anticipated announcements is the unveiling of Android 16, which has been making headlines in recent weeks. Though Google has remained tight-lipped about exact features, early reports suggest the update will bring redesigned volume controls, improved user interface elements, and advanced accessibility features. One of the standout additions is expected to be Advanced Protection Mode, a security-centric tool that automatically reinforces system defenses for users who may not dive into deeper privacy settings themselves. This feature is designed to cater to everyday users seeking better protection without needing to configure complex options.

Health and wellness will also be a central theme, as Health Connect 2.0 debuts with Android 16. The new version will allow sharing of medical records in FHIR format, an industry standard embraced by healthcare providers. Google says apps will only access or modify this data with explicit user consent, further reinforcing its privacy-first stance. In addition, users can expect tools to track physical activity intensity, with workouts categorised using World Health Organisation guidelines, offering more precise health insights and coaching opportunities.

Gemini and AI Announcements: Google's Expanding Ecosystem

Artificial intelligence is set to be another major pillar at I/O 2025. The event's website highlights tools like the Gemini open model, Google AI Studio, and NotebookLM, all pointing toward significant upgrades in AI capabilities. Leaks hint at a new version of Gemini Ultra, Google's high-end AI model. The enhanced version may come with additional features, possibly tied to a premium subscription. Gemini Ultra is expected to offer advanced natural language processing, coding support, and generative content capabilities.

Google may also showcase Project Astra, an ambitious initiative focused on developing AI "agents" with real-time multimodal understanding — able to interpret text, images, video, and audio simultaneously. Another project gaining attention is Project Mariner, which aims to enable AI agents to navigate the web and perform actions on behalf of users. Clues about this feature surfaced in code found in Google AI Studio, mentioning something called "Computer Use."

Android XR: Google's Leap into Extended Reality

In a bid to join the mixed reality race, Google is set to reveal Android XR, an operating system developed in collaboration with Samsung. Designed for use with headsets and smart glasses, including Samsung's upcoming Project Moohan, Android XR aims to fuse virtual, augmented, and mixed reality experiences. What sets Android XR apart is its deep integration with Gemini AI, giving devices enhanced contextual understanding and user interaction capabilities. Google is also positioning XR as a universal platform for other manufacturers, potentially allowing third-party headsets to leverage the OS — placing it in direct competition with Apple Vision Pro and Meta Quest.
As Google gears up for one of its most wide-ranging I/O conferences to date, expectations are sky-high. With innovations across mobile OS, AI infrastructure, health data, and mixed reality, Google I/O 2025 is shaping up to be a pivotal moment for the tech giant's ecosystem evolution.
Yahoo
11-05-2025
- Business
- Yahoo
Google I/O 2025: What to expect, including updates to Gemini and Android 16
Google I/O, Google's biggest developer conference of the year, is nearly upon us. Scheduled for May 20 to 21 at the Shoreline Amphitheatre in Mountain View, I/O will showcase product announcements from across Google's portfolio. Expect plenty of news relating to Android, Chrome, Google Search, YouTube, and — of course — Google's AI-powered chatbot, Gemini. Here's what to expect.

AI is the tech du jour, and Google, like its rivals, has been investing heavily in it. A shoo-in for I/O is a new addition (or several) to Google's flagship Gemini family of AI models. Leaks over the past few weeks suggest that an updated Gemini Ultra model is on the way, Gemini Ultra being Google's top-of-the-line Gemini offering. With this upgraded Gemini Ultra may come a pricier Gemini subscription. Google offers a single premium tier, Gemini Advanced ($20 per month), to unlock additional capabilities in its Gemini chatbot, which is powered by the company's Gemini models. But Google may soon launch two new plans, Premium Plus and Premium Pro. It's not yet clear what benefits might be attached or how these plans might be priced relative to Gemini Advanced.

Google will almost certainly talk about Astra, its wide-ranging effort to build AI apps and "agents" for real-time, multimodal understanding. Also probably on the agenda is Project Mariner, Google's AI "agents" that can navigate and take action across the web on a user's behalf. Folks on X spotted references to "Computer Use" in the code for Google's AI Studio developer platform, which could well pertain to Mariner.

For the first time this year, Google is hosting a separate event dedicated to Android updates: The Android Show. It'll take place on Tuesday, roughly a week ahead of I/O. The latest version of Android, Android 16, will be the focus. Android 16 is expected to bring with it improved notifications and an entirely new design language, Material 3 Expressive. In a leaked blog post, Google describes Material 3 Expressive as a top-to-bottom overhaul, with greater responsiveness and "action elements" that pop. Android 16 is mostly a quality-of-life update, judging by reports. It'll introduce support for Auracast, which should make it easier to switch between Bluetooth devices. Also in tow are lock screen widgets and a range of new accessibility features. Google may also spotlight capabilities in the latest versions of Android XR, its mixed reality operating system, and Wear OS, the company's software for wearables.

Going by the official I/O schedule, Google will have plenty to discuss following The Android Show and I/O keynote addresses. The schedule lists sessions dedicated to Chrome and Google Cloud, Google Play (the Android app store), Android development tools, and Gemma, Google's collection of "open" AI models. Last year, Google unveiled a few AI-themed surprises at I/O, including a set of models fine-tuned for education applications called LearnLM. An upgrade to Google's viral podcast-generating NotebookLM could be one such surprise. Leaked code reveals a "Video Overviews" tool that presumably would create video summaries, most likely leveraging Google's Veo 2 video-generating model.

This article originally appeared on TechCrunch.


India Today
11-05-2025
- India Today
Google IO 2025 to kick off on May 20: Android 16, Gemini and more to expect
Google's annual developer conference -- Google I/O 2025 -- is all set to kick off on May 20. The two-day event will unveil several new software updates. Google has already shared that the event will focus on four areas: Android, AI, web and Cloud. This means that the stage is set to introduce Android 16 and new features for Gemini AI. While announcing the dates, Google noted that 'you'll learn more about Google's newest products, technologies and innovations in AI.' The event will be 'open to everyone online' and will include 'livestreamed keynotes and sessions.' Similar to the previous events, there will also be a physical gathering at the Shoreline Amphitheatre in Mountain View.

Google I/O 2025 event: What to expect

-- Android 16: Android 16 has been generating considerable buzz in recent weeks, and it appears this upcoming event may finally clear up the speculation. According to reports, the new operating system might feature redesigned volume controls, a more refined user interface, and upgrades to accessibility tools. It's also anticipated to include support for health records, a more dynamic refresh rate, and further improvements to privacy and security. Based on the rumours, Android 16 will introduce an Advanced Protection Mode. This new security feature is designed to offer users an added layer of protection by tightening the device's security settings. When activated, it automatically implements a series of measures to enhance security – particularly beneficial for individuals who may not routinely explore or adjust advanced security options themselves.

With Android 16, Google is set to enhance its Health Connect platform through the introduction of Health Connect 2.0 as well. This updated version will support the sharing of medical records in the FHIR format — the standard widely adopted by healthcare professionals. To maintain user privacy, apps will only be able to access or modify health data if the user provides clear and specific consent. The update will also bring in new features such as tracking the intensity of physical activity, categorising workouts as either moderate or vigorous, in line with World Health Organization guidelines. What more to expect from Android 16? Read the 10 speculated features of Android 16.

-- Gemini and other AI updates: Another key announcement is expected in the AI space. The I/O homepage includes a 'Start building today' segment that spotlights the Gemini open model, Google AI Studio, and NotebookLM — suggesting these tools are likely to feature in the day's key announcements. Recent leaks suggest that an enhanced version of the Gemini Ultra model is on the horizon, with Gemini Ultra being Google's premium offering in the Gemini lineup. This upgraded model may be accompanied by a more expensive Gemini subscription. It's highly likely that Google will discuss Astra, its ambitious project aimed at developing AI applications and "agents" for real-time, multimodal comprehension. Also expected to feature is Project Mariner, which involves AI "agents" that can navigate the web and take actions on behalf of users. Users on X have spotted references to "Computer Use" in the code for Google's AI Studio platform, which could be related to Mariner.

-- Android XR: Android XR, the extended reality operating system created by Google and Samsung, is expected to be unveiled at the Google I/O 2025 event. The platform aims to integrate virtual, augmented, and mixed-reality experiences for headsets (including Samsung's Project Moohan) and smart glasses, with Google's Gemini AI at its core.
Google is hoping Android XR will be capable of powering specialised headsets from other tech companies as well, positioning itself to compete with Apple and Meta in the mixed-reality market.