Latest news with #ProjectAstra


Indian Express
4 days ago
- Business
'AI won't make us lazy, it'll make us smarter': Google DeepMind CEO on learning and future of coding
Demis Hassabis, CEO of Google DeepMind, firmly believes that AI is going to transform education, coding and even drug discovery. In a recent podcast interview with Rowan Cheung, founder of The Rundown AI, Hassabis spoke about Google's biggest announcements, the AI-as-a-companion conundrum, and how the next decade of technology will take shape given the rapid advancements in AI. The CEO asserted that AI is here to make us smarter.

Last week, Google unveiled a plethora of AI applications at Google I/O 2025. The search giant, which is moving briskly on AI, showed a range of possibilities, from its new AI Mode in Search to its universal AI assistant, Project Astra.

Asked about the things that most excited him from Google I/O 2025, Hassabis said, 'If I had to pick a top three: Gemini 2.5 Pro DeepThink is a super frontier model on reasoning… Veo 3 is the first time we've combined audio and video… And Flash is probably going to surprise a lot of people.' Hassabis placed Gemini 2.5 Pro DeepThink at the top of the list, terming it a 'super frontier model' on reasoning. He called Veo 3 the most advanced video generation model yet: 'It's the first time we've combined audio and video together—and we've made big strides in improving video quality.' He rounded out his top picks by describing Gemini Flash as a faster, lightweight model for mobile and embedded devices, and also mentioned Gemini Diffusion, a research milestone in generation speed.

Perhaps the major highlight at Google I/O was Project Astra, an evolving AI assistant designed to be proactive, multimodal and operational across phones and wearable devices. Proactivity, however, comes with challenges. 'You want something to be helpful, not annoying,' Hassabis explained. 'It's a complex research problem, understanding when you're busy, whether you're speaking to the assistant or a human, even your physical context.' According to him, getting this right is critical for the universal assistant vision, especially as Google works towards memory-sharing across devices. 'That's firmly on the roadmap in the following months,' he confirmed.

With big tech companies and AI startups working to make AI more personalised, the way people interact with these systems is bound to change. The Nobel laureate acknowledged that users are likely to form bonds with their AI assistants. 'It's clear users want systems that know them well, understand their preferences, and carry on conversations from yesterday. But we'll also have to think about things like upgrades, especially after people spend time training their assistant.' He said assistants could become indispensable not just for casual users but in professional workflows.

When asked whether overreliance on AI tools makes users lazier or dumber, the CEO said he does not think of it that way. 'It's about teaching the next generation how to make the best use of these tools. They're already part of education, so let's embrace it and use it for better learning.' Hassabis is particularly optimistic about the potential of AI in education, especially through Google's LearnLM initiative. He told the host that with LearnLM, students could create flashcards on the fly, get suggestions for YouTube videos tailored to what they are struggling with, and even identify gaps in their understanding.
When asked what advice he would give educators on tailoring curricula around AI, not as a replacement but as a tool, Hassabis reasoned that curricula need to evolve rapidly. 'Personalised learning where a student learns in class and continues at home with an AI tutor could be incredibly powerful.' He views AI as a tool to democratise education globally: 'You could bring much higher-quality learning to poorer parts of the world that don't have good education systems.'

During the interview, Cheung noted that one area where AI is already having a dramatic impact is software development, especially with the emergence of tools like Jules and the rise of vibe coding, in which AI writes most of the code. In that scenario, Hassabis was asked, what makes for a good developer? The Google DeepMind executive responded, 'I think the next era will be a creative one… Top engineers will be 10x more productive because they'll understand what the AI is doing and give better instructions. And hobbyists will get access to powerful tools previously out of reach.' Hassabis went on to predict that natural language could become the next programming language. 'When I started, I was coding in assembly. Then came C, Python… Now, natural language might be the final step.'

On a similar tangent: if coding becomes easy, how will startups stay competitive? Hassabis said the competitive edge could come from 'distribution, execution speed, or deep vertical integration with specialist data.' He also believes hybrid AI systems will rise in importance, pointing to AlphaFold, the AI model that combines deep learning with biology and physics.

In a segment on 60 Minutes, the CEO claimed that AI may help cure all diseases in the next decade. Asked what he meant, he clarified that AI could design hundreds of potential drugs; regulatory approval will still take time, but the possibilities are real. He explained that once early AI-designed drugs are validated and back-tested for safety and efficacy, regulations might evolve to trust AI predictions more. 'We've done it before,' he said, referencing AlphaFold. 'Mapping one protein used to take a PhD student five years. AlphaFold mapped 200 million in a single year. That's a billion years of PhD time saved.'

In this short conversation, the Google DeepMind boss made it clear that AI is not just changing software; it is redefining how we learn, work and treat disease. The 48-year-old British scientist, a former chess prodigy, was knighted for his services to AI. In 2024, Hassabis and John Jumper won the Nobel Prize in Chemistry for their work on AlphaFold, an AI system that predicts 3D protein structures. He co-founded DeepMind in 2010 with a mission to build Artificial General Intelligence (AGI). While most people perceive AGI as smarter than humans, Hassabis defines it as systems that can do anything the human brain can do.


Forbes
6 days ago
- Business
Business Tech News: Google Rolls Out A Bunch Of AI Tech At Its I/O Conference
Here are five things in business tech news that happened this week and how they affect your business. Did you miss them?

Google's I/O 2025 keynote focused heavily on AI advancements, particularly with its Gemini AI models. The company showcased AI Mode in Google Search, which allows users to ask complex queries and receive AI-generated summaries. Google also introduced Project Astra, an AI assistant designed to handle everyday tasks like finding information in emails and making calls. Additionally, Google unveiled Android XR glasses, which provide augmented reality features, including real-time language translation and AI-powered assistance. Other highlights were Google Beam, a new AI-driven video communication platform, and Flow, an AI-powered filmmaking tool that integrates Google's advanced video and image generation models. (Source: Google)

It's worth reading the entire post referenced above: Google has announced so many different AI-leveraged technologies that the only way to see what will impact your business is to dig into the details. Filmmakers and creators especially will be impressed. But so will small business owners and consumers, who will find this technology offers many ways to improve productivity, from writing emails to getting search results faster and more accurately.

Uber Freight is making a major push into AI-driven logistics, aiming to streamline supply chain operations with its Insights AI platform. The company has developed over 30 AI agents designed to automate key logistics tasks throughout the freight lifecycle. Insights AI – which was quietly launched in 2023 – helps shippers analyze vast amounts of data, uncover hidden opportunities, and improve decision-making. Uber Freight is betting that its AI solutions will provide immediate benefits to both large enterprise customers and the nearly 10,000 shippers it works with. Additionally, Uber Freight has launched the industry's first scaled AI logistics network, powered by a proprietary logistics-specific large language model (LLM). This AI system integrates directly into Uber Freight's transportation management system (TMS), offering real-time intelligence and automation across the freight lifecycle. (Source: TechCrunch)

The logistics industry has been one of the leaders in AI technology, and it's understandable why: the industry relies on data from many different systems to schedule, track and deliver packages as quickly and affordably as possible. It's a scenario ripe for AI tools that help users compare scenarios and pick the option most suitable for their situation. Uber Freight is an example of a larger company that developed applications for internal use and is now commercializing them by rolling the tools out to the public.

Microsoft has officially transitioned the Microsoft 365 app into the Microsoft 365 Copilot app, integrating AI-powered assistance across web, mobile, and Windows. This update enhances productivity by allowing users to ask questions, create content, draft documents, and build AI agents directly within the app.
Microsoft also unveiled Copilot Tuning, which allows users to build AI models that work with their company's specific data and processes. Copilot Chat is available at no additional cost for Microsoft 365 license holders and those with a Microsoft 365 Copilot license. For personal accounts, Copilot Chat is accessible to Microsoft 365 Personal and Family subscribers, but not to users without these subscriptions. (Source: Engadget)

There's a massive change happening in how Office products are used, and these are the initial steps towards it. Instead of launching applications within Office (and similar platforms), we'll use interfaces similar to ChatGPT, where users explain what they're trying to do ('write a proposal' or 'do an analysis' or 'create a presentation') and the right applications will be chosen for us, running behind the scenes, taking requests from users and then creating results.

Amazon's 2024 Small Business Empowerment Report highlights the significant impact independent sellers have had on the platform. Over the past 25 years, small businesses selling through Amazon have generated more than $2.5 trillion in sales and now account for over 60 percent of total sales on the site. In 2024 alone, U.S.-based independent sellers averaged $290,000 in annual sales, with over 55,000 sellers surpassing $1 million in revenue. These businesses have also created over 2 million jobs across the U.S., marking an 11 percent year-over-year increase in employment. Amazon attributes this success to its Fulfillment by Amazon (FBA) program, which has shipped over 80 billion items since its launch in 2006. The report also highlights the growing adoption of AI-powered tools, such as Amazon's Seller Assistant and generative AI listing enhancements, which help small businesses optimize their operations. (Source: Amazon)

Like all corporate reports, this one has an agenda: Amazon aims to prove that, rather than putting small businesses out of business, as is commonly perceived, the platform actually provides huge opportunities for millions of small businesses. I agree with that, and the numbers reported above don't lie. Going forward, look for more data from Amazon showing how its smaller merchants are leveraging AI to sell more products. It's early days, but my expectation is that this usage will grow heavily – and become essential – in the years to come.

Agentic AI is transforming ecommerce by streamlining the shopping experience, reducing friction, and automating purchasing decisions. Instead of traditional browsing, consumers increasingly rely on AI-powered assistants to find, compare, and buy products instantly. This shift is forcing merchants to rethink their strategies, as AI agents can now compress browsing, selection, and checkout into a single conversation. These AI systems integrate directly with merchant catalogs and payment platforms, allowing them to add items to carts and complete transactions autonomously. The technology is also reducing cart abandonment by removing common obstacles like account creation and payment re-entry. (Source: PYMNTS)

And… speaking of Amazon: agentic tools like the ones mentioned above are being deployed by many ecommerce providers. Merchants need to test them, play with them, and lean into them as they become more reliable.

Each week I report on five business tech news stories and how they impact your business and mine.


Tom's Guide
6 days ago
- Business
I'm not handing control of my wallet to an AI — and not even Google's AI shopping features can change that
The internet has absolutely revolutionized the way we shop. Whether you're after groceries, clothes, the best phones or something else entirely, our first instinct is to head online and see where we can pick it up the cheapest. Now, though, it seems like big tech is having another go at this — employing AI to do all that initial searching for you.

Google just showed off a bunch of features related to this during the Google I/O 2025 keynote. And, if I'm being totally honest, I have very mixed feelings about the whole thing, especially since it's not entirely clear whether I will always get total control over how my money is being spent.

The idea of buying with AI, or some approximation of it, is nothing new. One of the reasons Amazon created Alexa was to give customers the option to purchase items with their voice, rather than using an app or website — with particular emphasis on buying from Amazon itself. Shopping with Alexa has evolved a lot over the years, and the core feature is still around: you can ask Alexa to place something in your basket and order it for you without ever looking at what it is. That has never sat right with me.

When I'm shopping I tend to do a lot of looking around. If I want something specific, that means checking different retailers, or at the very least using Google to see who has what. If it's a generic product, I'll browse the different options to compare price, features, materials and everything else on offer. Odds are I'm not going to pick the first listing I see, or even the second or third. I need to find the right thing for me, and it may not be the most obvious or even the cheapest option.

The whole shopping process is about weighing up what I need, how much I have to pay, and when it needs to arrive, among other things. These are calculations happening in my mind as I browse, rather than something I can fully articulate off the top of my head. It's why I could never fully trust Alexa to pick those items for me, even if I'm the one who verifies my basket before any money is handed over. And that's not likely to change anytime soon, even if a different AI is involved in the process.

The Google I/O keynote was filled to the brim with AI news and previews, to the point where it was actually quite difficult to keep up with everything Google had on the table. But two sections immediately stood out to me, and both of them were shopping related.

In the Project Astra demo, where Google's AI attempts to help fix a bike, Gemini calls a local bike shop and places an order for a new tension screw. While Gemini never acts independently, the one thing I noticed was that it placed a pick-up order without actually telling the user how much it's all going to cost. A tension screw for a bike is not going to be expensive, even from a small independent store that can't cut costs the way Amazon does. But the fact that Google did all that without divulging key information is slightly concerning. Sure, it's on the user to actually ask those questions, but Gemini shouldn't need to be prompted to tell you all the important details.

But that's at odds with later demos at I/O, where Google showed how Search's new AI Mode can make the act of shopping online easier. The short version is that, using Gemini and LLMs, Google can now do the research for you and help you find the kind of stuff you may want to buy, without you needing to use very specific keywords.
But it was also made clear that this is all controlled by you: no matter how much research Google's AI does, the actual act of making the purchase is entirely down to you, rather than being a blind purchase.

Obviously these are different systems that approach shopping in very different ways. But you'd think there would be some level of consistency between the two, especially since these demos were not happening live.

I'm not a big fan of AI, and I've made my feelings very clear on that in the past. Features either feel too gimmicky to be of any use, or don't actually save me much in the way of effort. On top of that, I'll always remain skeptical of Google's AI Search features, given how poor the AI Overviews have been since they first started rolling out.

The new AI Mode shopping features seem rather interesting, and the ability to ask AI for recommendations on what to look for could be useful, assuming it's able to do everything Google says it can. But no matter what happens, I absolutely will not be handing the decision-making process over to a machine, and I sure as heck won't be letting it buy stuff for me.

Business Insider
24-05-2025
- Business
Google has a massive mobile opportunity, and it's partly thanks to Apple
Google's phones, tablets, and, yes, XR glasses are all about to be supercharged by AI. Google needs to seize this moment. Bank of America analysts this week even called Google's slew of new AI announcements a "Trojan horse" for its device business.

For years, Apple's iOS and Google's Android have battled it out. Apple leads in US phone sales, though it still trails Android globally. The two have also gradually converged; iOS has become more customizable, while Android has become cleaner and easier to use. As hardware upgrades have slowed in recent years, the focus has shifted to the smarts inside the device.

That could be a big problem for Apple. Its AI rollouts have proven lackluster with users, while more enticing promised features have been delayed. The company is reportedly trying to rebuild Siri entirely using large language models. Right now, it's still behind Google and OpenAI, and that gap continues to widen.

During Google's I/O conference this week, the search giant bombarded us with new AI features. Perhaps the best example was a particularly grabby demo of Google's "Project Astra" assistant helping someone fix their bike by searching through the bike manual, pulling up a YouTube video, and calling a bike shop to see if certain supplies were in stock. It was, of course, a highly polished promotional video, but it made Siri look generations behind.

"It has long been the case that the best way to bring products to the consumer market is via devices, and that seems truer than ever," wrote Ben Thompson, analyst and Stratechery author, in an I/O dispatch this week. "Android is probably going to be the most important canvas for shipping a lot of these capabilities," he added.

Google's golden opportunity

Apple has done a good job of locking users into its ecosystem with iMessage blue bubbles, features like FaceTime, and peripherals like the Apple Watch that require an iPhone. Google's Pixel phone line, meanwhile, remains a rounding error in global smartphone shipments. That's less of a problem when Google has huge partners like Samsung that bring its AI features to billions of Android users globally. While iPhone users will get some of these new features through Google's iOS apps, it's clear that the "universal assistant" the company is building will only reach its full potential on Android. Perhaps this could finally get iOS users to make the switch.

"We're seeing diminishing returns on a hardware upgrade cycle, which means we're now really focused on the software upgrade cycle," Bernstein senior analyst Mark Shmulik told Business Insider. Without major changes by Apple, Shmulik said, he sees the gap in capabilities between Android and iOS only widening. "If it widens to the point where someone with an iPhone says, 'Well my phone can't do that,' does it finally cause that switching event from what everyone has always considered this incredible lock-in from Apple?" Shmulik said.

Beyond smartphones

Internally, Google has been preparing for this moment. The company merged its Pixel, Chrome, and Android teams last year to capitalize on the AI opportunity. "We are going to be very fast-moving to not miss this opportunity," Google's Android chief Sameer Samat told BI at last year's I/O. "It's a once-in-a-generation moment to reinvent what phones can do. We are going to seize that moment." A year on, Google appears to be doing just that. Much of what the company demoed this week is either rolling out to devices imminently or in the coming weeks.
Google still faces the challenge that its relationships with partners like Samsung come with the express promise that Google won't give its home-grown devices preferential treatment. So if Google decides to double down on its Pixel phones at the expense of its partners, it could step on a business land mine.

Of course, Google needs to think about more than smartphones. Its renewed bet on XR glasses is a bet on what might be the next-generation computing platform. Meta is already selling its own augmented reality glasses, and Apple is now doubling down on its efforts to get its own smart glasses out by the end of 2026, Bloomberg reported. Google this week demoed glasses with a visual overlay that instantly provides information to the wearer, a feature Meta's glasses lack and one Apple's first version reportedly won't have either.

The success of Meta's glasses so far is no doubt encouraging news for Google as a new era of AI devices is ushered in. Now Google is poised to get ahead by leveraging its AI chops, and Apple might give it the exact opening it has waited more than a decade for. "I don't know about an open goal," said Shmulik of Apple, "but it does feel like they've earned themselves a penalty kick."

Business Insider
23-05-2025
- Business
Here are the 6 biggest takeaways from Google I/O, where the tech giant proved it has real AI momentum
Google made literally 100 announcements at I/O this week, a clear sign that the tech giant intends to dominate every aspect of AI, from its overhaul of Search to its latest AI models and wearables tech. The event was packed and, at times, felt electrifying. Google showed impressive stats about how its AI has taken off. It had plenty of far-out goals, too, like building a universal AI assistant and extended reality glasses that give directions in real time.

I/O also showcased Google's vulnerabilities. Some releases clearly overlapped, while arch-rival OpenAI upstaged Google on Wednesday with a big announcement of its own. With the conference now over, here are six main takeaways.

Google wants a 'total overhaul' of Search

The biggest change touted at I/O was AI Mode — what CEO Sundar Pichai called a "total overhaul" of Google's most iconic feature. In AI Mode, users get a far more conversational Search experience, asking Google questions directly about what they're looking for. That's a marked change from the traditional experience of scanning a long list of links to find the right answer, which feels clunkier than ever in an age of AI chatbots. At the same time, AI features like these could cannibalize Google Search and threaten the tech giant's main cash cow, Google Ads. Google risks failing to figure out how to heavily monetize these AI tools. That said, it's already testing ads in AI Mode.

Gemini everywhere

Google's AI model family, Gemini, took center stage at I/O. Google announced that it will integrate Gemini into Chrome, allowing users to chat with its latest AI models while they browse. (The feature rolls out to subscribers this summer.) It's a shot across the bow of OpenAI's ChatGPT, which already has a popular Chrome extension. Google also announced an array of updates to its Gemini app, which recently passed 400 million monthly active users — an impressive figure, though still behind ChatGPT. With an update called Personal Context, Gemini app users can get tailored responses based on personal data from Google services, like asking its AI to find a long-lost email. It's all part of a long-term plan to build a universal AI assistant: what Google calls Project Astra. While it's still unfinished, that plan feels more fleshed out now than when Business Insider tested Astra a year ago.

Soaring AI traction

New AI features are undeniably cool, but Google's AI traction garnered some of the biggest reactions at Pichai's keynote speech on Tuesday. Onstage, Pichai boasted that the number of tokens Google generates monthly across all its platforms had exploded 50-fold since last year, to over 480 trillion. The crowd gasped — it was a big moment. Last year's I/O felt like a giant teaser for coming AI features, with plenty of promise but little to show for it. This year felt different.

Sergey Brin goes founder mode

There was no greater manifestation of Google tripling down on AI than cofounder Sergey Brin crashing a fireside chat with DeepMind CEO Demis Hassabis. That was after Brin wandered around a pavilion trying on a pair of Google's XR glasses. At the chat, Brin said he goes into the office "pretty much every day now" to work on AI. He also said that retired computer scientists should get back to work to take advantage of the current environment. Brin has been back at Google since 2023 as the search giant races against AI rivals, and it's obvious he's in "founder mode" — something quite rare at a mature company.
Google's smart glasses are here — sort of

Google let BI briefly try on its prototype Android XR glasses, which have Gemini's AI features and allow users to ask questions. While the tech shows promise, it's still early days. Google staffers asked the throngs of I/O attendees lining up for demos not to ask about price, availability, or battery life. "We just don't know!" they said.

The prototype glasses feel impressively lightweight — almost too much so, to the point that they felt like they might fall off our faces. The display sits only on the right lens and is practically invisible unless viewed at just the right angle under the right light. It's full-color, but it's small and subtle enough that you might miss the display entirely. We weren't allowed to view Google Maps or Photos in the glasses as Google showed off in its keynote. Instead, we put on the glasses and walked around a room filled with artwork on the walls and travel catalogs on a table, all of which we could ask Gemini about. While Gemini correctly identified the artwork, it couldn't answer a basic travel query when we looked at the travel catalogs: "What is the cheapest flight to New York next month?" And because the display is only on one side, focusing on it made us feel a bit cross-eyed. The version we saw isn't the final design; it's missing the coming Warby Parker and Gentle Monster flair. Still, we did see glimmers of something promising here.

Throwing everything against the wall may or may not work

Google's announcements are undeniably impressive, but some of them felt repetitive. It's hard to understand the difference between Search Live and Gemini Live, for example; both involve chatting with your phone about what it sees through its camera. Google's strategy of launching literally 100 different things at once could work for the company. It could also signal a lack of focus.

BI was at an I/O panel when the news broke that OpenAI was buying former Apple design chief Jony Ive's hardware startup. Seeing OpenAI upstage Google like that felt a little ominous. The Google panel BI attended was quite dry and technical, with terms like AI-powered "tool calling" mentioned several times. The contrast with OpenAI's buzzy announcement couldn't have been clearer. We even saw several attendees check their phones when the news came out.

Google does have massive advantages in scale and distribution, thanks to Android and Chrome. Still, it's possible that in the long term, something like an AI-native device that ditches Google's ecosystem altogether could eventually take over. Investors got a taste of that risk last month, when the stock of Google's parent company, Alphabet, briefly tanked after Apple senior vice president Eddy Cue said search volume was shrinking due to AI.