
Latest news with #AlphaEvolve

Big in big tech: AI agents now code alongside developers

Time of India

25-05-2025


Big Tech is doubling down on AI-powered coding agents—intelligent tools that go beyond assisting developers to actively collaborating with them. This week, Microsoft, Google, and OpenAI rolled out major upgrades that mark a shift in how software is built. These agents don't just generate code—they fix bugs, add features, and increasingly understand developer intent. The result? Compressed timelines, reduced manual grunt work, and the beginning of a fundamental shift in how programming teams function.

Investors see software development as a high-fit application for agentic AI, or autonomous agents that can plan, execute, and self-correct across tasks. Coding, they believe, may be the killer use case.

The week's biggest announcements:

Microsoft: At its Build developer conference, Microsoft unveiled a new GitHub Copilot agent—a more proactive version of the AI tool that can now autonomously fix bugs and implement features. Instead of simply suggesting code snippets, the agent understands goals and acts on them.

OpenAI: A week earlier, OpenAI introduced an upgraded version of its coding model Codex. The new agent is designed to handle multiple programming tasks in parallel—bringing multitasking capabilities to code generation.

Google DeepMind: Released AlphaEvolve, an advanced coding agent capable of tackling mathematical and computational problems. The system doesn't just generate code—it validates solutions using automated evaluators, reducing errors and hallucinations.

Why this matters

Coding appears to be the breakout application for agentic AI. Unlike creative writing or visual generation, software can be tested immediately—a program either runs or it doesn't. This gives developers a clear feedback loop, allowing faster refinement. However, these tools still struggle with subtle logic errors and hallucinations. As they generate more code, the risk of flawed output also grows. Still, the productivity gains are substantial.

The shift is global

AI now writes a third of Microsoft and Google's code, according to the companies. Indian startups are following suit. As reported by ET in April, AI agents are generating between 40% and 80% of code at some early- and growth-stage companies, using tools like ChatGPT, Claude, and Gemini. From prototypes to production systems, AI-written code is speeding up delivery cycles and changing how software teams operate—possibly forever.
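The feedback loop described under "Why this matters" is straightforward to picture in code. The sketch below is a hypothetical illustration, not any vendor's actual agent: `propose_patch` and `apply_patch` stand in for a model call and workspace tooling, and the project's test suite supplies the pass/fail signal.

```python
import subprocess

def run_tests() -> bool:
    """Run the project's test suite; the exit code is the feedback signal."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0

def agent_loop(task: str, propose_patch, apply_patch, max_attempts: int = 5) -> bool:
    """Ask a model for a patch, apply it, and accept it only if the tests pass."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        patch = propose_patch(task, feedback)  # hypothetical LLM call
        apply_patch(patch)                     # hypothetical workspace tooling
        if run_tests():
            return True                        # tests pass: change accepted
        feedback = f"attempt {attempt} failed the test suite"
    return False                               # give up after max_attempts
```

Real agents add planning, sandboxing, and review steps, but the accept-only-if-tests-pass structure is the core of the feedback loop the article describes.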

Google DeepMind CEO Demis Hassabis disagrees with company's co-founder Sergey Brin on this one thing: 'We thought it was...'

Time of India

25-05-2025


Left: Google DeepMind CEO Demis Hassabis; Right: Google co-founder Sergey Brin

Google DeepMind CEO Demis Hassabis holds a more cautious outlook on the arrival of artificial general intelligence (AGI) than the Alphabet-owned company's co-founder Sergey Brin. Currently, AGI's definition is contested, with some focusing on human-level competence across all domains and others on an AI's capacity to learn, adapt and produce autonomous outputs beyond its training data.

Despite both having access to similar data and insights into AI development, Hassabis' perspective differs from Brin's. In a recent conversation on the New York Times' Hard Fork podcast, it was noted that Brin expects AGI to arrive before 2030, while Hassabis has predicted that it will happen just after 2030. This difference in forecasts raises questions about how these Google executives may be drawing different conclusions from the same information. Hassabis also stated that he is sticking to a timeline he has maintained since DeepMind was founded in 2010.

What Demis Hassabis has predicted about the arrival of AGI

Speaking on the NYT podcast, Hassabis said: "We thought it was roughly a 20-year mission, and amazingly, we're on track. It's somewhere around there, I would think." The prediction came after Brin jokingly accused Hassabis of "sandbagging", which is intentionally downplaying timelines to later overdeliver. However, during the interview, Hassabis stood by his reasoning, pointing to the complexity of defining AGI itself.

"I have quite a high bar. It should be able to do all of the things that the human brain can do, even theoretically. And so that's a higher bar than, say, what the typical individual human could do, which is obviously very economically important," Hassabis noted.

When asked whether AGI would emerge through gradual improvements or sudden breakthroughs, Hassabis said both approaches are "likely necessary." "We push unbelievably hard on the scaling," he explained, while also funding "blue sky" research such as AlphaEvolve.

Last year, Anthropic CEO Dario Amodei predicted that AGI could arrive by 2026 or 2027, though he warned that unforeseen factors might delay its development. Other industry leaders share similar optimism: OpenAI CEO Sam Altman has suggested AGI could materialise during Trump's presidency, and Ark Invest's Cathie Wood has argued it could become a major engine of economic growth.

Alpha Evolve: The Self-Improving AI That's Breaking Boundaries

Geeky Gadgets

21-05-2025


What if machines could not only learn but also teach themselves to become better with each iteration? This isn't the plot of a sci-fi movie—it's the reality unfolding in artificial intelligence research. Systems like Google DeepMind's Alpha Evolve are pioneering a new frontier in AI: recursive self-improvement, where machines refine their own capabilities without constant human intervention. From breaking decades-old computational records to optimizing global data centers, Alpha Evolve is proving that AI can not only solve problems but also reinvent the way solutions are created. Yet, as promising as this sounds, it raises a critical question: how far can machines go in self-improvement before they outpace human oversight?

AI Explained explores how Alpha Evolve's iterative learning process is reshaping fields like computational mathematics, hardware design, and energy efficiency. You'll discover how this system blends human ingenuity with machine precision to tackle challenges once thought insurmountable. But it's not all smooth sailing—Alpha Evolve's reliance on human-defined goals and its inability to independently identify new problems highlight the limits of today's AI. As we unpack the breakthroughs, limitations, and ethical considerations surrounding recursive AI systems, one thing becomes clear: the journey toward self-improving machines is as complex as it is promising.

How Alpha Evolve Works

Alpha Evolve operates through a recursive process of code refinement, which begins with human-submitted problems and predefined evaluation metrics. The system employs a combination of smaller, faster models like Gemini Flash and more advanced systems such as Gemini Pro. These models collaborate to optimize performance while maintaining computational efficiency, making sure that resources are used effectively.

A defining feature of Alpha Evolve is its evolutionary database, which stores successful prompts and solutions. This database allows the system to learn from past iterations, adapt to new challenges, and continuously improve its capabilities. By combining human creativity with machine-driven precision, Alpha Evolve bridges the gap between human ingenuity and computational power, creating a synergy that enhances problem-solving potential.
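A minimal sketch of the loop described above, with hypothetical `evaluate` and `mutate` callables standing in for Alpha Evolve's automated evaluators and Gemini-based code rewriting; this illustrates the evolutionary-database idea rather than DeepMind's actual implementation.

```python
import random

def evolve(initial_program: str, evaluate, mutate, generations: int = 100,
           population_size: int = 20) -> str:
    """Evolutionary code refinement: keep a database of scored candidates,
    sample promising ones, ask a model to propose edits, and keep what scores better.

    `evaluate` returns a numeric score for a candidate program (higher is better);
    `mutate` stands in for an LLM call that rewrites a parent program.
    """
    # The "evolutionary database": candidate programs with their evaluator scores.
    database = [(evaluate(initial_program), initial_program)]

    for _ in range(generations):
        # Tournament selection: sample a few candidates, keep the best-scoring one.
        _, parent = max(random.sample(database, k=min(3, len(database))))
        child = mutate(parent)              # model proposes a modified program
        child_score = evaluate(child)       # automated evaluator grades it
        database.append((child_score, child))
        # Keep the database bounded, discarding the weakest candidates.
        database = sorted(database, reverse=True)[:population_size]

    return max(database)[1]                 # best program found
```

Tournament selection and the capped population are arbitrary choices here; the essential point is that an automated evaluator, not a human, closes the loop and decides which candidates survive.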
Key Achievements

Alpha Evolve has already demonstrated its potential through several notable accomplishments:

• Advancing Computational Research: The system achieved a record-breaking tensor decomposition for matrix multiplication, surpassing a 50-year-old algorithm. This breakthrough highlights its ability to push the boundaries of mathematical and computational research.
• Optimizing Data Centers: By improving Google's data center operations, Alpha Evolve recovered 0.7% of global compute resources. This optimization translates into substantial energy savings and increased efficiency across Google's infrastructure.
• Accelerating AI Development: The system contributed to the design of next-generation tensor processing units (TPUs), reducing training times for AI models and allowing faster development cycles.

These achievements underscore Alpha Evolve's capacity to drive innovation across diverse fields, from computational mathematics to industrial optimization.

Limitations and Challenges

Despite its impressive accomplishments, Alpha Evolve is not without limitations. Its reliance on human-defined problems and evaluation metrics restricts its autonomy, as it cannot independently identify or define new challenges. In fields like natural sciences, where physical experiments are often required, the system's applicability remains limited. Additionally, while Alpha Evolve excels at optimizing existing processes, it lacks the ability to create entirely new systems or operate without human oversight.

These constraints emphasize the ongoing necessity of human involvement in AI development. Clear problem definitions and robust evaluation metrics are essential to maximize the system's effectiveness and ensure its outputs align with intended goals. Addressing these challenges will be critical to unlocking Alpha Evolve's full potential.

Future Directions for Alpha Evolve

Several areas of improvement could significantly enhance Alpha Evolve's capabilities and broaden its impact:

• Expanding Context Windows: Increasing the size of the evolutionary database to accommodate larger context windows—potentially up to 10 million tokens—would enable the system to tackle more complex and nuanced problems.
• Integrating Advanced Models: Incorporating next-generation LLMs, such as Gemini 3, could improve performance, versatility, and adaptability across a wider range of applications.
• Optimizing Search Algorithms: Refining the program generation processes could lead to faster and more accurate results, enhancing the system's efficiency.
• Improving Evaluation Metrics: Developing more sophisticated and domain-specific metrics would allow Alpha Evolve to address a broader spectrum of applications, from scientific research to industrial optimization.

These advancements would not only enhance Alpha Evolve's functionality but also expand its potential to influence various industries and scientific disciplines.

Broader Implications

Alpha Evolve's recursive approach to self-improvement has far-reaching implications for science and technology. By automating the refinement of solutions, it demonstrates how AI can drive innovation in areas such as computational mathematics, hardware design, and energy efficiency. Its success also highlights the growing importance of interpretability, debugability, and predictability in mission-critical AI systems, making sure that outputs are reliable and aligned with human objectives. This development reflects a broader shift in AI research priorities.
Traditional reinforcement learning methods are increasingly being complemented by iterative improvement approaches that emphasize adaptability and precision. This trend suggests a new direction for AI development, one that prioritizes continuous refinement over static optimization, paving the way for more dynamic and responsive systems.

Ethical and Competitive Considerations

Google DeepMind's commitment to ethical AI development is evident in its focus on applications that benefit humanity. By explicitly opposing the use of AI in warfare, the organization sets a standard for responsible innovation. However, the rapid pace of AI advancements raises critical questions about oversight, accountability, and equitable access to these technologies.

As systems like Alpha Evolve become more capable, balancing innovation with ethical considerations will be essential. Collaboration between researchers, policymakers, and industry leaders will play a pivotal role in making sure that AI development aligns with societal values and priorities. Establishing clear guidelines and frameworks for responsible AI use will be crucial to navigating the challenges posed by increasingly autonomous systems.

The Path Forward

Alpha Evolve exemplifies the potential of recursive AI systems. Through iterative self-improvement, it has achieved breakthroughs in computational efficiency, hardware design, and applied sciences. While challenges remain, its development represents a significant step toward the realization of artificial general intelligence (AGI). As AI continues to evolve, systems like Alpha Evolve will shape the future of technology and its impact on society, offering new possibilities for innovation, progress, and the betterment of humanity.

Media Credit: AI Explained

'Unless You're Sure It Won't Kill You, Worry About It,' Says AI Godfather Geoffrey Hinton, Warning Of Superintelligence Risks

Yahoo

20-05-2025


"Unless you're sure it won't kill you, worry about it," said Geoffrey Hinton, the "godfather of AI," during an April 26 interview with CBS News, warning that unless humanity can be completely certain AI won't someday turn against it, it should be very worried. He warned that superintelligent AI may arrive sooner than most people expect — and once it does, it could become impossible to control. The physicist, who shares the 2024 Nobel Prize with John J. Hopfield, compared humanity's current approach to AI to raising a tiger cub, unaware it might maul its owner when it grows up. Don't Miss: 'Scrolling To UBI' — Deloitte's #1 fastest-growing software company allows users to earn money on their phones. Hasbro, MGM, and Skechers trust this AI marketing firm — Hinton told CBS News he now believes there's a "good chance" superintelligent AI will exist in 10 years or less — a much shorter timeline than the five to 20 years he projected just last year. He estimated there's a 10% to 20% chance that these systems could eventually seize control. One key factor fueling Hinton's concern is the rapid rise of autonomous AI agents — not just chat bots but systems that can complete complex tasks without human input. Google DeepMind recently launched AlphaEvolve, an AI that independently developed a matrix algorithm that outperforms the 1969 Strassen algorithm in several use cases. Trending: Maker of the $60,000 foldable home has 3 factory buildings, 600+ houses built, and big plans to solve housing — The impact of these agents is already being felt. Hinton pointed to new systems that not only respond to prompts but can navigate websites, schedule tasks, and book tickets on their own. Microsoft (NASDAQ:MSFT), for instance, in March unveiled how AI agents are transforming office productivity. Likewise, Hugging Face released an open-source Open Computer Agent in April that simulates user interactions — booking tickets, filling out forms, and navigating sites in real-time. According to Hinton, these advances signal that "things have got, if anything, scarier than they were before." "They're all after the next shiny thing," said Hinton, warning that the race among tech giants and nations to build the most powerful AI makes it "very, very unlikely" that society will be able to avoid creating superintelligence. "The issue," he added, "is can we design it in such a way that it never wants to take control that it's always benevolent."The international AI arms race is intensifying. A March report from Stanford University''s AI Index showed that global AI investment surged past $350 billion last year, with China, the U.S., and the EU ramping up funding for both civilian and military applications. Hinton also voiced frustration with Big Tech's shifting ethics. He said he was "very disappointed" in Google — where he worked for over a decade — for reversing its ban on military use of AI. In February, Google updated its AI principles to remove restrictions on developing systems for weapons and surveillance — a move that stirred concern among researchers and watchdogs. "I wouldn't be happy working for any of them today," Hinton added. Read Next: Invest where it hurts — and help millions heal:. Inspired by Uber and Airbnb – Deloitte's fastest-growing software company is transforming 7 billion smartphones into income-generating assets – Image: Midjourney UNLOCKED: 5 NEW TRADES EVERY WEEK. Click now to get top trade ideas daily, plus unlimited access to cutting-edge tools and strategies to gain an edge in the markets. 
The impact of these agents is already being felt. Hinton pointed to new systems that not only respond to prompts but can navigate websites, schedule tasks, and book tickets on their own. Microsoft (NASDAQ:MSFT), for instance, in March showed how AI agents are transforming office productivity. Likewise, Hugging Face released an open-source Open Computer Agent in April that simulates user interactions — booking tickets, filling out forms, and navigating sites in real time. According to Hinton, these advances signal that "things have got, if anything, scarier than they were before."

"They're all after the next shiny thing," said Hinton, warning that the race among tech giants and nations to build the most powerful AI makes it "very, very unlikely" that society will be able to avoid creating superintelligence. "The issue," he added, "is can we design it in such a way that it never wants to take control, that it's always benevolent."

The international AI arms race is intensifying. A March report from Stanford University's AI Index showed that global AI investment surged past $350 billion last year, with China, the U.S., and the EU ramping up funding for both civilian and military applications.

Hinton also voiced frustration with Big Tech's shifting ethics. He said he was "very disappointed" in Google — where he worked for over a decade — for reversing its ban on military use of AI. In February, Google updated its AI principles to remove restrictions on developing systems for weapons and surveillance — a move that stirred concern among researchers and watchdogs. "I wouldn't be happy working for any of them today," Hinton added.

GitHub launches new AI coding agent that fixes bugs

Economic Times

20-05-2025


Developer platform GitHub has unveiled an artificial intelligence (AI) coding agent embedded directly into its AI tool, GitHub Copilot. Users can assign the agent tasks, and it can fix bugs and add features on a developer's behalf.

Copilot helps developers write code—it suggests lines of code or even whole functions while typing. In its latest upgrade, the agent will be more active, like a mini-assistant, for the developer. Instead of passively suggesting code, the agent will understand and act on goals.

The agent will be built into Copilot itself, meaning developers don't have to install anything extra, and it will be a part of their workflow.

"Using state-of-the-art models, the agent excels at low-to-medium complexity tasks in well-tested codebases – adding features, fixing bugs, extending tests, refactoring code, improving documentation. It's all about keeping you in the magical flow state," CEO Thomas Dohmke said in a post.

Developers can tell the agent to fix code, implement a search feature, or carry out autonomous actions like fixing bugs by identifying broken logic and understanding context across files. This makes GitHub Copilot more than just a coding assistant, and more like a coding collaborator.

"Built around an integrated, secure and fully customizable development environment powered by GitHub Actions, the Copilot coding agent is amplifying human developers with trust by design," Dohmke said.

The launch comes days after Google DeepMind unveiled AlphaEvolve, an AI coding agent backed by its Gemini models. However, AlphaEvolve is designed for performance on algorithmic challenges, and is more research-grade and specialised. GitHub's agent, on the other hand, is designed to work alongside developers in real-world projects.
