
Latest news with #AlexNet

OpenAI co-founder wanted a 'doomsday bunker' for the ChatGPT team, and why CEO Sam Altman is the reason behind it

Time of India

27-05-2025



Former OpenAI chief scientist and co-founder Ilya Sutskever told his research team in 2023 that the company would need to build a protective bunker, often described as a 'doomsday bunker,' before releasing artificial general intelligence (AGI), according to new revelations from an upcoming book about the AI company's internal turmoil.

"We're definitely going to build a bunker before we release AGI," Sutskever declared during a 2023 meeting with OpenAI scientists, months before his departure from the company. When pressed about the seriousness of his proposal, he assured colleagues that bunker entry would be "optional."

The startling disclosure comes from excerpts of "Empire of AI," a forthcoming book by former Wall Street Journal correspondent Karen Hao based on interviews with 90 current and former OpenAI employees. The book details the dramatic November 2023 boardroom coup that briefly ousted CEO Sam Altman, with Sutskever playing a central role in the failed takeover.

Sutskever, who co-created the groundbreaking AlexNet in 2012 alongside AI pioneer Geoff Hinton, believed his fellow researchers would require protection once AGI was achieved. He reasoned that such powerful technology would inevitably become "an object of intense desire for governments globally."

What made the OpenAI co-founder want a 'doomsday bunker'

Sutskever and others worried that CEO Altman's focus on commercial success was compromising the company's commitment to developing AI safely. These tensions were exacerbated by ChatGPT's unexpected success, which unleashed a "funding gold rush" that the safety-minded Sutskever could no longer control.

"There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture," one researcher told Hao. "Literally a rapture."

This apocalyptic mindset partially motivated Sutskever's participation in the board revolt against Altman. However, the coup collapsed within a week, leading to Altman's return and the eventual departure of Sutskever and other safety-focused researchers. The failed takeover, now called "The Blip" by insiders, left Altman more powerful than before while driving out many of OpenAI's safety experts who were aligned with Sutskever's cautious approach.

Since leaving OpenAI, Sutskever has founded Safe Superintelligence Inc., though he has declined to comment on his previous bunker proposals. His departure represents a broader exodus of safety-focused researchers who felt the company had abandoned its original mission of developing AI that benefits humanity broadly, rather than pursuing rapid commercialization.

The timing of AGI remains hotly debated across the industry. While Altman recently claimed AGI is possible with current hardware, Microsoft AI CEO Mustafa Suleyman disagrees, predicting it could take up to 10 years to achieve. Google co-founder Sergey Brin and Google DeepMind CEO Demis Hassabis see AGI arriving around 2030. However, AI pioneer Geoffrey Hinton warns there is no consensus on what AGI actually means, calling it "a serious, though ill-defined, concept." Despite disagreements over definitions and timelines, most industry leaders now view AGI as an inevitability rather than a possibility.

Machine learning can spark many discoveries in science and medicine

Indian Express

29-04-2025



This new weekly column seeks to bring science into view — its ideas, discoveries, and debates — every Tuesday. We'll journey through the cosmos, across the quantum world, and alongside the tools that shape our understanding of reality.

We may be living in a golden age of discovery — not just because we know more than ever before, but because the very way we do science is undergoing a profound transformation. There will soon be widespread methods for the prediction of sepsis or diabetic retinopathy, or for the early detection of Alzheimer's. There will be custom-made drugs and treatments that take into account your age, gender and genetic type. In fact, the developments have been so rapid and extraordinary that some have predicted the end of conventional disease, as we know it, in a decade. Seasonal rainfall and cyclones will be predicted with more accuracy. Even before new drugs are synthesised, computers will figure out how effective they could be.

Why is scientific discovery changing?

Throughout most of human scientific history, discovery was driven by patient human effort. Data was precious, experiments were hard-won, and scientists would painstakingly design algorithms — fitting functions, solving equations, building models — to extract insights. The amount of data available was modest, and the number of researchers able to work on it was sufficient. In that world, human ingenuity could keep pace with information.

Today, that balance has broken. Across fields, the volume of data has exploded. Telescopes generate terabytes nightly. Genome sequencers run around the clock. Simulations churn out petascale outputs. Hardware — both observational and computational — has advanced dramatically. But human attention and the number of scientists have not scaled in the same way. Algorithms hand-crafted by experts and requiring constant tuning are no longer sufficient when data volumes dwarf our collective capacity to engage with them manually.

Remarkably, just as this problem became acute, machine learning rose to meet it. Though the foundations of artificial intelligence stretch back decades, it is only in the past ten years — and especially the past five — that self-learning algorithms have matured into powerful and scalable scientific tools. The coincidence is striking: at the very moment that science risked drowning in its own data, machines emerged that could swim.

Machine learning as a widely adopted method

The rise of these algorithms is itself a story of convergence. Until the early 2010s, computers recognised patterns only when engineers wrote explicit rules. That changed with two watershed moments. First, a public contest called the ImageNet challenge provided a million labelled photographs to compete on. One entrant, a deep neural network dubbed AlexNet, learnt to identify objects by tuning its internal connections through trial and error on graphics processors originally built for video games. Without any hand-coded feature detectors, AlexNet halved the error rate of all previous systems. This proved that with enough data and compute, machines could learn complex patterns on their own.

Then in 2016, DeepMind's AlphaGo — designed to play the ancient board game Go — demonstrated the power of reinforcement learning, an approach in which a system improves by playing repeatedly and rewarding itself for wins. In a historic five-game match, AlphaGo defeated world champion Lee Sedol, surprising professionals by playing sequences of moves never before seen. In Go, the possible board configurations exceed those of chess by orders of magnitude. After Game Two's unexpected 'Move 37', Lee admitted, 'I am speechless,' a testament to the machine's capacity to innovate beyond human intuition.

Breakthroughs across disciplines

This convergence has opened the door to breakthroughs across disciplines. In biology, the protein-folding problem exemplifies the impact. A typical protein is a chain of 200–300 amino acids that can fold into an astronomical number of shapes, yet only one produces the correct biological function. Experimental methods to determine these structures can take months or fail outright. In 2020, DeepMind's AlphaFold2 changed that. Trained on decades of known protein structures and sequence data, it now predicts three-dimensional shapes in seconds with laboratory-level accuracy. Such accuracy accelerates drug discovery by letting chemists model how candidate molecules fit into their targets before any synthesis. Enzyme engineers can design catalysts for sustainable chemistry, and disease researchers can understand how mutations disrupt function. In recognition of this leap, the 2024 Nobel Prize in Chemistry was awarded to Demis Hassabis, John Jumper, and David Baker.

Machine learning has since become routine in fields ranging from chemistry and astronomy to genomics, materials science, and high-energy physics, where it mines vast datasets for insights no human could extract unaided. Beyond the power of the technique itself, its reach in modern society can be attributed in part to the democratisation of software tools such as PyTorch and TensorFlow, and to the many online courses and tutorials freely available to the public.

Can machine learning replace scientists?

At present, the answer is no. The imagination required to frame the right questions, the intuition to know when a result matters, and the creativity to connect diverse ideas remain uniquely human strengths. Machine learning models excel at finding patterns but rarely explain why those patterns exist. Yet this may not be a permanent limitation. In time, systems could be trained not only on raw data but on the entire scientific literature — the published papers, reviews, and textbooks that embody human understanding. One can imagine, perhaps within decades, an AI that reads articles, extracts key concepts, identifies open questions, analyses new experiments, and even drafts research papers: a 'full-stack scientist' handling the loop from hypothesis to publication autonomously.

We are not there yet. But we are laying the foundations. Today's scientific machine learning is about augmentation — extending our reach, accelerating our pace, and occasionally surprising us with patterns we did not think to look for. As more of science becomes algorithmically accessible, the frontier will be defined not by what we can compute but by what we can imagine.
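For readers curious what "tuning internal connections through trial and error" looks like in practice, below is a deliberately minimal sketch using PyTorch, one of the tools mentioned above. It is not AlexNet itself: the network is tiny, and random tensors stand in for a real labelled dataset such as ImageNet, but the loop of predicting, measuring error, and nudging the weights is the same basic idea.

```python
# Minimal illustrative sketch (not the original AlexNet code): a tiny
# convolutional network trained with PyTorch. Random data stands in for
# a real labelled image corpus.
import torch
import torch.nn as nn

# Stand-in dataset: 256 random 32x32 RGB "images" with labels from 10 classes.
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 10, (256,))

# A small convolutional classifier: convolutions extract visual features,
# pooling shrinks the image, a final linear layer scores the 10 classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for start in range(0, len(images), 64):       # mini-batches of 64
        x = images[start:start + 64]
        y = labels[start:start + 64]
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)               # how wrong are the predictions?
        loss.backward()                           # which way should each weight move?
        optimizer.step()                          # nudge the internal connections
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

On a real dataset this same loop, scaled up to millions of images and many more layers, is what allowed networks like AlexNet to learn feature detectors that no engineer had to hand-code.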

CHM Makes AlexNet Source Code Available to the Public

Associated Press

20-03-2025



Mountain View, California, March 20, 2025 (GLOBE NEWSWIRE) -- In partnership with Google, the Computer History Museum (CHM), the leading museum exploring the history of computing and its impact on the human experience, today announced the public release and long-term preservation of the source code for AlexNet, the neural network that kickstarted today's prevailing approach to AI.

'Google is delighted to contribute the source code for the groundbreaking AlexNet work to the Computer History Museum,' said Jeff Dean, chief scientist, Google DeepMind and Google Research. 'This code underlies the landmark paper "ImageNet Classification with Deep Convolutional Neural Networks" by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, which revolutionized the field of computer vision and is one of the most cited papers of all time.'

For more information about the release of this historic source code, visit CHM's blog post here.

By the late 2000s, Hinton's graduate students at the University of Toronto were beginning to use graphics processing units (GPUs) to train neural networks for image recognition tasks, and their success suggested that deep learning could be a solution to general-purpose AI. Sutskever, one of the students, believed that the performance of neural networks would scale with the amount of data available, and the arrival of ImageNet provided the opportunity.

Completed in 2009, ImageNet was a dataset of images developed by Stanford professor Fei-Fei Li that was larger than any previous image dataset by several orders of magnitude. In 2011, Sutskever persuaded Krizhevsky, a fellow graduate student, to train a neural network for ImageNet. With Hinton serving as faculty advisor, Krizhevsky did so on a computer with two NVIDIA cards. Over the course of the next year, he continuously refined and retrained the network until it achieved performance superior to its competitors. The network would ultimately be named AlexNet, after Krizhevsky. In describing the AlexNet project, Hinton told CHM, 'Ilya thought we should do it, Alex made it work, and I got the Nobel Prize.'

Before AlexNet, very few machine learning researchers used neural networks. After it, almost all of them would. Google eventually acquired the company started by Hinton, Krizhevsky and Sutskever, and a Google team led by David Bieber worked with CHM for five years to secure its release to the public.

About CHM Software Source Code

The Computer History Museum has the world's most diverse archive of software and related material. The stories of software's origins and impact on the world provide inspiration and lessons for the future to global audiences—including young coders and entrepreneurs. The Museum has released other historic source code such as Apple II DOS, IBM APL, Apple MacPaint and QuickDraw, Apple Lisa, and Adobe Photoshop. Visit our website to learn more.

About CHM

The Computer History Museum's mission is to decode technology—the computing past, digital present, and future impact on humanity. From the heart of Silicon Valley, we share insights gleaned from our research, our events, and our incomparable collection of computing artifacts and oral histories to convene, inform, and empower people to shape a better future.

Carina Sweet
Computer History Museum
(650) 810-1059
[email protected]

An AI model from over a decade ago sparked Nvidia's investment in autonomous vehicles

Yahoo

19-03-2025



Nvidia CEO Jensen Huang's keynote Tuesday at the company's GTC 2025 conference stuck with tradition and was chock-full of announcements. But the company also snuck in a little history lesson.

During the automotive portion of his speech, Huang referred to AlexNet, a neural network architecture that gained widespread attention in 2012 when it won a computer image-recognition contest. Designed by computer scientist Alex Krizhevsky in collaboration with Ilya Sutskever (who would go on to co-found OpenAI) and AI researcher Geoffrey Hinton, AlexNet achieved 84.7% accuracy in an academic competition called ImageNet. The breakthrough result led to a resurgence of interest in deep learning, a subset of machine learning that leverages neural networks.

Turns out, AlexNet spurred Nvidia to go "all in" on autonomous vehicles, the way Huang tells it. "The moment I saw AlexNet — and we've been working on computer vision for a long time — the moment I saw AlexNet was such an inspiring moment, such an exciting moment," he said onstage. "It caused us to decide to go all in on building self-driving cars. So we've been working on self-driving cars now for over a decade. We build technology that almost every single self-driving car company uses."

Nvidia has notched partnerships with numerous automakers, automotive suppliers, and tech companies developing autonomous vehicles. Its latest, an expanded collaboration with GM, was announced this afternoon. Automakers like Tesla and autonomous vehicle developers Wayve and Waymo use Nvidia GPUs for data centers. Other companies tap Nvidia's Omniverse product to build "digital twins" of factories to virtually test production processes and design vehicles. Meanwhile, Mercedes, Volvo, Toyota, and Zoox have used Nvidia's Drive Orin computer system-on-chip, which is based on the chipmaker's Ampere supercomputing architecture. Toyota and others are also employing Nvidia's safety-focused operating system, DriveOS.

The upshot: Nvidia DNA is embedded in the automotive — and more specifically, the automated driving — industry.
