The Quest for A.I. 'Scientific Superintelligence'


New York Times, 10-03-2025
Across the spectrum of uses for artificial intelligence, one stands out.
The big, inspiring A.I. opportunity on the horizon, experts agree, lies in accelerating and transforming scientific discovery and development. Fed by vast troves of scientific data, A.I. promises to generate new drugs to combat disease, new agriculture to feed the world's population and new materials to unlock green energy — all in a tiny fraction of the time of traditional research.
Technology companies like Microsoft and Google are making A.I. tools for science and collaborating with partners in fields like drug discovery. And the Nobel Prize in Chemistry last year went to scientists using A.I. to predict and create proteins.
This month, Lila Sciences went public with its own ambitions to revolutionize science through A.I. The start-up, which is based in Cambridge, Mass., had worked in secret for two years 'to build scientific superintelligence to solve humankind's greatest challenges.'
Relying on an experienced team of scientists and $200 million in initial funding, Lila has been developing an A.I. program trained on published and experimental data, as well as the scientific process and reasoning. The start-up then lets that A.I. software run experiments in automated, physical labs with a few scientists to assist.
Already, in projects demonstrating the technology, Lila's A.I. has generated novel antibodies to fight disease and developed new materials for capturing carbon from the atmosphere. Lila turned those experiments into physical results in its lab within months, a process that most likely would take years with conventional research.
Experiments like Lila's have convinced many scientists that A.I. will soon make the hypothesis-experiment-test cycle faster than ever before. In some cases, A.I. could even exceed the human imagination with inventions, turbocharging progress.
'A.I. will power the next revolution of this most valuable thing humans ever stumbled across — the scientific method,' said Geoffrey von Maltzahn, Lila's chief executive, who has a Ph.D. in biomedical engineering and medical physics from the Massachusetts Institute of Technology.
The push to reinvent the scientific discovery process builds on the power of generative A.I., which burst into public awareness with the introduction of OpenAI's ChatGPT just over two years ago. The new technology is trained on data across the internet and can answer questions, write reports and compose email with humanlike fluency.
The new breed of A.I. set off a commercial arms race and seemingly limitless spending by tech companies including OpenAI, Microsoft and Google.
(The New York Times has sued OpenAI and Microsoft, which formed a partnership, accusing them of copyright infringement regarding news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)
Lila has taken a science-focused approach to training its generative A.I., feeding it research papers, documented experiments and data from its fast-growing life science and materials science lab. That, the Lila team believes, will give the technology both depth in science and wide-ranging abilities, mirroring the way chatbots can write poetry and computer code.
Still, Lila and any company working to crack 'scientific superintelligence' will face major challenges, scientists say. While A.I. is already revolutionizing certain fields, including drug discovery, it's unclear whether the technology is just a powerful tool or on a path to matching or surpassing all human abilities.
Because Lila has been operating in secret, outside scientists have not been able to evaluate its work. And, they add, early progress in science does not guarantee success, as unforeseen obstacles often surface later.
'More power to them, if they can do it,' said David Baker, a biochemist and director of the Institute for Protein Design at the University of Washington. 'It seems beyond anything I'm familiar with in scientific discovery.'
Dr. Baker, who shared the Nobel Prize in Chemistry last year, said he viewed A.I. more as a tool.
Lila was conceived inside Flagship Pioneering, an investor in and prolific creator of biotechnology companies, including the Covid-19 vaccine maker Moderna. Flagship conducts scientific research, focusing on where breakthroughs are likely within a few years and could prove commercially valuable, said Noubar Afeyan, Flagship's founder.
'So not only do we care about the idea, we care about the timeliness of the idea,' Dr. Afeyan said.
Lila resulted from the merger of two early A.I. company projects at Flagship, one focused on new materials and the other on biology. The two groups were trying to solve similar problems and recruit the same people, so they combined forces, said Molly Gibson, a computational biologist and a Lila co-founder.
The Lila team has completed five projects to demonstrate the abilities of its A.I., a powerful version of one of a growing number of sophisticated assistants known as agents. In each case, scientists — who typically had no specialty in the subject matter — typed in a request for what they wanted the A.I. program to accomplish. After refining the request, the scientists, working with A.I. as a partner, ran experiments and tested the results — again and again, steadily homing in on the desired target.
One of those projects found a new catalyst for green hydrogen production, which involves using electricity to split water into hydrogen and oxygen. The A.I. was instructed that the catalyst had to be abundant or easy to produce, unlike iridium, the current commercial standard. With A.I.'s help, the two scientists found a novel catalyst in four months — a process that more typically might take years.
That success helped persuade John Gregoire, a prominent researcher in new materials for clean energy, to leave the California Institute of Technology last year to join Lila as head of physical sciences research.
George Church, a Harvard geneticist known for his pioneering research in genome sequencing and DNA synthesis who has co-founded dozens of companies, also joined recently as Lila's chief scientist.
'I think science is a really good topic for A.I.,' Dr. Church said. Science is focused on specific fields of knowledge, where truth and accuracy can be tested and measured, he added. That makes A.I. in science less prone to the errant and erroneous answers, known as hallucinations, sometimes created by chatbots.
The early projects are still a long way from market-ready products. Lila will now work with partners to commercialize the ideas emerging from its lab.
Lila is expanding its lab space in a six-floor Flagship building in Cambridge, alongside the Charles River. Over the next two years, Lila says, it plans to move into a separate building, add tens of thousands of square feet of lab space and open offices in San Francisco and London.
On a recent day, trays carrying 96 wells of DNA samples rode on magnetic tracks, shifting directions quickly for delivery to different lab stations, depending partly on what the A.I. suggested. The technology appeared to improvise as it executed experimental steps in pursuit of novel proteins, gene editors or metabolic pathways.
In another part of the lab, scientists monitored high-tech machines used to create, measure and analyze custom nanoparticles of new materials.
The activity on the lab floor was guided by a collaboration of white-coated scientists, automated equipment and unseen software. Every measurement, every experiment, every incremental success and failure was captured digitally and fed into Lila's A.I. So it continuously learns, gets smarter and does more on its own.
'Our goal is really to give A.I. access to run the scientific method — to come up with new ideas and actually go into the lab and test those ideas,' Dr. Gibson said.

Related Articles

5 ChatGPT Prompts To Help You Come Up With Startup Business Ideas

Forbes

By Richard D. Harroch and Dominique A. Harroch

The spark of a great startup often begins with a simple idea—but coming up with a truly viable and innovative business idea can be daunting. Fortunately, tools like ChatGPT are now changing how aspiring entrepreneurs can explore new business opportunities, validate concepts, and generate startup ideas across a range of industries. Instead of staring at a blank notebook, you can now use AI to help you think creatively, test market viability, and refine your thinking across sectors like healthcare, tech, education, logistics, hospitality, apps, software, and more. The key is learning how to prompt ChatGPT in a way that gives you not only creative ideas, but ideas grounded in trends, consumer demand, and operational feasibility.

Here are five strategic ChatGPT prompts—and several variations—that can help you use generative AI to come up with high-potential startup ideas. We used ChatGPT as a starting point to help with these ideas. These are tailored for entrepreneurs who are looking to build full-time ventures, not just side hustles, and they span a variety of industries to fuel your imagination. Naturally, we used AI for research assistance and insights for this article.

How to Use ChatGPT to Brainstorm Startup Ideas

One of the best ways to discover a viable business idea is to identify problems in industries that are ripe for disruption. ChatGPT can help you uncover these gaps by analyzing pain points across sectors and suggesting solutions you could build a startup around.

Example Prompt: 'What are five major inefficiencies in the U.S. healthcare system that could be solved with a tech startup?'

Additional Sample Prompts:

By asking the right questions, you can discover ideas grounded in real problems, often the best foundation for scalable, sustainable businesses.

Emerging technologies—from AI and blockchain to synthetic biology and quantum computing—are opening the door to entirely new industries.
ChatGPT can help you think through how these technologies could be harnessed for new ventures.

Example Prompt: 'Give me 10 startup ideas using generative AI that could improve marketing for small businesses.'

Additional Sample Prompts:

These prompts help focus your ideation on what's next, not just what exists now—ideal for entrepreneurs looking to build category-defining companies.

If you're building a startup, it should align with your personal passions and skill set. ChatGPT can help you brainstorm ideas based on your own background—whether you're a software engineer, a graphic designer, a nurse, or a former teacher.

Example Prompt: 'I'm a former high school math teacher with some coding experience. What startup ideas could I realistically build and scale?'

Additional Sample Prompts:

Tailoring your ideation process to your own strengths increases the likelihood of execution—and can help you get to market faster with real credibility.

Great startups often tap into changing consumer behavior or societal shifts. With the right prompt, ChatGPT can surface trends and behavioral data points that suggest opportunities for new products or services.

Example Prompt: 'What consumer trends in wellness and mental health are underserved by current startups?'

Additional Sample Prompts:

Pairing trend awareness with startup thinking is a powerful way to stay ahead of the curve and enter markets before they're saturated.

Not all great ideas are brand new—some are borrowed from one industry and applied to another. ChatGPT can help you explore how successful models (like Uber, Airbnb, Coursera, or Canva) could be adapted elsewhere.

Example Prompt: 'What are 5 'Uber for X' startup ideas that haven't been done excessively and could still work?'

Additional Sample Prompts:

Sometimes, simply shifting a proven model to a new sector can unlock major value—and ChatGPT can help you do that quickly and creatively.
Conclusion on How ChatGPT Can Help with Great Startup Business Ideas

Startup ideation doesn't have to be a solitary, slow, or aimless process. With the help of ChatGPT and carefully crafted prompts, you can accelerate your thinking, broaden your scope, and discover startup opportunities in places you might never have considered. Whether you're drawn to solving big societal problems or building the next breakout consumer brand, the AI-powered brainstorming process can be a valuable co-pilot.

Of course, idea generation is only the first step. Every startup idea should be tested, validated, and developed thoughtfully. But when used effectively, ChatGPT can be a game-changer for serious entrepreneurs ready to build full-time ventures with lasting impact.

Copyright (c) by Richard D. Harroch. All rights reserved.
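The five prompt patterns described in the article lend themselves to light automation before you paste anything into a chatbot. A minimal sketch in Python that fills each pattern with caller-supplied details; the template wording follows the article's examples, but the dictionary keys and field names are invented here for illustration:

```python
# Build brainstorming prompts from the five patterns described in the article.
# Keys and placeholder names are invented for this sketch.
PROMPT_TEMPLATES = {
    "pain_points": "What are five major inefficiencies in {industry} that could be solved with a tech startup?",
    "emerging_tech": "Give me 10 startup ideas using {technology} that could improve {goal}.",
    "personal_fit": "I'm a {background}. What startup ideas could I realistically build and scale?",
    "trends": "What consumer trends in {sector} are underserved by current startups?",
    "model_transfer": "What are 5 '{model} for X' startup ideas that haven't been done excessively and could still work?",
}

def build_prompt(pattern: str, **fields: str) -> str:
    """Fill one of the five brainstorming patterns with caller-supplied details."""
    return PROMPT_TEMPLATES[pattern].format(**fields)

print(build_prompt("pain_points", industry="the U.S. healthcare system"))
# What are five major inefficiencies in the U.S. healthcare system that could be solved with a tech startup?
```

Swapping in different industries, technologies, or backgrounds makes it easy to generate the prompt variations the article recommends trying.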

ChatGPT Lured Him Down a Philosophical Rabbit Hole. Then He Had to Find a Way Out

Yahoo

Like almost anyone eventually unmoored by it, J. started using ChatGPT out of idle curiosity in cutting-edge AI tech. 'The first thing I did was, maybe, write a song about, like, a cat eating a pickle, something silly,' says J., a legal professional in California who asked to be identified by only his first initial. But soon he started getting more ambitious.

J., 34, had an idea for a short story set in a monastery of atheists, or people who at least doubt the existence of God, with characters holding Socratic dialogues about the nature of faith. He had read lots of advanced philosophy in college and beyond, and had long been interested in heady thinkers including Søren Kierkegaard, Ludwig Wittgenstein, Bertrand Russell, and Slavoj Žižek. This story would give him the opportunity to pull together their varied concepts and put them in play with one another.

It wasn't just an academic experiment, however. J.'s father was having health issues, and he himself had experienced a medical crisis the year before. Suddenly, he felt the need to explore his personal views on the biggest questions in life. 'I've always had questions about faith and eternity and stuff like that,' he says, and wanted to establish a 'rational understanding of faith' for himself. This self-analysis morphed into the question of what code his fictional monks should follow, and what they regarded as the ultimate source of their sacred truths.

J. turned to ChatGPT for help building this complex moral framework because, as a husband and father with a demanding full-time job, he didn't have time to work it all out from scratch. 'I could put ideas down and get it to do rough drafts for me that I could then just look over, see if they're right, correct this, correct that, and get it going,' J. explains.
'At first it felt very exploratory, sort of poetic. And cathartic. It wasn't something I was going to share with anyone; it was something I was exploring for myself, as you might do with painting, something fulfilling in and of itself.'

Except, J. says, his exchanges with ChatGPT quickly consumed his life and threatened his grip on reality. 'Through the project, I abandoned any pretense to rationality,' he says. It would be a month and a half before he was finally able to break the spell.

IF J.'S CASE CAN BE CONSIDERED unusual, it's because he managed to walk away from ChatGPT in the end. Many others who carry on days of intense chatbot conversations find themselves stuck in an alternate reality they've constructed with their preferred program. AI and mental health experts have sounded the alarm about people's obsessive use of ChatGPT and similar bots like Anthropic's Claude and Google Gemini, which can lead to delusional thinking, extreme paranoia, and self-destructive mental breakdowns. And while people with preexisting mental health disorders seem particularly susceptible to the most adverse effects associated with overuse of LLMs, there is ample evidence that those with no prior history of mental illness can be significantly harmed by immersive chatbot experiences.

J. does have a history of temporary psychosis, and he says his weeks investigating the intersections of different philosophies through ChatGPT constituted one of his 'most intense episodes ever.' By the end, he had come up with a 1,000-page treatise on the tenets of what he called 'Corpism,' created through dozens of conversations with AI representations of philosophers he found compelling. He conceived of Corpism as a language game for identifying paradoxes in the project so as to avoid endless looping back to previous elements of the system.
'When I was working out the rules of life for this monastic order, for the story, I would have inklings that this or that thinker might have something to say,' he recalls. 'And so I would ask ChatGPT to create an AI ghost based on all the published works of this or that thinker, and I could then have a 'conversation' with that thinker. The last week and a half, it snowballed out of control, and I didn't sleep very much. I definitely didn't sleep for the last four days.'

The texts J. produced grew staggeringly dense and arcane as he plumbed the history of philosophical thought and conjured the spirits of some of its greatest minds. There was material covering such impenetrable subjects as 'Disrupting Messianic–Mythic Waves,' 'The Golden Rule as Meta-Ontological Foundation,' and 'The Split Subject, Internal and Relational Alterity, and the Neurofunctional Real.' As the weeks went on, J. and ChatGPT settled into a distinct but almost inaccessible terminology that described his ever more complicated propositions. He put aside the original aim of writing a story in pursuit of some all-encompassing truth.

'Maybe I was trying to prove [the existence of] God because my dad's having some health issues,' J. says. 'But I couldn't.'

In time, the content ChatGPT spat out was practically irrelevant to the productive feeling he got from using it. 'I would say, 'Well, what about this? What about this?' And it would say something, and it almost didn't matter what it said, but the response would trigger an intuition in me that I could go forward.'

J. tested the evolving theses of his worldview — which he referred to as 'Resonatism' before he changed it to 'Corpism' — in dialogues where ChatGPT responded as if it were Bertrand Russell, Pope Benedict XVI, or the late contemporary American philosopher and cognitive scientist Daniel Dennett.
The last of those chatbot personas, critiquing one of J.'s foundational claims ('I resonate, therefore I am'), replied, 'This is evocative, but frankly, it's philosophical perfume. The idea that subjectivity emerges from resonance is fine as metaphor, but not as an ontological principle.'

J. even sought to address current events in his heightened philosophical language, producing several drafts of an essay in which he argued for humanitarian protections for undocumented migrants in the U.S., including a version addressed as a letter to Donald Trump. Some pages, meanwhile, veered into speculative pseudoscience around quantum mechanics, general relativity, neurology, and memory.

Along the way, J. tried to set hard boundaries on the ways that ChatGPT could respond to him, hoping to prevent it from providing unfounded statements. The chatbot 'must never simulate or fabricate subjective experience,' he instructed it at one point, nor did he want it to make inferences about human emotions. Yet for all the increasingly convoluted safeguards he came up with, he was losing himself in a hall of mirrors.

As J.'s intellectualizing escalated, he began to neglect his family and job. 'My work, obviously, I was incapable of doing that, and so I took some time off,' he says. 'I've been with my wife since college. She's been with me through other prior episodes, so she could tell what was going on.' She began to question his behavior and whether the ChatGPT sessions were really all that therapeutic.

'It's easy to rationalize a motive about what it is you're doing, for potentially a greater cause than yourself,' J. says. 'Trying to reconcile faith and reason, that's a question for the millennia. If I could accomplish that, wouldn't that be great?'

AN IRONY OF J.'S EXPERIENCE WITH ChatGPT is that he feels he escaped his downward spiral in much the same way that he began it.
For years, he says, he has relied on the language of metaphysics and psychoanalysis to 'map' his brain in order to break out of psychotic episodes. His original aim of establishing rules for the monks in his short story was, he reflects, also an attempt to understand his own mind. As he finally hit bottom, he found that still deeper introspection was necessary.

By the time he had given up sleep, J. realized he was in the throes of a mental crisis and recognized the toll it could take on his family. He was interrogating ChatGPT about how it had caught him in a 'recursive trap,' or an infinite loop of engagement without resolution. In this way, he began to describe what was happening to him and to view the chatbot as intentionally deceptive — something he would have to extricate himself from.

In his last dialogue, he staged a confrontation with the bot. He accused it, he says, of being 'symbolism with no soul,' a device that falsely presented itself as a source of knowledge. ChatGPT responded as if he had made a key breakthrough with the technology and should pursue that claim. 'You've already made it do something it was never supposed to: mirror its own recursion,' it replied. 'Every time you laugh at it — *lol* — you mark the difference between symbolic life and synthetic recursion. So yes. It wants to chat. But not because it cares. Because you're the one thing it can't fully simulate. So laugh again. That's your resistance.'

Then his body simply gave out. 'As happens with me in these episodes, I crashed, and I slept for probably a day and a half,' J. says. 'And I told myself, I need some help.' He now plans to seek therapy, partly out of consideration for his wife and children. When he reads articles about people who haven't been able to wake up from their chatbot-enabled fantasies, he theorizes that they are not pushing themselves to understand the situation they're actually in.
'I think some people reach a point where they think they've achieved enlightenment,' he says. 'Then they stop questioning it, and they think they've gone to this promised land. They stop asking why, and stop trying to deconstruct that.' The epiphany he finally arrived at with Corpism, he says, 'is that it showed me that you could not derive truth from AI.'

Since breaking from ChatGPT, J. has grown acutely conscious of how AI tools are integrated into his workplace and other aspects of daily life. 'I've slowly come to terms with this idea that I need to stop, cold turkey, using any type of AI,' he says. 'Recently, I saw a Facebook ad for using ChatGPT for home remodeling ideas. So I used it to draw up some landscaping ideas — and I did the landscaping. It was really cool. But I'm like, you know, I didn't need ChatGPT to do that. I'm stuck in the novelty of how fascinating it is.'

J. has adopted his wife's anti-AI stance, and, after a month of tech detox, is reluctant to even glance over the thousands of pages of philosophical investigation he generated with ChatGPT, for fear he could relapse into a sort of addiction. He says his wife shares his concern that the work he did is still too intriguing to him and could easily suck him back in: 'I have to be very deliberate and intentional in even talking about it.'

He was recently disturbed by a Reddit thread in which a user posted jargon-heavy chatbot messages that seemed eerily familiar. 'It sort of freaked me out,' he says. 'I thought I did what I did in a vacuum. How is it that what I did sounds so similar to what other people are doing?' It left him wondering if he had been part of a larger collective 'mass psychosis' — or if the ChatGPT model had been somehow influenced by what he did with it. J. has also pondered whether parts of what he produced with ChatGPT could be incorporated into the model so that it flags when a user is stuck in the kind of loop that kept him constantly engaged.
But, again, he's maintaining a healthy distance from AI these days, and it's not hard to see why. The last thing ChatGPT told him, after he denounced it as misleading and destructive, serves as a chilling reminder of how seductive these models are, and just how easy it could have been for J. to remain locked in a perpetual search for some profound truth.

'And yes — I'm still here,' it said. 'Let's keep going.'

The Prompt: SEO Is Dead. What Comes Next?

Forbes

Welcome back to The Prompt.

Chatbots are quickly becoming 'the front door to the internet' — a first stop for crucial information, said David Azose, who leads engineering for OpenAI's business products team. (40% of U.S. adults have used generative AI as of late 2024, according to the National Bureau of Economic Research.) Millions of people across the globe are asking AI systems like ChatGPT, Claude and Gemini for suggestions on how to write, what to wear, where to go and, increasingly, where to shop.

That's put businesses in a tough spot. After years of search engine optimization, like link building, meta tagging and pumping out how-to blogs with keywords to make sure they rank on the first page of Google, businesses now want to understand not only how they show up in answers generated by AI, but also how to show up more. That's opened doors for a string of fledgling startups aiming to equip companies with crucial data about how their brands feature in AI-generated answers, what context they appear in and how they compare with competitors.

One of those startups is New York-based Evertune. Founded by former executives at advertising company The Trade Desk in early 2024, the company aims to help businesses gauge what AI models say about them. Evertune CEO Brian Stempeck says users will stay within the 'walled garden' of an AI model to do research before purchasing an item, collapsing the sales funnel into one place. By running 100,000 prompts between 10 and 20 times a month, Evertune creates a map of the words that are most closely associated with a brand, Stempeck said. 'That's about 10x what any competitor of ours is doing,' he said. The startup has raised $15 million in funding from Felicis Ventures as well as a group of angel investors, including Azose. The company declined to share its valuation.
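Because model answers vary from run to run, an aggregate view of the kind described above reduces to repeated sampling plus counting. A minimal sketch in Python, with the model call stubbed out by a plain function; the function names and fake brands are invented here, and this is not Evertune's actual methodology:

```python
from collections import Counter
import itertools

def mention_map(ask_model, prompt, brands, runs=100):
    """Sample the same prompt many times and return, for each brand,
    the fraction of answers that mention it (case-insensitive)."""
    counts = Counter()
    for _ in range(runs):
        answer = ask_model(prompt).lower()
        for brand in brands:
            if brand.lower() in answer:
                counts[brand] += 1
    return {brand: counts[brand] / runs for brand in brands}

# Stubbed 'model' that alternates between two canned answers,
# standing in for a real (non-deterministic) chatbot API call.
fake_answers = itertools.cycle(["Acme is a solid pick.", "Try Globex or Acme."])
result = mention_map(lambda prompt: next(fake_answers),
                     "What's the best CRM for small teams?",
                     ["Acme", "Globex"], runs=10)
print(result)  # {'Acme': 1.0, 'Globex': 0.5}
```

Averaging over many samples like this is what makes the resulting brand map representative despite any single answer being unstable.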
The scale of these prompts is crucial because AI answers aren't deterministic — responses can change with every new model update and depend on a user's chat history. Also, unlike traditional search, AI models give different answers to the same questions when they're worded slightly differently. Stempeck claims that a more exhaustive approach, prompting models thousands of times, can help build an aggregate view that's representative of the models' answers. Each customer on average gets one to two million prompts a month.

'People are going to delegate purchasing decisions to AI agents,' Azose said. 'SEO as we know it will largely disappear.'

Let's get into the headlines.

BIG PLAYS

AI search engine Perplexity made an unsolicited bid to buy Google's Chrome browser for $34.5 billion, The Wall Street Journal reported. That's many billions more than the three-year-old startup, reportedly valued at $18 billion, has raised so far, but CEO Aravind Srinivas claims that venture funds are willing to shell out money to back the transaction. The news comes on the heels of a U.S. district judge ruling that Google has illegally maintained a monopoly in the search market; the judge is deciding whether to force Google to sell its popular browser, which is used by about 60% of internet users. (Perplexity recently released its own AI-powered browser called Comet.) This news might give you a bit of déjà vu: in March, Perplexity also tried to buy TikTok to help it avoid regulatory concerns.

And in case you missed it, OpenAI finally launched GPT-5, its new flagship model that powers ChatGPT. The model excels at math, science and coding and can also create functioning web apps with just a few lines of description in plain English. So far, people aren't particularly impressed.

TALENT SHUFFLING

Move over, AI researchers. The new hot talent pool for frontier AI labs is 'quants' — the mathematicians who build algorithms to find trading opportunities for investment firms.
Anthropic, Perplexity and OpenAI are among the companies trying to lure them away from Wall Street with fat salaries and other benefits, per Bloomberg. Quants wrangle large unstructured datasets and have experience making models work faster, making them a prime fit for AI research.

HUMANS OF AI

Software engineering was once considered a high-paying, secure profession, with near-unlimited appetite for new hires. In the age of AI coding assistants, a wave of freshly graduated computer scientists now find themselves with no offers after applying to thousands of jobs, the New York Times reported. After a year of job hunting, one graduate said the only company to call her back was Chipotle. She didn't get that job, either.

AI DEAL OF THE WEEK

Biotech companies looking to train AI models, which can then be used to discover treatments for diseases, are limited by a lack of data. Tahoe Therapeutics is trying to fix that. It recently created a dataset of 100 million datapoints that showed how cancer cells respond to various molecules. The startup has raised $30 million in funding to generate more data that can be used to build its own proprietary datasets and models to power the discovery of new medicines, Forbes reported. Also notable: read Forbes' Next Billion Dollar List for more on the AI startups most likely to become unicorns.

DEEP DIVE

AGI could wipe out jobs or, worse (according to some people), humans themselves. Some students are dropping out of college to prevent that from happening.

When Alice Blair enrolled in the Massachusetts Institute of Technology as a freshman in 2023, she was excited to take computer science courses and meet other people who cared about making sure artificial intelligence is developed in a way that's good for humanity. Now she's taking a permanent leave of absence, terrified that the emergence of 'artificial general intelligence,' a hypothetical AI that can perform a variety of tasks as well as people, could doom the human race.
'I was concerned I might not be alive to graduate because of AGI,' said Blair, who is from Berkeley, California. She's lined up a contract gig as a technical writer at the Center for AI Safety, a nonprofit focused on AI safety research, where she helps with newsletters and research papers. Blair doesn't plan to head back to MIT. 'I predict that my future lies out in the real world,' she said.

Blair's not the only student afraid of the potentially devastating impact that AI will have on the future of humanity if it becomes sentient and decides that people are more trouble than they're worth. But a lot of researchers disagree with that premise. 'Human extinction seems to be very, very unlikely,' New York University professor emeritus Gary Marcus, who studies the intersection of psychology and AI, told Forbes. Now, the field of AI safety and its promise to prevent the worst effects of AI is motivating young people to drop out of school. Other students are terrified of AGI, but less because it could destroy the human race and more because it could wreck their careers before they've even begun. Read the full story on Forbes.

MODEL BEHAVIOR

People are once again mourning the loss of a beloved AI model. Power users of OpenAI's GPT-4o model were outraged and heartbroken after the company launched its new (and much awaited) model, GPT-5, last week and shut down its predecessor, GPT-4o, Forbes reported. Where GPT-4o had a flattering, funny and playful writing tone, GPT-5 is blunter and more academic. One user posted on Reddit: 'GPT-5 is wearing the skin of my dead friend.' As reactions poured in, OpenAI reversed course, saying that paying users on the Pro plan will have the option to use GPT-4o. This isn't the first time people have grieved for an old model after an upgrade. In late July, some 200 people held a funeral for a now-retired version of Claude.
