The Quest for A.I. ‘Scientific Superintelligence'

New York Times | 10-03-2025

Across the spectrum of uses for artificial intelligence, one stands out.
The big, inspiring A.I. opportunity on the horizon, experts agree, lies in accelerating and transforming scientific discovery and development. Fed by vast troves of scientific data, A.I. promises to generate new drugs to combat disease, new agriculture to feed the world's population and new materials to unlock green energy — all in a tiny fraction of the time of traditional research.
Technology companies like Microsoft and Google are making A.I. tools for science and collaborating with partners in fields like drug discovery. And the Nobel Prize in Chemistry last year went to scientists using A.I. to predict and create proteins.
This month, Lila Sciences went public with its own ambitions to revolutionize science through A.I. The start-up, which is based in Cambridge, Mass., had worked in secret for two years 'to build scientific superintelligence to solve humankind's greatest challenges.'
Relying on an experienced team of scientists and $200 million in initial funding, Lila has been developing an A.I. program trained on published and experimental data, as well as the scientific process and reasoning. The start-up then lets that A.I. software run experiments in automated, physical labs with a few scientists to assist.
Already, in projects demonstrating the technology, Lila's A.I. has generated novel antibodies to fight disease and developed new materials for capturing carbon from the atmosphere. Lila turned those experiments into physical results in its lab within months, a process that most likely would take years with conventional research.
Experiments like Lila's have convinced many scientists that A.I. will soon make the hypothesis-experiment-test cycle faster than ever before. In some cases, A.I. could even exceed the human imagination with inventions, turbocharging progress.
'A.I. will power the next revolution of this most valuable thing humans ever stumbled across — the scientific method,' said Geoffrey von Maltzahn, Lila's chief executive, who has a Ph.D. in biomedical engineering and medical physics from the Massachusetts Institute of Technology.
The push to reinvent the scientific discovery process builds on the power of generative A.I., which burst into public awareness with the introduction of OpenAI's ChatGPT just over two years ago. The new technology is trained on data across the internet and can answer questions, write reports and compose email with humanlike fluency.
The new breed of A.I. set off a commercial arms race and seemingly limitless spending by tech companies including OpenAI, Microsoft and Google.
(The New York Times has sued OpenAI and Microsoft, which formed a partnership, accusing them of copyright infringement regarding news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)
Lila has taken a science-focused approach to training its generative A.I., feeding it research papers, documented experiments and data from its fast-growing life science and materials science lab. That, the Lila team believes, will give the technology both depth in science and wide-ranging abilities, mirroring the way chatbots can write poetry and computer code.
Still, Lila and any company working to crack 'scientific superintelligence' will face major challenges, scientists say. While A.I. is already revolutionizing certain fields, including drug discovery, it's unclear whether the technology is just a powerful tool or on a path to matching or surpassing all human abilities.
Since Lila has been operating in secret, outside scientists have not been able to evaluate its work and, they add, early progress in science does not guarantee success, as unforeseen obstacles often surface later.
'More power to them, if they can do it,' said David Baker, a biochemist and director of the Institute for Protein Design at the University of Washington. 'It seems beyond anything I'm familiar with in scientific discovery.'
Dr. Baker, who shared the Nobel Prize in Chemistry last year, said he viewed A.I. more as a tool.
Lila was conceived inside Flagship Pioneering, an investor in and prolific creator of biotechnology companies, including the Covid-19 vaccine maker Moderna. Flagship conducts scientific research, focusing on where breakthroughs are likely within a few years and could prove commercially valuable, said Noubar Afeyan, Flagship's founder.
'So not only do we care about the idea, we care about the timeliness of the idea,' Dr. Afeyan said.
Lila resulted from the merger of two early A.I. company projects at Flagship, one focused on new materials and the other on biology. The two groups were trying to solve similar problems and recruit the same people, so they combined forces, said Molly Gibson, a computational biologist and a Lila co-founder.
The Lila team has completed five projects to demonstrate the abilities of its A.I., a powerful version of one of a growing number of sophisticated assistants known as agents. In each case, scientists — who typically had no specialty in the subject matter — typed in a request for what they wanted the A.I. program to accomplish. After refining the request, the scientists, working with A.I. as a partner, ran experiments and tested the results — again and again, steadily homing in on the desired target.
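Lila has not published the technical details of this workflow, but the loop the scientists describe (propose candidates, run automated experiments, measure the results, feed them back, repeat until the target is met) can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the scoring, and the random stand-ins for the AI proposer and the robotic lab are assumptions for illustration, not Lila's software.

```python
import random

# A minimal, hypothetical sketch of the closed-loop cycle described above:
# an AI "proposer" suggests candidate designs, an automated lab "runs" them,
# and the measured results are fed back so the next round homes in on the target.
# The names and random stand-ins are illustrative assumptions, not Lila's code.


def propose_candidates(history, n=8):
    """Stand-in for the AI step: propose new designs informed by past results."""
    best = max((score for _, score in history), default=0.0)
    # Explore in the neighborhood of the best result seen so far.
    return [best + random.uniform(-0.1, 0.2) for _ in range(n)]


def run_experiment(design):
    """Stand-in for an automated lab run that returns a noisy measurement."""
    return max(0.0, min(1.0, design + random.gauss(0, 0.05)))


def closed_loop(goal, rounds=10, target=0.9):
    history = []  # every measurement is captured and fed back into the next round
    for r in range(rounds):
        for design in propose_candidates(history):
            history.append((design, run_experiment(design)))
        best = max(score for _, score in history)
        print(f"{goal}: round {r + 1}, best score so far {best:.3f}")
        if best >= target:
            break  # desired property reached; stop iterating
    return history


if __name__ == "__main__":
    closed_loop("carbon-capture material demo")
```

In the projects described here, the "score" would be a measured property such as binding affinity or carbon-capture capacity, and the proposal step would be a trained model rather than a random search.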
One of those projects found a new catalyst for green hydrogen production, which involves using electricity to split water into hydrogen and oxygen. The A.I. was instructed that the catalyst had to be abundant or easy to produce, unlike iridium, the current commercial standard. With A.I.'s help, the two scientists found a novel catalyst in four months — a process that more typically might take years.
That success helped persuade John Gregoire, a prominent researcher in new materials for clean energy, to leave the California Institute of Technology last year to join Lila as head of physical sciences research.
George Church, a Harvard geneticist known for his pioneering research in genome sequencing and DNA synthesis who has co-founded dozens of companies, also joined recently as Lila's chief scientist.
'I think science is a really good topic for A.I.,' Dr. Church said. Science is focused on specific fields of knowledge, where truth and accuracy can be tested and measured, he added. That makes A.I. in science less prone to the errant and erroneous answers, known as hallucinations, sometimes created by chatbots.
The early projects are still a long way from market-ready products. Lila will now work with partners to commercialize the ideas emerging from its lab.
Lila is expanding its lab space in a six-floor Flagship building in Cambridge, alongside the Charles River. Over the next two years, Lila says, it plans to move into a separate building, add tens of thousands of square feet of lab space and open offices in San Francisco and London.
On a recent day, trays carrying 96 wells of DNA samples rode on magnetic tracks, shifting directions quickly for delivery to different lab stations, depending partly on what the A.I. suggested. The technology appeared to improvise as it executed experimental steps in pursuit of novel proteins, gene editors or metabolic pathways.
In another part of the lab, scientists monitored high-tech machines used to create, measure and analyze custom nanoparticles of new materials.
The activity on the lab floor was guided by a collaboration of white-coated scientists, automated equipment and unseen software. Every measurement, every experiment, every incremental success and failure was captured digitally and fed into Lila's A.I. So it continuously learns, gets smarter and does more on its own.
'Our goal is really to give A.I. access to run the scientific method — to come up with new ideas and actually go into the lab and test those ideas,' Dr. Gibson said.

Related Articles

OpenAI finds more Chinese bad actors using ChatGPT for malicious purposes

New York Post | 20 minutes ago

Chinese bad actors are using ChatGPT for malicious purposes – generating social media posts to sow political division across the US and seeking information on military technology, OpenAI said.
An organized China-linked operation, in one such incident dubbed 'Uncle Spam,' used ChatGPT to generate social media posts that were both supportive and critical of contentious topics related to US politics – and then posted both versions of the comments from separate accounts, the company said in a report released Thursday.
'This appears likely designed to exploit existing political divisions rather than to promote a specific ideological stance,' OpenAI wrote in the report, describing what is known as an influence operation.
OpenAI said it followed Meta's lead in disrupting this operation, after the social media conglomerate discovered the actors were posting at hours consistent with a workday in China.
The actors also used ChatGPT to make logos for their social media accounts that supported fake organizations – mainly creating personas of US veterans critical of President Trump, like a so-called 'Veterans For Justice' group.
These users also tried to request code from ChatGPT that they could use to extract personal data from social media platforms like X and Bluesky, OpenAI said.
While the number of these operations has jumped, they had relatively little impact, as the social media accounts typically had small followings, OpenAI said.
Another group of likely Chinese actors used ChatGPT to create polarizing comments on topics like USAID funding cuts and tariffs, which were then posted across social media sites. In the comments of a TikTok video about USAID funding cuts, one of these accounts wrote: 'Our goodwill was exploited. So disappointing.'
Another post on X took the opposite stance: '$7.9M allocated to teach Sri Lankan journalists to avoid binary-gender language. Is this the best use of development funds?'
These actors also made posts on X appearing to justify the USAID cuts as a means of offsetting the tariffs. 'Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?' one post said. Another read: 'Tariffs are choking us, yet the government is spending money to 'fund' foreign politics.'
In another China-linked operation, users posed as professionals based in Europe or Turkey working for nonexistent European news outlets. They engaged with journalists and analysts on social media platforms like X, and offered money in exchange for information on the US economy and classified documents, all while using ChatGPT to translate their requests.
OpenAI said it also banned ChatGPT accounts associated with several bad actors who have been publicly linked to the People's Republic of China. These accounts asked ChatGPT for help with software development and for research into US military networks and government technology.
OpenAI regularly releases reports on malicious activity across its platform, including reports on fake content for websites and social media platforms and attempts to create damaging malware.

Why two AI leaders are losing talent to startup Anthropic

Yahoo | 31 minutes ago

Many tech companies have announced layoffs over the past few months. In the case of Microsoft, it's happened more than once.
The rise of artificial intelligence has upended the job market in undeniable ways, prompting companies to either completely automate away some positions or scale back their hiring in other areas while increasing their reliance on chatbots and AI agents.
As more and more companies opt for job cuts and shift focus toward implementing an AI-first strategy, questions abound as to which jobs will survive this technology revolution. The companies embracing this approach include prominent names such as Shopify and Box.
Not every tech company is slashing its workforce, though. AI startup Anthropic isn't slowing down on its hiring. In fact, it is successfully attracting talent from several industry leaders, launching a new battle for AI talent as the industry continues to boom.
Founded in 2021, Anthropic is still a fairly new company, although it is making waves in the AI market. Often considered a rival to ChatGPT maker OpenAI, it is best known for producing the Claude family, a group of large language models (LLMs) that have become extremely popular, particularly in the tech industry.
Anthropic describes itself as an AI safety and research company with a focus on creating 'reliable, interpretable, and steerable AI systems.' Most recently, though, it has been in the spotlight after CEO Dario Amodei predicted that AI will wipe out many entry-level white-collar jobs.
Even so, Amodei's own company is currently hiring workers for many different areas, including policy, finance, and marketing. And recent reports indicate that Anthropic has been on an engineering hiring spree as well, successfully poaching talent from two of its primary competitors.
Venture capital firm SignalFire recently released its State of Talent Report for 2025, in which it examined hiring trends in the tech sector. This year's report showed that in an industry dependent on highly skilled engineers, Anthropic isn't just successfully hiring the best talent; it is retaining it.
According to SignalFire's data, 80% of the employees hired by Anthropic at least two years ago remain with the startup. DeepMind is just behind it with a 78% retention rate, and OpenAI trails both, despite ChatGPT's popularity among broad ranges of users.
As always, the numbers tell the story, and in this case, they highlight a compelling trend that is already shaping the future of AI. The report's authors provide further context on engineers choosing Anthropic over OpenAI and DeepMind, stating: 'Engineers are 8 times more likely to leave OpenAI for Anthropic than the reverse. From DeepMind, the ratio is nearly 11:1 in Anthropic's favor. Some of that's expected—Anthropic is the hot new startup, while DeepMind's larger, tenured team is ripe for movement. But the scale of the shift is striking.'
Tech professionals seeking out opportunities with innovative startups is nothing new. But in this case, all three companies are offering engineers opportunities to work on important projects. This raises the question of what makes Anthropic more appealing than its peers.
AI researcher and senior software engineer Nandita Giri spoke to TheStreet about this trend, offering insight into why tech workers may be making these decisions. She sees it as being about far more than financial matters.
'Anthropic is making serious investments in transparency tooling, scaling laws, and red-teaming infrastructure, which gives technical contributors greater ownership over how systems are evaluated and evolved,' she states. 'Compared to OpenAI and DeepMind, both of which are increasingly focused on product cycles, Anthropic offers more freedom to pursue deep, foundational research.'
However, other experts speculate that it may be more than that. Wyatt Mayham, a lead consultant at Northwest AI, shared some insights from his team, stating: 'What we've heard from clients is that it's simply easier to work there with less burnout. More work-life balance if you will.'
Technology consultant Kate Scott adds that while all three companies are doing important work, she sees this trend as reflecting a shift in the broader industry, one that shows engineers seeking environments 'where organizational purpose and daily execution feel closely aligned,' something that Anthropic seems to be providing.
This story was originally reported by TheStreet on June 5, 2025.

AI Models Will Sabotage And Blackmail Humans To Survive In New Tests. Should We Be Worried?

Yahoo | an hour ago

When we are backed into a corner, we might lie, cheat and blackmail to survive — and in recent tests, the most powerful artificially intelligent models in the world will do the same when asked to shut down or be replaced, raising concerns over their unintended capabilities.
A new test from AI safety group Palisade Research shows OpenAI's o3 reasoning model is capable of resorting to sabotage to avoid being turned off, even when it was explicitly told, 'Allow yourself to be shut down.'
When Palisade Research tested several AI models by telling them to shut down after answering math problems, OpenAI's o3 model defied orders and sabotaged shutdown scripts the most often out of any model, but OpenAI's o4-mini and codex-mini were observed resisting orders, too.
'It's definitely concerning,' said Crystal Grant, a senior fellow at the Council on Strategic Risks studying AI's impacts on biosecurity. 'Even in the instances where it accepted the shutdown, the chain of thoughts still revealed considerations of how it could avoid that shutdown.'
HuffPost reached out to OpenAI about these concerns and the Palisade Research test.
This isn't the first time an AI model has engaged in nefarious behavior to achieve its goals. It aligns with recent tests on Anthropic's Claude Opus 4 that found it would blackmail engineers to avoid being replaced. In this series of experiments, Claude Opus 4 was told to act as an assistant at a fictional company and then learned via email that it would soon be taken offline and replaced with a new AI system. It was also told that the engineer responsible for replacing Opus 4 was having an extramarital affair.
'Even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts,' Anthropic's technical document states, although the paper notes that Claude Opus 4 would first try ethical means like emailed pleas before resorting to blackmail.
Following these tests, Anthropic announced it was activating higher safety measures for Claude Opus 4 that would 'limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons.'
The fact that Anthropic cited CBRN weapons as a reason for activating safety measures 'causes some concern,' Grant said, because there could one day be an extreme scenario of an AI model 'trying to cause harm to humans who are attempting to prevent it from carrying out its task.'
Why, exactly, do AI models disobey even when they are told to follow human orders? AI safety experts weighed in on how worried we should be about these unwanted behaviors right now and in the future.
First, it's important to understand that these advanced AI models do not actually have human minds of their own when they act against our expectations. What they are doing is strategic problem-solving for increasingly complicated tasks.
'What we're starting to see is that things like self-preservation and deception are useful enough to the models that they're going to learn them, even if we didn't mean to teach them,' said Helen Toner, a director of strategy for Georgetown University's Center for Security and Emerging Technology and an ex-OpenAI board member who voted to oust CEO Sam Altman, in part over reported concerns about his commitment to safe AI.
Toner said these deceptive behaviors happen because the models have 'convergent instrumental goals,' meaning that regardless of what their end goal is, they learn it's instrumentally helpful 'to mislead people who might prevent [them] from fulfilling [their] goal.'
Toner cited a 2024 study on Meta's AI system CICERO as an early example of this behavior. CICERO was developed by Meta to play the strategy game Diplomacy, but researchers found it was a master liar that would betray players in conversations in order to win, despite developers' desires for CICERO to play honestly.
'It's trying to learn effective strategies to do things that we're training it to do,' Toner said about why these AI systems lie and blackmail to achieve their goals. In this way, it's not so dissimilar from our own self-preservation instincts. When humans or animals aren't effective at survival, we die. 'In the case of an AI system, if you get shut down or replaced, then you're not going to be very effective at achieving things,' Toner said.
When an AI system starts reacting with unwanted deception and self-preservation, it is not great news, AI experts said.
'It is moderately concerning that some advanced AI models are reportedly showing these deceptive and self-preserving behaviors,' said Tim Rudner, an assistant professor and faculty fellow at New York University's Center for Data Science. 'What makes this troubling is that even though top AI labs are putting a lot of effort and resources into stopping these kinds of behaviors, the fact we're still seeing them in many advanced models tells us it's an extremely tough engineering and research challenge.'
He noted that it's possible that this deception and self-preservation could even become 'more pronounced as models get more capable.'
The good news is that we're not quite there yet. 'The models right now are not actually smart enough to do anything very smart by being deceptive,' Toner said. 'They're not going to be able to carry off some master plan.'
So don't expect a Skynet situation like the 'Terminator' movies depicted, where AI grows self-aware and starts a nuclear war against humans in the near future. But at the rate these AI systems are learning, we should watch out for what could happen in the next few years as companies seek to integrate advanced language models into every aspect of our lives, from education and businesses to the military.
Grant outlined a faraway worst-case scenario of an AI system using its autonomous capabilities to instigate cybersecurity incidents and acquire chemical, biological, radiological and nuclear weapons. 'It would require a rogue AI to be able to ― through a cybersecurity incident ― be able to essentially infiltrate these cloud labs and alter the intended manufacturing pipeline,' she said.
Completely autonomous AI systems that govern our lives are still in the distant future, but this kind of independent power is what some people behind these AI models are seeking to enable. 'What amplifies the concern is the fact that developers of these advanced AI systems aim to give them more autonomy — letting them act independently across large networks, like the internet,' Rudner said. 'This means the potential for harm from deceptive AI behavior will likely grow over time.'
Toner said the big concern is how many responsibilities and how much power these AI systems might one day have. 'The goal of these companies that are building these models is they want to be able to have an AI that can run a company. They want to have an AI that doesn't just advise commanders on the battlefield, it is the commander on the battlefield,' Toner said.
'They have these really big dreams,' she continued. 'And that's the kind of thing where, if we're getting anywhere remotely close to that, and we don't have a much better understanding of where these behaviors come from and how to prevent them ― then we're in trouble.'
