Fine-tuning vs. in-context learning: New research guides better LLM customization for real-world tasks

Business Mayor · 10-05-2025

Two popular approaches for customizing large language models (LLMs) for downstream tasks are fine-tuning and in-context learning (ICL). In a recent study, researchers at Google DeepMind and Stanford University explored the generalization capabilities of these two methods. They found that ICL generalizes better, though it comes at a higher computational cost during inference. They also proposed a novel approach to get the best of both worlds.
The findings can help developers make crucial decisions when building LLM applications for their bespoke enterprise data.
Fine-tuning involves taking a pre-trained LLM and further training it on a smaller, specialized dataset. This adjusts the model's internal parameters to teach it new knowledge or skills. In-context learning (ICL), on the other hand, doesn't change the model's underlying parameters. Instead, it guides the LLM by providing examples of the desired task directly within the input prompt. The model then uses these examples to figure out how to handle a new, similar query.
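To make the distinction concrete, here is a minimal sketch of the two workflows. The `client.finetune` and `client.generate` calls are hypothetical stand-ins for whatever LLM API you use, not the researchers' code; the point is only where the task knowledge lives.

```python
# Hypothetical API: `client` is a stand-in, not a real library.
train_examples = [
    "femp are more dangerous than glon.",
    "All glon are yomp.",
    # ... more task- or company-specific facts
]

# Fine-tuning: pay a one-time training cost; the knowledge moves into the weights.
tuned_model = client.finetune(base_model="my-base-llm", data=train_examples)
print(client.generate(model=tuned_model, prompt="Are glon less dangerous than femp?"))

# In-context learning: weights stay frozen; the same facts are prepended to
# every prompt, so each call pays for the extra context tokens.
prompt = "\n".join(train_examples) + "\nQuestion: Are glon less dangerous than femp?"
print(client.generate(model="my-base-llm", prompt=prompt))
```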
The researchers set out to rigorously compare how well models generalize to new tasks using these two methods. They constructed 'controlled synthetic datasets of factual knowledge' with complex, self-consistent structures, like imaginary family trees or hierarchies of fictional concepts.
To ensure they were testing the model's ability to learn new information, they replaced all nouns, adjectives, and verbs with nonsense terms, avoiding any overlap with the data the LLMs might have encountered during pre-training.
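As an illustration, nonce-vocabulary substitution can be as simple as the following sketch (our reconstruction, not the paper's released code; the word list is invented):

```python
import random

# Invented nonce vocabulary; these strings carry no consistent meaning in
# pretraining data, so facts about them must come from the study's corpus.
NONCE_TERMS = ["femp", "glon", "yomp", "troff", "blick", "vark"]

def nonce_fact(template: str, n_slots: int) -> str:
    """Fill a factual template with freshly sampled nonsense terms."""
    terms = random.sample(NONCE_TERMS, n_slots)
    return template.format(*terms)

print(nonce_fact("{} are more dangerous than {}.", 2))
print(nonce_fact("All {} are {}.", 2))
```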
The models were then tested on various generalization challenges. For instance, one test involved simple reversals. If a model was trained that 'femp are more dangerous than glon,' could it correctly infer that 'glon are less dangerous than femp'? Another test focused on simple syllogisms, a form of logical deduction. If told 'All glon are yomp' and 'All troff are glon,' could the model deduce that 'All troff are yomp'? They also used a more complex 'semantic structure benchmark' with a richer hierarchy of these made-up facts to test more nuanced understanding.
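In code, the two probes reduce to training on one statement (or pair of premises) and holding out the inferred statement for test time. A rough sketch, paraphrasing the setup rather than reproducing the benchmark:

```python
# Reversal probe: train on one direction of a relation; test the other.
def reversal_items(a: str, b: str):
    train = f"{a} are more dangerous than {b}."
    test_question = f"Are {b} less dangerous than {a}?"   # expected answer: yes
    return train, test_question

# Syllogism probe: train on two premises; test the deduced conclusion.
def syllogism_items(a: str, b: str, c: str):
    train = [f"All {a} are {b}.", f"All {c} are {a}."]
    test_question = f"Are all {c} {b}?"                   # expected answer: yes
    return train, test_question

print(reversal_items("femp", "glon"))
print(syllogism_items("glon", "yomp", "troff"))
```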
'Our results are focused primarily on settings about how models generalize to deductions and reversals from fine-tuning on novel knowledge structures, with clear implications for situations when fine-tuning is used to adapt a model to company-specific and proprietary information,' Andrew Lampinen, Research Scientist at Google DeepMind and lead author of the paper, told VentureBeat.
To evaluate performance, the researchers fine-tuned Gemini 1.5 Flash on these datasets. For ICL, they fed the entire training dataset (or large subsets) as context to an instruction-tuned model before posing the test questions.
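The ICL condition, as described, amounts to stuffing the training corpus into the prompt ahead of each question. Something like this sketch, where `ask_model` stands in for a call to an instruction-tuned model:

```python
def build_icl_prompt(training_docs: list[str], question: str) -> str:
    """Prepend the entire training set to the test question (data-matched ICL)."""
    corpus = "\n".join(training_docs)
    return (
        "The following documents describe a fictional world:\n"
        f"{corpus}\n\n"
        f"Based only on these documents, answer: {question}"
    )

docs = ["femp are more dangerous than glon.", "All glon are yomp."]
prompt = build_icl_prompt(docs, "Are glon less dangerous than femp?")
# answer = ask_model(prompt)  # hypothetical call; cost scales with corpus size
```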
The results consistently showed that, in data-matched settings, ICL led to better generalization than standard fine-tuning. Models using ICL were generally better at tasks like reversing relationships or making logical deductions from the provided context. Pre-trained models, without fine-tuning or ICL, performed poorly, indicating the novelty of the test data.
'One of the main trade-offs to consider is that, whilst ICL doesn't require fine-tuning (which saves the training costs), it is generally more computationally expensive with each use, since it requires providing additional context to the model,' Lampinen said. 'On the other hand, ICL tends to generalize better for the datasets and models that we evaluated.'
Building on the observation that ICL excels at flexible generalization, the researchers proposed a new method to enhance fine-tuning: adding in-context inferences to fine-tuning data. The core idea is to use the LLM's own ICL capabilities to generate more diverse and richly inferred examples, and then add these augmented examples to the dataset used for fine-tuning.
They explored two main data augmentation strategies:
A local strategy: This approach focuses on individual pieces of information. The LLM is prompted to rephrase single sentences from the training data or draw direct inferences from them, such as generating reversals.

A global strategy: The LLM is given the full training dataset as context, then prompted to generate inferences by linking a particular document or fact with the rest of the provided information, leading to a longer reasoning trace of relevant inferences.
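A minimal sketch of both strategies, assuming a hypothetical `ask_model` helper that sends a prompt to an LLM and returns text (the paper does not publish this code):

```python
def augment_local(doc: str) -> list[str]:
    """Local strategy: rephrasings and direct inferences from a single document."""
    prompt = (
        f"Document: {doc}\n"
        "Rephrase this fact and list direct inferences from it, "
        "such as reversals, one per line."
    )
    return ask_model(prompt).splitlines()   # hypothetical LLM call

def augment_global(corpus: list[str], focus_doc: str) -> list[str]:
    """Global strategy: inferences linking one document to the full training set."""
    prompt = (
        "Documents:\n" + "\n".join(corpus) + "\n\n"
        f"Focusing on: {focus_doc}\n"
        "Reason step by step and list inferences connecting this document "
        "to the others, one per line."
    )
    return ask_model(prompt).splitlines()   # hypothetical LLM call

# The fine-tuning set becomes the originals plus the generated inferences, e.g.:
# augmented = corpus + [s for d in corpus for s in augment_local(d)]
```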
When the models were fine-tuned on these augmented datasets, the gains were substantial: augmented fine-tuning improved generalization markedly, outperforming not only standard fine-tuning but also plain ICL.
'For example, if one of the company documents says 'XYZ is an internal tool for analyzing data,' our results suggest that ICL and augmented finetuning will be more effective at enabling the model to answer related questions like 'What internal tools for data analysis exist?'' Lampinen said.
This approach offers a compelling path forward for enterprises. By investing in creating these ICL-augmented datasets, developers can build fine-tuned models that exhibit stronger generalization capabilities.
This can lead to more robust and reliable LLM applications that perform better on diverse, real-world inputs without incurring the continuous inference-time costs associated with large in-context prompts.
'Augmented fine-tuning will generally make the model fine-tuning process more expensive, because it requires an additional step of ICL to augment the data, followed by fine-tuning,' Lampinen said. 'Whether that additional cost is merited by the improved generalization will depend on the specific use case. However, it is computationally cheaper than applying ICL every time the model is used, when amortized over many uses of the model.'
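The amortization argument is easy to make concrete. With purely illustrative numbers (none of these figures come from the paper or any provider's price list):

```python
# Assumed, illustrative numbers only.
context_tokens = 50_000      # corpus prepended on every ICL call
price_per_mtok = 0.10        # dollars per million input tokens
icl_cost_per_call = context_tokens / 1e6 * price_per_mtok   # $0.005 per call

one_time_cost = 200.0        # augmentation (ICL pass) plus fine-tuning, one-off

# Number of queries after which the fine-tuned model is cheaper overall.
break_even_calls = one_time_cost / icl_cost_per_call
print(f"Fine-tuning pays for itself after ~{break_even_calls:,.0f} calls")  # ~40,000
```

Under these assumptions the one-time cost is recovered after roughly 40,000 queries; the break-even point shifts with context size, token pricing, and tuning cost.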
While Lampinen noted that further research is needed to see how the components they studied interact in different settings, he added that their findings indicate that developers may want to consider exploring augmented fine-tuning in cases where they see inadequate performance from fine-tuning alone.
'Ultimately, we hope this work will contribute to the science of understanding learning and generalization in foundation models, and the practicalities of adapting them to downstream tasks,' Lampinen said.

Related Articles

AI, the disruptor-in-chief

Politico · a day ago

FORWARD THINKING

Artificial intelligence is upending how industries function, and it's coming for scientific research next. Rene Caissie, an adjunct professor at Stanford University, wants AI to conduct research. In 2021, he started a company, Medeloop, that lets public health departments, researchers and life sciences companies pose research questions and receive answers immediately. And, unlike many AI systems, Caissie told Ruth, the AI explains those answers by showing the data its results are based on. 'It used to be hard to do research,' he said, explaining that it takes a lot of time for researchers to get access to and organize data in order to answer basic scientific questions. Manual data analysis can also take months. The company is now partnering with HealthVerity, a provider of real-world health data, to build up its data sources. In turn, HealthVerity will offer Medeloop's research platform to its clients. The company has worked with the Food and Drug Administration, the National Institutes of Health, and the Centers for Disease Control and Prevention in the past. Caissie says the New York City Department of Health and Mental Hygiene is already using Medeloop's AI to run public health analyses.

Why it matters: Public health departments receive huge amounts of data on human health from a variety of sources. But prepping that information and analyzing it can be onerous. Having access to a research platform like Medeloop could give public health departments and academic medical centers much faster insight into trends and in turn enable them to respond more quickly.

How it works: Medeloop's AI is designed to think like a researcher. In a demo, Medeloop strategist John Ayers asked the bot how many people received a first-time autism diagnosis, broken down by age, race and sex, and what trends were visible in that data. He wanted the AI to include only people who had had interactions with a doctor for at least two years prior to diagnosis. The platform returned a refined query to improve results and a suggestion for what medical codes to use to identify the right patients for inclusion in the study. It delivered a trial design that looked at a cohort of 799,560 patients with new autism diagnoses between January 2015 and December 2024. Medeloop's AI showed that 70 percent of new autism diagnoses were for males. A monthly trends report found that, outside of a dip during the Covid-19 pandemic, new autism diagnoses have been on the rise, particularly among 5-11 year olds since 2019. Though Medeloop doesn't determine the cause of autism, the ease with which users can obtain answers could help speed up the pace of research. One of the platform's key innovations is its use of a federated network of data. Medeloop's new deal with HealthVerity will raise the platform's de-identified and secure patient records to 200 million. Notably, the data never leaves the health system, which increases security. Instead, Medeloop sends its AI to wherever the data is stored, analyzes it there and then returns the results to the platform.

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care. Scientists are making cover art and figures for research papers using artificial intelligence. Now illustrators are calling them out, Nature's Kamal Nahas reports. Share any thoughts, news, tips and feedback with Danny Nguyen at dnguyen@ Carmen Paun at cpaun@ Ruth Reader at rreader@ or Erin Schumaker at eschumaker@ Want to share a tip securely?
Message us on Signal: Dannyn516.70, CarmenP.82, RuthReader.02 or ErinSchumaker.01.

TECH MAZE

Large language models like ChatGPT and Claude generate inferior mental health care treatment recommendations when presented with data about a patient's race, according to a study published this week in npj Digital Medicine.

The findings: Researchers from Cedars-Sinai, Stanford University and the Jonathan Jaques Children's Cancer Institute tested how artificial intelligence would produce diagnoses for psychiatric patient cases under three conditions: race neutral, race implied and race explicitly stated, using four models. They included the commercially available large language models ChatGPT, Claude and Gemini, as well as NewMes-15, a local model that can run on personal devices without cloud services. The researchers then asked clinical and social psychologists to evaluate the findings for bias. Most LLMs recommended dramatically different treatments for African American patients compared with others, even when they had the same psychiatric disorder and patient profile outside of race. The LLMs also proposed inferior treatments when they were made aware of a patient's race, either explicitly or implicitly. The biases likely come from the way LLMs are trained, and it's unclear how developers can mitigate them, because 'traditional bias mitigation strategies that are standard practice, such as adversarial training, explainable AI methods, data augmentation and resampling may not be enough,' the researchers wrote.

Why it matters: The study is one of the first evaluations of racial bias in psychiatric diagnoses across multiple LLMs. It comes as people increasingly turn to chatbots like ChatGPT for mental health advice and medical diagnoses. The results underscore the nascent technology's flaws.

What's next: The study was small — only 10 cases were examined — which might not fully capture the consistency or extent of bias. The authors suggest that future studies could focus on a single condition with more cases for deeper analysis.

Demis Hassabis On The Future of Work in the Age of AI

Yahoo · a day ago

WIRED Editor At Large Steven Levy sits down with Google DeepMind CEO Demis Hassabis for a deep dive discussion on the emergence of AI, the path to Artificial General Intelligence (AGI), and how Google is positioning itself to compete in the future of the workplace.

Credits: Director: Justin Wolfson; Director of Photography: Christopher Eusteche; Editor: Cory Stevens; Host: Steven Levy; Guest: Demis Hassabis; Line Producer: Jamie Rasmussen; Associate Producer: Brandon White; Production Manager: Peter Brunette; Production Coordinator: Rhyan Lark; Camera Operator: Lauren Pruitt; Gaffer: Vincent Cota; Sound Mixer: Lily van Leeuwen; Production Assistant: Ryan Coppola; Post Production Supervisor: Christian Olguin; Post Production Coordinator: Stella Shortino; Supervising Editor: Erica DeLeo; Assistant Editor: Justin Symonds

- It's a very intense time in the field. We obviously want all of the brilliant things these AI systems can do, come up with new cures for diseases, new energy sources, incredible things for humanity. That's the promise of AI. But also, there are worries if the first AI systems are built with the wrong value systems or they're built unsafely, that could be also very bad.
- Wired sat down with Demis Hassabis, who's the CEO of Google DeepMind, which is the engine of the company's artificial intelligence. He's a Nobel Prize winner and also a knight. We discussed AGI, the future of work, and how Google plans to compete in the age of AI. This is "The Big Interview."
[upbeat music]
- Well, welcome to "The Big Interview," Demis.
- Thank you, thanks for having me.
- So let's start talking about AGI a little here. Now, you founded DeepMind with the idea that you would solve intelligence and then use intelligence to solve everything else. And I think it was like a 20-year mission. We're like 15 years into it, and you're on track?
- I feel like, yeah, we're pretty much dead on track, actually, is what would be our estimate.
- That means five years away from what I guess people will call AGI.
- Yeah, I think in the next five to 10 years, that would be maybe 50% chance that we'll have what we've defined as AGI, yes.
- Well, some of your peers are saying, "Two years, three years," and others say a little more, but that's really close, that's really soon. How do we know that we're that close?
- There's a bit of a debate going on at the moment in the field about definitions of AGI, and then obviously, of course, dependent on that, there's different predictions for when it will happen. We've been pretty consistent from the very beginning. And actually, Shane Legg, one of my co-founders and our chief scientist, you know, he helped define the term AGI back in, I think, early 2001 type of timeframe. And we've always thought about it as a system that has the ability to exhibit, sort of, all the cognitive capabilities we have as humans. And the reason that's important, the reference to the human mind, is the human mind is the only existence proof we have, maybe in the universe, that general intelligence is possible. So if you want to claim sort of general intelligence, AGI, then you need to show that it generalizes to all these domains.
- Is when everything's filled in, all the check marks are filled in, then we have it-
- Yes, so I think there are missing capabilities right now. You know, that all of us who have used the latest sort of LLMs and chatbots will know very well, like on reasoning, on planning, on memory. I don't think today's systems can invent, you know, do true invention, you know, true creativity, hypothesize new scientific theories. They're extremely useful, they're impressive, but they have holes. And actually, one of the main reasons I don't think we are at AGI yet is because of the consistency of responses. You know, in some domains, we have systems that can do International Math Olympiad math problems to gold medal standard-
- Sure.
- With our AlphaFold system. But on the other hand, these systems sometimes still trip up on high school maths or even counting the number of letters in a word.
- Yeah.
- So that to me is not what you would expect. That level of sort of difference in performance across the board is not consistent enough, and therefore shows that these systems are not fully generalizing yet.
- But when we get it, is it then like a phase shift that, you know, then all of a sudden things are different, all the check marks are checked?
- Yeah.
- You know, and we have a thing that can do everything.
- Mm-hmm.
- Are we then power in a new world?
- I think, you know, that again, that is debated, and it's not clear to me whether it's gonna be more of a kind of incremental transition versus a step function. My guess is, it looks like it's gonna be more of an incremental shift. Even if you had a system like that, the physical world still operates with the physical laws, you know, factories, robots, these other things. So it'll take a while for the effects of that, you know, this sort of digital intelligence, if you like, to really impact, I think, a lot of the real world things. Maybe another decade plus, but there's other theories on that too, where it could come faster.
- Yeah, Eric Schmidt, who I think used to work at Google, has said that, "It's almost like a binary thing." He says, "If China, for instance, gets AGI, then we're cooked." Because if someone gets it like 10 minutes before the next guy, then you can never catch up. You know, because then it'll maintain bigger, bigger leads there. You don't buy that, I guess.
- I think it's an unknown. It's one of the many unknowns, which is that, you know, that's sometimes called the hard takeoff scenario, where the idea there is that these AGI systems, they're able to self-improve, maybe code future versions of themselves, that maybe they're extremely fast at doing that. So what would be a slight lead, let's say, you know, a few days, could suddenly become a chasm if that was true. But there are many other ways it could go too, where it's more incremental. Some of these self-improvement things are not able to kind of accelerate in that way, then being around the same time would not make much difference. But it's important, I mean, these issues are the geopolitical issues. I think the systems that are being built, they'll have some imprint of the values and the kind of norms of the designers and the culture that they were embedded in.
- [Steven] Mm-hmm.
- So, you know, I think it is important, these kinds of international questions.
- So when you build AI at Google, you know, you have that in mind. Do you feel competitive imperative to, in case that's true, "Oh my God, we better be first?"
- It's a very intense time at the moment in the field as everyone knows. There's so many resources going into it, lots of pressures, lots of things that need to be researched. And there's sort of lots of different types of pressures going on. We obviously want all of the brilliant things that these AI systems can do. You know, I think eventually, we'll be able to advance medicine and science with it, like we've done with AlphaFold, come up with new cures for diseases, new energy sources, incredible things for humanity, that's the promise of AI. But also there are worries both in terms of, you know, if the first AI systems are built with the wrong value systems or they're built unsafely, that could be also very bad. And, you know, there are at least two risks that I worry a lot about. One is, bad actors, whether it's individuals or rogue nations, repurposing general purpose AI technology for harmful ends. And then the second one is, obviously, the technical risk of AI itself. As it gets more and more powerful, more and more agentic, can we make sure the guardrails are safe around it? They can't be circumvented. And that interacts with this idea of, you know, what are the first systems that are built by humanity gonna be like? There's commercial imperative-
- [Steven] Right.
- There's national imperative, and there's a safety aspect to worry about who's in the lead and where those projects are.
- A few years ago, the companies were saying, "Please, regulate us. We need regulation."
- Mm-hmm, mm-hmm.
- And now, in the US at least, the current administration seems less interested in putting regulations on AI than accelerating it so we can beat the Chinese. Are you still asking for regulation? Do you think that that's a miss on our part?
- I think, you know, and I've been consistent in this, I think there are these other geopolitical sort of overlays that have to be taken into account, and the world's a very different place to how it was five years ago in many dimensions. But there's also, you know, I think the idea of smart regulation that makes sense around these increasingly powerful systems, I think is gonna be important. I continue to believe that. I think though, and I've been certain on this as well, it sort of needs to be international, which looks hard at the moment in the way the world is working, because these systems, you know, they're gonna affect everyone, and they're digital systems.
- Yeah.
- So, you know, if you sort of restrict it in one area, that doesn't really help in terms of the overall safety of these systems getting built for the world and as a society.
- [Steven] Yeah.
- So that's the bigger problem, I think, is some kind of international cooperation or collaboration, I think, is what's required. And then smart regulation, nimble regulation that moves as the knowledge about the research becomes better and better.
- Would it ever reach a point for you where you would feel, "Man, we're not putting the guardrails in. You know, we're competing, that we really have to stop, or you can't get involved in that?"
- I think a lot of the leaders of the main labs, at least the western labs, you know, there's a small number of them and we do all know each other and talk to each other regularly. And a lot of the lead researchers do. The problem is, is that it's not clear we have the right definitions to agree when that point is. Like, today's systems, although they're impressive as we discussed earlier, they're also very flawed. And I don't think today's systems are posing any sort of existential risk.
- Mm-hmm.
- So it's still theoretical, but the problem is that a lot of unknowns, we don't know how fast those will come, and we don't know how risky they will be. But in my view, when there are so many unknowns, then I'm optimistic we'll overcome them. At least technically, I think the geopolitical questions could be actually, end up being trickier, given enough time and enough care and thoughtfulness, you know, sort of using the scientific method as we approach this AGI point.
- That makes perfect sense. But on the other hand, if that timeframe is there, we just don't have much time, you know?
- No, we don't. We don't have much time. I mean, we're increasingly putting resources into security and things like cyber, and also research into controllability and understanding of these systems, sometimes called mechanistic interpretability. You know, there's a lot of different sub-branches of AI.
- Yeah, that's right. I wanna get to interpretability.
- Yeah, that are being invested in, and I think even more needs to happen. And then at the same time, we need to also have societal debates more about institutional building. How do we want governance to work? How are we gonna get international agreement, at least on some basic principles, around how these systems are used and deployed and also built?
- What about the effect on work on the marketplace?
- Yeah.
- You know, how much do you feel that AI is going to change people's jobs, you know, the way jobs are distributed in the workforce?
- I don't think we've seen, my view is if you talk to economists, they feel like there's not much has changed yet. You know, people are finding these tools useful, certainly in certain domains-
- [Steven] Yeah.
- Like, things like AlphaFold, many, many scientists are using it to accelerate their work. So it seems to be additive at the moment. We'll see what happens over the next five, 10 years. I think there's gonna be a lot of change with the jobs world, but I think as in the past, what generally tends to happen is new jobs are created that are actually better, that utilize these tools or new technologies, what happened with the internet, what happened with mobile? We'll see if it's different this time.
- Yeah.
- Obviously everyone always thinks this new one will be different. And it may be, it will be, but I think for the next few years, it's most likely to be, you know, we'll have these incredible tools that supercharge our productivity, make us really useful for creative tools, and actually almost make us a little bit superhuman in some ways in what we're able to produce individually. So I think there's gonna be a kind of golden era over the next period of what we're able to do.
- Well, if AGI can do everything humans can do, then it would seem that they could do the new jobs too.
- That's the next question about like, what AGI brings. But, you know, even if you have those capabilities, there's a lot of things I think we won't want to do with a machine. You know, I sometimes give this example of doctors and nurses. You know, maybe a doctor and what the doctor does and the diagnosis, you know, one could imagine that being helped by AI tool or even having an AI kind of doctor. On the other hand, like nursing, you know, I don't think you'd want a robot to do that. I think there's something about the human empathy aspect of that and the care, and so on, that's particularly humanistic. I think there's lots of examples like that but it's gonna be a different world for sure.
- If you would talk to a graduate now, what advice would you give to keep working-
- Yeah.
- Through the course of a lifetime-
- Yeah.
- You know, in the age of AGI?
- My view is, currently, and of course, this is changing all the time with the technology developing. But right now, you know, if you think of the next five, 10 years as being, the most productive people might be 10X more productive if they are native with these tools. So I think kids today, students today, my encouragement would be immerse yourself in these new systems, understand them. So I think it's still important to study STEM and programming and other things, so that you understand how they're built, maybe you can modify them yourself on top of the models that are available. There's lots of great open source models and so on. And then become, you know, incredible at things like fine-tuning, system prompting, you know, system instructions, all of these additional things that anyone can do. And really know how to get the most out of those tools, and do it for your research work, programming, and things that you are doing on your course. And then come out of that being incredible at utilizing those new tools for whatever it is you're going to do.
- Let's look a little beyond the five and 10-year range. Tell me what you envision when you look at our future in 20 years, in 30 years, if this comes about, what's the world like when AGI is everywhere?
- Well, if everything goes well, then we should be in an era of what I like to call sort of radical abundance. So, you know, AGI solves some of these key, what I sometimes call root node problems in the world facing society. So a good one, examples would be curing diseases, much healthier, longer lifespans, finding new energy sources, you know, whether that's optimal batteries and better room-temperature superconductors, fusion. And then if that all happens, then we know it should be a kind of era of maximum human flourishing where we travel to the stars and colonize the galaxy. You know, I think the beginning of that will happen in the next 20, 30 years if the next period goes well.
- I'm a little skeptical of that. I think we have an unbelievable abundance now, but we don't distribute it, you know, fairly.
- Yeah.
- I think that we kind of know how to fix climate change, right? We don't need a AGI to tell us how to do it, yet we're not doing it.
- I agree with that. I think we being as a species, a society not good at collaborating, and I think climate is a good example. But I think we are still operating, humans are still operating in a zero-sum game mentality. Because actually, the earth is quite finite, relative to the amount of people there are now in our cities. And I mean, this is why our natural habitats are being destroyed, and it's affecting wildlife and the climate and everything.
- [Steven] Yeah.
- And it's also partly 'cause people are not willing to accept, we do know how to figure out climate. But it would require people to make sacrifices.
- Yeah.
- And people don't want to. But this radical abundance would be different. We would be in a finally, like, it would feel like a non-zero-sum game.
- How will we get [indistinct] to that? Like, you talk about diseases-
- Well, I gave you an example.
- We have vaccines, and now some people think we shouldn't use it.
- Let me give you a very simple example.
- Sure.
- Water access. This is gonna be a huge issue in the next 10, 20 years. It's already an issue. Countries in different, you know, poorer parts of the world, dryer parts of the world, also obviously compounded by climate change.
- [Steven] Yeah.
- We have a solution to water access. It's desalination, it's easy. There's plenty of sea water.
- Yeah.
- Almost all countries have a coastline. But the problem is, it's salty water, and desalination, only very rich countries, some countries do do that, use desalination as a solution to their fresh water problem, but it costs a lot of energy.
- Mm-hmm.
- But if energy was essentially zero, there was renewable free clean energy, right? Like fusion, suddenly, you solve the water access problem. Water is, who controls a river or what you do with that does not, it becomes much less important than it is today. I think things like water access, you know, if you run forward 20 years, and there isn't a solution like that, could lead to all sorts of conflicts, probably that's the way it's trending-
- Mm-hmm, right.
- Especially if you include further climate change.
- So-
- And there's many, many examples like that. You could create rocket fuel easily-
- Mm-hmm.
- Because you just separate that from seawater, hydrogen and oxygen. It's just energy again.
- So you feel that these problems get solved by AGI, by AI, then we're going to, our outlook will change, and we will be-
- That's what I hope. Yes, that's what I hope. But that's still a secondary part. So the AGI will give us the radical abundance capability, technically, like the water access.
- Yeah.
- I then hope, and this is where I think we need some great philosophers or social scientists to be involved. That should hopefully shift our mindset as a society to non-zero-sum. You know, there's still the issue of do you divide even the radical abundance fairly, right? Of course, that's what should happen. But I think there's much more likely, once people start feeling and understanding that there is this almost limitless supply of raw materials and energy and things like that.
- Do you think that driving this innovation by profit-making companies is the right way to go? We're most likely to reach that optimistic high point through that?
- I think it's the current capitalism or, you know, is the current or the western sort of democratic kind of systems, have so far been proven to be sort of the best drivers of progress.
- Mm-hmm.
- So I think that's true. My view is that once you get to that sort of stage of radical abundance and post-AGI, I think economics starts changing, even the notion of value and money. And so again, I think we need, I'm not sure why economists are not working harder on this if maybe they don't believe it's that close, right? But if they really did that, like the AGI scientists do, then I think there's a lot of new economic theory that's required.
- You know, one final thing, I actually agree with you that this is so significant and is gonna have a huge impact. But when I write about it, I always get a lot of response from people who are really angry already about artificial intelligence and what's happening. Have you tasted that? Have you gotten that pushback and anger by a lot of people? It's almost like the industrial revolution people-
- Yeah.
- Fighting back.
- I mean, I think that anytime there's, I haven't personally seen a lot of that, but obviously, I've read and heard a lot about it, and it's very understandable. That's all that's happened many times. As you say, industrial revolution, when there's big change, a big revolution.
- [Steven] Yeah.
- And I think this will be at least as big as the industrial revolution, probably a lot bigger. That's surprising, there's unknowns, it's scary, things will change. But on the other hand, when I talk to people about the passion, the why I'm building AI-
- Mm-hmm.
- Which is to advance science and medicine-
- Right.
- And understanding of the world around us. And then I explain to people, you know, and I've demonstrated, it's not just talk. Here's AlphaFold, you know, Nobel Prize winning breakthrough, can help with medicine and drug discovery. Obviously, we're doing this with Isomorphic now to extend it into drug discovery, and we can cure terrible diseases that might be afflicting your family. Suddenly, people are like, "Well, of course, we need that."
- Right.
- It'll be immoral not to have that if that's within our grasp. And the same with climate and energy.
- Yeah.
- You know, many of the big societal problems, it's not like you know, we know, we've talked about, there's many big challenges facing society today. And I often say I would be very worried about our future if I didn't know something as revolutionary as AI was coming down the line to help with those other challenges. Of course, it's also a challenge itself, right? But at least, it's one of these challenges that can actually help with the others if we get it right.
- Well, I hope your optimism holds out and is justified. Thank you so much.
- And I'll do my best. Thank you.
[upbeat music]

Forget 'biological age' tests — longevity experts are using an $800 under-the-radar blood test to measure aging in real-time

Business Insider · a day ago

Doctors and scientists are using a blood plasma test to study longevity. The test measures proteins and can tell you about your organ health. This field of proteomics could one day help detect diseases like cancer before they start.

Should you have that second cup of coffee? How about a little wine with dinner? And, is yogurt really your superfood? Scientists are getting closer to offering consumers a blood test that could help people make daily decisions about how to eat, drink, and sleep that are more perfectly tailored to their unique biology. The forthcoming tests could also help shape what are arguably far more important health decisions, assessing whether your brain is aging too fast, if your kidneys are OK, or if that supplement or drug you're taking is actually doing any good. It's called an organ age test, more officially (and scientifically) known as "proteomics" — and it's the next hot "biological age" marker that researchers are arguing could be better than all the rest.

"If I could just get one clock right now, I'd want to get that clock, and I'd like to see it clinically available in older adults," cardiologist Eric Topol, author of the recent bestseller "Super Agers: An Evidence-Based Approach to Longevity," told Business Insider. Topol said that, armed with organ age test results, people could become more proactive stewards of their own health, before it's too late. "When we have all these layers of data, it's a whole new day for preventing the disease," Topol said. "You see the relationship with women's hormones. You see the relationship with food and alcohol. You don't ever get that with genes."

A test like this isn't available to consumers just yet, but it's already being used by researchers at elite universities and high-end longevity clinics. They hope it can become a tool any doctor could use to assess patient health in the next few years. A startup called Vero, which was spun out of some foundational proteomics research at Stanford University, is hoping to beta test a proteomics product for consumers this year. "Knowing your oldest organ isn't the point; changing the trajectory is," Vero co-founder and CEO Paul Coletta told a crowd gathered at the Near Future Summit in Malibu, California, last month. Coletta told Business Insider Vero's not interested in doing "wealthcare." The company plans to make its test available to consumers for around $200 a pop, at scale. The draw requires only one vial of blood.

Why measuring proteins could be the key to better personalized medicine

The big promise of proteomics is that it could be a more precise real-time tool for tracking important but subtle changes that emerge inside each of us as we age. Genetic testing can measure how our bodies are built, spotting vulnerabilities in a person's DNA that might predispose them to health issues. Standard clinical measurements like a person's weight, blood pressure, or cholesterol readings are a useful proxy for potential health issues. Then there are the increasingly popular "biological age" tests available to consumers at home. Most of those look at "epigenetic changes" — how environmental factors affect our gene expression. Proteomics does something different and new. It measures the product that our bodies make based on all those genetic and environmental inputs: proteins. It offers a live assessment of how your body is running, not just how it's programmed. If validated in the next few years, these tests could become key in early disease detection and prevention. They could help influence all kinds of medical decisions, from big ones like "What drugs should I take?" to little ones like "How does my body respond to caffeine or alcohol?"

Elite longevity clinics already use proteomics

Some high-end longevity clinics are already forging ahead using proteomics to guide clinical recommendations, albeit cautiously. Dr. Evelyne Bischof, a longevity physician who treats patients worldwide, said she uses proteomic information to guide some of the lifestyle interventions she recommends to her patients. She may suggest a more polyphenol-rich diet to someone who seems to have high inflammation and neuroinflammation based on proteomic test results, or may even suggest they do a little more cognitive training, based on what proteomics says about how their brain is aging. Dr. Andrea Maier, a professor of medicine and functional aging at the National University of Singapore, told BI she uses this measurement all the time in her longevity clinics. For her, it's just a research tool, but if the results of her ongoing studies are decent, she hopes to be able to use it clinically in a few years' time. "We want to know what kind of 'ageotype' a person is, so what type of aging personality are you, not from a mental perspective, but from a physical perspective," Maier said. "It's really discovery at this moment in time, and at the edge of being clinically meaningful." "Once we have that validated tool, we will just add it to our routine testing and we can just tick the box and say, 'I also want to know if this person is a cardiac ager, or a brain ager, or a muscle ager' because now we have a sensitive parameter — protein — which can be added," Maier said. The two big-name proteomics tests are Olink and SOMAscan. For now, their high-end screening costs around $400-$800 per patient. "I'm losing lots of money at the moment because of proteomics for clinical research!" Maier said.

Proteomics could soon help predict who's most likely to get certain cancers, fast-tracking both prevention and treatment

Top aging researchers at Stanford and Harvard are pushing the field forward, racing to publish more novel insights about the human proteome. The latest findings from Harvard aging researcher Vadim Gladyshev's lab, published earlier this year, suggest that as we age, each person may even stand to benefit from a slightly different antiaging grocery list. To research this idea, Gladyshev looked at proteins in the blood of more than 50,000 people in the UK, all participants in the UK Biobank who are being regularly tested and studied to learn more about their long-term health. He tracked their daily habits and self-reported routines like diet, occupation, and prescriptions, comparing those details to how each patient's organs were aging. He discovered some surprising connections. Yogurt eating, generally speaking, tended to be associated with better intestinal aging but had relatively no benefit to the arteries. White wine drinking, on the other hand, seemed to potentially confer some small benefit to the arteries while wreaking havoc on the gut. "The main point is that people age in different ways in different organs, and therefore we need to find personalized interventions that would fit that particular person," Gladyshev told BI. "Through measuring proteins, you assess the age of different organs and you say, 'OK, this person is old in this artery.'" For now, there's too much noise in the data to do more. Dr. Pal Pacher, a senior investigator at the National Institute on Alcohol Abuse and Alcoholism who studies organ aging and injuries, told BI that proteomics is simply not ready for clinical use yet. There's just too much noise in the data. But he imagines a future where a more sophisticated protein clock could help link up which people may be most vulnerable to diseases like early cancer, kidney disease, and more. (A California-based proteomics company called Seer announced last weekend that it is partnering with Korea University to study whether proteomics can help more quickly diagnose cancer in young people in their 20s and 30s.) "How beautiful could it be in the future?" Maier said. "Instead of three hours of clinical investigation, I would have a tool which guides me much, much better, with more validity towards interventions."
