The Definition of Transfer Learning

Yahoo · 03-04-2025

This article is published by AllBusiness.com, a partner of TIME.
Transfer learning is a machine learning technique that allows a model trained on one task to be repurposed or fine-tuned for a related task, drastically reducing the amount of data and computational resources needed.
This method leverages pre-trained models on large datasets to perform well in new, often smaller, domains with limited labeled data. It has become increasingly popular in fields such as natural language processing, computer vision, and speech recognition, where vast amounts of data and time are typically required for training models from scratch.
Pre-trained models: In transfer learning, models are initially trained on large datasets, often unrelated to the target task. For example, natural language processing models such as BERT and GPT-4o are pre-trained on large, diverse collections of text.
Fine-tuning: After training on a large dataset, the model is fine-tuned on a smaller, domain-specific dataset. This involves adjusting the weights of the neural network to optimize performance for the new task.
Feature extraction: The lower layers of a neural network trained on a large dataset capture general features (such as edges and textures in images), while the higher layers learn features specific to a task; in practice, the lower layers are often frozen and reused, and only the upper layers are replaced or retrained for the target task (a minimal code sketch of this pattern appears after this list).
Domain adaptation: Transfer learning allows models to adapt to tasks in a different but related domain. For example, a model trained on general photographs can be fine-tuned for specialized domains such as medical imaging or satellite imagery.
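The freeze-and-retrain pattern described above can be made concrete with a short Python sketch using PyTorch and torchvision. This is a minimal illustration under assumptions: NUM_CLASSES and the commented-out train_loader loop are hypothetical placeholders for a real, small, domain-specific dataset.

```python
# Minimal sketch of transfer learning for image classification with
# PyTorch/torchvision. NUM_CLASSES and train_loader are placeholders
# for a real, small, domain-specific dataset.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # hypothetical number of categories in the new task

# 1. Pre-trained model: start from weights learned on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# 2. Feature extraction: freeze the lower layers, which already
#    capture general features such as edges and textures.
for param in model.parameters():
    param.requires_grad = False

# 3. Fine-tuning: replace the final layer to match the new task and
#    train only this part on the smaller dataset.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Training loop over the domain-specific data (train_loader assumed):
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```

Because only the small replacement head is trained while the frozen layers supply general features, a setup like this typically needs far less labeled data and compute than training the whole network from scratch.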
Image classification: A model trained on a large image dataset such as ImageNet can be repurposed for a new, smaller dataset.
Natural Language Processing (NLP): In NLP, large models like GPT-4o and BERT are trained on billions of words from the internet. These pre-trained models can then be fine-tuned for specific tasks such as sentiment analysis, question-answering, or text summarization with a much smaller amount of task-specific data (a fine-tuning sketch for sentiment analysis appears after these examples).
Speech recognition: A speech recognition system trained on a broad dataset can be fine-tuned for recognizing specific accents or dialects in different languages. For example, a general English speech recognition system could be adapted to recognize Australian English or Indian English with limited labeled data.
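As an illustration of the NLP case, the following sketch fine-tunes a pre-trained BERT model for sentiment analysis with the Hugging Face transformers library. It is a minimal example under assumptions: the two labeled sentences stand in for a real task-specific dataset, and the hyperparameters are arbitrary.

```python
# Illustrative sketch: fine-tuning a pre-trained BERT model for
# sentiment analysis with the Hugging Face transformers library.
# The two example sentences are stand-ins for a real labeled dataset.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0 = negative, 1 = positive

texts = ["I loved this product.", "This was a waste of money."]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few passes over the tiny labeled set
    outputs = model(**batch, labels=labels)  # loss computed internally
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference on a new sentence after fine-tuning.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("Great value!", return_tensors="pt")).logits
print(logits.argmax(dim=-1).item())  # 1 would indicate positive sentiment
```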
Reduced training time: Since the model has already learned general features from the pre-training phase, the training process for a new task is much faster, often requiring fewer resources and less time.
Less data required: Transfer learning allows models to achieve high performance even with a limited amount of labeled data, making it particularly useful in situations where data collection is expensive or time-consuming.
Better performance with small datasets: Transfer learning often results in better performance on smaller datasets than training a model from scratch, because the model has already learned a robust representation from the large dataset.
Cross-domain applicability: Knowledge from one domain (e.g., general image recognition) can be applied to a related domain (e.g., medical imaging), widening the range of applications for pre-trained models.
Task similarity requirement: Transfer learning works best when the source task (the one used to pre-train the model) is similar to the target task. If the two tasks are very different, transfer learning may not be effective or may even degrade performance.
Overfitting risk: When fine-tuning a model on a small dataset, there is a risk of overfitting, where the model becomes too specialized to the limited new data and fails to generalize to unseen examples (common mitigations are sketched after this list).
Computational resource requirements for pre-training: Although transfer learning reduces the resources needed for fine-tuning, pre-training large models on vast datasets is still computationally expensive and often requires high-performance hardware such as GPUs or TPUs.
Knowledge transfer limitations: Not all knowledge learned from one domain can be transferred effectively to another. For instance, a model trained on natural images may not transfer well to more specialized areas, like recognizing satellite images, where features are quite different.
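Common ways to manage the overfitting risk mentioned above include freezing most of the pre-trained network, fine-tuning with a small learning rate and weight decay, and stopping early when the validation loss stops improving. The sketch below illustrates these practices under assumptions; the should_stop helper and the val_losses list it inspects are hypothetical placeholders for a real validation loop.

```python
# Sketch of common safeguards against overfitting when fine-tuning on a
# small dataset: freeze most layers, use a small learning rate with
# weight decay, and stop early. val_losses is a hypothetical list of
# per-epoch validation losses computed in an external training loop.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze everything except the final classification layer ("fc").
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad],
    lr=1e-4, weight_decay=0.01)  # small LR + weight decay as regularizers

def should_stop(val_losses, patience=3):
    """Early stopping: stop once the best validation loss is `patience`
    or more epochs old."""
    best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__)
    return len(val_losses) - 1 - best_epoch >= patience
```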
Transfer learning is a powerful technique in machine learning, allowing models to adapt to new tasks efficiently by leveraging pre-trained knowledge.
This approach not only reduces the need for large amounts of labeled data but also accelerates the development of AI systems across various domains, from healthcare to NLP.
However, it does have its limitations, especially when the source and target tasks are not closely related or when the pre-training phase is highly resource-intensive.
Despite these challenges, transfer learning remains one of the most effective methods for improving model performance and accelerating AI research in numerous fields.
Copyright © by AllBusiness.com. All Rights Reserved
Contact us at letters@time.com.
