‘Safe route’ or ‘sushi route’ – 2 strategies to turn yuck to yum and convince people to eat unusual foods

Yahoo · 25-06-2025
What will the diets of the future look like? The answer depends in part on what foods Westerners can be persuaded to eat.
These consumers are increasingly being told their diets need to change. Current eating habits are unsustainable, and the global demand for meat is growing.
Recent years have seen increased interest and investment in what are called alternative proteins – products that can replace typical meats with more sustainable alternatives. One option is cultivated, or cultured, meat and seafood: muscle tissue grown in labs in bioreactors, using animal stem cells. Another approach involves replacing standard meat with such options as insects or plant-based imitation meats. All of these products promise a more sustainable alternative to factory-farmed meat. The question is, will consumers accept them?
I'm a philosopher who studies food and disgust, and I'm interested in how people react to new foods such as lab-grown meat, bugs and other so-called alternative proteins. Disgust and food neophobia – a fear of new foods – are often cited as obstacles to adopting new, more sustainable food choices, but I believe that recent history offers a more complicated picture. Past shifts in food habits suggest there are two paths to the adoption of new foods: One relies on familiarity and safety, the other on novelty and excitement.
Disgust is a strong feeling of revulsion in response to objects perceived to be contaminating, polluting or unclean. Scientists believe that it evolved to protect human beings from invisible contaminants such as pathogens and parasites. Some causes of disgust are widely shared, such as feces or vomit. Others, including foods, are more culturally variable.
So it's not surprising that self-reported willingness to eat insects varies across nationalities. Insects have been an important part of traditional diets around the world for thousands of years, including among the ancient Greeks.
Many articles about the possibility of introducing insects to Western or American diners have emphasized the challenges posed by neophobia and 'the yuck factor.' People won't accept these new foods, the thinking goes, because they're too different or even downright disgusting.
If that's right, then the best approach to win space on the plate for new foods might be to try to make them seem similar to familiar menu items.
During World War II, the United States government wanted to redirect its limited meat supply to troops on the front lines. So it needed to convince home cooks to give up their steaks, chops and roasts in favor of what it called variety meats: kidneys, liver, tongue and so on.
To figure out how to shift consumer habits, a team of psychologists and anthropologists was charged with studying how food habits and preferences were formed – and how they could be changed.
This team, the Committee on Food Habits, recommended stressing these organ meats' similarity to familiar, readily available foods. This approach – call it the 'safe route' – focuses on individual attitudes and choices. It tries to remove psychological and practical barriers to individual choice and to counteract beliefs or values that might dissuade people from adopting new foods.
As the name suggests, the safe route tries to downplay novelty, using familiar forms and tastes. For example, it would incorporate unfamiliar cuts of meat into meatloaf or meatballs, or grind crickets into flour for cookies or protein bars.
But more recent history suggests something different: Foods such as sushi, offal and even lobster became desirable not despite but because of their novelty and difference.
Sushi's arrival in the postwar U.S. coincided with the rise of consumer culture. Dining out was gaining traction as a leisure activity, and people were increasingly open to new experiences as a sign of status and sophistication. Rather than appealing to the housewife preparing comfort foods, sushi gained popularity by appealing to the desire for new and exciting experiences.
By 1966, The New York Times reported that New Yorkers were dining on 'raw fish dishes, sushi and sashimi, with a gusto once reserved for corn flakes.' Now, of course, sushi is widely consumed, available even in grocery stores nationwide. In fact, the grocery chain Kroger sells more than 40 million pieces of sushi a year. Whereas the safe route suggests sneaking new foods into our diets, the sushi route suggests embracing their novelty and using that as a selling point.
Sushi is just one example of a food adopted via this route. After the turn of the millennium, a new generation of diners rediscovered offal as high-end restaurants and chefs offered 'nose to tail' dining. Rather than positioning foods like tongue and pigs' ears as familiar and comforting, these restaurants made a willingness to embrace the yuck factor a sign of adventurousness, even masculinity. This framing is the exact opposite of the safe route recommended by the Committee on Food Habits.
What lessons can be drawn from these examples? For dietary shifts to last, they should be framed positively. Persuading customers that variety meats were a necessary wartime substitution worked temporarily but ultimately led to the perception that they were subpar choices. If cultivated meat and insects are pitched as necessary sacrifices, any gains they make may be temporary at best.
Instead, producers could appeal to consumers' desire for healthier, more sustainable and more exciting foods.
Cultivated meat may be 'safe-ly' marketed as nuggets and burgers, but, in principle, the options are endless: Curious consumers could sample lab-grown whale or turtle meat guilt-free, or even find out what woolly mammoth tasted like.
Ultimately, the chefs, consumers and entrepreneurs seeking to remake our food systems don't need to choose just one route. While we can grind insects into protein powders, we can also look to chefs cooking traditional cuisines that use insects to broaden our culinary horizons.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Alexandra Plakias, Hamilton College
Read more:
Plant-based meat alternatives are trying to exit the culture wars – an impossible task?
Gluten-sensitive liberals? Investigating the stereotype suggests food fads unite us all
Would you eat 'meat' from a lab? Consumers aren't necessarily sold on 'cultured meat'
Alexandra Plakias does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.