
Caffeine craze warning after pouches gain in popularity with teens
Marketing of such products aimed at teenagers is spreading on social media. David Gomez, a school resource officer for the Boise County Sheriff's Office in Idaho, began noticing students using caffeine pouches last spring, according to NBC News. The pouches can contain more than 200 milligrams of caffeine each, and students have been using them alongside nicotine pouches such as Zyn.
Gomez noted that students would use the caffeine pouches as a cover for nicotine pouches, or use both at once.
'They'll use the Zyn pouches that they put in their lip, and then they'll take a caffeine pouch,' said Gomez. 'They don't care what it is they're putting in their lip.'
Richard Mumby is a marketing executive who was part of the launch of the e-cigarette Juul, which has been accused of initiating a wave of teen vaping. He's now back with a caffeine pouch startup known as Wip.
A growing market is now trying to sell Americans on pouches as an alternative to caffeinated beverages.
Mumby told NBC News that caffeine is part of 'the fabric of many Americans' everyday lives.' But he said there's room for improvement.
Wip and other companies are marketing caffeine pouches as a portable, affordable alternative to caffeinated drinks. The product is a cross between nicotine pouches and energy drinks: a small pouch placed between the lip and the gums to deliver caffeine.
The pouches, most of which do not contain nicotine, are typically kept in for between 20 minutes and an hour, though their effects can last longer.
Wip's flavors include mint, strawberry kiwi, and sour cherry, and each pouch contains 100 or 200 milligrams of caffeine; the stronger dose is roughly the caffeine in two cups of coffee. According to the U.S. Food and Drug Administration, that is half the amount of caffeine an adult can safely consume in an entire day, based on the agency's guideline of about 400 milligrams per day for healthy adults.
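To make those comparisons explicit, here is a minimal sketch of the arithmetic in Python. The 400-milligram daily figure is the FDA guideline cited above; the 95-milligram-per-cup coffee estimate is an assumed average for an 8-ounce brewed cup, not a figure from the article.

```python
# A rough check on the dose comparisons above.
FDA_ADULT_DAILY_LIMIT_MG = 400  # FDA guideline for healthy adults (cited above)
MG_PER_CUP_OF_COFFEE = 95       # assumed average for an 8-oz brewed cup

for pouch_mg in (100, 200):
    cups = pouch_mg / MG_PER_CUP_OF_COFFEE          # coffee-cup equivalent
    share = pouch_mg / FDA_ADULT_DAILY_LIMIT_MG     # fraction of the daily guideline
    print(f"{pouch_mg} mg pouch ≈ {cups:.1f} cups of coffee, "
          f"{share:.0%} of the FDA daily guideline")

# Output:
# 100 mg pouch ≈ 1.1 cups of coffee, 25% of the FDA daily guideline
# 200 mg pouch ≈ 2.1 cups of coffee, 50% of the FDA daily guideline
```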
'We take responsible caffeine consumption and responsible marketing of our product seriously,' a spokesperson for Wip told NBC News. 'While there are no legal age restrictions on caffeine products, we have adopted marketing guidelines that exceed the age-related standards set by the American Beverage Association for most common energy drinks. Wip is not intended for use by anyone under the age of 18, and our guidelines ensure the product is marketed responsibly.'
Rob van Dam, a professor of exercise and nutrition sciences at George Washington University's Milken Institute School of Public Health, studies caffeine. He shared concerns about the potency of some of the pouches being sold.
'It may be a bit different than coffee,' he told NBC News. 'It may hit faster, and you may overdose, in a way, more quickly.'
Another worry is the pouches' potential popularity among teens, given that nicotine pouches and energy drinks are already widespread. Zyn maker Philip Morris has previously told NBC News that 'Zyn's marketing is directed toward legal age nicotine users who are 21+.'
The American Academy of Pediatrics recommends that teens consume no more than 100 milligrams of caffeine per day. The chair of the AAP's Committee on Nutrition, Dr. Mark Corkins, told NBC News that it would be better if teens avoided caffeine altogether.
'Caffeine, in general, is an area we are very concerned about,' he said. 'Pouches are just another delivery form.'

Related Articles


The Guardian – an hour ago
Using Generative AI for therapy might feel like a lifeline – but there's danger in seeking certainty in a chatbot
Tran* sat across from me, phone in hand, scrolling. 'I just wanted to make sure I didn't say the wrong thing,' he explained, referring to a recent disagreement with his partner. 'So I asked ChatGPT what I should say.'

He read the chatbot-generated message aloud. It was articulate, logical and composed – almost too composed. It didn't sound like Tran. And it definitely didn't sound like someone in the middle of a complex, emotional conversation about the future of a long-term relationship. It also made no mention of the behaviours of Tran's own that had contributed to the relationship strain, something he and I had been discussing.

Like many others I've seen in therapy recently, Tran had turned to AI in a moment of crisis. Under immense pressure at work and facing uncertainty in his relationship, he'd downloaded ChatGPT on his phone 'just to try it out'. What began as curiosity soon became a daily habit: asking questions, drafting texts, even seeking reassurance about his own feelings. The more Tran used it, the more he began to second-guess himself in social situations, turning to the model for guidance before responding to colleagues or loved ones. He felt strangely comforted, like 'no one knew me better'. His partner, on the other hand, began to feel like she was talking to someone else entirely.

ChatGPT and other generative AI models present a tempting accessory, or even alternative, to traditional therapy. They're often free, available 24/7 and can offer customised, detailed responses in real time. When you're overwhelmed, sleepless and desperate to make sense of a messy situation, typing a few sentences into a chatbot and getting back what feels like sage advice can be very appealing. But as a psychologist, I'm growing increasingly concerned about what I'm seeing in the clinic: a silent shift in how people are processing distress, and a growing reliance on artificial intelligence in place of human connection and therapeutic support.

AI might feel like a lifeline when services are overstretched – and make no mistake, services are overstretched. Globally, one in eight people were living with a mental illness as of 2019, and we face a dire shortage of trained mental health professionals. In Australia, a growing mental health workforce shortage is limiting access to trained professionals. Clinician time is one of the scarcest resources in healthcare. It's understandable (even expected) that people are looking for alternatives.

Turning to a chatbot for emotional support isn't without risk, however, especially when the lines between advice, reassurance and emotional dependence become blurred. Many psychologists, myself included, now encourage clients to build boundaries around their use of ChatGPT and similar tools. Its seductive 'always-on' availability and friendly tone can unintentionally reinforce unhelpful behaviours, especially for people with anxiety, OCD or trauma-related issues. Reassurance-seeking, for example, is a key feature of OCD, and ChatGPT, by design, provides reassurance in abundance. It never asks why you're asking again. It never challenges avoidance. It never says, 'let's sit with this feeling for a moment, and practise the skills we have been working on'.

Tran often reworded prompts until the model gave him an answer that 'felt right'. But this constant tailoring meant he wasn't just seeking clarity; he was outsourcing emotional processing. Instead of learning to tolerate distress or explore nuance, he sought AI-generated certainty. Over time, that made it harder for him to trust his own instincts.

Beyond the psychological concerns, there are real ethical issues. Information shared with ChatGPT isn't protected by the same confidentiality standards that bind Ahpra-registered professionals. Although OpenAI states that data from users is not used to train its models unless permission is given, the sheer volume of fine print in user agreements often goes unread. Users may not realise how their inputs can be stored, analysed and potentially reused.

There's also the risk of harmful or false information. These large language models are autoregressive: they predict the next word based on previous patterns (a toy sketch of this next-word sampling follows this article). This probabilistic process can lead to 'hallucinations' – confident, polished answers that are completely untrue. AI also reflects the biases embedded in its training data. Research shows that generative models can perpetuate and even amplify gender, racial and disability-based stereotypes – not intentionally, but unavoidably. Human therapists also possess clinical skills; we notice when a client's voice trembles, or when their silence might say more than words.

This isn't to say AI can't have a place. Like many technological advancements before it, generative AI is here to stay. It may offer useful summaries, psycho-educational content or even support in regions where access to mental health professionals is severely limited. But it must be used carefully, and never as a replacement for relational, regulated care.

Tran wasn't wrong to seek help. His instincts to make sense of distress and to communicate more thoughtfully were sound. However, leaning so heavily on AI meant that his own skill development suffered. His partner began noticing a strange detachment in his messages. 'It just didn't sound like you,' she later told him. As it turned out, it wasn't. She also grew frustrated by the lack of accountability in his messages, and this caused further friction and communication problems between them.

As Tran and I worked together in therapy, we explored what led him to seek certainty in a chatbot. We unpacked his fears of disappointing others, his discomfort with emotional conflict and his belief that perfect words might prevent pain. Over time, he began writing his own responses: sometimes messy, sometimes unsure, but authentically his.

Good therapy is relational. It thrives on imperfection, nuance and slow discovery. It involves pattern recognition, accountability and the kind of discomfort that leads to lasting change. A therapist doesn't just answer; they ask and they challenge. They hold space, offer reflection and walk with you, while also holding up an uncomfortable mirror.

For Tran, the shift wasn't just about limiting his use of ChatGPT; it was about reclaiming his own voice. In the end, he didn't need a perfect response. He needed to believe that he could navigate life's messiness with curiosity, courage and care – not perfect scripts.

*Name and identifying details changed to protect client confidentiality

Carly Dober is a psychologist living and working in Naarm/Melbourne

In Australia, support is available at Beyond Blue on 1300 22 4636, Lifeline on 13 11 14, and at MensLine on 1300 789 978. In the UK, the charity Mind is available on 0300 123 3393 and Childline on 0800 1111. In the US, call or text Mental Health America at 988 or chat
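The toy sketch promised above: a minimal Python illustration of the autoregressive sampling the author describes. The tiny hand-written probability table is an invented stand-in for a real model's learned weights (an assumption for demonstration only, not how any production chatbot is built), but the loop shows the mechanism: each word is sampled from a distribution conditioned on the previous one, with nothing checking whether the result is true.

```python
import random

# Hypothetical next-word probabilities, keyed by the previous word.
# A real model conditions on the whole context and has a huge vocabulary;
# this table is invented purely to show the sampling loop.
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"study": 0.5, "answer": 0.5},
    "a":       {"study": 0.7, "claim": 0.3},
    "study":   {"shows": 0.8, "<end>": 0.2},
    "answer":  {"is": 1.0},
    "claim":   {"shows": 0.6, "<end>": 0.4},
    "shows":   {"<end>": 1.0},
    "is":      {"<end>": 1.0},
}

def generate(seed: int) -> str:
    """Sample one next word at a time until the model emits <end>."""
    random.seed(seed)
    word, output = "<start>", []
    while True:
        choices = NEXT_WORD_PROBS[word]
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        if word == "<end>":
            return " ".join(output)
        output.append(word)

print(generate(0))  # with CPython's default RNG this prints: a claim shows
```

The output is always fluent-looking, because fluency is exactly what the probabilities encode; truth is nowhere in the process, which is why confident 'hallucinations' can emerge at scale.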


The Sun – 5 hours ago
My skin was so dull but I found the best toner to make it hydrated and glowy – you won't find it in the beauty aisle
A WOMAN has shared how she got her skin from dull to glowy with a secret ingredient.

Callere took to social media to show off the secret toner, but it wasn't something you'd find in the beauty aisle.

In the clip, she said: "When you stay consistent and your toner didn't fail you. A year long, and it's still kicking in."

Callere shared her skin from before she started using the secret treatment, and it clearly looked dull and lacked moisture. But now her skin looked bright and more youthful after applying the toner day and night.

While many head to the beauty aisle to pick up a toner, Callere made her own at home using rice. She cleaned the rice with water before letting it ferment into a milky liquid.

The starchy water left over from rice is said to help skin look its best: rice water contains antioxidants and compounds such as ferulic acid and allantoin, which can brighten the skin, reduce pigmentation and even out skin tone.

You can make rice water at home by letting rice soak in water for up to an hour, then straining the liquid into a bottle. To take it to the next level, you can also ferment the rice water: let it sit at room temperature for one to two days. Fermentation boosts the levels of antioxidants in the preparation, making it even more effective for skin brightening and healing. It can be stored in the fridge for up to one week.

The clip went viral with over 4 million views on her TikTok account @callmereen107, and people were quick to thank her for the skincare hack.

One person wrote: "Finally, found my people who loves the efficiency of rice. For less hassle buy rice flour. You can use it as a mask, iced cubes."

"Rice toner and aloe vera plant, effective," penned another.

Meanwhile a third asked: "So this actually works?" Someone else added: "Thank you."

Biggest skincare trends for 2025

Face The Future's Head of Clinic, Kimberley Medd, shared the five skincare trends predicted to take off in 2025.

1. Exosomes. The buzzword for 2025, taking advanced skin regeneration to new heights. These micro-messengers signal skin cells to repair damage, boost collagen and accelerate recovery, and they're a gamechanger for targeting ageing.

2. Streamlined skincare. The age of 12-step routines is fading as consumers shift to more intentional, multi-functional products. Streamlining skincare not only saves time but also reduces the risk of overloading your skin. In 2025, we'll see a rise in hybrid products that combine active ingredients for simplified, effective results.

3. Vegan collagen. Plant-based collagen will dominate the skincare world this year, providing a sustainable, ethical alternative to traditional animal-derived collagen. Expect vegan collagen in everything from moisturisers to serums.

4. The rise of AI. AI is revolutionising the beauty landscape, making it possible for consumers to get truly personalised skincare solutions. In 2025, we predict a dramatic shift towards AI-powered tools that help people understand their skin on a deeper level.

5. Hair loss solutions. Hair loss affects more men than we often realise, and it's no longer just something we're talking about behind closed doors. This year, expect a continued rise in demand for treatments that not only tackle hair loss but also nurture overall scalp health.


The Independent – 6 hours ago
Undeclared milk leads to US-wide butter recall
A voluntary recall has been issued for over 64,000 pounds of Bunge North America's NH European Style Butter Blend because of undeclared milk, a common allergen. The recall was initiated on 14 July and classified by the FDA as a Class II recall. The affected butter was distributed to 12 US distribution centres and one in the Dominican Republic. The butter recall is the latest in a series of recent food and drink recalls, including High Noon Vodka Seltzer, which was pulled after cans were mislabeled as non-alcoholic energy drinks. Consumers are advised to check for affected products and either dispose of them or return them.