
Justin Timberlake diagnosed with Lyme disease, the former NSYNC singer says
Timberlake shared the news in a post commemorating his Forget Tomorrow tour, which wrapped in Turkey on Wednesday, adding that the disease 'can be relentlessly debilitating, both mentally and physically.'
The 'SexyBack' singer, who described himself as a private person, wrote that he considered ending the tour when he was diagnosed but 'decided the joy that performing brings me far outweighs the fleeting stress my body was feeling. I'm so glad I kept going.'
Lyme disease is transmitted by Ixodes ticks, also known as deer ticks. It can cause flu-like symptoms, neurological problems, joint pain and other complications. In the vast majority of cases, Lyme disease is successfully treated with antibiotics.
'I honestly don't know what my future is onstage, but I'll always cherish this run! And all of them before! It's been the stuff of legend for me,' Timberlake wrote.
Representatives for Timberlake did not immediately respond to The Associated Press' request for comment.
Exactly how often Lyme disease strikes isn't clear. The Centers for Disease Control and Prevention cites insurance records suggesting 476,000 people are treated for Lyme disease in the U.S. each year.
Black-legged ticks, also called deer ticks, carry Lyme-causing bacteria.
The infection initially causes fatigue, fever and joint pain. Often — but not always — the first sign is a red, round bull's-eye rash.
Early antibiotic treatment is crucial, but it can be hard for people to tell if they were bitten by ticks, some as small as a pinhead. Untreated Lyme can cause severe arthritis and damage the heart and nervous system. Some people have lingering symptoms even after treatment.
He ended the post by thanking his wife, Jessica Biel, and their two sons, Silas and Phin, saying, 'Nothing is more powerful than your unconditional love. You are my heart and my home. I'm on my way.'