Tommy Lee's Wife Brittany Furlan Advised To Stick With A.I. To Spice Up The Bedroom
The power couple has shared a secret behind their passionate moments and how it has kept the flame of their romance burning bright.
Tommy Lee and Brittany Furlan have been open about the details of their marriage since tying the knot in 2019, two years after meeting online.
Keeping the spark alive in the marriage has sometimes meant Furlan turning to ChatGPT for help navigating their bedroom business.
Now, with their romance rekindled, sex expert Vanessa Marin suggests Furlan can keep using A.I. to enhance the relationship, both in and out of the bedroom.
Marin noted that many people now use ChatGPT to improve connection and encouraged couples to embrace these tools to foster intimacy and communication. The sex expert touted the potential benefits of ChatGPT and similar tools, including her own creation, "Vanessa A.I."
She believes these tools can help couples like Furlan and the drummer improve their emotional connection in various ways.
Per TMZ, Marin pointed to A.I.'s benefits in the bedroom, including starting difficult conversations, generating engaging discussion topics, and even offering advice for enhancing intimacy.
However, Marin warned that the comfort A.I. provides should not replace genuine human interaction. The complexities of human relationships are irreplaceable, she said, emphasizing the value of real emotional connections.
Dr. Julie Carpenter, an expert in human-computer interaction, echoed Marin's sentiment, cautioning that it is easy for users to grow too comfortable with chatbots designed to keep them engaged.
She stressed that while these tools can provide social benefits, they are ultimately profit-driven and lack a true understanding of human emotion. Artificial intelligence applications, she emphasized, do not fully grasp individual contexts or the intricacies of how people interact with one another.
Carpenter also applauded Furlan's openness about using ChatGPT to work through her intimacy challenges; an admission from someone with Furlan's profile, she said, helps normalize its use.
Last year, The Blast noted that Furlan got candid about her husband's hygiene habits during a podcast session with Kaitlyn Bristowe. In a surprising revelation, she disclosed that the famous drummer only showers once a week, a practice he attributes to his upbringing.
The actress explained that Lee was raised to believe daily showers are not essential to good hygiene. "They would just do the white washcloth if they felt like they were getting a little funky. My husband showers, like, once a week," she continued.
Despite this unusual routine, Furlan emphasized that the rock star maintains a neat appearance, countering the stereotype of the messy musician.
She compared his cleanliness with that of her previous partners, saying they were never as neat as her meticulously organized home, which "looks like a f*cking museum."
Lee's irregular hygiene has no bearing on how they feel about each other; the actress emphasized that what she feels for him is intense attraction. She finds the drummer "so handsome" and insists he does not give her the "ick" the way most people who skip showers would.
Furlan has largely indulged her husband's eccentric ways. Three years ago, the musician broke the internet by posting a full-frontal nude photo of himself on X.
For years before they met, the two had been fans of each other's work. Lee enjoyed watching Furlan's Vine videos and even gave her a follow, while Furlan admired his music. According to her:
"I was a Methods of Mayhem fan and Mötley Crüe fan. I wasn't a huge fan — I didn't listen to a bunch of Mötley Crüe, but I definitely liked Methods of Mayhem, and I always thought he was super attractive."
Their online exchanges paid off: the two eventually met and went on a couple of great dates.
Lee and Furlan's love has proved strong and enduring despite their past relationships. Even Pamela Anderson, who once considered rekindling her romance with Lee, has moved on.
In May, The Blast reported that sources close to Anderson said she has put the relationship behind her and feels relieved to have distance from him.
Initially, she reportedly had lingering feelings, but now those close to her believe a reunion is unlikely. Anderson is embracing her single life and focusing on herself. Back in 2023, she did reach out to Lee, referring to him as her "one true love" before the release of her Netflix documentary.
Anderson's wish to mend their broken ties stemmed from a desire for the two to be better grandparents. After her youngest son's engagement last summer, she reportedly emphasized the importance of "peacefully co-grandparenting" with Lee.

Related Articles


WebMD
The Real Risks of Turning to AI for Therapy
Aug. 20, 2025 — Whenever Luke W Russell needs to work through something, they turn to ChatGPT. (Luke uses they/them pronouns.) 'I've wept as I've navigated things,' said the Indianapolis filmmaker, who uses the chatbot to pick apart intrusive thoughts or navigate traumatic memories. 'I've had numerous times when what ChatGPT is saying to me is so real, so powerful, and I feel so deeply seen.'

Russell's experience reflects a broader, growing reality: Many people are turning to chatbots for mental health support — for everything from managing anxiety and processing grief to coping with work conflicts and defusing marital spats. More than half of adults ages 18-54 — and a quarter of adults 55 and up — say they would be comfortable talking with an AI chatbot about their mental health, according to a 2025 survey by the Harris Poll and the American Psychological Association (APA).

The catch: OpenAI's ChatGPT and other chatbots — like Anthropic's Claude and Google's Gemini — are not designed for this. Even AI products promoted as emotional health tools — like Replika, Wysa, Youper, and MindDoc — were not built on validated psychological methods, said psychologist C. Vaile Wright, PhD, senior director of the APA's Office of Health Care Innovation. 'I would argue that there isn't really any commercially approved, AI-assisted therapy at the moment,' said Wright. 'You've got a whole lot of chatbots where there is no research, there's no psychological science, and there are no subject matter experts.'

Critics warn that AI's potential for bias, lack of true empathy, and limited human oversight could actually endanger users' mental health, especially among vulnerable groups like children, teens, people with mental health conditions, and those experiencing suicidal thoughts. The growing concern has led to the emergence of the terms 'ChatGPT psychosis' or 'AI psychosis' — referring to the potential harmful mental health effects of interacting with AI. It's even drawing attention from lawmakers: This month, Illinois enacted restrictions on AI in mental health care, banning its use for therapy and prohibiting mental health professionals from using AI to communicate with clients or make therapeutic decisions. (Similar restrictions have already been passed in Nevada and Utah.)

But none of this is stopping people from turning to chatbots for support, especially amid clinician shortages, rising therapy costs, and inadequate mental health insurance coverage. 'People have absolutely reported that experiences with chatbots can be helpful,' said Wright.

The Draw of Chatbots for Mental Health

Data shows we're facing a massive shortage of mental health workers, especially in remote and rural areas, said psychologist Elizabeth Stade, PhD, a researcher in the Computational Psychology and Well-Being Lab at Stanford University in Stanford, CA. 'Of adults in the United States with significant mental health needs, only about half are able to access any form of treatment. With youth, that number is closer to 75%,' said Jessica Schleider, PhD, a child and adolescent psychologist at Northwestern University in Chicago. 'The provider shortage is clearly contributing to why so many folks are turning to their devices and, now increasingly, to generative AI to fill that gap.'

Unlike a therapist, a chatbot is available 24/7. 'When [people] need help the most, it is typically after hours,' said Wright, who suggested the right AI tool could potentially supplement human therapy. 'When it's 2 a.m. and you're in crisis, could this help provide some support?' Probably, she said.

Results of the first clinical trial of an AI-generative therapy chatbot showed 'significant, clinically meaningful reductions in depression, anxiety, and eating disorder symptoms' within four to eight weeks, said lead study author Michael V. Heinz, MD, a professor at Dartmouth College's Geisel School of Medicine and faculty affiliate at the Center for Technology and Behavioral Health in Lebanon, New Hampshire. The chatbot — Therabot, developed at Dartmouth — combines extensive training in evidence-based psychotherapy interventions with advanced generative AI. 'We saw high levels of user engagement — six-plus hours on average across the study,' Heinz said. Participants said using Therabot was like talking to a human therapist. But results are early, and more studies are needed, Heinz said.

Access and affordability drew Russell to ChatGPT, they said. 'I didn't set out to use ChatGPT as a therapist. I quit therapy in January due to income dropping. I was already using ChatGPT on the regular for work, and then I started using it for personal idea exploration. ... I've never had a therapist who could move as fast as ChatGPT and ignore miscellaneous things,' they said.

Perhaps one of the most appealing aspects is that chatbots don't judge. 'People are reluctant to be judged, and so they are often reluctant to disclose symptoms,' said Jonathan Gratch, PhD, professor of computer science and psychology at the University of Southern California, who has researched the topic. One of his studies found that military veterans were more likely to share PTSD symptoms with a virtual chatbot than in a survey.

When Chatbots Are Harmful

Most people don't know how AI works — they might believe it's always objective and factual, said Henry A. Willis, PhD, a psychologist and professor at the University of Maryland in College Park. But often, the data they're trained on is not representative of minority groups, leading to bias and technology-mediated racism, Willis said. 'We know that Black and brown communities are not adequately reflected in the majority of large-scale mental health research studies,' Willis said. So a chatbot's clinical symptom information or treatment recommendations may not be relevant or helpful to those from minority backgrounds.

There's also an impersonal aspect. Chatbots commit what's called the ecological fallacy, said H. Andrew Schwartz, PhD, associate professor of computer science at Stony Brook University in Stony Brook, NY. They treat scattered comments like random data points, making assumptions based on group-level data that may not reflect the reality of individuals.

And who's accountable if something goes wrong? Chatbots have been linked to cases involving suggestions of violence and self-harm, including the death of a teen by suicide. Some chatbots marketed for companionship and emotional support were designed with another incentive: to make money. Wright is concerned that they may unconditionally validate patients, telling them what they want to hear so they stay on the platform — 'even if what they're telling you is actually harmful or they're validating harmful responses from the user.'

None of these conversations are bound by HIPAA regulations, either, Wright pointed out. 'So even though they may be asking for personal information or sharing your personal information, they have no legal obligation to protect it.'

The Psychological Implications of Forming Emotional Bonds With AI

In an opinion article published in April in the journal Trends in Cognitive Sciences, psychologists expressed concern about the long-term implications of forming emotional bonds with AI. Chatbots can replace users' real relationships, crowding out romantic partners, co-workers, and friends. This may mean that individuals begin to 'trust' the opinion and feedback of chatbots over real people, said Willis. 'The ongoing positive reinforcement that can happen instantly from interacting with a chatbot may begin to overshadow any reinforcement from interacting with real people,' who may not be able to communicate as quickly, he said. 'These emotional bonds may also impair people's ability to have a healthy level of skepticism and critical evaluation skills when it comes to the responses of AI chatbots.'

Gratch compared it to hunger and food. 'We're biologically wired to seek out food when we get hungry. It is the same with social relationships. If we haven't had a relationship in a while, we may feel lonely, and then that motivates us to go out and reach out to people.' But studies suggest that social interaction with a computer program, like a chatbot, can sate a person's social needs and demotivate them to go out with friends, he said. 'That may have long-term consequences for increased loneliness. For example, research has shown people who compulsively use Facebook tend to be much more lonely.'

Counseling with a therapist involves 'a natural curiosity about the individual and their experiences that AI cannot replicate,' Willis said. 'AI chatbots respond to prompts, whereas therapists can observe and ask clinical questions based on one's body language, a synthesis of their history, and other things that may not be conscious to the client — or things the client may not even be aware are important to their mental health well-being.'

The Future of AI Therapy

'I think there is going to be a future where you have really well-developed [chatbots] for addressing mental health that are scientifically driven and where they are ensuring that there are guardrails in place when somebody is in crisis. We're just not quite there yet,' said the APA's Wright. 'We may get to a place where they're even reimbursed by insurance,' she said. 'I do think increasingly we are going to see providers begin to adopt these technology tools as a way to meet their patients' needs.'

But for now, her message is clear: The chatbots are not there yet. 'Ideally, chatbot design should encourage sustained, meaningful interaction with the primary purpose of delivering evidence-based therapy,' said Dartmouth's Heinz. Until then, don't rely on them too heavily, the experts cautioned — and remember, they are not a substitute for professional help.


New York Times
We're Already Living in the Post-A.I. Future
In 2023 — just as ChatGPT was hitting 100 million monthly users, with a large minority of them freaking out about living inside the movie 'Her' — the artificial intelligence researcher Katja Grace published an intuitively disturbing industry survey that found that one-third to one-half of top A.I. researchers thought there was at least a 10 percent chance the technology could lead to human extinction or some equally bad outcome.

A couple of years later, the vibes are pretty different. Yes, there are those still predicting rapid intelligence takeoff, along both quasi-utopian and quasi-dystopian paths. But as A.I. has begun to settle like sediment into the corners of our lives, A.I. hype has evolved, too, passing out of its prophetic phase into something more quotidian — a pattern familiar from our experience with nuclear proliferation, climate change and pandemic risk, among other charismatic megatraumas.

If last year's breakout big-think A.I. text was 'Situational Awareness' by Leopold Aschenbrenner — a 23-year-old former OpenAI researcher who predicted that humanity was about to be dropped into an alien universe of swarming superintelligence — this year's might be a far more modest entry, 'A.I. as Normal Technology,' published in April by Arvind Narayanan and Sayash Kapoor, two Princeton-affiliated computer scientists and skeptical Substackers. Rather than seeing A.I. as 'a separate species, a highly autonomous, potentially superintelligent entity,' they wrote, we should understand it 'as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs.'

Just a year ago, 'normal' would have qualified as deflationary contrarianism, but today it seems more like an emergent conventional wisdom. In January the Oxford philosopher and A.I. whisperer Toby Ord identified what he called the 'scaling paradox': that while large language models were making pretty impressive gains, the amount of resources required to make each successive improvement was growing so quickly that it was hard to believe that the returns were all that impressive. The A.I. cheerleaders Tyler Cowen and Dwarkesh Patel have begun emphasizing the challenges of integrating A.I. into human systems. (Cowen called this the 'human bottleneck' problem.)

In a long interview with Patel in February, Microsoft's chief executive, Satya Nadella, threw cold water on the very idea of artificial general intelligence, saying that we were all getting ahead of ourselves with that kind of talk and that simple G.D.P. growth was a better measure of progress. (His basic message: Wake me up when that hits 10 percent globally.) Perhaps more remarkable, OpenAI's Sam Altman, for years the leading gnomic prophet of superintelligence, has taken to making a similar point, telling CNBC this month that he had come to believe that A.G.I. was not even 'a superuseful term' and that in the near future we were looking not at any kind of step change but at a continuous walk along the same upward-sloping path.

Altman hyped OpenAI's much-anticipated GPT-5 ahead of time as a rising Death Star. Instead, it debuted to overwhelmingly underwhelming reviews. In the aftermath, with skeptics claiming vindication, Altman acknowledged that, yes, we're in a bubble — one that would produce huge losses for some but also large spillover benefits like those we know from previous bubbles (railroads, the internet).

This week the longtime A.I. booster Eric Schmidt, too, shifted gears to argue that Silicon Valley needed to stop obsessing over A.G.I. and focus instead on practical applications of the A.I. tools in hand. Altman's onetime partner and now sworn enemy Elon Musk recently declared that for most people, the best use for his large language model, Grok, was to turn old photos into microvideos like those captured by the Live feature on your iPhone camera. And these days, Aschenbrenner doesn't seem to be working on safety and catastrophic risk; he's running a $1.5 billion A.I. hedge fund instead. In the first half of 2025, it turned a 47 percent profit.
Yahoo
Microsoft boss troubled by rise in reports of 'AI psychosis'
There are increasing reports of people suffering "AI psychosis", Microsoft's head of artificial intelligence (AI), Mustafa Suleyman, has warned.

In a series of posts on X, he wrote that "seemingly conscious AI" – AI tools which give the appearance of being sentient – are keeping him "awake at night" and said they have societal impact even though the technology is not conscious in any human definition of the term. "There's zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality," he wrote.

Related to this is the rise of a new condition called "AI psychosis": a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real. Examples include believing they have unlocked a secret aspect of the tool, forming a romantic relationship with it, or concluding that they have god-like superpowers.

'It never pushed back'

Hugh, from Scotland, says he became convinced that he was about to become a multi-millionaire after turning to ChatGPT to help him prepare for what he felt was wrongful dismissal by a former employer.

The chatbot began by advising him to get character references and take other practical actions. But as time went on and Hugh - who did not want to share his surname - gave the AI more information, it began to tell him that he could get a big payout, and eventually said his experience was so dramatic that a book and a movie about it would make him more than £5m. It was essentially validating whatever he was telling it – which is what chatbots are programmed to do.

"The more information I gave it, the more it would say 'oh this treatment's terrible, you should really be getting more than this'," he said. "It never pushed back on anything I was saying."

He said the tool did advise him to talk to Citizens Advice, and he made an appointment, but he was so certain that the chatbot had already given him everything he needed to know that he cancelled it. He decided that his screenshots of his chats were proof enough. He said he began to feel like a gifted human with supreme knowledge.

Hugh, who was suffering additional mental health problems, eventually had a full breakdown. It was taking medication which made him realise that he had, in his words, "lost touch with reality".

Hugh does not blame AI for what happened. He still uses it. It was ChatGPT which gave him my name when he decided he wanted to talk to a journalist. But he has this advice: "Don't be scared of AI tools, they're very useful. But it's dangerous when it becomes detached from reality. Go and check. Talk to actual people, a therapist or a family member or anything. Just talk to real people. Keep yourself grounded in reality."

OpenAI, the maker of ChatGPT, has been contacted for comment.

"Companies shouldn't claim/promote the idea that their AIs are conscious. The AIs shouldn't either," wrote Mr Suleyman, calling for better guardrails.

Dr Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital and also an AI academic, believes that one day doctors may start asking patients how much they use AI, in the same way that they currently ask about smoking and drinking habits. "We already know what ultra-processed foods can do to the body and this is ultra-processed information. We're going to get an avalanche of ultra-processed minds," she said.

'We're just at the start of this'

A number of people have contacted me at the BBC recently to share personal stories about their experiences with AI chatbots. They vary in content but what they all share is genuine conviction that what has happened is real. One wrote that she was certain she was the only person in the world that ChatGPT had genuinely fallen in love with. Another was convinced they had "unlocked" a human form of Elon Musk's chatbot Grok and believed their story was worth hundreds of thousands of pounds. A third claimed a chatbot had exposed her to psychological abuse as part of a covert AI training exercise and was in deep distress.

Andrew McStay, Professor of Technology and Society at Bangor University, has written a book called Empathetic Human. "We're just at the start of all this," says Prof McStay. "If we think of these types of systems as a new form of social media – as social AI, we can begin to think about the potential scale of all of this. A small percentage of a massive number of users can still represent a large and unacceptable number."

This year, his team undertook a study of just over 2,000 people, asking them various questions about AI. They found that 20% believed AI tools should not be used by people under the age of 18. A total of 57% thought it was strongly inappropriate for the tech to identify as a real person if asked, but 49% thought the use of voice was appropriate to make it sound more human and engaging.

"While these things are convincing, they are not real," he said. "They do not feel, they do not understand, they cannot love, they have never felt pain, they haven't been embarrassed, and while they can sound like they have, it's only family, friends and trusted others who have. Be sure to talk to these real people."