Latest news with #JoeRogan


Daily Mail
an hour ago
- Entertainment
- Daily Mail
Conservative stars now REGRET leaving California to follow Joe Rogan to Texas: 'It's no Los Angeles!'
More and more conservative stars are beginning to turn on Texas after leaving New York and California. Joe Rogan was the first big star to make the move, with the podcasting titan fleeing Los Angeles and moving his family into a $14 million mansion in Austin in 2020. Many of Rogan's comedian pals enthusiastically followed him there - only to deeply regret their decision after arriving.

Tim Dillon was the first to jump ship, with the Thanksgiving star fleeing Austin after just a few months in the Lone Star State. Dillon had followed Rogan to Austin in 2020, only to make an emergency exit by the end of that year after discovering that the city didn't have enough good restaurants. 'It's a horrible city without a soul,' he told fellow comedian Whitney Cummings when describing his stint in Austin. 'It's not the live music capital of America. It's three heroin addicts busking with guitars. There's zero talent here in any capacity,' he raged. 'There's three restaurants that are good and I've been to all of them twice.'

In another interview, Dillon said that Austin 'can't be compared to New York and Los Angeles.' He also told the H3 podcast that the city was filled with homeless people, had a 'sewage colored lake,' and that most of the residents would 'get drunk and shoot each other' for fun.

Rogan's longtime friend, comedian and MMA fighter Brendan Schaub, has also come to regret his move to Austin. The 42-year-old relocated his family to the city earlier this year, but he recently confessed on his Fighter and the Kid podcast that he was 'heartbroken' about leaving Los Angeles and said he misses the city terribly. 'I miss my community and my routine,' he admitted. He also shared a bleak story about meeting another Los Angeles transplant in Texas who warned him that it might take up to three years for him to acclimate to life in Austin. 'He said, "Texas is great, best decision I've ever made. But you should know that it's no LA. There's no replacing LA",' Schaub recalled.
Comedian Shane Gillis, another one of Rogan's pals, has shared a similar sentiment. Gillis has repeatedly complained about the homeless situation in Austin, calling the drug-crazed vagrants in town 'screaming runners.' 'Texas f***ing blows,' he told comedian Andrew Schulz while sharing a story about how the power in his Austin home went out for three days due to a bad storm. 'It's hot as f**k. The second we ran out of power the house was 90 degrees and bugs came in immediately. The house was filled with bugs.' Gillis moved to Austin in 2023 because Texas has no income tax. He also wanted to be close to Rogan's standup club, the Comedy Mothership.

Even celebrities with no connection to Rogan or the comedy scene have voiced regrets about moving to Texas. Male model Lucky Blue Smith and his influencer wife Nara, who is famous for her trad wife content, left Los Angeles in 2022 to move to Dallas. However, within just two years the couple announced that they were leaving the city to live in Connecticut so that they could be closer to New York. In a TikTok video, Nara said that living in Connecticut would allow the couple to own a large house where they could raise their family, while still 'being closer to a bigger city for all the work that we do.'

Earlier this week, transgender conservative influencer Blaire White announced that she was leaving Texas after four years to return to her home state of California. The 31-year-old had fled her Hollywood home in 2021 amid rising homelessness and the state's tyrannical Covid policies to move to Austin, Texas. Addressing the major life change in a YouTube video, the social media star shared the surprising reason behind the shock relocation.
'I was born there, so it is home for better or for worse,' Chico-born Blaire said. 'There are a lot of problems with California and a lot of people like to write off New York and California and say, "Just let them go overboard, let them burn," and I find that to be a very un-American perspective to hold,' she continued. 'California in my opinion is the most beautiful place in the world. Yes, I said the world,' she added. 'And it's even more of a shame because of that that it's run by demons.'

While Blaire said that Los Angeles has now become 'ghetto and downtrodden,' she explained that she wants to return to the City of Angels to help improve it. 'I want to be someone who's part of the solution. I want to be someone who doesn't run from problems,' she insisted. 'I moved to Texas in the middle of Covid. So I moved to Texas in crisis. The lockdowns weren't ending, so much trauma from that, so much craziness, so it was kind of like an evacuation,' she continued. The YouTube star said that she's also eyeing a run for political office in the future and is excited to add her voice to California politics as a political commentator.

Conservative social media star Mike Cernovich has also come out swinging against Austin. The MAGA influencer, who is based in Orange County, recently called the Texas city a 'total dump.' 'Austin was disgusting when I first went there, 2017 or so. I expected culture or whatever, it got so much hype,' he posted on X (formerly Twitter). 'I was looking forward to it. Total dump. There's like two blocks, a dirty river, flat land, and that street where all the drunks go to try kill each other.'

From the early 1800s to the 1960s, New York was the undisputed most populous state in America. California overtook New York in 1964 and has been the most populous ever since. New York dropped back to third place in 1994, when Texas surged past 18.1 million people. Florida later surpassed the Empire State.
A February study from moveBuddha projected that Texas and Florida would be the first and second biggest states, respectively, by 2100, followed by California, Georgia, North Carolina, and New York.

California has been steadily losing hundreds of thousands of residents each year since 2019, before posting a moderate gain in 2023. Many cite the high cost of living and poor quality of life as reasons why they left the Golden State. Data has shown that nearly half of the people moving out of California in 2021 were millennials. Many of them headed to Texas counties around major cities such as Houston, Dallas and Austin. Florida, like Texas, has also had a population boom, with more than 700,000 people moving there in 2022.

The fact that Texas and Florida don't levy income taxes on their residents is a major pull factor from California and New York, both of which have top marginal rates over 10 percent. Still, there are some major drawbacks that could slow these states' march to dominance. For one, both Texas and Florida have been hit with dramatically more natural disasters in the last 10 years, according to data from the National Oceanic and Atmospheric Administration.

Montreal Gazette
5 hours ago
- Montreal Gazette
Opinion: More of us are falling in love with our chatbot companion. Don't judge
People are falling in love with their chatbots. There are now dozens of apps that offer intimate companionship with an AI-powered bot, and they have millions of users. A recent survey found that 19 per cent of Americans have interacted with an AI meant to simulate a romantic partner.

The response has been polarizing. In a New Yorker article titled 'Your AI Lover Will Change You,' futurist Jaron Lanier argued that 'when it comes to what will happen when people routinely fall in love with an AI, I suggest we adopt a pessimistic estimate about the likelihood of human degradation.' Podcaster Joe Rogan put it more succinctly — in a recent interview with Sen. Bernie Sanders, the two discussed the 'dystopian' prospect of people marrying their AIs. Noting a case where this has already happened, Rogan said: 'I'm like, oh, we're done. We're cooked.'

We're probably not cooked. Rather, we should consider accepting human-AI relationships as beneficial and healthy. More and more people are going to form such relationships in the coming years, and my research in sexuality and technology indicates it is mostly going to be fine.

When surveying the breathless media coverage, the main concern raised is that chatbots will spoil us for human connection. How could we not prefer their cheerful personalities, their uncomplicated affection and their willingness to affirm everything we say? The fear is that, seduced by such easy companionship, many people will surely give up their desire to find human partners, while others will lose their ability to form satisfying human relationships even if they want to. It has been less than three years since the launch of ChatGPT and other chatbots based on large language models. That means we can only speculate about the long-term effects of AI-human relationships on our capacity for intimacy.
There is little data to support either side of the debate, though we can do our best to make sense of short-term studies and other pieces of available evidence. There are certain risks that we do know about already, and we should take them seriously. For instance, we know that AI companion apps have terrible privacy policies. Chatbots can encourage destructive behaviours. Tragically, one may have played a role in a teenager's suicide. The companies that provide these apps can go out of business, or they can change their terms of service without warning. This can suddenly deprive users of access to technology that they've become emotionally attached to, with no recourse or support.

In assessing the dangers of relationships with AI, however, we should remember that human relationships are not exactly risk-free. One recent paper concluded that 'the association between relationship distress and various forms of psychopathology is as strong as many other well-known predictors of mental illness.' This is not to say we should swap human companions for AI ones. We just need to keep in mind that relationships can be messy, and we are always trying to balance the various challenges that come with them. AI relationships are no different.

We should also remember that just because someone forms an intimate bond with a chatbot, that doesn't mean it will be their only close relationship. Most people have lots of different people in their lives who play a variety of different roles. Chatbot users may depend on their AI companions for support and affirmation, while still having relationships with humans that provide different kinds of challenges and rewards.

Meta's Mark Zuckerberg has suggested that AI companions may help solve the problem of loneliness. However, there is some (admittedly very preliminary) data to suggest that many of the people who form connections with chatbots are not just trying to escape loneliness.
In a recent study (which has not yet been peer reviewed), researchers found that feelings of loneliness did not play a measurable role in someone's desire to form a relationship with an AI. Instead, the key predictor seemed to be a desire to explore romantic fantasies in a safe environment.

We should be willing to accept AI-human relationships without judging the people who form them. This follows a general moral principle that most of us already accept: We should respect the choices people make about their intimate lives when those choices don't harm anyone else. However, we can also take steps to ensure that these relationships are as safe and satisfying as possible.

First, governments should implement regulations to address the risks we know about already. They should, for instance, hold companies accountable when their chatbots suggest or encourage harmful behaviour. Governments should also consider safeguards to restrict access by younger users, or at least to control the behaviour of chatbots that interact with young people. And they should mandate better privacy protections — though this is a problem that spans the entire tech industry.

Second, we need public education so people understand exactly what these chatbots are and the issues that can arise with their use. Everyone would benefit from full information about the nature of AI companions but, in particular, we should develop curriculums for schools as soon as possible. While governments may need to consider some form of age restriction, the reality is that large numbers of young people are already using this technology, and will continue to do so. We should offer them non-judgmental resources to help them navigate its use in a manner that supports their well-being rather than stigmatizing their choices.

AI lovers aren't going to replace human ones. For all the messiness and agony of human relationships, we still (for some reason) pursue other people.
But people will also keep experimenting with chatbot romances, if for no other reason than they can be a lot of fun.

Neil McArthur is the director of the Centre for Professional and Applied Ethics at the University of Manitoba.


CNN
a day ago
- Politics
- CNN
Joe Rogan tells Texas Democrat he should run for president
On his podcast, Joe Rogan told Texas State Rep. James Talarico (D) that he should run for president. Rogan was an influential supporter of President Trump during the 2024 campaign. Talarico joined CNN's Laura Coates to talk about how Democrats can retake power.