Latest news with #BernieSanders


Boston Globe
12 hours ago
- Politics
- Boston Globe
A Vermont country girl becomes a Boston city slicker
There's the equally common sight of unkempt fields with disheveled, melancholy cows separated from the scarred pavement by nothing except a halfhearted wooden fence. Then there are the tire-popping back roads that toss up rocks and dirt, enough to coat your lungs like ash, and the famously New England smell of ammonia and sulfur (courtesy of the cows) that seeps through car windows and even plugged noses. Vermont is accustomed to frequent clichés and assumptions. Like anyone who grew up in rural Vermont, I fiercely defend the picturesque quality of a state whose claims to fame include Ben and Jerry's, Bernie Sanders, Noah Kahan, and maple syrup.

But as a 20-something college student living in Boston for the first time, I often fall into the cynical perspective that the rural New England of my youth is characterized by slow-moving days and even slower driving. Conversely, in Boston, constant horns and sirens breach any possibility of peace and quiet, unfinished construction abounds, and if you need a reminder that you're never alone, a brisk walk through the Common will do. There's also the fact that Boston boasts several thousand more people (around 673,000).

There's a big wealth gap between Vermont and Massachusetts, too. Some fled to Vermont for more space and remote work during the COVID pandemic.

A house in Plainfield, Vt., remained covered in dirt and silt earlier this month as the owners awaited word on government buyout programs. (Paul Heintz for The Boston Globe)

Cows grazed on a dairy farm in Salisbury, Vt. (Amanda Swinhart/Associated Press)

Homes in Vermont, whether they are imposing historic farmhouses lining the roadside, mid-'90s ranch-style houses, or quaint cottages stuck deep in the woods, often sit on open space. Sprawling acres of cleared land mix with dense forests, farms, or wide rivers. Thinking of Boston homes conjures the image of old brownstones or row houses stacked atop each other, lining the narrow streets. The town of Hartford, Vt., has an average home value of $417,766.

At age 21, I am more familiar with renting than owning. Since being in Boston, I've found that despite the occasional unreliability of the MBTA, everything is right there for you. A midnight ER visit is a 10-minute walk, a morning coffee run is on the way to work, and there's certainly more to do for a college-age person pursuing a career and enjoying a social scene. Boston is a perpetual college city, while Hartford is a town of families with a side of tourists.

The cacophonous rush of the city — sirens, rap music pouring out of car windows, swarms of people everywhere you look — has become home to me. Compared with rural northern New England, Boston is a head-spinning cold plunge into early adulthood and city life. While I can't say I miss the thick, manure-laden air, the mud-flecked late-winter slush, or the appalling lack of cell service, there is nothing quite like returning home to crickets and frogs chirping under a sky freckled with countless stars and the peaceful silence of mountains and back roads.

Haley Clough can be reached.


Forbes
a day ago
- Politics
- Forbes
Surveys On Socialism And Zohran Mamdani
In a statement after Zohran Mamdani's victory in the recent NYC Democratic mayoral primary, the Democratic Socialists of America, the nation's largest socialist organization, effused, 'In New York City, socialism has won.' A little premature perhaps, but Mamdani, the charismatic 33-year-old who describes himself as a democratic socialist, has clearly joined the progressive pantheon of Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez. Fortunately, polls provide a good sense of how well his ideas resonate with Democrats and with other Americans as well.

Since the Occupy Wall Street protest against capitalism began in 2011, pollsters have occasionally asked Americans about their views of socialism and capitalism, but the subject has much deeper roots in survey history. Some of the earliest questions in the Roper/Fortune and Gallup polls from the 1930s explored the fundamentals of the idea. When people were asked in an open-ended Gallup question from 1949 what socialism meant, 34% volunteered that it meant government ownership or control of utilities and other things, but 36% couldn't give a definition. In a 2010 CBS News/New York Times poll, 20% said it meant government control, while a quarter said they didn't know. Gallup's estimable Frank Newport wrote that for many people, and especially the young, the definition of socialism has broadened: 'While many still view socialism as government control of the economy, as modified communism and as embodying restrictions on freedoms in several ways, an increased percentage see it as representing equality and government provision of benefits.'

Gallup began asking people in 2010 whether they had a positive or negative impression of socialism, and the findings have been remarkably stable in six questions asked since then, with slightly less than 40% giving a positive response. Around 60% have been positive about capitalism. Shortly before Mamdani's victory, the Cato Institute released its 2025 Fiscal Policy National Survey, providing fresh data. In three separate questions in the March poll, Cato asked about favorable or unfavorable views of capitalism, socialism, and communism. Fifty-nine percent had a favorable view of capitalism (41% unfavorable). Socialism was less popular, at 43% favorable and 57% unfavorable. Finally, 14% had a favorable view of communism.

In this poll, like earlier ones from other organizations, young people and Democrats had more favorable views of socialism and less favorable views of capitalism. There was a straight age progression in responses on the socialism question: 62% of 18- to 29-year-olds had a favorable view, compared to 32% of those ages 65 and older. There was a huge partisan gap: 67% of Democrats had a favorable view compared to 17% of Republicans. Men were less sympathetic to socialism than women. As for capitalism, half of the youngest age group had a positive view, but 73% of the oldest age group did. Half of Democrats were positive compared to 75% of Republicans.

Are Mamdani's views in step with those of most Americans? When NBC's Kristen Welker asked Mamdani about billionaires in a recent interview, he said, 'I don't think we should have billionaires.' In the Cato poll, however, only 29% of respondents said it is 'immoral for society to allow people to become billionaires.' Once, people were more sympathetic to these ideas.
In 1935, when the Roper/Fortune poll asked, 'Do you believe that the government should allow a man who has investments worth over a million dollars to keep them, subject only to present taxes?' 45% said yes and 46% said no. In another question in the poll, a majority, 52%, said there should be no limit on what someone could inherit. The next most common response, given by 16%, was that the limit should be between $100,000 and a million dollars, while 14% put the amount at $100,000 or less. The vast majority of Americans don't want to eliminate billionaires, but they are quite willing to tax them heavily.

The socialist candidate has other ideas, such as government-run grocery stores and rent control, that are unlikely to have strong national support. It hardly needs to be said that New York City is not America, and the city has serious problems that don't significantly affect other parts of the country. A democratic socialist may be popular there, but it is unlikely that his brand will be met with the same positive reception in most other parts of the nation. Even Democrats are rightly skittish about some of his views.
Yahoo
a day ago
- Yahoo
More people are considering AI lovers, and we shouldn't judge
People are falling in love with their chatbots. There are now dozens of apps that offer intimate companionship with an AI-powered bot, and they have millions of users. A recent survey of users found that 19% of Americans have interacted with an AI meant to simulate a romantic partner.

The response has been polarizing. In a New Yorker article titled "Your A.I. Lover Will Change You," futurist Jaron Lanier argued that "when it comes to what will happen when people routinely fall in love with an A.I., I suggest we adopt a pessimistic estimate about the likelihood of human degradation." Podcaster Joe Rogan put it more succinctly -- in a recent interview with Sen. Bernie Sanders, the two discussed the "dystopian" prospect of people marrying their AIs. Noting a case where this has already happened, Rogan said: "I'm like, oh, we're done. We're cooked."

We're probably not cooked. Rather, we should consider accepting human-AI relationships as beneficial and healthy. More and more people are going to form such relationships in the coming years, and my research in sexuality and technology indicates it is mostly going to be fine.

When surveying the breathless media coverage, the main concern raised is that chatbots will spoil us for human connection. How could we not prefer their cheerful personalities, their uncomplicated affection and their willingness to affirm everything we say? The fear is that, seduced by such easy companionship, many people will surely give up their desire to find human partners, while others will lose their ability to form satisfying human relationships even if they want to.

It has been less than three years since the launch of ChatGPT and other chatbots based on large language models. That means we can only speculate about the long-term effects of AI-human relationships on our capacity for intimacy. There is little data to support either side of the debate, though we can do our best to make sense of more short-term studies and other pieces of available evidence.

There are certain risks that we do know about already, and we should take them seriously. For instance, we know that AI companion apps have terrible privacy policies. Chatbots can encourage destructive behaviors. Tragically, one may have played a role in a teenager's suicide. The companies that provide these apps can go out of business, or they can change their terms of service without warning. This can suddenly deprive users of access to technology that they've become emotionally attached to, with no recourse or support.

Complex relationships

In assessing the dangers of relationships with AI, however, we should remember that human relationships are not exactly risk-free. One recent paper concluded that "the association between relationship distress and various forms of psychopathology is as strong as many other well-known predictors of mental illness." This is not to say we should swap human companions for AI ones. We just need to keep in mind that relationships can be messy, and we are always trying to balance the various challenges that come with them. AI relationships are no different.

We should also remember that just because someone forms an intimate bond with a chatbot, that doesn't mean it will be their only close relationship. Most people have lots of different people in their lives, who play a variety of different roles. Chatbot users may depend on their AI companions for support and affirmation, while still having relationships with humans that provide different kinds of challenges and rewards.
Meta's Mark Zuckerberg has suggested that AI companions may help solve the problem of loneliness. However, there is some (admittedly very preliminary) data to suggest that many of the people who form connections with chatbots are not just trying to escape loneliness. In a recent study (which has not yet been peer reviewed), researchers found that feelings of loneliness did not play a measurable role in someone's desire to form a relationship with an AI. Instead, the key predictor seemed to be a desire to explore romantic fantasies in a safe environment.

Support and safety

We should be willing to accept AI-human relationships without judging the people who form them. This follows a general moral principle that most of us already accept: we should respect the choices people make about their intimate lives when those choices don't harm anyone else. However, we can also take steps to ensure that these relationships are as safe and satisfying as possible.

First of all, governments should implement regulations to address the risks we know about already. They should, for instance, hold companies accountable when their chatbots suggest or encourage harmful behavior. Governments should also consider safeguards to restrict access by younger users, or at least to control the behavior of chatbots that are interacting with young people. And they should mandate better privacy protections -- though this is a problem that spans the entire tech industry.

Second, we need public education so people understand exactly what these chatbots are and the issues that can arise with their use. Everyone would benefit from full information about the nature of AI companions but, in particular, we should develop curricula for schools as soon as possible. While governments may need to consider some form of age restriction, the reality is that large numbers of young people are already using this technology, and will continue to do so. We should offer them non-judgmental resources to help them navigate their use in a manner that supports their well-being rather than stigmatizes their choices.

AI lovers aren't going to replace human ones. For all the messiness and agony of human relationships, we still (for some reason) pursue other people. But people will also keep experimenting with chatbot romances, if for no other reason than they can be a lot of fun.

Neil McArthur is director of the Center for Professional and Applied Ethics at the University of Manitoba. This article is republished from The Conversation under a Creative Commons license. Read the original article. The views and opinions in this commentary are solely those of the author.


Times
2 days ago
- Entertainment
- Times
Don't cry for Stephen Colbert — he should have been axed years ago
Seconds after the news broke on Friday that The Late Show with Stephen Colbert was being axed in May next year, fans were crying foul. Jimmy Kimmel — the ABC late-night host paid a whopping $15 million a year to tell jokes to an average 1.7 million viewers — wrote on his Instagram stories: 'F*** you and all your Sheldons, CBS.' Bernie Sanders, the senator from Vermont, railed against the decision on X. The former TV host Katie Couric published a lengthy statement on her Instagram page. The actress Jamie Lee Curtis told the Associated Press that 'it's bad' and called Colbert 'a great guy'.

It's true that the optics were awful. Only days before he was axed, Colbert slammed the CBS owner, Paramount, for paying President Trump $16 million to settle a lawsuit. Trump had claimed the network had edited an interview with Kamala Harris in a favourable light to help her win the election. Paramount needs approval from Trump's Federal Communications Commission to close a multi-billion-dollar merger.


CNA
2 days ago
- CNA
Commentary: More people are considering AI lovers, and we shouldn't judge
WINNIPEG, Canada: People are falling in love with their chatbots. There are now dozens of apps that offer intimate companionship with an artificial intelligence-powered bot, and they have millions of users. A recent survey of users found that 19 per cent of Americans have interacted with an AI meant to simulate a romantic partner.

The response has been polarising. In a New Yorker article titled Your AI Lover Will Change You, futurist Jaron Lanier argued that 'when it comes to what will happen when people routinely fall in love with an AI, I suggest we adopt a pessimistic estimate about the likelihood of human degradation.' Podcaster Joe Rogan put it more succinctly – in a recent interview with US Senator Bernie Sanders, the two discussed the 'dystopian' prospect of people marrying their AIs. Noting a case where this has already happened, Rogan said: 'I'm like, oh, we're done. We're cooked.'

We're probably not cooked. Rather, we should consider accepting human-AI relationships as beneficial and healthy. More and more people are going to form such relationships in the coming years, and my research in sexuality and technology indicates it is mostly going to be fine.

RUINING HUMAN CONNECTION?

When surveying the breathless media coverage, the main concern raised is that chatbots will spoil us for human connection. How could we not prefer their cheerful personalities, their uncomplicated affection and their willingness to affirm everything we say? The fear is that, seduced by such easy companionship, many people will surely give up their desire to find human partners, while others will lose their ability to form satisfying human relationships even if they want to.

It has been less than three years since the launch of ChatGPT and other chatbots based on large language models. That means we can only speculate about the long-term effects of AI-human relationships on our capacity for intimacy. There is little data to support either side of the debate, though we can do our best to make sense of more short-term studies and other pieces of available evidence.

There are certain risks that we do know about already, and we should take them seriously. For instance, we know that AI companion apps have terrible privacy policies. Chatbots can encourage destructive behaviours. Tragically, one may have played a role in a teenager's suicide. The companies that provide these apps can go out of business, or they can change their terms of service without warning. This can suddenly deprive users of access to technology that they've become emotionally attached to, with no recourse or support.

RELATIONSHIPS CAN BE MESSY AND COMPLEX

In assessing the dangers of relationships with AI, however, we should remember that human relationships are not exactly risk-free. One recent paper concluded that 'the association between relationship distress and various forms of psychopathology is as strong as many other well-known predictors of mental illness.' This is not to say we should swap human companions for AI ones. We just need to keep in mind that relationships can be messy, and we are always trying to balance the various challenges that come with them. AI relationships are no different.

We should also remember that just because someone forms an intimate bond with a chatbot, that doesn't mean it will be their only close relationship. Most people have lots of different people in their lives, who play a variety of different roles.
Chatbot users may depend on their AI companions for support and affirmation, while still having relationships with humans that provide different kinds of challenges and rewards.

Meta's Mark Zuckerberg has suggested that AI companions may help solve the problem of loneliness. However, there is some (admittedly very preliminary) data to suggest that many of the people who form connections with chatbots are not just trying to escape loneliness. In a recent study (which has not yet been peer reviewed), researchers found that feelings of loneliness did not play a measurable role in someone's desire to form a relationship with an AI. Instead, the key predictor seemed to be a desire to explore romantic fantasies in a safe environment.

SUPPORT AND SAFETY

We should be willing to accept AI-human relationships without judging the people who form them. This follows a general moral principle that most of us already accept: We should respect the choices people make about their intimate lives when those choices don't harm anyone else. However, we can also take steps to ensure that these relationships are as safe and satisfying as possible.

First of all, governments should implement regulations to address the risks we know about already. They should, for instance, hold companies accountable when their chatbots suggest or encourage harmful behaviour. Governments should also consider safeguards to restrict access by younger users, or at least to control the behaviour of chatbots that are interacting with young people. And they should mandate better privacy protections – though this is a problem that spans the entire tech industry.

Second, we need public education so people understand exactly what these chatbots are and the issues that can arise with their use. Everyone would benefit from full information about the nature of AI companions but, in particular, we should develop curricula for schools as soon as possible. While governments may need to consider some form of age restriction, the reality is that large numbers of young people are already using this technology, and will continue to do so. We should offer them non-judgmental resources to help them navigate their use in a manner that supports their well-being rather than stigmatises their choices.

AI lovers aren't going to replace human ones. For all the messiness and agony of human relationships, we still (for some reason) pursue other people. But people will also keep experimenting with chatbot romances, if for no other reason than they can be a lot of fun.