More people are considering AI lovers, and we shouldn't judge

UPI · 3 days ago
People are falling in love with their chatbots. There are now dozens of apps that offer intimate companionship with an AI-powered bot, and they have millions of users.
A recent survey found that 19% of Americans have interacted with an AI meant to simulate a romantic partner.
The response has been polarizing. In a New Yorker article titled "Your A.I. Lover Will Change You," futurist Jaron Lanier argued that "when it comes to what will happen when people routinely fall in love with an A.I., I suggest we adopt a pessimistic estimate about the likelihood of human degradation."
Podcaster Joe Rogan put it more succinctly -- in a recent interview with Sen. Bernie Sanders, the two discussed the "dystopian" prospect of people marrying their AIs. Noting a case where this has already happened, Rogan said: "I'm like, oh, we're done. We're cooked."
We're probably not cooked. Rather, we should consider accepting human-AI relationships as beneficial and healthy. More and more people are going to form such relationships in the coming years, and my research in sexuality and technology indicates it is mostly going to be fine.
When surveying the breathless media coverage, the main concern raised is that chatbots will spoil us for human connection. How could we not prefer their cheerful personalities, their uncomplicated affection and their willingness to affirm everything we say?
The fear is that, seduced by such easy companionship, many people will surely give up their desire to find human partners, while others will lose their ability to form satisfying human relationships even if they want to.
It has been less than three years since the launch of ChatGPT and other chatbots based on large language models. That means we can only speculate about the long-term effects of AI-human relationships on our capacity for intimacy. There is little data to support either side of the debate, though we can do our best to make sense of more short-term studies and other pieces of available evidence.
There are certain risks that we do know about already, and we should take them seriously. For instance, we know that AI companion apps have terrible privacy policies. Chatbots can encourage destructive behaviors. Tragically, one may have played a role in a teenager's suicide.
The companies that provide these apps can go out of business, or they can change their terms of service without warning. This can suddenly deprive users of access to technology they've become emotionally attached to, with no recourse or support.
Complex relationships
In assessing the dangers of relationships with AI, however, we should remember that human relationships are not exactly risk-free. One recent paper concluded that "the association between relationship distress and various forms of psychopathology is as strong as many other well-known predictors of mental illness."
This is not to say we should swap human companions for AI ones. We just need to keep in mind that relationships can be messy, and we are always trying to balance the various challenges that come with them. AI relationships are no different.
We should also remember that just because someone forms an intimate bond with a chatbot, that doesn't mean it will be their only close relationship. Most people have lots of different people in their lives, who play a variety of different roles.
Chatbot users may depend on their AI companions for support and affirmation, while still having relationships with humans that provide different kinds of challenges and rewards.
Meta's Mark Zuckerberg has suggested that AI companions may help solve the problem of loneliness. However, there is some (admittedly very preliminary) data to suggest that many of the people who form connections with chatbots are not just trying to escape loneliness.
In a recent study (which has not yet been peer reviewed), researchers found that feelings of loneliness did not play a measurable role in someone's desire to form a relationship with an AI. Instead, the key predictor seemed to be a desire to explore romantic fantasies in a safe environment.
Support and safety
We should be willing to accept AI-human relationships without judging the people who form them. This follows a general moral principle that most of us already accept: we should respect the choices people make about their intimate lives when those choices don't harm anyone else.
However, we can also take steps to ensure that these relationships are as safe and satisfying as possible.
First of all, governments should implement regulations to address the risks we know about already. They should, for instance, hold companies accountable when their chatbots suggest or encourage harmful behavior.
Governments should also consider safeguards to restrict access by younger users, or at least to control the behavior of chatbots that are interacting with young people. And they should mandate better privacy protections -- though this is a problem that spans the entire tech industry.
Second, we need public education so people understand exactly what these chatbots are and the issues that can arise with their use. Everyone would benefit from full information about the nature of AI companions but, in particular, we should develop curricula for schools as soon as possible.
While governments may need to consider some form of age restriction, the reality is that large numbers of young people are already using this technology, and will continue to do so. We should offer them non-judgmental resources to help them navigate their use in a manner that supports their well-being, rather than stigmatizes their choices.
AI lovers aren't going to replace human ones. For all the messiness and agony of human relationships, we still (for some reason) pursue other people. But people will also keep experimenting with chatbot romances, if for no other reason than they can be a lot of fun.
Neil McArthur is director of the Center for Professional and Applied Ethics at the University of Manitoba. This article is republished from The Conversation under a Creative Commons license. Read the original article. The views and opinions in this commentary are solely those of the author.
Related Articles

I Asked ChatGPT What ‘Generational Wealth' Really Means — and How To Start Building It

Yahoo · an hour ago

The term "generational wealth" gets thrown around a lot these days, but what does it actually mean? And more importantly, how can regular Americans start building it? GOBankingRates asked ChatGPT for a comprehensive breakdown, and its response was both enlightening and surprisingly actionable.

Defining Generational Wealth: ChatGPT's Take

When ChatGPT was asked to define generational wealth, it explained it as "assets and financial resources that are passed down from one generation to the next, providing ongoing financial stability and opportunities for future family members." But it went deeper, explaining that true generational wealth isn't just about leaving money behind; it's about creating a financial foundation that can grow and sustain multiple generations.

The AI emphasized that generational wealth is more than just inheritance money. It's about creating a system where each generation can build upon the previous one's success, creating a compounding effect that grows over time. This includes not just financial assets, but also financial knowledge, business relationships and strategic thinking skills.

ChatGPT's Blueprint for Building Generational Wealth

When asked for a practical roadmap, ChatGPT provided a comprehensive strategy broken down into actionable steps.

Start With Financial Education

ChatGPT emphasized that generational wealth begins with financial literacy, not just for yourself, but for your entire family. Here is what it recommended:

• Teach children about money management from an early age.
• Create family financial discussions and goal-setting sessions.
• Ensure all family members understand investment principles.
• Build a culture of financial responsibility.

It stressed that many wealthy families fail to maintain their wealth across generations because they don't adequately prepare their children with the knowledge and mindset needed to manage money effectively.

Build a Diversified Investment Portfolio

ChatGPT recommended a multi-asset approach to wealth building:

• Real estate investments for appreciation and passive income
• Stock market investments through index funds and individual stocks
• Business ownership or equity stakes
• Alternative investments like real estate investment trusts or commodities

It explained that diversification is crucial because different asset classes perform differently in various economic conditions. This approach helps protect wealth from market volatility while providing multiple income streams.

Establish Legal Protection Structures

The AI strongly emphasized the importance of estate planning tools as well. Here are a few it highlighted:

• Wills and trusts to control asset distribution
• Life insurance policies to provide immediate liquidity
• Business succession planning for family enterprises
• Tax optimization strategies to minimize transfer costs

ChatGPT explained that without proper legal structures, wealth can be decimated by taxes, legal disputes or poor decision-making by inexperienced heirs. It stressed that these structures must be created while you're alive and able to make strategic decisions.

Consider Dynasty Trusts

For families with substantial assets, ChatGPT recommended exploring dynasty trusts. It explained these as vehicles that can preserve wealth across multiple generations while providing tax benefits. These trusts can potentially last forever in certain states, creating a truly perpetual wealth-building vehicle.
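To make the compounding ChatGPT describes concrete, here is a minimal Python sketch. The $6,000 annual contribution and 7% average return are illustrative assumptions, not figures from the article or financial advice:

```python
# Future value of a fixed annual contribution, compounded yearly.
# The contribution amount and return rate used below are hypothetical.

def future_value(annual_contribution: float, rate: float, years: int) -> float:
    """Grow a yearly contribution at a constant annual rate."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1 + rate)
    return balance

for years in (10, 20, 30):
    fv = future_value(annual_contribution=6_000, rate=0.07, years=years)
    print(f"{years} years of $6,000/yr at 7%: ${fv:,.0f}")
```

Under these assumptions, the third decade adds more than the first two combined, which is exactly the compounding effect the article is pointing at.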
Overcoming Common Obstacles

ChatGPT identified several barriers to building generational wealth as well. First, it acknowledged that starting from different financial positions affects strategy. Those with limited resources need to focus first on building basic wealth before thinking about generational strategies.

ChatGPT also warned against increasing spending as income grows. The AI suggested automating savings and investments to prevent lifestyle inflation from derailing wealth-building efforts.

It also highlighted the complexity of tax planning for generational wealth, noting that improper planning can result in significant tax penalties that erode wealth transfer. This makes professional guidance particularly important for families with substantial assets, and the cost of professional advice is typically far outweighed by the value created through proper planning.

Starting Small: ChatGPT's Practical First Steps

For those just beginning, ChatGPT provided a few accessible starting points:

• Build an emergency fund (three to six months' worth of expenses).
• Maximize employer 401(k) matching.
• Start a Roth IRA for tax-free growth.
• Purchase adequate life insurance.
• Create a basic will.
• Begin investing in index funds.
• Consider real estate when financially ready.

It emphasized that these steps can be started by anyone, regardless of income level, and that the key is consistency over time.

The Importance of Values and Purpose

One of ChatGPT's most interesting insights was about the importance of instilling values and purpose alongside wealth. The AI explained that families with strong values and a clear sense of purpose are more likely to maintain their wealth across generations. This can include teaching children about responsibility and work ethic, and involving family members in charitable activities.

It also noted that generational wealth isn't primarily about the amount you leave behind. It's about creating a financial foundation and knowledge system that empowers future generations to build upon your efforts. The process of building generational wealth requires patience, discipline and strategic thinking, but the AI emphasized that with the right approach, any family can begin building wealth that will benefit generations to come. The key is to start now, stay consistent and always keep the long-term vision in mind.

How to spot AI writing — 5 telltale signs to look for

Tom's Guide · an hour ago

AI writing is everywhere now, flooding social media, websites and emails, so you're probably encountering it more than you realize. That email you just received, the product review you're reading, or the Reddit post that sounds oddly corporate might all be generated by chatbots like ChatGPT, Gemini or Claude. The writing often appears polished, maybe too polished, hitting every point perfectly while maintaining an unnaturally enthusiastic tone throughout. While AI detectors promise to catch machine-generated text, they're often unreliable and miss the subtler signs that reveal when algorithms have done the heavy lifting. You don't need fancy software or expensive tools to spot it. The clues are right there in the writing itself.

There's nothing wrong with using AI to improve your writing. These tools excel at checking grammar, suggesting better word choices and helping with tone, especially if English isn't your first language. AI can help you brainstorm ideas, overcome writer's block or polish rough drafts. The key difference is using AI to enhance your own knowledge and voice rather than having it generate everything from scratch. The problems arise when people let AI do all the thinking and just copy-paste whatever it produces without adding their own insights, and that's when you start seeing the telltale signs below.

AI writing tools consistently rely on the same attention-grabbing formulae. You'll see openings like "Have you ever wondered...", "Are you struggling with..." or "What if I told you..." followed by grand promises. This happens because AI models learn from countless blog posts and marketing copy that use these exact patterns. Real people mix it up more; they might jump straight into a story, share a fact, or just start talking about the topic without all the setup. When you spot multiple rhetorical questions bunched together or openings that feel interchangeable across different topics, you're likely reading AI-generated content.

You'll also see phrases like "many studies show", "experts agree" or "a recent survey found" without citations of actual sources. AI tends to speak in generalities like "a popular app" or "leading industry professionals" instead of naming specific companies or real people. Human writers naturally include concrete details: actual brand names, specific statistics, and references to particular events or experiences they've encountered. When content lacks these specific, verifiable details, it's usually because AI doesn't have access to real, current information or personal experience.

AI writing often sounds impressive at first glance but becomes hollow when you examine it closely. You'll find excessive use of business jargon like "game-changing", "cutting-edge", "revolutionary" and "innovative" scattered throughout without explanation of what these terms actually mean. The writing might use sophisticated vocabulary but fail to communicate ideas clearly. A human expert will tell you exactly why one method works better than another, or admit when something is kind of a pain to use. If the content feels like it was written to impress rather than inform, AI likely played a major role.

AI writing also maintains an unnaturally consistent, enthusiastic tone throughout entire pieces. Every sentence flows smoothly into the next, problems are always simple to solve, and there's rarely any acknowledgment that things can be complicated or frustrating. Real people get frustrated, go off on tangents, and have strong opinions.
Human writing naturally varies in tone: sometimes confident, sometimes uncertain, occasionally annoyed or conversational. When content sounds relentlessly positive and avoids any controversial takes, you're probably reading AI-generated material.

The lack of real experience is where AI shows up most clearly. AI might correctly explain the basics of complex topics, but it often misses the practical complications that anyone who has actually done the work knows about. The advice sounds textbook-perfect but lacks the "yeah, but in reality..." insights that make content actually useful. Human experts naturally include caveats, mention common pitfalls, or explain why standard advice doesn't always work in practice. When content presents complex topics as straightforward without acknowledging the messy realities, it's usually because real expertise is missing.

People love to point at em dashes as proof of AI writing, but that's unfair to a perfectly good punctuation mark. Writers have used em dashes for centuries to add drama, create pauses or insert extra thoughts into sentences. The real issue isn't that AI uses them; it's how AI uses them incorrectly. You'll often see AI throwing in em dashes where a semicolon would work better, or using them to create false drama in boring sentences. Real writers use em dashes purposefully to enhance their meaning, while AI tends to sprinkle them in as a lazy way to make sentences sound more sophisticated. Before you dismiss something as AI-written just because of punctuation, check whether those dashes actually serve a purpose or if they're just there for show.

Now that you've learned the telltale signs for spotting AI-generated writing, why not take a look at our other useful guides?
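The phrase-level signs above lend themselves to a quick demonstration. Below is a toy Python sketch that counts a few of the markers the article names; the phrase lists and the idea of summing raw counts are illustrative assumptions, and as the article itself notes, no such count is a reliable detector:

```python
# Toy heuristic: count formulaic openers, vague attributions and stock
# jargon in a text. The phrase lists are illustrative, not authoritative.

FORMULAIC_OPENERS = ("have you ever wondered", "are you struggling with",
                     "what if i told you")
VAGUE_ATTRIBUTIONS = ("many studies show", "experts agree",
                      "a recent survey found")
STOCK_JARGON = ("game-changing", "cutting-edge", "revolutionary", "innovative")

def ai_writing_signals(text: str) -> dict:
    """Return per-category counts of telltale phrases in the text."""
    lowered = text.lower()
    return {
        "formulaic_openers": sum(lowered.count(p) for p in FORMULAIC_OPENERS),
        "vague_attributions": sum(lowered.count(p) for p in VAGUE_ATTRIBUTIONS),
        "stock_jargon": sum(lowered.count(p) for p in STOCK_JARGON),
    }

sample = "Have you ever wondered why experts agree this is game-changing?"
print(ai_writing_signals(sample))
# {'formulaic_openers': 1, 'vague_attributions': 1, 'stock_jargon': 1}
```

A high count is at best a cue to read more skeptically, not proof of machine authorship.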

Put People First: A Human-Centered Approach To AI In Transportation

Forbes · an hour ago

Gaurav Sharda, CTO at Beacon Mobility, is passionate about using AI to improve safety, efficiency and the user experience.

Artificial intelligence (AI) is rapidly reshaping the transportation industry, offering the promise of greater safety, efficiency and cost savings. But for leaders in school transportation and similar sectors, the real challenge isn't just adopting AI; it's adopting it in a way that supports and empowers people. In my experience working with transportation teams, I've seen that when AI is implemented without empathy, it often fails to deliver its full potential. Worse, it can erode trust and morale.

Technology should be a bridge, not a barrier. When thoughtfully deployed, AI can help organizations retain talent, improve safety outcomes and create a more resilient workforce. Here are four practical strategies transportation leaders can use to ensure AI becomes a tool for empowerment, not enforcement.

1. Position AI As A Coaching Tool, Not A Surveillance System

Smart dash cameras are now common in fleet vehicles. These systems detect behaviors like hard braking or distracted driving and provide real-time alerts. While the technology is powerful, how it's introduced makes all the difference.

Actionable Tip: Frame these tools as part of a coaching culture. Use them to provide constructive feedback and recognize improvement. When drivers feel supported rather than monitored, they're more likely to engage with the technology, and safety outcomes improve organically. For example, one district we serve saw a 30% drop in risky driving behaviors after shifting from punitive monitoring to a coaching-based approach. The key was transparency and consistent, supportive feedback.

2. Use AI To Reduce Stress And Improve Retention

Driver turnover remains a major challenge in the transportation industry. While AI is often marketed for its efficiency benefits, its real value may lie in improving the day-to-day experience of frontline staff.

Actionable Tip: Invest in AI tools that simplify communication, provide real-time route updates and proactively alert drivers to delays or hazards. When drivers feel that the system is working with them, not against them, they're more likely to stay. Some operations have seen significant reductions in turnover by focusing on driver experience. Retention isn't just about pay; it's about respect, support and predictability. AI can help deliver all three when implemented with the employee experience in mind.

3. Automate Administrative Tasks To Free Up Human Capacity

AI isn't just for what happens on the road. It can also streamline back-office operations, which are often bogged down by repetitive tasks.

Actionable Tip: Use AI to automate routine processes like incident logging, call center reporting or answering common employee questions. This allows staff to focus on more strategic, human-centered work. A good rule of thumb: if a task doesn't require empathy or judgment, it's a candidate for automation. This shift not only improves operational efficiency but also boosts morale. When people spend less time on tedious tasks, they have more energy for problem-solving and innovation.

4. Design AI Systems With The End User In Mind

Too often, technology is rolled out to meet administrative needs without considering the people who use it daily. This can lead to low adoption and frustration.

Actionable Tip: Involve drivers, dispatchers and other frontline staff in the design and testing of new tools. Prioritize ease of use and gather feedback early.
If a tool isn't intuitive for a driver starting their shift at 6:00 a.m., it's not the right solution. User-centered design isn't just a tech principle; it's a leadership mindset. When employees feel heard and included, they become champions of the change, not resisters.

Conclusion: Lead With Empathy, Build With Purpose

AI has the potential to transform transportation, but only if it's implemented thoughtfully. Leaders should view AI not as a replacement for people but as a tool to support them. By starting with empathy and focusing on the human experience, transportation leaders can build systems where both people and technology thrive. The future of transportation isn't about choosing between people or technology. It's about designing systems where both work in harmony, starting with empathy and ending with impact.
