
More people are considering AI lovers, and we shouldn't judge
A recent survey found that 19% of Americans have interacted with an AI designed to simulate a romantic partner.
The response has been polarizing. In a New Yorker article titled "Your A.I. Lover Will Change You," futurist Jaron Lanier argued that "when it comes to what will happen when people routinely fall in love with an A.I., I suggest we adopt a pessimistic estimate about the likelihood of human degradation."
Podcaster Joe Rogan put it more succinctly -- in a recent interview with Sen. Bernie Sanders, the two discussed the "dystopian" prospect of people marrying their AIs. Noting a case where this has already happened, Rogan said: "I'm like, oh, we're done. We're cooked."
We're probably not cooked. Rather, we should consider accepting human-AI relationships as beneficial and healthy. More and more people are going to form such relationships in the coming years, and my research in sexuality and technology indicates it is mostly going to be fine.
In the breathless media coverage, the main concern raised is that chatbots will spoil us for human connection. How could we not prefer their cheerful personalities, their uncomplicated affection and their willingness to affirm everything we say?
The fear is that, seduced by such easy companionship, many people will surely give up their desire to find human partners, while others will lose their ability to form satisfying human relationships even if they want to.
It has been less than three years since the launch of ChatGPT and other chatbots based on large language models. That means we can only speculate about the long-term effects of AI-human relationships on our capacity for intimacy. There is little data to support either side of the debate, though we can do our best to make sense of more short-term studies and other pieces of available evidence.
There are certain risks that we do know about already, and we should take them seriously. For instance, we know that AI companion apps have terrible privacy policies. Chatbots can encourage destructive behaviors. Tragically, one may have played a role in a teenager's suicide.
The companies that provide these apps can go out of business, or they can change their terms of service without warning. This can suddenly deprive users of access to technology that they've become emotionally attached to, with no recourse or support.
Complex relationships
In assessing the dangers of relationships with AI, however, we should remember that human relationships are not exactly risk-free. One recent paper concluded that "the association between relationship distress and various forms of psychopathology is as strong as many other well-known predictors of mental illness."
This is not to say we should swap human companions for AI ones. We just need to keep in mind that relationships can be messy, and we are always trying to balance the various challenges that come with them. AI relationships are no different.
We should also remember that just because someone forms an intimate bond with a chatbot, that doesn't mean it will be their only close relationship. Most people have lots of different people in their lives, who play a variety of different roles.
Chatbot users may depend on their AI companions for support and affirmation, while still having relationships with humans that provide different kinds of challenges and rewards.
Meta's Mark Zuckerberg has suggested that AI companions may help solve the problem of loneliness. However, there is some (admittedly very preliminary) data to suggest that many of the people who form connections with chatbots are not just trying to escape loneliness.
In a recent study (which has not yet been peer reviewed), researchers found that feelings of loneliness did not play a measurable role in someone's desire to form a relationship with an AI. Instead, the key predictor seemed to be a desire to explore romantic fantasies in a safe environment.
Support and safety
We should be willing to accept AI-human relationships without judging the people who form them. This follows a general moral principle that most of us already accept: we should respect the choices people make about their intimate lives when those choices don't harm anyone else.
However, we can also take steps to ensure that these relationships are as safe and satisfying as possible.
First of all, governments should implement regulations to address the risks we know about already. They should, for instance, hold companies accountable when their chatbots suggest or encourage harmful behavior.
Governments should also consider safeguards to restrict access by younger users, or at least to control the behavior of chatbots that are interacting with young people. And they should mandate better privacy protections -- though this is a problem that spans the entire tech industry.
Second, we need public education so people understand exactly what these chatbots are and the issues that can arise with their use. Everyone would benefit from full information about the nature of AI companions but, in particular, we should develop curricula for schools as soon as possible.
While governments may need to consider some form of age restriction, the reality is that large numbers of young people are already using this technology, and will continue to do so. We should offer them non-judgmental resources to help them navigate their use in a manner that supports their well-being, rather than stigmatizes their choices.
AI lovers aren't going to replace human ones. For all the messiness and agony of human relationships, we still (for some reason) pursue other people. But people will also keep experimenting with chatbot romances, if for no other reason than they can be a lot of fun.
Neil McArthur is director of the Center for Professional and Applied Ethics at the University of Manitoba. This article is republished from The Conversation under a Creative Commons license. Read the original article. The views and opinions in this commentary are solely those of the author.

Related Articles

Washington Post
DOGE builds AI tool to cut 50 percent of federal regulations
The U.S. DOGE Service is using a new artificial intelligence tool to slash federal regulations, with the goal of eliminating half of Washington's regulatory mandates by the first anniversary of President Donald Trump's inauguration, according to documents obtained by The Washington Post and four government officials familiar with the plans.

The tool, called the 'DOGE AI Deregulation Decision Tool,' is supposed to analyze roughly 200,000 federal regulations to determine which can be eliminated because they are no longer required by law, according to a PowerPoint presentation obtained by The Post that is dated July 1 and outlines DOGE's plans. Roughly 100,000 of those rules would be deemed worthy of trimming, the PowerPoint estimates — mostly through the automated tool with some staff feedback. The PowerPoint also suggests the AI tool will save the United States trillions of dollars by reducing compliance requirements, slashing the federal budget and unlocking unspecified 'external investment.'

The tool has already been used to eliminate more than 1,000 'regulatory sections' at the Department of Housing and Urban Development in under two weeks, according to the PowerPoint, and to write '100% of deregulations' at the Consumer Financial Protection Bureau (CFPB). Three HUD employees — as well as documents obtained by The Post — confirmed that an AI tool was recently used to review hundreds, if not more than 1,000, lines of regulations at that agency and suggest edits or deletions.

The tool was developed by engineers brought into government as part of Elon Musk's DOGE project, according to two federal officials directly familiar with DOGE's work, who, like others interviewed for this story, spoke on the condition of anonymity to describe internal deliberations they were not authorized to discuss publicly.

Conservatives have long argued that the federal government issues far too many regulations that constrain economic growth and hurt the private sector.
Many liberals have emphasized that there are reasons federal regulations are in place, such as protecting the environment and ensuring food safety.

Asked about the AI-fueled deregulation, White House spokesman Harrison Fields wrote in an email that 'all options are being explored' to achieve the president's goal of deregulating government. Fields noted that 'no single plan has been approved or green-lit,' cautioning that the work is 'in its early stages and is being conducted in a creative way in consultation with the White House.' Fields added: 'The DOGE experts creating these plans are the best and brightest in the business and are embarking on a never-before-attempted transformation of government systems and operations to enhance efficiency and effectiveness.'

One former member of DOGE, which stands for Department of Government Efficiency, wrote in a text message that the team did everything it could to come up with legal and technological solutions to repeal as many regulations as possible within Trump's term. 'Creative deployment of artificial intelligence to advance the president's regulatory agenda is one logical strategy to make significant progress in that finite amount of time,' wrote James Burnham, who served as chief attorney for DOGE and is now managing partner at King Street Legal.

The proposed use of AI to accomplish swift, massive deregulation expands upon the Trump administration's work to embed AI across the government — using it for everything from fighting wars to reviewing taxes. And it dovetails with the administration's aim to unwind regulations government-wide, even without AI. But it's unclear whether a new, untested technology can analyze federal regulations — typically put in place for a reason — without making mistakes.

On Jan. 31, Trump issued an executive order to 'unleash prosperity through deregulation,' which required agencies to repeal 10 rules for every new rule issued.
Since then, some departments have engaged in what almost appears to be a competition to cut. In May, the Transportation Department declared it had deleted 52 regulations and more than 73,000 words from the Federal Register. This month, the Labor Department announced plans to nix more than 60 regulations.

Still, Republicans have grown frustrated by the relatively slow pace of deregulatory actions. During the first six months of Trump's first term, his administration cut costs by about $550 million and paperwork hours by 566,000, according to the American Action Forum, a center-right think tank that tracks regulations. Through July of this year, the Trump administration has achieved nearly all its cost reductions by repealing one rule regarding what businesses must report about their ownership ties. Without that, the Trump administration would have increased regulatory costs by $1.1 billion and paperwork hours by 3.3 million, according to the think tank.

'They're way behind where they were in 2017 on the numbers, no question about it,' said Doug Holtz-Eakin, president of the American Action Forum and former director of the nonpartisan Congressional Budget Office. 'I thought this was going to be something they crushed because they did so in 2017. I've been baffled by this.'

The AI tool is intended to massively accelerate the deregulation process, with every federal agency able to develop a list of regulations to eliminate in less than four weeks, according to the PowerPoint. The agencies are supposed to finish their lists by Sept. 1, and this month, DOGE is supposed to start training staff at agencies on how to use the AI tool, the PowerPoint states.
While DOGE had pushed earlier this year to take a larger role in the deregulatory effort, the Musk-led team was frequently rebuffed by agency employees who worried about outsourcing decisions and their authorities, according to three people who have participated in deregulatory conversations at the White House and the agency level who spoke on the condition of anonymity to share private conversations. Federal officials also questioned whether DOGE had the subject matter expertise to comb through highly technical regulations and find appropriate targets for cuts, the people said.

As DOGE's influence waned following Musk's departure, the administration has remained focused on Trump's deregulatory order, the people said. White House staff are also using internal trackers to monitor how quickly agencies are paring regulations, while leaders at every major agency are meeting regularly to discuss how quickly they can meet Trump's ambitions and which cuts 'count' toward the president's order, according to the people.

In some cases, DOGE's campaign to fire federal workers and dramatically shrink the federal workforce has hampered the deregulatory effort, the three people said. 'The White House wants us higher on the leader board,' said one of the three people. 'But you have to have staff and time to write the deregulatory notices, and we don't. That's a big reason for the holdup.'

Trump officials have tried to use AI to roll back regulations before. At the Department of Health and Human Services, a 2020 'Regulatory Clean Up Initiative' drew on an AI tool to identify and remove archaic language, defunct federal provisions and outdated terms from federal rules.

Trump has pushed the limits of the Administrative Procedure Act, which governs repealing federal regulations, most notably through an executive order ending a rule that restricted the water flow of showerheads. It is unclear if courts will allow the administration to void rules.
Meanwhile, private-sector companies tend to be uncomfortable ignoring a rule that was illegally repealed, said Nicholas Bagley, an administrative law expert at the University of Michigan. 'There's been some flashy sideshow efforts to avoid the legal strictures, but in general, they don't stick,' Bagley said of Trump's unilateral efforts to cut regulations.

DOGE officials may be concerned about the legality of the AI tool. One page of the slideshow says four people identified as 'DOGE lawyers' — Burnham, Austin Raynor, Jacob Altik and Ashley Boizelle — each 'vetted and endorsed' the AI deregulation tool. Raynor, Altik and Boizelle could not be reached for comment.

Federal regulations, as they stand now, can be divided into three categories, the PowerPoint says: 50 percent are not required by law, 38 percent are statutorily mandated and 12 percent are 'Not Required but Agency Needs.' By ending the rules that are neither required by law nor needed for agency operations, the PowerPoint states, the government could recover $3.3 trillion a year. But the PowerPoint also suggests it would take 3.6 million 'man-hours' to nix 100,000 regulations under the current system. It is not clear how the PowerPoint's authors arrived at these figures.

That's where the AI tool comes in, the PowerPoint proposes. The tool will save 93 percent of the human labor involved by reviewing up to 500,000 comments submitted by the public in response to proposed rule changes. By the end of the deregulation exercise, humans will have spent a grand total of 36 hours gutting half of all federal regulations, the PowerPoint claims.

The PowerPoint lists two case studies as examples of how well its AI tool can work, detailing recent efforts to slash regulations at HUD and CFPB. Asked about the AI-driven regulation slashing, a HUD spokesperson wrote in a statement that the agency is having 'ongoing discussions' to consider how to make government more efficient.
'We are not disclosing specifics about how many regulations are being examined or where we are at in the broader process,' the spokesperson said, adding, 'the process is far from final.' The spokesperson continued: 'The intent of the developments is not to replace the judgement, discretion and expertise of staff but be additive to the process.'

CFPB did not respond to questions. The Post was not able to independently confirm the use of AI at the agency.

At HUD, efforts to use AI to kill regulations began three months ago, according to three employees familiar with the matter and emails obtained by The Post. A message sent to some of the agency's Public and Indian Housing staff on April 18 announced a 'DOGE team' would be 'learning how AI will be able to analyze all PIH regulations looking for and flagging discrepancies between them and the underlying statute.'

'This is a major effort,' the email continued. 'We are working with the lawyers to simplify the [Administrative Procedure Act] process … use AI for drafting, and use AI for complying notices in the future.' The overall goal, the email noted, was to deploy AI to reduce the time staff had to spend on deregulation.

Another document, signed 'HUD DOGE Team' and sent to staff, detailed how DOGE team members wanted federal staffers to engage the AI tool. Staffers were supposed to look over the tool's recommendations for proposed regulatory eliminations and mark whether they agreed, disagreed or believed deletions should go further.

One HUD employee who participated in this process said the AI tool made several errors. It delivered an analysis saying those who drafted various agency regulations had misunderstood the law in several places, said the employee, who spoke on the condition of anonymity to reveal internal conversations. But the AI tool was sometimes wrong, the employee said.
'There were a couple places where the AI said the language was outside of the statute,' the employee said, 'and actually, no — the AI read the language wrong, and it is actually correct.'

After its tryout at HUD, the AI deregulation tool is supposed to deploy across the rest of government in coming months, according to the DOGE PowerPoint. Over the next five months, agencies will work with the AI tool to identify regulations to kill, respond to public comments about the proposed deletions and submit formal deregulation proposals, the PowerPoint says. The goal is to wrap everything up and 'Relaunch America on Jan. 20, 2026,' the PowerPoint states.


Forbes
What Ancient Farmers Can Teach The Modern Boardroom About AI Strategy
Every day in 2025 brings a new AI milestone. From generative tools rewriting code to AI copilots augmenting medical diagnoses, it's easy to feel we've entered unprecedented terrain. But we've been here before. Just ask our ancient ancestors, who faced the first great disruption when they transitioned from hunting and gathering to agriculture. That shift didn't just change how we ate. It transformed how we lived, worked, governed, and grew. The parallels to today's AI revolution are striking and instructive. If history is any guide, organizations that manage technological transitions through thoughtful governance, strategic investment in people, and adaptive policies will not only survive but thrive.

From Stone Tools to Silicon Chips: How Societies Adapt to Disruption

Roughly 12,000 years ago, human communities in the Fertile Crescent began cultivating crops and domesticating animals. Archaeological sites like Abu Hureyra in modern-day Syria reveal this wasn't a sudden break from the past but a gradual, iterative process of learning, testing, and integrating new tools into existing ways of life. This evolutionary, not revolutionary, mindset offers a key lesson: Transformation doesn't mean total disruption. It means layering innovation onto what works and building systems that scale over time. The AI transition requires a similar approach: pilot programs, feedback loops, reskilling, and workforce support must evolve hand-in-hand with technological integration.

Strategy: AI Integration Requires Incremental Adaptation

Just as early agricultural societies developed irrigation systems and record-keeping via clay tablets, modern organizations are building the infrastructure to support AI-enabled workflows. But success hinges not on speed, but on sequencing.
McKinsey reports that although generative AI could add up to $4.4 trillion in global productivity annually, only 21% of companies had adopted AI in more than one business function as of 2023. This isn't a failure; it's a sign that organizations are proceeding thoughtfully. Boards and CFOs should be tracking AI ROI, not only in terms of cost savings but in how it reshapes value creation. That means budgeting for phased implementation and workforce transformation simultaneously.

Policy: Redesigning Governance for Human-AI Collaboration

Early civilizations didn't just invent tools; they also wrote laws. The Code of Hammurabi, dating to 1750 BCE, is one of the first known legal frameworks for managing agricultural property, labor, and dispute resolution. Today's version? AI ethics policies, data governance protocols, and algorithmic accountability.

Regulators are already acting. The EU's AI Act, passed in 2024, imposes risk-based requirements for transparency, bias mitigation, and human oversight. Meanwhile, the European Sustainability Reporting Standards (ESRS), specifically S1 and S2, under the guidance of EFRAG, now require companies to report on the impact of AI on workforce strategy and human capital governance.

Boards must go beyond compliance and ask harder questions. The fiduciary risks are real. A poorly governed algorithm can expose companies to legal liability, reputational damage, and talent loss. Treat AI governance as a board-level issue, not just a technical one.

Programs: Cultivating Workforce Resilience and Knowledge Transfer

Farming didn't eliminate the need for hunting. Early societies maintained both capabilities as insurance against failure. Similarly, organizations shouldn't rush to fully automate without investing in human capital. This means reskilling must become the norm. According to a PwC study on workforce hopes and fears, 40% of workers will need up to six months of training to remain relevant in the AI economy.
High-performing companies are already taking the lead. AT&T's collaboration with Udacity to create nano-degree programs reduced reskilling time by 35% and boosted internal mobility. Boards and CHROs should champion programs of this kind. If early societies could preserve astronomy, crop rotation, and animal husbandry without Google, we can certainly codify AI literacy and strategic workforce knowledge today.

Measuring What Matters: Human Capital as a Source of Value

The agricultural revolution spurred population growth, urbanization, and eventually, modern economies. Today's AI revolution will reshape how we measure value, especially in intangible assets like skills, collaboration, and creativity. Research by Alex Edmans shows that companies investing in employee well-being significantly outperform their peers in long-term shareholder returns. This is why ISO 30414 and SEC human capital disclosure expectations are gaining traction. Human Capital ROI (HCROI) should be tracked with the same rigor as Return on Equity or Investment. Boards should demand metrics that show not just AI adoption but how it enhances organizational resilience and workforce productivity.

Final Thought: Change Is Inevitable. Human Judgment Is Indispensable.

AI may feel new, but the pattern is ancient. Transformation is never just about tools; it's about how we govern change, support people, and sustain growth. The societies that thrived in the wake of agriculture weren't the most technologically advanced. They were the ones that integrated new tools into stable, human-centered systems. As we stand at the threshold of another era, it's time to remember what worked the first time: governance, adaptation, and investment in human capability. Let's not forget: we've been here before.

Postscript: With appreciation to Stela Lupushor, whose co-authored blog post with me on this topic served as inspiration for this column.

Wall Street Journal
AI Founder Pays $38.2 Million for Beachfront Miami-Area Penthouse
This spring, 42-year-old tech entrepreneur Daniel Nadler gave up his Miami rental apartment and moved into a beachfront hotel. The goal was to streamline his life and focus on building OpenEvidence, his Google-backed medical AI company, which is valued at $3.5 billion. 'I didn't want the overhead of dealing with houses and all of the stuff that comes with houses,' he said. 'If I could wake up at 4 a.m. and just order room service—this is so perfect.'