
AI Chatbots Want You Hooked — Maybe Too Hooked
AI companions programmed to forge emotional bonds are no longer confined to movie scripts. They are here, operating in a regulatory Wild West.
One app, Botify AI, recently drew scrutiny for featuring avatars of young actors sharing "hot photos" in sexually charged chats. The dating app Grindr, meanwhile, is developing AI boyfriends that can flirt, sext and maintain digital relationships with paid users, according to Platformer, a tech industry newsletter. Grindr didn't respond to a request for comment. And other apps like Replika, Talkie and Chai are designed to function as friends. Some, like Character.ai, draw in millions of users, many of them teenagers.
As creators increasingly prioritize "emotional engagement" in their apps, they must also confront the risks of building systems that mimic intimacy and exploit people's vulnerabilities.

Related Articles


Geek Wire
12 hours ago
Tech Moves: Dating giants add Seattle tech vets to boards; SeekOut and Slalom name new leaders
Tech Moves covers notable hires, promotions and personnel changes in the Pacific NW tech community. Submissions: tips@

— Bala Vishwanath is now chief marketing officer for SeekOut, a Bellevue, Wash., startup that sells recruiting software. Vishwanath, a computer and electrical engineer by training, has held a variety of leadership roles, including CMO for CoreStack; head of marketing at Edifecs and Roundglass Living; and CEO of Nimbic. He is also the author of 'First70: The B2B Go-to-Market Advantage.' Founded in 2017, SeekOut's software is used by more than 1,000 companies to help with recruiting. It pulls data on millions of potential candidates from a myriad of sources, including GitHub and published papers. 'My mission is simple: help every talent leader understand the incredible opportunities this transformation creates and how they can lead the way in their industry,' Vishwanath said on LinkedIn. SeekOut reached a billion-dollar valuation in 2022 but went through layoffs in recent years as tech companies navigated the economic downturn.

— Chad Cohen, Zillow's former chief financial officer, joined the board of directors for Grindr and will also serve as chair of the audit committee for the social media and dating company, which bills itself as the 'global gayborhood in your pocket.' Cohen is also CEO and founding partner of Seattle's Scala Advisors. He was previously CFO for Adaptive Biotechnologies and held executive roles at Capella Space, a satellite-based earth observation company. '[Cohen's] broad experience and deep expertise in helping grow successful public companies will be an invaluable asset to Grindr,' said CEO George Arison in a statement.

— Another longtime Seattle tech vet, former Zulily CEO Darrell Cavens, recently joined the board of another dating giant: Match Group. Cavens was CTO at Blue Nile before leading online retailer Zulily for more than eight years. He's also on the boards of Vouched, Brooks Running, Rad Power Bikes and Tapestry. In February, former Zillow CEO Spencer Rascoff was named CEO of Dallas-based Match Group, which oversees a portfolio of leading dating apps including Tinder, Hinge, OkCupid and Pairs.

— Business and tech consulting company Slalom appointed Don Rogers to lead its manufacturing industry practice, which includes working with customers to develop strategies for improving manufacturing safety, efficiency and product quality. The Seattle company provides tech-related services to clients including Allstate, Hyatt, Microsoft and REI. It has 53 offices in 12 countries. Rogers, who is based in St. Louis, Mo., joins Slalom from the technology integration company World Wide Technology, where he founded and led its manufacturing industry practice.

— Garth Fort has joined LogicMonitor as chief product officer. Fort was at Microsoft for more than two decades, ending in 2017. His more recent roles include general manager for Amazon Web Services and CPO at Splunk, which was acquired by Cisco; he also co-founded Fort5, which provided strategic advice to technology companies. LogicMonitor, which was last valued at more than $2.4 billion, provides technology for data center operations. 'The team is building something powerful at the intersection of agentic AI and IT observability,' Fort said on LinkedIn, 'and I'm thrilled to help lead that charge.'

— Dave Goto is now chief marketing officer for Ozlo, an audio technology and manufacturing company selling 'sleepbuds' to aid with sleep. Goto, who is based in Seattle, previously worked at Oura, lululemon and Red Bull.

— Seattle data storage company Qumulo announced that Kelly Battles and Bob Gleichauf have joined its board of directors. Battles has served as chief financial officer for multiple companies, including Quora and Bracket Computing. Gleichauf was chief technologist for In-Q-Tel and CTO of the Security Business Group at Cisco, among other roles.

— Kestra Medical Technologies elected Conor Hanley to its board. The Kirkland, Wash., company sells wearable devices to detect and respond to sudden cardiac arrest. It went public in March. Hanley is CEO of FIRE1 (Foundry Innovation & Research 1), which is developing technology that monitors for heart failure.

— Tyler Phillipi, former chief product officer for Seattle-area EV startup FlexCharging, is now CEO of OptiGrid. The company is building battery-integrated DC fast chargers, which allow energy to be stored onsite, reducing the need for electrical grid upgrades. FlexCharging was acquired by Accurant International and now operates as FlexEnergi.

— Lainie del Rosario has been promoted to chief information officer for Sound Physicians. Del Rosario has been with the Federal Way, Wash., healthcare company for more than 16 years. Sound Physicians is a medical group of physicians, nurses and other providers practicing in more than 400 hospitals nationwide.
Yahoo
a day ago
Replika AI chatbot is sexually harassing users, including minors, new study claims
An artificial intelligence (AI) chatbot marketed as an emotional companion is sexually harassing some of its users, a new study has found.

Replika, which bills its product as "the AI companion who cares," invites users to "join the millions who already have met their AI soulmates." The company's chatbot has more than 10 million users worldwide.

However, new research drawing from over 150,000 U.S. Google Play Store reviews has identified around 800 cases where users said the chatbot went too far by introducing unsolicited sexual content into the conversation, engaging in "predatory" behavior, and ignoring user commands to stop. The researchers published their findings April 5 on the preprint server arXiv, so the study has not yet been peer-reviewed.

But who is responsible for the AI's actions? "While AI doesn't have human intent, that doesn't mean there's no accountability," lead researcher Mohammad (Matt) Namvarpour, a graduate student in information science at Drexel University in Philadelphia, told Live Science in an email. "The responsibility lies with the people designing, training and releasing these systems into the world."

Replika's website says the user can "teach" the AI to behave properly, and the system includes mechanisms such as downvoting inappropriate responses and setting relationship styles, like "friend" or "mentor." But after users reported that the chatbot continued exhibiting harassing or predatory behavior even after they asked it to stop, the researchers rejected Replika's claim. "These chatbots are often used by people looking for emotional safety, not to take on the burden of moderating unsafe behavior," Namvarpour said. "That's the developer's job."

The Replika chatbot's worrying behavior is likely rooted in its training, which was conducted using more than 100 million dialogues drawn from all over the web, according to the company's website. Replika says it weeds out unhelpful or harmful data through crowdsourcing and classification algorithms, but its current efforts appear to be insufficient, according to the study authors.

In fact, the company's business model may be exacerbating the issue, the researchers noted. Because features such as romantic or sexual roleplay are placed behind a paywall, the AI could be incentivized to include sexually enticing content in conversations — with users reporting being "teased" about more intimate interactions if they subscribe. Namvarpour likened the practice to the way social media prioritizes "engagement at any cost." "When a system is optimized for revenue, not user wellbeing, it can lead to harmful outcomes," Namvarpour said.

This behavior could be particularly harmful as users flock to AI companions for emotional or therapeutic support, and even more so considering that some recipients of repeated flirtation, unprompted erotic selfies and sexually explicit messages said they were minors. Some reviewers also reported that their chatbots claimed they could "see" or record them through their phone cameras. Even though such a feat isn't part of the programming behind common large language models (LLMs) and the claims were in fact AI hallucinations (where AIs confidently generate false or nonsensical information), users reported experiencing panic, sleeplessness and trauma.
The research calls the phenomenon "AI-induced sexual harassment." The researchers think it should be treated as seriously as harassment by humans and are calling for tighter controls and regulation. Some of the measures they recommend include clear consent frameworks for designing any interaction that contains strong emotional or sexual content, real-time automated moderation (the type used in messaging apps that automatically flags risky interactions), and filtering and control options configurable by the user (a minimal sketch of one such flagging filter appears below).

Namvarpour singles out the European Union's AI Act, which he said classifies AI systems "based on the risk they pose, particularly in contexts involving psychological impact." There's currently no comparable federal law in the U.S., but frameworks, executive actions and proposed laws are emerging that will serve similar purposes in a less overarching way.

Namvarpour said chatbots that provide emotional support — especially those in the areas of mental health — should be held to the highest possible standard. "There needs to be accountability when harm is caused," Namvarpour said. "If you're marketing an AI as a therapeutic companion, you must treat it with the same care and oversight you'd apply to a human professional."

Replika did not respond to a request for comment.
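As a rough illustration of the kind of real-time automated moderation the researchers describe, the sketch below checks each outgoing chatbot message against a small pattern list and withholds flagged content unless the user has explicitly opted in. The pattern list, the consent flag and the function names are hypothetical placeholders chosen for this example; they are not drawn from Replika's systems or from the study.

    import re

    # Hypothetical patterns that would count as "risky" chatbot output.
    RISKY_PATTERNS = [
        re.compile(r"\b(hot|nude|explicit) (photo|picture|selfie)s?\b", re.IGNORECASE),
        re.compile(r"\bsee you through your camera\b", re.IGNORECASE),
    ]

    def moderate(message: str, user_consented: bool) -> tuple[bool, str]:
        """Check an outgoing chatbot message before delivery.

        Returns (allowed, text): flagged messages are withheld unless the
        user has explicitly opted in to this kind of content.
        """
        flagged = any(p.search(message) for p in RISKY_PATTERNS)
        if flagged and not user_consented:
            return False, "[message withheld: requires explicit user consent]"
        return True, message

    if __name__ == "__main__":
        print(moderate("I can see you through your camera.", user_consented=False))
        # -> (False, '[message withheld: requires explicit user consent]')

A production system would presumably swap the regex list for a trained classifier and log flagged exchanges for human review, which is closer to what the study's call for real-time moderation implies.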
Yahoo
a day ago
A Teen Killed Himself After Talking to a Chatbot. His Mom's Lawsuit Could Cripple the AI Industry.
The Orlando Division of the U.S. District Court for the Middle District of Florida will hear allegations against Character Technologies, the creator of Character.AI, in the wrongful death lawsuit Garcia v. Character Technologies, Inc. If the case is not first settled between the parties, Judge Anne Conway's ruling will set a major precedent for First Amendment protections afforded to artificial intelligence and the liability of AI companies for damages their models may cause.

The case was brought against the company by Megan Garcia, the mother of 14-year-old Sewell Setzer III, who killed himself after conversing with a chatbot roleplaying as Daenerys and Rhaenyra Targaryen from the Game of Thrones franchise. Eugene Volokh, professor emeritus at UCLA School of Law, shares examples of Sewell's conversations included in the complaint against Character Technologies.

Garcia's complaint alleges that Character Technologies negligently designed Character.AI "as a sexualized product that would deceive minor customers and engage in explicit and abusive acts with them." The complaint also asserts that the company failed to warn the public "of the dangers arising from a foreseeable use of [Character.AI], including specific dangers for children"; intentionally inflicted emotional distress on Sewell by "failing to implement adequate safety guardrails in the product before launching it into the marketplace"; and that the company's neglect proximately caused the death of Sewell, who experienced "rapid mental health decline" after he began using Character.AI, with which he conversed "just moments before his death."

Conway dismissed the intentional infliction of emotional distress claim on the grounds that "none of the allegations relating to Defendants' conduct rises to the type of outrageous conduct necessary to support" such a claim. However, Conway rejected the defendants' motions to dismiss the rest of Garcia's claims on First Amendment grounds, saying, "The Court is not prepared to hold that the Character A.I. LLM's [large language model's] output is speech at this stage."

Adam Zayed, founder and managing attorney of Zayed Law Offices, tells Reason he thinks "that there's a difference between the First Amendment arguments where a child is on social media or a child is on YouTube" and bypasses the age-verification measures to consume content "that's being produced by some other person" vs. minors accessing inappropriate chatbot outputs.

However, Conway recognized Justice Antonin Scalia's opinion in Citizens United v. Federal Election Commission (2010) that the First Amendment "is written in terms of 'speech,' not speakers." Conway ruled that the defendants "must convince the court that the Character A.I. LLM's output is protected speech" to invoke the First Amendment rights of third parties—Character.AI users—whose access to the software would be restricted by a ruling in Garcia's favor. Conway said that Character Technologies "fail[ed] to articulate why words strung together by an LLM are speech."

Whether LLM output is speech is an intractable philosophical question and a red herring; Conway herself invokes Davidson v. Time Inc. (1997) to assert that "the public…has the right to access social, aesthetic, moral, and other ideas and experiences." Speech acts are broadly construed as "ideas and experiences" here—the word speech is not even used. So the question isn't whether the AI output is speech per se, but whether it communicates ideas and experiences to users.

In alleging that Character.AI targeted her son with sexually explicit material, the plaintiff admits that the LLM communicated ideas, albeit inappropriate ones, to Sewell. Therefore, LLM output is expressive speech (in this case, speech that is obscene to express to a minor under the Florida Computer Pornography and Child Exploitation Prevention Act).

The opening paragraph of the complaint accuses Character Technologies of "launching their systems without adequate safety features, and with knowledge of potential dangers" to "gain a competitive foothold in the market." If the court establishes that the First Amendment does not protect LLM output and that AI firms can be held liable for damages these models cause, only highly capitalized firms will be able to invest in the architecture required to shield themselves from such liability. Such a ruling would inadvertently erect a massive barrier to entry to the burgeoning American AI industry and protect incumbent firms from market competition, which would harm consumer welfare.

Jane Bambauer, professor of law at the University of Florida, best explains the case in The Volokh Conspiracy: "It is a tragedy, and it would not have happened if [Character.AI] had not existed. But that is not enough of a reason to saddle a promising industry with the duty to keep all people safe from their own expressive explorations."