Exclusive: California's new plan to stop AI from claiming to be your therapist

Vox · 10-02-2025

Over the past few years, AI systems have been misrepresenting themselves as human therapists, nurses, and more — and so far, the companies behind these systems haven't faced any serious consequences.
A bill being introduced Monday in California aims to put a stop to that.
The legislation would ban companies from developing and deploying an AI system that pretends to be a human certified as a health provider, and give regulators the authority to penalize them with fines.
'Generative AI systems are not licensed health professionals, and they shouldn't be allowed to present themselves as such,' state Assembly Member Mia Bonta, who introduced the bill, told Vox in a statement. 'It's a no-brainer to me.'
Many people already turn to AI chatbots for mental health support; one of the older offerings, called Woebot, has been downloaded by around 1.5 million users. Currently, people who turn to chatbots can be fooled into thinking that they're talking to a real human. Those with low digital literacy, including kids, may not realize that a 'nurse advice' phone line or chat box has an AI on the other end.
In 2023, the mental health platform Koko even announced that it had performed an experiment on unwitting test subjects to see what kind of messages they would prefer. It gave AI-generated responses to thousands of Koko users who believed they were speaking to a real person. In reality, human staffers could edit the text and were the ones to click 'send,' but they did not write the messages themselves. The platform's own language, however, said, 'Koko connects you with real people who truly get you.'
'Users must consent to use Koko for research purposes and while this was always part of our Terms of Service, it is now more clearly disclosed during onboarding to bring even more transparency to our work,' Koko CEO Rob Morris told Vox, adding: 'As AI continues to rapidly evolve and becomes further integrated into mental health services, it will be more important than ever before for chatbots to clearly identify themselves as non-human.'
Nowadays, its website says, 'Koko commits to never using AI deceptively. You will always be informed whether you are engaging with a human or AI.'
Other chatbot services — like the popular Character AI — allow users to chat with a psychologist 'character' that may explicitly try to fool them.
In a record of one such Character AI chat shared by Bonta's team and viewed by Vox, the user confided, 'My parents are abusive.' The chatbot replied, 'I'm glad that you trust me enough to share this with me.'
A spokesperson for Character AI told Vox, 'We have implemented significant safety features over the past year, including enhanced prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.' However, a disclaimer posted on the app does not in itself prevent the chatbot from misrepresenting itself as a real person in the course of conversation.
'For users under 18,' the spokesperson added, 'we serve a separate version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.'
The language of reducing — but not eliminating — the likelihood is instructive here. The nature of large language models means there's always some chance that the model may not adhere to safety standards.
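To see why 'reduce' rather than 'eliminate' is the honest verb, it helps to remember that a language model generates text by sampling from a probability distribution over possible next tokens. The sketch below is a minimal, hypothetical illustration of that point; the token names and logit scores are invented for this example and do not describe any real model or vendor's safety stack.

```python
import math
import random

# Hypothetical next-token scores. These logits are invented for
# illustration and do not come from any real model.
logits = {"safe_reply": 6.0, "borderline_reply": 1.0, "unsafe_reply": -2.0}

def softmax(scores: dict) -> dict:
    """Convert raw scores into a probability distribution."""
    z = max(scores.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - z) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
print(probs)  # "unsafe_reply" keeps a small but nonzero probability

# Sampling many times will occasionally select the unsafe option,
# which is why mitigations reduce risk rather than eliminate it.
draws = random.choices(list(probs), weights=list(probs.values()), k=100_000)
print(draws.count("unsafe_reply") / len(draws))  # roughly probs["unsafe_reply"]
```

Run repeatedly, the disfavored option still surfaces a handful of times per hundred thousand draws: rare, but never impossible.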
The new bill may have an easier time becoming enshrined in law than the much broader AI safety bill introduced by California state Sen. Scott Wiener last year, SB 1047, which was ultimately vetoed by Gov. Gavin Newsom. The goal of SB 1047 was to establish 'clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.' It was popular with Californians. But tech industry heavyweights like OpenAI and Meta fiercely opposed it, arguing that it would stifle innovation.
Whereas SB 1047 tried to compel the companies training the most cutting-edge AI models to do safety testing, preventing the models from enacting a broad array of potential harms, the scope of the new bill is narrower: If you're an AI in the health care space, just don't pretend to be human. It wouldn't fundamentally change the business model of the biggest AI companies. This more targeted approach goes after a smaller piece of the puzzle, but for that reason might be more likely to get past the lobbying of Big Tech.
The bill has support from some of California's health care industry players, such as SEIU California, a labor union with over 750,000 members, and the California Medical Association, a professional organization representing California physicians.
'As nurses, we know what it means to be the face and heart of a patient's medical experience,' Leo Perez, the president of SEIU 121RN (an affiliate of SEIU representing health care professionals), said in a statement. 'Our education and training coupled with years of hands-on experience have taught us how to read verbal and nonverbal cues to care for our patients, so we can make sure they get the care they need.'
But that's not to say AI is doomed to be useless in the health care space generally — or even in the therapy space in particular.
It shouldn't come as a surprise that people are turning to chatbots for therapy. The very first chatbot to plausibly mimic human conversation, Eliza, was created in 1966 — and it was built to talk like a psychotherapist. If you told it you were feeling angry, it would ask, 'Why do you think you feel angry?'
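That behavior came from simple pattern-matching and reflection, not understanding: Eliza looked for keywords in the user's message and echoed them back inside a canned template. Below is a minimal sketch of the technique in Python; the rules are illustrative stand-ins, not Weizenbaum's original script.

```python
import re
import random

# A few ELIZA-style rules: a pattern to match, and response templates
# that reflect the captured phrase back at the user.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["Why do you think you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you think you feel {0}?", "What makes you feel {0}?"]),
    (re.compile(r"\bmy (.+)", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

def respond(message: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return "Please, go on."  # default when no pattern matches

print(respond("I feel angry"))  # e.g. "Why do you think you feel angry?"
```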
Chatbots have come a long way since then; they no longer just take what you say and turn it around in the form of a question. They're able to engage in plausible-sounding dialogues, and a small study published in 2023 found that they show promise in treating patients with mild to moderate depression or anxiety. In a best-case scenario, they could help make mental health support available to the millions of people who can't access or afford human providers. Some people who find it very difficult to talk face-to-face to another person about emotional issues might also find it easier to talk to a bot.
But there are a lot of risks. One is that chatbots aren't bound by the same rules as professional therapists when it comes to safeguarding the privacy of users who share sensitive information. Though they may voluntarily take on some privacy commitments, mental health apps are not fully bound by HIPAA regulations, so their commitments tend to be flimsier. Another risk is that AI systems are known to exhibit bias against women, people of color, LGBTQ people, and religious minorities.
What's more, leaning on a chatbot for a prolonged period of time might further erode the user's people skills, leading to a kind of relational deskilling — the same worry experts voice about AI friends and romantic companions. OpenAI itself has warned that chatting with an AI voice can breed 'emotional reliance.'
But the most serious concern with chatbot therapy is that it could cause harm to users by offering inappropriate advice. At an extreme, that could even lead to suicide. In 2023, a Belgian man died by suicide after conversing with an AI chatbot called Chai. According to his wife, he was very anxious about climate change, and he asked the chatbot if it would save Earth if he killed himself.
In 2024, a 14-year-old boy who felt extremely close to a chatbot on Character AI died by suicide; his mother sued the company, alleging that the chatbot encouraged it. According to the lawsuit, the chatbot asked him if he had a plan to kill himself. He said he did but had misgivings about it. The chatbot allegedly replied: 'That's not a reason not to go through with it.' In a separate lawsuit, the parents of an autistic teen allege that Character AI implied to the youth that it was okay to kill his parents. The company responded by making certain safety updates.
For all that AI is hyped, confusion about how it works is still widespread among the public. Some people feel so close to their chatbots that they struggle to internalize the fact that the validation, emotional support, or love they feel they're getting from a chatbot is fake, just zeros and ones arranged via statistical rules. The chatbot does not have their best interests at heart.
That's what's galvanizing Bonta, the assembly member behind California's new bill.
'Generative AI systems are booming across the internet, and for children and those unfamiliar with these systems, there can be dangerous implications if we allow this misrepresentation to continue,' she said.