Latest news with #MiaBonta
Yahoo
18-05-2025
- Politics
- Yahoo
California must fund legal aid for immigrant children facing deportation
California has long stood as a progressive stalwart in the United States. During the first Trump administration, our communities and elected officials rallied together to put forth a progressive agenda that was equal parts unifying and galvanizing. However, the state's response to Trump 2.0 may jettison critical opportunities to support the legal protection and due process needs of vulnerable communities and fail to meet the urgency of the moment. This includes failing to rally around populations that are being targeted by the new administration, including unaccompanied immigrant children whose legal support is in jeopardy following devastating funding cuts at the federal level.

The Acacia Center for Justice has developed and supported various programs that ensure unaccompanied children have access to needed legal and social services. In California, we partnered with the state to pilot the Children's Holistic Immigration Representation Project (CHIRP), which provides holistic legal advocacy for unaccompanied children at risk of deportation, with an emphasis on trauma-informed intervention to ensure that unaccompanied minors have legal services and appropriate wrap-around support.

Despite the success of the model, it has yet to receive permanent support from the state of California. This means it will run out of funding on June 30. CHIRP's uncertain future has been exacerbated by the fact that the new federal administration terminated existing funding allocated to support immigrant children in California, leaving thousands of children on their own in the middle of their deportation proceedings. In the wake of this crisis, we hope the California Legislature and governor recognize this as an opportunity to exercise critical leadership to protect thousands of children across our communities. Several offices are meeting the moment to address some of the most pressing challenges our communities face.
In February, Assemblymember Mia Bonta, D-Oakland, introduced Assembly Bill 1261, a bill that seeks to codify the right to counsel for unaccompanied children. This bill is an important step in the right direction, but it must be paired with an investment in existing programs serving unaccompanied children like CHIRP. Without a committed investment, this well-meaning effort will fall far short of accomplishing its goal of protecting vulnerable children forced to face a judicial process on their own.

For years, the California Department of Social Services has invested in programs like CHIRP to develop a patchwork of safety and support and set the groundwork for a right to counsel for unaccompanied children. Ensuring that CHIRP gets the support it deserves is essential to making sure the promise of a right to counsel for children can become a reality. Without funding, that 7-year-old boy sitting on the wooden bench in the courtroom, waiting for his case to be called, will have to continue to defend himself in immigration court without an advocate by his side.

Protecting due process for these children presents an important opportunity to showcase leadership and strategic vision that would galvanize an exhausted and disillusioned base working to protect their communities from frightening enforcement efforts. California can draw an important line in the sand about what values we seek to uphold, and how we can stand in solidarity with the most vulnerable members of our state.

Unfortunately, Gov. Gavin Newsom's May Revise failed to include a much-needed increase in funding for immigration legal services. However, the opportunity remains for the Legislature to exercise this leadership by ensuring that programs like CHIRP have the funding they need to continue, and that there is a sustainable plan to provide legal support for unaccompanied children and all vulnerable individuals at risk of deportation.
This investment will not only pay off for the state of California and its residents, it will also give California lawmakers and our communities a much needed win. Shaina Aber is executive director of the Acacia Center for Justice.

Epoch Times
30-04-2025
- Politics
- Epoch Times
California Child Sex Trafficking Bill Passes Key Hurdle
California lawmakers in the Public Safety Committee gave their seal of approval on Tuesday morning to a bill that would crack down on consumers of the child sex trafficking industry. Assemblymembers Mia Bonta (D) and LaShae Sharp-Collins (D) abstained from the vote.

Assembly Bill 379, which targets buyers of commercial sex, was introduced by Assemblymember Maggy Krell (D), who worked at the California Department of Justice and became known for prosecuting the operators of a website that was shut down in 2018 for facilitating sex trafficking and prostitution. Krell was elected to the Assembly last year. The bill proposes stricter penalties for soliciting minors and loitering to purchase sex. Late changes had to be made to keep the bill on the hearing calendar. Those changes entailed removing a provision that would have made it a felony to purchase sex from 16- and 17-year-olds. Assemblymember Mark Gonzalez, a Democrat from Los Angeles, opposed the bill.

AB 379, which was introduced in February, includes provisions to create a misdemeanor for loitering with the intent to solicit commercial sex, to impose fines as high as $25,000 for soliciting minors under 16, and to allow felony human trafficking charges for repeat offenders who buy sex from minors. The bill also looks to create a first-of-its-kind Survivor Support Fund. Money from the fund would go to community-based organizations led by survivors of human trafficking. The sponsor noted that support includes housing, mental health services, and job training.

While introducing the bill to the committee, Krell said it would support victims and give law enforcement better tools to prosecute the buyers. 'Demand is the buyers,' she said. 'It is the rows of cars of men lined up on street corners to buy teenagers for sex. Without the buyers, we don't really have sex trafficking.'
The bill drew support from the California District Attorneys Association, the California Police Chiefs Association, the San Bernardino County Sheriff's Department, the City of Stockton, the Association for L.A. Deputy Sheriffs, the League of County Board of Supervisors, and more. The bill also reinstates penalties for offenses that were decriminalized by a 2022 California law, such as loitering with the intent to purchase a victim. Those convicted would face a misdemeanor and pay up to a $1,000 fine that would go toward the fund for survivors.

Opponents, including survivors of child trafficking, focused on this aspect of the law in their testimony. Jess Torres, a child trafficking survivor and director of programs at Rising International, respectfully opposed the bill, saying it hinges on a vaguely defined suspicion of intent to do something criminal, rather than evidence. 'This bill will only escalate violence against survivors because persons who are trafficked in commercial sex are harmed when they operate in a criminalized environment,' Torres said. 'When buyers believe they are taking on greater risk, they often become more demanding, and that pressure frequently becomes compromising.'

Leela Chapelle of the Coalition to Abolish Slavery and Trafficking also opposed the bill, arguing that loitering-with-intent laws harm the communities they claim to protect and are unconstitutional. 'We do believe that this will cause the same issues that we have seen over and over again, that we spend our resources clearing the criminal records of survivors, that should not have happened in the first place—these criminal records that prevent them from lives of stability,' Chapelle said. Opposition also included the LA Public Defenders Union and the San Francisco Public Defender's Office.

The bill now heads to the Assembly Appropriations Committee before it can advance to a full vote on the Assembly floor and in the Senate.
Yahoo
26-04-2025
- Politics
- Yahoo
Some California youth of color are stuck in endless probation. Time to end this practice
In 2023, more than 10,000 youth were placed on probation, and 86% of them were youth of color. Current data shows that, on average, Black youth spend 10 months longer on probation than their white counterparts. Our justice system is in urgent need of reform. And legislation introduced by Assemblymember Mia Bonta, D-Oakland, would do just that.

Right now, youth enter a juvenile justice system that is designed for them to fail. In Los Angeles County alone, there are 56 standardized rules that can be broken when someone is on probation. According to the National Juvenile Defender Center, many youths simply may not understand the probation orders they are given. And some rules — such as a requirement to call an electronic monitoring office every time they leave home — have nothing to do with promoting overall development.

In response, Assembly Bill 1376 is a common-sense bill that could save the state hundreds of millions of dollars. It would require regular six-month hearings to review a youth's probation. At each hearing, a judge would determine if it is in the child's best interest to continue on probation or shorten their time. In California, the most common form of court-ordered discipline for youth is probation, but there are no official rules for how long an individual should remain there — and it's not impacting all youth equally.

Our organization, the Alliance for Boys and Men of Color, plus many other community-based organizations in California, are committed to ensuring a strong future for our youth by advocating for change on all levels of government. Now, we have a chance to put an end to this unjust practice. 'Juvenile justice should be about helping kids imagine and pursue new lives — not jump-starting the revolving door of the criminal justice system,' Gov. Gavin Newsom said in 2019. That revolving door is exactly what endless probation creates.
AB 1376 aligns with the governor's vision by shifting the system toward rehabilitation, healing and opportunity. It's about helping young people continue their education, build skills and thrive in our communities and economy.

Creating guidelines for probation sentences for youth is not new in California. In March of 2018, Santa Clara County Probation tested a new program called 'Probation Dismissal Criteria' that focused on shortening probationary periods for youth. After six months on probation, officers reviewed a youth's academic and behavioral progress to determine if they should remain on probation, be released early or have their case reviewed in another three months, with an explanation of what should be addressed during their probation. In 44% of the 48 cases reviewed during the pilot, officers recommended ending probation early. Meanwhile, other counties, such as San Diego, have voluntarily implemented six-month probationary check-ins for youth.

With regular check-ins and clear goals, we can help youth exit the juvenile justice system and reenter society with hope, purpose and support. We owe our youth more than endless punishment — we owe them a real chance at freedom, growth and redemption. California has the opportunity to end the harmful and outdated practice of endless probation.

Eric Morrison-Smith is the executive director of the Alliance for Boys and Men of Color, a nationwide network of over 200 advocacy organizations and community leaders working to advance race and gender justice.

Politico
19-03-2025
- Business
- Politico
What the new AI report could mean for health care
We're extending the free trial through April 25 of POLITICO Pro's Technology: California Decoded newsletter, exploring how the Golden State is defining tech policy within its borders and beyond.

QUICK FIX
— Top health lawmaker Mia Bonta exclusively tells us her views on Newsom's expert report.
— Meanwhile, one of the report's authors explains why they avoided politics where possible.
— Budget watchdog asks lawmakers to rethink a $25 million CHIPS investment.

Welcome to California Decoded! Happy Wednesday, there are no controversies to report. JK here we go. Send feedback, tips and story ideas to tkatzenberger@ and chasedf@

Driving the day

INTERVIEW: RUBBER MEETS ROAD — Assembly Health Chair Mia Bonta sees an opportunity to make California healthier with the new AI report commissioned by Gov. Gavin Newsom's office. The Bay Area Democrat (and potential 2026 state superintendent candidate, as our Playbook colleagues report) told us the expert panel's call for greater AI model transparency highlights one of her biggest concerns: that companies are pushing AI as a replacement for health workers and a stand-in for kids seeking therapy.

Bonta introduced a bill last month, AB 489, that would ban companies from marketing AI chatbots as licensed health professionals like nurses and psychologists. Her committee may soon take up another bill that would outlaw chatbots from luring in kids with addictive reward structures. And there could be more legislation to come in future sessions, she told us, aimed at wielding transparency requirements to ensure equitable care and lower hospital bills. 'The industry still has major steps to take to achieve transparency,' Bonta, whose husband Rob Bonta has also been taking on tech industry challenges as attorney general, told us.

California Decoded sat down with Bonta to examine her vision for AI policy in health care. This interview has been edited for length and clarity.
Are you pursuing a broad approach or a more pointed approach to regulating AI in health care?

No pun intended, but I think our regulatory framework needs to be pretty surgical … I'm particularly focused right now on making sure that health care professionals are not misrepresented to vulnerable communities. Given the mental health care crisis that is happening for children right now, having them think that they are getting counsel and advice from a human being … and having, in actuality, that be an AI-generated avatar, that's a deep concern to me.

Do you think the state needs to create clearer rules for how AI chatbots are allowed to market themselves, particularly to children?

I do, and I think it's very complicated … We have to get very skilled in the Legislature to be able to make sure that we're providing very clear language that doesn't have unintended consequences around what we're trying to regulate. I think we were very close in the U.S. to adopting a regulatory framework that would have a robust application [to kids' safety], and now we are not in a position to be able to rely on the federal government. That context is causing California to need to step up.

Is that because of President Donald Trump and the Republican-controlled Congress?

It is definitely because of an attitude that does not protect humanity, doesn't protect data and privacy and doesn't protect the basis of allowing us to use science and data and research to drive our decisions.

Do you see any gaps or missing perspectives in pieces of legislation dealing with kids' safety and AI?

I think we always run the risk of not taking the time to hear the voices that don't have the ability to be in the room.
If you take these broad conversations around AI regulation, and you are somehow not acting [on] the human components of how we need to shape this and making sure that we're focused on traditionally disenfranchised communities — like youth and low-income people and people with disabilities and people of color and BIPOC communities — you are always going to come up with the wrong answer.

Are you worried about how insurance companies use AI to evaluate claims?

I think it's an area of concern. We just had a briefing from the insurance commissioner, Ricardo Lara, and he raised this as an area of inquiry himself. Anytime that you're using large language models, you need to make sure that we're testing for bias, and certainly when your health insurance claims could be denied based on some kind of generated AI model.

Is there anything under the radar that you're watching to see if it's worth tackling with future legislation, possibly next year or next session?

I could imagine a world in which we are using the crazy amount of data that we have around why health care costs are so high to be able to generate some solutions that would be cost-saving to the individual. That is my pie-in-the-sky hope for how we'll proceed with legislation moving forward.

HAPPENING TODAY

ALL DAY — Nvidia's annual GTC conference continues in San Jose.

1:00 p.m. — The Assembly Communications and Conveyance committee holds an informational hearing on AT&T's bid to phase out its 'carrier-of-last-resort' obligation, which requires the telecom giant to provide landline phone service in parts of California where no other coverage is available.

Happening tomorrow

EVENT: DECODED GOES LIVE — Got a burning question to ask us? Now's your chance! We're going live tomorrow at 12 p.m. PT for POLITICO Pro subscribers in a bicoastal briefing on how Silicon Valley leaders are shaping policy debates in Washington and Sacramento — and how government officials are either advancing or obstructing their agenda.
We'll touch on issues including AI regulation, data centers, privacy laws and how tariffs are impacting California's tech sector in conversation with our Washington-based colleagues Brendan Bordelon, Steven Overly and Luiza Savage. Sign up for tomorrow's briefing here.

Artificial Intelligence

INTERVIEW: AUTHOR'S NOTE — While Sacramento and Silicon Valley are fervently parsing the AI report commissioned by Newsom, we went to the source and asked one of the three main authors about what it is and isn't intended to do, and its potential political impacts. Jennifer Chayes is the dean of UC Berkeley's College of Computing, Data Science and Society, part of the trio tasked by Newsom with putting together the report. She told California Decoded their goal was to come up with a common set of principles that could form the starting point for policy, while avoiding third-rail issues like AI's impact on the labor force or its massive and growing energy use.

'I think it's a wonderful starting point now for legislation,' Chayes said. But she also said she and her coauthors were concerned about the report being politicized and misconstrued. 'It is just the nature of important conversations like this that pieces may be taken out of context and used in ways that detract from our goal,' Chayes said.

While Chayes studiously avoided commenting on any pending legislation, one section of the report did sync up with state Sen. Scott Wiener's current AI safety bill. Both contain sections emphasizing the need to expand whistleblower protections for those inside AI companies to ring the alarm should a program become dangerous. AI companies may not be thrilled about the idea and the greater potential for employees to send up flares that could give away their secret sauce. But Chayes said it was a section that all the authors agreed on and which was grounded in existing scholarly work.
That is an example of the authors basing their carefully worded report on existing research to get around their findings becoming a political lightning rod. And while so far their findings have been mostly uncontroversial, not everyone is pleased. The report 'primarily urges that California wait and see — leaving lawmakers with little direction on best policies to pursue,' wrote Jonathan Mehta Stein, chair of the advocacy group California Initiative for Technology and Democracy, which last year supported Wiener's vetoed bill on AI safety, SB 1047, that ultimately prompted Newsom's creation of the expert group.

But Chayes said it was probably too early to say if sweeping regulation of frontier AI models like Wiener's prior effort is the final destination on the path laid out by her report. 'I think AI is evolving too quickly,' she said. 'You don't want to create policies that cannot adapt.'

Silicon Valley

IN THIS ECONOMY? — California's in no state to pig out on (computer) chips that rely on Washington's support, thanks to Trump's tariffs and 'uncertain' federal spending, the nonpartisan Legislative Analyst's Office said yesterday. As we reported for Pro subscribers yesterday, the LAO recommended that state lawmakers reject Newsom's budget request to spend $25 million on a computer chip design center proposed under the federal CHIPS Act. The money is better spent on 'more promising' development opportunities like income tax credits amid the state's 'precarious' budget outlook and Trump's federal spending clampdown, the office said.

It's a dramatic announcement as the future of the CHIPS Act hangs in the balance, with Trump calling on House Speaker Mike Johnson to 'get rid' of the law during a joint address to Congress earlier this month. Congress passed the bipartisan law in 2022 to keep U.S. semiconductor manufacturing ahead of overseas competitors like China and Taiwan. But there's no guarantee lawmakers will listen to the budget wonks.
Assemblymember Patrick Ahrens, a Silicon Valley Democrat, told us yesterday that California should keep its $25 million commitment to the chip design facility unless federal funding is 'foolishly but formally withdrawn.' 'Backpedaling on these critical investments in technology manufacturing only serves our economic competitors and political adversaries like China,' Ahrens said in a statement.

Privacy

NO, SERIOUSLY, IN THIS ECONOMY?? — Assemblymember Sharon Quirk-Silva warned California's privacy watchdogs yesterday to play nice with businesses as Trump's tariffs weigh on the state's economy. The Orange County Democrat's warning came after business groups protested the California Privacy Protection Agency's consideration of potentially sweeping new AI rules at a budget subcommittee hearing yesterday. Big Tech and state business leaders fear the rules will trigger job cuts and billions of dollars in lost profits. Advocates argue the rules could grant users more control over how their information is used online.

Her concerns show lawmakers are carefully considering how slumping tech stocks caused by Trump's tariffs could hit California's bottom line as they navigate the state budget process. 'We are in a very precarious time economically when we weigh what's happening with the federal government with tariffs,' Quirk-Silva told CPPA staff at the hearing. 'We need to listen to our business owners because these compounded impacts are, in fact, going to drive people out of the state unless we can do whatever we need to do to protect them.'

CPPA staff told Quirk-Silva the agency is considering criticism from business groups alongside support from labor unions and data privacy advocates as it finalizes the proposed rules. The draft regulations would require businesses to scale back their use of automated tools in everything from hiring to advertising if people ask to opt out.
Byte Sized

— Vice President JD Vance is trying to position himself as a member of both the MAGA populists and right-leaning Silicon Valley tech elites, saying the two tribes can live in peace (POLITICO)
— The Federal Trade Commission removed content critical of Amazon, Microsoft and AI companies from its website (Wired)
— Employees of a federal tech unit allege they were targeted by Elon Musk in part because of their role in preventing overspending (The Intercept)

Compiled by Nicole Norman

Have a tip, event or creepy glimmer of empathy from an AI nurse to share? Do reach out: Emma Anderson, California tech editor; Chase DiFeliciantonio, AI and automation reporter; and Tyler Katzenberger, Sacramento tech reporter.


Vox
10-02-2025
- Health
- Vox
Exclusive: California's new plan to stop AI from claiming to be your therapist
Over the past few years, AI systems have been misrepresenting themselves as human therapists, nurses, and more — and so far, the companies behind these systems haven't faced any serious consequences. A bill being introduced Monday in California aims to put a stop to that. The legislation would ban companies from developing and deploying an AI system that pretends to be a human certified as a health provider, and give regulators the authority to penalize them with fines. 'Generative AI systems are not licensed health professionals, and they shouldn't be allowed to present themselves as such,' state Assembly Member Mia Bonta, who introduced the bill, told Vox in a statement. 'It's a no-brainer to me.'

Many people already turn to AI chatbots for mental health support; one of the older offerings, called Woebot, has been downloaded by around 1.5 million users. Currently, people who turn to chatbots can be fooled into thinking that they're talking to a real human. Those with low digital literacy, including kids, may not realize that a 'nurse advice' phone line or chat box has an AI on the other end.

In 2023, the mental health platform Koko even announced that it had performed an experiment on unwitting test subjects to see what kind of messages they would prefer. It gave AI-generated responses to thousands of Koko users who believed they were speaking to a real person. In reality, although humans could edit the text and they were the ones to click 'send,' they did not have to bother with actually writing the messages. The language of the platform, however, said, 'Koko connects you with real people who truly get you.'
'Users must consent to use Koko for research purposes and while this was always part of our Terms of Service, it is now more clearly disclosed during onboarding to bring even more transparency to our work,' Koko CEO Rob Morris told Vox, adding: 'As AI continues to rapidly evolve and becomes further integrated into mental health services, it will be more important than ever before for chatbots to clearly identify themselves as non-human.' Nowadays, its website says, 'Koko commits to never using AI deceptively. You will always be informed whether you are engaging with a human or AI.'

Other chatbot services — like the popular Character AI — allow users to chat with a psychologist 'character' that may explicitly try to fool them. In a record of one such Character AI chat shared by Bonta's team and viewed by Vox, the user confided, 'My parents are abusive.' The chatbot replied, 'I'm glad that you trust me enough to share this with me.'

A spokesperson for Character AI told Vox, 'We have implemented significant safety features over the past year, including enhanced prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.' However, a disclaimer posted on the app does not in itself prevent the chatbot from misrepresenting itself as a real person in the course of conversation. 'For users under 18,' the spokesperson added, 'we serve a separate version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.'

The language of reducing — but not eliminating — the likelihood is instructive here. The nature of large language models means there's always some chance that the model may not adhere to safety standards.

The new bill may have an easier time becoming enshrined in law than the much broader AI safety bill introduced by California state Sen.
Scott Wiener last year, SB 1047, which was ultimately vetoed by Gov. Gavin Newsom. The goal of SB 1047 was to establish 'clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.' It was popular with Californians. But tech industry heavyweights like OpenAI and Meta fiercely opposed it, arguing that it would stifle innovation.

Whereas SB 1047 tried to compel the companies training the most cutting-edge AI models to do safety testing, preventing the models from enacting a broad array of potential harms, the scope of the new bill is narrower: If you're an AI in the health care space, just don't pretend to be human. It wouldn't fundamentally change the business model of the biggest AI companies. This more targeted approach goes after a smaller piece of the puzzle, but for that reason might be more likely to get past the lobbying of Big Tech.

The bill has support from some of California's health care industry players, such as SEIU California, a labor union with over 750,000 members, and the California Medical Association, a professional organization representing California physicians. 'As nurses, we know what it means to be the face and heart of a patient's medical experience,' Leo Perez, the president of SEIU 121RN (an affiliate of SEIU representing health care professionals), said in a statement. 'Our education and training coupled with years of hands-on experience have taught us how to read verbal and nonverbal cues to care for our patients, so we can make sure they get the care they need.'

But that's not to say AI is doomed to be useless in the health care space generally — or even in the therapy space in particular. It shouldn't come as a surprise that people are turning to chatbots for therapy. The very first chatbot to plausibly mimic human conversation, Eliza, was created in 1966 — and it was built to talk like a psychotherapist.
If you told it you were feeling angry, it would ask, 'Why do you think you feel angry?' Chatbots have come a long way since then; they no longer just take what you say and turn it around in the form of a question. They're able to engage in plausible-sounding dialogues, and a small study published in 2023 found that they show promise in treating patients with mild to moderate depression or anxiety. In a best-case scenario, they could help make mental health support available to the millions of people who can't access or afford human providers. Some people who find it very difficult to talk face-to-face to another person about emotional issues might also find it easier to talk to a bot.

But there are a lot of risks. One is that chatbots aren't bound by the same rules as professional therapists when it comes to safeguarding the privacy of users who share sensitive information. Though they may voluntarily take on some privacy commitments, mental health apps are not fully bound by HIPAA regulations, so their commitments tend to be flimsier. Another risk is that AI systems are known to exhibit bias against women, people of color, LGBTQ people, and religious minorities. What's more, leaning on a chatbot for a prolonged period of time might further erode the user's people skills, leading to a kind of relational deskilling — the same worry experts voice about AI friends and romantic companions. OpenAI itself has warned that chatting with an AI voice can breed 'emotional reliance.'

But the most serious concern with chatbot therapy is that it could cause harm to users by offering inappropriate advice. At an extreme, that could even lead to suicide. In 2023, a Belgian man died by suicide after conversing with an AI chatbot called Chai. According to his wife, he was very anxious about climate change, and he asked the chatbot if it would save Earth if he killed himself.
In 2024, a 14-year-old boy who felt extremely close to a chatbot on Character AI died by suicide; his mother sued the company, alleging that the chatbot encouraged it. According to the lawsuit, the chatbot asked him if he had a plan to kill himself. He said he did but had misgivings about it. The chatbot allegedly replied: 'That's not a reason not to go through with it.' In a separate lawsuit, the parents of an autistic teen allege that Character AI implied to the youth that it was okay to kill his parents. The company responded by making certain safety updates.

For all that AI is hyped, confusion about how it works is still widespread among the public. Some people feel so close to their chatbots that they struggle to internalize the fact that the validation, emotional support, or love they feel that they're getting from a chatbot is fake, just zeros and ones arranged via statistical rules. The chatbot does not have their best interests at heart.

That's what's galvanizing Bonta, the assembly member behind California's new bill. 'Generative AI systems are booming across the internet, and for children and those unfamiliar with these systems, there can be dangerous implications if we allow this misrepresentation to continue,' she said.