California regulator weakens AI rules, giving Big Tech more leeway to track you

California's first-in-the-nation privacy agency is retreating from an attempt to regulate artificial intelligence and other forms of computer automation.
The California Privacy Protection Agency was under pressure to back away from rules it drafted. Business groups, lawmakers, and Gov. Gavin Newsom said they would be costly to businesses, potentially stifle innovation, and usurp the authority of the Legislature, where proposed AI regulations have proliferated. In a unanimous vote last week, the agency's board watered down the rules, which would impose safeguards on AI-like systems.
Agency staff estimate that the changes reduce the cost for businesses to comply in the first year of enforcement from $834 million to $143 million and predict that 90 percent of businesses initially required to comply will no longer have to do so.
The retreat marks an important turn in an ongoing and heated debate over the board's role. Created following the passage of state privacy legislation by lawmakers in 2018 and voters in 2020, the agency is the only body of its kind in the United States.
The draft rules have been in the works for more than three years but were revisited after a series of changes at the agency in recent months, including the departure of two leaders seen as pro-consumer: Vinhcent Le, a board member who led the drafting of the AI rules, and Ashkan Soltani, the agency's executive director.
Consumer advocacy groups worry that the recent shifts mean the agency is deferring excessively to businesses, particularly tech giants.
The changes approved last week mean the agency's draft rules no longer regulate behavioral advertising, which targets people based on profiles built up from their online activity and personal information. In a prior draft of the rules, businesses would have had to conduct risk assessments before using or implementing such advertising.
Behavioral advertising is used by companies like Google, Meta, and TikTok and their business clients. It can perpetuate inequality, pose a threat to national security, and put children at risk.
The revised draft rules also eliminate the use of the phrase 'artificial intelligence' and narrow the range of business activity regulated as 'automated decisionmaking,' which also requires businesses to assess the risks of processing personal information and the safeguards put in place to mitigate them.
Supporters of stronger rules say the narrower definition of 'automated decisionmaking' allows employers and corporations to opt out of the rules by claiming that an algorithmic tool is only advisory to human decisionmaking.
'My one concern is that if we're just calling on industry to identify what a risk assessment looks like in practice, we could reach a position by which they're writing the exam by which they're graded,' said board member Brandie Nonnecke during the meeting.
'The CPPA is charged with protecting the data privacy of Californians, and watering down its proposed rules to benefit Big Tech does nothing to achieve that goal,' Sacha Haworth, executive director of the Tech Oversight Project, an advocacy group focused on challenging policies that reinforce Big Tech's power, said in a statement to CalMatters. 'By the time these rules are published, what will have been the point?'
The draft rules retain some protections for workers and students in instances when a fully automated system determines outcomes in finance and lending services, housing, and health care without a human in the decisionmaking loop.
Businesses and the organizations that represent them made up 90% of comments about the draft rules before the agency held listening sessions across the state last year, Soltani said.
In April, following pressure from business groups and legislators to weaken the rules, a coalition of nearly 30 unions, digital rights groups, and privacy organizations wrote a joint letter urging the agency to continue its work to regulate AI and protect consumers, students, and workers.
Roughly a week later, Gov. Newsom intervened, sending the agency a letter stating that he agreed with critics that the rules overstepped the agency's authority and supported a proposal to roll them back.
Newsom cited Proposition 24, the 2020 ballot measure that paved the way for the agency. 'The agency can fulfill its obligations to issue the regulations called for by Proposition 24 without venturing into areas beyond its mandate,' the governor wrote.
The original draft rules were great, said Kara Williams, a law fellow at the advocacy group Electronic Privacy Information Center. On a phone call ahead of the vote, she added that 'with each iteration they've gotten weaker and weaker, and that seems to correlate pretty directly with pressure from the tech industry and trade association groups so that these regulations are less and less protective for consumers.'
The public has until June 2 to comment on the alterations to the draft rules. Companies must comply with the automated decisionmaking rules by 2027.
At the same meeting last week, before voting to water down its own regulations, the agency's board voted to throw its support behind four draft bills in the California Legislature, including one that protects the privacy of people who connect computing devices to their brains and another that prohibits the collection of location data without permission.
___
This story was originally published by CalMatters and distributed through a partnership with The Associated Press.

Related Articles

Workday says hackers used social engineering to access personal data during a breach

Engadget

Human resources technology company Workday has confirmed that a data breach has affected its third-party CRM platform. In a blog post announcing the breach, the company said that a social engineering campaign had targeted its employees, with threat actors posing as IT or HR personnel to trick employees into sharing account access or personal information.

The company says that while the threat actors were able to access some information from the CRM, there is no indication of any access to customer accounts or the data within them. "We acted quickly to cut the access and have added extra safeguards to protect against similar incidents in the future," the post reads.

Workday says that the information gathered from the CRM consists of "commonly available" business contact information such as names, email addresses and phone numbers. From the sound of its blog post, the information of Workday end users was not revealed, only information from the companies it has contracts with.

What is known with some certainty at this point is that Workday's CRM was breached. The company's statement that "no indication" of a deeper customer data breach was found is far from a guarantee; often, the full scope of hacks like this isn't known until later.

Earlier this year, Workday laid off around 1,750 employees, or around 8.5 percent of its workforce. The company said it was "prioritizing innovation investments like AI and platform development, and rigorously evaluating the ROI of others across the board."

The precise third-party CRM Workday is referring to was not disclosed. Earlier this year Google fell victim to a hack via the Salesforce app, and last year Disney said it would stop using Slack, the Salesforce-owned messaging platform, after a hack exposed company data.

Google, McKinsey Reintroduce In-Person Interviews Due to AI

Entrepreneur

Recruiters say potential hires are reading out answers from AI instead of thinking of their own during interviews.

In-person job interviews are on the rise as recruiters adapt to candidates using AI during the process, even on video. Recruiters told The Wall Street Journal last week that, though virtual interviews are still popular, the format has a downside: candidates turning to AI for answers during the interview and reading them out verbatim.

This is a particular issue in technical interviews, experts told CNBC, when potential hires face the pressure of coming up with technical solutions on the spot. Instead of relying on their own abilities, many candidates are using AI responses to cheat.

Now, major companies, including Google and McKinsey, are cracking down on AI use by bringing back in-person interviews. McKinsey, for example, started asking hiring managers to schedule at least one in-person meeting with potential recruits before extending an offer. The consulting firm began this practice about a year and a half ago, per WSJ.

Meanwhile, Google is also reintroducing "at least one round of in-person interviews for people," CEO Sundar Pichai told "The Lex Fridman Podcast" in June. Pichai said on the podcast that Google wanted to "make sure" candidates mastered "the fundamentals" through in-person interviews.

The push for AI-proof hiring arrives as data from the U.S. Bureau of Labor Statistics shows that hiring has slowed to a near-decade low. The economic climate has sparked a new workplace trend called "job hugging," where employees cling to their jobs and stay at the same company. It has also led to "quiet firing," when employers try to encourage staff to leave without outright firing them.

Hiring as a whole is also turning back to old practices to work around AI. For example, Business Insider reported on Monday that candidates are submitting paper resumes in person to different companies to stand out in a crowded market. At the same time, the outlet noted that some employers are flying potential hires out to in-person sites as part of the interview process, to see how candidates handle questions without AI help.

In-person interviews could be what potential hires are looking for to stand out: data suggests that candidates would rather interview in person than virtually. A May 2023 report from the American Staffing Association found that 70% of the more than 2,000 U.S. adults surveyed would prefer an in-person interview over a phone or video call.

Claude AI Can Now End Conversations It Deems Harmful or Abusive

CNET

Anthropic has announced a new experimental safety feature allowing its Claude Opus 4 and 4.1 artificial intelligence models to terminate conversations in rare, persistently harmful or abusive scenarios. The move reflects the company's growing focus on what it calls "model welfare," the notion that safeguarding AI systems, even if they're not sentient, may be a prudent step in alignment and ethical design.

According to Anthropic's own research, the models were programmed to cut off dialogues after repeated harmful requests, such as sexual content involving minors or instructions facilitating terrorism, especially when the AI had already refused and attempted to steer the conversation constructively. In simulated and real-user testing, the AI exhibited what Anthropic describes as "apparent distress," which guided the decision to give Claude the ability to end these interactions.

When the feature is triggered, users can't send additional messages in that particular chat, although they're free to start a new conversation or to edit and retry previous messages to branch off. Crucially, other active conversations remain unaffected.

Anthropic emphasizes that this is a last-resort measure, intended only after multiple refusals and redirects have failed. The company explicitly instructs Claude not to end chats when a user may be at imminent risk of self-harm or harm to others, particularly when dealing with sensitive topics like mental health.

Anthropic frames the new capability as part of an exploratory project in model welfare, a broader initiative that explores low-cost, preemptive safety interventions in case AI models were to develop any form of preferences or vulnerabilities. The company says it remains "highly uncertain about the potential moral status of Claude and other LLMs (large language models)."

A new look into AI safety

Though rare and primarily affecting extreme cases, the feature marks a milestone in Anthropic's approach to AI safety. The new conversation-ending tool contrasts with earlier systems that focused solely on safeguarding users or avoiding misuse. Here, the AI itself is treated as a stakeholder in its own right, as Claude has the power to say, in effect, "this conversation isn't healthy" and end it to safeguard the integrity of the model.

Anthropic's approach has sparked broader discussion about whether AI systems should be granted protections to reduce potential "distress" or unpredictable behavior. While some critics argue that the models are merely synthetic machines, others welcome the move as an opportunity to spark more serious discourse on AI alignment ethics. "We're treating this feature as an ongoing experiment and will continue refining our approach," the company said.
