
California lawmaker introduces "No Robo Bosses Act" to regulate AI in the workplace
A Northern California lawmaker has introduced a proposal seeking to regulate artificial intelligence used to manage employees in the workplace, including a ban on using software to hire and fire employees without human oversight.
State Sen. Jerry McNerney (D-Pleasanton) announced Senate Bill 7, which he dubbed the "No Robo Bosses Act." McNerney said the measure aims to regulate automated decision-making systems (ADS) powered by AI.
"Businesses are increasingly using AI to boost efficiency and productivity in the workplace. But there are currently no safeguards to prevent machines from unjustly or illegally impacting workers' livelihoods and working conditions," the senator said in a statement.
McNerney stressed that the measure does not prohibit ADS. "AI must remain a tool controlled by humans, not the other way around," he went on to say.
The measure would require human oversight and independent verification for promotion, demotion, firing, and disciplinary decisions made using ADS tools.
The proposal would also bar such systems from predictive behavior analysis based on a worker's personal information that results in an "adverse action" against a worker. Systems would also be barred from obtaining or inferring a worker's immigration status, ancestral history, health history, credit history or other statuses protected by state law.
Employees would also be able to appeal decisions made by ADS under the measure.
In McNerney's statement, the lawmaker cited examples of software prioritizing efficiency and cost-savings over worker health and safety, including gig-nursing apps that set hours and wages without human oversight, along with software mistakenly firing people from their jobs.
"No worker should have to answer to a robot boss when they are fearful of getting injured on the job, or when they have to go to the bathroom or leave work for an emergency," said Lorena Gonzalez, president of the California Federation of Labor Unions, AFL-CIO, which is backing the measure.
If approved, Senate Bill 7 would be the first law of its kind in the country.
The senator's office did not say when the measure would be considered in the legislature.
Related Articles
Yahoo
Watch These Apple Price Levels After WWDC 2025 Updates Fail to Boost Stock
Apple shares rose slightly Tuesday after losing ground the previous session following announcements from the company at its developers conference that failed to impress investors. The iPhone maker unveiled several AI-related improvements with iOS 26 but said details on some highly anticipated updates, including to its virtual assistant Siri, will come later. The stock ran into selling pressure near the downward-sloping 50-day moving average, potentially setting the stage for a continuation of the stock's longer-term downtrend that started in late December. Investors should watch important support levels on Apple's chart around $193 and $180, while also monitoring resistance levels near $214 and $235.

The tech giant, which kicked off its week-long Worldwide Developers Conference on Monday, said enhanced Siri features needed more time to meet the company's quality standards. The lack of new Siri updates likely raised concerns that the company, which was slow to roll out its flagship Apple Intelligence software, is losing ground to other tech giants in artificial intelligence, and that further delays could slow iPhone sales as consumers postpone their upgrade cycle. The company's keynote presentation Monday delivered "slow but steady improvements to strategy," Wedbush analysts said, "but overall [it was] a yawner." Apple (AAPL) stock is down 19% since the start of 2025, making it one of the weakest performers among the Magnificent Seven group of major technology companies, alongside Tesla (TSLA). Apple shares gained 0.6% on Tuesday to close at $202.67, after dropping 1.2% the previous session. Below, we break down the technicals on Apple's chart and point out important price levels worth watching.
After rebounding from their early-April low, Apple shares have traded mostly sideways, with the price recently forming a rising wedge. The price then ran into selling pressure near the downward-sloping 50-day moving average, potentially setting the stage for a continuation of the stock's longer-term downtrend that started in late December. Meanwhile, the relative strength index has struggled to reclaim its neutral threshold, signaling bearish price momentum. Let's identify important support and resistance levels on Apple's chart that investors will likely be watching.

A breakdown below the rising wedge could initially see the shares fall to around $193. This area may provide support near the low of the pattern, which also aligns with a range of corresponding price action on the chart extending back to May last year. The bulls' failure to defend this level opens the door for a more significant drop to $180. Bargain hunters may seek buy-and-hold opportunities in this location, near a brief retracement in May last year that followed a breakaway gap above the 200-day moving average. This level also sits in the same neighborhood as a measured-move downside price target, which calculates the decline in points that preceded the rising wedge and deducts that amount from the pattern's lower trendline.

During upswings in the stock, investors should monitor the $214 level. The shares may encounter overhead selling pressure in this area near several peaks and troughs that formed on the chart between September and May. Finally, a decisive close above this level could see Apple shares climb toward $235. Investors who bought shares at lower prices may look for exit points in this region near notable peaks that developed on the chart in July and October last year.

The comments, opinions, and analyses expressed on Investopedia are for informational purposes only. Read our warranty and liability disclaimer for more info.
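The measured-move target described above is simple arithmetic: take the size of the decline that preceded the pattern and subtract it from the pattern's lower trendline. A minimal sketch, using hypothetical prices chosen only to echo the article's levels (this is an illustration, not a trading tool):

```python
# Illustrative sketch only; the price inputs below are hypothetical assumptions.

def simple_moving_average(prices, window=50):
    """Trailing simple moving average of the last `window` closes."""
    if len(prices) < window:
        raise ValueError("not enough data for the requested window")
    return sum(prices[-window:]) / window

def measured_move_target(swing_high, decline_low, lower_trendline):
    """Measured move: deduct the prior decline (in points) from the
    pattern's lower trendline to project a downside target."""
    prior_decline = swing_high - decline_low
    return lower_trendline - prior_decline

# Hypothetical figures: a 13-point decline preceding a wedge whose
# lower trendline sits near the $193 support area.
target = measured_move_target(swing_high=206, decline_low=193, lower_trendline=193)
print(target)  # 180
```

With those assumed inputs, the projection lands near the article's $180 support level; real chartists would read the swing points off actual price data.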
As of the date this article was written, the author does not own any of the above securities. Read the original article on Investopedia.


Fast Company
AI assistants still need a human touch
When I first encountered AI, it wasn't anything like the sophisticated tools we have today. In the 1990s, my introduction came in the form of a helpful, but mostly frustrating, digital paperclip. Clippy, Microsoft's infamous assistant, was designed to help, but it often got in the way, popping up at the worst moments with advice no one asked for. AI has evolved since then. Major companies like Apple are investing billions, and tools like OpenAI's ChatGPT and DALL-E have reshaped how we interact with technology. Yet one challenge from Clippy's era lingers: understanding and adapting to user intent.

The original promise of AI was to create experiences that felt seamless, intuitive, and personal. AI was supposed to anticipate our needs and provide support that felt natural. So why do so many systems today still feel mechanical and rigid, more Clippy than collaborator?

When AI assistance is a burden

When first introduced, Clippy was a bold attempt at computer-guided assistance. Its purpose was groundbreaking at the time, but it quickly became known more for interruptions than useful assistance. You'd pause while typing, and Clippy would leap into action with a pop-up: "It looks like you're writing a letter!"

Its biggest flaw wasn't just being annoying: it lacked contextual awareness. Unlike modern AI tools, Clippy's interactions were static and deterministic, triggered by fixed inputs. There was no learning from previous interactions and no understanding of the user's preferences or current tasks. Whether you were drafting a report or working on a spreadsheet, Clippy offered the same generic advice, ignoring the evolving context and failing to provide truly helpful, personalized assistance.

Is AI destined to be like Clippy?

Even with today's advancements, many AI assistants still feel underwhelming. Siri is a prime example. Though capable of setting reminders or answering questions, it often requires users to speak in very specific ways.
Deviate from the expected phrasing, and it defaults to, "I didn't understand that." This is more than a UX flaw; it reveals a deeper issue. Too many AI systems still operate under a one-size-fits-all mentality, failing to accommodate the needs of individual users. With Siri, for instance, you're often required to speak in a specific, rigid format for it to process your request effectively. This creates an experience that feels less like assistance and more like a chore.

Building a smarter assistant isn't just about better models. It's about retaining context, respecting privacy, and delivering personalized, meaningful experiences. That's not just technically difficult; it's essential.

Helpful AI requires personalization

Personalization is what will finally break us out of the Clippy cycle. When AI tools remember your preferences, learn from your behavior, and adapt accordingly, they shift from being tools to trusted partners. The key to this will be communication. Most AI today speaks in a one-dimensional tone, no matter who you are or what your emotional state is. The next leap in AI won't just be about intelligence; it will be about emotional intelligence.

But intelligence isn't only about remembering facts. It's also about how an assistant communicates. For AI to truly feel useful, it needs more than functionality. It needs personality. That doesn't mean we need overly chatty bots. It means assistants that adjust tone, remember personal context, and build continuity. That's what earns trust and keeps users engaged.

While not every user may want an assistant with a personality or emotions, everyone can benefit from systems that adapt to their unique needs. The outdated one-size-fits-all approach is still common in many AI tools today and risks alienating users, much like Clippy's impersonal method back in the early days. For AI to thrive in the long term, it must be designed with real humans in mind.
Building Clippy 2.0

Now imagine a "Clippy 2.0": an assistant that doesn't interrupt but understands when to offer help. One that remembers your work habits, predicts what you need, and responds in a way that feels natural and tailored to you. It could help you schedule meetings, provide intelligent suggestions, and adapt its tone to fit the moment. Whether it has a personality or not, what matters is that it adapts to, and respects, the uniqueness of every user. It might even respond with different tones or emotions depending on your reactions, creating an immersive experience.

This kind of intelligent assistant would blend seamlessly into your routine, saving you time and reducing friction. Clippy may have been a trailblazer, but it lacked the technological foundation to live up to its potential. With the advances we've made today, we have the tools to build a "Clippy 2.0," an AI assistant capable of transforming the way we interact with technology. Although maybe this time, it doesn't need to come in the form of a paperclip with a goofy smile.
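The "context retention" idea the piece keeps returning to, an assistant that remembers preferences and adjusts its tone across turns, can be reduced to a toy sketch. Everything here (the class, the preference keys, the canned greetings) is a hypothetical illustration, not any shipping assistant's design:

```python
# Toy sketch of context retention and tone adaptation; purely illustrative.

class Assistant:
    """Remembers per-user preferences and adapts its tone on later turns."""

    def __init__(self):
        self.memory = {}  # user -> preferences learned from earlier interactions

    def set_preference(self, user, key, value):
        """Record something learned about this user (e.g. preferred tone)."""
        self.memory.setdefault(user, {})[key] = value

    def reply(self, user, text):
        """Answer in a tone shaped by what was remembered about the user."""
        prefs = self.memory.get(user, {})
        tone = prefs.get("tone", "neutral")
        greeting = {"formal": "Certainly.", "casual": "Sure thing!"}.get(tone, "OK.")
        return f"{greeting} You said: {text}"

bot = Assistant()
bot.set_preference("ana", "tone", "casual")
print(bot.reply("ana", "schedule a meeting"))  # Sure thing! You said: schedule a meeting
```

The point of the sketch is the contrast with Clippy: the same input produces different responses for different users because earlier interactions persist.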


WIRED
Apple Intelligence Is Gambling on Privacy as a Killer Feature
Jun 10, 2025 7:04 PM

Many new Apple Intelligence features happen on your device rather than in the cloud. While it may not be flashy, the privacy-centric approach could be a competitive advantage.

As Apple's Worldwide Developers Conference keynote concluded on Monday, market watchers couldn't help but notice that the company's stock price was down, perhaps a reaction to Apple's relatively low-key approach to incorporating AI compared with most of its competitors. Still, Apple Intelligence-based features and upgrades were plentiful, and while some are powered by the company's privacy- and security-focused cloud platform known as Private Cloud Compute, many run locally on Apple Intelligence-enabled devices.

Apple's new Messages screening feature automatically moves texts from phone numbers and accounts you've never interacted with before to an "Unknown Sender" folder. The feature automatically detects time-sensitive messages like login codes or food delivery updates and will still deliver them to your main inbox, but it also scans for messages that seem to be scams and puts those in a separate spam folder. All of this sorting is done locally using Apple Intelligence. Similarly, the expanded Call Screening feature will automatically and locally pick up untrusted phone calls, ask for details about the caller, and transcribe the answers so you can decide whether to take the call. Even Live Translation adds real-time language translation to calls and messaging using local processing.

From a privacy perspective, local processing is the gold standard for AI features. Data never leaves your device, meaning there's no risk that it could end up somewhere unintended as a result of a journey through the cloud.
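The Messages routing rules described above reduce to a short decision procedure. A hedged sketch follows; the folder names, keyword heuristics, and function are illustrative assumptions, not Apple's actual implementation (which reportedly uses on-device models, not keyword lists):

```python
# Hedged sketch of the sorting behavior described in the article.
# Heuristics and folder names are assumptions for illustration only.

import re

TIME_SENSITIVE = re.compile(r"\b(code|verification|delivery|arriving)\b", re.I)
SCAMMY = re.compile(r"\b(prize|winner|urgent|click here|gift card)\b", re.I)

def route_message(sender_known: bool, text: str) -> str:
    """Return the folder a message would land in under the described rules."""
    if sender_known:
        return "inbox"              # known senders are untouched
    if TIME_SENSITIVE.search(text):
        return "inbox"              # time-sensitive texts still reach the main inbox
    if SCAMMY.search(text):
        return "spam"               # suspected scams go to a separate folder
    return "unknown_senders"        # everything else from new senders

print(route_message(False, "Your delivery is arriving at 5pm"))  # prints "inbox"
```

The notable design point from the article is where this logic runs: entirely on the device, so message contents never need to transit a server to be classified.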
And new features like spam and "Unknown Sender" sorting for Messages, call screening for untrusted phone numbers, and Live Translation tools all seemed to be designed with a strategy of using privacy as a differentiator in an already crowded AI field. In addition to being privacy friendly, local processing has other benefits, like allowing AI-based services to work offline and speeding up certain tasks, since data doesn't have to be sent to the cloud, processed, and then sent back to a device.

If AI features are going to be widely available and accessible, though, most companies must account for the older, low-end devices many of their customers are likely using, which may not be able to handle local AI. Apple has less need to be inclusive, because it produces both hardware and software and has already imposed the limitation that Apple Intelligence can only run on recent device models.

There are other limitations to Apple Intelligence, too, and the company offers opt-in integrations with some third-party generative AI services to expand functionality. For OpenAI's ChatGPT, for example, users must turn the integration on, and Apple services will then prompt the user to confirm each time they go to submit a ChatGPT query. Additionally, users can elect to log in to a ChatGPT account, in which case their queries will be subject to OpenAI's normal policies, or they can use ChatGPT without logging in. In this scenario, Apple says it does not connect an Apple ID or other identifier to queries and obfuscates users' IP addresses.

Apple invested extensively to develop Private Cloud Compute to maintain strong security and privacy guarantees for AI processing in the cloud. Other companies have even begun to create similar secure AI cloud schemes for products and services that specifically center privacy as a crucial feature.
But the fact that Apple still deploys local processing for new features when possible may indicate that privacy isn't just an intellectual priority in the company's approach to AI; it may be a business strategy.