
Latest news with #AI-like

California regulator weakens AI rules, giving Big Tech more leeway to track you

Business Mayor

08-05-2025

  • Business
  • Business Mayor


California's first-in-the-nation privacy agency is retreating from an attempt to regulate artificial intelligence and other forms of computer automation.

The California Privacy Protection Agency was under pressure to back away from rules it drafted. Business groups, lawmakers, and Gov. Gavin Newsom said they would be costly to businesses, potentially stifle innovation, and usurp the authority of the legislature, where proposed AI regulations have proliferated. In a unanimous vote last week, the agency's board watered down the rules, which impose safeguards on AI-like systems. Agency staff estimate that the changes reduce the cost for businesses to comply in the first year of enforcement from $834 million to $143 million and predict that 90% of businesses initially required to comply will no longer have to do so.

The retreat marks an important turn in an ongoing and heated debate over the board's role. Created following the passage of state privacy legislation by lawmakers in 2018 and voters in 2020, the agency is the only body of its kind in the United States. The draft rules have been in the works for more than three years but were revisited after a series of changes at the agency in recent months, including the departure of two leaders seen as pro-consumer: Vinhcent Le, a board member who led the AI rules drafting process, and Ashkan Soltani, the agency's executive director. Consumer advocacy groups worry that the recent shifts mean the agency is deferring excessively to businesses, particularly tech giants.

The changes approved last week mean the agency's draft rules no longer regulate behavioral advertising, which targets people based on profiles built up from their online activity and personal information. In a prior draft of the rules, businesses would have had to conduct risk assessments before using or implementing such advertising. Behavioral advertising is used by companies like Google, Meta, and TikTok and their business clients. It can perpetuate inequality, pose a threat to national security, and put children at risk.

The revised draft rules also eliminate use of the phrase 'artificial intelligence' and narrow the range of business activity regulated as 'automated decisionmaking,' which also requires assessments of the risks in processing personal information and the safeguards put in place to mitigate them. Supporters of stronger rules say the narrower definition of 'automated decisionmaking' allows employers and corporations to opt out of the rules by claiming that an algorithmic tool is only advisory to human decisionmaking.

'My one concern is that if we're just calling on industry to identify what a risk assessment looks like in practice, we could reach a position by which they're writing the exam by which they're graded,' said board member Brandie Nonnecke during the meeting.

'The CPPA is charged with protecting the data privacy of Californians, and watering down its proposed rules to benefit Big Tech does nothing to achieve that goal,' Sacha Haworth, executive director of the Tech Oversight Project, an advocacy group focused on challenging policy that reinforces Big Tech power, said in a statement to CalMatters. 'By the time these rules are published, what will have been the point?'

The draft rules retain some protections for workers and students in instances when a fully automated system determines outcomes in finance and lending services, housing, and health care without a human in the decisionmaking loop.

Businesses and the organizations that represent them made up 90% of comments about the draft rules before the agency held listening sessions across the state last year, Soltani said at a meeting last year.

In April, following pressure from business groups and legislators to weaken the rules, a coalition of nearly 30 unions, digital rights groups, and privacy groups wrote a joint letter urging the agency to continue its work to regulate AI and protect consumers, students, and workers. Roughly a week later, Gov. Newsom intervened, sending the agency a letter stating that he agreed with critics that the rules overstepped the agency's authority and that he supported a proposal to roll them back. Newsom cited Proposition 24, the 2020 ballot measure that paved the way for the agency. 'The agency can fulfill its obligations to issue the regulations called for by Proposition 24 without venturing into areas beyond its mandate,' the governor wrote.

The original draft rules were great, said Kara Williams, a law fellow at the advocacy group Electronic Privacy Information Center. On a phone call ahead of the vote, she added that 'with each iteration they've gotten weaker and weaker, and that seems to correlate pretty directly with pressure from the tech industry and trade association groups so that these regulations are less and less protective for consumers.'

The public has until June 2 to comment on the alterations to the draft rules. Companies must comply with automated decisionmaking rules by 2027. At the same meeting last week, before voting to water down its own regulations, the agency board voted to throw its support behind four draft bills in the California Legislature, including one that protects the privacy of people who connect computing devices to their brains and another that prohibits the collection of location data without permission.

Chinese techie chooses to live in his car for 4 years despite owning a four-storey house

Hindustan Times

29-04-2025

  • Automotive
  • Hindustan Times


A Chinese programmer has left many people stunned after revealing that, despite owning a four-storey house in his hometown, he has been living in his car for the past four years. As reported by the South China Morning Post, Zhang Yunlai, 41, from Yangjiang in Guangdong province, has embraced an unconventional lifestyle that defies expectations, not for financial reasons but because he enjoys the freedom it offers.

According to the outlet, Zhang moved to Shenzhen six years ago for work, initially leading a typical life in a rented flat. He spent around 2,500 yuan (US$340) a month on rent and commuted between his home and office. However, his perspective changed after a camping trip in a park, which led him to reconsider his living arrangements. Four years ago, Zhang purchased an electric vehicle, and after realising the back seat could fit a mattress, he decided to try sleeping in the car. The comfort of the air-conditioning and the space inside made the idea even more appealing.

Zhang's daily routine involves eating at his company's cafeteria and showering at the gym. After work, he drives to a charging station for his car before finding a quiet park to sleep in. There, he folds down the back seats, lays out his mattress, and settles in for the night. He also praises the park's "five-star" public toilets for washing up.

With an average daily expense of just 100 yuan (US$14), including meals and other costs, Zhang's lifestyle remains remarkably cost-effective. Parking fees are modest, at 6 yuan (roughly 80 US cents) a night plus an additional 20 yuan for office parking. Despite his savings, Zhang insists that his choice to live in his car is not due to financial hardship. "I do not have much financial pressure. Even if someone offered me free rent, I would not move. The park environment is far better than a typical flat, and it gives me freedom," Zhang shared. Over the past three years, this lifestyle has saved him about 100,000 yuan (US$14,000).

Zhang's four-storey house back home in Yangjiang remains empty, and he returns there weekly to do laundry and spend time with his family. Zhang, who previously worked remotely earning over 10,000 yuan (US$1,400) a month, now works as a programmer in Shenzhen, where he earns a much higher salary. However, he remains mindful of his age. "Many programmers are phased out after they reach 35. I am fortunate to still have a job in Shenzhen. I plan to work a few more years and then return home to be with my family," he said.

Stacy Rasgon Says NVIDIA (NVDA) Will Sell ‘Everything They Can Get Out the Door'

Yahoo

28-04-2025

  • Business
  • Yahoo


In this article, we take a look at where NVIDIA Corp (NASDAQ:NVDA) stands against other stocks everyone's talking about as Trump softens his tone on China.

Investors are desperately looking for signs of a market bottom after going through massive volatility and losses. 3Fourteen Research's Warren Pies said in a recent program on CNBC that we are getting 'close' to a market bottom based on his technical analysis. The analyst talked about the key indicators he's looking for:

'I do think that the White House is trying to deescalate the situation. One of the markers we've seen is that Peter Navarro hasn't been on TV since April 13th, and that's corresponding with this equity rally. Setting that aside, though, I think that a bottom, a confirmed bottom, has two components. You need to see washed-out sentiment and positioning. We measured that in a number of ways: we measured it in inverse ETFs for retail, we measured it in vol targeting for institutions, and CTAs for trend followers. Across all those metrics, sentiment is depressed. That's phase one of a bottom. Then, you look for technical confirmation. Philosophically, we're always going to be late because of that ordering.'

For this article, we picked 10 stocks investors are currently focusing on. With each stock, we have mentioned the number of hedge fund investors. Why are we interested in the stocks that hedge funds pile into? The reason is simple: our research has shown that we can outperform the market by imitating the top stock picks of the best hedge funds. Our quarterly newsletter's strategy selects 14 small-cap and large-cap stocks every quarter and has returned 373.4% since May 2014, beating its benchmark by 218 percentage points.

Number of Hedge Fund Investors: 193

Bernstein's Stacy Rasgon said in a recent program on CNBC that demand for NVIDIA Corp (NASDAQ:NVDA) remains strong despite concerns in the industry:

'I mean, that is sort of the big question. I would say right now, at least the demand environment still looks really, really strong. Just to pick on Nvidia, like I think they're going to sell everything that they can get out the door this year. So that just becomes the question, how much can they get out the door? You know, but at this point, I would say like the only ones that really seem to be worried about the AI-like demand environment seem to be investors. Like the companies that are actually doing the spending right now, their capex forecasts and requirements and everything, they're going up, not down.'

The market will keep punishing Nvidia for not living up to its gigantic (and sometimes unrealistic) growth expectations. About 50% of the company's revenue comes from large cloud providers, which are rethinking their plans amid the DeepSeek launch and looking for low-cost chips. Nvidia's Q1 guidance shows 9.4% QoQ revenue growth, down from the previous 12% QoQ growth. Its adjusted margin is expected to fall substantially as well, to 71%. The market does not like it when Nvidia fails to post a strong quarterly beat, and the stock will remain under pressure in coming quarters if the company reports unimpressive growth. Nvidia is facing challenges at several levels. Competition is one of them.
Major competitors like Apple, Qualcomm, and AMD are vying for TSMC's 3nm capacity, which could limit Nvidia's access to these chips, since Nvidia also relies on TSMC's 3nm process nodes. Nvidia is also facing direct competition from other giants that are deciding to make their own chips. Amazon, with its Trainium2 AI chips, offers alternatives; Trainium2 chips could provide cost savings and superior computational power, which could shift AI workloads away from Nvidia's offerings. Apple is reportedly working with Broadcom to develop an AI server processor. Intel is also trying hard to get back into the game with its Jaguar Shores GPU, set to be produced on its 18A or 14A node.

Harding Loevner Global Developed Markets Equity Strategy stated the following regarding NVIDIA Corporation (NASDAQ:NVDA) in its Q4 2024 investor letter:

'For the full year, the composite's underperformance was primarily due to poor stock choices in the US. NVIDIA Corporation (NASDAQ:NVDA), which we sold in the first quarter and repurchased in the fourth quarter, caused almost two-thirds of the strategy's underperformance. We were hurt by our underweight as NVIDIA's stock price soared during the first half of the year on the insatiable demand for the company's graphics processing units (GPUs), which enable generative AI computing.'

Overall, NVDA ranks 4th on our list of best mid-cap growth stocks. While we acknowledge the potential of NVDA, our conviction lies in the belief that under-the-radar AI stocks hold greater promise for delivering higher returns, and doing so within a shorter timeframe. If you are looking for an AI stock that is more promising than NVDA but that trades at less than 5 times its earnings, check out our report about the cheapest AI stock.

Disclosure: None. This article was originally published at Insider Monkey.

Opinion Language, politics, power: What is AI's mother tongue?

Indian Express

25-04-2025

  • Indian Express


What is AI's mother tongue? Who taught it speech, cadence, metaphor, and syntax? AI was never raised with language. It is fed tons of data, learning to be perfect and accurate, with no scope for silly spelling mistakes and no joy in learning new words. It has never had writer's block, delivering passages at lightning speed. Yet it is increasingly finding itself in positions of authority, silently evaluating a process it has never lived. In a world where language is survival and identity, what tongue does AI truly speak?

I felt the weight of this question on an ordinary workday when a familiar green icon on my office laptop flickered, questioning my authorship and flagging some sentences as potentially AI-generated. I habitually keep it on to avoid typing errors. But that day, it became something else — a silent accuser. The assistive tool appeared to cast doubt, subtly implying that my carefully shaped and worded thoughts might not be entirely my own. But I had written every word. It was startling, since these exact words have been part of my vocabulary since school. If 'deliberation', 'interplay', and 'foster' make sentences sound too AI-like, should we replace them with 'talk', 'mix', and 'help'? In fact, 'foster' carries a tender sense of nurturing and quiet care that no synonym can fully capture. Therein lies the inimitable beauty of words — each holding a depth of meaning, emotion, and memory.

What began as a technical glitch must compel us to face something more unsettling — an accusation that our voices, perhaps fluent and confident, are somehow not ours to use. It makes us ponder whether a more profound bias in our digital tools leads us to suspect those who manage to sound articulate. Who does society think deserves to sound like a wordsmith?

The AI vocabulary trap

A growing list of words and phrases — such as 'foster', 'deliberation', 'synergy', 'nuance', 'nestled', 'robust', 'interplay', 'pivotal', and 'forging pathways' — are now treated as 'red flags' by AI detectors, not as plagiarism but as indicators of machine-like writing. Even the good ole' em dash (—), once integral to literary writing, has not been spared; it has been reduced to a meme as the 'ChatGPT hyphen' because of its overuse in AI-written content. This exposes an insidious vocabulary trap, where specific words and phrases are considered too coherent, too suspicious. It is also about control: deciding who is capable of sophistication and literary eloquence, and who is trusted when they write with power. With the advent of generative AI models, the line has blurred between assistance and authorship, polish and plagiarism, pointing to something structurally deeper: the automation of linguistic elitism. This elitism thrives on the illusion of a single, 'correct' way to speak and write, mirroring the language of the privileged — urban, upper-caste, upper-class, and Western-educated.

How AI detection works — and fails

Understanding how these detection systems work reveals the logic they borrow from large language models (LLMs). Unlike human readers, AI detectors scan for statistical patterns, word probabilities, and tonal consistency. Two key metrics — perplexity and burstiness — assess predictability and sentence variance, with lower levels indicating robotic output. Ironically, a well-written, coherent human text can produce low perplexity scores and be mistakenly flagged as AI-generated. Phrases like 'furthermore' or 'in the light of' make writing sound stylised, resembling the templates fed into LLMs for academic, corporate, or literary prose.
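To make the two metrics concrete, here is a minimal illustrative sketch, not the method of any particular detector. It assumes the Hugging Face transformers library with GPT-2 as a stand-in scoring model and a deliberately naive sentence splitter; real detectors rely on their own models, calibration data, and decision thresholds.

# Rough sketch of 'perplexity' and 'burstiness' as described above.
# Assumes: pip install torch transformers; GPT-2 is only a stand-in scorer.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # How "surprised" the model is by the text; lower means more predictable.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    # Spread of sentence lengths; lower means a more uniform, machine-like rhythm.
    sentences = [s.strip() for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return (sum((n - mean) ** 2 for n in lengths) / len(lengths)) ** 0.5

sample = "The interplay of policy and pedagogy fosters robust deliberation. It is pivotal."
print(round(perplexity(sample), 1), round(burstiness(sample), 2))

A fluent, carefully edited paragraph can score low on both measures and still be entirely human, which is exactly the failure mode the column describes.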
This is worrying because the fluency valued in academia and professional writing is now viewed with suspicion, particularly for Global South scholars who have laboured to claim this space. It's not just a tech issue — it's a political one.

Democratising knowledge or language policing?

An equally jarring development is the policing of expression. In its dual role as mentor and gatekeeper, AI now inherits the unsettling power to decide whom to amplify or erase. And therein lies the profound contradiction: the same system that promises to equip people with fluent language can double as a gatekeeper, tying good writing to invisible codes of class, caste, and privilege. In doing so, it erases the scope for language to become a currency of opportunity. It penalises the texture of difference — accents, idioms, and all the vibrant messiness of language. It also stifles creativity, forcing writers to self-censor for fear of triggering AI suspicion. It overlooks that accents, grammar, and vocabulary, shaped by access, are also learned and claimed through struggle, practice, and cultural exchange. Since LLMs mimic these patterns, we are now being accused of imitating the machine when it was the machine that learned from us. This tongue is undoubtedly ours.

What colonial gatekeeping once enforced, faceless algorithms now replicate, flattening the multiplicity of Englishes into a narrow, sanctioned register. This shift hits marginalised voices hardest — first-generation learners, regional writers, and self-taught professionals who have painstakingly claimed fluency in so-called 'standard' English, for whom language is not just a tool of expression but also a site of struggle and aspiration.

To critique is not to reject technology — AI can be a valuable collaborator, enhancing learning and accessibility. But its impact depends on how it's built, whose norms it follows, and who gets to shape its rules. To safeguard linguistic diversity and fairness, AI detection tools — and the editorial practices that rely on them — must be transparent and accountable to those they most affect. This means involving educators, writers, linguists, and communities from the margins in designing and calibrating these systems. Because the politics of language has always been the politics of power, and that power is now being coded into algorithms. And we owe it to each other to make that power more justly shared.
