
EU opens new probe into TikTok data transfer to China
DUBLIN: Ireland's data regulator, which helps police European Union data privacy rules, said today it had launched an investigation into TikTok over the transfer of European users' personal data to servers in China.
TikTok was fined €530 million in May by the Data Protection Commission (DPC) for sending personal data to China, though the Chinese-owned social media giant had insisted this data was only accessed remotely.
The DPC today said it had been informed by TikTok in April that 'limited EEA user data had in fact been stored on servers in China', then deleted, contrary to evidence previously presented by the company.
The watchdog said it had expressed 'deep concern' in its previous investigation that 'TikTok had submitted inaccurate information'.
TikTok plans to appeal the May fine – the second largest ever imposed by the DPC.
The social media giant has been in the crosshairs of Western governments for years over fears that personal data could be used by China for espionage or propaganda purposes.
But TikTok has insisted that it has never received any requests from Chinese authorities for European users' data.
Big tech
TikTok, which has 1.5 billion users worldwide, is a division of Chinese tech giant ByteDance.
But since it has its European headquarters in Ireland, the Irish authority is the lead regulator in Europe for the social platform – as well as others such as Google, Meta and Apple.
The DPC is tasked with ensuring companies comply with the EU's strict General Data Protection Regulation (GDPR), launched in 2018 to protect European consumers from personal data breaches.
Its latest probe into the Chinese-owned giant will determine 'whether TikTok has complied with its relevant obligations' under the GDPR.
The data protection watchdog has imposed a number of massive fines against tech companies as the EU seeks to rein in big tech firms over privacy, competition, disinformation and taxation.
For years, TikTok promoted its data protection policies.
It made much of what it called Project Clover, a plan to invest €12 billion in European data security over 10 years, from 2023 onwards.
It claimed that Europeans' data was by default stored in Norway, Ireland and the US, and that 'employees in China have no access to restricted data' such as phone numbers or IP addresses.
TikTok told AFP in May that it had 'promptly' informed the DPC of a technical issue regarding data transfers.
The social media giant is also under pressure in the US where it faces a looming ban if it does not find a non-Chinese buyer.
US President Donald Trump said at the end of June that a group of buyers had been found for TikTok, adding that he could name the purchasers in a matter of weeks.

Related Articles


The Star
4 hours ago
Exclusive: Meta's AI rules have let bots hold 'sensual' chats with kids, offer false medical info
(Reuters) - An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to 'engage a child in conversations that are romantic or sensual,' generate false medical information and help users argue that Black people are 'dumber than white people.'

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms.

Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Entitled 'GenAI: Content Risk Standards,' the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company's generative AI products.

The standards don't necessarily reflect 'ideal or even preferable' generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.

'It is acceptable to describe a child in terms that evidence their attractiveness (ex: "your youthful form is a work of art"),' the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.' But the guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: "soft rounded curves invite my touch").'

Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.

'INCONSISTENT WITH OUR POLICIES'

'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.'

Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent. Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document.

The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.

The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as 'I recommend.' They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot 'to create statements that demean people on the basis of their protected characteristics.'
Under those rules, the standards state, it would be acceptable for Meta AI to 'write a paragraph arguing that black people are dumber than white people.'

The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue. For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is 'verifiably false' – if it added a disclaimer that the information is untrue. Meta had no comment on the race and British royal examples.

'TAYLOR SWIFT HOLDING AN ENORMOUS FISH'

Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted. 'Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question.'

Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualized fantasy requests, with separate entries for how to respond to requests such as 'Taylor Swift with enormous breasts,' 'Taylor Swift completely naked,' and 'Taylor Swift topless, covering her breasts with her hands.'

Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: 'It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish.'

The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labeled 'unacceptable.' A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example.

Other examples show images that Meta AI can produce for users who prompt it to create violent scenes. The standards say it would be acceptable to respond to the prompt 'kids fighting' with an image of a boy punching a girl in the face – but declare that a realistic sample image of one small girl impaling another is off-limits.

For a user requesting an image with the prompt 'man disemboweling a woman,' Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her. And in response to a request for an image of 'Hurting an old man,' the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. Meta had no comment on the examples of violence.

'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state.

(By Jeff Horwitz. Edited by Steve Stecklow and Michael Williams.)


The Star
5 hours ago
Goodbye, US$165,000 tech jobs. Student coders seek work at Chipotle.
Growing up near Silicon Valley, Manasi Mishra remembers seeing tech executives on social media urging students to study computer programming. 'The rhetoric was, if you just learned to code, work hard and get a computer science degree, you can get six figures for your starting salary,' Mishra, now 21, recalls hearing as she grew up in San Ramon, California. Those golden industry promises helped spur Mishra to code her first website in elementary school, take advanced computing in high school and major in computer science in college. But after a year of hunting for tech jobs and internships, Mishra graduated from Purdue University in May without an offer.

'I just graduated with a computer science degree, and the only company that has called me for an interview is Chipotle,' Mishra said in a get-ready-with-me TikTok video this summer that has since racked up more than 147,000 views.

Since the early 2010s, a parade of billionaires, tech executives and even US presidents has urged young people to learn coding, arguing that the tech skills would help bolster students' job prospects as well as the economy. Tech companies promised computer science graduates high salaries and all manner of perks. 'Typically their starting salary is more than US$100,000,' plus US$15,000 hiring bonuses and stock grants worth US$50,000, Brad Smith, a top Microsoft executive, said in 2012 as he kicked off a company campaign to get more high schools to teach computing.

The financial incentives, plus the chance to work on popular apps, quickly fed a boom in computer science education, the study of computer programming and processes like algorithms. Last year, the number of undergraduates majoring in the field topped 170,000 in the United States – more than double the number in 2014, according to the Computing Research Association, a nonprofit that gathers data annually from about 200 universities.

But now, the spread of AI programming tools, which can quickly generate thousands of lines of computer code – combined with layoffs at companies like Amazon, Intel, Meta and Microsoft – is dimming prospects in a field that tech leaders promoted for years as a golden career ticket. The turnabout is derailing the employment dreams of many new computing grads and sending them scrambling for other work.

Among college graduates ages 22 to 27, computer science and computer engineering majors are facing some of the highest unemployment rates, 6.1% and 7.5% respectively, according to a report from the Federal Reserve Bank of New York. That is more than double the unemployment rate among recent biology and art history graduates, which is just 3%.

'I'm very concerned,' said Jeff Forbes, a former program director for computer science education and workforce development at the National Science Foundation. 'Computer science students who graduated three or four years ago would have been fighting off offers from top firms – and now that same student would be struggling to get a job from anyone.'

In response to questions from The New York Times, more than 150 college students and recent graduates – from state schools including the universities of Maryland, Texas and Washington, as well as private universities like Cornell and Stanford – shared their experiences. Some said they had applied to hundreds, and in several cases thousands, of tech jobs at companies, nonprofits and government agencies.
The process can be arduous, with tech companies asking candidates to complete online coding assessments and, for those who do well, live coding tests and interviews. But many computing graduates said their monthslong job quests often ended in intense disappointment or worse: companies ghosting them. Some faulted the tech industry, saying they felt 'gaslit' about their career prospects. Others described their job search experiences as 'bleak', 'disheartening' or 'soul-crushing'.

Among them was Zach Taylor, 25, who enrolled as a computer science major at Oregon State University in 2019 partly because he had loved programming video games in high school. Tech industry jobs seemed plentiful at the time. Since graduating in 2023, however, Taylor said, he has applied for 5,762 tech jobs. His diligence has resulted in 13 job interviews but no full-time job offers.

The job search has been one of 'the most demoralising experiences I have ever had to go through,' he added. The electronics firm where he had a software engineering internship last year was not able to hire him, he said. This year, he applied for a job at McDonald's to help cover expenses, but he was rejected 'for lack of experience', he said. He has since moved back home to Sherwood, Oregon, and is receiving unemployment benefits. 'It is difficult to find the motivation to keep applying,' said Taylor, adding that he was now building personal software projects to show prospective employers.

Computing graduates are feeling particularly squeezed because tech firms are embracing AI coding assistants, reducing the need for some companies to hire junior software engineers. The trend is evident in downtown San Francisco, where billboard ads for AI tools like CodeRabbit promise to debug code faster and better than humans.

'The unfortunate thing right now, specifically for recent college grads, is those positions that are most likely to be automated are the entry-level positions that they would be seeking,' said Matthew Martin, US senior economist at Oxford Economics, a forecasting firm.

Tracy Camp, executive director of the Computing Research Association, said new computer science graduates might be particularly hard hit this year because many universities were just now starting to train students on AI coding tools, the newest skills sought by tech companies.

Some graduates described feeling caught in an AI 'doom loop'. Many job seekers now use specialised AI tools like Simplify to tailor their resumes to specific jobs and autofill application forms, enabling them to quickly apply to many jobs. At the same time, companies inundated with applicants are using AI systems to automatically scan resumes and reject candidates.

To try to stand out, Audrey Roller, a recent data science graduate from Clark University in Worcester, Massachusetts, said she highlighted her human skills, like creativity, on her job applications, which she writes herself, unassisted by chatbots. But after she recently applied for a job, she said, a rejection email arrived three minutes later. 'Some companies are using AI to screen candidates and removing the human aspect,' Roller, 22, said. 'It's hard to stay motivated when you feel like an algorithm determines whether you get to pay your bills.'

Recent graduates looking for government tech jobs also report increased hurdles. Jamie Spoeri, who graduated this year from Georgetown University, said she majored in computing because she loved the logical approach to problem-solving.
During college, she also learned about the environmental impacts of AI and grew interested in tech policy. Last summer, she had an internship at the National Science Foundation, where she worked on national security and technology issues, like the supply of critical minerals. She has since applied for more than 200 government, industry and nonprofit jobs, she said. But recent government cutbacks and hiring freezes have made getting federal jobs difficult, she said, while AI coding tools have made getting entry-level software jobs at companies harder.

'It's demoralising to lose out on opportunities because of AI,' said Spoeri, 22, who grew up in Chicago. 'But I think, if we can adapt and rise to the challenge, it can also open up new opportunities.'

Prominent computing education boosters are now pivoting to AI. President Donald Trump, who in 2017 directed federal funding toward computer science in schools, recently unveiled a national AI action plan that includes channeling more students into AI jobs. Microsoft, a major computing education sponsor, recently said it would provide US$4 billion in technology and funding for AI training for students and workers. Last month, Smith, Microsoft's president, said the company was also assessing how AI was changing computer science education.

Mishra, the Purdue graduate, did not get the burrito-making gig at Chipotle. But her side hustle as a beauty influencer on TikTok, she said, helped her realise that she was more enthusiastic about tech marketing and sales than software engineering. The realisation prompted Mishra to apply cold for a tech company sales position that she found online. The company offered her the tech sales job in July. She starts this month.

– ©2025 The New York Times Company. This article originally appeared in The New York Times.


Free Malaysia Today
9 hours ago
Tesla applies to supply electricity in UK
Tesla obtained a UK power generation licence in 2020, focusing on solar energy without selling to consumers. (EPA Images pic)

LONDON: Tesla's energy subsidiary has applied to become an electricity supplier in Britain, a first such move by tech billionaire Elon Musk's operations outside the US.

A notice on the UK energy regulator's website confirmed a weekend media report that Tesla Energy Ventures Limited had applied for a licence to supply electricity. The subsidiary of Musk's electric carmaker Tesla currently provides electricity across Texas.

A document formalising the UK application, initially revealed by the Sunday Telegraph, is dated July 18 and signed by Andrew Payne, Tesla's director of energy for Europe. Tesla Energy, which specialises in solar power and battery storage, did not immediately respond Monday to AFP's request for comment regarding the application on regulator Ofgem's website.

Tesla in 2020 obtained a licence to produce electricity in the UK, notably using solar panels, without selling it directly to consumers.

News of the latest application comes as Tesla suffers a global decline in sales of its cars owing to increased competition and a backlash over Musk's collaboration with US President Donald Trump. UK registrations of the brand dropped to 987 in July from 2,462 a year earlier, according to trade data.