
Meta will only make limited changes to pay-or-consent model, EU says

Time of India | 6 hours ago

Meta Platforms may face daily fines if EU regulators decide that the changes it has proposed to its pay-or-consent model fail to comply with an antitrust order issued in April, the regulators said on Friday.
The warning from the European Commission, which acts as the EU competition enforcer, came two months after it slapped a 200-million-euro ($234 million) fine on the U.S. social media giant for breaching the Digital Markets Act (DMA), which aims to curb the power of Big Tech.
The move shows the Commission's continuing crackdown on Big Tech and its push to create a level playing field for smaller rivals, despite U.S. criticism that the bloc's rules mainly target American companies.
Daily fines for not complying with the DMA can be as much as 5% of a company's average daily worldwide turnover.
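
To put that ceiling in concrete terms, the cap works out to annual worldwide turnover divided by 365, multiplied by 0.05. The short sketch below illustrates the arithmetic; the turnover figure is a purely hypothetical placeholder, not Meta's reported revenue and not an official Commission calculation.

```python
# Illustrative sketch of the DMA periodic-penalty ceiling described above:
# up to 5% of a company's average daily worldwide turnover.
# The turnover figure is an assumed placeholder for illustration only.

HYPOTHETICAL_ANNUAL_TURNOVER_EUR = 150_000_000_000  # assumption, not Meta's actual figure
PENALTY_RATE = 0.05  # DMA cap: 5% of average daily worldwide turnover

average_daily_turnover = HYPOTHETICAL_ANNUAL_TURNOVER_EUR / 365
max_daily_penalty = average_daily_turnover * PENALTY_RATE

print(f"Average daily turnover: EUR {average_daily_turnover:,.0f}")
print(f"Maximum daily penalty:  EUR {max_daily_penalty:,.0f}")
```

Under that assumed turnover, the ceiling would be on the order of tens of millions of euros per day; the actual amount of any penalty would depend on Meta's real turnover and the Commission's decision.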
The EU executive said Meta's pay-or-consent model introduced in November 2023 breached the DMA in the period up to November 2024, when it tweaked it to use less personal data for targeted advertising. The Commission has been scrutinising the changes since then.
The model gives Facebook and Instagram users who consent to be tracked a free service that is funded by advertising revenues. Alternatively, they can pay for an ad-free service.
The EU competition watchdog said Meta will only make limited changes to its pay-or-consent model rolled out last November.
"The Commission cannot confirm at this stage if these are sufficient to comply with the main parameters of compliance outlined in its non-compliance Decision," a spokesperson said.
"With this in mind, we will consider the next steps, including recalling that continuous non-compliance could entail the application of periodic penalty payments running as of 27 June 2025, as indicated in the non-compliance decision."
Meta accused the Commission of discriminating against the company and of moving the goalposts during discussions over the last two months.
"A user choice between a subscription for no ads service or a free ad supported service remains a legitimate business model for every company in Europe - except Meta," a Meta spokesperson said.
"We are confident that the range of choices we offer people in the EU doesn't just comply with what the EU's rules require - it goes well beyond them."
The EU watchdog dismissed Meta's discrimination charges, saying the DMA applies equally to all large digital companies doing business in the EU regardless of where they are incorporated or who their controlling shareholders are.
"We have always enforced and will continue to enforce our laws fairly and without discrimination towards all companies operating in the EU, in full compliance with global rules," the Commission spokesperson said.


Related Articles


DeepSeek faces expulsion from app stores in Germany

Time of India | 2 hours ago

Highlights: Germany's data protection commissioner Meike Kamp has requested that Apple Inc. and Google LLC remove the Chinese artificial intelligence startup DeepSeek from their app stores due to concerns over illegal data transfers to China. DeepSeek has been criticized for failing to provide adequate evidence that the personal data of German users is protected in China at a level comparable to that within the European Union. The technology company has faced scrutiny in multiple countries, with Italy already blocking its app and the Netherlands banning its use on government devices due to data security concerns.

Germany's data protection commissioner has asked Apple and Google to remove Chinese AI startup DeepSeek from their app stores in the country due to concerns about data protection, following a similar crackdown elsewhere.

Commissioner Meike Kamp said in a statement on Friday that she had made the request because DeepSeek illegally transfers users' personal data to China. The two U.S. tech giants must now review the request promptly and decide whether to block the app in Germany, she added, though her office has not set a precise timeframe.

Google said it had received the notice and was reviewing it. DeepSeek did not respond to a request for comment. Apple was not immediately available for comment.

According to its own privacy policy, DeepSeek stores numerous pieces of personal data, such as requests to its AI programme or uploaded files, on computers in China.

"DeepSeek has not been able to provide my agency with convincing evidence that German users' data is protected in China to a level equivalent to that in the European Union," Kamp said. "Chinese authorities have far-reaching access rights to personal data within the sphere of influence of Chinese companies," she added.

The commissioner said she took the decision after asking DeepSeek in May to meet the requirements for non-EU data transfers or else voluntarily withdraw its app. DeepSeek did not comply with this request, she added.

DeepSeek shook the technology world in January with claims that it had developed an AI model to rival those from U.S. firms such as ChatGPT creator OpenAI at much lower cost. However, it has come under scrutiny in the United States and Europe for its data security policies.

Italy blocked it from app stores there earlier this year, citing a lack of information on its use of personal data, while the Netherlands has banned it on government devices. Belgium has recommended that government officials not use DeepSeek. "Further analyses are underway to evaluate the approach to be followed," a government spokesperson said. In Spain, the consumer rights group OCU asked the government's data protection agency in February to investigate threats likely posed by DeepSeek, though no ban has come into force.

U.S. lawmakers plan to introduce a bill that would ban U.S. executive agencies from using any AI models developed in China. Reuters exclusively reported this week that DeepSeek is aiding China's military and intelligence operations.

Facebook users beware, Meta AI can scan all your phone photos anytime if you are not careful

India Today | 6 hours ago

Meta has consistently found itself at the centre of privacy debates. There's little doubt that the company has been using our data, for instance our publicly posted photos across Facebook and Instagram, to train its AI models (more commonly known as Meta AI). But now, it seems Meta is taking things to another level. Recent findings suggest that it wants full access to your phone's camera roll, meaning even photos you haven't shared on Facebook (or Instagram).

As reported by TechCrunch, some Facebook users have recently come across a curious pop-up while attempting to upload a Story. The notification invites them to opt into a feature called 'cloud processing.' On the surface, it sounds fair and safe: Facebook says this setting will allow it to automatically scan your phone's camera roll and upload images to Meta's cloud 'on a regular basis.' In return, the company promises to offer 'creative ideas' such as photo collages, event recaps, AI-generated filters, and themed suggestions for birthdays, graduations, and other occasions.

Sounds cool? But wait for it. When you agree to its terms of use, you're also giving Meta a go-ahead to analyse the content of your unpublished and presumably private photos on an ongoing basis, as Meta AI looks at details such as facial features, objects in the frame, and even metadata like the date and location they were taken.

There is little doubt that the idea is to make AI more helpful for you, the user, since AI needs all the data one can possibly fathom to make sense of the real world and respond accordingly to the questions and prompts you are putting out. And Meta, on its part, says that this is an opt-in feature, which is to say that users can choose to disable it as and when they want. That's fair, but given that this is user data we're talking about, and given Facebook's history, some users (and privacy advocates) would be wary.

The tech giant had earlier admitted it had scraped all public content uploaded by adults on Facebook and Instagram since 2007 to help train its generative AI models. However, Meta hasn't clearly defined what 'public' means or what age qualifies someone as an 'adult' in its dataset from 2007. That haziness leaves a lot of room for different interpretations, and even more room for concern. Moreover, its updated AI terms, active since June 23, 2024, don't mention whether these cloud-processed, unpublished photos are exempt from being used as training data.

The Verge reached out to Meta, and the company said it "is not currently training its AI models on those photos", but it would not answer questions about whether it might do so in the future, or what rights it will hold over your camera roll images.

There is, thankfully, a way out. Facebook users can dive into their settings and disable this cloud processing feature. Once turned off, Meta promises it will begin deleting any unpublished images from the cloud within 30 days. Still, the very nature of this tool, pitched as a fun and helpful feature, raises questions about how users are nudged into handing over private data without fully realising the implications.

At a time when AI is reshaping how we interact with tech, companies like Meta are testing the limits of what data they can collect, analyse, and potentially monetise. This latest move blurs the lines between user assistance and data extraction. What used to be a conscious decision, posting a photo to share with the world, now risks being replaced with quiet uploads in the background and invisible AI eyes watching it all unfold.

We'll see how things pan out.
