
Eminem sues Meta over unauthorized use of music in Reels
LOS ANGELES, June 4: Rap icon Eminem has filed a lawsuit against Meta Platforms Inc., the parent company of Facebook, Instagram, and WhatsApp, accusing the tech giant of unauthorized use and distribution of his music through its social media tools, including Reels Remix and Original Audio.
The complaint was filed on May 30 in Michigan federal court by Eminem's music publishing company, Eight Mile Style. The lawsuit alleges that Meta engaged in 'rampant' and 'knowing infringement' by making Eminem's songs available in its 'Music Libraries' for use in user-generated content — often without obtaining the necessary licenses. These songs, including major hits like 'Lose Yourself,' were allegedly used in millions of videos and streamed billions of times without proper authorization.
Eight Mile Style contends that Meta attempted to secure licensing through Audiam, Inc., a digital royalty collection company, but that Audiam was never granted such authority on its behalf. The lawsuit claims Meta 'willfully' encouraged its billions of users to use Eminem's music without a license and argues that Meta is not eligible for Digital Millennium Copyright Act (DMCA) safe harbor protections because of its alleged knowledge and facilitation of the infringement.
While some songs were reportedly removed from Meta platforms following complaints — including 'Lose Yourself' — Eight Mile Style maintains that unauthorized covers and instrumental versions of the rapper's music remain accessible.
The company is seeking statutory damages of up to $150,000 per song, per platform — potentially amounting to over $1 million — along with actual damages, lost profits, and a permanent injunction against the continued unlicensed use of Eminem's music. Eight Mile Style has also requested a jury trial.
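For scale, here is an illustrative calculation of how a per-song, per-platform claim adds up; the song count is hypothetical and does not come from the complaint. If 10 compositions were found infringed on both Facebook and Instagram, the statutory maximum would be 10 songs × 2 platforms × $150,000 = $3,000,000, which is why even a modest catalog quickly pushes the potential award past the $1 million mark.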
Related Articles


Arab Times, 13-05-2025
Microsoft Teams to block screenshots of meetings starting July 2025
NEW YORK, May 11: Microsoft is preparing to launch a new feature in Microsoft Teams designed to prevent users from capturing screenshots of confidential material shared during meetings. The feature, titled Prevent Screen Capture, is set to roll out globally across Android, iOS, desktop, and web platforms starting in July 2025.
According to a recent Microsoft 365 roadmap update, this feature will automatically place participants using unsupported platforms into audio-only mode, further ensuring the security of visual content. If someone attempts to take a screenshot, the screen will turn black, effectively blocking the capture of sensitive visuals.
'This feature will be supported on Teams desktop apps for Windows and macOS, as well as mobile apps for iOS and Android,' Microsoft confirmed. However, while digital screen captures will be blocked, the company acknowledges that physical methods, such as taking photos of a screen with an external device, can still bypass the protection.
It remains unclear whether this feature will be turned on by default or if it will be configurable by meeting organizers or IT administrators.
Microsoft's move follows a similar step by Meta, which recently introduced the 'Advanced Chat Privacy' feature on WhatsApp. That feature restricts users from saving or exporting sensitive content from both private and group chats.
In parallel with the July rollout, Microsoft also plans to launch additional enhancements to Teams. These include updated town hall privileges for Teams Rooms on Windows, interactive BizChat/Copilot Studio agents for meetings and one-on-one calls, and a Copilot-powered tool to generate audio summaries of transcribed discussions. The audio overview feature will allow users to select speakers, customize tone, and set the summary length.
Additionally, Microsoft recently reminded administrators that a new Teams Chat security update—designed to detect phishing via external access and impersonation—will become generally available by mid-February 2025.
At the Enterprise Connect conference last year, Microsoft revealed that Teams now serves over 320 million monthly active users across 181 countries and in 44 languages, highlighting the platform's growing role in enterprise communication.
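Microsoft has not said how Prevent Screen Capture is implemented under the hood. On Android, the standard way an app makes its own window appear black in screenshots and recordings is the platform's FLAG_SECURE window flag; the Kotlin sketch below is a minimal illustration of that general mechanism only (the activity name is hypothetical), not a description of the Teams code itself.

import android.app.Activity
import android.os.Bundle
import android.view.WindowManager

// Hypothetical activity standing in for a meeting screen; illustration only.
class SecureMeetingActivity : Activity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // FLAG_SECURE marks this window's content as secure: the system excludes it
        // from screenshots and shows it as black in screen recordings and casts.
        window.setFlags(
            WindowManager.LayoutParams.FLAG_SECURE,
            WindowManager.LayoutParams.FLAG_SECURE
        )

        // setContentView(...) for the actual meeting UI would follow here.
    }
}

As the article itself notes, a flag like this only blocks on-device capture; photographing the screen with a separate camera still bypasses the protection.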


Arab Times, 06-05-2025
Meta's new AI chatbot draws criticism for deep data tracking
NEW YORK, May 6: Meta has launched a standalone artificial intelligence chatbot app, Meta AI, which quickly climbed to No. 2 on Apple's App Store charts. Promising a more 'personalized' experience, the app provides tailored answers and advice, along with a new social component that allows users to share their AI-generated conversations and images publicly.
However, the app has also triggered significant privacy concerns due to its integration with Facebook and Instagram and its broad data collection practices. Meta AI draws on years of user data from these platforms to personalize interactions — raising alarms among privacy advocates.
When first tested, Meta AI generated a 'Memory' file identifying the user's interests based on conversations, including sensitive topics like fertility techniques, divorce, payday loans, and tax laws. The app automatically stores conversations and uses them to enhance responses, train future AI models, and eventually, serve targeted advertising. While users can delete stored data and memories, doing so requires navigating complex settings, and deleted data may not be fully erased.
Meta spokesperson Thomas Richards defended the app's design, saying: 'We've provided valuable personalization for people on our platforms for decades. The Meta AI app is no different. We offer transparency and controls so people can manage their experience.'
Still, experts warn the controls fall short. 'The disclosures and consumer choices around privacy settings are laughably bad,' said Ben Winters, director of AI and data privacy at the Consumer Federation of America. 'I would only use it for surface-level prompts—nothing personal.'
Meta AI is deeply integrated with Facebook and Instagram. If users set up the app through their existing social accounts, the AI can access and share data across these platforms. To avoid this, users must create a separate Meta AI account.
Moreover, the app records every interaction, including text and voice, and stores key facts in a personal 'Memory' file. These interactions feed directly into Meta's AI training systems. Unlike competitors such as ChatGPT and Google Gemini, Meta AI offers no opt-out option for data retention or training. Meta's terms of service warn users: 'Do not share information that you don't want the AIs to use and retain.'
Privacy researchers caution that AI systems personalized at this level can lead to unforeseen consequences. 'Just because these tools act like your friend doesn't mean they are,' said Miranda Bogen, director at the Center for Democracy & Technology. Personalized AI, she noted, can create bias or reinforce stereotypes based on incomplete or inaccurate data.
Another feature raising concerns is Meta AI's 'Share' button, which posts chats publicly by default on a 'Discover' tab. There is no way to restrict sharing to just friends or private messages, though users can later hide shared posts.
While Meta AI does not currently show ads, CEO Mark Zuckerberg recently indicated that product recommendations and advertising will be introduced. This raises questions about whether AI-generated advice may one day be influenced by commercial interests. For instance, during one test, the AI inferred parenthood from a conversation about baby bottles, potentially leading to assumptions and recommendations based on flawed data.
Justin Brookman, director of technology policy at Consumer Reports, expressed concern: 'The idea of an agent is that it works on my behalf — not on trying to manipulate me on others' behalf. Personalized advertising powered by AI is inherently adversarial.'
Currently, users can delete individual chats and clear their Memory file, but these actions don't always fully erase the data. To completely remove personal information, users must delete original chats and instruct the app to wipe all stored content. Meta maintains that it trains its models to minimize the risk of personal data appearing in other users' responses. However, critics argue the lack of opt-out mechanisms and the potential for data leakage make Meta AI's privacy model especially problematic.
As personalized AI tools become more deeply embedded in everyday technology, experts are urging greater transparency and user control. For now, users are advised to be cautious about what they share with Meta AI — and to read the fine print.