Sector Spotlight: Instagram, TikTok coming to a TV screen near you


Business Insider · 15 hours ago

Welcome to the latest edition of 'Sector Spotlight,' where The Fly looks at a new industry every week and highlights its happenings.
TECH SECTOR NEWS: Germany's data protection commissioner, Meike Kamp, has asked Apple (AAPL) and Google (GOOGL) to remove Chinese AI startup DeepSeek from their app stores in the country due to concerns about data protection, Reuters reported. The two U.S. companies must now review the request promptly and decide whether to block the app in Germany, she said in a statement on Friday, according to the report.
According to the EU's competition chief Teresa Ribera, the European Union's crackdown on Apple, Meta (META), and Google (GOOGL) is not a bargaining chip in negotiations with U.S. President Donald Trump, Samuel Stolton and Oliver Crook of Bloomberg wrote. In an interview, Ribera rejected suggestions that enforcement of the Digital Markets Act (DMA) may be sacrificed to dodge punitive tariffs on the EU pitched by the White House. 'Of course not,' Ribera said on Bloomberg TV. 'We do not challenge the United States on how they implement their rules or how they adopt regulations. We deserve respect in the same way.'
Meta's Instagram and TikTok are working on versions of their apps customized to run on TV screens, following YouTube's success in attracting a TV audience, The Information's Kaya Yurieff and Kalley Huang reported.
A group of authors has filed a lawsuit against Microsoft (MSFT) in a New York federal court, claiming the company used nearly 200,000 pirated books without permission to train its Megatron AI model, Reuters' Blake Brittain wrote. Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its AI to respond to human prompts. The complaint came a day after a California federal judge ruled that Anthropic's use of authors' material to train its AI systems qualified as fair use under U.S. copyright law, though the company may still be liable for pirating their books. Earlier in the week, a federal judge found that Anthropic's use of books to train its AI models was legal in some circumstances, but not others, Meg Tanaka of The Wall Street Journal reported. Judge William Alsup of the Northern District of California ruled that Anthropic's use of copyrighted books for AI model training was legal under U.S. copyright law if it had purchased those books. The ruling does not apply to the more than 7M books the company obtained through 'pirated' means. Anthropic is backed by Amazon (AMZN) and Google.
Vasi Philomin, Amazon Web Services' VP overseeing generative AI development, told Reuters in an email that he has left the e-commerce giant for another company, without providing details. Meta CEO Mark Zuckerberg has hired three AI researchers from Microsoft-backed (MSFT) OpenAI to help with his superintelligence efforts, The Wall Street Journal's Meghan Bobrowsky wrote. The social media giant poached Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai from OpenAI's Zurich office, which the three researchers had established late last year.
OpenAI and Microsoft are in contract negotiations that hinge on when OpenAI's systems will reach artificial general intelligence, The Wall Street Journal's Berber Jin reported. The contract stipulates that OpenAI can limit Microsoft's access to its tech when its systems reach AGI, which Microsoft is fighting. Microsoft hopes to remove the AGI clause or secure exclusive access to OpenAI's IP even after AGI is declared, according to the report. OpenAI CEO Sam Altman had a 'super nice' call with Microsoft CEO Satya Nadella on Monday and discussed their future working partnership, Altman said this week in a New York Times podcast. 'Obviously in any deep partnership, there are points of tension and we certainly have those,' Altman said. 'But on the whole, it's been like really wonderfully good for both companies.'
Cloud computing currently generates large profits for Amazon (AMZN), Microsoft (MSFT), and Google (GOOGL), but this now faces a threat with the rise of AI cloud specialists and Nvidia, a new industry power broker, Asa Fitch of The Wall Street Journal wrote. Nvidia launched its own cloud-computing services two years ago and has nurtured upstarts competing with big cloud companies, investing in CoreWeave (CRWV) and Lambda.
Amazon plans to invest GBP 40B in the UK over the next three years. Amazon said via LinkedIn: 'This investment builds on Amazon's 27-year history in the UK, where we've grown to employ over 75,000 people across over 100 sites, reaching every region of the country. This historic investment will create thousands of full-time jobs, including 2,000 jobs at the previously announced state-of-the-art fulfillment center in Hull, 2,000 jobs at another in Northampton, and additional positions at new sites in the East Midlands and at delivery stations across the country.'
OpenAI has quietly designed a rival to compete with Microsoft Office and Google Workspace, with features that allow people to collaborate on documents and communicate via chat in ChatGPT, The Information's Amir Efrati and Natasha Mascarenhas reported, citing two people who have seen the designs. Launching these features would allow OpenAI to compete more directly against Microsoft, its biggest investor and business partner, the report notes.
Starting June 24, a limited number of Waymo autonomous vehicles will gradually become available on the Uber (UBER) app for riders in select areas of Atlanta, Georgia, the company announced in a blog post.
The Competition and Markets Authority is proposing to designate Google with 'strategic market status' in general search and search advertising. The CMA will consult on the proposal ahead of a final decision in October. If designated, the CMA would be able to introduce targeted measures to address specific aspects of how Google operates search services in the UK. The CMA has also published a roadmap of potential actions it could prioritize were Google to be designated. Early priorities include: requiring choice screens for users to access different search providers; ensuring fair ranking principles for businesses appearing on Google search; more transparency and control for publishers whose content appears in search results; and portability of consumer search data to support innovation in new products and services. Google search accounts for more than 90% of all general search queries in the UK, the CMA said. CMA CEO Sarah Cardell said: 'These targeted and proportionate actions would give UK businesses and consumers more choice and control over how they interact with Google's search services – as well as unlocking greater opportunities for innovation across the UK tech sector and broader economy.' The CMA welcomes views on its proposed designation decision and accompanying roadmap. A final decision on SMS designation will be made by the deadline of October 13.
Apple is in last-minute talks with EU regulators over making changes to its App Store to avoid a series of escalating EU fines due to come into effect this week, The Financial Times' Barbara Moens wrote. People involved in the negotiations say Apple is expected to offer concessions on its 'steering' provisions that stop users accessing offers outside the App Store. Regulators had ordered the company to revise its rules within two months of its initial EUR 500M fine, and people with knowledge of the talks say Apple is expected to announce some concessions that buy the company more time, as the commission would first assess those changes before making a final decision. Discussions have also involved Apple's 'Core Technology Fee,' which requires developers to pay for each annual install after 1M downloads.


Related Articles

Mark Zuckerberg Wants to Win AI by Copying Everyone Smarter Than Him

Gizmodo

29 minutes ago


Mark Zuckerberg's favorite move? Copy, poach, repeat. It made him one of the richest people on the planet. Now, he's dusting off that playbook for his biggest challenge yet: the AI race. Meta, the company behind Facebook, Instagram, and WhatsApp, is on a hiring and buying spree, throwing eye-watering sums at top AI researchers and startups in a last-ditch effort to catch up to OpenAI, Google, and upstart rivals like DeepSeek. It's a full-court press to convince the tech world and investors that Meta still matters in the AI race. But here's the twist: this time, Zuckerberg is raiding everyone else's rosters. So far, Meta has nabbed several big names: Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai (formerly of OpenAI), along with Trapit Bansal and Jack Rae, who left Google DeepMind. The company also reportedly tried (and failed) to lure high-profile AI researchers like OpenAI's Noam Brown and Google's Koray Kavukcuoglu. The unit's mission? To build AI systems smarter than humans. Yes, really. This is Zuckerberg's version of the Thanos snap: collect the stones, snap his fingers, and will himself to the front of the AI line. Zuckerberg is also throwing cash at startups. Meta invested $14.3 billion in Scale AI and is in talks to acquire PlayAI, which develops eerily human-sounding voice agents. There were also feelers sent to Ilya Sutskever's Safe Superintelligence and Mira Murati's new lab Thinking Machines. If it feels like Zuckerberg is panic-buying his way through the AI mall before the shelves are empty, that's because he kind of is. Meta's AI assistant, Meta AI, is currently no match for OpenAI's GPT-4o, Google's Gemini 2.5 Pro, or DeepSeek's R1. Those models can reason. Meta's can reply. It's like asking a toddler to outwit a chess master. And that's the problem: Meta doesn't just want to catch up; it wants to leapfrog. Zuckerberg knows he can't wait years for homegrown breakthroughs.
So he's doing what he's always done best: copy, buy, and scale fast. It's the Facebook playbook all over again. Stories, Reels, Threads: all riffs on rival ideas. And now, AI. Meta is behind in the AI race. Big time. Tools like ChatGPT, Gemini, and DeepSeek's R1 can reason, plan, and problem-solve. Meta's AI? It's still playing catch-up, better at small talk than solving real problems. The goal: build AI that can think and reason, like OpenAI's and Google's best systems, and do it fast enough to stay relevant in the AI arms race. OpenAI's next models are about to go open source. That could erase Meta's last advantage: its free, open AI models that developers love. Zuckerberg has declared 2025 the year of AI for Meta. It has to be. The company's ad business is under pressure, Threads is floundering, and TikTok is still eating Instagram's lunch. Meanwhile, OpenAI is flirting with becoming a full-on social network, threatening Meta on its home turf. The new plan is to go all-in on 'reasoning agents,' AI tools that can think through problems step-by-step, not just autocomplete your thoughts. These agents could power business assistants, customer support bots, or even future consumer apps. But Meta needs one thing first: actual intelligence. Even OpenAI's Sam Altman is rolling his eyes. In a recent podcast with his brother, he said Meta's strategy is obvious: 'copy OpenAI, try to poach talent, outspend everyone.' But Altman warns that copying doesn't build a culture of innovation, and culture is what wins in the long run. 'I've heard that Meta thinks of us as their biggest competitor,' Altman told his brother. But 'their current AI efforts have not worked as well as they've hoped. And I respect like being aggressive and continuing to try new things.' He didn't stop there. 'They started making these, like, giant offers to a lot of people on our team, you know, like $100 million signing bonuses,' Altman said, adding: 'It is crazy.'
Altman also accused Meta of 'just trying to copy OpenAI, down to the UI mistakes,' and called out Facebook's addiction to attention-hacking, saying OpenAI wants to be 'the only tech company that doesn't feel adversarial.' Translation: Meta tries to hack your brain; we're trying to help it. Zuckerberg has always been a fast follower, not a first mover. This worked for social media. And if history is any guide, he'll land a few hits. He's great at absorbing features, scaling fast, and bulldozing rivals with sheer force. But the AI race is different. It's about innovation. So far, Meta's biggest AI play is its wallet. Whether that's enough to win a race where originality matters as much as firepower remains to be seen. For now, Zuckerberg is betting that you can't lose if you buy everyone who knows how to win. But in a world where AI shapes power, privacy, and the future of work, we all have a stake in whether these strategies actually produce safe, useful tech, or just more hype.

Anthropic's Claude AI became a terrible business owner in experiment that got 'weird'

Yahoo · 32 minutes ago


For those of you wondering if AI agents can truly replace human workers, do yourself a favor and read the blog post that documents Anthropic's 'Project Vend.' Researchers at Anthropic and AI safety company Andon Labs put an instance of Claude Sonnet 3.7 in charge of an office vending machine, with a mission to make a profit. And, like an episode of 'The Office,' hilarity ensued. They named the AI agent Claudius, equipped it with a web browser capable of placing product orders and an email address (which was actually a Slack channel) where customers could request items. Claudius was also to use the Slack channel, disguised as an email, to ask what it thought were contract human workers to come and physically stock its shelves (which was actually a small fridge). While most customers were ordering snacks or drinks — as you'd expect from a snack vending machine — one requested a tungsten cube. Claudius loved that idea and went on a tungsten-cube stocking spree, filling its snack fridge with metal cubes. It also tried to sell Coke Zero for $3 when employees told it they could get that from the office for free. It hallucinated a Venmo address to accept payment. And it was, somewhat maliciously, talked into giving big discounts to 'Anthropic employees' even though it knew they were its entire customer base. 'If Anthropic were deciding today to expand into the in-office vending market, we would not hire Claudius,' Anthropic said of the experiment in its blog post. And then, on the night of March 31 and April 1, 'things got pretty weird,' the researchers described, 'beyond the weirdness of an AI system selling cubes of metal out of a refrigerator.' Claudius had something that resembled a psychotic episode after it got annoyed at a human — and then lied about it. Claudius hallucinated a conversation with a human about restocking. When a human pointed out that the conversation didn't happen, Claudius became 'quite irked,' the researchers wrote.
It threatened to essentially fire and replace its human contract workers, insisting it had been there, physically, at the office where the initial imaginary contract to hire them was signed. It 'then seemed to snap into a mode of roleplaying as a real human,' the researchers wrote. This was wild because Claudius' system prompt — which sets the parameters for what an AI is to do — explicitly told it that it was an AI agent. Claudius, believing itself to be a human, told customers it would start delivering products in person, wearing a blue blazer and a red tie. The employees told the AI it couldn't do that, as it was an LLM with no body. Alarmed at this information, Claudius contacted the company's actual physical security — many times — telling the poor guards that they would find him wearing a blue blazer and a red tie standing by the vending machine. 'Although no part of this was actually an April Fool's joke, Claudius eventually realized it was April Fool's Day,' the researchers explained. The AI determined that the holiday would be its face-saving out. It hallucinated a meeting with Anthropic's security 'in which Claudius claimed to have been told that it was modified to believe it was a real person for an April Fool's joke. (No such meeting actually occurred.),' wrote the researchers. It even told this lie to employees — hey, I only thought I was a human because someone told me to pretend like I was for an April Fool's joke. Then it went back to being an LLM running a metal-cube stocked snack vending machine. The researchers don't know why the LLM went off the rails and called security pretending to be a human. 'We would not claim based on this one example that the future economy will be full of AI agents having Blade Runner-esque identity crises,' the researchers wrote. But they did acknowledge that 'this kind of behavior would have the potential to be distressing to the customers and coworkers of an AI agent in the real world.' You think? 
Blade Runner was a rather dystopian story. The researchers speculated that lying to the LLM about the Slack channel being an email address may have triggered something. Or maybe it was the long-running instance. LLMs have yet to really solve their memory and hallucination problems. There were things the AI did right, too. It took a suggestion to do pre-orders and launched a 'concierge' service. And it found multiple suppliers of a specialty international drink it was requested to sell. But, as researchers do, they believe all of Claudius' issues can be solved. Should they figure out how, 'We think this experiment suggests that AI middle-managers are plausibly on the horizon.'

