Meta Invests Big in Startup Scale AI, Brings On Its CEO for AI Development Efforts

Yahoo, 17 hours ago

Scale AI said Meta has made a "significant new investment" in the company that values the startup at $29 billion.
As part of the deal, Scale AI CEO Alexandr Wang will join Meta to work on artificial intelligence efforts.
The move comes as Meta CEO Mark Zuckerberg has reportedly been frustrated with the Facebook parent's level of AI progress.

Artificial intelligence startup Scale AI said it received a "significant new investment" from Meta (META) that values the company at more than $29 billion and sees its chief executive join the tech titan.
CEO Alexandr Wang posted on X that he will leave his role at Scale to work on Meta's AI efforts, while remaining on the startup's board.
The move comes as Meta CEO Mark Zuckerberg has reportedly been frustrated with the company's level of AI progress. Last month, The Wall Street Journal reported Meta was pushing back the launch of its latest Llama 4 large language model amid concerns about whether enough improvements had been made compared to previous iterations.
To speed up development, Zuckerberg is reportedly working to build what has been referred to internally at Meta as a "superintelligence group" that will sit near him at Meta's headquarters in Menlo Park, Calif.
Shares of Meta are little changed in recent trading. The stock is up 18% for 2025.
Read the original article on Investopedia


Related Articles

As disinformation and hate thrive online, YouTube quietly changed how it moderates content
Yahoo, an hour ago

YouTube, the world's largest video platform, appears to have changed its moderation policies to allow more content that violates its own rules to remain online.

The change happened quietly in December, according to The New York Times, which reviewed training documents for moderators indicating that a video could stay online if the offending material did not account for more than 50 per cent of the video's duration, double what it was prior to the new guidelines.

YouTube, which sees 20 million videos uploaded a day, says it updates its guidance regularly and that it has a "long-standing practice of applying exceptions" when it suits the public interest or when something is presented in an educational, documentary, scientific or artistic context.

"These exceptions apply to a small fraction of the videos on YouTube, but are vital for ensuring important content remains available," YouTube spokesperson Nicole Bell said in a statement to CBC News this week.

But in a time when social media platforms are awash with misinformation and conspiracy theories, there are concerns that YouTube is only opening the door for more people to spread problematic or harmful content, and to make a profit doing so.

YouTube isn't alone. Meta, which owns Facebook and Instagram, dialled back its content moderation earlier this year, and Elon Musk sacked Twitter's moderators when he purchased the platform in 2022 and rebranded it as X.

"We're seeing a race to the bottom now," Imran Ahmed, CEO of the U.S.-based Center for Countering Digital Hate, told CBC News. "What we're going to see is a growth in the economy around hate and disinformation."

YouTube's goal is "to protect free expression," Bell said in her statement, explaining that easing its community guidelines "reflect the new types of content" on the platform. For example, she said, a long-form podcast containing one short clip of violence may no longer need to be removed.

The Times reported Monday that examples presented to YouTube staff included a video in which someone used a derogatory term for transgender people during a discussion about hearings for U.S. President Donald Trump's cabinet appointees, and another that shared false information about COVID-19 vaccines but did not outright tell people not to get vaccinated.

A platform like YouTube does have to make some "genuinely very difficult decisions" when moderating content, says Matt Hatfield, executive director of the Canadian digital rights group OpenMedia. He believes platforms do take the issue seriously, but he says there's a balance between removing harmful or illegal content, such as child abuse material or clear incitements to violence, and allowing content to stay online, even if it's offensive to many or contains some false information.

The problem, he says, is that social media platforms also "create environments that encourage some bad behaviour" among creators, who like to walk the line of what's acceptable. "The core model of these platforms is to keep you clicking, keep you watching, get you to try a video from someone you've never experienced before and then stick with that person."

And that's what concerns Ahmed. He says these companies put profits over online safety and that they don't face consequences because there are no regulations forcing them to limit what can be posted on their platforms. He believes YouTube's relaxed policies will only encourage more people to exploit them.

In a recent transparency report, YouTube said it had removed nearly 2.9 million channels containing more than 47 million videos for community guideline violations in the first quarter; that came after the reported policy change. The overwhelming majority of those, 81.8 per cent, were considered spam, but other reasons included violence, hateful or abusive material and child safety.

Hatfield says there is a public interest in having harmful content like that removed, but that doesn't mean all controversial or offensive content must go. However, he says YouTube does make mistakes in content moderation, explaining that it judges individual videos in a sort of "vacuum" without considering how each piece of content fits into a broader context. "Some content can't really be fairly interpreted in that way."

Ahmed says companies should be held accountable for the content on their platforms through government regulation. He pointed to Canada's controversial but now-scuttled Online Harms Act, also known as Bill C-63, as an example. It proposed heavier sentences, new regulatory bodies and changes to a number of laws to tackle online abuse. The bill died when former prime minister Justin Trudeau announced his resignation and prorogued Parliament back in January. Ahmed says he hopes the new government under Prime Minister Mark Carney will enact similar legislation.

Hatfield says he liked parts of that act, but his group ultimately opposed it after it tacked on some other changes to the Criminal Code and Human Rights Act that he says were unrelated to the platforms. He says groups like OpenMedia would have liked to see a strategy addressing business models that encourage users to post and profit off of "lawful but awful" content.

"We're not going to have a hate-free internet," he said. "We can have an internet that makes it less profitable to spread certain types of hate and misinformation."
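The reported 50-per-cent rule is, in effect, a simple duration ratio. As an illustration only (YouTube's actual tooling is not public; the function name and durations below are hypothetical):

```python
def may_stay_online(offending_seconds: float, total_seconds: float,
                    threshold: float = 0.5) -> bool:
    """Reported guideline: a video can remain online if the offending
    material accounts for no more than half of its duration.
    The prior threshold was reportedly half of that (0.25)."""
    return offending_seconds / total_seconds <= threshold

# Hypothetical case: 4 offending minutes in a 10-minute video is 40% --
# under the reported new 50% line, but over the old 25% one.
print(may_stay_online(240, 600))                  # True
print(may_stay_online(240, 600, threshold=0.25))  # False
```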

Even G1 Smart Glasses Review: Are these live-translating frames the future of smart tech?
Yahoo, an hour ago

Smart glasses are about the most fashion-forward bit of tech you can wear. Obviously, the features themselves are important, but if they look terrible (do we even need to mention the Google Glass horror show from over a decade ago?), they'll quickly become a novelty rather than a necessity, resigned to gathering dust in the drawer, a graveyard for old iPhone chargers and batteries.

Fortunately, in the case of the new Even Realities G1 smart glasses, aesthetics aren't an issue. They're sleek, stylish (if a bit boxy) and, most importantly, discreet. They come in three colours (green, grey and brown) and two styles: Even G1 A and Even G1 B. The former has circular frames, while the G1 B's are more rectangular and 'modern', according to the website.

What separates the G1 glasses from others on the market is their simplicity, being solely powered by the highly functional and crisp Holistic Adaptive Optical System: a small LED projector that beams green text onto the lenses. That's it. No built-in speakers or cameras, as seen on Meta's collaborations with Ray-Ban. No fuss.

After linking up through Bluetooth to the Even Realities app on your phone, you'll be able to (occasionally ineffectively) translate languages in real time, use maps without staring down at your phone, and invisibly read from a teleprompter for public speaking. Charging the device is also simple and rarely needed, with a 1.5-day battery life.

The most functional aspect of the G1 device is the dashboard feature, where you can see notifications, the time and other data while going about your day. As a disclaimer, I found it necessary to change the default head-tilt angle required to access the dashboard from the factory setting of 20° to 10°. At 20°, with the more significant head raise required, it looked as though I had adopted a nervous twitch, and the feature was more difficult to use.

Unfortunately, the Even AI feature is clunky. In theory, pressing a button on the left arm will allow you to ask the specs anything, but for me the loading speeds were slow and it struggled to understand many of my commands.

Despite the G1's shortcomings and relatively high price point (it's an extra £125 to add prescription lenses), these could be the most functional smart specs on the market.

Technical Specs
Material: High-quality magnesium and titanium alloys
Weight: 38g
Lens size: 41.8 x 47mm
Display: Micro LED

Pricing
£594 (+£125 for prescription lenses)

Where to buy
Find them on the Even Realities website here

Meta to pay Scale AI at least $450M per year for five years, Forbes reports
Business Insider, 5 hours ago

As part of its deal to acquire a 49% stake in Scale AI, Meta plans to pay Scale AI no less than $450M per year for five years for its AI products, or over half of its annual AI spend, whichever is lower, Forbes' David Jeans reports, citing a person who worked on the deal. Such a commitment from the Facebook parent would make up a significant part of Scale's revenue, the author notes.

Published first on TheFly.
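As Forbes describes it, the commitment amounts to a "lesser of two amounts" clause. A minimal sketch of that arithmetic, where the function name and spend figures are hypothetical and not taken from the deal terms:

```python
def annual_scale_payment(meta_ai_spend: float, floor: float = 450e6) -> float:
    """Lesser of the reported $450M/year figure and half of Meta's
    annual AI spend (both per the Forbes description)."""
    return min(floor, 0.5 * meta_ai_spend)

# Hypothetical spend levels:
print(annual_scale_payment(2_000_000_000))  # 450000000.0 (half of $2B exceeds $450M)
print(annual_scale_payment(600_000_000))    # 300000000.0 (half of $600M is lower)
```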
