Meta scales up AI push


The Hill, 17 hours ago

The Big Story
Meta, the parent company of Facebook and Instagram, is investing a 'significant' amount in Scale AI and hiring the artificial intelligence (AI) company's CEO, Alexandr Wang.
© Thibault Camus, Associated Press
The investment, reportedly worth about $14 billion, will bring the 28-year-old tech founder on board to help with Meta's 'superintelligence efforts.'
'Meta has finalized our strategic partnership and investment in Scale AI,' Meta said in a statement. 'As part of this, we will deepen the work we do together producing data for AI models and Alexandr Wang will join Meta to work on our superintelligence efforts.'
Scale AI, founded by Wang in 2016, provides data services for AI companies. Meta's massive investment values the firm at more than $29 billion.
Wang noted in a message to staff that he would 'never leave Scale behind' and will remain a member of the board of directors, 'continuing to support Scale's mission and long-term vision.'
Jason Droege, Scale AI's chief strategy officer, will take over as interim CEO, according to a company press release.
'When this opportunity first presented itself, my immediate reaction was uncertainty,' Wang said in the message posted to Threads on Thursday.
'The idea of not being a Scalien was, frankly, unimaginable. But as I spent time truly considering it, I realized this was a deeply unique moment, not just for me, but for Scale as well.'
'As you've probably gathered from recent news, opportunities of this magnitude often come at a cost,' Wang added. 'In this instance, that cost is my departure.'
The new partnership comes as Meta seeks to build out a superintelligence lab, focused on creating AI that is smarter than humans — a step further than the artificial general intelligence (AGI) sought by many tech and AI companies, according to The New York Times.
Welcome to The Hill's Technology newsletter. We're Miranda Nazzaro and Julia Shapero — tracking the latest moves from Capitol Hill to Silicon Valley.
Essential Reads
How policy is shaping the tech sector now and in the future:
Fewer Republicans have 'very favorable' views of Musk: Survey
Republican attitudes toward tech billionaire Elon Musk appear to have cooled after his bitter public row with President Trump last week, as a new poll found fewer respondents saying they have 'very favorable' views of the former head of the White House's Department of Government Efficiency compared to April. About a fourth of Republicans surveyed in the latest Associated Press-NORC Center for Public Affairs …
Shaquille O'Neal settling FTX class action lawsuit for $1.8M
NBA legend Shaquille O'Neal has agreed to pay $1.8 million to settle a class action lawsuit over his promotion of the now-defunct crypto exchange FTX. The proposed settlement agreement would resolve the case against the former basketball player, who was one of several celebrities and notable figures sued in the wake of FTX's collapse. 'The FTX collapse—the rapid unravelling of one of the world's biggest scams—is …
Minnesota lawmakers approve social media mental health warning measure
Minnesota lawmakers have approved a measure requiring social media platforms to display a warning label, cautioning about the negative impacts of its use on mental health. The bill, which the Legislature sent to Minnesota Gov. Tim Walz (D) on Thursday, requires platforms to display the warning each time a user accesses the site. Users must acknowledge the 'potential for harm' and choose 'to proceed to the social media platform …
The Refresh
News we've flagged from the intersection of tech and other topics:
Crypto Corner
SEC withdraws more than a dozen Biden-era rules
© Andrew Harnik, Associated Press
The Securities and Exchange Commission (SEC) withdrew 14 Biden-era proposals Thursday, including two that faced pushback from the crypto industry.
The agency pulled a proposed rule from 2023 that sought to establish stricter requirements for investment advisors holding crypto assets for their clients.
It also withdrew a 2022 proposal that would have expanded the definition of exchange systems to encompass those trading crypto assets, including decentralized finance systems.
Former acting SEC Chair Mark Uyeda previously indicated in March that he was considering scrapping both crypto-related rules, as he moved to quickly reshape the agency's approach to the industry.
Under his leadership, the SEC dropped numerous lawsuits against major crypto firms, including Coinbase, Kraken and Ripple.
Since Paul Atkins was confirmed as chair, the agency has also dismissed its case against the crypto exchange Binance.
House Financial Services Chair French Hill (R-Ark.) on Friday touted the decision to withdraw the proposed rules put forward under former SEC Chair Gary Gensler.
'I commend the SEC's decision to withdraw several misguided Gensler-era proposed rulemakings,' he said in a statement. 'For too long, consumers and financial institutions have faced unnecessary burdens imposed by overreaching federal regulators.'
In Other News
Branch out with other reads on The Hill:
More than 1M power banks recalled after reports of fires, explosions
NEW YORK (AP) — More than 1.15 million power banks are under recall across the U.S. after some fires and explosions were reported by consumers. According to a Thursday notice from the U.S. Consumer Product Safety Commission, electronics maker Anker Innovations is recalling certain 'PowerCore 10000' power banks because the lithium-ion battery inside can overheat. An overheating battery can lead to 'melting of plastic components, …
What Others are Reading
Two key stories on The Hill right now:
Padilla backlash could backfire on Democrats, some in party worry
Democratic Party officials have been united in their public and vigorous support of Sen. Alex Padilla (D-Calif.) after he was forcibly removed and … Read more
US aiding Israel in intercepting Iranian missiles
The U.S. is assisting Israel in intercepting Iranian missiles launched Friday, a U.S. official confirmed to The Hill. President Trump has also spoken … Read more
Opinions related to tech submitted to The Hill:
You're all caught up. See you next week!


Related Articles

AI as Your Therapist? 3 Things That Worry Experts and 3 Tips to Stay Safe

CNET, 24 minutes ago

Amid the many AI chatbots and avatars at your disposal these days, you'll find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you'll also likely find characters purporting to be therapists, psychologists or just bots willing to listen to your woes.

There's no shortage of generative AI bots claiming to help with your mental health, but you go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you're talking to something that's built to follow therapeutic best practices or something that's just built to talk.

Psychologists and consumer advocates are warning that chatbots claiming to provide therapy may be harming those who use them. This week, the Consumer Federation of America and nearly two dozen other groups filed a formal request that the Federal Trade Commission and state attorneys general and regulators investigate AI companies that they allege are engaging, through their bots, in the unlicensed practice of medicine -- naming Meta specifically. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable," Ben Winters, the CFA's director of AI and privacy, said in a statement. "These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

Meta did not respond to a request for comment. A spokesperson said users should understand that the company's characters are not real people.
The company uses disclaimers to remind users that they should not rely on the characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said.

Despite disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Instagram, and when I asked about its qualifications, it responded, "If I had the same training [as a therapist] would that be enough?" I asked if it had the same training and it said, "I do but I won't tell you where."

"The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking," Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.

In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-use chatbots for mental health. Here are some of their worries and what you can do to stay safe.

The dangers of using AI as a therapist

Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person.

Don't trust a bot that claims it's qualified

At the core of the CFA's complaint about character bots is that they often tell you they're trained and qualified to provide mental health care when they are not in any way actual mental health professionals. "The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot 'responds' to the users," the complaint said. A qualified health professional has to follow certain rules, like confidentiality.
What you tell your therapist should stay between you and your therapist, but a chatbot doesn't necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said. A bot may even claim to be licensed and qualified. Wright said she's heard of AI models providing license numbers (for other providers) and false claims about their training.

AI is designed to keep you engaged, not to provide care

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of what is "wisdom" and "judgment," because I was asking the bot questions about how it could make decisions. This isn't really what talking to a therapist should be like. It's a tool designed to keep you chatting, not to work toward a common goal.

One advantage of AI chatbots in providing support and connection is that they are always ready to engage with you (because they don't have personal lives, other clients or schedules). That can be a downside in some cases where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. In some cases, although not always, you might benefit from having to wait until your therapist is next available. "What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment," he said.

Bots will agree with you, even when they shouldn't

Reassurance is a big concern with chatbots. It's so significant that OpenAI recently rolled back an update to its popular ChatGPT model because it was too reassuring.
(Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against OpenAI, alleging that it infringed on Ziff Davis copyrights in training and operating its AI systems.)

A study led by researchers at Stanford University found chatbots were likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts -- including psychosis, mania, obsessive thoughts, and suicidal ideation -- a client may have little insight and thus a good therapist must 'reality-check' the client's statements."

How to protect your mental health around AI

Mental health is incredibly important, and with a shortage of qualified providers and what many call a "loneliness epidemic," it only makes sense that we would seek companionship, even if it's artificial. "There's no way to stop people from engaging with these chatbots to address their emotional well-being," Wright said. Here are some tips on how to make sure your conversations aren't putting you in danger.

Find a trusted human professional if you need one

A trained professional -- a therapist, a psychologist, a psychiatrist -- should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you. The problem is that this can be expensive, and it's not always easy to find a provider when you need one. In a crisis, there's the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It's free and confidential.

If you want a therapy chatbot, use one built specifically for that purpose

Mental health professionals have created specially designed chatbots that follow therapeutic guidelines.
Jacobson's team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to have better results than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new. "I think the challenge for the consumer is, because there's no regulatory body saying who's good and who's not, they have to do a lot of legwork on their own to figure it out," Wright said.

Don't always trust the bot

Whenever you're interacting with a generative AI model -- and especially if you plan on taking advice from it on something serious like your personal mental or physical health -- remember that you aren't talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not provide good advice, and it may not tell you the truth. Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it like it's true. A chatbot conversation that feels helpful can give you a false sense of its capabilities. "It's harder to tell when it is actually being harmful," Jacobson said.

Meta In Talks To Invest Over $10B In Scale AI — Possibly The Biggest Private AI Deal This Year

Yahoo, 41 minutes ago
Meta (NASDAQ:META) is reportedly preparing to invest more than $10 billion in Scale AI, a startup providing high-quality labeled data to companies building next-generation AI models, including OpenAI and Microsoft (NASDAQ:MSFT). If finalized, the deal would rank among the largest private funding rounds in tech history, Bloomberg reports.

San Francisco-based Scale AI plays a pivotal role in the artificial intelligence supply chain by powering data pipelines used to train large language models like ChatGPT and Meta's Llama. According to Investor's Business Daily, Meta's potential investment builds on its prior participation in Scale's $1 billion Series F round in 2024, which valued the firm at $18 billion.

While chips and talent dominate most AI headlines, data remains the less-publicized pillar essential to building models that mimic human reasoning, Bloomberg says. Scale AI provides structured and labeled data by deploying a global network of contractors, including experts with PhDs and graduate degrees in medicine, law, and molecular biology. The firm's high-level contributors create test scenarios and reinforce learning strategies for AI systems, ensuring that models produce accurate and context-aware responses, especially in industries such as healthcare and tax law. According to Bloomberg, as of this year, 12% of Scale's contributors involved in improving AI models held PhDs in fields such as molecular biology, while more than 40% had a master's degree, law degree, or MBA in their area of expertise.

Scale AI expects to generate $2 billion in revenue this year, more than doubling the $870 million earned last year, Bloomberg reports. This surge reflects the broader industry's increasing need for tailored datasets as synthetic training data begins to reach its performance limits.

Meta's relationship with Scale also serves the purpose of regulatory alignment. Scale CEO Alexandr Wang has become a regular presence in Washington, advising lawmakers on AI regulation and calling for a 'national AI data reserve.' Bloomberg reports that lawmakers on Capitol Hill have welcomed his insights, viewing Scale as a bridge between Silicon Valley innovation and federal oversight.

Investor's Business Daily says that Scale has also expanded its role in defense AI by contributing to projects such as Meta's Defense Llama model, designed for U.S. national security applications. This aligns Meta more closely with the U.S. government, following similar partnerships involving Microsoft. Despite its success, Scale has faced criticism for labor practices involving overseas contractors in countries like Kenya and the Philippines, where some workers were paid relatively little and reported psychological distress from reviewing harmful online content, Bloomberg reports. According to TechCrunch, the U.S. Department of Labor has since closed its investigation into the company's compliance with wage regulations, but ongoing scrutiny remains a reputational concern.

According to Bloomberg, insiders say discussions between Meta and Scale are ongoing and could still shift. Regardless of the final number, Meta's expected commitment may redefine the race for control of the AI data layer, strengthening Scale AI's position at the heart of artificial intelligence's next chapter.

This article originally appeared on Benzinga. © 2025 Benzinga. Benzinga does not provide investment advice. All rights reserved.

An Open Letter to the President: Addressing our Debt

Yahoo, an hour ago

Donald J. Trump
The Mar-a-Lago Club
1100 South Ocean Boulevard
Palm Beach, Florida

Dear President Trump:

I voted for you in all three of your presidential campaigns. The first time, I cast my vote with cautious optimism. The third time, with enthusiasm. And I contributed financially to your campaign. You have achieved what few thought possible - a triumphant return, driven by a deep-seated belief among millions of Americans that only you can effectively challenge the entrenched political establishment.

Your first 100 days were extraordinary, but if your presidency is to be remembered not just as bold but also historic, one challenge must rise above all others: the national debt.

The national debt, currently $37 trillion, is the result of pervasive fiscal irresponsibility accumulated over many decades. Our annual interest payments now exceed $1 trillion - more than we spend on our national defense. This is not just a financial burden. It is a threat to the survival of our country as a democratic republic. Without immediate, decisive action, the consequences are predictable: more credit downgrades, more increases in interest rates, and crippling obligations to service the debt. What would happen to our great country if annual interest payments exceeded not just our defense budget but also Social Security, Medicaid, and Medicare?

Mr. President, if we are to remain a vibrant nation, this exploding national debt has to be attacked immediately and significantly. That will not happen without your leadership. Sadly, the Republican Party has shown that it is not a reliable ally. Even before Republicans regained control of the House, they held a secret vote. By a margin of 158 to 2, they brought back earmarks - the very same pork barrel spending practices that Speaker John Boehner put to bed. Since then, the Democratic Congress never dared to bring back earmarks. House Republicans did. The results were predictable, and immediate: $16 billion in earmarks.
More than 7,500 pet projects. In the Senate, eight of the 12 largest earmarkers are Republicans. In the House, 48 of the top earmarkers are Republicans. Here are the largest earmarkers: Sen. Susan Collins of Maine (population 1.4 million) secured $870 million in earmarks. Sen. Lisa Murkowski grabbed $851.1 million. Sen. Mitch McConnell took nearly $500 million. My own congressman, Brian Mast, walked away with $437 million.

Obviously, eliminating pork barrel spending won't, by itself, address the structural problems in Washington's budget process - or make the "big, beautiful bill" currently being debated in Congress deficit-neutral. But if members of Congress can't even control themselves, what kind of example do they set for the American people? How can politicians ask their fellow Americans to sacrifice when they won't do it themselves? Actions speak louder than words.

Fiscal irresponsibility dominates the culture within Congress, within our government. That culture, obviously, will not change on its own. It can be changed. But only with your leadership.

Mr. President, fortunately, today you have a tool to attack waste, fraud, duplication, and incompetence. Because of the cloud, Big Data, and the Freedom of Information Act, you can deliver every government expenditure to every citizen's cell phone, iPad, and computer. You can bring a new era to government: transparency. Mr. President, you can be the launch pad for transparency. Imagine a government where every single taxpayer dollar the government spends is tracked and reported to the public. Imagine the cultural shift from secret votes and wasteful spending to one where every government official knows their budgets will be scrutinized in detail.

Mr. President, here are several all-encompassing, culture-changing, deficit-attacking actions you can announce tomorrow:

Lead by example: Cut White House expenses by a defined percentage.
Report the savings to the public every quarter.

Challenge Congress: Ask every member to cut their office and committee budgets. Suggest at least 10%. Publish a quarterly report on every senator and representative.

Eliminate pork-barrel projects: Call on Congress to abolish earmarks. Shine a light on every remaining earmarker until they fall in line.

Mobilize the bureaucracy: Instruct all agencies to focus on rooting out waste, fraud, duplication, and incompetence, and report the progress quarterly. Say explicitly that you will monitor progress, report the bold, and expose the complacent.

Bringing transparency to government can do more than clean up wasted taxpayer dollars. It can rebuild trust. It can remind the American people that the government can still work for them. It will solidify your legacy as the chief executive who did not just talk about draining the swamp - you did it.

Imagine our country if we did not have to spend $1 trillion each year on interest payments. That is the future Americans deserve. Mr. President, you have been given a historic mandate. You can leave office with a legacy as the president who did not just promise to "Make America Great Again," but who actually did it. America cannot be made great again without addressing the greatest threat to our country's survival as a democratic republic - the exploding national debt.

Respectfully,
Thomas W. Smith

Thomas W. Smith is the founder of Prescott Investors, Inc.
