Meta to link world with longest subsea cable
Feb. 18 (UPI) -- Meta, the parent company of Facebook and Instagram, confirmed plans Monday to build the world's longest subsea cable, a system that will span a distance greater than the Earth's circumference and connect five continents.
Project Waterworth will cost an estimated $10 billion to produce and install. The 31,000-mile cable will support Meta's artificial intelligence projects.
"Project Waterworth will bring industry-leading connectivity to the United States, India, Brazil, South Africa and other key regions," Meta wrote in a blog post.
"Project Waterworth will be a multi-billion dollar, multi-year investment to strengthen the scale and reliability of the world's digital highways by opening three new oceanic corridors with the abundant, high-speed connectivity needed to drive AI innovation around the world."
More than 600 subsea cable systems are currently in service around the world, according to telecommunications market research firm TeleGeography. Subsea cables "account for more than 95% of intercontinental traffic across the world's oceans," Meta said.
Meta's new cable will use 24 fiber pairs for higher capacity and will be laid at depths of up to 23,000 feet. In high-risk fault areas, including shallow waters near coasts, the cable will be buried to protect it from ships and other hazards. Last month, NATO launched a mission to boost surveillance of ships in the Baltic Sea after critical undersea cables were damaged last year.
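Meta has not published a design capacity for Waterworth, but the announced fiber-pair count supports a rough estimate. The sketch below is a back-of-envelope calculation; the per-pair throughput figure is an assumption in line with recent transoceanic systems, not a Meta specification.

```python
# Back-of-envelope estimate of Project Waterworth's aggregate capacity.
# The 24-fiber-pair count and 31,000-mile length come from the announcement;
# the throughput per fiber pair is an assumption based on recent
# transoceanic cables, not a published Meta figure.

FIBER_PAIRS = 24          # stated in Meta's announcement
TBPS_PER_PAIR = 20.0      # assumed throughput per fiber pair, in Tbps
LENGTH_MILES = 31_000     # stated cable length
KM_PER_MILE = 1.609344

aggregate_tbps = FIBER_PAIRS * TBPS_PER_PAIR
length_km = LENGTH_MILES * KM_PER_MILE

print(f"Estimated aggregate capacity: {aggregate_tbps:,.0f} Tbps")
print(f"Cable length: {LENGTH_MILES:,} miles (~{length_km:,.0f} km)")
```

Under those assumptions the system would carry on the order of 480 terabits per second, and the stated length works out to roughly 50,000 kilometers, longer than the Earth's circumference of about 40,000 kilometers.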
"This project will enable greater economic cooperation, facilitate digital inclusion and open opportunities for technological development in these regions," Meta said. "These investments enable unmatched connectivity for our world's increasing digital needs."
