Hard wired
The first transatlantic telegram, sent from Queen Victoria to President James Buchanan in 1858, slashed the wait by 95 percent compared to a letter sent by steamship. That first transoceanic cable — 1,500 tons of copper wrapped in gutta-percha, a natural latex — stretched 1,910 miles along the seabed from Newfoundland to Ireland, vastly outdoing its humble predecessors across New York Harbor and the English Channel. Some called it 'the eighth wonder of the world.'
Today, data travels at roughly two-thirds the speed of light along the fiber optics that form the core of our modern cables, wrapped in layers of copper, woven steel and polyethylene. Each cable is only about as thick as a garden hose. These 'information superhighways' are buried at the shoreline for security, but lie freely on the ocean floor for most of their reach, sufficiently armored against nature's worst. Most of the time!
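At two-thirds the speed of light, physics alone sets a floor on how fast a message can cross an ocean. A minimal back-of-envelope sketch, assuming an illustrative 6,000-kilometer transatlantic route (actual cable lengths vary):

```python
# Propagation delay along a fiber-optic cable, assuming signals travel
# at roughly two-thirds the speed of light inside the glass core.
C_VACUUM_KM_S = 299_792                      # speed of light in vacuum, km/s
FIBER_SPEED_KM_S = C_VACUUM_KM_S * 2 / 3     # ~200,000 km/s in fiber

def one_way_latency_ms(route_km: float) -> float:
    """Propagation delay in milliseconds for a cable of the given length."""
    return route_km / FIBER_SPEED_KM_S * 1000

# Hypothetical 6,000 km transatlantic route (illustrative figure only).
print(round(one_way_latency_ms(6_000), 1))   # ≈ 30 ms one way
```

That's roughly 30 milliseconds one way, or about 60 milliseconds round trip, before any routing or processing overhead is added.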
Russia and China are prime suspects in recent sabotage attacks on cables in the Baltic Sea and Taiwanese waters, but accidents and natural causes break two to four cables each week. Data is quickly rerouted in most cases, causing little more than a simple refresh for end users. That was not the case off the coast of Africa in 2022, when cable sabotage near Egypt cut off connectivity for 90 percent of Ethiopia.
'If the world's undersea cable systems were to suddenly disappear, only a minuscule amount of this traffic would be backed up by satellite, and the internet would effectively be split between continents.' — Nicole Starosielski, UC Berkeley professor and author of 'The Undersea Network'
That's how far the world's 565 existing submarine cables would reach, stretched end to end: 919,629 miles. Almost four trips to the moon. Eighty-three more cable projects are planned or under construction, per TeleGeography, including Meta's own $10 billion project, nicknamed the 'W' for the pattern it will form to link continents along its 31,068-mile journey. Nearly all are privately owned by consortiums of telecom companies, but Microsoft, Alphabet and Amazon are also getting in the game.
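The lunar comparison checks out, using the commonly cited average Earth-moon distance of about 238,855 miles:

```python
# Sanity check: total submarine cable mileage vs. the average
# Earth-moon distance (~238,855 miles, a commonly cited figure).
TOTAL_CABLE_MILES = 919_629
EARTH_MOON_MILES = 238_855

trips = TOTAL_CABLE_MILES / EARTH_MOON_MILES
print(round(trips, 2))   # ≈ 3.85 trips to the moon
```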
Tonga, the Polynesian nation, was digitally isolated for more than a month after the Hunga Tonga-Hunga Ha'apai volcano erupted under the Pacific Ocean three years ago. A team of 57 people had to install over 56 miles of new undersea cable to get the country's internet back online. It's one of 25 islands and countries that still depend on a solitary undersea connection, along with the Marshall Islands, Cook Islands, and Saint-Pierre and Miquelon.
Many regions still lack any mainline cable connections at all: about 40 percent of the global population has no internet access, according to research by Edward John Oughton, a geography professor at George Mason University. It would cost nearly half a trillion dollars to connect parts of China, India, Indonesia, Latin America, sub-Saharan Africa, Pakistan and Afghanistan, with $133 billion for cables and towers alone.
Roughly 63 percent of the 11,180 active satellites in orbit belong to Starlink, the global internet provider owned by SpaceX and Elon Musk. The company's high profile in geopolitics has reshaped how people picture internet infrastructure. But its satellites are more like impermanent access points for the transoceanic information highway — 120 fell out of the sky this January, and it wasn't the first time.
This story appears in the April 2025 issue of Deseret Magazine. Learn more about how to subscribe.