Latest news with #latency


Forbes
a day ago
- Business
- Forbes
Why Low Latency Is The New Competitive Business Imperative
Rafael Umann, CEO and Founder at Azion.

In the time it takes you to read this sentence—approximately seven seconds—market leaders have processed millions of transactions, responded to thousands of customer queries and made countless automated decisions. The difference between leaders and followers is no longer measured only in quarters or years. It's measured in milliseconds. In our hyper-connected global economy, the ability to respond instantly isn't just an advantage; it's a matter of survival.

The Hidden Cost Of Latency In A Hyper-Connected World

Every digital interaction creates value through a fundamental pattern: input, processing and output. Yet this seemingly simple model faces unprecedented challenges that traditional architectures cannot adequately address. Consider the physical constraint: Data traveling at the speed of light over fiber still requires 200 milliseconds to circle the globe. Combined with network processing of roughly 50 milliseconds per hop, response times in centralized architectures exceed acceptable thresholds for modern digital experiences.

The business impact is substantial and quantifiable. The "Milliseconds Make Millions" report from Deloitte (commissioned by Google), published in 2020 and still one of the most comprehensive datasets available, shows that trimming just 0.1 seconds from a crucial Core Web Vitals metric (LCP) boosts conversion rates by up to 8.4% in retail and 10.1% in travel, and raises average mobile order value by 9.2%. If one-tenth of a second can move the revenue needle this much, imagine what you could unlock by reclaiming full seconds. For enterprises processing millions of transactions daily, these delays translate directly into significant revenue loss.

The challenge manifests across critical dimensions that compound the problem:

• Physical distance creates unavoidable latency barriers.
• Traditional hub-and-spoke cloud architectures create structural bottlenecks that worsen with scale.
• Centralized decision making requires unnecessary round trips for operations that could be executed locally.

This architectural pattern resembles requiring headquarters approval for every field decision—an approach that proves both inefficient and unsustainable in markets demanding real-time responsiveness.
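To make the physical constraint above concrete, here is a back-of-the-envelope latency budget in Python. The fiber speed and per-hop processing figures are the ones quoted in this article; the route length and hop count are illustrative assumptions, not measurements.

```python
# Rough one-way latency budget for a centralized architecture.
# Values marked "assumed" are illustrative, not measured.
SPEED_IN_FIBER_KM_PER_S = 200_000   # light in fiber travels at roughly 2/3 of c
ROUTE_KM = 40_000                   # assumed: roughly once around the globe
PER_HOP_PROCESSING_MS = 50          # per-hop figure quoted above
HOPS = 4                            # assumed number of network hops

propagation_ms = ROUTE_KM / SPEED_IN_FIBER_KM_PER_S * 1000
processing_ms = HOPS * PER_HOP_PROCESSING_MS

print(f"Propagation delay:    {propagation_ms:.0f} ms")   # ~200 ms
print(f"Per-hop processing:   {processing_ms:.0f} ms")    # 200 ms over 4 hops
print(f"Total one-way budget: {propagation_ms + processing_ms:.0f} ms")
```

Even before any application work happens, distance and hops alone push the budget into hundreds of milliseconds, which is the argument the article makes for moving processing closer to the user.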
When Milliseconds Mean Millions

The business impact of ultra-low latency extends far beyond technical metrics, directly influencing competitive positioning and financial performance. In industries such as e-commerce, finance and real-time communications, even slight reductions in latency can lead to higher conversion rates, improved customer satisfaction and increased revenue.

Speed improvements create a cascading effect throughout the customer journey. The same Google/Deloitte research found that a 0.1-second improvement leads to a 3.2% increase in users progressing from product listings to product detail pages, and a 9.1% increase from product detail to add-to-basket. The impact is even greater in luxury retail, where add-to-basket progression increased by 40.1%. For major retailers, these performance gains can translate into billions of dollars in additional revenue without requiring the acquisition of new customers.

Media streaming services face perhaps the most direct latency challenges. McKinsey's 2025 "The Attention Equation" research finds that streaming platforms delivering higher-quality, lower-latency experiences see stronger user engagement and lower churn, as attention and focus are directly linked to subscriber lifetime value and retention—especially for sports and live events, where real-time delivery is critical.

Your Transformation Roadmap

Executives preparing to compete in the real-time economy need a strategic approach that combines immediate improvements with long-term architectural transformation. Based on analysis of successful implementations, companies should follow a phased methodology that delivers value while building toward comprehensive transformation.

1. Begin with a comprehensive latency audit, mapping all customer touchpoints and internal processes to identify bottlenecks and areas for improvement. Quantify the business impact of delays in terms of revenue, customer satisfaction and operational efficiency (a hypothetical worked example appears at the end of this article). This baseline assessment provides the foundation for measuring progress and return on investment (ROI).

2. Capture immediate improvements through proven optimizations. Deploy distributed web platforms to run serverless workloads, implement both static and dynamic application caching, optimize database queries and adopt modern protocols such as HTTP/3. In my experience, these foundational enhancements often achieve 30% to 50% latency reduction while building organizational momentum.

3. Design applications for distributed data processing, deploy or migrate to distributed infrastructure able to run code in strategic markets (particularly those with inherent latency challenges) and implement event-driven architectures that minimize synchronous dependencies. This phase, typically spanning six to 12 months, creates genuine competitive differentiation.

4. Establish a sustainable competitive advantage through your own real-time AI capabilities. Develop unique applications that competitors cannot easily replicate and transform business models to capitalize on real-time capabilities others lack. This ongoing phase turns latency optimization from a technical project into a strategic capability.

The Real-Time Imperative

Every millisecond saved represents an opportunity captured, a customer retained and an operation optimized for competitive advantage. The digital economy's share of global GDP is steadily increasing, with the most recent and robust forecasts from Forrester and IDCA/World Bank placing it between 15% and 17% as of 2024-2025 and expecting further growth through 2028. AI is driving this growth. Forrester also emphasizes that strategic architectural choices—balancing latency, data distribution, security and cost—are key to unlocking the full value of real-time, distributed AI.

The strategic question facing executives is not whether their business requires ultra-low latency and AI capabilities. That necessity has been established by market dynamics. The critical question is whether your organization will achieve these capabilities before competitors establish insurmountable advantages. The race to real-time has already begun. Your competitive response time starts now.
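As a worked example of the kind of quantification called for in step 1 of the roadmap above, the sketch below applies the Deloitte lift figures cited earlier to entirely hypothetical baseline traffic and order values; the point is the method, not the numbers.

```python
# Hypothetical illustration of roadmap step 1: quantifying the revenue impact
# of a 0.1-second LCP improvement. Baseline traffic, conversion rate and order
# value are made-up assumptions; the lift percentages are the Deloitte figures.
monthly_sessions = 5_000_000      # assumed retail traffic
baseline_conversion = 0.025       # assumed 2.5% conversion rate
baseline_order_value = 80.00      # assumed average order value, USD

conversion_lift = 0.084           # +8.4% retail conversions (Deloitte, 2020)
order_value_lift = 0.092          # +9.2% average mobile order value

baseline_revenue = monthly_sessions * baseline_conversion * baseline_order_value
improved_revenue = (monthly_sessions
                    * baseline_conversion * (1 + conversion_lift)
                    * baseline_order_value * (1 + order_value_lift))

print(f"Baseline monthly revenue: ${baseline_revenue:,.0f}")
print(f"After 0.1 s improvement:  ${improved_revenue:,.0f}")
print(f"Incremental revenue:      ${improved_revenue - baseline_revenue:,.0f}")
```

Even with modest assumed baselines, the conversion and order-value lifts compound into a meaningful monthly figure, which is why the audit phase starts with this arithmetic.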


CNET
4 days ago
- CNET
Ready, Set, Game: Fix Your Lag With These Tips
According to a report from Liquid Web, over 95% of the 1,000 gamers surveyed complained about experiencing latency issues while gaming, across PC, Xbox, PlayStation and Nintendo platforms. Anyone who's spent time gaming online knows the frustration -- you pull the trigger or dive for a tackle, but thanks to lag, your action lands way too late. That's high ping in action, and it's a serious problem. It doesn't just slow you down, it gives everyone else an edge while you're stuck playing catch-up. A strong, stable connection with low ping isn't a luxury for online gaming -- it's a necessity. With the tech available today, there's no reason to settle for a laggy experience. There are ways to optimize your internet connection. If you're dealing with high ping, the first move is figuring out how bad it is. Once you've measured it, you can take steps -- whether that's tweaking your setup or upgrading your equipment -- to lower your ping and keep your games fast and responsive.

First off, what is ping?

Put simply, ping is a test of latency, or how long it takes your computer or console to send data back and forth from a specific server somewhere on the internet. Think of it as a courier. Whenever you use the internet for anything, you're sending a courier out to fetch whatever data you need to stream a show, use an app or play a game online. In this sense, your internet speed represents how much data can be carried at once, typically in megabits per second, or Mbps. Meanwhile, the ping tells you how long -- the latency -- the trip takes in milliseconds.

You'll want to shoot for a ping of around 50 milliseconds or lower for online gaming. Over 50ms isn't terrible, but you definitely want to land under 100ms. After that, lag can become an issue. A good speed test will show you where your ping stands, and there are plenty of free ones that can provide a detailed assessment of your home network's performance.

Take a quick internet speed test

Better yet, do a series of them -- and at different times of day, if you can. Your goal is simply to get a baseline sense of what your average speeds and ping look like before you start making changes. Oftentimes, the right set of tests can point you in the right direction as you start trying to speed things up. For instance, running speed tests on a computer that's wired to your modem and then comparing those results with what you see when you're wired to the router can help you figure out if it might be time to get a new one.

Looking to measure the ping to a specific site or server? Windows users can open the Command Prompt and type "ping" followed by a space and then a URL or IP address.

There are lots of free speed testing tools on the web these days. The most popular is probably the Ookla Speedtest, which claims over 55 billion speed tests since 2006 -- it's fast, simple and easy to use, and I like that it gives you control over which nearby server you're using. (Disclosure: Ookla is owned by the same parent company as CNET, Ziff Davis.) Other free speed tests also include a latency measurement, and SpeedSmart's speed test lets you measure the ping to servers all over the globe. For example, I was able to measure a latency of 30ms to a server about 750 miles away from me in New York, then 290ms to a server located in Sydney, roughly 10,000 miles away. The distance makes a huge difference.
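If you'd rather script a quick check than run commands by hand, here's a minimal Python sketch that times a TCP connection handshake to estimate round-trip latency. It isn't an ICMP ping like the command above, and the hosts listed are just examples, but it gives a similar ballpark figure.

```python
# Estimate round-trip latency by timing how long a TCP handshake takes.
# This approximates ping; it is not an ICMP echo like the ping command.
import socket
import time

def tcp_latency_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # we only care how long the connection took to establish
    return (time.perf_counter() - start) * 1000

for host in ("www.google.com", "www.example.com"):  # example hosts
    print(f"{host}: {tcp_latency_ms(host):.1f} ms")
```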
Whichever speed test you're using, start fresh by rebooting all of your hardware first -- then grab an Ethernet cable and connect your laptop or console directly to your router. From there, a speed test will tell you what your ping looks like before your router starts transmitting the signal throughout your home. After that, you can unplug and do several Wi-Fi speed tests at different spots around the house to see how much the numbers change. If you switch from a wireless to a wired connection, you'll see faster speeds and reductions in ping in your speed tests; how big a difference depends on your router and the speed of your connection.

Play plugged in, if possible

In my home, the ping went up by about 20% after unplugging and doing a wireless speed test at close range. It likely would have gone up even more if I had tested the wireless connection at a greater distance. That's why most online gamers will tell you to use a wired connection whenever possible, and a good set of powerline adapters can deliver speeds and latency close to what you'd expect from a direct, wired connection with your router.

That's all well and good if your computer or gaming console is in the same room as your router -- experts recommend placing your router at a central location. However, if you're using a bedroom or a back room for gaming, then the wired approach probably sounds less feasible. One potential solution is a mesh router system. It isn't quite as good as plugging directly into the router, but a wired Ethernet connection from a mesh node to your gaming system will be better than a completely wireless connection.

Check your cables

While we're talking about the importance of a wired connection, it's worthwhile to make sure that you're using up-to-date cables that can support today's top internet speeds. Just don't expect them to do much of anything as far as ping is concerned. In fact, during a recent run of speed tests, I tested both a 300Mbps fiber connection and a 50Mbps cable connection using multiple speed-check services at different times of day. I ran each round of tests four times -- once with the laptop connected to the modem via Wi-Fi, and then once again using a wired connection to the modem with each of the three most common types of Ethernet cables: Cat 5, Cat 5e and Cat 6. On both networks, the Cat 6 cable connection returned the highest average download speeds, but the type of cable didn't have a noticeable effect on ping, with all three averaging out to within 2ms of one another. Still, Ethernet cables with the Cat 5e or Cat 6 designation are your clear best bet, as they're made to handle top speeds of up to 1,000 or even 10,000Mbps. Outdated Cat 5 cables aren't designed for speeds higher than 100Mbps, and they don't do as much to prevent interference as signals pass through the copper wiring inside. If you're using cables like that, then it's worth picking up some new ones.

Optimize your wireless setup

Proper cabling is nice, but maybe you're playing a game on your phone, tablet or another device that can't easily benefit from a wired connection. In that case, upgrading to a better router might be the right play -- but you'll want to make sure that you're getting the most out of your current setup first. To bring our metaphorical data courier back into it, stronger signal strength in your home makes the first and last leg of the journey easier and faster, which can help bring your ping down.
To do so, follow the basic best practices for optimizing your network's signal strength. Start by making sure you've got the router in a good, open spot that's free from immediate obstructions. Wi-Fi signals tend to angle downward, so the higher you can get it up off of the floor, the better. The angle of the antennas can make a difference, too. If you can, try staggering them at 45-degree intervals: one straight up, the next diagonal, the next straight back. It might take some experimentation, but you might be able to find a much steadier connection with just a few quick tweaks.

Once you've done everything you can to optimize your router's performance, you'll want to run some final speed tests to see how much of a difference your efforts made. If you aren't able to get your ping below 50ms or so for a routine speed check to a server that's within a few hundred miles of you, then it's probably time to upgrade your equipment or internet provider.

It might be time for an equipment upgrade

Upgrading to a good gaming router can definitely help. Along with fast speeds and powerful processors, most high-end options can prioritize gaming traffic above everything else to help keep your roommate's Netflix binge from slowing you down. Others promise to route your signal on the fastest possible path to whatever server your game is hosted on. That said, you'll want to be sure to understand the other factors in your home that might be affecting things before you spend hundreds on new networking hardware.

Final option: Splurge on a new router

Like I said, a fancy gaming router can help guarantee that your home's connection is optimized for gaming. If you're thinking about upgrading, start by looking for a feature called Quality of Service, which lets you tell the router to prioritize gaming traffic above everything else -- that comes in handy if you're sharing bandwidth with roommates or family members. CNET recommends the Linksys Velop Pro 6E, which performed the best in terms of overall jitter scores out of all the routers we've tested.

Still, spending hundreds on a high-end gaming router is probably overkill for most people, even if some features could be worth it. Most gaming routers are aimed at die-hard gamers willing to spend big on their setups, so they aren't casual investments. Prices for current-gen models typically range from around $150 to as high as $500 or even more. That's a lot to pay for a bit less ping -- which is why I'd rather try to improve my home's network conditions with a powerline adapter or, if my current router is at least five years old, with a more modest router upgrade.


Android Authority
22-07-2025
- Android Authority
T-Mobile's latest 5G network upgrades mean smoother video calls and online games
TL;DR

• T-Mobile is the first US-based carrier to upgrade to a new 'L4S' network technology, which reduces latency in its 5G networks.
• L4S improves upon existing technologies by decongesting networks in real time, prioritizing applications that require faster internet.
• You don't need to upgrade to newer hardware to use T-Mobile's new low-latency tech.

Choppy video calls and laggy online games are among the quickest ways to ruin someone's mood. Every carrier wants to avoid doing that, which results in the never-ending race to test and then implement newer network technologies at scale. In line with these efforts, T-Mobile is upgrading its 5G networks with a new technology that will make your video calls, VR experiences and online gaming smoother and almost jitter-free.

T-Mobile says it is the first carrier in the US to implement the L4S standard, which is designed to decrease unwanted delays in transmission. L4S stands for 'Low Latency, Low Loss, Scalable throughput,' and it decongests networks by allowing faster movement of data packets for specific applications in real time. This is an improvement over existing congestion-control protocols, which first analyze packets of data and then prioritize certain apps, resulting in congestion at the receiver's end. Notably, L4S may not necessarily boost download or upload speeds on a 5G network. L4S is designed to make quicker decisions so there's no — or at least minimal — congestion and, therefore, faster transmission. It has been available for wired networks for a while and also works with Wi-Fi networks, provided your device and Wi-Fi router support it.

The upgrade occurs on T-Mobile's end, which means users will be able to benefit from improved 5G connectivity without necessarily upgrading to new devices or buying more expensive data plans. These advancements are part of T-Mobile's preparations for 5G-Advanced, also known as 5.5G, the upcoming generation of mobile networks.

While smoother video calling is one of the most significant applications of L4S, T-Mobile emphasizes its importance in extended reality (XR). Stable frame rates, made possible by the technology, reduce the motion sickness or dizziness associated with virtual reality (VR) and XR headsets, and ensure higher-quality visuals in augmented reality (AR) glasses. Meanwhile, NVIDIA has already enabled L4S for its GeForce Now cloud gaming platform, and T-Mobile users should be able to enjoy more polished visuals and smoother rendering that simulates physical gaming hardware. T-Mobile adds that L4S will enable other technologies where high-speed remote access is key. One of them is remote driving: T-Mobile shares the example of a ride-hailing company called Vay, which employs virtual operators who deliver rental vehicles to your doorstep in Berlin without physically driving the car themselves. The carrier also expects to expand the technology to enterprise applications where 'ultra-responsive experiences' are crucial.
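For the technically curious, L4S-capable senders identify their traffic with the ECT(1) codepoint in the IP header's ECN field (RFC 9331). The sketch below is a conceptual illustration only, assuming a Linux host and using a documentation-range placeholder address; real L4S gains also require a scalable congestion controller and support in the network path, so this is not a working L4S stack.

```python
# Conceptual illustration: mark outgoing UDP packets with the ECT(1)
# codepoint that L4S-capable senders use (RFC 9331). Assumes a Linux host;
# the destination address is a non-routable documentation placeholder.
import socket

ECT_1 = 0b01  # ECN codepoint carried in the low two bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT_1)
sock.sendto(b"latency-sensitive payload", ("198.51.100.10", 5005))
sock.close()
```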

Associated Press
21-07-2025
- Business
- Associated Press
AlgoFusion 5.0: Inside AlgoFusion 5.0's Latency Engine for Execution Precision
NEW YORK, July 21, 2025 /PRNewswire/ -- AlgoFusion 5.0 has launched a specialized update focused on execution timing analytics, addressing one of the most critical yet under-measured dimensions of algorithmic trading: latency. The platform now enables users to monitor, visualize, and optimize the time elapsed between signal generation and trade execution, offering a granular view of strategy responsiveness across asset classes and timeframes.

At the heart of this release is the new Execution Timing Suite, which introduces live dashboards, timestamped logic chains, and latency heatmaps, giving users a detailed perspective on how their strategies perform under real-market conditions. Unlike conventional performance metrics that focus on outcomes, this suite measures behavioral speed, exposing bottlenecks that can compromise timing-sensitive strategies.

Core features in this release include:

These new capabilities address the needs of multiple user profiles. For systematic traders, the suite allows for the refinement of high-frequency strategies. Discretionary managers gain insights into the factors contributing to suboptimal trade timing. Infrastructure teams benefit from increased visibility into how platform conditions influence operational performance.

The execution timing tools are fully integrated into AlgoFusion's existing visual strategy builder, allowing users to view timing performance alongside logic flow, risk parameters, and outcome metrics. This makes it possible to optimize strategies holistically—balancing speed, structure, and statistical effectiveness.

'Latency is not just a technical detail—it's a competitive variable,' said Marcus Leighton, Head of Product Strategy at AlgoFusion. 'With this release, we're helping users understand how their systems behave in motion, not just in logic.'

In addition to live trading environments, the Execution Timing Suite is available in simulation and backtest modes, making it a valuable resource for education, prototyping, and infrastructure benchmarking. Users can export time series data, generate reports for compliance or governance review, and benchmark multiple strategies against timing stability scores.

This release aligns with AlgoFusion's broader objective to enhance the measurability, transparency, and execution-awareness of automated trading, supporting users in the development and deployment of data-driven strategies.

About AlgoFusion 5.0

AlgoFusion 5.0 is a modular, multi-asset strategy platform designed to empower traders, analysts, and institutions with transparent automation tools. The system features visual logic construction, real-time performance tracking, explainability frameworks, and integrated diagnostics. Whether for live execution, simulation, or collaborative development, AlgoFusion 5.0 provides a high-resolution view into how strategies behave across conditions, timeframes, and infrastructures.

Disclaimer: The information provided in this press release is not a solicitation for investment, nor is it intended as investment advice, financial advice, or trading advice. It is strongly recommended that users practice due diligence, including consultation with a professional financial advisor, before investing in or trading cryptocurrency and securities.

Contact
Travis Morgan
AlgoFusion
[email protected]

SOURCE AlgoFusion 5.0
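For readers less familiar with the metric at the center of this release, here is a minimal, entirely hypothetical Python sketch of measuring signal-to-execution latency and summarizing it with percentiles. None of the names below come from AlgoFusion or its API; it simply illustrates the measurement the press release describes.

```python
# Hypothetical sketch of signal-to-execution latency measurement.
# All names and values are invented for illustration; not AlgoFusion code.
import statistics
import time

latencies_ms: list[float] = []

def on_signal_generated() -> float:
    """Record the moment a strategy emits a trading signal."""
    return time.perf_counter()

def on_execution_confirmed(signal_ts: float) -> None:
    """Record the signal-to-execution delay once the fill is confirmed."""
    latencies_ms.append((time.perf_counter() - signal_ts) * 1000)

# Simulated loop standing in for a real signal/order-routing pipeline.
for _ in range(100):
    ts = on_signal_generated()
    time.sleep(0.002)            # pretend order routing takes about 2 ms
    on_execution_confirmed(ts)

print(f"median latency: {statistics.median(latencies_ms):.2f} ms")
print(f"p95 latency:    {statistics.quantiles(latencies_ms, n=20)[-1]:.2f} ms")
```

A real implementation would take its timestamps from exchange acknowledgments rather than a simulated sleep, but the bookkeeping is the same.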


Forbes
08-07-2025
- Business
- Forbes
What Three Of The Biggest 2025 Tech Events Revealed About Connectivity
Ivo Ivanov is the CEO of DE-CIX.

If 2024 explored the 'what' of generative AI, 2025 is exploring the 'how' of it. At the world's biggest tech gatherings—CES, MWC and Hannover Messe—the story wasn't just about new devices, dazzling demos or smarter machines. It was about what those innovations quietly demanded behind the scenes: a level of connectivity infrastructure that many businesses, governments and even industries aren't yet prepared to deliver. As the industrial-grade AI we're working toward draws closer, the need for near-instantaneous data transfer across geographies is also becoming more acute. Life at the network edge, where these applications need to be, is making one thing crystal clear: Connectivity is no longer just an accessory to innovation. It's either the bottleneck or the breakthrough.

Among the many aspects of connectivity, latency has emerged as the most visible constraint. It's also the least tolerable. Whether it's an agentic AI system designed to 'think' instantly about a customer query, a robot that relies on millisecond coordination across a production line, or a self-driving vehicle whose sensors are rendered useless without the network feedback to act on their output, a lack of 'responsiveness' is now a deal-breaker. Low latency is no longer a luxury for high-performance applications—it's the new baseline for the plethora of AI applications coming our way. This has been the breakout theme of some of the biggest tech events on the calendar in 2025: as we build smarter systems, we also need to reimagine the architecture that connects them, not as a patchwork of nodes and endpoints, but as a living, scalable ecosystem designed for intelligent workloads and data transfer. What follows are three snapshots from CES, MWC and Hannover Messe that illustrate this turning point.

Consumer Electronics Show (CES) 2025: Visible Expectations, Invisible Strain

The year kicked off in Las Vegas, where CES 2025 went all out to demonstrate the remarkable impact AI is about to have on everyday life—from smart homes and cars to wearables and entertainment systems. Voice assistants now hold contextual conversations, while extended reality (XR) platforms deliver adaptive, AI-driven gaming experiences that blur the line between the real world and the virtual world. These applications rely on more than raw compute power—they demand fast, stable, low-latency connectivity between users, edge infrastructure and AI models running in far-off data centers, where even slight delays can dispel immersion or break the carefully modeled illusion of intelligence. The event also featured AI-enabled healthcare wearables, hearing aids and autonomous mobility solutions, all of which depend on dynamic, location-aware data. Yet most residential and public networks weren't designed for this level of complexity. The smarter the service, the more sensitive it becomes to latency and jitter. CES made one thing clear: The infrastructure behind our digital lives needs to evolve fast, or risk holding back the very innovations it's meant to support.

Mobile World Congress (MWC) 2025: The Cloud Gets Crowded

In February, Barcelona set the stage for another reality check—this time about where AI actually runs. At MWC, booths weren't just showing off new handsets or 5G upgrades; they were unveiling LLM-powered humanoid robots, 'sight-beyond-sight' vehicle-to-cloud software and the world's first official AI smartphone.
But more than that, industry experts were talking about the ecosystem behind the tech: telcos, hyperscalers and AI developers jostling to define the infrastructure that will power AI at scale. What emerged was a clear trend toward decentralization. AI is no longer confined to hyperscale data centers. It's moving outward—to regional hubs, colocation sites and edge data centers and, to a certain extent, into the devices themselves. The shift of focus from centralized training to distributed inference is now fully underway, and it's putting immense pressure on existing cloud and network architectures. As these ecosystems converge, so too do their limitations. Proprietary cloud environments and vendor-specific platforms were built for scale, not necessarily for interoperability. But real-time AI—whether for autonomous vehicles, smart factories or agentic assistants—requires seamless connectivity across providers, clouds and geographies. Without common standards and robust interconnection, distributed AI workloads will hit friction fast.

Hannover Messe 2025: Industrial AI Has A Logistics Problem

By April, the conversation had shifted from homes and handsets to factories and supply chains. At Hannover Messe, one of the world's largest industrial trade fairs, AI's potential was on full display—cognitive robotics, digital twins, autonomous systems and intelligent collaboration across factory floors and logistics hubs. But with every new demonstration came a familiar question: How fast can systems talk to one another? The most revealing takeaway from Hannover wasn't necessarily the sophistication of the AI models, but the infrastructural fragility beneath them. Industrial sites often span regions with wildly different levels of network maturity. Machines outfitted with advanced inference capabilities can only operate effectively if the data they depend on—sometimes hundreds of miles away—is delivered almost instantaneously. Add to this the rise of collaborative robotics and AI-managed energy systems, and the need for deterministic, ultra-reliable connectivity becomes mission-critical. Hannover Messe made the case that the intelligence of industry isn't limited by innovation; it's limited by distance, bandwidth and latency.

Connecting The Dots

Across three of the most influential tech events of the year, the message was consistent: the ability to move data quickly, securely and intelligently across the network of networks will determine which ideas scale and which ideas stall. CES showed us a near-future where consumer experiences become dependent on real-time, AI-powered interactions. MWC revealed how the infrastructure behind those experiences must interoperate in order to reach our AI goals. And Hannover Messe reminded us that when it comes to industrial automation, the stakes are higher, the environments are harsher and the tolerance for latency is virtually nonexistent.