
Unlocking Grid Capacity: Advancing Net Zero, Deferring $2 Million/Mile
Worker holding a Heimdall Power 'Neuron', the smart sensor supporting dynamic line rating for high-voltage transmission lines.
The energy transition is accelerating—but the infrastructure that delivers electricity isn't keeping pace. As electric vehicles, heat pumps, and AI-driven data centers drive demand to record highs, the grid is being pushed to its limits. Transmission lines are congested, interconnection queues are growing, and new infrastructure isn't arriving fast enough.
While policymakers focus on long-term expansion, emerging technologies offer faster, more flexible ways to ease pressure. Advanced conductors, grid-enhancing technologies (GETs), and intelligent control systems are helping utilities increase capacity, reduce congestion, and improve reliability—often without laying new wires.
These innovations won't replace the need for long-term investment, but they can buy time, reduce costs, and accelerate the energy transition by making better use of the infrastructure we already have.
The grid we rely on was built for a different era—centralized, fossil-fueled, and one-directional. Today's energy system is decentralized and dynamic. Rooftop solar, wind farms, EVs, and digital infrastructure are transforming how—and where—power flows. This shift to two-way, variable energy movement demands a grid that's smarter, more flexible, and more resilient. Meeting this challenge requires more than new generation—it demands significant upgrades to the grid itself.
According to the International Energy Agency (IEA), achieving climate and energy goals will require adding or refurbishing over 80 million kilometers of grid infrastructure by 2040—effectively doubling the global grid.
A 2023 analysis from the Energy Transitions Commission estimates the world will need to invest $1.3 trillion annually in zero-carbon power and another $0.9 trillion in transmission and distribution to reach net zero. Put simply: for every $100 spent on renewables, about $70 needs to be invested in the grid to ensure that clean energy actually reaches end users.
While this buildout is essential, smarter grid management can relieve pressure in the near term. Unlocking hidden capacity in the current system offers a faster, more efficient way to keep the energy transition moving.
Despite serving as critical infrastructure, most high-voltage lines still lack the sensors needed to understand real-time operating conditions, leaving utilities reliant on outdated static line ratings. Instead of adapting to real-world conditions, operators must assume the worst case. On a cool, breezy day, a line may be capable of safely and reliably carrying 30–40% more power than its static rating assumes. The result is artificially constrained capacity, renewable energy curtailment, and mounting congestion.
Without accurate, real-time data, grid operators are left estimating rather than optimizing, and the financial impact is staggering. U.S. consumers are paying $20 billion in congestion costs, while in Europe that figure is over €4 billion. Meanwhile, more than 80% of renewable energy projects sit idle in interconnection queues—waiting not for permits or panels, but because the grid can't accommodate the power they would produce. This isn't a hardware problem—it's a data problem, and solving it starts with seeing what's already there.
Installing sensors, though, has historically been slow, costly, and disruptive—often requiring lines to be taken offline. Now, new sensor technologies are offering a breakthrough—giving utilities real-time insight into grid performance and revealing hidden capacity already built into the system. In a world running short on time, capital, and carbon budget, this could be a game-changer.
Norwegian scaleup Heimdall Power is pioneering a smarter, faster approach to grid intelligence, using technology that enables safe, cost-effective deployment of sensors on live transmission lines. This opens the door to Dynamic Line Rating (DLR), a powerful way to unlock hidden capacity in the grid.
Heimdall's solution? Compact sensors called 'Neurons', which can be rapidly deployed onto high-voltage lines using autonomous drones, enabling installation in under 60 seconds without line shutdowns. These smart devices measure real-time environmental and line conditions, like wind speed, temperature, and line sag—the invisible factors that determine how much electricity a line can actually carry at any given moment. By continuously analyzing these inputs, DLR calculates a line's real-time capacity—often allowing it to safely carry more power than outdated, worst-case static ratings would suggest.
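To make the idea concrete, here is a minimal sketch of how a dynamic rating can be derived from the kind of measurements such sensors collect. It uses a deliberately simplified steady-state heat balance; the conductor parameters, weather values, and the crude convection coefficient are all illustrative assumptions, not Heimdall's actual model (production systems use the full IEEE 738 or CIGRE methods).

```python
import math

# Illustrative conductor parameters (roughly ACSR "Drake"-class); real
# deployments pull these from the conductor datasheet -- assumptions here.
R_AC = 7.3e-5        # AC resistance at max operating temperature, ohm/m
DIAMETER = 0.0281    # conductor diameter, m
EMISSIVITY = 0.8     # weathered-conductor emissivity
T_MAX = 100.0        # maximum allowed conductor temperature, deg C

def radiative_loss(t_cond, t_amb):
    """Heat radiated per meter of conductor (W/m), via Stefan-Boltzmann."""
    sigma = 5.67e-8                 # W / (m^2 K^4)
    area = math.pi * DIAMETER       # radiating surface per meter, m^2
    return EMISSIVITY * sigma * area * (
        (t_cond + 273.15) ** 4 - (t_amb + 273.15) ** 4)

def convective_loss(t_cond, t_amb, wind_ms):
    """Wind cooling per meter (W/m). The coefficient below is a crude
    stand-in for the IEEE 738 correlations, which also account for wind
    angle, air density, and elevation."""
    h = 8.0 + 3.0 * math.sqrt(wind_ms)          # assumed W/(m^2 K)
    return h * math.pi * DIAMETER * (t_cond - t_amb)

def ampacity(t_amb, wind_ms, solar_wm2):
    """Largest current (A) that keeps the conductor at T_MAX in steady
    state: I^2 * R + solar gain = convective + radiative losses."""
    q_out = (convective_loss(T_MAX, t_amb, wind_ms)
             + radiative_loss(T_MAX, t_amb))
    q_sun = solar_wm2 * DIAMETER                # absorbed solar power, W/m
    return math.sqrt(max(q_out - q_sun, 0.0) / R_AC)

# Static ratings assume worst-case weather; DLR uses what the sensors see.
static = ampacity(t_amb=40.0, wind_ms=0.6, solar_wm2=1000.0)  # hot, still, sunny
dynamic = ampacity(t_amb=20.0, wind_ms=2.0, solar_wm2=400.0)  # mild and breezy
print(f"static rating : {static:6.0f} A")
print(f"dynamic rating: {dynamic:6.0f} A (+{100 * (dynamic / static - 1):.0f}%)")
```

With these particular (assumed) inputs, the sketch prints an uplift of roughly 35–40% on the mild, breezy day, the same order as the headroom described above. The point is the mechanism, not the exact numbers: cooler air and stronger wind carry heat away faster, so the same conductor can safely carry more current.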
'Think of our sensors as speedometers for the grid,' CEO Jørgen Festervoll said in an interview. 'Right now, most utilities are driving blind—and slow—just to be safe. We give them real-time visibility.'
The pitch is bold and simple: with smarter technology, we can safely increase transmission line capacity by up to 40% and accelerate the clean energy transition, all without laying a single new wire. 'We're the Apple Watch of the power grid,' Festervoll said.
Heimdall's edge is not just its sensors, but how fast they can be deployed. According to Festervoll, there is no need for shutdowns or heavy equipment: entire transmission corridors can be digitized in days, not years—a seismic shift from the traditional infrastructure timeline. This means, says Festervoll, that utilities can defer up to $2 million per mile in new line construction, boost dispatch efficiency by 20%, and accelerate renewable integration by 40%. This is grid modernization that's fast, flexible, and economically compelling.
'We're solving visibility in the most under-instrumented trillion-dollar asset class on the planet,' Festervoll says.
Grids are the backbone of modern life—powering everything from industry and infrastructure to homes and hospitals. Yet for systems so complex, the real surprise isn't that they sometimes fail—it's that they don't fail more often. Trying to manage a modern grid without real-time visibility is like trying to direct air traffic without radar: it works—until it doesn't.
And when it doesn't, the consequences ripple far beyond the power sector. Grid failures can disrupt supply chains, halt public services, and cost economies billions. The recent blackout in Spain and Portugal, triggered by what appears to be a single line failure, revealed just how vulnerable the system can be.
That event highlighted a growing challenge: as power grids become more complex, our ability to manage them with outdated tools is rapidly eroding. Early reports point to failures in voltage control and chronic underinvestment, and the final analysis is expected to include a familiar set of technical recommendations—greater inertia to support renewables, improved trip settings, faster black-start capabilities, stronger cross-border coordination, and sharper attention to cyber risks and early warning signals.
Rather than simply revisiting familiar failures, this moment underscores the need to look forward—and adopt smarter tools. One utility, equipped with Heimdall's real-time data, avoided a similar outcome entirely. When a storm brought down a transmission line, the system showed that cooler temperatures had boosted capacity on a nearby line. Operators rerouted power in real time—and kept the lights on.
That's not just smart tech—that's economic resilience.
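Here is a hypothetical sketch of the operators' decision logic in that storm scenario. Every line name, flow, and rating below is invented for illustration, and real contingency analysis involves full power-flow studies rather than a single comparison—but it shows why a real-time rating can turn an "unsafe" reroute into a safe one.

```python
# Invented numbers for illustration -- not data from the actual event.
lines = {
    "storm_line":    {"flow_mw": 480.0, "static_rating_mw": 600.0},  # tripped
    "neighbor_line": {"flow_mw": 350.0, "static_rating_mw": 600.0},  # alternative
}

def can_absorb(line, extra_mw, rating_mw):
    """True if the line can take the displaced flow within the given rating."""
    return line["flow_mw"] + extra_mw <= rating_mw

displaced = lines["storm_line"]["flow_mw"]   # 480 MW must be rerouted
neighbor = lines["neighbor_line"]

# Against the static worst-case rating, the reroute looks infeasible...
print(can_absorb(neighbor, displaced, neighbor["static_rating_mw"]))  # False (830 > 600)

# ...but cool storm air raises the line's real-time capacity. Assume the
# sensors report a 40% uplift, in line with the headroom DLR can reveal.
dynamic_rating_mw = neighbor["static_rating_mw"] * 1.40               # 840 MW
print(can_absorb(neighbor, displaced, dynamic_rating_mw))             # True (830 <= 840)
```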
Heimdall's technology is part of a much larger opportunity to transform how we manage and optimize the grid. The shift toward smarter, more responsive infrastructure is gaining momentum—and it's not just a niche trend. The smart grid market is expected to deliver $290 billion in global energy savings by 2029, according to Juniper Research. The IEA echoes this, identifying real-time grid intelligence as essential for decarbonization, electrification, and the coming surge of EVs, heat pumps, and data centers.
But unlocking this potential requires more than just better technology—it demands smarter regulation. Today, most utilities are financially incentivized to build more physical infrastructure, not to use existing infrastructure better. Festervoll explains it bluntly: 'You get paid to pour concrete, not optimize electrons.'
To truly modernize the grid, the IEA is calling for a shift in how utilities are rewarded—focusing on outcomes like visibility, flexibility, and efficiency rather than traditional capital investment. The smart grid revolution isn't just about deploying sensors and software. It's about aligning technology and policy to build a system that's not only bigger, but fundamentally smarter.
In a world where electrification is accelerating, capital is constrained, and climate stakes are rising, knowing what you already have—and using it smarter—may be the most valuable energy asset of all.
As Festervoll says, 'If you don't know what you have, you'll overbuild.' And in today's economy, overbuilding isn't just inefficient—it's unaffordable.