
Telecom Leaders Discuss AI-Driven B2B Shift at MWC 2025
News Desk
The AI Beyond Boundaries Summit at Mobile World Congress (MWC) 2025 brought together global telecom leaders to explore new B2B opportunities in the AI era. The event highlighted the transformation of telecom companies (Telcos) into technology-driven enterprises (Techcos), emphasizing the role of AI, 5G, big data, cloud computing, and advanced computing in driving efficiency and innovation across industries.
A major milestone of the summit was the release of the Carrier Techco Transformation White Paper by GSMA Intelligence and industry leaders. Titled 'Taking the Plunge: Moving to Techco to Win with AI in B2B,' the white paper was presented by Allen Tang, President of ICT Marketing & Solution Sales at Huawei Middle East and Central Asia, and Wang Yongde, General Manager of Huawei Carrier XtoB Solutions Development. The report outlines a 'triple transition' for telecom operators to remain competitive: upgrading capabilities with AI factories and GPU-as-a-Service (GPUaaS), innovating services through 5G-powered applications in gaming, streaming, and fintech, and reconstructing ecosystems by integrating digital platforms and enterprise solutions. The Huawei 'Techco 1.0' model serves as a benchmark for transformation, focusing on business servitization, service platformization, and platform intelligentization, helping telecom operators unlock new revenue streams and market opportunities.
The white paper also highlights AI-driven success stories in the telecom sector. AI-powered Intelligent O&M Centers have improved network fault prediction accuracy to over 98%, while digital twin solutions in manufacturing have boosted enterprise operational efficiency by 40%. In financial markets, AI-based GPU resource scheduling has cut high-frequency transaction latency to the millisecond level.
Peter Jarich, Head of GSMA Intelligence, emphasized that Techco transformation is not just about upgrading technology—it's a fundamental business shift. To capitalize on the trillion-dollar enterprise AI market, carriers must adopt a platform-driven strategy, optimizing the entire value chain from demand insight to service monetization. The Carrier Techco Transformation White Paper is now available on the GSMA official website, offering insights, frameworks, and strategies for telecom operators navigating the AI-driven future.

Related Articles


Khaleej Times
UAE: Samsung unveils AI Home experience with next-gen TVs, appliances and gaming tech
Samsung Gulf Electronics has launched the AI Home, a cutting-edge smart home experience in a villa-style setting, showcasing the brand's latest innovations in entertainment, appliances, and connected living — all powered by SmartThings, Samsung's IoT platform. The immersive showcase featured eight lifestyle zones, presenting real-life scenarios from cinema-style TV viewing to connected kitchens and productivity spaces.

Key highlights included the 2025 TV lineup with expanded models across Neo QLED, OLED, QLED, and The Frame Pro. Samsung's new Vision AI engine enhances content and personalisation, while features like 8K/4K AI Upscaling Pro, Auto HDR Remastering Pro, and Color Booster Pro elevate audio-visual quality. The Neo QLED TVs, powered by the NQ8 AI Gen3 processor, now come in sizes up to 115 inches and include Glare Free tech and Quantum Matrix Technology Pro. Samsung also introduced the Wireless One Connect Box, simplifying setup with wireless 8K transmission.

Its OLED range, led by the S95F and affordable S85 models, caters to gamers with 165Hz refresh rates, AI-enhanced visuals, and HDR gaming support. The Frame Pro blends art with technology, offering 4K wireless transmission and Pantone-validated visuals, while the Q Series Soundbars provide immersive audio with Dolby Atmos and Q Symphony.

In smart appliances, Samsung revealed AI-powered Bespoke AI Refrigerators, Washers, Dryers, and Jet Ultra vacuum cleaners. These appliances feature AI Home displays for centralised control and use deep learning to optimise tasks like cleaning, laundry, and food management — all within the SmartThings ecosystem.

Gaming saw a leap forward with the Odyssey 3D monitor, offering glasses-free 3D gaming via eye-tracking and real-time 2D-to-3D conversion, and the Smart M9 OLED, Samsung's first OLED Smart Monitor, designed for entertainment and productivity.

With AI Home, Samsung is reimagining everyday living by merging intelligence, design, and seamless connectivity to create a future-ready, personalised lifestyle.


TECHx
Can Ethical AI Be More Than a Talking Point?
Ethical AI is moving from talk to action as global laws, pledges, and accountability measures reshape how technology is built and deployed.

AI is everywhere in 2025. It writes, designs, predicts, diagnoses, recommends, and increasingly, governs. From smart cities to courtrooms, its decisions are shaping our lives. But as AI grows more powerful, one question gets louder: Are we building it responsibly? Or are we just saying the right things?

This month, the European Union made headlines with the passage of the AI Act, the first major attempt to regulate AI at scale. This sweeping law bans certain uses of AI, such as real-time facial recognition in public spaces and social scoring systems. It also imposes strict rules on high-risk applications like biometric surveillance, recruitment tools, and credit scoring.

Why does this matter? Because it signals that AI governance is moving from voluntary ethics to enforceable law. The EU has set a precedent others may follow, much like it did with GDPR for data privacy. But here's the catch: regulation is only as effective as its enforcement. Without clear oversight and penalties, even the best laws can fall short. Europe's AI Act is a strong start, but the world is watching how it will be applied.

Across the Atlantic, the United States is facing growing pressure to catch up. In May 2025, Congress held a new round of hearings with major AI players like OpenAI, Meta, Google DeepMind, and Anthropic. Lawmakers are calling for clear standards and transparency. Several of these companies have signed voluntary AI safety pledges, promising to develop systems responsibly.

Meanwhile, South Korea is exploring a different path. Officials are developing an AI Ethics Certification, a system that would allow companies to prove that their models are fair, transparent, and safe. This is a smart move. Turning ethics into something measurable and certifiable could help bridge the gap between values and verification. However, the success of this initiative depends on how independent, transparent, and rigorous the certification process is.

Principles Are Easy. Proof Is Hard.

It's worth noting that almost every major AI company today has published a set of ethical principles. Words like trust, safety, accountability, and fairness appear prominently in blog posts and mission statements. But dig deeper and you'll find the real challenge: How are these principles enforced internally? Are external audits allowed? Are impact assessments made public? Is there a clear process to test and mitigate bias?

When AI Ethics Fails

We've already seen what happens when AI is built without enough attention to fairness or inclusivity. In 2023, a widely used hospital AI system in the U.S. was found to recommend fewer treatment options to Black patients. The cause? Biased training data that didn't account for structural inequalities in healthcare. In 2024, generative AI tools sparked criticism for gender and racial bias. When users searched for terms like 'CEO' or 'doctor,' the images generated were overwhelmingly of white men, despite the global diversity of those professions.

These are not one-off glitches. They are symptoms of a deeper issue: AI systems trained on biased data will replicate, and even amplify, that bias at scale. That's why ethics can't be a box to check after a product launches. It must be embedded from the start.

A New Ethical Frontier: The UAE Leads in the Middle East

Encouragingly, ethical AI leadership is emerging from regions not traditionally known for tech regulation. The United Arab Emirates is one of them. The UAE's National AI Strategy 2031 places a strong emphasis on fairness, transparency, and inclusivity. This isn't just talk. Institutions like the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) are actively training a new generation of AI researchers with governance and ethics embedded in their education. This is a critical development. It shows that countries outside the usual power centers, like the U.S. and EU, can shape global norms. The UAE isn't just importing AI innovation; it's helping design how AI should be governed.

Platforms for Global Dialogue

Major events like AI Everything and GITEX GLOBAL, hosted in Dubai, are also evolving. They're no longer just product showcases. They now bring together global experts, policymakers, and ethicists to discuss responsible AI practices, risks, and solutions. These events are important, not only because they give emerging markets a voice in the AI ethics debate, but because they encourage cross-border collaboration. And that's exactly what AI governance needs. Why? Because AI systems don't stop at national borders. Facial recognition, large language models, predictive analytics, they all operate across regions. If we don't align on ethics globally, we risk creating fragmented systems with uneven protections.

What Needs to Happen Now

It's clear that we're moving in the right direction, but not fast enough. What's missing is the bridge between principles and practice. We need: Not just values, but verification. Not just pledges, but clear policies. Not just intentions, but independent audits. Ethics should be baked into the AI lifecycle, from design to deployment. That means testing for bias before the model goes live, ensuring transparency in how decisions are made, and creating clear channels for redress when systems fail. AI governance shouldn't slow innovation. It should guide it.

The pace of AI innovation is staggering. Every week brings new tools, new capabilities, and new risks. But alongside that speed is an opportunity: to define the kind of AI future we want. In 2025, ethical AI should not be a trending topic or a marketing slogan. It must be the foundation, the baseline. Because when technology makes decisions about people, those decisions must reflect human values, not just machine logic.

By Rabab Zehra, Executive Editor at TECHx.


The National
Google antitrust case: AI takes centre stage in closing arguments
During closing arguments in the penalty portion of the Google antitrust trial on Friday, the judge asked a question about the fast-moving tech world that will likely give pause to legal and business experts.

Federal Judge Amit Mehta, who last year found Alphabet-owned Google liable for maintaining a monopoly and exploiting its search sector dominance, wanted to know how search will evolve as he decides the "remedies", or punishment, for Google.

'Does the government believe there's a market for a new search engine to emerge as we think of it today, even with the remedies in place?' he said, interrupting Department of Justice (DOJ) lawyer David Dahlquist's closing arguments.

Mr Mehta was referring to artificial intelligence. He implied that as he decides how to correct Google's monopoly, he needs to consider how quickly the tech landscape is shifting. The DOJ's antitrust case against Google is testing the limits of capitalism, profit and competition. It comes as artificial intelligence threatens to upend the internet search business models that allowed Google to dominate for decades. Almost all witnesses who spoke at the remedy portion of the trial seemed to acknowledge the speed of change in the tech world.

The potency, promise and problems of AI in the context of existing business models surfaced several weeks ago when Apple executive Eddy Cue made comments from the witness box that briefly sent Google's stock careening. Mr Cue was responding to a question about user habits and the effect that AI is starting to have on search engine companies like Google. Eventually, his comments segued into a reflection of how technology businesses often struggle to adjust.

'People still are going to need toothpaste 20 years from now, 40 years from now. You may not need an iPhone 10 years from now. As crazy as that sounds,' Mr Cue, the senior vice president of services at Apple, told a lawyer representing Alphabet, owner of Google. 'You have to earn it. You have to develop,' he added, explaining that Apple's metrics showed that for the first time ever, overall searches done through Google seemed to have made a slight dip.

Those comments reverberated throughout tech and legal communities, with some wondering if a remedy sought by the DOJ might be rendered moot by AI. How people search is changing, as AI swallows up website content and siphons off traffic.

Mr Mehta last month sided with the DOJ and ruled that Google's search dominance harmed consumers with less choice. The Justice Department wants Mr Mehta to enact far-reaching penalties that would serve as a warning to other companies.

In court on Friday, Mr Dahlquist, the government lawyer, reiterated the DOJ's desire that Google be prohibited from entering into default search agreements with hardware and device makers. He also pushed for strong requirements for Google to share search data and analytics with competitors. Perhaps most consequentially, he said Mr Mehta should require that Google divest Chrome, one of the world's most popular web browsers.

'We're here to make sure this cause and the remedies we propose are able to pry open the competition of this market,' Mr Dahlquist argued. 'We understood the assignment, but rather than provide this court with remedies to promote competition, Google provided milquetoast remedies that maintain status quo,' he continued, adding that Google was acting in bad faith to try to maintain its monopoly.

'Despite Google's efforts to avoid facts, those facts, as they've discovered, are stubborn things,' he added, taking a shot at one of the world's most powerful tech companies and its phalanx of lawyers, sitting nearby.

Google's lawyer, John Schmidtlein, didn't mince words in his response. 'Look at how incredibly invasive and broad they are,' he said, referring to the DOJ's remedies that Google believes 'lack causal connection' to its original motives for bringing the company to court. 'What's the amount of data that a company might need to be able to compete?' he rhetorically asked, criticising one of the DOJ's proposed remedies that Google share search data with potential competitors.

Mr Mehta pushed back, saying that ample witnesses told the court that data would help increase their ability to compete, adding that it would be a 'difficult exercise' to try to address Google's criticism of the search data remedy proposal. He also asked the DOJ if AI platforms ChatGPT or Perplexity might be eligible to receive data. 'Not today, but it could eventually,' a DOJ lawyer responded. 'They eventually plan to compete with search companies and search indexes.'

Just before the court broke for lunch on Friday, a senior Justice Department official told reporters that the DOJ was pleased with how the process was unfolding, even amid all the scrutiny from Google. 'Look, this is a market that's been frozen in place for the better part of two decades,' the official said. 'It's going to take a long time to restore competition in the search market.'

That senior official also spoke to how the DOJ was trying to factor in fast-changing tech developments going forward as well as the current industry landscape. 'We don't know in the year 2035 what that's going to look like, the judge doesn't know and frankly not even Google knows,' the official explained, pivoting to the issue of search data. 'So the game is, from a remedial standpoint, how do we ensure effective remedies, and that's very much about access to search data today and going forward.'

Google's own proposed remedies are far lighter than those sought by the DOJ, including a solution that would give users the ability to change their default search provider at least every 12 months. The tech giant has also sought to maintain its ability to have contracts with device manufacturers. 'Browser companies like Apple and Mozilla should continue to have the freedom to do deals with whatever search engine they think is best for their users,' Google said.

In late April, Google's chief executive Sundar Pichai made similar arguments to the court, calling proposed remedies 'too broad' and suggesting that fast-paced AI developments would blunt the DOJ's proposals. 'It would be trivial to reverse engineer and effectively build Google search from the outside,' Mr Pichai added.

Closing arguments were expected to last throughout the day before Mr Mehta deliberates on a potential remedy.