Opera Limited (OPRA): A Bull Case Theory


Yahoo · 4 hours ago

We came across a bullish thesis on Opera Limited on Shareholdersunite Essentials' Substack by Shareholders unite. In this article, we will summarize the bulls' thesis on OPRA. Opera Limited's shares were trading at $18.17 as of June 24th. OPRA's trailing P/E was 19.54, according to Yahoo Finance.
Opera (OPRA), a Norwegian browser innovator majority-owned by China's Kunlun Tech, has built a compelling niche in a highly competitive market through constant innovation, product diversification, and a sharp focus on AI integration and user engagement. While much smaller than its rivals, Opera has carved out a significant global presence, with over 293 million average monthly active users (MAUs) across its suite of products.
Flagship offerings include Opera One, a desktop browser redesigned around AI with the Aria assistant at its core; Opera GX, a gamer-focused browser with deep customization and high engagement; Opera Mini, a lightweight data-efficient browser thriving in emerging markets; and Opera Air, a mindfulness-centric product targeting Western audiences. Opera's strategy has emphasized early adoption of transformative technologies like AI and Web3, embedding features such as crypto wallets and stablecoin-based MiniPay directly into its platforms.
Aria, built on Opera's modular Composer AI engine, connects to multiple large language models (LLMs) and enables on-device local LLM use, supporting privacy and versatility. Opera is also developing the Browser Operator—an AI agent that automates web tasks on users' behalf. Meanwhile, Opera Gaming (via GameMaker and GX Games), Opera News (a personalized AI-driven content platform), and Opera Ads (a growing in-house advertising network) complement the core browser ecosystem.
Revenue growth has been driven by advertising, notably in e-commerce, and ARPU has surged due to a strategic pivot to high-monetization Western markets. Supported by major partners like Google, ASUS, and regional OEMs, Opera combines solid financials, shareholder returns, and expanding AI infrastructure, positioning it for sustained growth in browser-based digital ecosystems.
Previously, we covered a bullish thesis on Opera Limited by Welfare Capital in March 2025, which highlighted the company's strong browser business, Opera GX growth, and capital returns. The company's stock price has depreciated by approximately 1% since our coverage, as the thesis has yet to fully play out. Shareholdersunite shares a similar view but emphasizes Opera's AI and Web3 integration.
Opera Limited is not on our list of the 30 Most Popular Stocks Among Hedge Funds. As per our database, 14 hedge fund portfolios held OPRA at the end of the first quarter, down from 16 in the previous quarter. While we acknowledge the risk and potential of OPRA as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns with limited downside risk. If you are looking for an extremely cheap AI stock that is also a major beneficiary of Trump tariffs and onshoring, see our free report on the best short-term AI stock.
READ NEXT: 8 Best Wide Moat Stocks to Buy Now and 30 Most Important AI Stocks According to BlackRock.


Related Articles

Apple, Google Told DeepSeek App Is Illegal in Germany

Yahoo · 16 minutes ago

(Bloomberg) -- Apple Inc. and Google's Android have been warned by a top German privacy regulator that the Chinese AI service DeepSeek, available on their app stores, constitutes illegal content because it exposes users' data to Chinese authorities.

The formal notification comes after DeepSeek ignored a May request to either pull its app from app stores in Germany or put in place safeguards when collecting local users' data and transmitting it to China, Berlin data protection commissioner Meike Kamp said in a statement on Friday. 'Chinese authorities have far-reaching rights to access personal data,' Kamp said. 'DeepSeek users don't have enforceable rights and effective legal remedies available to them in China, like they're guaranteed in the European Union.'

Hangzhou-based DeepSeek shocked the global tech industry in January with its R1 large language model, which the Chinese startup claimed could rival much larger US systems at a fraction of the cost.

After the Chinese app ignored requests to comply, the Berlin agency invoked a provision of the EU's Digital Services Act, which puts the onus on tech platforms like Apple and Google to take down illegal content on their platforms. Both must now swiftly review the notice and decide how to comply, according to Kamp. While the regulator could also have fined DeepSeek, Kamp decided against it because she would not be able to enforce the penalty in China.

Apple declined to comment. DeepSeek and Google didn't immediately reply to emails seeking comment. The German move follows a similar step by Italy's privacy regulator in January.
In the US, authorities have concluded that DeepSeek gave support to Chinese military and intelligence efforts and is expected to keep doing so, according to an American official. Lawmakers in Washington are preparing bipartisan legislation that would ban federal government agencies from using DeepSeek and other AI tools from foreign adversaries.

--With assistance from Gian Volpicelli.

©2025 Bloomberg L.P.

Why Reliability Is The Hardest Problem In Physical AI

Forbes · 19 minutes ago

Dr. Jeff Mahler: Co-Founder, Chief Technology Officer, Ambi Robotics; PhD in AI and Robotics from UC Berkeley.

Imagine your morning commute. You exit the highway and tap the brakes, but nothing happens. The car won't slow down. You frantically search for a safe place to coast, heart pounding, hoping to avoid a crash. Even after the brakes are repaired, would you trust that car again? Trust, once broken, is hard to regain.

When it comes to physical products like cars, appliances or robots, reliability is everything. It's how we come to count on them for our jobs, well-being or lives. As with vehicles, reliability is critical to the success of AI-driven robots, from the supply chain to factories to our homes. While the stakes may not always be life-or-death, dependability still shapes how we trust robots, whether they are delivering packages before the holidays or cleaning the house just in time for a dinner party. Yet despite the massive potential of AI in the physical world, reliability remains a grand challenge for the field. Three key factors make it particularly hard and point to where solutions might emerge.

1. Not all failures are equal.

Digital AI products like ChatGPT make frequent mistakes, yet hundreds of millions of people actively use them. The key difference is that these mistakes are usually of low consequence. A coding assistant might suggest a software API that doesn't exist, but the error will likely be caught early in testing. Such errors are annoying but permissible. In contrast, if a robot AI makes a mistake, it can cause irreversible damage, with consequences ranging from breaking a beloved item at home to causing serious injuries. In principle, physical AI could learn to avoid critical failures given sufficient training data. In practice, however, these failures can be extremely rare and may need to occur many times before the AI learns to avoid them.
Today, we still don't know what it takes in terms of data, algorithms or computation to achieve high dependability with end-to-end robot foundation models. We have yet to see 99.9% reliability on a single task, let alone many. Nonetheless, we can estimate that the data scale needed for reliable physical AI is immense, because AI scaling laws show diminishing returns in performance as training data increases. The scale is likely orders of magnitude higher than for digital AI, which is already trained on internet-scale data. The robot data gap is vast, and fundamentally new approaches may be needed to achieve industrial-grade reliability and avoid critical failures.

2. Failures can be hard to diagnose.

Another big difference between digital and physical AI is the ability to see how a failure occurred. When a chatbot makes a mistake, the correct answer can be provided directly. For robots, however, it can be difficult to observe the root causes of issues in the first place. Hardware limitations are one problem. A robot without body-wide tactile sensing may be unable to detect a slippery surface before dropping an item, or unable to stop when backing into something behind it. The same can happen in the case of occlusions and missing data. If a robot can't sense the source of an error, it must compensate for these limitations, and all of this requires more data.

Long time delays present another challenge. Picture a robot that sorts a package to the wrong location, sending it to the wrong van for delivery. The driver realizes the mistake only when one item is left behind at the end of the day. Now the entire package history may need to be searched to find the source of the mistake. This might be possible in a warehouse, but in the home the cause of failure may not be identified until the mistake has happened many times. To mitigate these issues, monitoring systems are hugely important.
Sensors that record the robot's actions, associate them with events and flag anomalies can make it easier to determine the root cause of a failure and to update the hardware, software or AI on the robot. Observability is critical: the better machines get at seeing the root cause of failure, the more reliable they will become.

3. There's no fallback plan.

For digital AI, the internet isn't just training data; it's also a knowledge base. When a chatbot realizes it doesn't know the answer to something, it can search other data sources and summarize them. Entire products, like Perplexity, are built on this idea. For physical AI, there isn't always a ground truth to reference when planning actions in real-world scenarios like folding laundry. If a robot can't find the sheet corners, it's unlikely to succeed by falling back to classical computer vision.

This is why many practical AI robots rely on human intervention, either remote or in person. For example, when a Waymo autonomous vehicle encounters an unfamiliar situation on the road, it can ask a human operator for additional information to understand its environment. However, it's not as clear how to intervene in every application. When possible, a powerful solution is a hybrid AI robot planning system: the AI is tightly scoped to specific decisions, such as where to grasp an item, while traditional methods plan a path to reach that point. As noted above, this approach is limited and won't work where no traditional method can solve the problem. Intervention and fallback systems are key to ensuring reliability with commercial robots today and for the foreseeable future.

Conclusion

Despite rapid advances in digital GenAI, there's no obvious path to highly reliable physical AI. It isn't just a technical hurdle; it's the foundation for trust in intelligent machines.
Solving it will require new approaches to data gathering, architectures for monitoring and intervention, and systems thinking. As capabilities grow, however, so does momentum. The path is difficult, but the destination is worth it.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?

The UK's Governance Is Looking Vulnerable Again

Bloomberg · 19 minutes ago

Welcome to the award-winning Money Distilled newsletter. I'm John Stepek. Every weekday I look at the biggest stories in markets and economics, and explain what it all means for your money. Just a quick favour, if you've got time this lunchtime — it's the last time I'll ask, I promise — please help us out by filling in this questionnaire. Gloriously happy or deeply frustrated, we'd love to know how you feel your personal financial situation has changed in the last year.
