
Latest news with #Llama

The Next Great Bubble: Riding Quantum, AI, and Crypto Stocks

Yahoo

a day ago

  • Business
  • Yahoo


"When I see a bubble forming, I rush in to buy, adding fuel to the fire."— George Soros Bubbles are not just market anomalies, they are often features of major eras of innovation and expansion. From railroads to dot-coms and digital assets, investors have repeatedly underestimated how far narrative-driven excess can go. Today, we are in the early innings of what may become the next great bubble, driven by a convergence of artificial intelligence, quantum computing, and crypto-financial systems. Each of these technologies is transformational on its own. Together, they are catalyzing a new wave of speculative enthusiasm, and importantly, real capital investment across both public and private markets. The last couple of years, broad markets have performed very well led by a surge in thematic stocks tied to AI infrastructure and quantum computing, while tokenized financial rails have begun to proliferate more recently. Like in the late 1990s, the rally may seem unsustainable, and it likely will be, eventually. But history shows that there may be explosive gains in the years ahead for those who are well positioned. This article explores what a full-blown speculative peak could look like over the next couple of years, and how much upside may still lie ahead if the current bull run evolves into a true mania. We'll examine lessons from past bubbles, outline the broader themes driving this one, and highlight several individual stocks that could become major winners along the way. Artificial Intelligence: The Infrastructure Behind Superintelligence AI is already reshaping daily life, but we may only be scratching the surface of what's coming. The exponential growth in large language model (LLM) usage, combined with an arms race in data center construction, is driving massive demand for compute infrastructure. The growth in LLM usage across both consumer and enterprise applications has been nothing short of extraordinary. After becoming the fastest app in history to reach 100 million users, OpenAI's ChatGPT has continued its explosive ascent, now estimated to have over 1 billion weekly active users. The company is reportedly generating more than $10 billion in annual revenue, and while it's the most prominent player, it's far from alone. Competitors like Anthropic's Claude, Google's Gemini, and Meta's Llama are also accelerating rapidly, contributing to a broader AI boom. Yet the infrastructure behind these systems may be the most compelling part of the story. Meta CEO Mark Zuckerberg recently revealed plans for a multi-gigawatt data center, code-named Prometheus, describing it as a facility that could ultimately match the physical footprint of Manhattan. The scale of this ambition highlights the extraordinary capital intensity required to push the frontier of artificial intelligence. Global hyperscaler spending on data infrastructure is expected to reach nearly $400 billion this year, fueled not only by commercial demand but by geopolitical competition. Nations are racing to develop sovereign AI capabilities, with some viewing superintelligence as a matter of national security. Whoever controls the most advanced AI infrastructure could shape the future, not just of industries, but of economies, governance, and global power structures. What I'm trying to say is this: a truly historic amount of capital is gushing into this industry as the world's leading technologists attempt to build something akin to a digital god. Will they succeed? I honestly don't know. 
But what I do know is that the sheer scale of spending, investment, and adoption from the world's wealthiest corporations and most powerful governments is likely to drive the stock market significantly higher, especially for companies positioned in the right sectors.

Key stocks positioned to benefit from this wave include:

  • Nvidia (NVDA) – The dominant AI hardware provider powering the entire LLM and inference boom
  • Palantir Technologies (PLTR) – Enterprise-level AI decision-making systems used by governments and defense
  • Vertiv (VRT) – A critical infrastructure company providing thermal and power solutions to data centers
  • Microsoft (MSFT), Meta (META) and Alphabet (GOOGL) – AI hyperscaler titans building and scaling LLM platforms and integrating AI into their core businesses
  • Constellation Energy (CEG) – Benefiting from the enormous power demands of next-gen AI facilities

Image Source: Zacks Investment Research

Quantum Computing: From Hype to Real-World Impact

Quantum computing has quietly taken a leap forward in the last year. What was once seen as far-off science fiction is now entering its commercialization phase, with breakthroughs in qubit coherence, quantum error correction, and hybrid classical-quantum systems accelerating the timeline. While large-scale universal quantum computing is still several years away, practical use cases in logistics optimization, pharma, and materials science are beginning to emerge. As these systems are paired with classical AI infrastructure, we may see the rise of hybrid models that radically outperform current methods.

Most notably, over the past several months, quantum computing stocks have gone vertical. While skeptics point to the still-limited revenue and earnings across the sector, early commercial traction is beginning to materialize, and it appears the market is starting to price in the tremendous long-term potential. One thing is clear: capital is flowing aggressively into the industry.

Promising public companies in the space include:

  • IonQ (IONQ) – The current leader among quantum computing pure plays, with real commercial traction
  • Rigetti Computing (RGTI) – More speculative, but heavily shorted and capable of meme-stock behavior
  • D-Wave Quantum (QBTS) – A legacy player focused on annealing-based quantum systems
  • Quantum Computing Inc. (QUBT) – Ultra-speculative, with potential to catch fire in a retail bubble

Image Source: Zacks Investment Research

Bitcoin, Crypto, and Digital Assets: The Financial Layer of the New Era

Bitcoin has reclaimed its position as the best-performing macro asset, outpacing gold, stocks, and real estate by a wide margin. While its role as a hedge against fiat instability and global uncertainty can be difficult to quantify, it has clearly proven effective as a portfolio diversifier, and it increasingly serves as a release valve for excess liquidity in the financial system.

The rise of tokenization and stablecoins is creating a new wave of speculative energy, not just in cryptocurrencies themselves, but in how traders and investors actually interface with the market. Tokenization refers to the process of putting real-world assets (like stocks, bonds, or even real estate) onto blockchain rails, allowing for 24/7 trading, fractional ownership, instant settlement, and increased global access. Meanwhile, stablecoins offer a new form of liquidity for investors, something between a money market account and pure cash.
This new form of liquidity lubricates financial markets broadly, as it enables real-time settlement, faster capital redeployment, and round-the-clock market participation.

Key stocks for exposure to this theme include:

  • Coinbase (COIN) – The leading U.S.-listed crypto exchange and a beneficiary of retail and institutional flows and stablecoins
  • Robinhood (HOOD) – The main point of access for retail speculative activity, and the first to introduce tokenization
  • MicroStrategy (MSTR) – Effectively a Bitcoin ETF with software revenue as a bonus
  • Bitcoin ETF (IBIT) – Still the king, and still the most resilient asset in speculative cycles

Image Source: Zacks Investment Research

Buying Stocks With Eyes Wide Open

If this is the beginning of the next great bubble, it's possible to imagine just how far markets could run. While it's impossible to predict exact outcomes, rough estimates suggest that the Nasdaq 100 could double, the S&P 500 could push toward 10,000, and Bitcoin might rally to $300,000 or more over the next couple of years. These are, of course, just rough estimates, but they reflect the type of moves we've seen in past bubbles when narratives take hold and capital flows turn parabolic.

Yes, it may all come crashing down someday. But in the meantime, the market may reward those who are early, thoughtful, and positioned for narrative-driven upside. Just like Soros, sometimes the best trade is to lean into the mania and exit before the music stops.

This article originally published on Zacks Investment Research.

Meta is trying to win the AI race. A new partnership with AWS could help

CNN

2 days ago

  • Business
  • CNN


For Silicon Valley giants, getting ahead in the artificial intelligence race requires more than building the biggest, most capable models; they're also competing to get third-party developers to build new applications based on their technology. Now, Meta is teaming up with Amazon's cloud computing unit, Amazon Web Services, on an initiative designed to do just that.

The program will provide six months of technical support from both companies' engineers and $200,000 in AWS cloud computing credits each to 30 US startups looking to build AI tools on Meta's Llama AI model. The partnership is set to be unveiled at AWS Summit in New York City on Wednesday.

For Meta, the project could be a boost at a time when CEO Mark Zuckerberg is pouring enormous resources into his ambition to become a top player in the AI space. The company last month announced the creation of a new AI superintelligence team, after recruiting leading researchers away from competitors with massive pay packages. Meta also invested $14.3 billion into AI startup Scale, which included the hiring of its founder and CEO, Alexandr Wang, and several other top employees.

And Amazon's investment, worth more than $6 million in total, could pay off if the startups continue using AWS's service to access the AI system after the six-month program ends. While Amazon has its own large language models, AWS's AI strategy has been to help companies access any model, or several, along with the intense computing power needed to run them.

Early-stage startups can apply to the program and will be selected later this summer based on the 'potential impact of the proposed solutions and the technical ability' of their teams, AWS and Meta said in a statement.

'We have a long-standing relationship and partnership with Meta, and what we're aiming to do here with the Llama collaboration is really empower founders to build transformative AI using Llama models,' AWS Vice President and Global Head of Startups and Venture Capital Jon Jones told CNN exclusively ahead of the announcement. He added that AWS customers are already using Llama to, for example, create AI customer relationship management tools for auto dealerships or financial technology tools.

At the heart of the Meta-AWS partnership is a push to support Llama, a leading open-source AI model, meaning the code behind the technology is publicly available, unlike proprietary or 'closed-source' models like OpenAI's ChatGPT and Anthropic's Claude.

There's an industry debate over the benefits of open versus closed-source AI models. It goes something like this: companies on Team Closed Source say they'll retain more control over how their technology is used, and it's a whole lot easier to build a business when your rivals don't know exactly how your systems work. Team Open Source says potentially transformative AI technology should be available for anyone to use and build on, to democratize its benefits.

Zuckerberg said last year that he believes 'open source is necessary for a positive AI future.' He added that his model will help open source become 'the industry standard,' since third parties have free access to build on the technology. In other words, Zuckerberg would like his technology to become the leading platform for developers building chatbots, agents and other AI apps, similar to how Apple's and Google's operating systems have functioned in the mobile web era.
But for startups looking to build on a large AI model, going the closed-source route can have practical benefits: when they pay to access the technology, they may also get a friendly user interface, tech support and a more personalized experience. With their partnership, AWS and Meta hope to provide some of those same benefits to startups building on Llama. And given the significant cost of computing power for AI systems, the AWS credits could be a boon to startups that don't expect to turn a profit immediately.

'We developed Llama because we believe greater access to powerful models is essential for driving progress in AI,' Ash Jhaveri, vice president of AI partnerships at Meta, said in a statement about the initiative. 'Startups are some of the most creative forces in tech, and we're looking forward to seeing how they'll use Llama to push boundaries, explore new frontiers, and shape the future of AI in bold and unexpected ways.'
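To ground what building on Llama through AWS might look like in practice, here is a minimal sketch of calling a Llama model via the AWS Bedrock runtime API with boto3. The model ID, prompt, and generation parameters are illustrative assumptions, not details from the article or the program.

```python
# A hypothetical call to a Llama model hosted on AWS Bedrock.
# Assumes AWS credentials are configured and the account has been
# granted access to a Llama model in this region.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="meta.llama3-8b-instruct-v1:0",  # assumed model ID
    body=json.dumps({
        "prompt": "Summarize this auto-dealership service ticket: ...",
        "max_gen_len": 256,   # assumed generation settings
        "temperature": 0.5,
    }),
)

# The response body is a JSON document containing the generated text.
print(json.loads(response["body"].read())["generation"])
```

Every call like this consumes metered compute, which is exactly the kind of cost the program's AWS credits would offset for pre-revenue startups.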

The Trust Deficit In Artificial Intelligence

News18

2 days ago

  • News18


Artificial Intelligence (AI) and Machine Learning (ML) have experienced transformative growth over the past fifteen years, propelled by advancements such as deep learning, convolutional neural networks, transformer architectures, and the emergence of large language models (LLMs). These innovations have enabled machines to comprehend and generate language and images in previously unimaginable ways.

The transformer architecture, introduced in 2017, has revolutionised image analysis and synthesis, as well as natural language processing, by incorporating attention mechanisms that capture long-range context. This development has led to LLMs like ChatGPT, Llama, and Gemini, capable of language translation, summarisation, and content generation. Often trained on extensive internet-scale data, these models are adaptable to new tasks through prompt tuning. These technologies now power applications in diverse domains including healthcare, law, education, creative arts, business, public services, and autonomous systems. Numerous Indian agencies and startups are also leveraging AI systems using local context for various applications.

Despite their remarkable capabilities, these models have significant limitations, particularly in terms of reliability and fairness. Unlike traditional engineering systems, ML models rarely provide error bounds or correctness guarantees. Their accuracy is typically validated only on data similar to the training set, not on real-world, dynamic environments. The assumption that deployment data will resemble training data frequently proves incorrect. Real-world data changes over time and circumstances (known as distribution shift), and detecting or adapting to these changes is challenging, especially post-deployment.

In many applications, especially those involving unstructured data like images or text, defining the space of possible inputs is difficult. This makes probabilistic guarantees impossible and external validation challenging. Additionally, generative models are trained for coherence rather than factual accuracy, leading them to "hallucinate" plausible-sounding but false information. Such failures are hard to detect, particularly for non-expert users, posing risks in high-stakes and safety-critical fields like medicine or autonomous systems.

Moreover, the internal representations used by ML systems differ significantly from human cognition, as illustrated by adversarial attacks, where minor, imperceptible changes to an input can cause confident misclassifications, revealing the brittleness of these systems under slight perturbations. The disparity between human and machine failure points complicates the ethical deployment of AI systems in critical applications.

Regarding fairness, AI systems often inherit or amplify social biases embedded in the data. Even if sensitive attributes like caste, gender, or religion are not explicitly included, models can learn proxy variables that lead to discriminatory outcomes, known as disparate impact. In diverse societies like India, where large sections of the population are digitally underrepresented, this issue is particularly problematic. Attempts to mitigate bias through data preprocessing or algorithmic corrections often reduce model accuracy without guaranteeing fairness. Research indicates that under realistic conditions, it is mathematically impossible to ensure equal fairness for all groups simultaneously.
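To make the adversarial brittleness described above concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a classic recipe for crafting the kind of imperceptible perturbation the piece alludes to. The model, labels, and epsilon value are illustrative assumptions rather than anything from the article.

```python
# Minimal FGSM sketch: nudge an input by an imperceptible amount in the
# direction that most increases the model's loss. Model and data are
# placeholders; epsilon is an assumed perturbation budget.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.01):
    """Return a copy of x with a small adversarial perturbation."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Each input value moves by at most epsilon, yet on an undefended
    # classifier this is often enough to flip a confident prediction.
    return (x + epsilon * x.grad.sign()).detach()
```

On undefended image classifiers, perturbations of roughly one percent of the pixel range are often sufficient to flip a confident prediction, which is precisely the gap between human and machine failure points the author highlights.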
In conclusion, while AI holds immense potential for enhancing productivity, inclusion, and creativity, its application in public-facing scenarios demands extreme caution. These systems are best utilised where critical thinking can moderate their outputs, such as in exploratory research or informed personal use. Broader deployment requires rigorous monitoring, context-aware evaluation, and strong institutional oversight to ensure these powerful tools do not become sources of harm or inequality.

The author is Head of the Department of Computer Science and of the Centre for Digitalisation, AI and Society at Ashoka University. Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect News18's views.

First Published: July 16, 2025, 13:43 IST

Zuck is building Meta data centers in tents now, part of a mad dash to catch up in AI

Business Insider

4 days ago

  • Business
  • Business Insider


Mark Zuckerberg is in full founder mode again. This time it's the AI version.

Meta fell behind in the generative AI race earlier this year when the company rolled out its Llama 4 model. The product was not up to scratch compared to rival offerings from DeepSeek, OpenAI, Anthropic, and Google. Since then, Zuck has gone on a hiring bender, personally recruiting top AI talent with pay packages in the $100 million range and more.

That's just one ingredient for success in generative AI. Another is high-quality data. Hence Meta's weird $14 billion deal to buy just under half of Scale AI and its leader, Alex Wang. The third ingredient required is infrastructure, which is tech jargon for the AI chips (GPUs, etc.), networking gear, and data centers needed to build, refine, and run giant AI models such as the Llama series.

On Friday, SemiAnalysis blew the lid off Zuck's big plans for Meta's huge new AI infrastructure. You can read the report here. You'll need to pay to read the whole thing, but if you're serious about AI, you will probably want to shell out.

Zuck confirmed some of the details of SemiAnalysis's report via a Facebook post on Monday. The CEO said Meta will build several new AI data centers that use more than 1 gigawatt of power each. This is how data center capacity is measured, and anything in the 1-gigawatt range is absolutely massive (or at least it was until now).

Data centers in tents

The thing that really caught my eye in the SemiAnalysis report is that Meta is building some of these AI data centers in tents right now. A Meta spokesperson confirmed this.

Data centers house a lot of complex, expensive gear, and they need to be kept cool in really controlled ways; otherwise some of the pricey gear will overheat. So building data centers in tents is a sign of how quickly Zuck is moving to get Meta's new AI data centers up and running.

Remember when Elon Musk had Tesla making Model 3 cars in tents outside the company's Fremont factory in 2018? That was about speed to market, and Zuck is now doing this for data centers, likely taking a page from Elon's strategy book.

"Inspired by xAI's unprecedented time-to-market, Meta is embracing a datacenter design that prioritizes speed above all else," SemiAnalysis wrote in its Friday report. "They're already building more of them! Traditional datacenter and real estate investors, still somewhat reeling from xAI's Memphis site and time to market, will be shocked yet again."

"From prefabricated power and cooling modules to ultra-light structures, speed is of the essence," it added.

Tents get really hot, so this could be a challenge for running these prefab AI data centers. Indeed, SemiAnalysis reported that Meta could shut down workloads during the hottest summer days. Over the long term, Meta will likely build full data centers, but in the short and medium term, the company needs these facilities up and running "as soon as possible," Dylan Patel, CEO of SemiAnalysis, told me on Monday.

And here's the full post from Zuck on Monday:

"For our superintelligence effort, I'm focused on building the most elite and talent-dense team in the industry. We're also going to invest hundreds of billions of dollars into compute to build superintelligence. We have the capital from our business to do this. SemiAnalysis just reported that Meta is on track to be the first lab to bring a 1GW+ supercluster online.

"We're actually building several multi-GW clusters. We're calling the first one Prometheus, and it's coming online in '26. We're also building Hyperion, which will be able to scale up to 5GW over several years. We're building multiple more titan clusters as well. Just one of these covers a significant part of the footprint of Manhattan.

"Meta Superintelligence Labs will have industry-leading levels of compute and by far the greatest compute per researcher. I'm looking forward to working with the top researchers to advance the frontier!"

Mark Zuckerberg's $150 million job offers are spreading fear

Sydney Morning Herald

4 days ago

  • Business
  • Sydney Morning Herald


Nothing says talent war like a $US100 million ($153 million) job offer.

Mark Zuckerberg has been on a hiring blitz for AI's most revered scientists, sending them cold emails and offering them roles in his new Superintelligence Labs division, whose goal is nothing less than to build artificial-intelligence software that's smarter than humans.

You might wonder why the Meta chief executive officer, whose company already prints money from clever ad targeting and recommendation software, needs to build god-like AI, but you'd be underestimating the hottest prize in tech, which Alphabet's Google and OpenAI have been vying to win. Zuckerberg is now coming from behind with a viable shot at getting there first. Having attracted some of AI's top brains with huge sums and previous pledges to make AI free for all and potentially more impactful, he's now created momentum among other leading scientists who see his team as having a statistically higher chance of building 'super-intelligent' AI systems before anyone else.

In just the last month, Zuckerberg has poached leading OpenAI scientist Lucas Beyer, who co-created the vision transformer; Ruoming Pang, who led Apple's efforts at building AI models; and Alexandr Wang, the former CEO of Scale AI who now co-leads Meta's Superintelligence Labs. In Wang's case, the cost to Meta was billions. But the result seems to be a halo effect as other big names in the field join, such as investor Nat Friedman and Daniel Gross, the CEO of Ilya Sutskever's startup Safe Superintelligence, and the remaining top talent starts to fear missing out on being the first to build super-intelligent AI.

Of course, money is a great motivator, but many of these researchers are already wealthy, and their field is so ideologically charged and so close-knit that they're motivated by the glory of being published in Nature or having a hand in the biggest new AI model just as much as they are by the prospect of yachts and mansions. Zuckerberg's public commitment to open-source AI with his Llama model has already attracted scientists who believe such systems can have a more democratising impact if they're free for all. OpenAI made a similar bet early on, sharing much of its research freely 'for recruitment purposes,' according to its then-chief scientist Sutskever, before taking that work behind closed doors.

Investors have long questioned Zuckerberg's willingness to invest in advanced AI models and then give them away, and the lacklustre performance of Llama's most recent models may put pressure on the Meta CEO to consider more commercial approaches to AI. Meta's models lag those of Google DeepMind and OpenAI (a variant is ranked 17th in one real-time leaderboard), and they're more expensive to run.

Many researchers reckon AI can eventually solve intractable human problems like ageing, climate change and cancer, and that, overwhelmingly through history, technology has been a net good for humanity. But for many, the desire to build that technology first is even more powerful, a dynamic not so different to the field of cancer research, where scientists want to win the race to a cure as much as they want to find cures at all.
