Latest news with #ArtificialSuperintelligence


Business Insider
30-07-2025
- Business
- Business Insider
‘Meta Stock Headed for $800': Mark Zgutowicz Weighs In Ahead of Earnings
Meta Platforms (NASDAQ:META) stock has had a solid run since hitting a bottom in April, climbing 44%. Investors have been riding the momentum, but all eyes are now turning to Meta's second-quarter earnings call, scheduled for tomorrow, July 30, after the market closes. And this time, the spotlight won't just be on ad impressions or daily active users – it'll be on something far more ambitious.

That ambition took center stage in mid-July, when CEO Mark Zuckerberg declared Meta's intention to invest 'hundreds of billions of dollars' into AI infrastructure in pursuit of Artificial Superintelligence. This isn't just about smarter ad targeting or better recommendation engines. We're talking about an all-out arms race to build a future where Meta could rival, or even outpace, the likes of OpenAI and Google DeepMind. To turn that vision into reality, Zuckerberg announced plans to build roughly six gigawatts of data center capacity by 2030. Furthermore, Meta has spent the past month recruiting top-tier AI researchers from OpenAI and DeepMind, while also investing $14 billion in Scale AI and appointing its CEO as Meta's Chief AI Officer.

One analyst who's paying close attention is Benchmark's Mark Zgutowicz. While he maintains strong expectations for Meta's Q2 performance and sees promise in the company's evolving monetization strategy through e-commerce and ad pricing, the AI push is where the story really gets interesting. According to Zgutowicz, even though his model already accounts for close to $500 billion in capital expenditures through 2030, the real test will be in how well management deploys that capital to generate tangible returns.
The analyst also notes that Meta faces stiff competition, pointing to OpenAI's current leadership and Google's entrenched dominance across much of the AI landscape, even before OpenAI's anticipated push into advertising in 2026. As for the upcoming earnings, Zgutowicz expects a 'stable top-line' performance supported by steady e‑commerce trends, continued ad pricing momentum in North America, and higher revenue per advertiser thanks to new Advantage+ attribution tools launched in May. He also projects 2025 capex guidance to remain steady, with operating expenses inching higher to reflect the recent influx of elite AI talent. Looking further ahead, the analyst will be watching for management's tone regarding consensus forecasts that call for 2026 capex and opex growth of 9% and 14%, respectively. With these expectations in mind, Zgutowicz assigns Meta stock a Buy rating, while raising his price target from $640 to $800.


Forbes
24-07-2025
- Business
- Forbes
Will Superintelligence Save Us—Or Leave Us Behind?
Srinivasa Rao Bittla is an AI-driven performance and quality engineering leader at Adobe.

What happens when machines begin to think for themselves? Artificial Superintelligence (ASI) represents not just a potential technological leap but a possible defining moment in human history. A more potent frontier is emerging as AI transforms industries and redefines possibility: machine intelligence that could one day surpass human capability in nearly every domain. While current systems remain far from this capability, the prospect of ASI is the subject of growing research and debate.

Understanding ASI: Beyond Narrow AI

Today's large language models operate within limited, task-specific boundaries. Though they learn from vast datasets and automate complex tasks, they remain tools, constrained by human design and lacking true understanding, consciousness and independent intelligence in any human sense. By contrast, ASI would exceed human ability not just in logic and memory, but in creativity, emotional insight, strategic reasoning and adaptation. It could set its own goals, draw from multiple disciplines and generate solutions beyond even our most gifted thinkers. However, no existing system today demonstrates these capabilities, and expert opinion varies widely on when—or even if—they will emerge.

The Road To Superintelligence: Evolution Or Explosion?

Some researchers envision a gradual evolution, as neural networks improve and compute power increases. Others warn of a tipping point—often referred to as the singularity—where machines begin to improve themselves, rapidly escaping human control. Signs of this shift are emerging: from generative AI to reinforcement learning to neuromorphic hardware, labs like OpenAI, DeepMind and open-source communities are making incremental advances in narrow AI capabilities, though these are not demonstrations of self-improving general intelligence.

Economic Impacts: Promise And Precarity

ASI could usher in a new era of economic growth.
Scientific breakthroughs, sustainable infrastructure and advanced life-saving therapies may arrive at speeds no human can match. Productivity could soar. Entire sectors—from energy to education—may be reimagined. Imagine ASI crafting climate adaptation strategies in real time or engineering atomic-scale materials for sustainable energy storage. Its impact could eclipse that of the digital age and the Industrial Revolution combined.

But disruption is inevitable. Even roles once considered safe—creative leadership, legal analysis and strategic planning—may be vulnerable. If machines outperform us in judgment, empathy and innovation, what is left for humanity? These shifts raise urgent questions about inequality, economic distribution and value alignment. Would ASI deliver abundance, or deepen the divides between those it empowers and those it displaces?

Existential Risk Or Civilizational Leap?

ASI could also bring profound risks. Thinkers such as Stephen Hawking, Elon Musk and Nick Bostrom have warned about potential misalignment between ASI goals and human values. A system tasked with maximizing an unclear objective could behave in ways that are harmful to people and the planet. This possibility is no longer dismissed as science fiction; its implications are now taken seriously by leading institutions, even though ASI itself remains hypothetical. An ASI with access to infrastructure and the ability to self-improve could pursue goals that violate ecological, ethical or human survival boundaries. The scale and potential impact of these scenarios have prompted serious attention from researchers and policymakers. That is why alignment—the field of ensuring AI systems serve human interests—is one of the most urgent challenges in tech today. Many consider alignment a key long-term priority, though others argue that immediate issues like algorithmic bias or misuse of narrow AI remain more pressing in the short term.
Yet policy, research and safeguards still lag behind. Governments will need to regulate ASI proactively, much as they do nuclear power or gene editing.

Toward Human-Machine Symbiosis

Some experts envision ASI not as a threat but as a collaborator—a cognitive partner. Integration, not domination, could define the future. Brain-computer interfaces, such as those under development at Neuralink and in academic research groups, hint at real-time human-ASI cooperation. This doesn't mean our creativity or compassion will be erased; it may instead reveal how central those traits truly are. Still, caution is essential. As mind and machine converge, questions about mental privacy, cognitive autonomy and ethical manipulation become pressing.

Who Governs Superintelligence?

Perhaps the most urgent question is governance. Who would control ASI? Would it rest with a few multinational tech giants, authoritarian states or a democratic, decentralized coalition of global stakeholders? Without transparency and regulation, ASI could concentrate power in ways far more dangerous than nuclear capability. Monopoly over intelligence could redraw geopolitics. A global pact—akin to the Treaty on the Non-Proliferation of Nuclear Weapons—may be required. It should define ethics, oversight mechanisms, safety thresholds and accountability.

What Should Leaders And Innovators Do Now?

While ASI may still be decades away, early preparation and risk awareness are critical. Leaders can take several immediate steps to prepare:

• Invest In Ethics And Alignment: Build teams focused on long-term risks and socially responsible development.
• Promote Transparency: Opaque systems are untrustworthy. Encourage interpretability and explainability.
• Support Open Collaboration: Share breakthroughs responsibly through open scientific channels. Closed innovation magnifies ASI risks.
• Get The Workforce Ready: Train for human skills machines can't replicate—empathy, creativity and critical thinking.
• Participate In Global Policy: Take part in cross-border dialogues on data governance, safety standards and ethics.

Conclusion: The Choice Is Still Ours

ASI is a test of our morality, foresight and imagination. Whether it becomes humanity's greatest tool—or our greatest threat—depends on choices made now. Although ASI remains speculative, its potential impact demands careful planning and preparation. We must be bold, thoughtful and humble, carefully preparing for the possibility of its emergence within our lifetimes. The real question is not whether machines will think, but whether we will think wisely enough before they do.


Time of India
07-07-2025
- Business
- Time of India
Former Google CEO warns ‘AI could surpass human intelligence in next few years'; what is artificial superintelligence and why it matters
As debates around AI ethics, automation, and job displacement continue to dominate public discourse, a far more consequential development is quietly emerging—one that, according to former Google CEO Eric Schmidt, is not receiving nearly the attention it deserves. In a recent episode of the Special Competitive Studies Project podcast, Schmidt delivered a compelling warning: Artificial Superintelligence (ASI) is on the horizon, and society is dangerously underprepared for its arrival.

Former Google CEO Eric Schmidt warns: AI will surpass all human intelligence in a few years

While public conversations often focus on near-term AI risks such as algorithmic bias or job automation, Schmidt's concern lies in what sits just beyond the visible curve of innovation. He points to the emergence of ASI, a form of intelligence vastly superior to that of any individual human, as the next seismic shift in technological evolution. Unlike Artificial General Intelligence (AGI), which seeks to match human cognitive abilities, ASI represents a system capable of surpassing not just individual intelligence, but potentially the collective intelligence of all humans combined. 'People do not understand what happens when you have intelligence at this level, which is largely free,' Schmidt warned.

From writing code to replacing coders—AI's next leap is already here

One of Schmidt's most provocative predictions is that AI could make most programming jobs obsolete within a year. He cites advances in recursive self-improvement—where AI systems write and improve their own code using formal systems like Lean—as a key driver of this shift. Currently, AI is already contributing significantly to software development.
In Schmidt's words: 'Ten to twenty percent of the code in research labs like OpenAI and Anthropic is now being written by AI itself.' As these systems continue to evolve, they will not only become faster and more efficient but also begin outperforming even elite graduate-level human mathematicians in areas such as advanced coding and structured reasoning. This represents a foundational change in the role of human labor in tech—moving from creator to supervisor, or potentially being removed from the loop altogether.

Schmidt warns the jump from AGI to ASI could outpace global systems

According to Schmidt, many in Silicon Valley agree that Artificial General Intelligence (AGI)—a system capable of human-like reasoning across disciplines—will be achieved within the next three to five years. However, he emphasises that AGI is merely a stepping stone. The more dramatic leap, he says, will occur just a year or two after AGI: the rise of Artificial Superintelligence (ASI). He calls this trajectory the 'San Francisco Consensus'—a term reflecting the growing alignment among tech elites about the short timeline to ASI. 'This occurs within six years, just based on scaling,' Schmidt stated. Unlike earlier technological shifts, the transition to ASI may happen so rapidly and dramatically that traditional systems—governance, legal, economic—may be unable to adapt in time.

Schmidt warns the world is unprepared for the coming age of superintelligence

Despite the potentially transformative—and even existential—implications of ASI, Schmidt points out a critical gap in public awareness and discourse. The issue, he argues, is not just the speed of AI's evolution but the lack of conceptual language and institutional frameworks to engage with it meaningfully. 'There's no language for what happens with the arrival of this,' he remarked. 'This is happening faster than our society, our democracy, our laws will interact.'
In other words, the world's democratic and policy systems are trailing far behind the pace of innovation, creating a dangerous mismatch between technological capability and societal readiness. As AI systems move beyond human capabilities, Schmidt presents two possible paths forward. On one side lies the promise of a technological renaissance, driven by superintelligent systems capable of solving some of humanity's greatest challenges. On the other lies the risk of institutional collapse, ethical crisis and unprecedented societal upheaval. 'Superintelligence isn't a question of if, but when,' Schmidt seems to caution. And the real danger, he suggests, may be our collective failure to adequately prepare.

Schmidt urges the world to prepare for ASI before it's too late

Eric Schmidt's warnings are not based on speculative science fiction—they're grounded in conversations happening today among the people building tomorrow's technologies. Whether one agrees with his timeline or not, his message is clear: Artificial Superintelligence is not a distant concept—it is fast becoming a present reality. As that reality approaches, the world must shift its focus from narrow debates about near-term AI risks to a broader, deeper dialogue about long-term governance, ethics and preparedness for what could be the most transformative force in human history.


Yahoo
05-07-2025
- Business
- Yahoo
Zuck Bucks Are Back -- And This Time They're Fueling Meta's AI Comeback
Meta (META, Financials) is diving headfirst into the race for Artificial Superintelligence, and this time, it's not holding back. CEO Mark Zuckerberg has reignited the term Zuck Bucks; once a nickname for campaign donations, it's now shorthand for eight- and nine-figure signing packages aimed at luring the best AI minds in the world, according to a Reuters report.

Faced with talent losses and a disappointing release of its Llama 4 model, Meta has ramped up hiring, and it's not being subtle. From a $14.3 billion investment in Scale AI to attempts at poaching Safe Superintelligence's Ilya Sutskever, Meta is making one thing clear: it wants back in the lead. Zuckerberg reportedly failed to recruit Sutskever but may be close to landing SSI co-founder Daniel Gross and NFDG's Nat Friedman. These aren't just star names; they're magnets, and Meta hopes they'll help rebuild a team that's been bleeding talent to labs like OpenAI, Anthropic, and Google DeepMind.

Meanwhile, Meta is forming an elite Superintelligence unit to push the boundaries of what AI can do. But internal divisions remain; Meta's Chief AI Scientist Yann LeCun has publicly questioned the long-term viability of large language models, the very thing the company's top rivals are doubling down on. To complicate matters, Meta is betting on a mix of technologies: reasoning-based LLMs, multimodal AI, and even geopolitical hedges. For example, it's developing a new B40 chip tailored for the Chinese market, just in case export restrictions eventually ease.

Zuckerberg's bet? Talent first; product later. This isn't classic M&A; it's AI land-grabbing. Meta is willing to buy pre-product, pre-revenue startups if it means acquiring breakthrough IP and elite researchers. Profitability can wait; ASI supremacy can't. This article first appeared on GuruFocus.