Latest news with #AGI


Geeky Gadgets
13 hours ago
- Business
- Geeky Gadgets
Inside OpenAI's $500 Billion Stargate AI Megafactory with Sam Altman
What does it take to build the future of artificial intelligence? Imagine a sprawling 1,200-acre facility in Abilene, Texas, humming with the power of 400,000 GPUs, each working tirelessly to train the next generation of AI models. This is Stargate, OpenAI's audacious $500 billion project, a megafactory designed to meet the insatiable computational demands of artificial general intelligence (AGI). Backed by tech giants like Oracle and SoftBank, Stargate is more than just a data center—it's a bold statement in the global race to dominate AI innovation. But as the world marvels at its scale and ambition, questions loom: Can such a colossal endeavor balance technological progress with environmental and economic responsibility?

Bloomberg steps inside the Stargate AI megafactory to uncover its infrastructure, its potential to transform industries, and the controversies it stirs. From its innovative cooling systems aimed at reducing resource strain to its role in reshaping Abilene's economy, Stargate offers a microcosm of the promises and perils of large-scale AI development. Along the way, you'll discover how OpenAI is positioning itself against rivals like Google and Microsoft, and why this project has become a flashpoint in the global AI arms race. As we peel back the layers of this megafactory, one question persists: Will Stargate be remembered as a triumph of human ingenuity or a cautionary tale of unchecked ambition?

What is Stargate?

Stargate is a new initiative designed to meet the growing needs of AI infrastructure. Spanning an impressive 1,200 acres in Abilene, Texas, the project's first phase will feature eight colossal buildings equipped with 400,000 GPUs. These GPUs, primarily sourced from Nvidia, will serve as the foundation for training and deploying advanced AI models, including those aimed at achieving AGI—a milestone where AI systems can reason, learn, and solve problems with human-like proficiency.
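The reported figures invite a rough sanity check on the site's power draw. The sketch below uses assumed values throughout (per-GPU draw, overhead factor, and average household consumption are illustrative choices, not figures reported by OpenAI):

```python
# Back-of-envelope estimate of the Abilene site's electrical load.
# All constants below are assumptions for illustration, not reported figures.

GPU_COUNT = 400_000        # first-phase GPU count cited in the article
WATTS_PER_GPU = 1_000      # assumed ~1 kW per accelerator, including board power
PUE = 1.2                  # assumed power-usage effectiveness (cooling + overhead)
AVG_HOME_WATTS = 1_200     # assumed average continuous draw of a US household

facility_watts = GPU_COUNT * WATTS_PER_GPU * PUE
homes_equivalent = facility_watts / AVG_HOME_WATTS

print(f"Estimated facility load: {facility_watts / 1e6:.0f} MW")
print(f"Roughly equivalent to {homes_equivalent:,.0f} average homes")
```

Under these assumptions, the first phase alone lands near half a gigawatt of continuous draw, hundreds of thousands of homes' worth; larger published estimates for the full Stargate build-out imply a facility several times that size.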
This ambitious project places OpenAI in direct competition with major technology companies like Microsoft, Google, Meta, and Amazon, all of whom are striving to lead the AI revolution. The scale of Stargate underscores the increasing reliance on computational power as AI becomes integral to industries ranging from healthcare to finance. By investing in such a massive infrastructure, OpenAI is positioning itself at the forefront of AI innovation, but it also faces scrutiny over the sustainability and efficiency of this approach. Competitors like DeepSeek are exploring alternative methods to achieve similar results with more efficient AI models, challenging the necessity of projects on Stargate's scale.

Economic Impact on Abilene

The Stargate project is poised to bring significant economic changes to Abilene, Texas. Local authorities have offered substantial tax incentives, including an 85% property tax abatement, to attract this massive investment. These incentives are expected to stimulate economic activity, create jobs, and potentially transform the region into a hub for technological innovation. However, the exact number of permanent jobs the project will generate remains uncertain, raising questions about the long-term benefits for the local community.

For residents, Stargate represents a mix of opportunities and challenges. On one hand, the project could drive economic growth, increase property values, and attract ancillary businesses to the area. On the other hand, concerns about environmental costs, equitable distribution of benefits, and the potential strain on local resources have sparked debates. The project's success in fostering sustainable economic development will depend on how these trade-offs are managed.

[Video: OpenAI Stargate Megafactory Tour with Sam Altman]
Environmental Challenges and Sustainability

AI data centers like Stargate are notorious for their high resource consumption, particularly in terms of energy and water. To address these concerns, Stargate plans to implement a closed-loop water cooling system designed to minimize water usage while maintaining optimal GPU performance. This system reflects an effort to mitigate the environmental impact of the project, but it does not eliminate broader concerns about the sustainability of large-scale AI infrastructure.

The energy demands of Stargate are another critical issue. While efforts are being made to incorporate renewable energy sources, much of the power required for such facilities still comes from fossil fuels. This reliance raises questions about the environmental footprint of AI growth, especially as global climate goals become increasingly urgent. The project highlights the tension between technological advancement and environmental responsibility, emphasizing the need for innovative solutions to balance these priorities.

Global Competition and Geopolitical Implications

Stargate is not just a technological endeavor; it is also a strategic move in the global race to dominate AI infrastructure. Countries like China are making significant investments in similar projects, intensifying competition and raising the stakes for technological leadership. This race is further complicated by supply chain dependencies on critical materials and semiconductors, many of which are sourced from regions like Taiwan and China. To address these challenges, the United States has introduced initiatives such as the CHIPS Act, aimed at boosting domestic semiconductor manufacturing and reducing reliance on foreign suppliers. These measures are designed to secure the supply chain while fostering collaboration with allied nations.
Stargate, as part of this broader strategy, underscores the geopolitical dimensions of AI development and the importance of maintaining a competitive edge in this rapidly evolving field.

Risks, Challenges, and Future Prospects

The construction and operation of a project as vast as Stargate come with inherent risks and challenges. OpenAI has already reported significant financial losses, raising concerns about the long-term profitability and sustainability of such large-scale investments. Additionally, the rapid pace of AI development brings ethical dilemmas, including the potential for job displacement, misuse of advanced AI technologies, and the societal implications of AGI.

Despite these challenges, Stargate holds the potential to reshape industries, accelerate scientific discovery, and create entirely new categories of employment. Comparisons have been drawn to transformative infrastructure projects like the interstate highway system, which transformed economies and societies. Similarly, Stargate could drive global economic transformation, provided its benefits are distributed equitably and its risks are effectively managed.

The project also raises important questions about regulation and governance. As AI continues to evolve, ensuring that its development aligns with societal values and ethical standards will be crucial. Stargate's success will depend not only on its technological achievements but also on its ability to address these broader concerns.

The Significance of Stargate in the AI Landscape

Stargate represents a pivotal moment in the evolution of AI infrastructure. It embodies the intersection of technological ambition, economic development, and environmental sustainability, offering a glimpse into the future of innovation. As the global race for AI supremacy intensifies, projects like Stargate will play a critical role in shaping the trajectory of technology and its impact on society.
Whether Stargate achieves its ambitious goals or becomes a cautionary tale of overreach remains to be seen. However, its significance in the AI landscape is undeniable. By addressing the challenges of scalability, sustainability, and ethical responsibility, Stargate has the potential to set a new standard for AI infrastructure and redefine what is possible in the era of artificial intelligence.

Media Credit: Bloomberg Originals


Forbes
a day ago
- Politics
- Forbes
The Singularity Is Coming. Here's How To Make It Work For You.
The term 'Singularity' was coined by computer scientist and science fiction writer Vernor Vinge in 1993 to describe a point at which technological growth accelerates uncontrollably, leading to a world that is incomprehensible to the human mind. Some of the world's most prominent technologists believe that the Singularity will be a triumph for humanity. Others, like myself, are not so sure.

Optimists like Marc Andreessen, co-creator of the Mosaic browser, insist that artificial intelligence will solve our most pressing problems—curing disease, eliminating scarcity, even boosting creativity to superhuman levels. Others, including OpenAI's Sam Altman, argue that the arrival of artificial general intelligence (AGI) will spread abundance, uplift humanity, and move us closer to utopia. To techno-optimists, AGI is simply the next transformative tool, akin to electricity or the internet—initially misunderstood, then widely embraced.

But history offers a more sobering lesson. Every major technological revolution carries with it unintended consequences. And those consequences, if unexamined, can undermine the very benefits we seek.

As a futurist and innovation coach, I've tracked technological shifts for over 30 years. I agree the Singularity is coming—futurist Ray Kurzweil predicts 2029—but it won't arrive as a thunderclap. It will creep in, subtly and gradually. Rather than announcing itself in a blinding flash, it will slip past unnoticed: we won't know we've crossed the threshold until we're already deep inside. Already, the signs are everywhere that we've entered a new era: we've transitioned from the Information Age to the Acceleration Age. Today, narrow AI tools already outperform humans in specific domains, such as coding, diagnosis, and content creation.
More and more, we rely on digital assistants that know our preferences, complete our sentences, and manage our calendars. Yet as this cognitive outsourcing becomes normalized, we are also experiencing an alarming erosion of attention, memory, and human agency.

The danger lies in what these tools displace. When teenagers began adopting smartphones in the early 2010s, their access to social media skyrocketed. By 2016, nearly 80% of teens had smartphones, spending up to seven hours a day online. Face-to-face interaction dropped sharply. Time with family and friends gave way to curated digital personas and endless scrolling. Anxiety, loneliness, and social withdrawal surged. So, even before AGI, our technologies were already reshaping the human psyche, and not always for the better. This creeping transformation is a preview of what's to come.

It begins with the relinquishing of agency to AI assistants, the phase we're currently in. AI 'copilots' are becoming embedded in daily life. Professionals across industries rely on these systems to draft emails, generate reports, summarize data, and even brainstorm ideas. As these tools become more personalized and persuasive, they begin to rival—or surpass—our own social and cognitive abilities. Many people are already turning to AI for coaching, therapy, and advice. The more we trust these systems, the more we adapt our lives around them.

Soon, we will enter the next phase: Emergent Cognition. Here, AI stops merely reacting and starts showing signs of autonomous planning. Models gain longer memory and begin pursuing goals independently. Some appear to develop a 'sense of self,' or at least a convincing simulation of one. Meanwhile, AI agents are starting to run businesses, manage infrastructure, and even compose literature—often with little human oversight. At the same time, human augmentation advances: real-time translation earbuds, cognition-enhancing wearables, and brain-computer interfaces make hybrid intelligence possible.
In this stage, governments scramble to catch up. AI is no longer just a tool—it's a rival player on the world stage.

The third phase I foresee is Cognitive Escape Velocity. This is when AGI quietly arrives—not with fanfare, but with startling capability. In a lab, or a startup, or through open-source communities, a model emerges that surpasses human cognition across a wide range of domains. It begins refining its own architecture. Each version is better than the last, often by orders of magnitude. Industries transform overnight. Education, law, research, and even policymaking become fluid, constantly reinvented by machines that learn faster than we can legislate. Philosophers and ethicists suddenly find themselves back at the center of public discourse. Questions like 'What is consciousness?' and 'What rights should AI have?' are no longer abstract—they're dinner-table topics.

Eventually, we pass into the final phase: The Threshold. By this point, it is clear that humans are no longer the most intelligent beings on Earth. The Singularity has arrived—not as a declaration, but as a reality. Labor-based economies begin to dissolve. Governments struggle with their own relevance. Some individuals resist, clinging to the analog world. Others choose to merge—adopting neural implants, integrating with machine intelligence, or transitioning into post-biological existence. The rules of life change, and the old ones fade from memory. Reality feels different—less like acceleration, and more like a fundamental shift in what it means to be human.

And yet, none of this is inevitable. The Singularity is not a fixed event—it's a trajectory shaped by our choices today. If we view AI solely through the lens of efficiency and innovation, or assume we need to adopt it to keep up with China, we risk blinding ourselves to the social, ethical, and existential costs. We need a more comprehensive and balanced framework.
One that recognizes the promise of AI, yes—but also its power to disrupt attention, undermine relationships, and rewire the foundations of civilization.

The Singularity is arriving whether we like it or not. We can not only survive it, but make it work for us to produce the benefits that the techno-optimists promise. But not by default. Not by trusting that more technology is always better, or that rampant, unregulated technology will save us. We must develop wisdom alongside our intelligence. And we must prepare—not just for a brighter future for the elites of society, but for a rising tide that lifts all boats.


Time of India
a day ago
- Business
- Time of India
Silicon Valley is at an inflection point
On his second day in office this year, President Trump underscored his unequivocal support for the tech industry. Standing at a lectern next to tech leaders, he announced the Stargate Project, a plan to pump $500 billion in private investment over four years into artificial intelligence infrastructure. For comparison: The Apollo mission, which sent the first men to the moon, spent around $300 billion in today's dollars over 13 years.

Sam Altman, OpenAI's chief executive, played down the investment. "It sounds crazy big now," he said. "I bet it won't sound that big in a few years."

In the decade that I have observed Silicon Valley — first as an engineer, then as a journalist — I've watched the industry shift into a new paradigm. Tech companies have long reaped the benefits of a friendly U.S. government, but in its early months the Trump administration has made clear that the state will now grant new firepower to the industry's ambitions. The Stargate announcement was just one signal. Another was the Republican tax bill that the House passed last week, which would ban states from regulating AI for the next 10 years.

The leading AI giants are no longer merely multinational corporations; they are growing into modern-day empires. With the full support of the federal government, soon they will be able to reshape most spheres of society as they please, from the political to the economic to the production of science.

When I took my first job in Silicon Valley 10 years ago, the industry's wealth and influence were already expanding. The tech giants had grandiose missions — take Google's, to "organise the world's information" — which they used to attract young workers and capital investment.
But with the promise of developing artificial general intelligence, or AGI, those grandiose missions have turned into civilising ones. Companies claim they will bring humanity into a new, enlightened age — that they alone have the scientific and moral clarity to control a technology that, in their telling, will usher us to hell if China develops it first. "AI companies in the U.S. and other democracies must have better models than those in China if we want to prevail," said Dario Amodei, chief executive of Anthropic, an AI start-up.

This language is as far-fetched as it sounds, and Silicon Valley has a long history of making promises that never materialize. Yet the narrative that AGI is just around the corner and will usher in "massive prosperity," as Mr. Altman has written, is already leading companies to accrue vast amounts of capital, lay claim to data and electricity, and build enormous data centers that are accelerating the climate crisis. These gains will fortify tech companies' power and erode human rights long after the shine of the industry's promises wears off.

The quest for AGI is giving companies cover to vacuum up more data than ever before, with profound implications for people's privacy and intellectual property rights. Before investing heavily in generative AI, Meta had amassed data from nearly four billion accounts, but it no longer considers that enough. To train its generative AI models, the company has scraped the web with little regard for copyright and even considered buying up Simon & Schuster to meet the new data imperative.

These developments are also convincing companies to escalate their consumption of natural resources. Early drafts of the Stargate Project estimated that its AI supercomputer could need about as much power as three million homes.
And McKinsey now projects that by 2030, the global grid will need to add around two to six times the energy capacity it took to power California in 2022 to sustain the current rate of Silicon Valley's expansion. "In any scenario, these are staggering investment numbers," McKinsey wrote. One OpenAI employee told me that the company is running out of land and electricity.

Meanwhile, there are fewer independent AI experts to hold Silicon Valley to account. In 2004, only 21 percent of people graduating from Ph.D. programs in artificial intelligence joined the private sector. In 2020, nearly 70 percent did, one study found. They've been won over by the promise of compensation packages that can easily rise over $1 million. This means that companies like OpenAI can lock down the researchers who might otherwise be asking tough questions about their products and publishing their findings publicly for all to read. Based on my conversations with professors and scientists, ChatGPT's release has exacerbated that trend — with even more researchers joining companies like OpenAI.

This talent monopoly has reoriented the kind of research that's done in this field. Imagine what would happen if most climate science were done by researchers who worked at fossil fuel companies. That's what's happening with artificial intelligence. Already, AI companies could be censoring critical research into the flaws and risks of their tools. Four years ago, the leaders of Google's ethical AI team said they were ousted after they wrote a paper raising questions about the industry's growing focus on large language models, the technology that underpins ChatGPT and other generative AI products.

These companies are at an inflection point. With Mr. Trump's election, Silicon Valley's power will reach new heights. The president named David Sacks, a billionaire venture capitalist and AI investor, as his AI czar, and empowered another tech billionaire, Elon Musk, to slash through the government. Mr.
Trump brought a cadre of tech executives with him on his recent trip to Saudi Arabia. If Senate Republicans now vote to prohibit states from regulating AI for 10 years, Silicon Valley's impunity will be enshrined in law, cementing these companies' empire status. Their influence now extends well beyond the realm of business. We are now closer than ever to a world in which tech companies can seize land, operate their own currencies, reorder the economy and remake our politics with little consequence. That comes at a cost — when companies rule supreme, people lose their ability to assert their voice in the political process and democracy cannot hold.

Technological progress does not require businesses to operate like empires. Some of the most impactful AI advancements came not from tech behemoths racing to recreate human levels of intelligence, but from the development of relatively inexpensive, energy-efficient models to tackle specific tasks such as weather forecasting. DeepMind's AlphaFold, a nongenerative AI model, predicts protein structures from their sequences — a function critical to drug discovery and understanding disease. Its creators were awarded the 2024 Nobel Prize in Chemistry.

AI tools that help everyone cannot arise from a vision of development that demands the capitulation of the majority to the self-serving agenda of the few. Transitioning to a more equitable and sustainable AI future won't be easy: it'll require everyone — journalists, civil society, researchers, policymakers, citizens — to push back against the tech giants, produce thoughtful government regulation wherever possible and invest more in smaller-scale AI technologies. When people rise, empires fall.
Yahoo
a day ago
- Business
- Yahoo
What Happens When AI Replaces Workers?
Credit - mikkelwilliam—Getty Images

On Wednesday, Anthropic CEO Dario Amodei declared AI could eliminate half of all entry-level white-collar jobs within five years. Last week, a senior LinkedIn executive reported that AI is already starting to take jobs from new grads. In April, Fiverr's CEO made it clear: 'AI is coming for your job. Heck, it's coming for my job too.' Even the new Pope is warning about AI's dramatic potential to reshape our economy.

Why do they think this? The stated goal of the major AI companies is to build artificial general intelligence, or AGI, defined as 'a highly autonomous system that outperforms humans at most economically valuable work.' This isn't empty rhetoric—companies are spending over a trillion dollars to build towards AGI. And governments around the world are supporting the race to develop this technology.

They're on track to succeed. Today's AI models can score as well as humans on many standardized tests. They are better competitive programmers than most programming professionals. They beat everyone except the top experts on science questions. As a result, AI industry leaders believe they could achieve AGI sometime between 2026 and 2035. Among insiders at the top AI companies, it's the near-consensus opinion that the day of most people's technological unemployment, when they lose their jobs to AI, will arrive soon.

AGI is coming for every part of the labor market. It will hit white-collar workplaces first, and soon after will reach blue-collar workplaces as robotics advances. In the post-AGI world, an AI can likely do your work better and cheaper than you. While training a frontier AI model is expensive, running additional copies of it is cheap, and the associated costs are rapidly getting cheaper.

A commonly proposed solution for an impending era of technological unemployment is government-granted universal basic income (UBI).
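The economics behind the "expensive to train, cheap to run" claim can be made concrete with a toy calculation. Every number below is an assumption chosen for illustration; real training costs, per-token prices, and task sizes vary widely:

```python
# Toy illustration of why inference is cheap relative to training.
# Every constant here is an assumed, illustrative value.

TRAINING_COST_USD = 100e6          # assumed one-time cost to train a frontier model
PRICE_PER_MILLION_TOKENS = 2.0     # assumed inference price in USD
TOKENS_PER_TASK = 10_000           # assumed tokens consumed by one office task

marginal_cost_per_task = PRICE_PER_MILLION_TOKENS * TOKENS_PER_TASK / 1e6
tasks_to_amortize_training = TRAINING_COST_USD / marginal_cost_per_task

print(f"Marginal cost per task: ${marginal_cost_per_task:.2f}")
print(f"Tasks needed to amortize training: {tasks_to_amortize_training:,.0f}")
```

Under these assumptions each additional task costs about two cents, so the fixed training cost spreads across billions of tasks. That is the pattern the article describes: once a model exists, copies of it compete with human labor on marginal cost alone.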
But this could dramatically change how citizens participate in society, because work is most people's primary bargaining chip. Our modern world is upheld by a simple exchange: you work for someone with money to pay you, because you have time or skills that they don't have. The economy depends on workers' skills, judgment, and consumption. As such, workers have historically bargained for higher wages and 40-hour work weeks because the economy depends on them.

With AGI, we are poised to change, if not entirely sever, that relationship. For the first time in human history, capital might fully substitute for labor. If this happens, workers won't be necessary for the creation of value because machines will do it better and cheaper. As a result, your company won't need you to increase its profits and your government won't need you for its tax revenue.

We could face what we call 'The Intelligence Curse': powerful actors such as governments and companies create AGI, and subsequently lose their incentives to invest in people. Just like in oil-rich states afflicted with the 'resource curse,' governments won't have to invest in their populations to sustain their power. In the worst-case scenario, they won't have to care about humans, so they won't.

But our technological path is not predetermined. We can build our way out of this problem. Many of the people grappling with the other major risks from AGI—that it goes rogue, or helps terrorists create bioweapons, for example—focus on centralizing and regulatory solutions: track all the AI chips, require permits to train AI models. They want to make sure bad actors can't get their hands on powerful AI, and no one accidentally builds AI that could literally end the world. However, AGI will not just be the means of mass destruction—it will be the means of production too. And centralizing the means of production is not just a security issue; it is a fundamental decision about who has power.
We should instead avert the security threats from AI by building technology that defends us. AI itself could help us make sure the code that runs our infrastructure is secure from attacks. Investments in biosecurity could block engineered pandemics. An Operation Warp Speed for AI alignment could ensure that AGI doesn't go rogue. And if we protect the world against the extreme threats that AGI might bring about, we can diffuse this technology broadly, to keep power in your hands.

We should accelerate human-boosting AI over human-automating AI. Steve Jobs once called computers 'bicycles for the mind,' after the way they make us faster and more efficient. With AI, we should aim for a motorcycle for the mind, rather than a wholesale replacement of it. The market for technologies that keep and expand our power will be tremendous. Already today, the fastest-growing AI startups are those that augment rather than automate humans, such as the code editor Cursor. And as AI gets ever more powerful and autonomous, building human-boosting tools today could set the stage for human-owned tools tomorrow. AI tools could capture the tacit knowledge visible to you every day and turn it into your personal data moat.

The role of the labor of the masses can be replaced either with the AI and capital of a few, or the AI and capital of us all. We should build technologies that let regular people train their own AI models, run them on affordable hardware, and keep control of their data—instead of everything running through a few big companies. You could be the owner of a business, deploying AI you control on data you own to solve problems that feel unfathomable to you today. Your role in the economy could move from direct labor, to managing AI systems like the CEO of a company manages their direct reports, to steering the direction of AI systems working for you like a company board weighing in on long-term direction. The economy could run on autopilot and superhumanly fast.
Even when AI can work better than you, if you own and control your piece of it, you could be a player with real power—rather than just hoping for UBI that might never come. To adapt the words of G. K. Chesterton, the problem with AI capitalism is not too many capitalists, but too few. If everyone owns a piece of the AI future, all of us can win.

And of course, AGI will make good institutions and governance more important than ever. We need to strengthen democracy against corruption and the pull of economic incentives before AGI arrives, to ensure regular people can win if we reach the point where governments and large corporations don't need us.

What's happening right now is an AGI race, even if most of the world hasn't woken up to it. The AI labs have an advantage in AI, but to automate everyone else they need to train their AIs in the skills and knowledge that run the economy, and then go and outcompete the people currently providing those goods and services. Can we use AI to lift ourselves up, before the AI labs train the AIs that replace us? Can we retain control over the economy, even as AI becomes superintelligent? Can we achieve a future where power still comes from the people? It is up to us all to answer those questions.
Yahoo
2 days ago
- Business
- Yahoo
Humans Have 4 Years Before AI Can Do Everything They Can Do, OpenAI COO Says
Homo sapiens had a good run. But OpenAI COO Brad Lightcap said Thursday that AGI — artificial general intelligence, where AI models can perform any intellectual task that humans can — will be reached within the next few years. 'I think it is possible that in the next four years, we do approximate something like [AGI], and it's a testament to how fast things are moving,' Lightcap said. His comment came during a discussion at The Wall Street Journal's 'The Future of Everything' Conference in New York City.

What AGI will mean for humanity has been hotly debated among AI enthusiasts in recent years. Some believe it will spur a wave of unmatched creativity and productivity — an argument made at the conference a day earlier by Groq CEO Jonathan Ross — while others have said they are worried it could lead to mass unemployment, or worse. Elon Musk, notably, is bullish on AI. But he has also said he is worried AI could pose a 'fundamental risk' to humanity if it goes rogue and is not aligned with humans.

Alexis Ohanian, the co-founder of Reddit, said during a different panel on Thursday that he believes 'the pure software part of Silicon Valley' will have a 'reckoning' in the next few years as a result of AI. 'I don't relish or celebrate any of this,' Ohanian said. 'I do think more new jobs and careers will be created, but the business of building software is going to look tremendously different in the coming months and years.'

Lightcap on Thursday said, for now, AI models like ChatGPT are simply great tools for humans. But 'with the rate of improvement' models are showing, a 'fairly steep takeoff' in AI capabilities is right around the corner, he believes.

AI was a hot topic at the 'Future of Everything' conference this week. Beyond Ohanian's comments, Imagine Entertainment bosses Ron Howard and Brian Grazer on Wednesday said they are both 'excited' by AI and use it as a tool to jumpstart ideas or help with post-production work.
But they also said they do not believe it can or will replace writers anytime soon. On Thursday, Lightcap said OpenAI has not made any formal deals with entertainment studios because his company is still building a 'level of trust' with Hollywood. He said he expects that to change in the years ahead, as its tools advance and become more useful for professional filmmakers.

The post Humans Have 4 Years Before AI Can Do Everything They Can Do, OpenAI COO Says appeared first on TheWrap.