
Mark Zuckerberg's Manhattan-sized data centre to power Meta's next-gen AI revolution by 2026
In a post shared on Threads, Meta's social media platform, Zuckerberg said Prometheus, located in Ohio, will be the first in a series of 'titan clusters', vast data centre complexes intended to meet the enormous energy and computational demands of next-generation AI models. He described these facilities as 'multi-gigawatt clusters', making them among the largest in the world.
The announcement underlines Meta's growing investment in AI infrastructure, part of its broader strategy to develop 'superintelligence', an advanced form of AI that could outperform humans in a wide range of tasks. According to Zuckerberg, the company intends to spend 'hundreds of billions of dollars' on these efforts, signalling a long-term commitment to lead the global AI race.
Notably, the scale of the infrastructure is unprecedented. Meta's largest data centre, currently under construction in Richland Parish, Louisiana, is said to be nearly the size of Manhattan. By comparison, most existing data centres operate with only a few hundred megawatts of capacity. Meta's upcoming facilities aim to cross the one-gigawatt threshold, enough energy to power around 900,000 homes annually, setting a new benchmark in computing capacity.
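As a rough, illustrative sanity check on that homes comparison (the assumptions below are ours, not Meta's or Bloomberg's), a steady one-gigawatt draw over a year works out to the annual consumption of roughly 800,000 to 900,000 typical US households:

```python
# Back-of-the-envelope check on the "~900,000 homes" comparison.
# Assumptions (illustrative, not from the article): the site draws a steady
# 1 GW, and an average US household uses roughly 10,500 kWh per year.
capacity_kw = 1_000_000                              # 1 GW expressed in kilowatts
hours_per_year = 24 * 365
annual_energy_kwh = capacity_kw * hours_per_year     # about 8.76 billion kWh
household_kwh_per_year = 10_500
print(f"{annual_energy_kwh / household_kwh_per_year:,.0f} homes")  # ~834,000
```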
Meta's aggressive expansion comes amid growing industry competition. Other major tech players, including OpenAI and Oracle, are also investing in large-scale data centre projects to keep up with the soaring computational needs of generative AI models. Analyst group SemiAnalysis has forecasted that Meta may be the first to achieve a 'supercluster' exceeding one gigawatt of capacity.
Zuckerberg's renewed focus on AI follows internal frustrations with the company's previous performance in the field. Over recent months, he has personally recruited a formidable team of AI experts, poaching talent from rivals such as OpenAI and Google DeepMind. Among the high-profile additions is Alexandr Wang, co-founder of Scale AI, who now serves as Meta's Chief AI Officer following a $14.3 billion investment for a 49 per cent stake in his company.
Other notable hires include former GitHub CEO Nat Friedman, AI entrepreneur Daniel Gross, and ex-Apple engineer Ruoming Pang, who reportedly joined Meta with a compensation package exceeding $200 million.
Despite these significant capital outlays, Meta's core advertising business across Facebook, Instagram, WhatsApp and Messenger continues to fuel robust revenue growth, giving the company confidence to fund its AI ambitions internally.
(With inputs from Bloomberg)

Related Articles


Mint
Mint Explainer: Is OpenAI exaggerating the powers of its new ChatGPT Agent?
Leslie D'Monte

OpenAI has flagged the agent as high-risk under its safety framework. Is this just marketing hype or a sign that AI is genuinely becoming more powerful and autonomous?

OpenAI CEO Sam Altman. Photo: AFP

On Thursday, OpenAI launched its autonomous ChatGPT Agent, a tool that's capable of finding and buying things online, managing your calendar, and booking you an appointment with a doctor. It's essentially a digital assistant that doesn't just provide information but completes actual tasks. That said, OpenAI has flagged the agent as high-risk under its safety framework, warning it could potentially be used to create dangerous biological or chemical substances. Is this just marketing hype, timed to build momentum for the launch of GPT-5, or a sign that AI agents are genuinely becoming more powerful and autonomous, akin to the agents who protect the computer-generated world of The Matrix?

What is ChatGPT Agent?
Say you want to rearrange your calendar, find a doctor and schedule an appointment, or research competitors and deliver a report. ChatGPT Agent can now do it for you. The agent can browse websites, run code, analyse data, and even create slide decks or spreadsheets, all based on your instructions. It combines the strengths of OpenAI's earlier tools, Operator (which could navigate the web) and deep research (which could analyse and summarise information), into a single system. You stay in control throughout: ChatGPT asks for permission before doing anything important, and you can stop or take over at any time. This new capability is available to Pro, Plus, and Team users through the tools dropdown.

How does it work?
ChatGPT Agent uses a powerful set of tools to complete tasks, including a visual browser to interact with websites like a human, a text-based browser for reasoning-heavy searches, a terminal for code execution, and direct application programming interface (API) access. It can also connect to apps such as Gmail or GitHub to fetch relevant information. You can log in to websites within the agent's browser, allowing it to dig deeper into personalised content. All of this runs on its own virtual computer, which keeps track of context even across multiple tools. The agent can switch between browsers, download and edit files, and adapt its methods to complete tasks quickly and accurately. It's built for back-and-forth collaboration: you can step in anytime to guide or change the task, and ChatGPT can ask for more input when needed. If a task takes time, you'll get updates and a notification on your phone once it's done.

Has OpenAI tested its performance?
OpenAI said that on Humanity's Last Exam (HLE), which tests expert-level reasoning across subjects, ChatGPT Agent achieved a new high score of 41.6, rising to 44.4 when multiple attempts were run in parallel and the most confident response was selected. On FrontierMath, the toughest known maths benchmark, the agent scored 27.4% using tools such as a code-executing terminal, far ahead of previous models. In real-world tasks, ChatGPT Agent performs at or above human levels in about half of the cases, based on OpenAI's internal evaluations.
These tasks include building financial models, analysing competitors, and identifying suitable sites for green hydrogen projects. ChatGPT Agent also outperforms others on specialised tests such as DSBench for data science and SpreadsheetBench for spreadsheet editing (45.5% vs Copilot Excel's 20.0%). On BrowseComp and WebArena, which test browsing skills, the agent achieves the highest scores to date, according to OpenAI.

What are some of the things it can do?
Consider the case of travel planning. The agent won't just suggest ideas but will navigate booking websites, fill out forms, and even make reservations once you give it permission. You can also ask it to read your emails, find meeting invitations, and automatically schedule appointments in your calendar, or even draft and send follow-up emails. This level of coordination typically required juggling between apps, but the agent manages it in a single conversational flow. Another example involves shopping and price comparison. You can tell the agent to 'order the best-reviewed smartphone under ₹15,000', and it can search online stores, compare prices and reviews, and proceed to checkout on a preferred platform. Customer support and task automation are other examples, where the agent is used to troubleshoot an issue, log into support portals, and even file return or refund requests.

How are AI agents typically built?
Unlike basic chatbots, AI agents are autonomous systems that can plan, reason, and complete complex, multi-step tasks with minimal input, such as coding, data analysis, or generating reports. They are built by combining ways to take in information, think, and take action. Developers begin by deciding what the agent should do, after which the agent collects data such as text or images from its environment. AI agents use large language models (LLMs) like GPT-4 as their core 'brain', which allows them to understand and respond to natural language instructions. To allow AI agents to take action, developers connect the LLM to things like a web browser, code editor, calculator, and APIs for services such as Gmail or Slack. Frameworks like LangChain help integrate these parts and keep track of information. Some AI agents learn from experience and get better over time. Testing and careful setup make sure they work well and follow rules. (A minimal sketch of this pattern appears below.)

Does ChatGPT Agent have credible competition?
Google's Project Astra, part of its Gemini AI line, is developing a multimodal assistant that can see, hear, and respond in real time. Gemini CLI is an open-source AI agent that brings Google's Gemini model directly to the terminal for fast, lightweight access; it integrates with Gemini Code Assist, offering developers on all plans AI-powered coding in both VS Code and the command line. Microsoft is embedding Copilot into Windows, Office, and Teams, giving its agent access to workflows, system controls, and productivity tools, soon enhanced by a dedicated Copilot Runtime. Meta is building more socially focused agents within messaging and the metaverse, which could evolve into utility tools. Apple is revamping Siri through Apple Intelligence, combining GPT-level reasoning with strict privacy features and deep on-device integration. Other smart agents include Oracle's Miracle Agent, IBM's Watson tools, Agentforce from Salesforce, Anthropic's Claude 3.5, and Perplexity AI's action-oriented agents through its Comet project, blending search with agentic behaviour.
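To make the 'LLM brain plus tools' pattern described above concrete, here is a minimal, illustrative plan-act-observe loop in plain Python. It is a sketch under our own assumptions, not OpenAI's, Google's, or LangChain's actual implementation; call_llm and the tool functions are hypothetical stubs.

```python
# Minimal sketch of the "LLM brain + tools" agent pattern described above.
# call_llm and the tools below are hypothetical stubs, not any vendor's real API.

def call_llm(history: str) -> dict:
    # A real agent would send the conversation history to an LLM and parse its
    # reply. This stub issues one search, then finishes, so the sketch runs.
    if "search ->" in history:
        return {"action": "finish", "input": "Done: summarised the search results."}
    return {"action": "search", "input": "best-reviewed smartphone under 15000"}

def web_search(query: str) -> str:
    # Stand-in for a browsing tool; a real one would fetch and summarise pages.
    return f"(pretend search results for: {query})"

def run_code(snippet: str) -> str:
    # Stand-in for a sandboxed code-execution tool.
    return "(pretend execution output)"

TOOLS = {"search": web_search, "run_code": run_code}

def run_agent(task: str, max_steps: int = 10) -> str:
    """Plan-act-observe loop: the LLM picks a tool or finishes, the tool's
    output is appended to the history, and the loop repeats."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = call_llm("\n".join(history))
        if decision["action"] == "finish":
            return decision["input"]
        observation = TOOLS[decision["action"]](decision["input"])
        history.append(f"{decision['action']} -> {observation}")
    return "Stopped: step limit reached."

print(run_agent("Find the best-reviewed smartphone under ₹15,000"))
```

A production agent would replace these stubs with a real LLM API call and genuine browser, code-execution, and email tools, and would add the permission prompts and monitoring discussed elsewhere in this explainer.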
The competitive advantage, though, may go to companies that can integrate these AI agents into everyday applications and act through a single, unified tool, a task that ChatGPT Agent has demonstrated.

Why did OpenAI warn that ChatGPT Agent could be used to trigger biological warfare?
OpenAI claimed ChatGPT Agent's superior capabilities could, in theory, be misused to help someone create dangerous biological or chemical substances. However, it clarified that there was no solid evidence it could actually do so. Regardless, OpenAI is activating the highest level of safety measures under its internal 'preparedness framework'. These include thorough threat modelling to anticipate potential misuse, special training to ensure the model refuses harmful requests, and constant monitoring using automated systems that watch for risky behaviour. There are also clear procedures in place for handling suspicious activity.

Should we take this risk seriously?
Ja-Nae Duane, AI expert, MIT research fellow and co-author of SuperShifts, said the more autonomous the agent, the more permissions and access rights it would require. For example, buying a dress requires wallet access; scheduling an event requires calendar and contact list access. "While standard ChatGPT already presents privacy risks, the risks from ChatGPT Agent are exponentially higher because people will be granting it access rights to external tools containing personal information (like calendar, email, wallet, and more). There's a significant gap between the pace of AI development and AI literacy; many people haven't even fully understood ChatGPT's existing privacy risks, and now they're being introduced to a feature with exponentially more risks," she said.

Duane added that the key risks included data leaks, mistaken actions, prompt injection, and account compromise, especially when handling sensitive information. Malicious actors, she warned, could exploit agents by manipulating inputs, abusing tool access, stealing credentials, or poisoning data to bias outputs. Poor third-party integration and over-reliance on these tools could worsen the impact, while the agent's 'black box' nature would make it hard to trace errors, she added. In the wrong hands, these agents could be weaponised for fraud, phishing, or even to generate malware.

What are the other concern areas for enterprises?
Developers are increasingly deploying AI agents across IT, customer service, and enterprise workflows. According to Nasscom, 46% of Indian firms are experimenting with these agents, particularly in IT, HR, and finance, while manufacturing leads in robotics, quality control, and automation. Beyond concerns around hallucinations, security, privacy, and copyright or intellectual property (IP) violations, a key challenge for businesses is ensuring a return on investment. Gartner noted that many so-called agentic use cases could be handled by simpler tools and predicted that more than 40% of such projects would be scrapped by 2027 over high costs, unclear value, or inadequate risk controls. Of the thousands of vendors in this space, only around 130 are seen as credible; many engage in 'agent washing' by repackaging chatbots, robotic process automation (RPA), or basic assistants as autonomous agents. Nasscom corroborated these concerns, highlighting that 62% of enterprises were still only testing agents in-house.

Why is 'humans-in-the-loop' a must?
OpenAI CEO Sam Altman has advised granting agents only the minimum access needed for each task, not blanket permissions. Nasscom believes that to scale responsibly, enterprises must prioritise human-AI collaboration, trust, and data readiness. It has recommended firms adopt AI agents with a 'human-in-the-loop' approach, reflecting the need for oversight and contextual judgment. According to Duane, users must understand both the tool's strengths and its limits, especially when handling sensitive data. Caution is key, as misuse could have serious consequences. She also emphasised the importance of AI literacy, noting that AI was evolving far faster than most people's understanding of how to use it responsibly.


Economic Times
Don't panic, but act fast: Perplexity CEO on skills needed not just to survive AI job crisis but also thrive in career
As artificial intelligence continues to advance at an accelerated pace, it is reshaping the global workforce and sparking concerns about widespread job displacement. Industry leaders have weighed in on how AI could eliminate traditional roles, especially in white-collar sectors. Among them, Perplexity AI CEO Aravind Srinivas is urging individuals, particularly young professionals, to move beyond panic and instead focus on adapting swiftly. In a recent conversation with Matthew Berman, he shared insights on the kind of skills and mindset needed not just to survive this transition, but to build a successful career in an AI-driven world.

Shift Focus from Scrolling to Skill-Building
Srinivas offered a blunt recommendation to the younger generation: spend less time mindlessly scrolling through social media and more time learning how to use AI tools. He emphasised that those who become proficient with these technologies will gain a strong advantage in the job market as the gap between AI users and non-users widens. He noted that individuals at the forefront of AI adoption will be significantly more employable than those who resist or delay learning. His message was clear: embracing AI is now critical for career growth.

The Pace of Change Is Unforgiving
The Perplexity CEO also acknowledged the immense challenge posed by the speed at which AI is developing. With major advancements occurring every few months, he pointed out that this pace is testing human adaptability like never before. He warned that those unable to keep up risk being pushed out of the workforce. This concern aligns with statements from other tech leaders such as Dario Amodei, CEO of Anthropic, who predicted AI could eliminate half of entry-level white-collar jobs in the next five years. AI pioneer Geoffrey Hinton has also cautioned that artificial intelligence may soon take over much of the routine intellectual labour currently done by humans.

Entrepreneurship as a Safety Net
While acknowledging the threat of job loss, Srinivas sees entrepreneurship as a viable path forward. He suggested that those affected by AI-driven changes could either start their own ventures using AI or join emerging companies that embrace these technologies. In his view, the traditional employment model may shrink, but opportunities will emerge for those who can use AI strategically. He believes the next wave of job creation will come not from large corporations, but from smaller, nimble ventures led by individuals who understand how to integrate AI into their work. This outlook is shared in part by others in the tech world who believe that AI won't just destroy jobs; it will redefine them. Nvidia CEO Jensen Huang has argued that AI will enhance human capabilities and allow people to focus on higher-value work. Despite this optimistic take, Srinivas emphasised the urgency of the moment. He encouraged people to act quickly, not out of fear, but with a sense of responsibility for their own future. Mastering AI, he suggested, will soon be as fundamental as knowing how to use a computer.

Business Standard
Baby Grok: Elon Musk's xAI plans to launch child-friendly AI app soon
Tech billionaire Elon Musk has announced that his AI firm xAI is planning to launch a child-friendly version of its chatbot Grok. The app, named 'Baby Grok', is likely to build upon Grok's capabilities, but with stricter safeguards and curated content to suit younger audiences. "We're going to make Baby Grok @xAI, an app dedicated to kid-friendly content," Musk said in an X post on Sunday (IST). The move signals Musk's intention to expand AI usage into the domain of child education and entertainment, while addressing growing concerns about children's exposure to AI.

Moving away from controversies
Musk's announcement of a new app could help the company pivot away from recent controversies, especially when it comes to protecting young users.

Ahead of the curve
A child-friendly version could also give the chatbot a competitive edge, as most rivals are yet to launch dedicated apps for youngsters. Grok competes with the likes of OpenAI's ChatGPT and Google's Gemini. Recently, the chatbot has been more deeply integrated into Musk's social media platform X, where it publicly engages with users.

Expanding domains
Earlier this month, xAI also unveiled 'Grok for Government', an initiative aimed at developing AI solutions tailored for US government agencies. The project focuses on creating intelligent, agentic workflows to support various administrative and security functions. According to media reports, xAI stated that the programme will bring its advanced AI capabilities to federal, local, state, and national security customers, signalling the firm's growing interest in public sector partnerships.

New updates
On July 9, xAI released the latest version of its AI chatbot, Grok 4, which the company describes as the most intelligent AI model in the world. Grok 4 includes native tool use and real-time search integration, the company said. The firm added that it used Colossus, its 200,000-GPU cluster, to run reinforcement learning training that refines Grok's reasoning abilities at pretraining scale.