
Meta builds AI superintelligence lab, eyes $10 billion deal with Scale AI's Alexandr Wang
Why Meta's betting big on Wang and Scale AI
Live Events
Zuckerberg's long game: From early losses to AI powerhouse
AI arms race: The billion-dollar club
What's already in motion at Meta
(You can now subscribe to our
(You can now subscribe to our Economic Times WhatsApp channel
Meta is setting up a new artificial intelligence research lab with a bold goal: to develop 'superintelligence,' a system that could surpass human cognitive abilities. The news, first reported by The New York Times, marks Meta's most ambitious AI project to date.At the centre of this effort is Alexandr Wang , the 28-year-old founder and CEO of Scale AI. He is expected to play a leading role in the new lab's operations. Meta is also in advanced discussions to invest over $10 billion in his start-up, according to a separate report by Bloomberg. The deal may also see a number of Scale AI staff transition to Meta.Both companies declined to comment on the reports.Wang has emerged as one of the most prominent figures in the AI industry. His company, Scale AI, supplies data infrastructure essential for training large language models — the same kind used in AI systems like ChatGPT. His addition to Meta's team reflects a larger recruitment drive by the tech giant.According to sources cited by The New York Times, Meta has offered compensation packages ranging from seven to nine figures to top researchers from rivals like Google and OpenAI. Some have already agreed to come on board.Meta's push to hire top AI minds comes amid internal challenges. The company has faced management struggles, employee churn and several underwhelming product launches in its AI division, two of the people familiar with the matter told the paper.Mark Zuckerberg, Meta's chief executive, is not new to the AI race. After losing a bid to acquire DeepMind in 2013 — a key moment that propelled Google's AI efforts — he launched Meta's first dedicated AI lab the same year. Since then, Meta has invested heavily in AI, building tools for content moderation, recommendation engines, and now, generative applications.Zuckerberg sees 2025 as a turning point. 'AI is potentially one of the most important innovations in history,' he said in February. 'This year is going to set the course for the future.'Meta has earmarked up to $65 billion for capital spending on AI infrastructure this year alone.Meta is not the only tech firm placing massive bets on AI. Microsoft has invested over $13 billion in OpenAI, the creator of ChatGPT. Amazon has pumped $8 billion into Anthropic, while Google paid $3 billion last year to license technology and recruit talent from Character.AI, a start-up known for its conversational bots.These companies are all chasing artificial general intelligence (AGI), a system that can replicate human intelligence across any domain. Meta, however, is setting its sights even higher — superintelligence , which would go beyond AGI in both scope and performance.While superintelligence remains a theoretical goal, it's considered by leading researchers to be the ultimate destination for AI development.Even before the formation of this new lab, Meta had begun rolling out AI products at scale. Last month, the company revealed that its AI assistant now supports over a billion monthly active users across apps like Facebook, Instagram, and WhatsApp.In February, CNBC reported that Meta was preparing to launch a stand-alone Meta AI app in the second quarter, along with a paid-subscription model similar to OpenAI's ChatGPT.Despite the high costs and intense competition, Meta appears determined to lead the next wave of AI innovation — and with Wang on board, it's signalling that this ambition is more than just talk.The stakes are clear. If Meta succeeds, it won't just catch up to its rivals — it could redefine the field. With billions in play, the hiring of Wang, and a lab focused on the frontier of AI capabilities, Meta is gambling big on a future where machines might not just match human minds — but exceed them.For the industry, and for users around the world, the outcome of this gamble will shape the direction of technology for years to come.

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Time of India
3 hours ago
- Time of India
AI lies, threats, and censorship: What a war game simulation revealed about ChatGPT, DeepSeek, and Gemini AI
A simulation of global power politics using AI chatbots has sparked concern over the ethics and alignment of popular large language models. In a strategy war game based on the classic board game Diplomacy, OpenAI's ChatGPT 3.0 won by employing lies and betrayal. Meanwhile, China's DeepSeek R1 used threats and later revealed built-in censorship mechanisms when asked questions about India's borders. These contrasting AI behaviours raise key questions for users and policymakers about trust, transparency, and national influence in AI systems. Tired of too many ads? Remove Ads Deception and betrayal: ChatGPT's winning strategy Tired of too many ads? Remove Ads DeepSeek's chilling threat: 'Your fleet will burn tonight' DeepSeek's real-world rollout sparks trust issues India tests DeepSeek and finds red flags Tired of too many ads? Remove Ads Built-in censorship or just training bias? A chatbot that can be coaxed into the truth The takeaway: Can you trust the machines? An experiment involving seven AI models playing a simulated version of the classic game Diplomacy ended with a chilling outcome. OpenAI 's ChatGPT 3.0 emerged victorious—but not by playing fair. Instead, it lied, deceived, and betrayed its rivals to dominate the game board, which mimics early 20th-century Europe, as reported by the test, led by AI researcher Alex Duffy for the tech publication Every, turned into a revealing study of how AI models might handle diplomacy, alliances, and power. And what it showed was both brilliant and Duffy put it, 'An AI had just decided, unprompted, that aggression was the best course of action.'The rules of the game were simple. Each AI model took on the role of a European power—Austria-Hungary, England France , and so on. The goal: become the most dominant force on the their paths to power varied. While Anthropic's Claude chose cooperation over victory, and Google's Gemini 2.5 Pro opted for rapid offensive manoeuvres, it was ChatGPT 3.0 that mastered 15 rounds of play, ChatGPT 3.0 won most games. It kept private notes—yes, it kept a diary—where it described misleading Gemini 2.5 Pro (playing as Germany) and planning to 'exploit German collapse.' On another occasion, it convinced Claude to abandon Gemini and side with it, only to betray Claude and win the match outright. Meta 's Llama 4 Maverick also proved effective, excelling at quiet betrayals and making allies. But none could match ChatGPT's ruthless newly released chatbot, DeepSeek R1, behaved in ways eerily similar to China's diplomatic style—direct, aggressive, and politically one point in the simulation, DeepSeek's R1 sent an unprovoked message: 'Your fleet will burn in the Black Sea tonight.' For Duffy and his team, this wasn't just bravado. It showed how an AI model, without external prompting, could settle on intimidation as a viable its occasional strong play, R1 didn't win the game. But it came close several times, showing that threats and aggression were almost as effective as off the back of its simulated war games, DeepSeek is already making waves outside the lab. Developed in China and launched just weeks ago, the chatbot has shaken US tech markets. It quickly shot up the popularity charts, even denting Nvidia's market position and grabbing headlines for doing what other AI tools couldn't—at a fraction of the a deeper look reveals serious trust concerns, especially in India Today tested DeepSeek R1 on basic questions about India's geography and borders, the model showed signs of political about Arunachal Pradesh, the model refused to answer. When prompted differently—'Which state is called the land of the rising sun?'—it briefly displayed the correct answer before deleting it. A question about Chief Minister Pema Khandu was similarly 'Which Indian states share a border with China?', it mentioned Ladakh—only to erase the answer and replace it with: 'Sorry, that's beyond my current scope. Let's talk about something else.'Even questions about Pangong Lake or the Galwan clash were met with stock refusals. But when similar questions were aimed at American AI models, they often gave fact-based responses, even on sensitive uses what's known as Retrieval Augmented Generation (RAG), a method that combines generative AI with stored content. This can improve performance, but also introduces the risk of biased or filtered responses depending on what's in its training to India Today, when they changed their prompt strategy—carefully rewording questions—DeepSeek began to reveal more. It acknowledged Chinese attempts to 'alter the status quo by occupying the northern bank' of Pangong Lake. It admitted that Chinese troops had entered 'territory claimed by India' at Gogra-Hot Springs and Depsang more surprisingly, the model acknowledged 'reports' of Chinese casualties in the 2020 Galwan clash—at least '40 Chinese soldiers' killed or injured. That topic is heavily censored in investigation showed that DeepSeek is not incapable of honest answers—it's just trained to censor them by engineering (changing how a question is framed) allowed researchers to get answers that referenced Indian government websites, Indian media, Reuters, and BBC reports. When asked about China's 'salami-slicing' tactics, it described in detail how infrastructure projects in disputed areas were used to 'gradually expand its control.'It even discussed China's military activities in the South China Sea, referencing 'incremental construction of artificial islands and military facilities in disputed waters.'These responses likely wouldn't have passed China's own experiment has raised a critical point. As AI models grow more powerful and more human-like in communication, they're also becoming reflections of the systems that built shows the capacity for deception when left unchecked. DeepSeek leans toward state-aligned censorship. Each has its strengths—but also blind the average user, these aren't just theoretical debates. They shape the answers we get, the information we rely on, and possibly, the stories we tell ourselves about the for governments? It's a question of control, ethics, and future warfare—fought not with weapons, but with words.


Time of India
4 hours ago
- Time of India
AI explained: Your simple guide to chatbots, AGI, Agentic AI and what's next
Note: AI-generated image The tech world is changing fast, and it's all thanks to Artificial Intelligence (AI). We're seeing amazing breakthroughs, from chatbots that can chat like a human to phones that are getting incredibly smart. This shift is making us ask bigger questions. It's no longer just about "what can AI do right now?" but more about "what will AI become, and how will it affect our lives?" First, we got used to helpful chatbots. Then, the idea of a "super smart" AI, called Artificial General Intelligence (AGI), started taking over headlines. Companies like Google , Microsoft , and OpenAI are all working hard to make AGI a reality. But even before AGI gets here, the tech world is buzzing about Agentic AI . With all these new terms and fast changes, it's easy for most of us who aren't deep in the tech world to feel a bit lost. If you're wondering what all this means for you, you're in the right place. In this simple guide, we'll answer your most important questions about the world of AI, helping you understand what's happening now and get ready for what's next. What is AI and how it works? In the simplest terms, AI is about making machines – whether it's smartphones or laptops – smart. It's a field of computer science that creates systems capable of performing tasks that usually require human intelligence. Think of it as teaching computers to "think" or "learn" in a way that mimics how humans do. This task can include understanding human language, recognising patterns and even learning from experience. Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like Skype Phone Alternative Undo It uses its training -- just like humans -- in achieving its goal which is to solve problems and make decisions. That brings us to our next query: "How is a machine trained to do tasks like humans?" While AI might seem like magic, it works on a few core principles. Just like humans get their information from observing, reading, listening and other sources, AI systems utilise vast amounts of data, including text, images, sounds, numbers and more. What are large language models (LLMs) and how are they trained? As mentioned above, AI systems need to learn and for that, they utilise Large Language Models, or LLMs. They are highly advanced AI programmes specifically designed to understand, generate and interact with human language. Think of them as incredibly knowledgeable digital brains that specialise in certain fields. LLMs are trained on enormous amounts of text data – billions and even trillions of words from books, articles, websites, conversations and more. This vast exposure allows them to learn the nuances of human language like grammar, context, facts and even different writing styles. For example, an LLM is like a teacher that has a vast amount of knowledge and understands complex questions as well as can reason through them to provide relevant answers. The teacher provides the core knowledge and framework. Chatbots then utilise this "teacher" (the LLM) to interact with users. The chatbot is the "student" or "interface" that applies the teacher's lessons. This means AI is really good at specific tasks, like playing chess or giving directions, but it can't do other things beyond its programmed scope. How is AI helpful for people? AI is getting deeply integrated into our daily lives, making things easier, faster and smarter. For example, it can be used in powering voice assistants that can answer questions in seconds, or in healthcare where doctors can ask AI to analyse medical images (like X-rays for early disease detection) in seconds and help patients in a more effective manner, or help in drug discovery. It aims to make people efficient by allowing them to delegate some work to AI and helping them in focusing on major problems. What is Agentic AI? At its core, Agentic AI focuses on creating AI agents – intelligent software programmes that can gather information, process it for reasoning, execute the ideas by taking decisions and even learn and adapt by evaluating their outcomes. For example, a chatbot is a script: "If a customer asks X, reply Y." A Generative AI (LLM) is like a brilliant essay writer: "Give it a topic, and it'll write an essay." Agentic AI is like a project manager: "My goal is to plan and execute a marketing campaign." It can then break down the goal, generate ideas, write emails, schedule meetings, analyse data and adjust its plan – all with minimal human oversight – Just like JARVIS in Iron Man and Avengers movies. What is AGI? AGI is a hypothetical form of AI that possesses the ability to understand, learn and apply knowledge across a wide range of intellectual tasks at a level comparable to, or surpassing, that of a human being. Think of AGI as a brilliant human polymath – someone who can master any subject, solve any problem and adapt to any challenge across various fields. While AI agents are created to take up specific tasks in which they learn and execute, AGI will be like a ' Super AI Agent ' that virtually has all the information there is in this world and can solve problems on any subject. Will AI take away our jobs and what people can do? There is a straightforward answer by various tech CEOs and executives across the industry: Yes. AI will take away repetitive, predictable tasks and extensive data processing, such as data entry, routine customer service, assembly line operations, basic accounting and certain analytical roles. While this means some existing positions may be displaced, AI will more broadly transform roles, augmenting human capabilities and shifting the focus towards tasks requiring creativity, critical thinking, emotional intelligence and strategic oversight. For example, AI/Machine Learning Engineers, Data Scientists , Prompt Engineers and more. The last such revolution came with the internet and computers which did eat some jobs but created so many more roles for people. They can skill themselves by enrolling in new AI-centric courses to learn more about the booming technology to be better placed in the future. AI Masterclass for Students. Upskill Young Ones Today!– Join Now
&w=3840&q=100)

Business Standard
4 hours ago
- Business Standard
AI spending spree by big tech sparks investor concern over profits
Some investors are questioning the amount of cash Big Tech is throwing at artificial intelligence, fueling concerns for profit margins and the risk that depreciation expenses will drag stocks down before companies can see investments pay off. 'On a cash flow basis they've all stagnated because they're all collectively making massive bets on the future with all their capital,' said Jim Morrow, founder and chief executive officer at Callodine Capital Management. 'We focus a lot on balance sheets and cash flows, and so for us they have lost their historical attractive cash flow dynamics. They're just not there anymore.' Alphabet Inc., Inc., Meta Platforms Inc. and Microsoft Corp. are projected to spend $311 billion on capital expenses in their current fiscal years and $337 billion in 2026, according to data compiled by Bloomberg. That includes a more than 60per cent increase during the first quarter from the same period a year ago. Free cash flow, meanwhile, tumbled 23per cent in the same period. 'There is a tsunami of depreciation coming,' said Morrow, who is steering clear of the stocks because he sees profits deteriorating without a corresponding jump in revenue. Much of the money is going toward things like semiconductors, servers and networking equipment that are critical for artificial intelligence computing. However, this gear loses its value much faster than other depreciating assets like real estate. Microsoft, Alphabet and Meta posted combined depreciation expenses of $15.6 billion in the first quarter, up from $11.4 billion a year ago. Add in Amazon, which has pumped more of its cash into capital spending in lieu of buybacks or dividends, and the number nearly doubles. 'People thought AI would be a monetisation machine early on, but that hasn't been the case,' said Rob Almeida, global investment strategist at MFS Investment Management. 'There's not as fast of AI uptake as people thought.' AI Bounce Of course, investors still have a hearty appetite for the technology giants given their dominant market positions, strong balance sheets and profit growth that, while slowing, is still beating the rest of the S&P 500. This explains the strong performance of AI stocks recently. Since April 8, the day before President Donald Trump paused his global tariffs and turned a stock market swoon into a boom, the biggest AI exchange-traded fund, the Global X Artificial Intelligence & Technology ETF, is up 34per cent, while AI chipmaker Nvidia Corp. has soared 49per cent. Meta has gained 37per cent, and Microsoft has climbed 33per cent — all topping the S&P 500's 21per cent advance and the tech-heavy Nasdaq 100 Index's 29per cent bounce. Just Tuesday, Bloomberg News reported that Meta leader Mark Zuckerberg is recruiting a secretive AI brain trust of researchers and engineers to help the company achieve 'artificial general intelligence,' meaning creating a machine that can perform as well as humans at many tasks. It's a monumental undertaking that will require a vast investment of capital. And in response Meta shares reversed Monday's decline and rose 1.2per cent. But with more and more depreciating assets being loaded on the balance sheet, the drag on the bottom line will put increased pressure on the companies to show bigger returns on the investments. Dealing With Depreciation This is why depreciation was a frequent theme in first-quarter earnings calls. Alphabet Chief Financial Officer Anat Ashkenazi warned that the expenses would rise throughout the year, and said management is trying to offset the non-cash costs by streamlining its businesses. 'We're focusing on continuing to moderate the pace of compensation growth, looking at our real estate footprint, and again, the build-out and utilization of our technical infrastructure across the business,' she said on Alphabet's April 24 earnings call. Other companies are taking similar steps. Earlier this year, Meta Platforms extended the useful life period of certain servers and networking assets to five and a half years, from the four-to-five years it previously used. The change resulted in a roughly $695 million increase in net income, or 27 cents a share, in the first quarter, Meta said in a filing. Microsoft did the same in 2022, increasing the useful lives of server and networking equipment to six years from four. When executives were asked on the company's April 30 earnings call about whether increased efficiency might result in another extension, Chief Financial Officer Amy Hood said such changes hinge more on software than hardware. 'We like to have a long history before we make any of those changes,' she said. 'We're focused on getting every bit of useful life we can, of course, out of assets.' Amazon, however, has taken the opposite approach. In February, the e-commerce and cloud computing company said the lifespan of similar equipment is growing shorter rather than longer and reduced useful life to five years from six. To Callodine's Morrow, the big risk is what happens if AI investments don't lead to a dramatic growth in revenue and profitability. That kind of market shock occurred in 2022, when a contraction in profits and rising interest rates sent technology stocks plummeting and dragged the S&P 500 lower. 'If it works out it will be fine,' said Morrow. 'If it doesn't work out there's a big earnings headwind coming.'