Jensen Huang may have met his match, and it's not AMD, but a stealthy South Korean challenger


Time of India | 22-07-2025
Who is FuriosaAI, and why are they suddenly in the spotlight?

For years, Nvidia CEO Jensen Huang has reigned supreme in the AI chip world. Despite fierce competition from AMD and Intel, Nvidia's dominance in training and running large language models has gone largely unchallenged—until now. A stealthy South Korean startup, FuriosaAI, is making waves in the global semiconductor scene and may just be the first true threat to Nvidia's AI hardware empire.

Founded in 2017 and backed by heavyweights like Samsung Electronics and Naver Corp, FuriosaAI has flown under the radar for years. But in 2025, the startup shocked the tech world by turning down an $800 million acquisition offer from Meta. That rejection alone signaled confidence—but the real attention came when FuriosaAI landed a major deal with LG AI Research. Instead of selling, FuriosaAI chose to go big. And now it's emerging as a serious AI accelerator powerhouse—exactly the kind of challenger Nvidia hasn't faced from Asia until now.
What makes FuriosaAI's chip so special?

At the heart of FuriosaAI's rise is its new RNGD chip, a next-gen AI inference accelerator built to handle the kind of large-scale models powering everything from ChatGPT-style tools to enterprise automation.

Here's what sets RNGD apart:

Custom-built for AI inference: Unlike Nvidia's general-purpose GPUs, RNGD is optimized solely for AI workloads.
5nm process with HBM3 memory: This enables faster performance with lower energy consumption.
Massive efficiency gains: LG AI Research found that RNGD delivered 2.25x faster inference per watt compared to traditional GPUs.

That kind of performance-per-watt advantage could be a game-changer—especially for companies scaling AI operations while trying to manage rising power and cooling costs.
LG AI Research just gave FuriosaAI a massive boost

FuriosaAI's biggest breakthrough yet came when LG AI Research announced it would integrate RNGD chips into its EXAONE platform, the large-scale AI system used for research across biotech, telecom, and finance.

For FuriosaAI, this deal is more than a commercial win—it's validation. LG's evaluation didn't just show superior power efficiency; it also demonstrated better cost-performance compared to Nvidia's popular H100 chips, which are the backbone of today's AI data centers.

In other words: FuriosaAI just proved it can compete with—and maybe outperform—the world's most dominant AI chipmaker.
Why did Meta try to buy FuriosaAI?

Meta's failed $800 million bid for FuriosaAI speaks volumes. The social media giant, which is investing billions into building its own generative AI systems, clearly saw value in acquiring a company with proprietary AI hardware.

But the deal reportedly fell apart not over price, but over control: FuriosaAI wanted to stay independent and pursue partnerships with global enterprises instead of becoming a Meta-only operation.

Ironically, Meta is also developing its own AI chip, which it is reportedly testing through Taiwan's TSMC. However, the company's recent friction with Nvidia (Meta was notably excluded from Blackwell GPU order announcements) may have accelerated its chip-buying ambitions.
Is FuriosaAI really a threat to Nvidia?

Let's be clear: Nvidia is still the king of the hill. The company's H100 and upcoming Blackwell chips are powering almost every major AI deployment, from OpenAI to Amazon to Google.

But FuriosaAI represents something different: an efficiency-optimized challenger built outside of Silicon Valley. And as power and cooling costs climb, especially at scale, companies will start looking beyond Nvidia for alternatives.

Clearly, FuriosaAI isn't aiming to replace Nvidia across the board—but it's carving out a crucial niche in AI inference, which is where AI applications go from test labs to real-world products.
Feature | FuriosaAI RNGD | Nvidia H100
Architecture | Custom AI inference NPU | General-purpose GPU
Process | TSMC 5nm | TSMC 4nm
Memory | 48GB HBM3 | 80GB HBM3
TDP | ~180W | ~700W
Inference efficiency | 2.25x per watt vs. GPU | Industry benchmark
Target use case | LLM inference | Training + inference
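To put the per-watt claim in perspective, here is a rough back-of-envelope sketch in Python. It is purely illustrative and not from the article or from FuriosaAI: only the TDP figures in the table above and the 2.25x inference-per-watt number come from the reporting, while the GPU throughput baseline is an arbitrary placeholder assumption.

```python
# Back-of-envelope sketch only: it combines the ~700W/~180W TDP figures from the
# table above with LG AI Research's reported 2.25x inference-per-watt advantage.
# The GPU throughput baseline is a made-up placeholder, not a benchmark result.

H100_TDP_W = 700.0       # approximate H100 TDP (from the comparison table)
RNGD_TDP_W = 180.0       # approximate RNGD TDP (from the comparison table)
PER_WATT_GAIN = 2.25     # RNGD inference-per-watt vs. a traditional GPU (LG AI Research figure)

GPU_REQ_PER_SEC = 100.0  # hypothetical GPU throughput at full TDP -- assumption for illustration
REQUESTS = 1_000_000_000 # example workload: one billion inference requests

gpu_req_per_joule = GPU_REQ_PER_SEC / H100_TDP_W        # requests served per joule on the GPU
rngd_req_per_joule = gpu_req_per_joule * PER_WATT_GAIN  # apply the claimed per-watt advantage
rngd_req_per_sec = rngd_req_per_joule * RNGD_TDP_W      # implied per-chip throughput at RNGD's TDP

gpu_kwh = REQUESTS / gpu_req_per_joule / 3.6e6          # joules -> kWh
rngd_kwh = REQUESTS / rngd_req_per_joule / 3.6e6

print(f"Implied RNGD throughput:      {rngd_req_per_sec:.0f} requests/s per chip")
print(f"GPU energy for 1B requests:   {gpu_kwh:,.0f} kWh")
print(f"RNGD energy for 1B requests:  {rngd_kwh:,.0f} kWh")
print(f"Energy saved with RNGD:       {100 * (1 - rngd_kwh / gpu_kwh):.0f}%")
```

Whatever throughput baseline is assumed, a 2.25x per-watt advantage works out to roughly a 56 percent reduction in energy for the same inference volume, since 1 - 1/2.25 ≈ 0.56.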
What does this mean for the global AI chip market?
FuriosaAI's edge lies in energy efficiency, affordability, and regional backing.
Nvidia's strength remains raw performance and broad developer support.
Meta's pivot toward in-house chips signals a growing trend toward vertical integration in AI.

The race is officially heating up. With Meta, Microsoft, Amazon, and other Big Tech players pushing to reduce their reliance on Nvidia, there's enormous demand for credible alternatives. This competitive shake-up is especially notable because South Korea has long been seen as a memory chip superpower—not a leader in AI accelerators.

FuriosaAI's rise could reshape the semiconductor narrative, adding a powerful new player to the global AI arms race.
Could FuriosaAI disrupt Nvidia's AI dominance?

Nvidia's Jensen Huang has faced plenty of rivals before—Intel, AMD, Google's TPU—but few with the underdog precision and strategic clarity of FuriosaAI. With top-tier backing, an efficient and powerful chip design, and validation from a major global enterprise like LG, this South Korean startup is signaling that it's ready for the big leagues.

Whether that's enough to dethrone Nvidia remains to be seen. But one thing is clear: Jensen Huang may have finally met his match—and it's not AMD, but a stealthy South Korean challenger.

FAQs: Nvidia vs FuriosaAI
What is FuriosaAI?
FuriosaAI is a South Korean chip startup that makes powerful AI chips, and it's now gaining attention for offering faster, more energy-efficient alternatives to Nvidia's AI hardware.

How fast is FuriosaAI's RNGD chip?
FuriosaAI's RNGD chip delivers over 2.25x faster inference per watt than Nvidia's GPUs, using less power and offering better cost performance for AI workloads.

Why did FuriosaAI reject Meta's offer?
FuriosaAI turned down Meta's offer because it wanted to stay independent and pursue its own vision of working with global partners like LG.

How does LG use FuriosaAI's chips?
LG uses FuriosaAI's chips to run its large language models (LLMs) for research in industries like biotech, telecom, and finance.

Is FuriosaAI a real alternative to Nvidia?
Yes, especially for companies focused on AI inference tasks. FuriosaAI offers a cost-effective, energy-efficient alternative that's catching global interest.

Who backs FuriosaAI?
FuriosaAI is backed by major South Korean players like Samsung Electronics and Naver Corp, giving it strong support for global growth.

What are FuriosaAI's chips built for?
FuriosaAI's chips are built for AI inference—the part where models are used in real applications, not just training.

Could FuriosaAI replace Nvidia entirely?
Not entirely, but it could carve out a large share of the AI inference market, especially where power efficiency matters most.

Which industries are using FuriosaAI's chips?
Industries like telecom, research, biotech, and finance are already testing or using FuriosaAI chips in real-world AI systems.

Why is FuriosaAI suddenly in the spotlight?
Because FuriosaAI just signed a big deal with LG after rejecting Meta's offer, proving it's ready to compete with the biggest names in AI.

Related Articles

Meta AI could be a threat to your privacy: Here's how to silence it on WhatsApp, Instagram and Facebook
Mint | 26 minutes ago

Ever since first rolling out Meta AI in early 2023, Meta has been betting big on the chatbot, integrating it across its social media apps and even launching a standalone app. While Meta and other tech giants remain bullish on AI and its potential to transform how we interact online, not everyone is on board, and some users would rather not have Meta's AI chatbot show up in their apps at all.

Meta AI is the company's artificial intelligence chatbot, present across all of its social media apps including Instagram, Facebook and WhatsApp. The chatbot competes directly with the likes of Google's Gemini, OpenAI's ChatGPT and Anthropic's Claude, and runs on the company's Llama 3 model. While Meta AI has fallen behind the competition in recent months, its biggest selling point is that it can be summoned instantly across the world's most popular apps.

In the last few months, Meta has only increased the number of Meta AI-powered features on Facebook, Instagram and WhatsApp. While some users cannot get enough of these features, others don't want any part of the AI carnage in their daily apps. If that wasn't enough, there was a privacy issue in June when the Meta AI app's Discover feed publicly exposed users' personal conversations with the chatbot.

Unfortunately, Meta AI is so expansively present across WhatsApp, Instagram and Facebook that it isn't currently possible to cut it out of these apps entirely, but you can do the next best thing: limit your interactions with the chatbot.

To limit interactions with Meta AI on Instagram, go to your chats and search for Meta AI. Tap the 'i' icon, then tap 'Mute' and switch it to 'Until I change it'.

To limit interactions with Meta AI on WhatsApp, open the Meta AI chat and mute notifications for the chat, choosing the 'Always' option so you never receive a notification from it. If you want to limit Meta AI in group chats, there is currently only one option: turn on 'Advanced chat privacy' in the group settings. However, turning this feature on also means that users in the group won't be able to save any media to their devices.

On Facebook, you can mute the Meta AI chat the same way as on Instagram. There is also the option of opening the basic Facebook mobile version to see minimal AI features on the app.

ChatGPT Personal Chats Leaked On Google: How It Happened, OpenAI CEO Responds, And What Users Should Do
India.com | 26 minutes ago

ChatGPT Data Leak: In today's fast-paced digital world, ChatGPT has become as essential as the internet on our smartphones. For many, it is more than just a tool — it is a trusted companion that holds countless secrets. Remember that time you poured your heart out to ChatGPT? Whether it was a messy fight with your girlfriend, late-night overthinking, weird 2 AM thoughts, embarrassing kitchen mishaps, career planning, or simply searching for happiness, you shared it all with an AI that patiently listened to your problems and offered advice. Most importantly, this AI never judged you. It felt more like a close friend, like confiding in a diary that actually talks back.

Now, imagine if ChatGPT exposed users' personal conversations on Google and made them available for the world to read. Chats about your problems, wild thoughts, mental health struggles, relationship advice, and even someone asking how to write a punch line to impress a girl. Sounds wild, but that is exactly what happened. In a surprising incident, thousands of private ChatGPT conversations showed up in Google search results. Some of these chats included personal topics like mental health, job stress, and relationship issues shared with the AI chatbot.

ChatGPT Data Leak: How Did This Happen?
OpenAI, the company behind ChatGPT, had a feature that let users share their chats using a share button. When someone clicked it and chose 'create link,' ChatGPT made a URL that could be shared with others. There was also an option called 'Make this chat discoverable.' If this was turned on, search engines like Google could show those chat links in public search results. Many users didn't understand this and turned it on by mistake, thinking it was needed to share chats with friends, without realizing it could make their chats public. The issue was first reported by Fast Company, which found that around 4,500 ChatGPT links had been indexed by Google. While many of these chats were harmless, some revealed deeply personal and sensitive details shared by users—things they never expected the world to see.

ChatGPT Data Leak: How OpenAI Responded
OpenAI fixed the privacy issue by removing the 'discoverable' option from the Share window. An OpenAI employee said it was a short-lived experiment that made it too easy to share chats by mistake. Now, OpenAI's FAQ clearly says that shared chats are not public unless users choose to make them discoverable.

ChatGPT Data Leak: What Users Should Do
Step 1: Open ChatGPT and go to Settings.
Step 2: Tap on Data Controls from the menu.
Step 3: Click on Manage next to the Shared Links option.
Step 4: You will now see a list of all shared chats. From here, you can delete any links you no longer want to keep public.

No Legal Privacy For ChatGPT Users
OpenAI CEO Sam Altman has also said that users should not expect legal privacy when using ChatGPT, since there are no clear laws or rules about AI chats yet. If a legal case ever comes up, OpenAI could be required to hand over some of your most sensitive chats.

India needs to develop more data center cities: CtrlS' Vipin Jain
Time of India | 38 minutes ago

NEW DELHI: India should consider developing more data center cities to cater to the immense computing requirements of artificial intelligence (AI) and fifth-generation (5G)-driven services, according to a top executive of CtrlS Datacenters. Vipin Jain, president (datacenter operations), CtrlS Datacenters, told ETTelecom that more efforts are needed beyond allotting land at market rates and waiving stamp duty to support greenfield data center projects.

'If we have to look at this industry in total, the government should look into whether it can develop data center cities that have ample power. They have to consider the use of nuclear power and renewable energy,' he said. 'If millions and billions of people start using ChatGPT and similar services, it will need 50 times more compute capacity for the same workload, which is happening today. So AI will continue to grow.'

A boom in AI, 5G network densification, the Internet of Things (IoT), and cloud computing, along with surging computing demands in other sectors, has spurred CtrlS, Sify Technologies, ST Telemedia GDC, Yotta Data Services, Nxtra by Airtel, ESDS, Equinix and others to set up data centers of various capacities.

'We cannot beat China and the US in terms of data center capacity and density, at least for the coming 10 years. But the number three position is very much doable for India,' the executive said, adding that both countries are extensively developing large language models (LLMs) that require data centers.

The Telangana-headquartered data center operator has 'super hyperscale' and hyperscale data centers in Mumbai, Chennai, Hyderabad, Noida, Bengaluru, and Kolkata, and edge data centers in Ahmedabad, Patna, Lucknow, and Bhubaneswar. 'This three-pronged strategy, in my view, will be the right model which will substantially add value across the country,' Jain said, adding that CtrlS' super hyperscale facilities can offer up to 500MW capacity when operationalised.

CtrlS has a total operational data center capacity of over 250MW of IT load. In 2023, it laid out a plan to invest $2 billion over the next six years to add 350MW of AI- and cloud-ready hyperscale data centers, double its headcount, and achieve net zero or carbon neutrality by 2030. The executive, however, declined to comment on ongoing investments.

The Ministry of Electronics and IT (MeitY) released a draft data center policy in 2020, but it has yet to be formally notified. However, Telangana, Uttar Pradesh, Tamil Nadu, West Bengal, and Odisha, among others, have launched dedicated policies in recent years to attract substantial data center investments; they have offered various incentives covering power and energy, as well as infrastructure, through fiscal and non-fiscal measures.

Real estate consulting firm Colliers India earlier found that India's data center industry is expected to attract $20-25 billion in fresh investments in the next five to six years, with total capacity expected to more than triple to over 4.5GW by 2030.

'India will see significant investments in data centers. At the same time, the number of data users is going to grow manyfold beyond tier-1 cities and towns. So if we have to cover the entire India, we will have to reach closer to the customers,' Jain said, observing that edge data centers – which typically cater to low-latency needs – will grow in numbers.

India's edge data center capacity is expected to triple from 60-70 MW in 2024 to nearly 200-210 MW by 2027, led by established players such as RailTel and telecom companies, to serve emerging technologies such as 5G and the Internet of Things (IoT), ratings agency ICRA said recently.
