NxtGen Bets Big on Demand for High-Performance Data Centres


Entrepreneur | 25-06-2025
Its GPU capacity is expected to double within this financial year, and we aim to sustain that growth trajectory over the next two years
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
The hills of Ramanagar district, 60 kilometres from the IT hub of Bengaluru, are best known as the shooting location of the Bollywood cult movie Sholay, and they may still remind you of Gabbar Singh's legendary dialogues from the 1975 blockbuster. But this story isn't about Sholay. Cut to the modern, artificial intelligence (AI)-driven era of technology: in the vicinity of these hills, NxtGen's massive data centre stands tall on a hillock in the Bidadi Industrial Area of Ramanagar.
In 2012, A.S. Rajgopal founded NxtGen and has been instrumental in steering its remarkable growth over the past decade. The company's flagship high-density data centre facility, sprawling over 10 acres in Bidadi, today serves more than 900 customers across the country. NxtGen operates two cloud platforms – SpeedCloud, built on the Red Hat OpenStack and OpenShift application platforms – and three industry-vertical clouds for the government, financial services, and healthcare sectors.
A.S. Rajgopal, CEO & MD at NxtGen Cloud Technologies
NxtGen Cloud Technologies was incorporated in August 2012 and commenced operations in 2014. Today, it is in expansion mode: its overall cloud capacity is growing at 32 per cent annually, supported by a consistent 24 per cent CAGR from existing captive customers over the past five years. "Our GPU capacity is expected to double within this financial year, and we aim to sustain that growth trajectory over the next two years," says Rajgopal, CEO & MD at NxtGen Cloud Technologies.
Currently, NxtGen operates five large-scale data centres that power its sovereign cloud operations. Collectively, these support over 400,000 virtual CPUs, 1.6 million GB of memory, 200,000 TB of high-performance storage, and 140,000 TB of archival storage.
"Our flagship Bengaluru facility is purpose-built for high-density workloads and houses a large GPU cluster, including NVIDIA H200, AMD, and Intel GPUs. This site is central to enabling India's enterprise-scale AI adoption," says Rajgopal.
Future Growth
The demand for high-performance data centres is being driven by clients' ongoing digital transformation as they modernize legacy systems and deploy cloud-native applications. There is also an uptick in generative AI (GenAI) adoption and demand for sovereign cloud infrastructure.
"In just the last three months, we have built over 40 enterprise-specific AI use cases, signaling growing traction. There has been an increase in demand for sovereign cloud infrastructure in the government, BFSI, and healthcare sectors, aligned with national priorities for data protection and self-reliance," says Rajgopal.
NxtGen is also focussing on industry-specific value creation. It has a dedicated Government Cloud for hosting population-scale applications such as those for the Election Commission of India. It has a Financial Services Cloud pre-integrated with over 800 regulatory and operational controls, offering compliance-ready infrastructure.
"We are expanding our offerings in healthcare and manufacturing, tailored for sector-specific needs. SMEs are showing strong uptake of our SpeedCloud platform for cost-efficient digital transformation," says Rajgopal.
Asked about his future plans, Rajgopal says, "Our short-term focus is on scaling and hosting enterprise-grade AI use cases that can stand the test of time. We anticipate compute requirements reaching 300 KW per rack, far beyond traditional data centre capabilities, making infrastructure modernisation imperative. The AI landscape is evolving rapidly, and our goal is to remain agile, providing our customers with the best mix of technology, scalability, and talent access."
NxtGen has secured funding up to its Series B round, with investments from renowned entities such as the International Finance Corporation, Intel Capital Corporation, and Iron Mountain.

Related Articles

The quiet ban that could change how AI talks to you

Fast Company



As AI chatbots become ubiquitous, states are looking to put up guardrails around AI and mental health before it's too late. With millions of people turning to AI for advice, chatbots have begun posing as free, instant therapists – a phenomenon that, right now, remains almost completely unregulated. In the regulatory vacuum around AI, states are stepping in to quickly erect guardrails where the federal government hasn't. Earlier this month, Illinois Governor JB Pritzker signed a bill into law that limits the use of AI in therapy services. The bill, the Wellness and Oversight for Psychological Resources Act, blocks the use of AI to 'provide mental health and therapeutic decision-making,' while still allowing licensed mental health professionals to employ AI for administrative tasks like note-taking. The risks inherent in non-human algorithms doling out mental health guidance are myriad, from encouraging recovering addicts to have a 'small hit of meth' to engaging young users so successfully that they withdraw from their peers. One recent study found that nearly a third of teens find conversations with AI as satisfying as, or more satisfying than, real-life interactions with friends.

States pick up the slack, again

In Illinois, the new law is designed to 'protect patients from unregulated and unqualified AI products, while also protecting the jobs of Illinois' thousands of qualified behavioral health providers,' according to the Illinois Department of Financial & Professional Regulation (IDFPR), which coordinated with lawmakers on the legislation. 'The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,' IDFPR Secretary Mario Treto Jr. said. Violations of the law can result in a $10,000 fine. Illinois has a history of successfully regulating new technologies. The state's Biometric Information Privacy Act (BIPA), which governs the use of facial recognition and other biometric systems for Illinois residents, has tripped up many tech companies accustomed to operating with regulatory impunity. That includes Meta, a company that's now all-in on AI, including chatbots like the ones that recently made chats some users believed to be private public in an open feed. Earlier this year, Nevada enacted its own set of new regulations on the use of AI in mental health services, blocking AI chatbots from representing themselves as 'capable of or qualified to provide mental or behavioral health care.' The law also prevents schools from using AI to act as a counselor, social worker, or psychologist, or from performing other duties related to the mental health of students. Earlier this year, Utah added its own restrictions around the mental health applications of AI chatbots, though its regulations don't go as far as those of Illinois or Nevada.

The risks are serious

In February, the American Psychological Association met with U.S. regulators to discuss the dangers of AI chatbots pretending to be therapists. The group presented its concerns to an FTC panel, citing a case last year of a 14-year-old in Florida who died by suicide after becoming obsessed with a chatbot made by the company. 'They are actually using algorithms that are antithetical to what a trained clinician would do,' APA Chief Executive Arthur C. Evans Jr. told The New York Times. 'Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.' We're still learning more about those risks. A recent study out of Stanford found that chatbots marketing themselves for therapy often stigmatized users dealing with serious mental health issues and issued responses that could be inappropriate or even dangerous.
'LLM-based systems are being used as companions, confidants, and therapists, and some people see real benefits,' co-author and Stanford Assistant Professor Nick Haber said. 'But we find significant risks, and I think it's important to lay out the more safety-critical aspects of therapy and to talk about some of these fundamental differences.'

Apple's plan for AI could make Siri the animated center of your smart home

The Verge


Apple is developing a bunch of products and features to deliver its vision of AI, including multiple robots, a smart home display, and a revamped version of Siri with new technology powering it, according to an extensive report from Bloomberg. The company's generative AI efforts lag those of other big tech companies, and it delayed some upgrades to Siri earlier this year, but these rumored new initiatives point to the smart home as a key place for its AI technology. One of the robots is apparently a tabletop robot that 'resembles' an iPad mounted to an arm that can move around and follow users as they move around a room, Bloomberg says. Apple has already shared a preview of what this could look like: earlier this year, the company published research showing a tabletop robot that looks like a real-life version of the Pixar logo, with a lamp on the end of the arm. In videos, it's quite charming – it can even dance. A key part of the device, which Apple is aiming to launch in 2027, would be a more visual version of Siri that users could have more natural conversations with, like what's possible with ChatGPT's voice mode. Bloomberg says Apple has tested using an animated take on the Finder logo for Siri, but the company is also apparently thinking about ideas that are more like Memoji. Apple is also revamping Siri so that it's powered by LLMs. Apple is working on other robots, too, including an Amazon Astro-like robot that has wheels, and it has 'loosely discussed' humanoid robots, according to Bloomberg. By the 'middle of next year,' Apple plans to launch a smart home display that will let you do things like control your smart home, play music, take notes, and do video calls, Bloomberg says, and this device could have the new look for Siri. The display and the tabletop robot may have a new OS that can be used by multiple people and could be able to personalize what's shown to a user by scanning their face with a front-facing camera.
Bloomberg says the smart home screen resembles a Google Nest Hub but has a square display. In addition to the smart home display, Apple is also working on a security camera, and it plans to develop 'multiple types of cameras and home-security products as part of an entirely new hardware and software lineup,' Bloomberg says.

What Trump's AI Action Plan Means For Copyright

Forbes


President Trump's recently unveiled AI Action Plan conceptually attempts to address the tension between the rights of copyright owners to control their works and the need of AI companies to use copyrighted works to train their systems. Other solutions, some favorable to copyright owners and some not, have arisen to try to address the problem.

Federal Courts Find Fair Use

Whether AI companies must secure permission from copyright owners to use their copyrighted works to train generative AI models continues to be a murky and debatable issue. In two recent federal court rulings on the issue, federal judges in the Northern District of California ruled that the use of copyrighted books to train AI systems – Anthropic's Claude system and Meta's Llama system, respectively – was a fair use and therefore did not require the book authors' permission. Those decisions, however, are not controlling outside of their jurisdictions and, more importantly, are on or subject to appeal. Therefore, they could be reversed – although in my opinion, they will not be. Thus, they do not provide any definitive answer. Moreover, those decisions, like all court decisions, are limited to their facts. Other AI models, which use copyrighted works differently than Claude or Llama, might require different legal outcomes. Of note, Universal Studios and Disney are currently suing Midjourney for using their copyrighted works, alleging facts that seem much more troublesome than those involved in the Anthropic and Meta suits.

President Trump's Proposed Solution

The Trump administration favors the fair use position. President Trump has just released an AI Action Plan that prioritizes building the country's AI capabilities and removing regulatory and other barriers to that end. Speaking at a recent AI summit, the President said: 'You can't be expected to have a successful AI program when every single article, book or whatever you've studied you're expected to pay for.
We appreciate that, but you just can't do that because it's not do-able. And if you're going to try and do that, you're not going to have a successful program.' Echoing the analysis of Judge Alsup in his fair use decision, which analogized reading a book to increase one's knowledge to using a book to train an AI system, the President said: 'When a person reads a book or an article, you've gained great knowledge. That does not mean that you're violating copyright laws or have to make deals with every content provider,' he said. 'You just can't do it. China's not doing it.' How exactly the administration will implement such a rule, whether it will, and what authority the AI Action Plan would have remain to be seen.

Legislative Solutions

Meanwhile, on July 21, Sen. Josh Hawley (R-MO) and Sen. Richard Blumenthal (D-CT) introduced a bill that would require AI companies to secure permission from copyright owners before using their works to train AI systems. The AI Accountability and Personal Data Protection Act would create a private tort action against any company using copyrighted material to train an AI system without the copyright owner's permission. The bill also provides that any agreement to the contrary, other than a collective bargaining agreement, would be void.

Market-Based Solutions

Some AI companies are striking deals to compensate copyright owners – or at least the companies who control copyrighted works – for using their works to train AI systems. Examples include a deal struck between Amazon and The New York Times, and deals between OpenAI and News Corp and the Associated Press.

Opt-Out Solutions

Other AI models have instituted 'opt-out' features in their end user agreements or user settings, allowing users to opt out of allowing the model to use their own creations to further train itself.
Indeed, jurisdictions outside the U.S., such as the EU, have laws that expressly allow rightsholders to reserve their rights in their work from data mining, effectively an opt-out of AI training. Article 4(3) of the 2019 Directive on Copyright and Related Rights in the Digital Single Market states: 'The exception or limitation provided for [purposes of text and data mining] shall apply on condition that the use of works and other subject matter referred to in that paragraph has not been expressly reserved by their rightsholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online.' Given this, I noted with interest that DreamWorks used the following disclaimer in the credits of its recent film The Bad Guys 2: 'ALL RIGHTS IN THIS WORK ARE RESERVED FOR PURPOSES OF LAWS IN ALL JURISDICTIONS PERTAINING TO DATA MINING OR AI TRAINING, INCLUDING BUT NOT LIMITED TO ARTICLE 4(3) OF DIRECTIVE (EU) 2019/790. THIS WORK MAY NOT BE USED TO TRAIN AI.' Whether this opt-out is or will be legally effective under U.S. law remains to be seen. The copyright/AI wars continue. Stay tuned.
