
How Namit Malhotra Is Building the Future of AI Content Creation
DNEG Founder & CEO Namit Malhotra
In an era where content is king and scale is everything, Namit Malhotra is rewriting the playbook—merging Oscar-winning artistry with generative AI, proprietary IP, and global production infrastructure. As Chairman and CEO of DNEG—the eight-time Academy Award®-winning visual effects giant behind Dune, Oppenheimer, and Tenet—Malhotra is building not just a studio, but a blueprint for the future of storytelling and AI content creation.
"We've always been a creative-first company," he says. "But the way we deliver creativity has to evolve."
That evolution is now playing out across continents and sectors, anchored by a singular vision: position India not just as a services hub, but as the world's most advanced engine for content creation. Through a series of strategic acquisitions—from London-based VFX leader Double Negative to AI powerhouse Metaphysic (Forbes)—and the launch of his own content production and financing arm Prime Focus Studios, Malhotra has scaled his company into a global juggernaut operating in 24 cities across four continents. He's layered on deep technical innovation through Brahma's AI product suite, and paired it with a bold infrastructure-first mindset—building not just tools or studios, but a full-stack ecosystem to control every stage of storytelling, from data to distribution. His ambition isn't to serve the global entertainment industry. It's to redefine it.
At just 18, Namit Malhotra turned his father's garage in Mumbai into ground zero for what would become a global media empire. It was 1995, and a computer graphics course had shown him the future: entire films, made on a single Mac. While others scoffed, Malhotra moved fast. He recruited three instructors from the graphics school where he studied, launched Video Workshop, and began editing shows and music videos for India's top networks. To fund it, he bet everything, securing a loan against the family home.
"Digital tools were going to transform cinema—I could feel it. I just had to move first."
During the '90s boom in Indian pop culture, Video Workshop built early momentum producing dance-offs, music countdowns, and youth entertainment for mainstream shows and networks, including Channel V, India's equivalent of MTV.
Behind the headlines is a rare advantage: Malhotra's fluency across capital markets, creative storytelling, and deep tech. He's one of the few founder-operators to take a company public, raise hundreds of millions in private equity, execute global M&A, and simultaneously lead creative production and technology development.
From acquiring Double Negative—a top-tier British VFX studio known for Inception, The Dark Knight Rises, and Harry Potter films—to raising hundreds of millions from investors such as Novator Capital and United Al Saqer Group, and leading the development of an enterprise AI platform now valued at $1.43 billion, Malhotra has consistently used his financial savvy to fuel platform innovation and his creative instinct to give it cultural resonance.
Legendary Hollywood producers Charles Roven and Namit Malhotra on set. Roven is involved as a producer, alongside Namit Malhotra, for the live-action Ramayana film.
His producer credits through Prime Focus Studios span both Hollywood and Indian blockbusters, reflecting a global vision for storytelling at scale.
"The biggest challenge lies in navigating three moving targets: ever-evolving technology, unpredictable creative processes, and a rapidly shifting financial model. Understanding business, tech, and creative gives me a 360-degree view," Malhotra says.
In 1997, Malhotra merged his Video Workshop with his father's equipment rental company to form Prime Focus. By 2006, he had taken the company public on the Indian stock exchange (Forbes) and expanded operations to London, Los Angeles, New York, and Vancouver.
Prime Focus gained prominence by pioneering theatrical 3D conversion for major franchises like Harry Potter and Star Wars, backed by institutional investors including Standard Chartered PE, Macquarie and Aid Capital. But the game-changing move came in 2014, when Prime Focus acquired Double Negative, an acclaimed London-based VFX studio. The merger formed DNEG, combining Prime Focus's global scale with Double Negative's creative pedigree.
Renowned Hollywood stunt director Guy Norris with Namit Malhotra on the set of Ramayana. Norris is collaborating with actor and producer Yash to choreograph the action sequences for the movie. The film is directed by Nitesh Tiwari and is being produced by Namit Malhotra's Prime Focus Studios and Yash's Monster Mind Creations.
The result was a global powerhouse that bridged Hollywood studio relationships with a scalable production platform. Under Malhotra's leadership, DNEG has since won eight Academy Awards for Best Visual Effects, partnering with some of the most visionary directors in the world (Forbes).
While a planned SPAC deal fell through in 2022, momentum never slowed.
DNEG now builds the underlying infrastructure of visual storytelling. From cloud-based workflows and real-time rendering to virtual production and AI content creation, it enables scalable creative delivery.
"Technology isn't the product—it's what enables the experience," Malhotra says.
In February 2025, Brahma, the content-tech venture backed by Malhotra, acquired Metaphysic, the generative AI startup known for its photorealistic neural performance toolset as used in feature films such as HERE, Furiosa and Alien: Romulus. The deal values Brahma at $1.43 billion, with support from Abu Dhabi's United Al Saqer Group.
With 800+ engineers, Brahma is building AI-native products across video, image, and audio—bringing together DNEG's industry-leading VFX tools, Metaphysic's groundbreaking AI technology, Ziva's award-winning technology for the creation of digital human and character simulations, and CLEAR®'s purpose-built enterprise AI platform—and opening up new sectors beyond film and TV, including advertising, gaming, and sports.
"This isn't about chasing AI trends," Malhotra says. "It's about building foundational infrastructure."
"We're moving from bespoke productions to modular storytelling," Malhotra says. "You don't need to fly a crew across five countries. Brahma lets you simulate the entire experience in high fidelity."
This modular model opens up new frontiers—from immersive brand campaigns with AI ambassadors to personalized mental health therapy powered by emotional modeling.
Brahma's infrastructure enables creators to generate, license, and monetize synthetic content at scale. Think: AWS for storytelling. Smart contracts and on-chain IP rights management ensure transparency and trust as AI reshapes media ownership.
Malhotra's strategy isn't to chase features. It's to build a vertically integrated platform that future content runs on, spanning from pre-production to post, from data to distribution.
"Whoever owns the pipes, wins," he says. "Features fade. Infrastructure lasts."
Namit Malhotra
In May 2025, Malhotra and the State Government of Maharashtra in India announced a $400M entertainment complex in Mumbai, combining: world-class production studios to facilitate high-end content creation, supported by a state-of-the-art digital infrastructure; live entertainment facilities, including theme parks and experience centres; and lifestyle experiences, including shopping and dining destinations. All in one global destination.
"We're creating the most advanced content hub in the world—rooted in Mumbai, made for the world," says Malhotra. "This new site will be a best-practice example of what India can deliver in technology, creativity, and entertainment, and will become a worldwide leisure destination, right at the heart of one of the world's oldest filmmaking industries. We are bringing India to the world by bringing the world to India."
Malhotra's vision is to own the entire pipeline—from the tools to the platforms, from training to monetization.
And it's global. "We're not just telling Indian stories," he says. "We're building Indian systems that can scale globally."
Streaming put Indian content on the global map. DNEG, Prime Focus Studios, and Brahma aim to make India the world's content creation engine.
As streaming plateaus and AI hype gives way to infrastructure wars, Namit Malhotra isn't waiting to be disrupted—he's building the infrastructure, controlling the platforms, and shaping the future of content creation.
"The world doesn't need more content," he says. "It needs better systems to create it: AI content creation. That's what we're building."
Related Articles


Time Business News
Detector de IA: Understanding the Technology Behind Identifying AI-Generated Content
To address these challenges, Detector de IA tools have been developed: specialized systems designed to determine whether content was created by a human or generated by artificial intelligence. This article explores how AI detectors work, their applications, their limitations, and the future of this important technology.

A Detector de IA is a tool or algorithm developed to examine digital content and assess whether it was produced by a human or generated by an artificial intelligence system. These detectors can analyze text, images, audio, and video to detect patterns commonly associated with AI-generated content. They are being widely adopted across multiple sectors, including education, journalism, academic research, and social media content moderation. As AI-generated content continues to grow in both volume and complexity, the need for accurate and dependable detection methods has increased dramatically.

AI detectors rely on a combination of computational techniques and linguistic analysis to assess the likelihood that content was generated by an AI. The most common methods include:

- Perplexity analysis: Perplexity measures the predictability of a text, i.e., how likely a sequence of words is given learned language patterns. AI-generated text tends to be more predictable and coherent than human writing, often lacking the spontaneity and errors of natural human language, so lower perplexity scores typically suggest a greater chance that the text was generated by an AI system.
- Stylistic analysis: AI writing often exhibits specific stylistic patterns, such as overly formal language, repetitive phrasing, or perfectly structured grammar. Detectors look for these patterns to infer authorship.
- Trained classifiers: Some detectors rely on supervised learning models trained on extensive datasets containing both human- and AI-generated content. These models learn the subtle distinctions between the two and can assign a probability score indicating whether a given text was AI-generated.
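The perplexity heuristic above reduces to a short formula. As a minimal sketch (not any particular detector's implementation), assume a language model has already assigned a probability to each token of a text; the `predictable` and `surprising` lists below are made-up illustrative values, not real model output. Perplexity is then the exponential of the average negative log-probability:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(-mean(log p)) over per-token probabilities.

    Lower values mean the text was more predictable to the model,
    which detectors treat as weak evidence of machine generation.
    """
    if not token_probs:
        raise ValueError("need at least one token probability")
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# Hypothetical probabilities a language model might assign:
predictable = [0.9, 0.8, 0.85, 0.9]   # fluent, "AI-like" text
surprising = [0.2, 0.05, 0.4, 0.1]    # erratic, "human-like" text

print(perplexity(predictable) < perplexity(surprising))  # True
```

A real detector would obtain the probabilities from a reference language model and compare the resulting score against thresholds calibrated on known human and AI text.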
Newer methods include embedding hidden watermarks into AI-generated content, which compatible detection tools can identify. In some cases, detectors also analyze file metadata for clues about how and when content was created.

Several platforms and tools have emerged to help users detect AI-generated content. Some of the most well-known include:

- GPTZero: One of the first widely adopted detectors, designed to identify content generated by large language models like ChatGPT.
- A combined plagiarism and AI content detector popular in academic and publishing settings, offering both checks in a single platform.
- Turnitin AI Detection: A go-to tool for universities, integrated into the Turnitin plagiarism-checking suite.
- Copyleaks AI Content Detector: A versatile tool offering real-time detection with detailed reports and language support.
- OpenAI Text Classifier (now retired): Initially released to help users differentiate between human and AI text; it laid the groundwork for many newer detectors.

With students increasingly using AI tools to generate essays and homework, educational institutions have turned to AI detectors to uphold academic integrity. Teachers and universities use these tools to ensure that assignments are genuinely authored by students. AI-written news articles, blog posts, and press releases have also become common; AI detectors help journalists verify the originality of their sources and combat misinformation.
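To make the watermarking idea concrete, here is a hedged toy sketch of one approach studied in the research literature (a "green list" scheme), not any vendor's actual method: a watermarking generator favors tokens from a pseudo-random "green list" seeded by the previous token, and a detector simply measures what fraction of tokens landed in that list. All names below (`green_fraction`, the hash-parity split) are illustrative inventions:

```python
import hashlib

def _parity(token: str) -> int:
    # Stable 0/1 hash of a token: a toy stand-in for a seeded vocabulary split.
    return hashlib.sha256(token.encode("utf-8")).digest()[0] % 2

def green_fraction(tokens):
    """Score a token sequence against a toy 'green list' watermark.

    A token counts as 'green' if its hash parity matches a parity
    seeded by the previous token. Unwatermarked text should score
    near 0.5; a generator that favors green tokens scores higher.
    """
    if len(tokens) < 2:
        return 0.0
    hits = sum(
        _parity(cur) == _parity(prev)
        for prev, cur in zip(tokens, tokens[1:])
    )
    return hits / (len(tokens) - 1)
```

A full detector would convert this fraction into a z-score against the 0.5 baseline and flag sequences whose green rate is statistically improbable for unwatermarked text.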
Writers, publishers, and editors use AI detectors to ensure authenticity in published work and to maintain brand voice consistency, especially when hiring freelancers or accepting guest submissions. Social media platforms use AI detection tools to identify and block bot-generated content and fake news, improving content quality and user trust. Organizations are also increasingly required to meet ethical and legal responsibilities by disclosing their use of AI; detection tools help verify content origin for regulatory compliance and transparency.

Despite their usefulness, AI detectors are far from perfect. They face several notable challenges:

- Accuracy: Detectors may mistakenly classify human-written content as AI-generated (a false positive) or vice versa (a false negative). This can have serious consequences, especially in academic or legal settings.
- Improving models: As generative models like GPT-4, Claude, and Gemini become more advanced, their output increasingly resembles human language, making detection significantly harder.
- Language and domain coverage: Most AI detectors are trained predominantly on English-language content; their accuracy drops when analyzing content in other languages or domain-specific writing (e.g., legal or medical documents).
- Evasion: Users can easily modify AI-generated content to bypass detection. A few manual edits or some paraphrasing can make it undetectable to most tools.

As AI detectors become more prevalent, ethical questions arise. Should users always be informed that their content is being scanned for AI authorship? Can a student or professional be penalized solely on the basis of a probabilistic tool? How do we protect freedom of expression while maintaining authenticity? There is an ongoing debate about striking the right balance between technological regulation and user rights. Looking forward, AI detectors are expected to become more accurate, more nuanced, and more deeply embedded into digital ecosystems.
Some future developments may include:

- Built-in AI signatures: AI models could embed invisible watermarks into all generated content, making detection straightforward.
- AI-vs-AI competition: Detection tools may be powered by rival AI systems trained to expose the weaknesses of generative models.
- Legislation and standards: Governments and industry bodies may enforce standards requiring disclosure when AI is used, supported by detection audits.
- Multi-modal detection: Future detectors will analyze not only text but also images, video, and audio to determine AI involvement across all content types.

Detector de IA tools have become vital in a world where artificial intelligence can mimic human creativity with striking accuracy. They help preserve trust in digital content by verifying authenticity across education, journalism, and communication. However, as generative AI evolves, so too must detection tools, becoming smarter, fairer, and more transparent. In the coming years, the effectiveness of AI detectors will play a critical role in how societies manage the integration of AI technologies. Ensuring that content remains trustworthy in the age of artificial intelligence will depend not only on technological advancement but also on ethical application and regulatory oversight.


Forbes
AI Expands Our Capacity, But Are We Expanding Our Skills?
Nearly 25 years ago, our HR business partner team was facing a challenge similar to what we are experiencing today with AI adoption. As we transitioned to a shared services model, we recalibrated not just how work got done, but also what work was most valuable. The 'less valued' transactional tasks that most HR business partners wanted to offload, like answering questions about benefits, running headcount reports, and managing performance issues, were the very things that gave them their sense of worth. What appeared to be resistance to a changing strategy was actually resistance to a changing identity. As an HR business partner, I had been preparing for a more strategic role through graduate studies, doing some external coaching work, and practicing what I was learning with the leaders I supported. I knew the skills needed to be a 'strategic HR business partner', and while I didn't have all of them, I was clear on the gap I needed to close. However, many of my colleagues had not confronted that deeper question of what 'strategic' really meant and the skills they needed to acquire and practice to become that. Today, as AI promises to automate routine work, we're seeing the same pattern. On the surface, employees are resisting the use of technology and AI tools, but what they are ultimately resisting is the change to their identity. The roles they have played, the workflows they have been part of, even how they are expected to communicate: all of it is changing, yet most leaders are not being clear about what those roles and that identity are evolving into. Is the new identity that we are supposedly carving out for them better than the one they have? Would it be better for them to resist and maintain a healthy level of skepticism until they better understand what is expected in their new work context?
Beyond identity, I am also seeing a skills gap that is not being addressed. How are organizations supporting employees not only in learning AI tools, but in developing the new skills they will need to use all that extra time well? The cultural shift we underwent in HR so many years ago was a two-way contract: we, as HR business partners, agreed to change, and our managers and HR leadership clearly articulated what that change looked like and how they were going to help us get there. The options were clear: evolve to become more strategic, partnering with business leaders differently; move into a more operational role; or leave the company altogether. Some HR business partners chose to move into the shared services center when they realized they genuinely enjoyed the transactional work they had spent years trying to escape. Others left the company to continue doing that very work elsewhere. Most of us who stayed shared one major mindset shift. Rather than asking, 'Which tasks are being taken away?' we asked, 'Which behaviors and skills do I need to develop to deliver greater value to the business?' And we made sure we had a development system set up to learn and practice those skills with each other and our leaders. AI is taking over more of the transactional work, but are we ready to tackle the strategic work it now makes possible? Before rolling out another AI tool, organizations must demonstrate to employees how this shift benefits them as well as the business and provide them with the skills to work effectively alongside AI. The real question is no longer, 'What more can we get out of our employees with AI?', but rather, 'What more can we do for and with our employees and AI?' A recent Stanford study showed that while 83% of employees in China see AI as an opportunity for growth, only 39% of U.S. workers share that optimism.
We are throwing tools, stats, and training at people in the name of productivity and efficiency without connecting those things to what matters in an employee's day-to-day workflow; no wonder we are not excited in the U.S. The fundamentals of change leadership include explaining why we are changing, what is changing, and how we are helping people change. Yet, every day, we continue to launch another internal AI sandbox and announce that usage will be tracked and measured. When has counting course completions ever guaranteed that employees are adopting the change and gaining the skills they need? Most organizations today see the opportunity of AI through two lenses: the productivity lens ('Do more tasks per hour') and the efficiency lens ('Process every request faster'). But what if we could be more productive and efficient while also giving employees opportunities to grow and develop the skills they will need to be more strategic with the time they get back? What if instead of asking, 'How can we do more tasks per hour?' we asked, 'How can I use the time I get back from leveraging AI on more strategic work?' When people see that their company is leveraging AI as an amplifier of their work, rather than a replacement for them, they will more readily identify opportunities where AI can free them up to focus on higher-impact activities. Here are two examples, from Moderna and Klarna, of how companies have approached the 'why, what, and how' of leveraging AI with human augmentation. Moderna's shift from 'workforce planning' to 'work planning' in the context of AI is a great example of this. Moderna's 'why': create a more integrated strategic road map that not only increases opportunities for drug discovery but also allows employees to rethink workflows.
Moderna's 'what': develop over 3,000 tailored versions of ChatGPT, called GPTs, designed to facilitate specific tasks, like dose selection for clinical trials and, internally, answering basic HR questions related to performance, equity, and benefits. Moderna's 'how': redesign and reimagine how technology and people interface by merging HR and IT under one Chief People Officer. Moderna is also upskilling employees, helping them spend the productive time they got back on more strategic work. What if instead of asking, 'How can we do more with less?' we asked, 'How do we ensure we are focused on the right priorities?' We know that being efficient alone isn't going to get us the results we want. Without deliberate human judgment, AI will only amplify hustle habits that keep us busy with everything, rather than prioritizing the few things that truly matter. Klarna's experience illustrates the risks of prioritizing efficiency over effectiveness. Its journey shows why focusing solely on speed and cost reduction, rather than on both employee and customer value, ultimately leads to less effective results and a significant disconnection with employees. Klarna's 'why': Klarna's CEO gave employees a clear 'why' for using AI, but it didn't include developing or upskilling them. It was solely focused on leveraging AI as much as possible to address customer service questions, disregarding the long-term impact and potential consequences of letting go of 700 representatives. Klarna's 'what': after replacing customer service representatives with AI, Klarna had to reverse course when it realized it had 'amputated' empathy from its customer interactions. Klarna's 'how': today, Klarna has adopted a hybrid approach where AI handles simple queries while humans manage the more complex cases. When AI handles the transactional load and we empower people to think strategically, we do more than speed up work; we redefine it.
Instead of setting AI and humans in opposition, let's reinforce what they can accomplish together: higher productivity and efficiency, but also greater strategic leverage. In your next 1:1, open up a discussion about AI that goes beyond how much your team members are using the tools. Ask them how they are using the time they get back now that AI is doing more of the tactical tasks. Ask them what support they need to keep doing that kind of strategic work. Clarify the skills you would like to see them build and ask how you can support them in developing those skills. It's through these kinds of discussions that we can continue to work strategically with AI, not against it, with humans leading the way.


Bloomberg
How AI Drives Company Transformation
Bloomberg's Sonali Basak speaks with Connie Leung, Senior Director, Regional Industry Leader, Asia, Worldwide Financial Services, Microsoft, and Esther Wong, Founder & Chief Investment Officer, 3C AGI Partner, about how they are investing to ensure the long-term continued growth of their businesses. (Source: Bloomberg)