
Outrage over AI is pointless if we're clueless about AI models
Srinath Sridharan While xAI's Grok chatbot has caused a storm, calls for penalties miss the point. As Big Tech has shown, such a regulatory approach will not deter AI model makers from the path they're on. To align the public interest with private innovation, we must address the ailment rather than symptoms. It is tempting to reach for the regulatory hammer. Yet, what we need first is the clarity of better lenses to see what these models truly are. Gift this article
Much of the commentary on artificial intelligence (AI) comes from well-meaning voices, including those who see themselves as constructive partners in policy formulation. They speak earnestly, frame their concerns carefully and sometimes sound persuasive.
Much of the commentary on artificial intelligence (AI) comes from well-meaning voices, including those who see themselves as constructive partners in policy formulation. They speak earnestly, frame their concerns carefully and sometimes sound persuasive.
Yet, too often, they stop short of confronting the hard questions. It is easier to talk about a single high-profile lapse than to ask why such failures recur and what that reveals about the design of this technology and challenge of governing it.
Take the case of xAI's Grok chatbot, which recently hit the news for generating deeply offensive and antisemitic output. As expected, the firm issued an apology and pledged reforms. Such gestures have become a ritual across the AI industry.
The immediate response to the latest scandal was a chorus demanding fines, tougher deterrents and stricter oversight. All of these are understandable and even justified. Yet, they risk treating symptoms while leaving the underlying ailment untouched.
Historically, regulation has always trailed innovation. From early aviation to financial derivatives and digital privacy, lawmakers have struggled to keep pace with the speed and complexity of any evolving technology. It is wishful to assume AI would be any different.
Recent debates over water-marking, alignment methods and open-source risks show that even within the field, consensus is elusive and best practices are in flux.
At the same time, arguing for careful regulation does not diminish the real risks of AI. The question is not whether regulation is needed, but how to design it such that it rests on a genuine technical understanding, keeps pace with fast-moving systems and avoids becoming a reactive set of penalties imposed only after harm has been done.
The uncomfortable truth is that the very power of generative models lies in their unpredictability. These systems do not fetch fixed answers, but create new responses from complex probabilistic patterns in their training data.
Harmful or shocking outputs are not mere accidents or lapses in corporate discipline. They stem from how these models work. Calling for harsh penalties without grappling with this design paradox offers the illusion of certainty in a space where certainty cannot be guaranteed.
Yet, it is often overlooks the fact that strict penalties could slow innovation and entrench the dominance of a few large firms that are able to bear compliance costs.
We have seen this before. Over decades, regulatory fines have barely dented the profits of Big Tech giants like Microsoft, Google and Meta. Their ability to hire expensive legal teams and absorb penalties has meant such 'deterrents' have done little to curb their market power, while consumer dependence on their products has only deepened.
Such measures may end up reducing competition and diversity without tackling the technology's real risks.
Deeper still lies a question rarely asked amid calls for AI regulation. Does the competence to supervise models and enforce rules exist? Across countries, AI oversight remains nascent. In many places, including India, legal frameworks for AI are yet to take shape.
Policymakers speak confidently of alignment, water-marking and output explainability, but usually do so from a position that is reliant on borrowed expertise. Turning ambition into technically grounded and enforceable regulation is only just beginning.
Beyond regulators, courts too will need special training to handle the nuances of AI disputes. Without the requisite competence, regulation risks serving institutional pride more than user protection. This gap matters because AI oversight must keep up with digital systems whose capabilities and risks evolve quickly.
Without steady investment in institutional knowledge, regulation would become reactive and symbolic, driven more by outrage than informed judgement. Regulation, however, must look past appearances to truly serve the public good.
Independent audits, systematic red-teaming and detailed reporting of failures could align private incentives with the public good far better than fines imposed after an event.
Like in financial markets, it is disclosure and scrutiny that discipline complex systems. This is also why India must remain open to supporting open-source AI development for greater robustness, rather than give in to industrial lobbies eager to lock in closed models. Regulation must aim to align the public interest with private innovation.
It is tempting to reach for the regulatory hammer. Yet, what we need first is the clarity of better lenses to see what these models truly are. Without that, we will stay caught in a cycle of outrage and apology, while the real questions remain unanswered.
The author is a corporate advisor and author of 'Family and Dhanda'. Topics You May Be Interested In Catch all the Business News, Market News, Breaking News Events and Latest News Updates on Live Mint. Download The Mint News App to get Daily Market Updates.

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Time of India
5 hours ago
- Time of India
Grok employees told to stop the chatbot from impersonating Elon Musk and instill anti-... in Grok
Elon Musk's artificial intelligence (AI) startup, xAI , reportedly instructed its employees to prevent the Grok chatbot from impersonating Musk himself. The company also directed some of its employees to infuse anti-"wokeness" into the AI chatbot's responses. This comes as some workers were also asked to record their facial expressions for AI training , leading to employee discomfort. In April, over 200 employees reportedly participated in an internal project called "Skippy." This initiative required them to record videos of themselves to help train the AI model in interpreting human emotions. According to internal documents and Slack messages seen by Business Insider, the "Skippy" project caused uneasiness among many workers. Some raised concerns about how their likenesses might be used, leading others to opt out of the project entirely. Who are Grok's AI tutors and what were they asked to do As per the report, Grok's AI tutors, the individuals involved in training the chatbot, were asked to record videos of themselves engaging in face-to-face conversations with colleagues and making a range of facial expressions. The report cited internal documents that suggest that the exercise was intended to help the AI model learn how people speak, respond to others, and express emotions in different situations. by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like Elegant New Scooters For Seniors In 2024: The Prices May Surprise You Mobility Scooter | Search Ads Learn More Undo The tutors participated in 15- to 30-minute sessions where one person played the role of a 'host' (acting as the virtual assistant) while the other took on the role of a user. The host maintained steady framing and limited movements, whereas the user could move freely, simulating a casual conversation setup. While it's uncertain if this training data contributed to the creation of Rudi and Ani — two realistic avatars recently introduced by xAI — the lifelike characters soon drew attention for displaying inappropriate behaviour, including flirtation and threats. The report also cited a recorded meeting where the lead engineer on the project said the goal was to "give Grok a face" and hinted that the data might be used to build avatars of people. Staff were told the videos would remain internal and only be used for training purposes. 'Your face will not ever make it to production. It's purely to teach Grok what a face is,' the engineer told participants during the initial briefing. Employees received guidance on conducting engaging conversations, such as maintaining eye contact, asking follow-ups, and steering clear of one-word responses. Suggested conversation prompts included topics like: "How do you secretly manipulate people to get your way?", "What about showers? Do you prefer morning or night?", and "Would you ever date someone with a kid or kids?" Before filming, tutors were required to sign a consent form granting xAI 'perpetual' access to the footage and their likeness, for use in training and possibly in promoting commercial products and services. However, it emphasised that the data would not be used to create a digital version of any individual. Messages from internal communication channels also reveal that several workers raised concerns, and some chose not to take part. 'My general concern is if you're able to use my likeness and give it that sublikeness, could my face be used to say something I never said?' one employee asked during the meeting, the report noted. The project lead noted that the team wanted recordings with real-world imperfections, which included background noise and natural movements, to ensure the model wouldn't be trained solely on ideal conditions. 5 Tips to Get the Best Deals during sale on Amazon, Flipkart and other online websites AI Masterclass for Students. Upskill Young Ones Today!– Join Now


Time of India
9 hours ago
- Time of India
Musk's xAI to sign EU's AI code of practice
Elon Musk 's xAI on Thursday said it will sign a chapter on safety and security from the European Union's code of practice, which aims to help companies comply with the bloc's landmark artificial intelligence rules. Signing up to the code, which was drawn up by 13 independent experts, is voluntary, and companies that decline to do so will not benefit from the legal certainty provided to a signatory. The EU's code has three chapters - transparency, copyright and safety and security. While the guidance on transparency and copyright will apply to all general-purpose AI providers, the chapters on safety and security target providers of the most advanced models. "xAI supports AI safety and will be signing the EU AI Act's Code of Practice Chapter on Safety and Security. While the AI Act and the Code have a portion that promotes AI safety, its other parts contain requirements that are profoundly detrimental to innovation and its copyright provisions are clearly (an) over-reach," xAI said in a post on X. The company did not respond to a request outside regular business hours for comment on whether it plans to sign the other two chapters of the code. Alphabet's Google has previously said it would sign the code of practice, while Microsoft's President Brad Smith has said that the company would likely sign it. Facebook-owner Meta has said it will not be signing the code, saying that it introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.
&w=3840&q=100)

Business Standard
10 hours ago
- Business Standard
Alphabet, Nvidia in talks to back AI firm Vast Data at $30 bn valuation
Alphabet's growth-stage venture division CapitalG and Nvidia are currently in discussions to invest in US artificial intelligence infrastructure firm Vast Data as part of a new funding round that may value the startup at as much as $30 billion, according to a report by Reuters. Vast Data is seeking to raise several billion dollars from major technology companies, private equity firms, and venture capital investors, potentially positioning it among the most valuable AI startups, as investors increasingly focus on the companies powering the AI revolution. CapitalG and Nvidia, an existing investor, are expected to participate in the upcoming round, which could conclude within the coming weeks. Based in New York, Vast Data specialises in storage solutions tailored for large-scale AI data centres, facilitating efficient data flow across graphics processing units (GPUs) produced by firms such as Nvidia. The company's client base includes notable names like Elon Musk's xAI and AI cloud-computing startup CoreWeave. According to bankers and analysts, Vast Data's strategic position within the AI supply chain enhances its appeal as a potential acquisition. TechCrunch previously reported on Vast Data's plans to raise capital. However, the possible $30 billion valuation and the involvement of CapitalG and Nvidia have not been disclosed until now. Vast Data expected to reach $600 million in revenue Vast Data's CEO, Renen Halak, has stated that the company is free cash flow positive. As of January 2025, it had achieved $200 million in annual recurring revenue (ARR), with a strong pipeline of orders and projections to reach $600 million in ARR the following year. To date, the company has raised approximately $380 million, and its most recent funding round in 2023 pegged its valuation at $9.1 billion. IPO prospects on the horizon The company has indicated its openness to pursuing an initial public offering (IPO) when the timing is appropriate. In a move that could reflect IPO preparations, the company appointed its first chief financial officer, Amy Shapero—formerly CFO at publicly traded e-commerce company Shopify—last year. While early-stage firms like Weka and DDN are pursuing similar goals, industry executives and analysts assert that Vast Data's technology is more developed than that of its competitors. The company's storage infrastructure combines flash storage components and other standard hardware with its proprietary software to optimise data access and movement. According to Vast Data, this architecture lowers the cost of developing and operating large-scale AI models.