
When 'good enough' AI gets you fined (or fired!)
In a world obsessed with faster, cheaper outputs, AI has made 'good enough' look very tempting for legal and risk advisory work. Need an obligation map? There's a tool for that. Want to summarise 400 regulatory clauses? Just prompt the bot.
But compliance isn't a race - it's a contract with regulators, stakeholders and the public. And when shortcuts miss the mark, "We used AI" simply won't get you off the hook. In fact, it might raise the bar for what's considered reckless disregard.
Speed ≠ Safety: the case of the collapsing proposal
Let's start with a recent real-life story.
A multinational firm wrestling with niche rules recently invited proposals from several vendors. Our bid emphasised expertly curated obligation libraries, legal and risk oversight, and 'incremental AI assistance'. Another vendor promised a single platform that would "write all obligations, map all controls and keep them updated automatically".
During due diligence, however, the other vendor conceded they could offer speed, but not accuracy. They could give no assurance that the tool's recommendations were correct, or that it would satisfy a regulator asking the reasonable-steps question. The firm's compliance leaders pressed harder: would the vendor underwrite the output? The answer was no. The value proposition collapsed, and with it the illusion that AI without expert oversight can meet the needs of complex regulated entities and satisfy their supervisory bodies.
Context ≠ Comprehension: the case where automation missed a real-world control
In another cautionary tale, a high-risk venue operator initially relied on AI-generated risk controls to satisfy venue compliance rules (i.e. no patrons under 18). The tool pulled in industry practice and recommended a range of complex measures, but it completely missed a key, simple, manual control: the presence of two full-time security staff who checked patrons on entry. AI simply couldn't see what wasn't written down.
This offers a sobering lesson: just because AI can summarise what's on a page doesn't mean it understands what happens on the ground.
When AI belongs in your compliance stack
None of this is a blanket warning against using AI. Used properly, AI is already driving value in risk and compliance, including:
Scanning policy libraries for inconsistent language
Flagging emerging risks in real-time from complaints or case data
Improving data quality at capture
Drafting baseline documentation for expert review
Identifying change impacts across jurisdictions and business units
But note the pattern: AI handles volume and repetition; humans handle nuance and insight. The most robust use cases today treat automation as an accelerant, not a replacement, because the line between support and substitution must be drawn carefully and visibly.
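To make the first use case concrete, here is a minimal sketch of what "scanning policy libraries for inconsistent language" can mean in practice. Everything here is hypothetical: the term list, the document names, and the function are illustrative, not any vendor's product. The point is that even a simple deterministic check surfaces candidates for expert review; a human still decides whether a mixed usage is genuinely a problem.

```python
import re
from collections import defaultdict

# Hypothetical mapping of a canonical term to the variants that policy
# drafters tend to mix. In a real library this list would be curated by
# legal and risk experts, not hard-coded.
TERM_VARIANTS = {
    "must": ["must", "shall", "is required to"],
    "customer": ["customer", "client", "account holder"],
}

def find_inconsistencies(documents):
    """Return {canonical_term: {variant: [doc_ids]}} for every term
    where the document set mixes two or more variants."""
    usage = defaultdict(lambda: defaultdict(list))
    for doc_id, text in documents.items():
        lowered = text.lower()
        for canonical, variants in TERM_VARIANTS.items():
            for variant in variants:
                if re.search(r"\b" + re.escape(variant) + r"\b", lowered):
                    usage[canonical][variant].append(doc_id)
    # Only flag terms where more than one variant is actually in use.
    return {
        term: dict(variants)
        for term, variants in usage.items()
        if len(variants) > 1
    }

# Illustrative two-document 'library': one policy says "must", the
# other says "shall"; one says "customer", the other "client".
docs = {
    "aml_policy": "Staff must verify the customer's identity.",
    "kyc_procedure": "The officer shall confirm the client's details.",
}
report = find_inconsistencies(docs)
# Both 'must' and 'customer' are flagged as inconsistently worded.
```

The output is a review queue, not a verdict: the tool reliably does the high-volume pattern matching, and the compliance expert supplies the judgement about which inconsistencies matter.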
Ask this first before plugging in your next tool
As regulators pivot from rule-based assessments to 'reasonable steps' accountability, the key question is no longer just "Did we comply?" but "Can we prove we understood the risk and chose the right tools to manage it?" If your AI-assisted compliance map can't explain its logic, show its exclusions or withstand scrutiny under cross-examination, then you don't have a time-saver - you've got a liability.
So before you plug in an 'all-in-one automation' solution, first ask: Will this tool produce explainable and auditable outcomes? Is there clear human oversight at every high-risk stress point? Can we justify our decision to use this tool, especially when something goes wrong? If the answer to any of these is no, you're not accelerating your compliance strategy - you're undermining it.
We all love speed, but in risk, speed without precision is a rounding error waiting to become a headline. Compliance leaders have a duty to make sure that what's fast is also right and that when it's not, there's someone accountable.
In this era of 'good enough' AI, being good is simply no longer good enough. Being right is.