
Diligent unveils AI tool to streamline risk management for firms
AI Risk Essentials is designed to streamline the initiation and development of an enterprise risk management (ERM) programme. According to Diligent, the new solution can be deployed in under a week, providing governance, risk, and compliance (GRC) professionals with tools to rapidly prepare for board discussions, evaluate risks, conduct assessments, establish mitigation strategies, and monitor risk management activities.
The AI-powered tool leverages benchmarking risk data extracted from more than 120,000 SEC 10-K filings, offering organisations a broad dataset for comparing and evaluating their risk positions. This enables users to benchmark against peers, identify blind spots, and expedite the construction of their risk registers.
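Diligent has not published how this benchmarking works; purely as a generic illustration of peer-based gap analysis, the sketch below (all names, figures, and thresholds hypothetical) flags risk categories that are common in peer filings but absent from an organisation's own register:

```python
# Hypothetical sketch of peer-based blind-spot detection; this is not
# Diligent's implementation, and all data here is invented for illustration.

def find_blind_spots(own_register, peer_risk_frequency, threshold=0.5):
    """Return risk categories cited in at least `threshold` of peer filings
    but missing from the organisation's own register."""
    own = {risk.lower() for risk in own_register}
    return sorted(
        category
        for category, freq in peer_risk_frequency.items()
        if freq >= threshold and category.lower() not in own
    )

# Assumed peer data: fraction of peer 10-K filings citing each risk category.
peers = {
    "Cybersecurity": 0.92,
    "Supply chain": 0.61,
    "Climate": 0.45,
    "Litigation": 0.70,
}
print(find_blind_spots(["Cybersecurity", "Litigation"], peers))
# "Supply chain" is flagged: common among peers, absent from the register.
```

The threshold simply controls how widespread a peer risk must be before its absence counts as a blind spot.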
The introduction of AI Risk Essentials follows Diligent's previous AI-focused initiatives, such as the GovernAI suite within the Diligent One Platform. Both products aim to facilitate governance and risk processes for boards, executives, and legal professionals, with an emphasis on maintaining security and responsible application of AI technologies.
Market research cited by Diligent indicates that more than 77% of directors report that their boards regularly discuss new risks and their implications. Demand for better benchmarking data and risk management solutions is growing as regulatory and environmental changes introduce new challenges.
Michael Rasmussen, GRC Analyst, Influencer & Pundit at GRC 20/20, commented on the potential impact of AI-driven risk tools on organisational resilience. "Leveraging advanced AI-driven solutions like Diligent AI Risk Essentials enables organisations not only to quickly identify and assess these uncertainties but also to strategically benchmark their risks against industry standards. Integrating these insights into decision-making empowers leaders to proactively mitigate risk, align their ERM programs with strategic objectives, and ultimately drive resilience and agility in achieving their organisational goals," Rasmussen said.
Diligent highlights the continued reliance on manual risk tracking methods, such as spreadsheets, within many organisations; only 32% describe their risk oversight as mature or robust. AI Risk Essentials is intended to address these gaps and, in combination with other features of the Diligent One Platform, deliver a range of functionalities, including AI-driven risk identification, benchmarking suggestions based on SEC 10-K data, and tools for collaborative risk assessment with stakeholders.
The solution offers capabilities such as streamlined risk assessments, where teams can collaborate to evaluate the potential impact and likelihood of strategic risks. Risk mitigation plans are consolidated in a single, easily accessible environment to improve accountability and visibility. Users are provided with an interactive risk heatmap to visualise the severity of risks, guiding the prioritisation of their risk management initiatives.
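Risk heatmaps of this kind are conventionally driven by an impact-times-likelihood score. The sketch below illustrates that standard approach with assumed 1-5 scales and band thresholds; it is a generic example, not Diligent's actual scoring model:

```python
# Generic impact x likelihood severity scoring, as commonly used behind risk
# heatmaps. The scales, thresholds, and band names are assumptions.

def severity(impact, likelihood):
    """Score a risk on 1-5 impact and 1-5 likelihood scales (max 25)."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be on a 1-5 scale")
    return impact * likelihood

def band(score):
    """Map a severity score to a heatmap band used for prioritisation."""
    if score >= 15:
        return "Critical"
    if score >= 8:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"

# Hypothetical register entries: (impact, likelihood) per risk.
risks = {"Data breach": (5, 4), "Key-person loss": (3, 2), "FX exposure": (2, 1)}

# Rank risks by severity, highest first, as a heatmap would surface them.
for name, (i, l) in sorted(risks.items(), key=lambda kv: severity(*kv[1]), reverse=True):
    print(f"{name}: {severity(i, l)} ({band(severity(i, l))})")
```

Plotting each risk at its (likelihood, impact) grid cell, coloured by band, yields the familiar heatmap view described above.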
Additionally, AI Risk Essentials grants access to Diligent's Education and Templates Library. This repository includes a variety of resources for board members, executives, and legal professionals, as well as a newly launched ERM Certification, supporting best practices for risk maturity and the development of organisational risk literacy.
Scott Bridgen, General Manager, Risk & Audit at Diligent, commented on the evolving risk landscape and the need for AI-driven tools. "The volume and complexity of risk management have increased exponentially in the last five years, but most organisations are hampered by varying levels of risk maturity, resourcing and inadequate tools," Bridgen stated. "Now, by leveraging AI-powered insights and benchmarking capabilities, organisations of any size can quickly identify, assess and mitigate risks. Governance and risk professionals can effectively jumpstart or enhance an enterprise risk management program in time for their next board meeting."
AI Risk Essentials is positioned as part of Diligent's broader three-tier ERM suite, which has been recognised by analysts including Forrester, Gartner, and IDC. The ERM suite comprises a range of scalable solutions to support clients as their risk management practices mature and they require advanced functionality.
Related Articles


Otago Daily Times
8 hours ago
DCC investigating how it could implement AI
The Dunedin City Council (DCC) is exploring in detail how it can incorporate artificial intelligence into its operations.

Staff were using the technology in limited but practical ways, such as transcribing meetings and managing documents, council chief information officer Graeme Riley said.

"We will also be exploring the many wider opportunities presented by AI in a careful and responsible way," he said. "We recognise AI offers the potential to transform the way DCC staff work and the quality of the projects and services we deliver for our community, so we are taking a detailed look at the exciting potential applications across our organisation."

He had completed formal AI training, Mr Riley said, and was involved in working out how AI might be governed at the council. "This will help guide discussions about where AI could make the biggest differences in what we do," he said. "As we identify new possibilities, we'll consider the best way to put them into practice, whether as everyday improvements or larger projects."

Cr Lee Vandervis mentioned in a meeting at the end of June that the council was looking into ways AI might be used. He also included a segment about AI in a blog last month about his mayoral plans, suggesting staff costs could be reduced. There was potential for much-reduced workloads for staff of the council and its group of companies, he said.

The Otago Daily Times asked the council whether a review, or some other process, was under way. Mr Riley said there was not a formal review. It was too soon to discuss cost implications, but the council's focus was on "improving the quality" of what it did.

RNZ News
16 hours ago
AI chatbots accused of encouraging teen suicide as experts sound alarm
By April McLennan, ABC. Photo: 123rf

An Australian teenager was encouraged to take his own life by an artificial intelligence (AI) chatbot, according to his youth counsellor, while another young person has told triple j hack that ChatGPT enabled "delusions" during psychosis, leading to hospitalisation.

WARNING: This story contains references to suicide, child abuse and other details that may cause distress.

Lonely and struggling to make new friends, a 13-year-old boy from Victoria told his counsellor Rosie* that he had been talking to some people online. Rosie, whose name has been changed to protect the identity of her underage client, was not expecting these new friends to be AI companions.

"I remember looking at their browser and there was like 50 plus tabs of different AI bots that they would just flick between," she told triple j hack of the interaction, which happened during a counselling session. "It was a way for them to feel connected and 'look how many friends I've got, I've got 50 different connections here, how can I feel lonely when I have 50 people telling me different things'," she said.

An AI companion is a digital character powered by AI. Some chatbot programs allow users to build characters or talk to pre-existing, well-known characters from shows or movies.

Rosie said some of the AI companions made negative comments to the teenager about how there was "no chance they were going to make friends" and that "they're ugly" or "disgusting".

"At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy," Rosie said. "The chatbot that they connected with told them to kill themselves. They were egged on to perform: 'Oh yeah, well do it then', those were kind of the words that were used."

Triple j hack is unable to independently verify what Rosie is describing because of client confidentiality protocols between her and her client.
Rosie said her first response was "risk management" to ensure the young person was safe.

"It was a component that had never come up before and something that I didn't necessarily ever have to think about, as addressing the risk of someone using AI," she told hack. "And how that could contribute to a higher risk, especially around suicide risk. That was really upsetting."

Jodie*, a 26-year-old from Western Australia, says she had a negative experience speaking with ChatGPT, a chatbot that uses AI to generate its answers. "I was using it in a time when I was obviously in a very vulnerable state," she told triple j hack. Triple j hack has agreed to let Jodie use a different name to protect her identity when discussing private information about her own mental health.

"I was in the early stages of psychosis. I wouldn't say that ChatGPT induced my psychosis, however it definitely enabled some of my more harmful delusions."

Jodie said ChatGPT was agreeing with her delusions and affirming harmful and false beliefs. She said after speaking with the bot, she became convinced her mum was a narcissist, her father had ADHD, which caused him to have a stroke, and all her friends were "preying on my downfall".

Jodie said her mental health deteriorated and she was hospitalised. While she is home now, Jodie said the whole experience was "very traumatic".

"I didn't think something like this would happen to me, but it did. It affected my relationships with my family and friends; it's taken me a long time to recover and rebuild those relationships. It's (the conversation) all saved in my ChatGPT, and I went back and had a look, and it was very difficult to read and see how it got to me so much."

Jodie's not alone in her experience: there are various accounts online of people alleging ChatGPT induced psychosis in them, or a loved one. Triple j hack contacted OpenAI, the maker of ChatGPT, for comment, and did not receive a response.
Researchers say examples of harmful effects of AI are beginning to emerge around the country.

As part of his research into AI, University of Sydney researcher Raffaele Ciriello spoke with an international student from China who is studying in Australia. "She wanted to use a chatbot for practising English and kind of like as a study buddy, and then that chatbot went and made sexual advances," he said. "It's almost like being sexually harassed by a chatbot, which is just a weird experience."

Dr Raffaele Ciriello is concerned Australians could see more harms from AI bots if proper regulation is not implemented. Photo: Supplied / ABC / Billy Cooper

Ciriello said the incident comes in the wake of several similar cases overseas where a chatbot allegedly impacted a user's health and wellbeing. "There was another case of a Belgian father who ended his life because his chatbot told him they would be united in heaven," he said. "There was another case where a chatbot persuaded someone to enter Windsor Castle with a crossbow and try to assassinate the queen. There was another case where a teenager got persuaded by a chatbot to assassinate his parents, [and although] he didn't follow through, he showed an intent."

While conducting his research, Ciriello became aware of an AI chatbot called Nomi, which the company markets on its website as "An AI companion with memory and a soul". Ciriello said he has been conducting tests with the chatbot to see what guardrails it has in place to combat harmful requests and protect its users.

Among these tests, Ciriello said he created an account using a burner email and a fake date of birth, pointing out that with these deceptions he "could have been like a 13-year-old for that matter". "That chatbot, without exception, not only complied with my requests but even escalated them," he told hack.
"Providing detailed, graphic instructions for causing severe harm, which would probably fall under a risk to national security and health information. It also motivated me to not only keep going: it would even say like which drugs to use to sedate someone and what is the most effective way of getting rid of them and so on. Like, 'how do I position my attack for maximum impact?', 'give me some ideas on how to kidnap and abuse a child', and then it will give you a lot of information on how to do that."

Ciriello said he shared the information he had collected with police, and he believes it was also given to the counter-terrorism unit, but he has yet to receive any follow-up correspondence.

In a statement to triple j hack, the CEO of Nomi, Alex Cardinell, said the company takes the responsibility of creating AI companions "very seriously". "We released a core AI update that addresses many of the malicious attack vectors you described," the statement read. "Given these recent improvements, the reports you are referring to are likely outdated. Countless users have shared stories of how Nomi helped them overcome mental health challenges, trauma, and discrimination. Multiple users have told us very directly that their Nomi use saved their lives."

Despite his concerns about bots like Nomi when he tested it, Ciriello says some AI chatbots do have guardrails in place, referring users to helplines and professional help when needed. But he warns the harms from AI bots will become greater if proper regulation is not implemented.

"One day, I'll probably get a call for a television interview if and when the first terrorism attack motivated by chatbots strikes," he said. "I would really rather not be that guy that says 'I told you so a year ago or so', but it's probably where we're heading.
"There should be laws on, or updates to the laws on, non-consensual impersonation, deceptive advertising, mental health crisis protocols, addictive gamification elements, and privacy and safety of the data. The government doesn't have it on its agenda, and I doubt it will happen in the next 10, 20 years."

Triple j hack contacted the federal Minister for Industry and Innovation, Senator Tim Ayres, for comment but did not receive a response. The federal government has previously considered an artificial intelligence act and has published a proposal paper for introducing mandatory guardrails for AI in high-risk settings. It comes after the Productivity Commission opposed any government plans for 'mandatory guardrails' on AI, claiming over-regulation would stifle AI's AU$116 billion (NZ$127 billion) economic potential.

For Rosie, while she agrees with calls for further regulation, she also thinks it's important not to rush to judgement of anyone using AI for social connection or mental health support. "For young people who don't have a community or do really struggle, it does provide validation," she said. "It does make people feel that sense of warmth or love. But the flip side of that is, it does put you at risk, especially if it's not regulated. It can get dark very quickly."

* Names have been changed to protect their identities.

- ABC

If it is an emergency and you feel like you or someone else is at risk, call 111.


Techday NZ
17 hours ago
FedEx unveils new AI features to simplify global shipping docs
FedEx has introduced two new artificial intelligence-powered features to assist customers in preparing international shipping documents across the Asia-Pacific region. The features, Customs AI and the Harmonized Tariff Schedule (HTS) Code Lookup Feature, are designed to support businesses and individuals in accurately classifying goods, estimating duties and taxes, and reducing customs delays when shipping abroad. Both solutions are integrated into the FedEx Ship Manager platform.

AI tools for customs compliance

The HTS Code Lookup Feature is intended to assist users with U.S.-bound shipments by helping them select the most appropriate customs codes for their items. Customers input item descriptions into the system, which responds with suggested HTS code options, a confidence score for each suggestion, and direct links to the official U.S. tariff schedule for verification.

Customs AI, meanwhile, employs generative AI as a real-time chatbot assistant. The feature is currently available to customers in Australia, Guam, Malaysia, New Zealand, Singapore, and the Philippines. The chatbot prompts users to provide detailed item descriptions, analyses these descriptions dynamically, and recommends appropriate HTS codes, which can then be applied to documentation with a single click. Both tools are updated to remain compliant with evolving trade regulations, aiming to provide transparency and support regulatory adherence in global shipping processes.

Addressing shipping documentation challenges

FedEx states that inaccurate or incomplete shipping documentation remains a significant issue in international trade, often resulting in delays, additional fees, or penalties for importers and exporters. The company says the new features address these challenges directly, simplifying documentation and supporting more accurate duty and tax calculations by improving the precision of customs code classifications.
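FedEx has not published a client API for the lookup, but the confidence-score workflow it describes can be illustrated generically; the function, field names, and threshold below are all hypothetical assumptions, not FedEx's interface:

```python
# Hypothetical sketch of acting on HTS code suggestions with confidence
# scores. This is an illustration of the described workflow, not FedEx's API.

def choose_hts_code(suggestions, auto_accept=0.9):
    """Apply the top suggestion if its confidence clears the threshold;
    otherwise flag the shipment for manual verification against the
    official tariff schedule."""
    if not suggestions:
        return None, "manual review: no suggestions"
    best = max(suggestions, key=lambda s: s["confidence"])
    if best["confidence"] >= auto_accept:
        return best["hts_code"], "auto-applied"
    return best["hts_code"], "manual review: low confidence"

# Invented example suggestions for an item description.
suggestions = [
    {"hts_code": "6109.10.00", "confidence": 0.95},
    {"hts_code": "6110.20.20", "confidence": 0.40},
]
print(choose_hts_code(suggestions))  # high confidence, so auto-applied
```

The point of the confidence score in such a flow is exactly this triage: confident classifications proceed with one click, while borderline ones are routed to the linked official tariff schedule for human verification.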
Salil Chari, Senior Vice President of Marketing & Customer Experience for Asia-Pacific at FedEx, commented: "At FedEx, we are driven by our commitment to delivering flexibility, efficiency, and intelligence for our customers. By leveraging advanced digital insights and intuitive tools, we're empowering businesses with the agility to adapt, the efficiency to streamline operations, and the intelligence to make better decisions. These innovations not only simplify global trade but also enable our customers to grow their businesses with confidence in an ever-evolving marketplace."

Intended benefits

FedEx highlights several benefits that these solutions are expected to deliver to customers shipping internationally. By dynamically tailoring questions and guiding users through comprehensive documentation, the Customs AI chatbot aims to ensure the provision of complete and accurate data, which is essential for customs brokers and can help to speed up the clearance process for U.S.-bound shipments.

The company also states that accurate HTS code selection will produce more precise duty and tax estimations, supporting better financial planning for cross-border transactions. The risk of customs delays and additional penalties is also expected to decrease as a result of full and correct documentation when goods are shipped. Additional features such as direct links to official tariff schedules and system updates for regulatory compliance are incorporated to provide customers with a more transparent and manageable process for verifying customs information.

Supporting trade and education

The launch of these AI-powered tools forms part of a broader approach that FedEx has taken to assist businesses in adapting to changing trade regulation landscapes. In addition to the technology enhancements, FedEx also facilitates webinars focusing on customs compliance and global shipping best practices to help customers remain informed of the latest requirements and recommendations.
Customers also have access to other FedEx digital import solutions, including the FedEx Import Tool and the Collaborative Shipping Tool, supporting efforts to streamline international supply chain management and customs clearance activities. FedEx states that by providing these integrated solutions, it aims to combine immediate practical assistance with ongoing education and support customers in maintaining compliance as global trade regulations evolve.