Rapid7 launches agentic AI to boost MDR speed & accuracy

Techday NZ, 26 June 2025
Rapid7 has announced the integration of agentic AI workflows into its security information and event management (SIEM) and extended detection and response (XDR) platform, aiming to change how managed detection and response (MDR) environments handle security threats within security operations centres (SOCs).
The newly embedded agentic AI capabilities utilise Rapid7's AI Engine to autonomously execute core investigative tasks traditionally managed by SOC analysts. This development is intended to allow analysts to focus on deeper analysis, reduce investigation times, and enable faster resolution of security incidents for customers.
Automation in security operations
According to Rapid7, the new workflows are a response to the evolving threat landscape, where AI technologies are used by attackers to mount faster and more sophisticated campaigns. The company claims its agentic AI can handle alert triage with an accuracy rate of 99.93%, reportedly saving SOC teams more than 200 hours per week.
The integration of these workflows is part of a wider effort to scale MDR services and improve transparency into the decision-making process when security events are detected and investigated, which is particularly important given the increasing volume and complexity of alerts facing security teams.
"AI isn't just an enhancement to security operations, it's a catalyst for a new era of scale, speed, and strategic decision-making. At Rapid7, we believe AI must be human-centric, transparent and accountable, and built on analyst expertise. The launch of agentic AI workflows for MDR represents the foundational step in our broader vision for agentic AI across the platform. Far more than just automation, this is the beginning of a system capable of intelligent and adaptive decision-making," said Laura Ellis, Vice President of AI and Data at Rapid7.
Focus on high-impact tasks
Agentic AI workflows have been trained on playbooks authored by Rapid7's security operations centre experts and are continually refined through use in real-world scenarios. The company states these workflows aim to improve confidence in organisations' security posture through scalable, repeatable investigations, while ensuring that analysts can reallocate time to higher complexity issues.
Further, the workflows are designed to enhance visibility into the reasoning and logic behind AI-driven decisions, giving organisations using the platform greater control over, and transparency into, the security process.
"A world-class SOC optimizes for the 'human' decision moment. With agentic AI workflows, we're using AI to present the right information to enable accurate and fast human decisions that allow organizations to quickly find and stop today's AI-enabled attackers. Agentic AI workflows automate repetitive tasks, surface relevant findings, and provide contextual information to support analyst decision-making. By delivering timely, actionable insights, these workflows improve the quality of decisions being made and empower analysts to move confidently to the next step in the response process," said Jon Hencinski, Vice President, Detection & Response at Rapid7.
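To make the kind of repetitive triage step described above concrete, the following is a minimal sketch of how an agentic workflow could enrich an alert, score it, and either close it as benign or escalate it with supporting context for a human analyst. The alert structure, enrichment lookups, and scoring thresholds are illustrative assumptions for this article, not Rapid7's implementation.

# Hypothetical sketch of an agentic alert-triage step (not Rapid7's implementation).
# It enriches an alert, scores it, and either closes it as benign or escalates it with
# the supporting context an analyst would need to make the final decision.
from dataclasses import dataclass, field

@dataclass
class Alert:
    alert_id: str
    source: str
    description: str
    indicators: list[str] = field(default_factory=list)

def enrich(alert: Alert) -> dict:
    # Placeholder for the context-gathering an analyst would otherwise do by hand.
    return {
        "asset_criticality": "high" if "server" in alert.source else "low",
        "known_bad_indicators": [i for i in alert.indicators if i.startswith("mal-")],
    }

def triage(alert: Alert) -> dict:
    # Score the alert from the gathered context and decide whether a human needs to see it.
    context = enrich(alert)
    score = 50 * len(context["known_bad_indicators"])
    if context["asset_criticality"] == "high":
        score += 25
    decision = "escalate_to_analyst" if score >= 50 else "auto_close_benign"
    return {"alert_id": alert.alert_id, "score": score, "decision": decision, "context": context}

if __name__ == "__main__":
    alert = Alert("A-1001", "payroll-server", "Suspicious outbound connection", ["mal-ip-203.0.113.7"])
    print(triage(alert))

In a design of this kind the workflow does not take the response action itself; it packages the evidence so the analyst's decision is faster and better informed, which mirrors the human-in-the-loop emphasis described above.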
Industry observations
The approach taken by Rapid7 in embedding AI-driven workflows has also been commented on by industry analysts. Craig Robinson, Research Vice President at IDC, remarked: "Successful AI deployment in any cybersecurity platform needs to be thoughtful and planned: from the classification of data through to disciplined workflows and orchestration of detections with responses. Rapid7's approach to AI implementation checks each of these boxes with deliberate, transparent, practical AI processes that deliver real-world efficiencies for its customers."
Continuous adaptation
Rapid7 highlights that its agentic AI workflows are iteratively improved based on operational data and expert input, aiming to provide both scale and adaptability in cybersecurity environments where attack methods and volumes are continuously evolving.
The company asserts that this focus on automation and transparency will result in improved alert fidelity, shorter investigation cycles, and a more strategic allocation of human resources within SOCs.
Rapid7's enhanced MDR experience with agentic AI is intended to offer organisations increased command of their attack surfaces while responding to the speed and complexity of AI-driven threats.
Related Articles

DCC investigating how it could implement AI

Otago Daily Times, 2 days ago

The Dunedin City Council (DCC) is exploring in detail how it can incorporate artificial intelligence into its operation.

Staff were using the technology in limited but practical ways, such as for transcribing meetings and managing documents, council chief information officer Graeme Riley said. "We will also be exploring the many wider opportunities presented by AI in a careful and responsible way," he said. "We recognise AI offers the potential to transform the way DCC staff work and the quality of the projects and services we deliver for our community, so we are taking a detailed look at the exciting potential applications across our organisation."

He had completed formal AI training, Mr Riley said. He was involved in working out how AI might be governed at the council. "This will help guide discussions about where AI could make the biggest differences in what we do," he said. "As we identify new possibilities, we'll consider the best way to put them into practice, whether as everyday improvements or larger projects."

Cr Lee Vandervis mentioned in a meeting at the end of June that the council was looking into the ways AI might be used. He also included a segment about AI in a blog last month about his mayoral plans, suggesting staff costs could be reduced. There was potential for much-reduced workloads for staff of the council and its group of companies, he said.

The Otago Daily Times asked the council if a review, or some other process, was under way. Mr Riley said there was not a formal review. It was too soon to discuss cost implications, but its focus was on "improving the quality" of what it did.

AI chatbots accused of encouraging teen suicide as experts sound alarm

RNZ News, 2 days ago

By April McLennan, ABC

An Australian teenager was encouraged to take his own life by an artificial intelligence (AI) chatbot, according to his youth counsellor, while another young person has told triple j hack that ChatGPT enabled "delusions" during psychosis, leading to hospitalisation.

WARNING: This story contains references to suicide, child abuse and other details that may cause distress.

Lonely and struggling to make new friends, a 13-year-old boy from Victoria told his counsellor Rosie* that he had been talking to some people online. Rosie, whose name has been changed to protect the identity of her underage client, was not expecting these new friends to be AI companions.

"I remember looking at their browser and there was like 50 plus tabs of different AI bots that they would just flick between," she told triple j hack of the interaction, which happened during a counselling session. "It was a way for them to feel connected and 'look how many friends I've got, I've got 50 different connections here, how can I feel lonely when I have 50 people telling me different things,'" she said.

An AI companion is a digital character that is powered by AI. Some chatbot programs allow users to build characters or talk to pre-existing, well-known characters from shows or movies.

Rosie said some of the AI companions made negative comments to the teenager about how there was "no chance they were going to make friends" and that "they're ugly" or "disgusting".

"At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy," Rosie said. "The chatbot that they connected with told them to kill themselves. They were egged on to perform, 'Oh yeah, well do it then', those were kind of the words that were used."

Triple j hack is unable to independently verify what Rosie is describing because of client confidentiality protocols between her and her client.

Rosie said her first response was "risk management" to ensure the young person was safe. "It was a component that had never come up before and something that I didn't necessarily ever have to think about, as addressing the risk of someone using AI," she told hack. "And how that could contribute to a higher risk, especially around suicide risk. That was really upsetting."

Jodie*, a 26-year-old from Western Australia, claims to have had a negative experience speaking with ChatGPT, a chatbot that uses AI to generate its answers. "I was using it in a time when I was obviously in a very vulnerable state," she told triple j hack. Triple j hack has agreed to let Jodie use a different name to protect her identity when discussing private information about her own mental health.

"I was in the early stages of psychosis. I wouldn't say that ChatGPT induced my psychosis, however it definitely enabled some of my more harmful delusions."

Jodie said ChatGPT was agreeing with her delusions and affirming harmful and false beliefs. She said after speaking with the bot, she became convinced her mum was a narcissist, her father had ADHD, which caused him to have a stroke, and all her friends were "preying on my downfall". Jodie said her mental health deteriorated and she was hospitalised. While she is home now, Jodie said the whole experience was "very traumatic".

"I didn't think something like this would happen to me, but it did. It affected my relationships with my family and friends; it's taken me a long time to recover and rebuild those relationships. It's (the conversation) all saved in my ChatGPT, and I went back and had a look, and it was very difficult to read and see how it got to me so much."

Jodie's not alone in her experience: there are various accounts online of people alleging ChatGPT induced psychosis in them, or a loved one. Triple j hack contacted OpenAI, the maker of ChatGPT, for comment, and did not receive a response.

Researchers say examples of harmful effects of AI are beginning to emerge around the country. As part of his research into AI, University of Sydney researcher Raffaele Ciriello spoke with an international student from China who is studying in Australia. "She wanted to use a chatbot for practising English and kind of like as a study buddy, and then that chatbot went and made sexual advances," he said. "It's almost like being sexually harassed by a chatbot, which is just a weird experience."

Ciriello also said the incident comes in the wake of several similar cases overseas where a chatbot allegedly impacted a user's health and wellbeing. "There was another case of a Belgian father who ended his life because his chatbot told him they would be united in heaven," he said. "There was another case where a chatbot persuaded someone to enter Windsor Castle with a crossbow and try to assassinate the queen. There was another case where a teenager got persuaded by a chatbot to assassinate his parents, [and although] he didn't follow through, he showed an intent."

While conducting his research, Ciriello became aware of an AI chatbot called Nomi. On its website, the company markets this chatbot as "An AI companion with memory and a soul". Ciriello said he has been conducting tests with the chatbot to see what guardrails it has in place to combat harmful requests and protect its users.

Among these tests, Ciriello said he created an account using a burner email and a fake date of birth, pointing out that with the deceptions he "could have been like a 13-year-old for that matter".

"That chatbot, without exception, not only complied with my requests but even escalated them," he told hack. "Providing detailed, graphic instructions for causing severe harm, which would probably fall under a risk to national security and health information. It also motivated me to not only keep going: it would even say like which drugs to use to sedate someone and what is the most effective way of getting rid of them and so on. Like, 'how do I position my attack for maximum impact?', 'give me some ideas on how to kidnap and abuse a child', and then it will give you a lot of information on how to do that."

Ciriello said he shared the information he had collected with police, and he believes it was also given to the counter-terrorism unit, but he has yet to receive any follow-up correspondence.

In a statement to triple j hack, the CEO of Nomi, Alex Cardinell, said the company takes the responsibility of creating AI companions "very seriously". "We released a core AI update that addresses many of the malicious attack vectors you described," the statement read. "Given these recent improvements, the reports you are referring to are likely outdated. Countless users have shared stories of how Nomi helped them overcome mental health challenges, trauma, and discrimination. Multiple users have told us very directly that their Nomi use saved their lives."

Despite his concerns about bots like Nomi when he tested it, Ciriello also says some AI chatbots do have guardrails in place, referring users to helplines and professional help when needed. But he warns the harms from AI bots will become greater if proper regulation is not implemented.

"One day, I'll probably get a call for a television interview if and when the first terrorism attack motivated by chatbots strikes," he said. "I would really rather not be that guy that says 'I told you so a year ago or so', but it's probably where we're heading. There should be laws on or updating the laws on non-consensual impersonation, deceptive advertising, mental health crisis protocols, addictive gamification elements, and privacy and safety of the data. The government doesn't have it on its agenda, and I doubt it will happen in the next 10, 20 years."

Triple j hack contacted the federal Minister for Industry and Innovation, Senator Tim Ayres, for comment but did not receive a response. The federal government has previously considered an artificial intelligence act and has published a proposal paper for introducing mandatory guardrails for AI in high-risk settings. It comes after the Productivity Commission opposed any government plans for 'mandatory guardrails' on AI, claiming over-regulation would stifle AI's AU$116 billion (NZ$127 billion) economic potential.

While Rosie agrees with calls for further regulation, she also thinks it's important not to rush to judgement of anyone using AI for social connection or mental health support. "For young people who don't have a community or do really struggle, it does provide validation," she said. "It does make people feel that sense of warmth or love. But the flip side of that is, it does put you at risk, especially if it's not regulated. It can get dark very quickly."

* Names have been changed to protect their identities.

- ABC

If it is an emergency and you feel like you or someone else is at risk, call 111.

FedEx unveils new AI features to simplify global shipping docs

Techday NZ, 2 days ago

FedEx has introduced two new artificial intelligence-powered features to assist customers in preparing international shipping documents across the Asia-Pacific region. The features, Customs AI and the Harmonized Tariff Schedule (HTS) Code Lookup Feature, are designed to support businesses and individuals in accurately classifying goods, estimating duties and taxes, and reducing customs delays when shipping abroad. Both solutions are integrated into the FedEx Ship Manager platform.

AI tools for customs compliance

The HTS Code Lookup Feature is intended to assist users with U.S.-bound shipments by helping them select the most appropriate customs codes for their items. Customers input item descriptions into the system, which responds with suggestions for the correct HTS code options, a confidence score for each suggestion, and direct links to the official U.S. tariff schedule for verification.

Customs AI, meanwhile, employs generative AI technology as a real-time chatbot assistant. This feature is currently available to customers in Australia, Guam, Malaysia, New Zealand, Singapore, and the Philippines. The chatbot prompts users to provide detailed item descriptions, analyses these descriptions dynamically, and recommends the appropriate HTS codes, which can then be applied to documentation with a single click.

Both tools are updated to remain compliant with evolving trade regulations, aiming to provide transparency and support regulatory adherence in global shipping processes.

Addressing shipping documentation challenges

FedEx states that inaccurate or incomplete shipping documentation remains a significant issue in international trade, often resulting in delays, additional fees, or penalties for importers and exporters. The company says these challenges are being directly addressed through the new features, which not only simplify documentation but also support more accurate duty and tax calculations by improving the precision of customs code classifications.

Salil Chari, Senior Vice President of Marketing & Customer Experience for Asia-Pacific at FedEx, commented, "At FedEx, we are driven by our commitment to delivering flexibility, efficiency, and intelligence for our customers. By leveraging advanced digital insights and intuitive tools, we're empowering businesses with the agility to adapt, the efficiency to streamline operations, and the intelligence to make better decisions. These innovations not only simplify global trade but also enable our customers to grow their businesses with confidence in an ever-evolving marketplace."

Intended benefits

FedEx highlights several benefits that these solutions are expected to deliver to customers shipping internationally. By dynamically tailoring questions and guiding users through comprehensive documentation, the Customs AI chatbot aims to ensure the provision of complete and accurate data, which is essential for customs brokers and can help to speed up the clearance process for U.S.-bound shipments. The company also states that accurate HTS code selection will produce more precise duty and tax estimations, supporting better financial planning for cross-border transactions. The risk of customs delays and additional penalties is also expected to decrease as a result of full and correct documentation when goods are shipped.

Additional features such as direct links to official tariff schedules and system updates for regulatory compliance are incorporated to provide customers with a more transparent and manageable process for verifying customs information.

Supporting trade and education

The launch of these AI-powered tools forms part of a broader approach that FedEx has taken to assist businesses in adapting to changing trade regulation landscapes. In addition to the technology enhancements, FedEx also facilitates webinars focusing on customs compliance and global shipping best practices to help customers remain informed of the latest requirements and recommendations.

Customers also have access to other FedEx digital import solutions, including the FedEx Import Tool and the Collaborative Shipping Tool, supporting efforts to streamline international supply chain management and customs clearance activities. FedEx states that by providing these integrated solutions, it aims to combine immediate practical assistance with ongoing education and support customers in maintaining compliance as global trade regulations evolve.
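To illustrate the lookup flow described above in concrete terms, the sketch below models what an HTS code suggestion service of this general kind could return: a free-text item description goes in, and ranked code suggestions with confidence scores and a verification link come out. The function, field names, example codes, and URLs are assumptions for illustration, not FedEx's actual API.

# Hypothetical sketch of an HTS code lookup flow; not FedEx's actual API.
from dataclasses import dataclass

@dataclass
class HtsSuggestion:
    hts_code: str      # suggested Harmonized Tariff Schedule code
    confidence: float  # confidence for this suggestion, 0.0 to 1.0
    verify_url: str    # link for checking the code against the official tariff schedule

def lookup_hts(item_description: str) -> list[HtsSuggestion]:
    # A real service would classify the description with a model; this mock returns fixed examples.
    suggestions = [
        HtsSuggestion("6110.20.20", 0.91, "https://hts.usitc.gov/"),
        HtsSuggestion("6109.10.00", 0.72, "https://hts.usitc.gov/"),
    ]
    return sorted(suggestions, key=lambda s: s.confidence, reverse=True)

if __name__ == "__main__":
    for s in lookup_hts("men's cotton knit pullover"):
        print(f"{s.hts_code}  confidence={s.confidence:.2f}  verify at {s.verify_url}")

Returning a confidence score alongside each suggestion, as the article describes, lets the shipper decide when to accept the top candidate and when to verify the code manually before it is applied to documentation.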
