SOCRadar boosts MSSP support with free AI training, new tools

Techday NZ, 03-06-2025
SOCRadar has announced an expansion of its Managed Security Service Provider (MSSP) programme designed to support partners in scaling operations, automating threat workflows, and improving service delivery.
As part of the enhanced programme, SOCRadar will provide free AI Agent and Automation Training to its partners. This training aims to educate MSSPs on the use of AI agents and generative AI (GenAI) technologies to streamline security operations centre (SOC), threat intelligence, and vulnerability management processes.
The training is described as platform-agnostic, equipping MSSP partners with hands-on experience to build their own AI-powered workflows, irrespective of the specific tools they currently deploy.
Alongside the introduction of free training, SOCRadar has implemented several enhancements to its MSSP programme, including multi-tenant licensing, threat intelligence use cases designed specifically for MSSPs, a Multi-Tenant Management Console, and configurable External Threat Assessment Reports.
"Our enhanced MSSP program enables partners to scale smartly and serve clients more effectively. By combining AI Agents with our extended threat intelligence capabilities, MSSPs can double their operational efficiency—automating routine workflows, accelerating incident response, and delivering tailored intelligence without adding headcount. We believe AI Agents and GenAI will be foundational to the future of MSSPs, and we're committed to helping our partners lead that transformation," Huzeyfe Onal, Chief Executive Officer of SOCRadar, said.
According to SOCRadar, its AI agents are intelligent automation components embedded within the company's Extended Threat Intelligence (XTI) platform. These agents combine Large Language Models (LLMs) with automation scripts to execute complex, multi-stage cybersecurity workflows.
Unlike traditional scripts or static rules, SOCRadar's AI agents can analyse contextual information, make decisions based on data, and take actions across multiple IT systems. This approach is intended to reduce the manual workload for analysts, while increasing both the speed and accuracy of threat detection and response.
MSSPs can create what SOCRadar refers to as "smart workflows" by establishing specific goals and operational boundaries for each AI agent. The agents then apply planning, reasoning, and learning methods to support tasks such as identifying threats, enriching data, correlating alerts, or prioritising vulnerabilities for remediation.
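To make the idea of goals and operational boundaries more concrete, the short Python sketch below shows what a goal-and-boundary driven triage workflow could look like in the abstract. It is a hypothetical illustration only: it does not use SOCRadar's platform or APIs, the alert fields and scoring are invented, and the decision step stands in for the LLM-based planning an AI agent would perform.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "smart workflow": a goal plus operational
# boundaries constrain what an automated agent may do with each alert.
@dataclass
class AgentConfig:
    goal: str                                  # e.g. "triage inbound alerts"
    allowed_actions: set = field(default_factory=lambda: {"enrich", "escalate", "close"})
    max_auto_closures_per_run: int = 10        # boundary: cap autonomous actions

def enrich(alert: dict) -> dict:
    """Placeholder enrichment step (reputation lookup, asset context, etc.)."""
    alert["context"] = {"asset_criticality": "high" if alert["host"].startswith("db") else "low"}
    return alert

def prioritise(alert: dict) -> int:
    """Toy scoring: combine alert severity with asset criticality."""
    base = {"low": 1, "medium": 5, "high": 9}[alert["severity"]]
    return base + (3 if alert["context"]["asset_criticality"] == "high" else 0)

def decide(alert: dict, config: AgentConfig, closures_so_far: int) -> str:
    """Stand-in for the agent's planning/reasoning step, kept inside the boundaries."""
    score = prioritise(alert)
    if score >= 8 and "escalate" in config.allowed_actions:
        return "escalate"
    if closures_so_far < config.max_auto_closures_per_run and "close" in config.allowed_actions:
        return "close"
    return "queue_for_analyst"

if __name__ == "__main__":
    config = AgentConfig(goal="triage inbound alerts without adding analyst headcount")
    alerts = [
        {"id": 1, "host": "db-prod-01", "severity": "medium"},
        {"id": 2, "host": "laptop-42", "severity": "low"},
    ]
    closures = 0
    for alert in alerts:
        action = decide(enrich(alert), config, closures)
        closures += action == "close"
        print(alert["id"], "->", action)
```

In a real deployment the scoring and decision functions would be replaced by the platform's enrichment sources and an LLM-driven planner, but the pattern of an explicit goal, a fixed action set, and hard limits on autonomous actions is the point the description above is making.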
The company listed several key benefits of its framework for MSSPs, including the automation of threat intelligence, SOC, and vulnerability management tasks; reduction in analyst workload while accelerating detection and response times; improvement in decision accuracy with a reduction in false positives; enablement of continuous monitoring across multiple clients without increasing staffing; and the potential to increase both scalability and profitability whilst preserving service quality.
SOCRadar reports that it serves over 800 customers in 70 countries. Its Extended Threat Intelligence Platform makes use of artificial intelligence and machine learning for threat detection and to deliver actionable intelligence against cyber threats. The suite of offerings includes Cyber Threat Intelligence, External Attack Surface Management, Brand Protection, Dark Web Monitoring, and Supply Chain Threat Intelligence.

Related Articles

AI uptake accelerates in APAC as ANZ firms face skills shortage
Techday NZ, 17 hours ago

Organisations in Australia and New Zealand are reporting early business gains from artificial intelligence (AI), but significant challenges remain, especially in bridging skills gaps across sectors, according to new research commissioned by Dell Technologies and NVIDIA.
AI adoption trends across APAC
The IDC analysis, based on surveys across Asia Pacific (APAC), found a rapid acceleration of AI, generative AI (GenAI), and machine learning (ML) adoption in the region. APAC's AI-centric server market is forecast to reach AUD$366 billion by 2025, with 21% of organisations in Australia and New Zealand (ANZ) noting measurable improvements from AI and ML deployments. Furthermore, 5% of ANZ businesses identified AI as central to their competitiveness. Spending on GenAI is also gaining momentum. This year, 84% of APAC organisations are expected to allocate between AUD$1.5 million and $3 million for GenAI initiatives. Across APAC, 38% of AI budgets are devoted to GenAI, higher than the global average of 33%. Despite the optimism, businesses face difficulties aligning AI initiatives with strategic goals and integrating AI with existing workflows.
Deployment strategies and challenges
Deployment strategies are evolving, with public cloud (including multicloud) leading in 2024. However, there is rising demand for private and on-premises AI deployments, driven by security, cost efficiency, data sharing, collaboration, and industry-specific needs. Organisations are increasingly adopting specialised AI models and focusing on data security and infrastructure choices across cloud environments. Key challenges for scaling AI and GenAI include rising IT costs, regulatory and compliance risks, and meeting energy efficiency commitments. Skills shortages are cited by over 72% of APAC enterprises as a significant concern, leading to delays in digital transformation and product development. Security and privacy remain critical considerations, with organisations turning to external service providers for support in AI system security, infrastructure modernisation, and workforce training.
Approaches to AI adoption
The report notes that APAC organisations are generally following a structured, phased approach to AI adoption, targeting high-impact use cases to deliver measurable benefits while mitigating risks. Priorities include investing in suitable infrastructure, fostering AI-centric teams, aligning AI strategies with business goals, and implementing robust data governance frameworks. Australian and New Zealand businesses are placing increasing reliance on external experts to tackle skills shortages and build scalable infrastructure. Across APAC, 60% of businesses rely on external developers for AI applications, with 30% developing AI in-house and 10% using commercial off-the-shelf solutions. Partnerships with technology providers have become essential for roadmap development, infrastructure support, implementation, and workforce development.
AI uptake across key industries
Banks and financial services are leading AI adoption in APAC, with 84% using AI and 67% deploying GenAI, often to enhance fraud detection, anti-money laundering, and operational efficiency. Financial services spending on AI and GenAI is projected to grow at a compound annual rate of 25-31% from 2023 to 2028. A majority of banks prefer composing AI solutions using enterprise platforms, which requires ongoing expert support in security and data management.
In manufacturing, 78% of companies are using AI, with 54% adopting GenAI for supply chain optimisation, predictive maintenance, and real-time production monitoring. Half of manufacturers prefer tailored AI solutions, focusing on areas such as manufacturing execution systems, supply chain integration, and workforce upskilling.
The energy sector is leveraging AI (83%) and GenAI (73%) for grid optimisation, predictive maintenance, and energy distribution. Australian engineering firm Worley has begun deploying generative AI solutions to enhance efficiency and collaboration. Investments are being made in responsible AI practices and infrastructure to promote the transition to cleaner energy and improve grid management across the region.
Healthcare organisations report 86% adoption of AI and 59% adoption of GenAI, with a focus on diagnostics and predictive analytics. Notably, CSIRO's Virga computer cluster is being used in partnership with Queensland Children's Hospital to train AI models for diagnosing cystic fibrosis in paediatric patients. Over half of regional healthcare organisations prefer composed AI solutions, often turning to vendors to address complexities around regulations and skills shortages.
Retailers in APAC are using AI (82%) and GenAI (63%) for personalisation, inventory planning, dynamic pricing, and fraud prevention. Around 43% prefer composed AI solutions. Retailers report challenges with data readiness and talent availability, prompting a combination of internal capability building and reliance on outside vendors.
Expert perspective
"The Asia Pacific region holds immense potential to lead the way in AI adoption and innovation. Now is the time for enterprises to move beyond proof of concept (POC) and focus on achieving measurable return on investment (ROI)," said Chris Kelly, Senior Vice President, Infrastructure Solutions Group Specialty Sales, APJC, Dell Technologies. "The journey to consistent ROI is complex and requires comprehensive support across every stage - strategy, use case development, data preparation, governance, optimisation, and scaling AI implementations. With the support from technology partners, enterprises can overcome adoption challenges and accelerate their path to impactful, results-driven AI outcomes."
Research methodology
The findings reflect multiple IDC data sources and surveys conducted between August 2023 and August 2024, spanning up to 919 respondents from various industries throughout APAC. The research highlights both opportunities and ongoing barriers as companies use AI and GenAI to drive productivity, efficiency, and new business models across the region.

Michael Parker joins TurinTech to lead Artemis AI expansion
Techday NZ, 5 days ago

Michael Parker, previously of Docker, has joined TurinTech as Vice President of Engineering to oversee the scaling of the company's Artemis AI engineering platform.
Appointment and background
Parker brings considerable experience in developer tooling and platform engineering, having held senior roles at Docker, where he was responsible for leading modernisation of the company's cloud platform as well as improving the developer experience. His career includes building scalable systems and managing distributed engineering teams globally. At Docker, Parker was involved in steering the firm's transition from infrastructure-focused solutions to developer-first tooling, leading initiatives such as platform modernisation and overseeing the user experience behind Docker Hub.
Role at TurinTech
In his new post at TurinTech, Parker will be responsible for engineering delivery across both cloud and on-premises deployments of Artemis. He will focus on integrating AI agents into software development processes, overseeing planning workflows and deploying outcome-based review tools, with the aim of enabling developers to work seamlessly with AI technologies.
TurinTech's Artemis platform is built to support the new era of agentic AI in software development, offering teams guidance, validating AI contributions, and aligning development work with organisational goals. The platform is structured around an outcome-first approach, prioritising productivity gains that can be measured and verified. Mike Basios, Chief Technology Officer at TurinTech, commented: "We're building Artemis to help teams get the most out of AI - whether that's LLMs, agents, or both. It's not about generating more code - it's about delivering measurably improved outcomes."
Parker's appointment comes as TurinTech prepares for a broader rollout of Artemis. The platform is already in use by several global enterprises, including Intel and Taylor Wessing, as part of its limited launch phase earlier this year. Addressing the challenges facing the adoption of agentic AI, Parker emphasised the importance of structured workflows in development environments reliant on AI agents. "Agentic development is a powerful shift, but it needs structure to succeed," said Michael Parker, VP of Engineering. "With Artemis, we're building the planning and workflow intelligence that lets AI agents work more like real teammates. Developers stay in control, but get meaningful support - from scoping to implementation to validation. It's about tackling the real-world friction in today's GenAI tools and making AI genuinely useful in everyday engineering."
TurinTech reports growing demand for Artemis, as organisations recognise the need for platforms that not only generate code but also deliver functional, production-ready software with a clear focus on organisational outcomes.
Market response
Leslie Kanthan, CEO and Co-founder of TurinTech, said that interest in Artemis has expanded since its initial roll-out. He highlighted the significance of Parker's recruitment in supporting the company's ambitions to increase the platform's availability to more teams worldwide. "Demand for Artemis continues to grow since our limited launch earlier this year. Global enterprises like Intel and Taylor Wessing are already engaging, and we're seeing strong developer interest in our AI-driven engineering platform. With Michael onboard, we're excited to accelerate availability and bring the power of Artemis to more teams, faster."
As part of the broader expansion, Parker has also recruited former colleagues Johnny Stoten and Diogo Ferreira, who previously held roles at Docker, to further bolster the engineering function at TurinTech. TurinTech focuses on building systems that evolve and improve both code and machine learning models. Its products, including Artemis for code and evoML for machine learning pipelines, use agentic planning, evolutionary algorithms and real-time validation to achieve results that can be measured in a production environment. The aim is to help clients move beyond basic AI generation, facilitating the deployment of software that is robust, efficient and aligned with organisational objectives.

Cloudera launches on-premises AI platform for secure enterprise use
Techday NZ, 6 days ago

Cloudera has introduced the latest version of its Data Services, enabling enterprises to deploy generative AI capabilities on their own infrastructure and behind their firewall.
The updated release makes Private AI available on premises, offering organisations a way to develop and manage AI models securely using their own data centres. This approach addresses growing concerns over sensitive information and intellectual property, allowing companies to keep data in-house rather than relying on public cloud environments.
Security and governance are central to the new offering. The inclusion of built-in governance tools and hybrid portability empowers organisations to establish their own sovereign data clouds. According to research by Accenture, 77% of organisations currently lack foundational data and AI security measures necessary to safeguard critical models, data pipelines, and cloud infrastructure. Cloudera's release directly targets these issues, promising to accelerate enterprise AI deployments.
The newly available on-premises capabilities allow organisations to decrease infrastructure expenses, improve productivity for data teams, and streamline AI deployment timelines. These improvements, Cloudera asserts, will help customers move from prototype to production in weeks instead of months. Management of the entire data lifecycle is also available both on-premises and in public cloud, using the same cloud-native services, to provide consistency and flexibility. Users gain cloud-native agility while maintaining a secure environment behind their firewall. Acceleration of workload deployment, automated security enhancements, and a faster time to value for AI initiatives are among the noted benefits.
Key features
Significant components of this release include the availability of Cloudera AI Inference Service and AI Studios in the data centre for the first time. Both tools were previously limited to cloud environments and are designed to address obstacles commonly faced by enterprises in adopting AI technologies. Cloudera AI Inference Service is now available on premises and benefits from NVIDIA acceleration. It is described as one of the industry's first AI inference services with embedded NIM microservice capabilities. This tool supports the deployment and management of large-scale AI models directly in enterprise data centres, where data is already securely held. Cloudera AI Studios brings a low-code approach to building and deploying GenAI applications and agents. The on-premises availability aims to democratise the AI application lifecycle by offering pre-built templates for both technical and non-technical teams.
Results from an independently commissioned Total Economic Impact study by Forrester Consulting highlight operational improvements following adoption. According to the study, a composite organisation saw an 80% reduction in time-to-value for workload deployment, a 20% productivity increase for practitioners and platform teams, and overall savings of 35% from utilising the new architecture. The study also noted hardware utilisation improvements from 30% to 70%, and a reduction in required capacity by 25% to over 50% after infrastructure modernisation.
Industry perspectives
Industry analyst Sanjeev Mohan commented on the market context, noting the dual pressures of AI adoption and data protection. "Historically, enterprises have been forced to cobble together complex, fragile DIY solutions to run their AI on-premises. Today the urgency to adopt AI is undeniable, but so are the concerns around data security. What enterprises need are solutions that streamline AI adoption, boost productivity, and do so without compromising on security."
Leo Brunnick, Chief Product Officer at Cloudera, described the development as a shift in data management strategies, emphasising agility and modern architecture. "Cloudera Data Services On-Premises delivers a true cloud-native experience on-premises, providing agility and efficiency without sacrificing security or control. This release is a significant step forward in data modernization, moving from monolithic clusters to a suite of agile, containerized applications."
Toto Prasetio, Chief Information Officer at BNI, highlighted the value of secure generative AI for regulated industries such as banking, where compliance and data protection are paramount. "BNI is proud to be an early adopter of Cloudera's AI Inference service. This technology provides the essential infrastructure to securely and efficiently expand our generative AI initiatives, all while adhering to Indonesia's dynamic regulatory environment. It marks a significant advancement in our mission to offer smarter, quicker, and more dependable digital banking solutions to the people of Indonesia."
The latest software release from Cloudera is available for deployment in enterprise data centres and is being presented to customers to demonstrate its AI and data platform capabilities.
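As an illustration of how an application inside the firewall might consume such an on-premises inference service, the Python sketch below assumes an OpenAI-compatible chat completions endpoint, which NIM-based services commonly expose. The endpoint URL, model name, and token are placeholders for illustration, not actual Cloudera values or APIs.

```python
from openai import OpenAI

# Illustrative only: a hypothetical on-premises, OpenAI-compatible endpoint
# reached from inside the enterprise network, so prompts and data stay in-house.
client = OpenAI(
    base_url="https://ai-inference.internal.example.com/v1",  # placeholder URL
    api_key="REPLACE_WITH_INTERNAL_TOKEN",                    # placeholder credential
)

response = client.chat.completions.create(
    model="example-llm",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "Answer using only data available inside the network."},
        {"role": "user", "content": "Summarise this week's failed login anomalies."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```

The point of the sketch is that an on-premises deployment changes where the endpoint lives and who controls it, not how applications call it: client code written against a standard chat completions interface can be pointed at an internal URL instead of a public cloud service.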
