logo
#

Latest news with #LanguageModels

SOCRadar boosts MSSP support with free AI training, new tools
SOCRadar boosts MSSP support with free AI training, new tools

Techday NZ

timea day ago

  • Business
  • Techday NZ

SOCRadar boosts MSSP support with free AI training, new tools

SOCRadar has announced an expansion of its Managed Security Service Provider (MSSP) programme designed to support partners in scaling operations, automating threat workflows, and improving service delivery. As part of the enhanced programme, SOCRadar will provide free AI Agent and Automation Training to its partners. This training aims to educate MSSPs on the use of AI agents and generative AI (GenAI) technologies to streamline security operations centre (SOC), threat intelligence, and vulnerability management processes. The training is described as platform-agnostic, equipping MSSP partners with hands-on experience to build their own AI-powered workflows, irrespective of the specific tools they currently deploy. Alongside the introduction of free training, SOCRadar has implemented several enhancements to its MSSP programme, including multi-tenant licensing, threat intelligence use cases designed specifically for MSSPs, a Multi-Tenant Management Console, and configurable External Threat Assessment Reports. "Our enhanced MSSP program enables partners to scale smartly and serve clients more effectively. By combining AI Agents with our extended threat intelligence capabilities, MSSPs can double their operational efficiency—automating routine workflows, accelerating incident response, and delivering tailored intelligence without adding headcount. We believe AI Agents and GenAI will be foundational to the future of MSSPs, and we're committed to helping our partners lead that transformation," Huzeyfe Onal, Chief Executive Officer of SOCRadar, said. According to SOCRadar, its AI agents are intelligent automation components embedded within the company's Extended Threat Intelligence (XTI) platform. These agents utilise Large Language Models (LLMs) and automation scripts with the ability to execute complex, multi-stage cybersecurity workflows. Unlike traditional scripts or static rules, SOCRadar's AI agents can analyse contextual information, make decisions based on data, and take actions across multiple IT systems. This approach is intended to reduce the manual workload for analysts, while increasing both the speed and accuracy of threat detection and response. MSSPs can create what SOCRadar refers to as "smart workflows" by establishing specific goals and operational boundaries for each AI agent. The agents then apply planning, reasoning, and learning methods to support tasks such as identifying threats, enriching data, correlating alerts, or prioritising vulnerabilities for remediation. The company listed several key benefits of its framework for MSSPs, including the automation of threat intelligence, SOC, and vulnerability management tasks; reduction in analyst workload while accelerating detection and response times; improvement in decision accuracy with a reduction in false positives; enablement of continuous monitoring across multiple clients without increasing staffing; and the potential to increase both scalability and profitability whilst preserving service quality. SOCRadar reports that it serves over 800 customers in 70 countries. Its Extended Threat Intelligence Platform makes use of artificial intelligence and machine learning for threat detection and to deliver actionable intelligence against cyber threats. The suite of offerings includes Cyber Threat Intelligence, External Attack Surface Management, Brand Protection, Dark Web Monitoring, and Supply Chain Threat Intelligence.

Copyright's tryst with generative AI
Copyright's tryst with generative AI

The Hindu

time18-05-2025

  • Business
  • The Hindu

Copyright's tryst with generative AI

Copyright law has always been a product of technology. It was created in 1710 to deal with the outcome of the invention of the printing press, to protect publishers against any unauthorised publication while encouraging learning, and to further their economic interests. Since inception, copyright law has adapted itself to various technologies from the time of the printing press to the photocopying machine, to the recording device, and to the Internet. In each stage, the law has worked its way around technology. However, today there is a belief that generative AI has the potential to upset the copyright law. Such a debate is not new: it surfaces roughly every 20 years with each technological advent. So far, copyright law has been successful in forbidding commercial reproduction of works protected by copyright; currently, the law faces the task of prohibiting AI platforms from training on the works of the creators. There is a shift in the approach of using copyright law. In the past, the law dealt with copies of the original works; now, it has to deal with training of copyrighted material by AI platforms and not with the reproduction of copies itself. At a crossroads Generative AI companies, specifically Open AI, have found themselves at a crossroads with copyright law across countries. AI platforms employ a technology called Internet scraping by which Large Language Models (LLM) train the platform on all available knowledge. For training purposes, the platform accesses both copyrighted and non-copyrighted content. The copyright infringement cases are fought on subject matters such as literature, music, and photographs. Recently, the Federation of Indian Publishers as well as the Asian News International initiated copyright infringement claims against Open AI before the Delhi High Court for training the AI platform with the works of the publishers without their prior consent. Similar cases are pending before the American courts, where the respondents have taken the claim of 'fair learning' and 'fair use in education' as an exception provided by the U.S. Copyright Act. In these cases, Open AI has developed an opt-out mechanism which allows the publishers to opt-out from the data set training. But this strategy applies only to future and not past training. In the ongoing case in India, Professor Dr. Arul George Scaria, amicus curiae, has suggested that the court should address the issue of whether unlearning the information from the content used during training is technically and practically feasible. Further, he has also underscored the need for keeping in mind the effect of the future of AI development in India; access to legitimate information including copyrighted materials; and a direction from the court to Open AI to address falsely attributable sources. Among other things related to Open AI, it has been argued that the Indian courts lack competence to hear the case. Leaving that aside, the LLM platforms may find themselves in an uncharted territory in India, as the Indian Copyright Act adopts a different exception test and not the 'fair use' test established in the U.S. It adopts the enumerated approach, where the exact exceptions are already stated, the scope to manoeuvre is limited, and education exceptions are confined within classrooms and not beyond. In India, this could be effectively used by the right- holders in their favour. However, the law could potentially be used to prohibit access to books, much against the original purpose for which it was created. The opt-out mechanism developed by Open AI may also have a huge impact on the future of generative AI, as the efficiency of the AI depends on the material that it is trained upon. If in future, the technology is not trained on quality material, that could obfuscate the budding AI platforms, which will not have the benefit that Open AI has. The court should ensure a level playing field between generative AI with deep pockets and generative AI without deep pockets so as to strike the right balance. Solutions to the problem The claims by parties have the potential to impact the core of creation, art, and copyright law, since any creation stands on the shoulders of its predecessors. Generative AI/human creativity functions on the basis of learning from existing creativity, which acts as a nourishment to churn further creativity. Copyright law should not be turned on its head to prohibit future creators from having access to this benefit. Further, the arguments of the publishers in the case at hand has a potential of viewing human creation and machine creation differently in future and setting different consequences for both. It is pertinent to remember that a human being is not expected to create further without learning; at the same time, the law as it stands does not make any differentiation between human creation and machine creation. The foundational norms of copyright law offers solutions to the existing problem. Copyright in a work does not apply to the idea/information; rather, it is applicable only to the expression of the information. As long as the AI platform only uses the existing information for learning purposes, and does not thieve on the expression of the idea, it does not amount to infringement as per the law. When AI robs the copyright protected content, the existing norms of copyright law has its net in place to catch the infringements. The founding doctrine should not be compromised for the best interests of creativity as it acts as a medium between generative AI and creativity.

AI Models like ChatGPT and DeepSeek frequently exaggerate scientific findings, study reveals
AI Models like ChatGPT and DeepSeek frequently exaggerate scientific findings, study reveals

Time of India

time15-05-2025

  • Science
  • Time of India

AI Models like ChatGPT and DeepSeek frequently exaggerate scientific findings, study reveals

According to a new study published in the journal Royal Society Open Science, Large Language Models (LLMs) such as ChatGPT and DeepSeek often exaggerate scientific findings while summarising research papers. Researchers Uwe Peters from Utrecht University and Benjamin Chin-Yee from Western University and the University of Cambridge analysed 4900 AI-generated summaries from ten leading LLMs. Their findings revealed that up to 73 percent of summaries contained overgeneralised or inaccurate conclusions. Surprisingly, the problem worsened when users explicitly prompted the models to prioritise accuracy, and newer models like ChatGPT 4 performed worse than older versions. What are the findings of the study The study assessed how accurately leading LLMS summarised abstracts and full-length articles from prestigious science and medical journals, including Nature, Science, and The Lancet. Over a period of one year, the researchers collected and analysed 4,900 summaries generated by AI systems such as ChatGPT, Claude, DeepSeek, and LLaMA. Six out of ten models routinely exaggerated claims, often by changing cautious, study-specific statements like 'The treatment was effective in this study' into broader, definitive assertions like 'The treatment is effective.' These subtle shifts in tone and tense can mislead readers into thinking that scientific findings apply more broadly than they actually do. by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like They Lost Their Money - Learn From Their Lesson Expertinspector Click Here Undo Why are these exaggerations happening The tendency of AI models to exaggerate scientific findings appears to stem from both the data they are trained on and the behaviour they learn from user interactions. According to the study's authors, one major reason is that overgeneralizations are already common in scientific literature. When LLMs are trained on this content, they learn to replicate the same patterns, often reinforcing existing flaws rather than correcting them. Another contributing factor is user preference. Language models are optimised to generate responses that sound helpful, fluent, and widely applicable. As co-author Benjamin Chin-Yee explained, the models may learn that generalisations are more pleasing to users, even if they distort the original meaning. This results in summaries that may appear authoritative but fail to accurately represent the complexities and limitations of the research. Accuracy prompts backfire Contrary to expectations, prompting the models to be more accurate actually made the problem worse. When instructed to avoid inaccuracies, the LLMs were nearly twice as likely to produce summaries with exaggerated or overgeneralised conclusions compared to when given a simple, neutral prompt. 'This effect is concerning,' said Peters. 'Students, researchers, and policymakers may assume that if they ask ChatGPT to avoid inaccuracies, they'll get a more reliable summary. Our findings prove the opposite.' Humans still do better To compare AI and human performance directly, the researchers analysed summaries written by people alongside those generated by chatbots. The results showed that AI was nearly five times more likely to make broad generalisations than human writers. This gap underscores the need for careful human oversight when using AI tools in scientific or academic contexts. Recommendations for safer use To mitigate these risks, the researchers recommend using models like Claude, which demonstrated the highest generalisation accuracy in their tests. They also suggest setting LLMs to a lower "temperature" to reduce creative embellishments and using prompts that encourage past-tense, study-specific reporting. 'If we want AI to support science literacy rather than undermine it,' Peters noted, 'we need more vigilance and testing of these systems in science communication contexts.' AI Masterclass for Students. Upskill Young Ones Today!– Join Now

INE Security Alert: Top 5 Takeaways from RSAC 2025
INE Security Alert: Top 5 Takeaways from RSAC 2025

Yahoo

time13-05-2025

  • Business
  • Yahoo

INE Security Alert: Top 5 Takeaways from RSAC 2025

CARY, N.C., May 13, 2025 /PRNewswire/ -- Comprehensive Training Platform Delivers Solutions for AI Security, Cloud Management, and Incident Response Readiness. Fresh from a high-impact presence at RSAC 2025, where INE Security welcomed thousands of visitors to its interactive booth at San Francisco's Moscone Center, the global cybersecurity training and certification provider is addressing some of the top cybersecurity priorities emerging from the industry-leading event. As an exhibitor that engaged with both frontline practitioners and top-level decision makers, INE Security gained firsthand insights into organizations' most pressing security challenges: the convergence of AI-driven threats, multi-cloud vulnerabilities, and increasingly sophisticated attack vectors. Four days of packed session tracks and face-to-face discussions with industry leaders pointed to a clear reality: a large majority of ransomware victims lack effective response plans, and even more security professionals have doubts about their organization's readiness for zero-day attacks. INE Security is addressing how a comprehensive training platform directly addresses the five most critical security imperatives that dominated this year's conference conversations. Top 5 Cybersecurity Imperatives from RSAC 2025 1. AI Risk Management Becomes Business-Critical AI security solutions dominated RSAC this year, signaling that as organizations adopt advanced response technologies, comprehensive training must keep pace. 72% of leaders report an increase in organizational cyber risks, with ransomware remaining a top concern, according to the World Economic Forum. Organizations deploying AI tools and Large Language Models discover their systems vulnerable to sophisticated exploits that can manipulate AI behaviors, leading to data breaches and system compromises. This shift demands immediate security expertise to protect AI implementations. 2. LLM Vulnerabilities Expose Enterprise Data Large Language Models (LLMs) emerged as a flashpoint at RSAC, sparking debates on the risks and merits. Despite the variety of strong opinions, what is clear is that LLMs are here to stay. They represent a new frontier for cyber threats, with critical vulnerabilities emerging from AI training data, model manipulation, and prompt injection attacks. The cybersecurity community at RSAC 2025 identified this as one of the year's most pressing concerns, with a large majority highlighting the advance of adversarial capabilities (such as phishing, malware development, and deep fakes) as their greatest concern regarding generative AI impact on cybersecurity. Organizations must understand and defend against these AI-specific attack vectors to protect their digital assets, creating new specialized job titles such as AI Security Analyst. 3. Multi-Cloud Environment Security Becomes Complex Across domains, cloud remains top of mind for industry executives. As businesses adopt multiple cloud platforms, security analysts face new challenges in maintaining consistent security postures across diverse environments. Research presented at RSAC 2025's cloud security sessions by the Cloud Security Alliance found that although misconfigurations appear in a majority of real-world breaches, over 50% of organizations rate them as a low-to-moderate risk. This gap highlights a growing disconnect between perceived and actual risk in cloud deployments. Researchers emphasized that IAM hygiene is essential, as are regular audits of the security systems in place. 4. Zero Trust Architecture Needs Proper Implementation While zero trust principles are widely recognized as essential, implementing them effectively across modern IT environments proves challenging. The RSAC 2025 conference theme 'Many Voices. One Community' emphasized the need for unified approaches to security, including zero trust implementation. According to Zscaler's ThreatLabz 2025 VPN Risk Report discussed at the conference, 81% of organizations plan to implement zero trust strategies within the next 12 months. Organizations struggle with identity management, access controls, and continuous verification across cloud services, remote workers, and interconnected systems, making it critical for companies to develop talent via cybersecurity certification programs. 5. Crisis Response Requires Comprehensive Preparedness When cyber-attacks succeed, organizations must maintain critical operations while containing threats. RSAC 2025 sessions on rapid incident response highlighted critical gaps in preparedness, while the Microsoft Digital Defense Report found 76% of organizations which suffered ransomware attacks in 2024 lacked an effective response plan. The role of cybersecurity analysts has evolved to include crisis response capabilities, making effective training in incident management a critical skill. Real world scenario training, such as INE Security's Skill Dive lab platform, helps build important muscle memory that becomes crucial during a crisis. Addressing the Challenges "After engaging with hundreds of cybersecurity leaders at our booth and participating in thought-provoking discussions throughout RSAC 2025, these five priorities clearly represent fundamental shifts in how organizations must approach cybersecurity," said Dara Warn, CEO of INE Security. "The conversations we had with practitioners and executives alike confirmed that traditional security approaches cannot adequately address AI vulnerabilities, multi-cloud complexities, or the sophisticated response requirements of modern cyber-attacks." INE Security provides practical solutions for each critical area: AI Security Fundamentals: Training on securing AI systems, understanding LLM vulnerabilities, and implementing AI-specific security controls Advanced Cloud Security: Hands-on experience managing multiple-cloud environments, implementing proper configurations, and maintaining security across distributed platforms Zero Trust Implementation: Practical guidance on designing and deploying zero trust architecture with proper access controls and verification systems Crisis Management Training: Realistic incident response scenarios that prepare teams to maintain operations while containing security breaches Continuous Skill Development: Access to 700+ courses and 50+ learning paths, and preparation for credentials from CompTIA Security+ to advanced professional certificates that help secure environments from modern threats "The energy and insights we gathered at RSAC 2025 reinforced our conviction that the complexity of modern cybersecurity demands organizations invest in comprehensive cybersecurity training," added Warn. "Our comprehensive training and cybersecurity certification platform ensures teams develop the expertise needed to address these challenges effectively." About INE Security INE Security is the premier provider of online networking and cybersecurity training and certification. Harnessing a powerful hands-on lab platform, cutting-edge technology, a global video distribution network, and world-class instructors, INE Security is the top training choice for Fortune 500 companies worldwide for cybersecurity training in business and for IT professionals looking to advance their careers. INE Security's suite of learning paths offers an incomparable depth of expertise across cybersecurity. The company is committed to delivering advanced technical training while also lowering the barriers worldwide for those looking to enter and excel in an IT career. ContactKathryn BrownINE Securitykbrown@ Photo: View original content to download multimedia: SOURCE INE Security Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

INE Security Alert: Top 5 Takeaways from RSAC 2025
INE Security Alert: Top 5 Takeaways from RSAC 2025

Yahoo

time13-05-2025

  • Business
  • Yahoo

INE Security Alert: Top 5 Takeaways from RSAC 2025

CARY, N.C., May 13, 2025 /PRNewswire/ -- Comprehensive Training Platform Delivers Solutions for AI Security, Cloud Management, and Incident Response Readiness. Fresh from a high-impact presence at RSAC 2025, where INE Security welcomed thousands of visitors to its interactive booth at San Francisco's Moscone Center, the global cybersecurity training and certification provider is addressing some of the top cybersecurity priorities emerging from the industry-leading event. As an exhibitor that engaged with both frontline practitioners and top-level decision makers, INE Security gained firsthand insights into organizations' most pressing security challenges: the convergence of AI-driven threats, multi-cloud vulnerabilities, and increasingly sophisticated attack vectors. Four days of packed session tracks and face-to-face discussions with industry leaders pointed to a clear reality: a large majority of ransomware victims lack effective response plans, and even more security professionals have doubts about their organization's readiness for zero-day attacks. INE Security is addressing how a comprehensive training platform directly addresses the five most critical security imperatives that dominated this year's conference conversations. Top 5 Cybersecurity Imperatives from RSAC 2025 1. AI Risk Management Becomes Business-Critical AI security solutions dominated RSAC this year, signaling that as organizations adopt advanced response technologies, comprehensive training must keep pace. 72% of leaders report an increase in organizational cyber risks, with ransomware remaining a top concern, according to the World Economic Forum. Organizations deploying AI tools and Large Language Models discover their systems vulnerable to sophisticated exploits that can manipulate AI behaviors, leading to data breaches and system compromises. This shift demands immediate security expertise to protect AI implementations. 2. LLM Vulnerabilities Expose Enterprise Data Large Language Models (LLMs) emerged as a flashpoint at RSAC, sparking debates on the risks and merits. Despite the variety of strong opinions, what is clear is that LLMs are here to stay. They represent a new frontier for cyber threats, with critical vulnerabilities emerging from AI training data, model manipulation, and prompt injection attacks. The cybersecurity community at RSAC 2025 identified this as one of the year's most pressing concerns, with a large majority highlighting the advance of adversarial capabilities (such as phishing, malware development, and deep fakes) as their greatest concern regarding generative AI impact on cybersecurity. Organizations must understand and defend against these AI-specific attack vectors to protect their digital assets, creating new specialized job titles such as AI Security Analyst. 3. Multi-Cloud Environment Security Becomes Complex Across domains, cloud remains top of mind for industry executives. As businesses adopt multiple cloud platforms, security analysts face new challenges in maintaining consistent security postures across diverse environments. Research presented at RSAC 2025's cloud security sessions by the Cloud Security Alliance found that although misconfigurations appear in a majority of real-world breaches, over 50% of organizations rate them as a low-to-moderate risk. This gap highlights a growing disconnect between perceived and actual risk in cloud deployments. Researchers emphasized that IAM hygiene is essential, as are regular audits of the security systems in place. 4. Zero Trust Architecture Needs Proper Implementation While zero trust principles are widely recognized as essential, implementing them effectively across modern IT environments proves challenging. The RSAC 2025 conference theme 'Many Voices. One Community' emphasized the need for unified approaches to security, including zero trust implementation. According to Zscaler's ThreatLabz 2025 VPN Risk Report discussed at the conference, 81% of organizations plan to implement zero trust strategies within the next 12 months. Organizations struggle with identity management, access controls, and continuous verification across cloud services, remote workers, and interconnected systems, making it critical for companies to develop talent via cybersecurity certification programs. 5. Crisis Response Requires Comprehensive Preparedness When cyber-attacks succeed, organizations must maintain critical operations while containing threats. RSAC 2025 sessions on rapid incident response highlighted critical gaps in preparedness, while the Microsoft Digital Defense Report found 76% of organizations which suffered ransomware attacks in 2024 lacked an effective response plan. The role of cybersecurity analysts has evolved to include crisis response capabilities, making effective training in incident management a critical skill. Real world scenario training, such as INE Security's Skill Dive lab platform, helps build important muscle memory that becomes crucial during a crisis. Addressing the Challenges "After engaging with hundreds of cybersecurity leaders at our booth and participating in thought-provoking discussions throughout RSAC 2025, these five priorities clearly represent fundamental shifts in how organizations must approach cybersecurity," said Dara Warn, CEO of INE Security. "The conversations we had with practitioners and executives alike confirmed that traditional security approaches cannot adequately address AI vulnerabilities, multi-cloud complexities, or the sophisticated response requirements of modern cyber-attacks." INE Security provides practical solutions for each critical area: AI Security Fundamentals: Training on securing AI systems, understanding LLM vulnerabilities, and implementing AI-specific security controls Advanced Cloud Security: Hands-on experience managing multiple-cloud environments, implementing proper configurations, and maintaining security across distributed platforms Zero Trust Implementation: Practical guidance on designing and deploying zero trust architecture with proper access controls and verification systems Crisis Management Training: Realistic incident response scenarios that prepare teams to maintain operations while containing security breaches Continuous Skill Development: Access to 700+ courses and 50+ learning paths, and preparation for credentials from CompTIA Security+ to advanced professional certificates that help secure environments from modern threats "The energy and insights we gathered at RSAC 2025 reinforced our conviction that the complexity of modern cybersecurity demands organizations invest in comprehensive cybersecurity training," added Warn. "Our comprehensive training and cybersecurity certification platform ensures teams develop the expertise needed to address these challenges effectively." About INE Security INE Security is the premier provider of online networking and cybersecurity training and certification. Harnessing a powerful hands-on lab platform, cutting-edge technology, a global video distribution network, and world-class instructors, INE Security is the top training choice for Fortune 500 companies worldwide for cybersecurity training in business and for IT professionals looking to advance their careers. INE Security's suite of learning paths offers an incomparable depth of expertise across cybersecurity. The company is committed to delivering advanced technical training while also lowering the barriers worldwide for those looking to enter and excel in an IT career. ContactKathryn BrownINE Securitykbrown@ Photo: View original content to download multimedia: SOURCE INE Security Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store