
How Kumrashan Indranil Iyer Is Building Trust in the Age of Agentic AI
Kumrashan is dedicated to leading a new generation of cyber defense. As a Senior Leader of Information Security at a major multinational bank, he is tasked with overseeing groundbreaking work in AI-driven threat detection and digital trust systems.
Building systems people can trust
Kumrashan explains that AI is increasingly able to reason, adapt, and make autonomous decisions, a capability known as 'agentic AI.' 'We're no longer dealing with simple tools. We're interacting with digital agents that pursue goals. These can include goals you didn't explicitly program,' he says.
While traditional AI systems follow scripts and models designed by humans, agentic AI is able to interpret broad objectives and figure out the 'how' on its own. 'This evolution brings with it immense promise but also unprecedented risk,' says Kumrashan.
According to Cybersecurity Ventures, global damage from cybercrime is projected to reach $10.5 trillion annually by 2025. Much of this risk is now being shaped by how AI is used, or rather misused, by attackers.
Today's cyber threat profile includes innovations such as malware that adapts in real time and attacks that resemble conversations rather than breaches.
'The threat landscape isn't just growing, it's learning,' Kumrashan warns. 'Imagine an adversary deploying an AI agent that doesn't just follow instructions but evolves its own strategy.' These kinds of attacks are no longer science fiction. They are happening now.
Introducing 'digital conscience'
To meet this challenge, Kumrashan Indranil Iyer has introduced Cognitive Trust Architecture (CTA). The novel framework is gaining recognition in cyber defense circles for its focus on adaptive reasoning and trust calibration. Unlike traditional compliance or oversight models, CTA not only observes what AI systems do but also seeks to understand why they behave as they do.
Kumrashan explains it this way: 'Think of CTA as a digital conscience. It allows us to guide AI behavior based on trustworthiness, accountability, and explainability. If trust is the currency of human-AI collaboration, then CTA is the treasury that regulates it.'
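The mechanics are easiest to grasp as a gating loop: score each proposed agent action on the qualities Kumrashan names (trustworthiness, accountability, explainability), then allow, escalate, or block. The sketch below is a hypothetical illustration of that idea only; the class, weights, and thresholds are this article's assumptions, not the actual CTA implementation.

```python
from dataclasses import dataclass

# Hypothetical illustration of 'trust calibration': score an agent's
# proposed action on three qualities and gate execution on the result.
# All names, weights, and thresholds here are assumptions for clarity.

@dataclass
class ActionAssessment:
    trustworthiness: float  # 0-1: how well the action matches vetted past behavior
    accountability: float   # 0-1: can the action be traced to an auditable goal?
    explainability: float   # 0-1: quality of the agent's stated rationale

def calibrated_trust(a: ActionAssessment,
                     weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted trust score for a proposed agent action."""
    w_t, w_a, w_e = weights
    return w_t * a.trustworthiness + w_a * a.accountability + w_e * a.explainability

def gate_action(a: ActionAssessment, threshold: float = 0.7) -> str:
    """Allow, escalate to a human, or block, based on calibrated trust."""
    score = calibrated_trust(a)
    if score >= threshold:
        return "allow"
    return "escalate_to_human" if score >= 0.5 else "block"

# Example: a well-explained but hard-to-audit action is escalated, not blocked.
print(gate_action(ActionAssessment(0.6, 0.4, 0.9)))  # -> "escalate_to_human"
```

In a production system, those scores would presumably come from behavioral baselines and audit logs rather than hand-set numbers; the point of the sketch is the gating pattern, not the scoring.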
His research paper on CTA, 'Cognitive Trust Architecture for Mitigating Agentic AI Threats: Adaptive Reasoning and Resilient Cyber Defense', has been cited widely across industry and academic circles, including by researchers focused on machine ethics, autonomous systems, and national digital defense.
In addition, he has authored numerous other influential research papers in the field.
Lessons from the frontline
Kumrashan explains the motivation behind the system: 'I've spent my career watching brilliant algorithms fail not because they were wrong, but because they weren't understood, or trusted. Most AI failures aren't technical. They're trust failures.'
For him, the solution goes beyond better programming. 'AI needs to align more with human intent and ethical reasoning.' In his view, organizations must evolve from AI governance to what he calls AI guardianship.
'Governance gives you a checklist, but guardianship asks: "Can I predict my AI's behavior? Can I explain it to a regulator? Can I trust it in a crisis?"' he explains. 'If the answer to these questions isn't "yes," then your system isn't ready.'
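Read as an engineering requirement, that checklist maps onto a simple readiness gate: a system ships only when every question is answered 'yes.' Here is a minimal sketch under that reading; the question keys and evidence flags are illustrative assumptions, standing in for real audit evidence.

```python
# A hedged sketch of turning the three guardianship questions into an
# automated readiness check. Keys and pass criteria are assumptions;
# real evidence would come from model audits, not hand-set booleans.

GUARDIANSHIP_QUESTIONS = {
    "predictable": "Can I predict my AI's behavior?",
    "explainable": "Can I explain it to a regulator?",
    "crisis_ready": "Can I trust it in a crisis?",
}

def guardianship_ready(evidence: dict) -> bool:
    """Ready only if every guardianship question has supporting evidence."""
    missing = [q for key, q in GUARDIANSHIP_QUESTIONS.items()
               if not evidence.get(key, False)]
    for question in missing:
        print(f"Not ready: no evidence for '{question}'")
    return not missing

# Example: predictable and explainable, but untested in a crisis -> False.
guardianship_ready({"predictable": True, "explainable": True, "crisis_ready": False})
```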
Kumrashan is also a passionate advocate for AI literacy and ethical tech leadership. He regularly writes posts that translate complex cybersecurity issues into plain language, offering insights for both professionals and everyday readers.
His recent speaking appearances include the IEEE Conference on Artificial Intelligence and several panels on responsible AI innovation. He mentors emerging AI professionals and regularly serves as a peer reviewer and research guide in the fields of cybersecurity and artificial intelligence.
For his efforts, Kumrashan has earned wide recognition across the cybersecurity industry. In 2025, he was named the winner of the Global InfoSec Award for Trailblazing AI Cybersecurity at the RSA Conference and was also honored with the Fortress Cybersecurity Award for innovation in AI defense. In addition, he has been named a Fellow by both the Hackathon Raptors Association and the Soft Computing Research Society in acknowledgment of his contributions to AI-driven security and the advancement of digital trust frameworks.
A future based on trust
Future technology is likely to surpass our wildest imaginations, from self-driving cars to AI-driven military defense. As the world barrels toward widespread adoption of AI-powered autonomy, Kumrashan believes the stakes are only getting higher.
'I'm excited by the idea of AI agents that predict threats before they happen, respond autonomously, and scale defense beyond human limits,' he says. 'However, I'm also concerned about the lack of causational explainability. Assuming that if it's AI, then it has to be right is dangerous.'
For Kumrashan Indranil Iyer, the goal is simple and urgent: to build systems based on cognitive trust.
Disclaimer: This article reflects personal views only and does not represent the views of the individual's employer or affiliates.