
Hexaware and Abluva Join Forces to Deliver Secure Agentic AI Solutions for the Life Sciences Industry
Mumbai (Maharashtra) [India] / Iselin (New Jersey) [US], July 10: Hexaware Technologies, a global provider of IT services and solutions, today announced a strategic partnership with Abluva, an innovator in agentic AI security, to address the security challenges posed by autonomous AI agents in the Life Sciences industry. This collaboration brings together Hexaware's deep domain expertise and Abluva's groundbreaking Secure Intelligence Plane to help organizations in the sector deploy generative AI (GenAI) safely and in compliance with industry regulations. As Life Sciences organizations increasingly adopt agentic AI to enhance research, clinical trials, patient data management, and commercial operations, the partnership ensures that AI agents operate in a secure, governed, and auditable environment, without hindering innovation.

Delivering Governed and Secure Generative AI Agents for Life Sciences Innovation
Partnership Highlights:
* Real-time Agent Governance: Abluva's Secure Intelligence Plane enforces critical controls such as purpose binding, role-based context augmentation, data masking, and tooling control to prevent unauthorized actions (a brief illustrative sketch of how such controls work appears after this list).
* Comprehensive Agent Life Cycle Security: Security protocols to protect sensitive data span the entire agent life cycle, including the fine-tuning, Retrieval-Augmented Generation (RAG), and prompting stages. This provides advanced visibility and targeted safeguards designed specifically for agent-driven architectures in clinical and research settings.
* Autonomous Threat Mitigation and Self-Healing: Abluva's patent-pending "self-healing" capability allows the system to automatically detect and respond to unforeseen or anomalous agent behaviors in real-time, reducing risks associated with agent autonomy.
* Enhanced Compliance and Privacy for AI: The solution ensures that agent activities comply with industry standards such as HIPAA and GDPR, as well as internal governance policies, through embedded governance and audit features that address crucial aspects of data privacy.
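To give a concrete sense of what controls like purpose binding and data masking involve, the sketch below shows one way such checks could sit in front of an agent's prompt. It is a minimal, hypothetical Python illustration: the roles, purposes, and patterns are invented for this example, and it does not represent Abluva's Secure Intelligence Plane or its API.

```python
import re
from dataclasses import dataclass

# Hypothetical policy table: which purposes each agent role may act under.
# Role and purpose names are invented for illustration only.
ALLOWED_PURPOSES = {
    "clinical_analyst": {"adverse_event_review", "protocol_qa"},
    "commercial_agent": {"market_sizing"},
}

# Simple identifier patterns masked before any text reaches the model.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[ID]"),           # SSN-style identifiers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

@dataclass
class AgentRequest:
    role: str
    purpose: str
    prompt: str

def mask_sensitive(text: str) -> str:
    """Replace known identifier patterns with placeholders."""
    for pattern, placeholder in PHI_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def guard(request: AgentRequest) -> str:
    """Enforce purpose binding, then mask sensitive data in the prompt."""
    allowed = ALLOWED_PURPOSES.get(request.role, set())
    if request.purpose not in allowed:
        raise PermissionError(
            f"Role '{request.role}' is not bound to purpose '{request.purpose}'"
        )
    return mask_sensitive(request.prompt)

if __name__ == "__main__":
    request = AgentRequest(
        role="clinical_analyst",
        purpose="adverse_event_review",
        prompt="Summarize events for patient 123-45-6789 (contact: jane.doe@example.org).",
    )
    print(guard(request))  # identifiers are masked before the agent sees the prompt
```

In practice, a governance layer would enforce such policies outside application code and cover the full range of identifiers regulated under HIPAA and GDPR; the core sequence, however, stays the same: bind the request to an approved purpose, then mask sensitive data before the model sees it.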
"We are thrilled to partner with Abluva to implement the most secure agentic AI solutions for large sponsors and CROs," said Raj Gondhali, AVP & Head of Clinical Solutions, Hexaware. "This collaboration is pivotal, combining our expertise in global clinical solutions with Abluva's pioneering agentic security technology to ensure enhanced AI safety, compliance, and operational efficiency as our clients adopt next-generation AI."
Raj Darji, CEO at Abluva, echoed similar sentiments: "We are excited to announce our partnership with Hexaware, a global systems integration specialist renowned for its Life Sciences expertise. This collaboration marks a significant milestone for Abluva as we aim to deliver value and innovative solutions in agentic AI security. By combining Hexaware's global reach with our novel research-based technology for securing autonomous agents, we are committed to providing comprehensive and integrated solutions that enable safe AI adoption."
Amit Gautam, CTO at Abluva, added, "Our partnership with Hexaware enables us to extend our expertise in agentic AI security to a broader market, addressing the critical need for robust governance in AI-driven enterprises. By integrating our Secure Intelligence Plane with Hexaware's capabilities, we can deliver sophisticated, real-time governance solutions tailored to secure autonomous AI agents in complex life sciences environments."
This partnership underlines Hexaware's commitment to next-generation cloud and AI platforms, and Abluva's leadership in research-driven agentic security innovations. Together, they empower Life Sciences enterprises to unlock GenAI's full potential, securely and at scale.
About Abluva Inc
Abluva stands at the forefront of data security, pioneering research-driven technologies to address today's most pressing data challenges. We are dedicated to building a secure data plane that enables fine-grained access control and robust privacy across diverse data sources. Our innovations extend to advanced protection for agents and generative AI, alongside groundbreaking data and intent-driven breach discovery. Abluva's solutions empower organizations to strengthen their compliance, bolster their security posture, and accelerate innovation through the secure democratization of data and intelligence.
Visit https://www.abluva.com for more information.
About Hexaware Technologies
Hexaware is a global technology and business process services company. Every day, Hexawarians wake up with a singular purpose: to create smiles through great people and technology. With offices across the world, we empower enterprises worldwide to realize digital transformation at scale and speed by partnering with them to build, transform, run, and optimize their technology and business processes.
Learn more about Hexaware at https://hexaware.com/
Related Articles


Time of India | 5 hours ago
AI Co-Pilots and Trust Stacks: The Next Chapter in Martech
Dear Reader, AI is here, but it is not here to replace you. It is here to make you sharper. That is what Salesforce India's Arundhati Bhattacharya reminds us this week. AI can free marketers from the drudgery of repetitive work, giving them back the time to craft strategy and create experiences that truly move people. But as we dive into her perspective, it is clear: AI's edge only matters when paired with human insight. Let's dive in.

AI isn't the boss, you are. Arundhati Bhattacharya, president and CEO, Salesforce India, shares why AI is only as valuable as the humans guiding it. Beyond automating grunt work, the technology's real promise is letting marketers focus on the things machines cannot replicate: creativity, empathy, and trust-building. Read the full conversation. Why you should care: The marketers who thrive will be the ones who treat AI as a co-pilot, not a crutch.

AdTech origins you probably didn't know. Did you realise the dawn of online ads and the birth of third-party cookies happened in the same year? This explainer traces how those early milestones formed the bedrock of today's digital ad economy and why understanding these roots matters for where adtech is headed next. Explore the glossary. Why you should care: Because knowing the 'why' behind the tools in your stack helps you future-proof your strategy.

CMOs are building trust stacks now. Funnels are out; trust is the new KPI. This piece argues that tomorrow's top CMOs will not just optimise tech stacks but build trust stacks, connecting data, messaging, and brand behaviour to earn belief at scale. See what a trust stack looks like. Why you should care: Because trust stacks will operate parallel to the martech ecosystem, ensuring that credibility is built and maintained at every touchpoint.

Stories you might have missed:
* Catching the AI slipstream
* How GenAI's powering a second app boom
* The 'legitimate' excuse of assumed consent
* Perplexity's Pitch: What if your AI cloud could show its work?
* AI tools not for decision making: Kerala HC guidelines to district judiciary on AI usage

Over to you: How are you using tech to earn trust, not just traffic? Are you giving AI a seat at your table or the head of it? Tell us on LinkedIn and tag @ETBrandEquity. We will feature the smartest takes in our next edition.

Stay tuned for the next edition of the MarTech+ newsletter, rolling out every Wednesday.

From, Team ETBrandEquity


Hindustan Times | 9 hours ago
AI must aid human thought, not become its replacement
Watching the recent resurgence of violence in Kashmir, I find myself grappling with questions about the role of technology, particularly Generative Artificial Intelligence (GenAI), in warfare. India is built upon the philosophy of live and let live, yet that doesn't mean passively accepting aggression. As someone deeply invested in responsibly applying AI in critical industries like financial services, aerospace, semiconductors, and manufacturing, I am acutely aware of the unsettling dual-use potential of the tools we develop: the same technology driving efficiency and innovation can also be weaponised for harm.

We stand at a critical juncture. GenAI is rapidly shifting from mere technological advancement to a profound geopolitical tool. The stark division between nations possessing advanced GenAI capabilities and those dependent on externally developed systems poses serious strategic risks. Predominantly shaped by the interests and biases of major AI-developing nations, primarily the US and China, these models inevitably propagate their creators' narratives, often undermining global objectivity. Consider the inherent biases documented in AI models like OpenAI's GPT series or China's Deepseek, which subtly yet powerfully reflect geopolitical views. Research indicates these models minimise criticism of their home nations, embedding biases that can exacerbate international tensions. China's AI approach, for instance, often reinforces national policy stances, inadvertently legitimising territorial disputes or delegitimising sovereign entities, complicating fragile diplomatic relationships, notably in sensitive regions like Kashmir.

Historically, mutually assured destruction (MAD) relied on nuclear deterrence. Today's arms race, however, is digital and equally significant in its potential to reshape global stability. We must urgently reconsider this outdated framework. Instead of mutually assured destruction, I advocate for a new kind of MAD: mutual advancement through digitisation. This paradigm shifts the emphasis from destructive competition to collaborative development and technological self-reliance. This evolved MAD requires nations, particularly technologically vulnerable developing countries, to establish independent, culturally informed AI stacks. Such autonomy would reflect local histories, cultures, and political nuances, making these nations less susceptible to external manipulation. Robust, culturally informed AI not only protects against misinformation but fosters genuine global dialogue, contributing to a balanced, multipolar AI landscape.

At the core of geopolitical tensions lies a profound challenge of mutual understanding. The world's dominant AI models, primarily trained in English and Chinese, leave multilingual and culturally diverse nations like India, with its 22 official languages and hundreds of dialects, in a precarious position. A simplistic AI incapable of capturing nuanced linguistic subtleties risks generating misunderstandings with severe diplomatic repercussions. To prevent this, developing sophisticated, culturally aware AI models is paramount. Multilingual AI systems must leverage similarities among related languages, such as Marathi and Gujarati or Tamil and Kannada, to rapidly scale without losing depth or nuance. Such culturally adept systems, sensitive to idiomatic expressions and contextual subtleties, significantly enhance cross-cultural understanding, reducing the risk of conflict driven by miscommunication.

As GenAI becomes integrated into societal infrastructure and decision-making processes, it will inevitably reshape human roles. While automation holds tremendous promise for efficiency, delegating judgment, especially in life-and-death contexts like warfare, to AI systems raises profound concerns. I am reminded of the Cold War incident in 1983 when Soviet Lieutenant Colonel Stanislav Petrov trusted human intuition over technological alarms, averting nuclear disaster, a poignant reminder of why critical human judgment must never be relinquished to machines entirely.

My greatest fear remains starkly clear: a future where humans willingly delegate judgment and thought to algorithms. We should not accept this future. We share a collective responsibility, as innovators, technologists, and global citizens, to demand and ensure that AI serves human wisdom rather than replaces it. Let's commit today: never allow technology to automate away our humanity.

Arun Subramaniyan is founder and CEO, Articul8. The views expressed are personal.


Time of India | 11 hours ago
White House unveils artificial intelligence policy plan
The White House on Wednesday released an artificial intelligence (AI) policy plan highlighting priorities for the US to achieve "global dominance" in AI. The plan, developed by US President Donald Trump's administration, calls for open-source and open-weight AI models to be made freely available by developers for anyone in the world to download. Its three core themes are accelerating AI innovation, building American AI infrastructure, and leading in international AI diplomacy.

Sriram Krishnan, White House advisor on AI policy, took to X to share the announcement. "There is a lot of exciting actions in here but one I'm very partial to is the focus on open source and open weights and making sure the U.S. leads in this critical area," he said in his post.

The plan calls for the Commerce Department to research Chinese AI models for alignment with Chinese Communist Party talking points. As previously reported by Reuters, it adds that the federal government should not allow AI-related federal funding to be directed toward states with "burdensome" regulations. (With inputs from Reuters)