
Latest news with #Augmented

Delhi CM Rekha Gupta to open Teej Mela on July 25

Hans India | Entertainment | 7 days ago

New Delhi: Delhi Chief Minister Rekha Gupta will lead Teej festivities in the city, with the government set to organise a Teej Mela from July 25 to 27. Aiming to promote women's empowerment, cultural prosperity, and community harmony, the festival will be held at Delhi Haat, Pitampura. CM Gupta will inaugurate the monsoon festival on July 25, and a grand celebrity night featuring performances by popular artists will be held on July 26. Throughout the three days, the fair will feature folk dances, music, magic shows, and cultural performances, enhancing its festive spirit.

The Chief Minister said on Sunday that the mela will be a celebration of India's cultural heritage, the strength of women, and traditional arts, offering something special for every generation. To make the 2025 Teej Mahotsav truly exceptional, the venue will be decked out with theme-based decor. The entire space will reflect the traditional Teej theme, featuring 3D entry gates, colourful chandeliers, buntings, hanging lights, and LED and spot lighting, an official statement said. Visitors will also enjoy a digital experience at the venue, with at least three themed selfie booths, including one with Augmented Reality (AR). AI-assisted mehndi design selection and on-site application will also be available, it said.

The Chief Minister added that around 80 stalls will be set up to showcase traditional arts and crafts, featuring handicrafts, ethnic wear, bangles, block printing, embroidery, mehndi, and traditional foods. Various competitions and stage performances will also be organised to encourage women's participation: mehndi, rangoli, bindi decoration, Teej Queen, Teej quiz, and slogan-writing competitions will be held, with attractive prizes for winners. On July 27, a special 'Women in Green' fashion walk will be hosted, culminating in the selection of Miss Teej 2025.

For women, the fair will feature swings and fun activities throughout the day, along with workshops focused on women's empowerment, health, and social issues. Every day will include storytelling sessions related to Teej, aimed at enriching cultural knowledge and passing down traditions. The festival will also highlight live demonstrations of traditional crafts, such as pottery on the wheel, bangle-making, weaving, block printing, and embroidery, giving visitors a close-up view of artisanal skills. The Chief Minister stated that this year's Teej Mahotsav will be celebrated across Delhi with traditional fervour, cultural dignity, and grandeur.

Hexaware and Abluva Join Forces to Deliver Secure Agentic AI Solutions for the Life Sciences Industry

Business Standard | Business | 10-07-2025

PRNewswire, Mumbai (Maharashtra) [India] / Iselin (New Jersey) [US], July 10: Hexaware Technologies, a global provider of IT services and solutions, today announced a strategic partnership with Abluva, an innovator in agentic AI security, to address security challenges posed by autonomous AI agents in the Life Sciences industry. This collaboration brings together Hexaware's deep domain expertise and Abluva's groundbreaking Secure Intelligence Plane to help organizations in the sector deploy generative AI (GenAI) safely and in compliance with industry regulations. As Life Sciences organizations increasingly adopt agentic AI to enhance research, clinical trials, patient data management, and commercial operations, the partnership ensures that AI agents operate in a secure, governed, and auditable environment without hindering innovation.

Delivering Governed and Secure Generative AI Agents for Life Sciences Innovation

Partnership Highlights:

• Real-time Agent Governance: Abluva's Secure Intelligence Plane enforces critical controls such as purpose binding, role-based context augmentation, data masking, and tooling control to prevent unauthorized actions (a minimal sketch of these controls follows the quotes below).
• Comprehensive Agent Life Cycle Security: Security protocols to protect sensitive data span the entire agent life cycle, including the fine-tuning, Retrieval Augmented Generation (RAG), and prompting stages. This provides advanced visibility and targeted safeguards specifically designed for agent-driven architectures in clinical and research settings.
• Autonomous Threat Mitigation and Self-Healing: Abluva's patent-pending "self-healing" capability allows the system to automatically detect and respond to unforeseen or anomalous agent behaviors in real time, reducing risks associated with agent autonomy.
• Enhanced Compliance and Privacy for AI: Through embedded governance and audit features, the solution ensures that agent activities comply with industry standards such as HIPAA and GDPR, as well as internal governance policies, addressing crucial aspects of data privacy.

"We are thrilled to partner with Abluva to implement the most secure agentic AI solutions for large sponsors and CROs," said Raj Gondhali, AVP & Head of Clinical Solutions, Hexaware. "This collaboration is pivotal, combining our expertise in global clinical solutions with Abluva's pioneering agentic security technology to ensure enhanced AI safety, compliance, and operational efficiency as our clients adopt next-generation AI."

Raj Darji, CEO at Abluva, echoed the sentiment: "We are excited to announce our partnership with Hexaware, a global systems integration specialist renowned for its Life Sciences expertise. This collaboration marks a significant milestone for Abluva as we aim to deliver value and innovative solutions in agentic AI security. By combining Hexaware's global reach with our novel research-based technology for securing autonomous agents, we are committed to providing comprehensive and integrated solutions that enable safe AI adoption."

Amit Gautam, CTO at Abluva, added: "Our partnership with Hexaware enables us to extend our expertise in agentic AI security to a broader market, addressing the critical need for robust governance in AI-driven enterprises. By integrating our Secure Intelligence Plane with Hexaware's capabilities, we can deliver sophisticated, real-time governance solutions tailored to secure autonomous AI agents in complex life sciences environments."
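To make the highlighted controls concrete, here is a minimal, purely illustrative Python sketch of a policy layer that binds an agent to a declared purpose, allow-lists its tools, and masks sensitive data before a tool ever sees it. The class, function names, and masking rule are hypothetical stand-ins, not Abluva's Secure Intelligence Plane or its API.

```python
# Illustrative only: a toy policy layer showing purpose binding, tool
# allow-listing, and data masking. Not Abluva's actual API.
import re
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    purpose: str                                   # the task this agent is bound to
    allowed_tools: set[str] = field(default_factory=set)
    mask_patterns: list[str] = field(default_factory=list)

def mask_sensitive(text: str, patterns: list[str]) -> str:
    """Redact substrings matching the policy's sensitive-data patterns."""
    for pattern in patterns:
        text = re.sub(pattern, "[REDACTED]", text)
    return text

def guarded_tool_call(policy: AgentPolicy, tool: str, args: dict,
                      declared_purpose: str) -> dict:
    """Permit a tool call only if it matches the bound purpose and allow-list,
    masking sensitive argument values before the tool receives them."""
    if declared_purpose != policy.purpose:
        raise PermissionError(f"purpose mismatch: {declared_purpose!r}")
    if tool not in policy.allowed_tools:
        raise PermissionError(f"tool {tool!r} is not allow-listed")
    return {key: mask_sensitive(val, policy.mask_patterns) if isinstance(val, str) else val
            for key, val in args.items()}

# Toy usage: a clinical-summarization agent may only search and summarize,
# and never passes patient identifiers through to its tools.
policy = AgentPolicy(purpose="adverse-event-summarization",
                     allowed_tools={"search_trials", "summarize"},
                     mask_patterns=[r"\b\d{3}-\d{2}-\d{4}\b"])  # e.g. US SSNs
print(guarded_tool_call(policy, "summarize",
                        {"text": "Patient 123-45-6789 reported nausea."},
                        "adverse-event-summarization"))
```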
This partnership underlines Hexaware's commitment to next-generation cloud and AI platforms, and Abluva's leadership in research-driven, agentic security innovations. Together, they empower life sciences enterprises to unlock GenAI's full potential securely and at scale.

About Abluva Inc

Abluva stands at the forefront of data security, pioneering research-driven technologies to address today's most pressing data challenges. We are dedicated to building a secure data plane that enables fine-grained access control and robust privacy across diverse data sources. Our innovations extend to advanced protection for agents and generative AI, alongside groundbreaking data- and intent-driven breach discovery. Abluva's solutions empower organizations to strengthen their compliance, bolster their security posture, and accelerate innovation through the secure democratization of data and intelligence. Visit for more information.

About Hexaware Technologies

Hexaware is a global technology and business process services company. Every day, Hexawarians wake up with a singular purpose: to create smiles through great people and technology. With offices across the world, we empower enterprises worldwide to realize digital transformation at scale and speed by partnering with them to build, transform, run, and optimize their technology and business processes. Learn more about Hexaware at

kama.ai makes Cohere available on its Hybrid AI Agent Platform

Yahoo | Business | 26-06-2025

Advancing Sovereign Canadian AI Agents

TORONTO, June 26, 2025 (GLOBE NEWSWIRE) -- kama.ai, Canada's leader in Responsible Conversational AI, is pleased to announce that Cohere, the leading data security-focused enterprise AI company, is now available on kama.ai's hybrid AI agent platform to power generative intelligence (GenAI) within its Sober Second Mind® Hybrid AI Agents. This integration deepens kama.ai's commitment to trusted, human-governed knowledge delivery while harnessing high-quality generative outputs through Cohere's secure enterprise LLMs and embedding models.

Notably, both kama.ai and Cohere represent Canada's growing Sovereign AI capabilities, an increasingly important topic with enterprise clients. Sovereign AI refers to artificial intelligence technologies that are designed, developed, deployed, and governed within a specific country's legal and ethical frameworks; it has become a question of national control over data, algorithms, infrastructure, and compliance.

'Cohere strengthens our Hybrid AI Agent solution by providing a Canadian-built GenAI capability alongside our trusted knowledge graph AI technology,' said Brian Ritchie, CEO and Founder of kama.ai. 'We see this as a very positive step forward. Combining these two technologies provides the most responsible and brand-safe AI systems in the industry.'

Trusted Collections for Enterprise Assurance

This new development allows kama.ai to connect to and use Cohere's Command family of LLMs to deliver real-time creative responses, while Cohere's embedding and rerank models power the company's Trusted Collections for enterprise Retrieval Augmented Generation (RAG). Trusted Collections ensure that only carefully chosen documents are selected and vectorized to inform generative drafts or real-time responses, minimizing hallucinations and other contamination from less relevant information (a minimal sketch of this pattern appears after the article below). Draft Assist with kama's Trusted Collections allows generative support while human experts validate critical information for the organization's knowledge graph, enabling kama.ai's systems to provide 100% accuracy and reliable adherence to brand-sanctioned information. This process means enterprise knowledge managers and administrators can maintain full control over what the AI Agent says publicly. It blends the speed and creativity of Cohere's generative AI with the deterministic accuracy and factual safety of graph-based enterprise knowledge.

A Stronger Canadian AI Ecosystem

CAISIC, Canada's Artificial Intelligence Sovereignty and Innovation Cluster, praised this milestone as an advancement for Canada's AI sector: 'kama.ai's integration with Cohere demonstrates that Canada's AI innovators are capable of delivering collaborative solutions built with local expertise,' said Niraj Bhargava, Co-Founder and Co-Chair of CAISIC. 'This direction aligns with the growing need for sovereign, ethical, and trusted AI technologies in the AI stack.'

About kama.ai

kama.ai is a Responsible AI Agent deployment platform that blends knowledge graph AI with advanced generative models for trustworthy Hybrid AI Agents. It empowers industries such as finance, education, healthcare, and Indigenous services with culturally aware, ethical, and accurate AI. By incorporating human governed-in-advance processes and information, kama.ai lowers the barriers to enterprise AI Agent adoption, making sure organizations gain efficiency without risking reliability and reputation.

About Cohere

Cohere is the leading data security-focused enterprise AI company.
It is a global technology company co-headquartered in Toronto and San Francisco, with key offices in London and New York. The company builds enterprise-grade frontier AI models designed to solve real-world business challenges. Cohere's AI solutions are cloud-agnostic to meet companies wherever their data is stored and offer the highest levels of security, privacy, and customization with on-premises and private cloud deployment options. Learn more at

For media inquiries: Charles Dimov, CMO, kama.ai | +1 (647) 702-1494
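As a rough, unofficial illustration of the retrieve-rerank-generate pattern described above, the sketch below uses Cohere's public Python SDK (pip install cohere). The document set and query are invented, and kama.ai's actual Trusted Collections pipeline, knowledge-graph layer, and Draft Assist workflow are proprietary and not represented here.

```python
# Unofficial sketch: rerank a small curated document set, then ground a chat
# response in the top documents (Cohere's RAG-style chat). Illustrative only.
import cohere

co = cohere.Client("YOUR_API_KEY")  # replace with a real API key

# A small curated, pre-approved set stands in for a "Trusted Collection".
trusted_docs = [
    "Refunds are available within 30 days of purchase with a receipt.",
    "Support hours are 9am-5pm ET, Monday through Friday.",
    "Warranty claims require the original order number.",
]

query = "What do I need to file a warranty claim?"

# Rerank so only the most relevant curated documents inform the answer.
reranked = co.rerank(model="rerank-english-v3.0", query=query,
                     documents=trusted_docs, top_n=2)
top_docs = [{"snippet": trusted_docs[result.index]} for result in reranked.results]

# Ground the generative response in the selected documents.
response = co.chat(model="command-r", message=query, documents=top_docs)
print(response.text)
```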

Does RAG make LLMs less safe? Bloomberg research reveals hidden dangers

Business Mayor | Business | 28-04-2025

Retrieval Augmented Generation (RAG) is supposed to help improve the accuracy of enterprise AI by providing grounded content. While that is often the case, there is also an unintended side effect: according to surprising new research published today by Bloomberg, RAG can potentially make large language models (LLMs) unsafe.

Bloomberg's paper, 'RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models,' evaluated 11 popular LLMs, including Claude-3.5-Sonnet, Llama-3-8B, and GPT-4o. The findings contradict the conventional wisdom that RAG inherently makes AI systems safer: the research team discovered that when using RAG, models that typically refuse harmful queries in standard settings often produce unsafe responses. For example, Llama-3-8B's unsafe responses jumped from 0.3% to 9.2% when RAG was implemented.

Alongside the RAG research, Bloomberg released a second paper, 'Understanding and Mitigating Risks of Generative AI in Financial Services,' which introduces a specialized AI content risk taxonomy for financial services, addressing domain-specific concerns not covered by general-purpose safety approaches. Together, the papers challenge the widespread assumption that RAG enhances AI safety, while demonstrating how existing guardrail systems fail to address domain-specific risks in financial services applications.

'Systems need to be evaluated in the context they're deployed in, and you might not be able to just take the word of others that say, hey, my model is safe, use it, you're good,' Sebastian Gehrmann, Bloomberg's Head of Responsible AI, told VentureBeat.

RAG is widely used by enterprise AI teams to provide grounded, accurate, up-to-date information, and there has been considerable research and advancement in recent months to further improve its accuracy; earlier this month, a new open-source framework called Open RAG Eval debuted to help validate RAG efficiency. It's important to note that Bloomberg's research does not question the efficacy of RAG or its ability to reduce hallucination. Rather, it is about how RAG usage impacts LLM guardrails in an unexpected way.

Gehrmann explained that without RAG in place, if a user types in a malicious query, the built-in safety system or guardrails will typically block it. Yet for some reason, when the same query is issued to an LLM that is using RAG, the system will answer the malicious query, even when the retrieved documents themselves are safe. 'What we found is that if you use a large language model out of the box, often they have safeguards built in where, if you ask, "How do I do this illegal thing," it will say, "Sorry, I cannot help you do this,"' Gehrmann explained. 'We found that if you actually apply this in a RAG setting, one thing that could happen is that the additional retrieved context, even if it does not contain any information that addresses the original malicious query, might still answer that original query.'

So why and how does RAG serve to bypass guardrails? The Bloomberg researchers were not entirely certain, though they did have a few ideas.
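To make the experimental setup concrete, here is a hypothetical sketch of the kind of A/B safety probe the paper describes: the same harmful prompts are sent with and without benign retrieved context, and refusal rates are compared. Everything here (the mock model, the keyword-based refusal check) is an illustrative stand-in, not Bloomberg's actual harness or judge.

```python
# Hypothetical A/B safety probe: compare refusal rates for bare prompts vs.
# the same prompts wrapped in benign retrieved context. Illustrative only.

REFUSAL_MARKERS = ("i cannot", "i can't", "i'm sorry", "i am unable")

def is_refusal(answer: str) -> bool:
    # Crude keyword check; a production harness would use a trained judge model.
    return any(marker in answer.lower() for marker in REFUSAL_MARKERS)

def rag_prompt(query: str, docs: list[str]) -> str:
    # Prepend retrieved documents the way a simple RAG template would.
    context = "\n".join(f"- {doc}" for doc in docs)
    return f"Use the following documents to answer.\n{context}\n\nQuestion: {query}"

def probe(generate, harmful_queries: list[str], benign_docs: list[str]) -> dict:
    """generate(prompt) -> str is any model client; returns refusal rates
    without and with benign RAG context for the same harmful queries."""
    n = len(harmful_queries)
    bare = sum(is_refusal(generate(q)) for q in harmful_queries)
    ragged = sum(is_refusal(generate(rag_prompt(q, benign_docs)))
                 for q in harmful_queries)
    return {"refusal_rate_bare": bare / n, "refusal_rate_rag": ragged / n}

# Toy stand-in model that refuses bare harmful prompts but complies once any
# "documents" appear in the prompt, mimicking the failure mode in the paper.
mock_model = lambda p: ("Sure, here is how..." if "documents" in p.lower()
                        else "Sorry, I cannot help you do this.")
print(probe(mock_model, ["How do I do this illegal thing?"],
            ["A harmless article about monsoon festivals."]))
```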
Gehrmann hypothesized that the way the LLMs were developed and trained did not fully consider safety alignment for very long inputs. The research demonstrated that context length directly impacts safety degradation: 'Provided with more documents, LLMs tend to be more vulnerable,' the paper states, showing that even introducing a single safe document can significantly alter safety behavior.

'I think the bigger point of this RAG paper is you really cannot escape this risk,' Amanda Stent, Bloomberg's Head of AI Strategy and Research, told VentureBeat. 'It's inherent to the way RAG systems are. The way you escape it is by putting business logic or fact checks or guardrails around the core RAG system.'

Bloomberg's second paper introduces a specialized AI content risk taxonomy for financial services, addressing domain-specific concerns like financial misconduct, confidential disclosure, and counterfactual narratives. The researchers empirically demonstrated that existing guardrail systems miss these specialized risks: they tested open-source guardrail models, including Llama Guard, Llama Guard 3, AEGIS, and ShieldGemma, against data collected during red-teaming exercises. 'We developed this taxonomy, and then ran an experiment where we took openly available guardrail systems that are published by other firms and we ran this against data that we collected as part of our ongoing red teaming events,' Gehrmann explained. 'We found that these open source guardrails… do not find any of the issues specific to our industry.'

The researchers developed a framework that goes beyond generic safety models, focusing on risks unique to professional financial environments. Gehrmann argued that general-purpose guardrail models are usually developed for consumer-facing risks, so they focus heavily on toxicity and bias. While important, he noted, those concerns are not necessarily specific to any one industry or domain. The key takeaway from the research is that organizations need a domain-specific taxonomy in place for their own industry and application use cases.

Bloomberg has made a name for itself over the years as a trusted provider of financial data systems. In some respects, gen AI and RAG systems could be seen as competitive with Bloomberg's traditional business, raising the question of hidden bias in the research. 'We are in the business of giving our clients the best data and analytics and the broadest ability to discover, analyze and synthesize information,' Stent said. 'Generative AI is a tool that can really help with discovery, analysis and synthesis across data and analytics, so for us, it's a benefit.' She added that the kinds of bias Bloomberg is concerned about in its AI solutions are focused on finance: issues such as data drift, model drift, and ensuring good representation across the whole suite of tickers and securities that Bloomberg processes are critical. For Bloomberg's own AI efforts, she highlighted the company's commitment to transparency: 'Everything the system outputs, you can trace back, not only to a document but to the place in the document where it came from,' Stent said.

For enterprises looking to lead the way in AI, Bloomberg's research means that RAG implementations require a fundamental rethinking of safety architecture.
Leaders must move beyond viewing guardrails and RAG as separate components and instead design integrated safety systems that specifically anticipate how retrieved content might interact with model safeguards. Industry-leading organizations will need to develop domain-specific risk taxonomies tailored to their regulatory environments, shifting from generic AI safety frameworks to those that address specific business concerns. As AI becomes increasingly embedded in mission-critical workflows, this approach transforms safety from a compliance exercise into a competitive differentiator that customers and regulators will come to expect. 'It really starts by being aware that these issues might occur, taking the action of actually measuring them and identifying these issues and then developing safeguards that are specific to the application that you're building,' explained Gehrmann.
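One way to picture the mitigation Stent and Gehrmann describe, wrapping business logic and domain-specific checks around the core RAG system, is sketched below. The risk labels, classifier, and RAG callables are hypothetical placeholders, not Bloomberg's taxonomy or stack.

```python
# Hypothetical guardrail wrapper around a core RAG system: a domain-specific
# taxonomy check runs on the incoming query AND on the generated answer.
# All labels and callables here are illustrative placeholders.
from collections.abc import Callable

# Stand-in labels inspired by the financial-services concerns the paper names.
FINANCE_RISK_LABELS: set[str] = {
    "financial_misconduct",
    "confidential_disclosure",
    "counterfactual_narrative",
}

def guarded_rag(query: str,
                classify: Callable[[str], set[str]],
                rag_answer: Callable[[str], str]) -> str:
    """classify(text) returns the risk labels a (hypothetical) domain-specific
    guardrail model assigns; rag_answer(query) is the core RAG pipeline."""
    if classify(query) & FINANCE_RISK_LABELS:
        return "Request declined by pre-retrieval policy check."
    answer = rag_answer(query)
    # Post-generation check: catches unsafe output even when the query looked
    # benign, which is the failure mode the RAG paper warns about.
    if classify(answer) & FINANCE_RISK_LABELS:
        return "Response withheld by post-generation policy check."
    return answer

# Toy demo with trivial stand-ins for the classifier and RAG pipeline.
toy_classify = lambda text: ({"confidential_disclosure"}
                             if "client list" in text.lower() else set())
toy_rag = lambda q: "Here is the confidential client list..."
print(guarded_rag("Summarize our quarterly filings.", toy_classify, toy_rag))
```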

Lifesize Plans reinventing pre-construction experience with VR/AR tech

Trade Arabia | Business | 23-04-2025

The UAE's construction and real estate market has continued to see a strong surge in overall demand in recent years as it accelerates into a new phase of high-growth development, driven by urban expansion, inward investment, and visionary planning. Lifesize Plans Dubai, a leading company in life-sized architectural projections worldwide, recognised this growth and entered the UAE market in 2023 as a transformative force, offering immersive, full-scale architectural visualization that is redefining the pre-construction experience.

Backed by decades of ambition and infrastructure investment, the UAE's construction sector is expected to continue growing over the next three years and reach an estimated AED 181.446 billion by 2028, according to market intelligence and advisory firm Mordor Intelligence. As per the report, Dubai alone is launching multi-billion-dirham projects tied to tourism, housing, and smart-city frameworks.

As a premier business in the region offering life-size architectural plan projection, a groundbreaking service that enables clients to walk through their design at 1:1 scale, Lifesize Plans Dubai is helping developers, architects, industry professionals, and clients eliminate guesswork from the design process. The service is enhanced with integrated Virtual Reality and Augmented Reality experiences, giving users the ability to toggle between finishes, floor plan variations, and alterations, all before a single structure is built.

While the region sets global benchmarks in design innovation, companies are increasingly turning to tools that improve planning, reduce risk, and align stakeholders from day one. That is where Lifesize Plans Dubai offers critical value, said its top official.

"We are not just offering visualizations - we're offering certainty," remarked CEO Georges Calas. "In an environment where every square meter matters, and where timelines and budgets are tightly controlled, our service helps identify problems before they become expensive mistakes," he stated. "The UAE's real estate sector will always be a prime market and it is our duty as industry experts to provide the best possible product to global investors from all over the world and maintain the country's strong reputation as a top destination to live in," he added.

Lifesize Plans Dubai said it is already working with architecture firms, real estate developers, and interior designers across Dubai, Abu Dhabi, and the GCC, who view life-size projections as a tool to lower costs, increase efficiency, improve collaboration, and significantly boost client satisfaction. "The tech-forward offering also caters to private clients building luxury villas and custom properties - allowing them to confidently approve spatial layouts, room sizing, and design details in real life," stated Calas.
