Latest news with #InfosysKnowledgeInstitute


Time of India
7 hours ago
- Business
- Time of India
Agentic AI rises: 86% see higher risks, only 2% meet responsible AI gold standards
Infosys Knowledge Institute (IKI), the research arm of Infosys (NSE, BSE, NYSE: INFY), a global leader in next-generation digital services and consulting, today unveiled critical insights into the state of responsible AI (RAI) implementation across enterprises, particularly with the advent of agentic AI. The report, Responsible Enterprise AI in the Agentic Era, surveyed over 1,500 business executives and interviewed 40 senior decision-makers across Australia, France, Germany, the UK, the US, and New Zealand. The findings show that while 78% of companies see RAI as a business growth driver, only 2% have adequate RAI controls in place to safeguard against reputational risk and financial loss. The report analyzed the effects of risks from poorly implemented AI, such as privacy violations, ethical violations, bias or discrimination, regulatory non-compliance, and inaccurate or harmful predictions, among others. It found that 77% of organizations reported financial loss, and 53% have suffered reputational impact from such AI-related incidents. Key findings include: AI risks are widespread and can be severe. 95% of C-suite and director-level executives report AI-related incidents in the past two years. 39% characterize the damage experienced from such AI issues as 'severe' or 'extremely severe'. 86% of executives aware of agentic AI believe it will introduce new risks and compliance issues.
Responsible AI (RAI) capability is patchy and inefficient for most enterprises. Only 2% of companies (termed 'RAI leaders') met the full standards set in the Infosys RAI capability benchmark, termed 'RAISE BAR', with 15% (RAI followers) meeting three-quarters of the standards. The 'RAI leader' cohort experienced 39% lower financial losses and 18% lower severity from AI incidents. Leaders do several things better to achieve these results, including developing improved AI explainability, proactively evaluating and mitigating bias, rigorously testing and validating AI initiatives, and having a clear incident response plan. Executives view RAI as a growth driver: 78% of senior leaders see RAI as aiding their revenue growth, and 83% say that future AI regulations would boost, rather than inhibit, the number of future AI initiatives. However, on average, companies believe they are underinvesting in RAI by 30%. With the scale of enterprise AI adoption far outpacing readiness, companies must urgently shift from treating RAI as a reactive compliance obligation to embracing it proactively as a strategic advantage. To help organizations build scalable, trusted AI systems that fuel growth while mitigating risk, Infosys recommends the following actions: Learn from the leaders: Study the practices of high-maturity RAI organizations who have already faced diverse incident types and developed robust governance. Blend product agility with platform governance: Combine decentralized product innovation with centralized RAI guardrails and oversight. Embed RAI guardrails into secure AI platforms: Use platform-based environments that enable AI agents to operate within preapproved data and systems. Establish a proactive RAI office: Create a centralized function to monitor risk, set policy, and scale governance with tools like Infosys' AI3S (Scan, Shield, Steer).
Balakrishna D.R., EVP – Global Services Head, AI and Industry Verticals, Infosys said, 'Drawing from our extensive experience working with clients on their AI journeys, we have seen firsthand how delivering more value from enterprise AI use cases, would require enterprises to first establish a responsible foundation built on trust, risk mitigation, data governance, and sustainability. This also means emphasizing ethical, unbiased, safe, and transparent model development. To realize the promise of this technology in the agentic AI future, leaders should strategically focus on platform and product-centric enablement, and proactive vigilance of their data estate. Companies should not discount the important role a centralized RAI office plays as enterprise AI scales, and new regulations come into force.' Jeff Kavanaugh, Head of Infosys Knowledge Institute, Infosys, said, 'Today, enterprises are navigating a complex landscape where AI's promise of growth is accompanied by significant operational and ethical risks. Our research clearly shows that while many are recognizing the importance of Responsible AI, there's a substantial gap in practical implementation. Companies that prioritize robust, embedded RAI safeguards will not only mitigate risks and potentially reduce financial losses but also unlock new revenue streams and thrive as we transition into the transformative agentic AI era.'


Forbes
a day ago
- Business
- Forbes
Lack of Responsible AI Safeguards Coming Back To Bite, Survey Suggests
Everyone is talking about 'responsible AI,' but few are doing anything about it. A new survey shows that most executives see the logic and benefits of pursuing a responsible AI approach, but little has been done to make this happen. As a result, many have already experienced issues such as privacy violations, systemic failures, inaccurate predictions, and ethical violations. While 78% of companies see responsible AI as a business growth driver, a meager 2% have adequate controls in place to safeguard against reputational risk and financial loss stemming from AI, according to the survey of 1,500 AI decision-makers published by Infosys Knowledge Institute, the research arm of Infosys. The survey was conducted in March and April of this year. So what exactly constitutes responsible AI? The survey report's authors outlined several elements essential to responsible AI, starting with explainability, 'a big part of gaining trust in AI systems.' Technically, explainability involves techniques that 'explain a single prediction by showing features that mattered most for a specific result,' as well as counterfactual analysis that 'identifies the smallest input changes needed to change a model outcome.' Another technique, chain-of-thought reasoning, 'breaks down tasks into intermediate reasoning stages, making the process transparent.' Other processes essential to attaining responsible AI include continuous monitoring, anomaly detection, rigorous testing and validation, robust access controls, adherence to ethical guidelines, human oversight, and data quality and integrity measures. Most do not yet use these techniques, the survey's authors found. Only 4% have implemented at least five of the above measures. Eighty-three percent deliver responsible AI in a piecemeal manner. On average, executives believe they are underinvesting in responsible AI by at least 30%. There's an urgency to adopting more responsible AI measures.
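The counterfactual analysis the report's authors describe, finding the smallest input change that flips a model's outcome, can be sketched in a few lines. This is a minimal illustration using a hypothetical toy loan-approval model; the model, feature names, and thresholds are invented for the example and do not come from the Infosys report:

```python
# Minimal counterfactual-analysis sketch. The "model" here is a toy
# loan-approval rule, purely hypothetical, used only to show the idea of
# searching for the smallest input change that flips an outcome.

def model(income: float, debt: float) -> bool:
    """Toy model: approve when income comfortably exceeds weighted debt."""
    return income - 1.5 * debt > 20_000

def counterfactual_income(income: float, debt: float, step: float = 500.0) -> float:
    """Find the smallest income increase (in `step` increments) that
    flips a rejection into an approval."""
    delta = 0.0
    while not model(income + delta, debt):
        delta += step
    return delta

# An applicant rejected at 40,000 income with 15,000 debt learns
# exactly how much more income would change the outcome.
needed = counterfactual_income(40_000, 15_000)
print(f"Approval would require about {needed:,.0f} more income")
```

Real systems would search over many features at once (libraries such as Alibi or DiCE implement this), but the principle, reporting the minimal change needed to alter a decision, is the same one the report credits with making AI outcomes explainable.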
Just about all the survey's respondents, 95%, report having had AI-related incidents in the past two years. At least 77% reported financial loss as a result of AI-related incidents, and 53% suffered reputational impact from such incidents. Three quarters cited damage that was considered at least 'substantial,' with 39% claiming the damage was 'severe' or 'extremely severe.' AI errors 'can inflict damage faster and more widely than a simple database error or a rogue employee,' the authors pointed out. Those leading the way with responsible AI have seen 39% lower financial losses and 18% lower average severity from their AI incidents. The leading AI incidents experienced over the past two years include privacy violations, systemic failures, inaccurate predictions, and ethical violations. The executives with more advanced responsible AI initiatives take measures such as developing improved AI explainability, proactively evaluating and mitigating bias, rigorously testing and validating AI initiatives, and having a clear incident response plan, the survey report's authors stated.


Korea Herald
a day ago
- Business
- Korea Herald
As Agentic AI Gains Traction, 86% of Enterprises Anticipate Heightened Risks, Yet Only 2% of Companies Meet Responsible AI Gold Standards
With 95% of enterprises facing incidents, Infosys research reveals wide gap between AI adoption and responsible AI readiness, exposing most enterprises to reputational risks and financial loss BENGALURU, India, Aug. 14, 2025 /PRNewswire/ -- Infosys Knowledge Institute (IKI), the research arm of Infosys (NSE: INFY), (BSE: INFY), (NYSE: INFY), a global leader in next-generation digital services and consulting, today unveiled critical insights into the state of responsible AI (RAI) implementation across enterprises, particularly with the advent of agentic AI. The report, Responsible Enterprise AI in the Agentic Era, surveyed over 1,500 business executives and interviewed 40 senior decision-makers across Australia, France, Germany, the UK, the US, and New Zealand. The findings show that while 78% of companies see RAI as a business growth driver, only 2% have adequate RAI controls in place to safeguard against reputational risk and financial loss. The report analyzed the effects of risks from poorly implemented AI, such as privacy violations, ethical violations, bias or discrimination, regulatory non-compliance, and inaccurate or harmful predictions, among others. It found that 77% of organizations reported financial loss, and 53% have suffered reputational impact from such AI-related incidents. Key findings include that AI risks are widespread and can be severe, and that executives view RAI as a growth driver. With the scale of enterprise AI adoption far outpacing readiness, companies must urgently shift from treating RAI as a reactive compliance obligation to embracing it proactively as a strategic advantage.
To help organizations build scalable, trusted AI systems that fuel growth while mitigating risk, Infosys recommends four actions: learn from the leaders, blend product agility with platform governance, embed RAI guardrails into secure AI platforms, and establish a proactive RAI office. Balakrishna D.R., EVP – Global Services Head, AI and Industry Verticals, Infosys, said, "Drawing from our extensive experience working with clients on their AI journeys, we have seen firsthand how delivering more value from enterprise AI use cases, would require enterprises to first establish a responsible foundation built on trust, risk mitigation, data governance, and sustainability. This also means emphasizing ethical, unbiased, safe, and transparent model development. To realize the promise of this technology in the agentic AI future, leaders should strategically focus on platform and product-centric enablement, and proactive vigilance of their data estate. Companies should not discount the important role a centralized RAI office plays as enterprise AI scales, and new regulations come into force." Jeff Kavanaugh, Head of Infosys Knowledge Institute, Infosys, said, "Today, enterprises are navigating a complex landscape where AI's promise of growth is accompanied by significant operational and ethical risks. Our research clearly shows that while many are recognizing the importance of Responsible AI, there's a substantial gap in practical implementation. Companies that prioritize robust, embedded RAI safeguards will not only mitigate risks and potentially reduce financial losses but also unlock new revenue streams and thrive as we transition into the transformative agentic AI era." To read the full report, please visit here. Methodology Infosys used an anonymous format to conduct an online survey of 1,502 business executives across industries in Australia, New Zealand, France, Germany, the United Kingdom, and the United States, as well as qualitative interviews with 40 senior executives.
About Infosys Infosys is a global leader in next-generation digital services and consulting. Over 320,000 of our people work to amplify human potential and create the next opportunity for people, businesses, and communities. We enable clients in 59 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer clients, as they navigate their digital transformation powered by cloud and AI. We enable them with an AI-first core, empower the business with agile digital at scale and drive continuous improvement with always-on learning through the transfer of digital skills, expertise, and ideas from our innovation ecosystem. We are deeply committed to being a well-governed, environmentally sustainable organization where diverse talent thrives in an inclusive workplace. Visit to see how Infosys (NSE, BSE, NYSE: INFY) can help your enterprise navigate your next. Safe Harbor Certain statements in this release concerning our future growth prospects, or our future financial or operating performance, are forward-looking statements intended to qualify for the 'safe harbor' under the Private Securities Litigation Reform Act of 1995, which involve a number of risks and uncertainties that could cause actual results or outcomes to differ materially from those in such forward-looking statements. 
The risks and uncertainties relating to these statements include, but are not limited to, risks and uncertainties regarding the execution of our business strategy, increased competition for talent, our ability to attract and retain personnel, increase in wages, investments to reskill our employees, our ability to effectively implement a hybrid work model, economic uncertainties and geo-political situations, technological disruptions and innovations such as artificial intelligence ("AI"), generative AI, the complex and evolving regulatory landscape including immigration regulation changes, our ESG vision, our capital allocation policy and expectations concerning our market position, future operations, margins, profitability, liquidity, capital resources, our corporate actions including acquisitions, and cybersecurity matters. Important factors that may cause actual results or outcomes to differ from those implied by the forward-looking statements are discussed in more detail in our US Securities and Exchange Commission filings including our Annual Report on Form 20-F for the fiscal year ended March 31, 2025. These filings are available at Infosys may, from time to time, make additional written and oral forward-looking statements, including statements contained in the Company's filings with the Securities and Exchange Commission and our reports to shareholders. The Company does not undertake to update any forward-looking statements that may be made from time to time by or on behalf of the Company unless it is required by law.


Economic Times
a day ago
- Business
- Economic Times
AI mishaps hit 95% executives, only 2% firms meet responsible use standards: Infosys study
Almost every executive using artificial intelligence (AI) professionally has faced at least one problematic incident, but only a handful of companies are using the new-age technology responsibly, Infosys said in a report on Thursday. The 'Responsible enterprise AI in the agentic era' report by Infosys Knowledge Institute, the research arm of Infosys, found that 95% of executives who use enterprise AI experienced at least one incident. The report took inputs from 1,500 executives in the US, the UK, Germany, France, and Australia and New Zealand (ANZ). Privacy violations, systemic failures, inaccurate or harmful predictions, and ethical violations were the most common incidents that executives sampled for the survey reported. Nearly three-quarters of the companies covered in the Infosys report considered the damage 'substantial,' while 39% said the impact was 'severe' or 'extremely severe.' Financial losses most common in AI incidents More than three-fourths (77%) of the time, these incidents resulted in a direct financial loss, while the remaining instances involved reputational or legal damages, according to the Infosys report. However, executives considered loss of face much more threatening to their business than financial losses; although it is the most common consequence, the size of financial losses due to AI errors is relatively small, the report said. 'The average company in our sample reported financial losses from enterprise AI incidents of about $800,000 over two years. In total, this equates to between $750 million and $1.5 billion across the sample, and when extrapolated, represents an annual cost of between $1.4 billion and $2.9 billion globally across all businesses ($2.1 billion on average),' the report said. Responsible AI on back burner Despite the widespread troubles, the Infosys survey found only 2% of the surveyed companies meeting the IT services company's standards of responsible AI use.
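The sample-wide figure quoted above follows directly from the survey arithmetic. As a rough sanity check, multiplying the reported per-company average by the survey's 1,502 respondents (both figures taken from the report as quoted; nothing else is assumed):

```python
# Sanity check of the loss figures quoted in the article: the respondent
# count (1,502) and per-company average ($800,000 over two years) are as
# reported; the total follows by simple multiplication.

respondents = 1_502
avg_loss_two_years = 800_000  # USD per company, over two years

sample_total = respondents * avg_loss_two_years
print(f"Sample total over two years: ${sample_total / 1e9:.2f}B")
# ~ $1.2B, which sits inside the report's stated $750M-$1.5B range
```

The wider $1.4-2.9 billion annual global estimate comes from the report's own extrapolation beyond the sample and cannot be reproduced from the quoted numbers alone.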
Nearly 15% of the sample size meet three-fourths of the standards, while 83% implement them in a piecemeal manner, the report found. Executives cited the lack of resources and the ever-changing regulations for their weak RAI processes. On average, leaders sought an additional 30% of responsible AI (RAI) spending, which already accounts for 25% of overall AI costs. Meanwhile, financial losses from enterprise AI incidents amount to only 8%, making the demand for higher spending a risky bet. On the human resources front, a larger RAI team enables the deployment of more enterprise AI initiatives. However, the success rate of deployments as a proportion of total initiatives falls from 24% to 21% as team size grows.

Cision Canada
a day ago
- Business
- Cision Canada
As Agentic AI Gains Traction, 86% of Enterprises Anticipate Heightened Risks, Yet Only 2% of Companies Meet Responsible AI Gold Standards
With 95% of enterprises facing incidents, Infosys research reveals wide gap between AI adoption and responsible AI readiness, exposing most enterprises to reputational risks and financial loss BENGALURU, India, Aug. 14, 2025 /CNW/ -- Infosys Knowledge Institute (IKI), the research arm of Infosys (NSE: INFY), (BSE: INFY), (NYSE: INFY), a global leader in next-generation digital services and consulting, today unveiled critical insights into the state of responsible AI (RAI) implementation across enterprises, particularly with the advent of agentic AI. The report, Responsible Enterprise AI in the Agentic Era, surveyed over 1,500 business executives and interviewed 40 senior decision-makers across Australia, France, Germany, UK, US, and New Zealand. The findings show that while 78% of companies see RAI as a business growth driver, only 2% have adequate RAI controls in place to safeguard against reputational risk and financial loss. The report analyzed the effects of risks from poorly implemented AI, such as privacy violations, ethical violations, bias or discrimination, regulatory non-compliance, inaccurate or harmful predictions, among others. It found that 77% of organizations reported financial loss, and 53% of organizations have suffered reputational impact from such AI-related incidents. Key findings include: AI risks are widespread and can be severe 95% of C-suite and director-level executives report AI-related incidents in the past two years. 39% characterize the damage experienced from such AI issues as "severe" or "extremely severe." 86% of executives aware of agentic AI believe it will introduce new risks and compliance issues. Responsible AI (RAI) capability is patchy and inefficient for most enterprises Only 2% of companies (termed "RAI leaders") met the full standards set in the Infosys RAI capability benchmark — termed "RAISE BAR" with 15% (RAI followers) meeting three-quarters of the standards. 
The "RAI leader" cohort experienced 39% lower financial losses and 18% lower severity from AI incidents. Leaders do several things better to achieve these results, including developing improved AI explainability, proactively evaluating and mitigating against bias, rigorously testing and validating AI initiatives and having a clear incident response plan. Executives view RAI as a growth driver 78% of senior leaders see RAI as aiding their revenue growth and 83% say that future AI regulations would boost, rather than inhibit, the number of future AI initiatives. However, on average, companies believe they are underinvesting in RAI by 30%. With the scale of enterprise AI adoption far outpacing readiness, companies must urgently shift from treating RAI as a reactive compliance obligation to embracing it proactively as a strategic advantage. To help organizations build scalable, trusted AI systems that fuel growth while mitigating risk, Infosys recommends the following actions: Learn from the leaders: Study the practices of high-maturity RAI organizations who have already faced diverse incident types and developed robust governance. Blend product agility with platform governance: Combine decentralized product innovation with centralized RAI guardrails and oversight. Embed RAI guardrails into secure AI platforms: Use platform-based environments that enable AI agents to operate within preapproved data and systems. Establish a proactive RAI office: Create a centralized function to monitor risk, set policy, and scale governance with tools like Infosys' AI3S (Scan, Shield, Steer). Balakrishna D.R., EVP – Global Services Head, AI and Industry Verticals, Infosys, said, "Drawing from our extensive experience working with clients on their AI journeys, we have seen firsthand how delivering more value from enterprise AI use cases, would require enterprises to first establish a responsible foundation built on trust, risk mitigation, data governance, and sustainability. 
This also means emphasizing ethical, unbiased, safe, and transparent model development. To realize the promise of this technology in the agentic AI future, leaders should strategically focus on platform and product-centric enablement, and proactive vigilance of their data estate. Companies should not discount the important role a centralized RAI office plays as enterprise AI scales, and new regulations come into force." Jeff Kavanaugh, Head of Infosys Knowledge Institute, Infosys, said, "Today, enterprises are navigating a complex landscape where AI's promise of growth is accompanied by significant operational and ethical risks. Our research clearly shows that while many are recognizing the importance of Responsible AI, there's a substantial gap in practical implementation. Companies that prioritize robust, embedded RAI safeguards will not only mitigate risks and potentially reduce financial losses but also unlock new revenue streams and thrive as we transition into the transformative agentic AI era." To read the full report, please visit here. Methodology Infosys used an anonymous format to conduct an online survey of 1,502 business executives across industries in Australia, New Zealand, France, Germany, the United Kingdom, and the United States, as well as qualitative interviews with 40 senior executives.