Enterprises Say Stronger Governance is Key to AI Business Value: ISG Research
ISG AI Impact Summit to feature leaders from Bread Financial, Brink's, Polaris, U.S. Bank, Moderna, Takeda, Carnival, Wayfair, Pfizer and KeyBank discussing AI adoption, governance and value
STAMFORD, Conn., May 13, 2025--(BUSINESS WIRE)--More than half of enterprises say AI governance is a major concern that, if not addressed, could limit the deployment and potential value of large language models (LLMs) through 2027, according to research from global AI-centered technology research and advisory firm Information Services Group (ISG) (Nasdaq: III).
Strategies for enterprise AI adoption, governance and competitive advantage will be explored at the ISG AI Impact Summit, June 3–4 at Hotel Commonwealth in Boston. Leaders from Bread Financial, Brink's, Polaris, U.S. Bank, Moderna, Takeda, Carnival, Wayfair, Pfizer and KeyBank will join the event to discuss their organizations' approaches to leveraging AI for business value.
"ISG research finds better coordination and governance are the top enterprise priorities for maximizing the value of AI initiatives," said David Menninger, executive director of ISG Software Research and host of the ISG AI Impact Summit. "Understanding the technology and acting now to address foundational requirements should be central to every enterprise strategy, to keep governance concerns from limiting the deployment and realized value of AI and machine learning models over the next two years."
Gaurav Kapil, CISO of Bread Financial, will address AI security and governance in the event's first keynote presentation, "Leveraging AI in Cyber: How CISOs Are Transforming AI Security into Business Advantage," drawing on his board-level experience to share practical approaches for positioning AI security as an enterprise-wide driver of competitive advantage.
Additional sessions on the first day of the two-day event include "Modernizing Data Architecture for AI Success: Lessons Learned," with Mruna Asoori, global VP of Data, AI and Architecture for Brink's Inc., and "From Pilot to Production: Scaling AI Across Business Units," with Vishal Brown, global digital offering strategy & GTM leader, NTT DATA; Anna Carr, product owner, AI Innovation, Polaris Inc.; Dhivya Nagasubramanian, lead AI solutions architect vice president, U.S. Bank; and Susana Selles Valls, associate director, AI & Automation, Moderna.
"Data is the foundation of all AI, GenAI and agentic systems, especially those used in business applications that provide real productivity gains," Menninger said. "AI, like analytics, must lead to action. As AI continues to evolve, it must begin translating data into effective strategies to drive tangible results and streamline processes in fields ranging from healthcare to supply chain management."
The keynote presentation on the second day of the event, "How Agentic AI Transformed Healthcare Decision-Making at Takeda," will be delivered by Takeda SVP and Chief Technology Officer Leo Barella, who will explore how a "fail-fast, learn-fast" mindset catalyzes innovative, AI-driven healthcare solutions at the biopharmaceutical company.
A panel discussion, "The Human Side of AI: Skills, Culture, and Organization Design," with Curt Persaud, AVP, IT Guest Technology at Carnival Cruise Line; Vivek Mukhatyar, GenAI medical engagement lead at Pfizer, and Paul Joseph, next gen transformation leader, KeyBank, will look at the critical skill gaps, continuous learning and inclusive upskilling initiatives necessary in an AI-powered economy.
Matt Skattum, CIO and business partner, Polymers, Fluor & Energy Materials, at Orbia will deliver a keynote, "Bridging Business Priorities and IT Capabilities to Drive Scalable AI in Manufacturing," looking at how strategic integration, rather than siloed execution, enables scalable, high-impact AI initiatives. Later in the day, John Isch, director, Infrastructure Solutions at Orange Business, will deliver a featured presentation, "The Future-Ready AI Architecture: Building for What's Next."
Robert Truxler, director of Engineering at Wayfair, will participate in the event's final discussion, "Responsible AI in Practice: Approaches to Ethical AI Development," offering advice on balancing innovation with governance, establishing clear risk guidelines, and turning ethical considerations into competitive advantages.
In the ISG Startup Challenge, entrepreneurs including Reid McCrabb, co-founder and CEO of Linkt AI, which develops custom AI agents designed to automate complex, knowledge-based workflows, will pitch their business innovations to a panel of judges for an audience vote.
IBM, Orange Business, NTT DATA and Accenture are ISG AI Impact Summit event sponsors. The Global Voice of CX is an influencer partner, and CIO Applications and Hifo.co are media partners.
Additional information and registration are available on the event website.
About ISG
ISG (Nasdaq: III) is a global AI-centered technology research and advisory firm. A trusted partner to more than 900 clients, including 75 of the world's top 100 enterprises, ISG is a long-time leader in technology and business services that is now at the forefront of leveraging AI to help organizations achieve operational excellence and faster growth. The firm, founded in 2006, is known for its proprietary market data, in-depth knowledge of provider ecosystems, and the expertise of its 1,600 professionals worldwide working together to help clients maximize the value of their technology investments.
View source version on businesswire.com: https://www.businesswire.com/news/home/20250513735705/en/
Contacts
Press Contacts:
Will Thoretz, ISG
+1 203 517 3119
will.thoretz@isg-one.com

Julianna Sheridan, Matter Communications for ISG
+1 978 518 4520
isg@matternow.com