
Latest news with #NationalAssociationofCorporateDirectors

Business Leaders Called To Align Tech Decisions With Corporate Values

Forbes

16 hours ago

Signs point to an emerging, if informal, social contract on AI deployment.

With the rapid rollout of AI, corporate leaders are increasingly being called to consider the proper alignment between technology strategies and organizational purposes and values. It's a call that speaks to an informal yet important 'social license' between companies and their stakeholders on the use of technology and its impact on labor, among other interests. And it's a call that has been reflected in recent comments from influential religious, legal and business leaders, including Pope Leo XIV, Amazon CEO Andrew Jassy, and Wachtell, Lipton, Rosen & Katz Founding Partner Martin Lipton.

Attention to this informal social license arose from President Joseph Biden's 2023 Executive Order on the 'Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.' This (now revoked) Executive Order identified eight specific principles by which AI development should be guided, including a commitment to supporting American workers and preventing 'harmful labor-force disruptions.'

The National Association of Corporate Directors ('NACD') indirectly acknowledged the AI social license in its 2024 Blue Ribbon Commission Report, 'Technology Leadership in the Boardroom: Driving Trust and Value.' The Report called upon boards to 'move fast and be bold' with respect to AI deployment while simultaneously acting as a 'guardrail to uphold organizational values and protect stakeholders' interests.'

In a May 12, 2025, address to the College of Cardinals, Pope Leo spoke broadly about the social concerns raised by AI, focusing particularly on what he described as the challenges to the defense of human dignity, justice and labor that arise from 'developments in the field of artificial intelligence.' A recent article in The Wall Street Journal chronicled the long-running dialogue between the Vatican and Silicon Valley on the ethical implications of AI.
Indeed, on June 17, Pope Leo delivered a written message to a two-day international conference in Rome on AI, ethics and corporate governance. In his message, the Pope urged AI developers to evaluate its implications in the context of the 'integral development of the human person and society…taking into account the well-being of the human person not only materially, but also intellectually and spiritually…'

This alignment concern was underscored in a recent post by the highly regarded Mr. Lipton, who encouraged corporate boards to maintain their organizational values while pursuing value through AI: 'Boards should consider in a balanced manner the effect of technological adoptions on important constituencies, including employees and communities, as opposed to myopically seeking immediate expense-line efficiencies at any cost.'

There is little question that for many companies, generative AI is likely to have a disruptive impact on labor: the efficiency gains expected from AI implementation could result in a reduced or dramatically altered workforce. The related question is the extent to which 'corporate values' should encompass a response to tech-driven labor disruption. Note in this regard NACD's long-standing position that a positive workforce culture is a significant corporate asset.

A recent memo from Amazon CEO Andrew Jassy offers a positive example of how to address the strategy/values alignment challenge: being transparent with employees, well in advance, about the coming transformation and its impact on the workforce, and offering practical suggestions on how employees can best prepare for it. As Jassy wrote: 'Those who embrace this change, become conversant in AI, help us build and improve our AI capabilities internally and deliver for customers, will be well-positioned to have high impact and help us reinvent the company.'
As boards work with management to deploy AI, they should be in regular conversation about which value-centered decisions the board must be informed of, which it may be asked to decide, and which it may merely advise on. Such a dialogue is likely to enhance the reflection of corporate purposes and values in decisions regarding strategy and technology. Of course, that incorporation can come in many different ways and from many different directions, the Amazon example being one of them.

There are no established guidelines on how leadership might approach the strategy/values alignment discussion. But there is a growing recognition that corporate values must be incorporated in some manner into AI decision-making. Most likely, effective alignment will balance the inevitability of AI-driven workforce impact with initiatives that advance employee well-being and 'positively augment human work,' including initiatives that minimize job-displacement risks and maximize career opportunities related to AI. For as the NACD suggests, the ultimate AI deployment message to the board is that '[I]t's about what you can do, but also what you should do.'

How Board-Level AI Governance Is Changing

Forbes

28-03-2025

Technology and AI governance remains a top concern for corporate directors and executives in 2025, relative to safeguarding data, managing new technologies, and ensuring the necessary skills in the boardroom and across the organization. Effective board members understand the importance of technology for their businesses and their role in governing it.

According to the National Association of Corporate Directors (NACD) 2025 Trends and Priorities Survey, three of the 10 Top Director's Trends for 2025 involve technology governance. Cybersecurity threats and AI remain at the center of director concerns around technology. In WTW's November 2024 Emerging and Interconnected Risks Survey, executives worldwide listed AI and cyber risk as the top two of 752 emerging risks today. Additionally, WTW's 2025 Directors & Officers Risk Survey reports that data loss and cyberattacks are both within the top three risks.

As covered previously in this space, effective leaders have shifted from traditional risk management protocols to more dynamic and responsible governance models for managing AI's growth across industries and applications while adhering to their values. Classical rules-based governance structures and processes often fail to address AI's unique challenges and opportunities and cannot keep pace with rapid advancements. Effective leaders adopt guiding principle-based governance practices that allow their organizations to benefit from AI technologies while reducing risks and increasing trust and accountability. They employ 'responsible AI': the process of developing and operating AI systems that align with organizational purpose and values while achieving the desired business impact.
Responsible AI governance models are flexible and responsive, including mechanisms for regular updates, feedback loops and continuous improvement. These models enable leaders to design governance practices specifically addressing AI, adapt to internal and external changes, and remain effective and relevant in both the short and long term.

Recently, at an AI roundtable in London hosted by TWIN Global, professor and corporate director Dr. Helmuth Ludwig shared insights from his research, conducted in partnership with Professor Dr. Benjamin van Giffen, on board effectiveness in governing AI. Ludwig and van Giffen reported that although AI is top of mind even for nontechnical business executives and board members, most boards struggle to understand both the implications of AI for their businesses and their role in governing it. The authors identify four categories of board-level AI governance issues, with examples of effective practices for each.

1. Strategy and Firm Competitiveness – Effective boards recognize AI as a strategic enabler and differentiator that influences an organization's competitive position and business model. These boards adopt two key practices. First, they ensure that AI is reflected in business strategy, for example by addressing how AI is affecting business decisions and priorities, as well as the execution of those priorities. Second, they incorporate AI into the board's annual strategy meeting, including internal and external views, ensuring systematic board-level discussions about AI.

2. Capital Allocation – Effective boards govern capital allocation decisions related to AI by promoting experimentation with AI, securing investment for platforms and tools, and enhancing AI capabilities through external partnerships and potential mergers and acquisitions. They use the annual budgeting process to secure investment in both foundational and differentiated AI capabilities across the company, and they support external AI partnerships and M&A.

3. AI Risks – Effective boards treat AI risk oversight as a key board duty. They discuss updates in AI technology developments and applications relevant to their business strategy and its execution, and they review publicly available cases of AI bias and discrimination that occur even at highly advanced technology companies. They recognize how AI introduces new risks and adds complexity to risk management protocols, and they actively ensure the implementation of processes that manage those risks, focusing on ethical and reputational risks arising from the use of AI, data risks, and legal and regulatory risks. They invite AI risk experts to present at board or audit/risk committee meetings and integrate AI into enterprise risk management activities.

4. AI Technology Competence – Effective boards focus on the technology competence of both the board and the executive management team. To engage competently in critical AI governance decisions, effective directors require at least a foundational level of AI competency, if not more. This may mean expanding the role of board committees, for example by setting up technology and innovation committees. Effective boards make certain that the CEO and executive management are ready to execute the company's AI agenda, and they assess AI competencies in succession planning and CEO search criteria. They assess whether their directors are ready for AI, and they 'seize a moment of impact' to transition from merely acknowledging AI's importance to actively discussing it and initiating board-level AI governance.
