Latest news with #TrustworthyAI


Forbes
4 days ago
The Evolving Role Of The Chief Learning Officer In The Age Of AI
Beena Ammanath - Global Deloitte AI Institute Leader, Founder of Humans For AI and Author of "Trustworthy AI" and "Zero Latency Leadership."

Most enterprises are somewhere on a transformative journey with AI. Some are still mastering the basics and experimenting with ideas; others are already running complex applications at scale. The common denominator for every organization on the road to AI-fueled operations is the need for skilled talent, which has created a nearly unquenchable demand for AI-ready employees. The imperative is to look internally and determine how to help everyone, from the boardroom to the C-suite to the lines of business, gain the skills and knowledge they need to use AI and drive business value with it.

This is a known challenge, but what is less clear is who in the organization is responsible for workforce upskilling. In many organizations, the chief learning officer (CLO) is the epicenter of skills and knowledge development, and the CLO's role is beginning to include the challenge of equipping every employee with the AI skills they need. The task is complicated by the fact that innovation is moving quickly: what is new and differentiating today may soon be a table-stakes capability or left behind entirely. A 2024 Deloitte survey of workplace skills found that 70% of responding workers said "they worked at a company that pushed employees to learn a new technology-based skillset, only for that technology to fall out of use." If organizations are perpetually chasing skills development for the latest groundbreaking innovation, they may miss a more transformational, more strategic opportunity: preparing their workforces for the future, whatever technologies it holds.

A People-Focused Perspective

The CLO is positioned to answer today's call for AI workforce development in part because CLOs do not look at the workforce purely through a technology lens. This is a human challenge, and for CLOs, now is a moment to lead. Tactically, there are several avenues to explore.

The basics of learning apply to AI. For most employees, the imperative is to build AI literacy and a working familiarity with the applications used in the organization. Workshops, virtual demonstrations, speakers, self-directed learning and third-party training may be good places to start. A recent Deloitte generative AI survey found that 67% of responding organizations have invested in internal tools to help employees build GenAI familiarity, and 59% have published training and learning resources for talent development. The approaches and materials that work for other skills are likely to be effective with AI.

State-of-the-art LLMs can be wonderful teachers. The core of their value is their capacity to digest large volumes of information and present it in clear, natural-language outputs. This is ideal for helping employees master the basics of the AI lexicon, the history of innovation, and AI functionality and model types. Using an LLM, workers can revisit a topic multiple times and ask for explanations and examples until they feel confident in their new knowledge. Of course, LLMs are susceptible to inaccurate or entirely false outputs, but for basic AI literacy, most LLMs are likely good enough to establish foundational knowledge and familiarity. While some AI risks are evident (e.g., inaccurate outputs), many are not, nor are the ethical implications of how AI is developed, deployed and managed.
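As a concrete illustration of that revisit-and-ask study loop, here is a minimal sketch in Python, assuming the OpenAI client library; the model name, system prompt and ask helper are illustrative stand-ins, and any chat-capable LLM API would serve equally well.

```python
# A minimal sketch of an LLM "tutor" loop, assuming the OpenAI Python client
# (pip install openai). Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a patient AI-literacy tutor. Explain concepts plainly, "
    "give one concrete workplace example, and offer to re-explain."
)

# Keeping the full history lets the learner circle back to a topic
# repeatedly, as the article describes.
history = [{"role": "system", "content": SYSTEM_PROMPT}]

def ask(question: str) -> str:
    """Send one question while retaining all prior turns of the session."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute any approved model
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("What is the difference between an AI model and an AI application?"))
print(ask("Can you explain that again with a payroll example?"))
```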
Workers need support in learning how to use and manage AI responsibly, and there is reason to think focused learning and training are effective. A Deloitte report on technology ethics training found that 70% of respondents who received such training changed their behavior when working with technology. Given the opportunity, most employees are likely to put technology ethics knowledge to good use, which matters not just for ethical application but also for managing AI in light of regulations, enterprise culture and industry standards.

GenAI-enabled applications and, more recently, AI agents are expected to increasingly liberate workers from mundane tasks and even entire workflows. With more time and capacity, these employees can be dedicated to higher-level, more strategic work, where they will need the human skills of teamwork, communication, problem-solving and critical thinking. Yet human skills development often does not receive as much attention as technical skills. Providing opportunities for the workforce to improve its people skills is important, regardless of how powerful or capable AI becomes.

Final Thought

The CLO may not have total insight into the workforce skills needed across business units. The AI knowledge needed for accounting and payroll may differ from that required for customer engagement, enterprise strategy and other functional domains. The leaders of these business units likely have an incisive view into the kinds of skills their workers need to develop. Coordination and collaboration among executives and other leaders can help reveal where the organization's AI initiatives can be enhanced with a focus not on technology but on people.


Forbes
08-07-2025
When The Old Becomes New: Revisiting Opportunities In Abandoned AI Use Cases
Beena Ammanath - Global Deloitte AI Institute Leader, Founder of Humans For AI and Author of "Trustworthy AI" and "Zero Latency Leadership."

Organizations are searching for valuable AI use cases, but the most transformational opportunities might not be new. Instead, some of the best ideas may be found in the pile of past technology pilots that never made it to production.

Generative AI and, more recently, agentic systems are maturing quickly. With GenAI, early applications have been deployed for customer-facing and back-office activities, such as chatbots for call centers or AI-generated summaries of legal documents. Yet commoditized GenAI applications like these are now available to every enterprise. The bolder value vision is in using new AI capabilities to solve long-standing inefficiencies or problems that may have been targeted before, albeit with inferior technology. In the hunt for use cases, business leaders should reexamine projects that were previously attempted but abandoned because the necessary technology was nascent or did not yet exist.

Looking Back To Point The Way Forward

Here's an example of how an idea conceived years ago may be ripe for today's technology landscape. A global manufacturer maintains more than 400 ERP systems. That sprawl complicates procurement and leads to significant discrepancies in product costs across business units, even when parts are ordered from the same vendor. What's more, the procurement leader has limited visibility into what different groups in the company are paying for parts; the data is fragmented and unstructured, and some of it exists only on paper. The opportunity is to improve visibility into data and ERP systems to optimize costs.

Even a decade ago, technology was not ready for this use case. I know because I led a team that attempted to build this kind of solution. We pulled prices and item descriptions from the central data lake and built an application to give procurement leaders more visibility into parts data. The significant savings it generated inspired us to reach farther with automation. One issue was that parts descriptions used technical terms and numbers that were difficult to decipher, especially for non-technical employees. To overcome this, we sought to build an abstraction layer on top of the data that would let users type conversational descriptions and see the corresponding parts across ERP systems. Easier said than done. Not only was natural language processing (NLP) still maturing, but troves of data were not yet digitized (e.g., paper drawings), and image recognition and processing at the time were nascent at best. Ultimately, while we did capture some value, the bolder aspiration to use several types of automation to optimize procurement came up short. The technology was not mature enough to make our vision a reality. Still, it was a good idea, and it remains a good idea; a sketch of how that conversational layer might look with today's tools follows below.

I am not the only executive to have had this experience. Many other examples can be found across industries and enterprises, and therein lies the opportunity. Today's technology offers far more sophisticated capabilities than in years past, and this opens another avenue for (re)discovering where GenAI and agentic systems can generate value.
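To see why the parts-search idea is newly feasible, here is a minimal sketch of the conversational layer, assuming the open-source sentence-transformers library and a toy in-memory catalog; the model name, part records and search helper are illustrative, and a production system would index millions of rows from many ERP systems in a vector database.

```python
# A minimal sketch of conversational parts search via text embeddings,
# assuming sentence-transformers (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

# Hypothetical part records pulled from several ERP systems.
parts = [
    {"erp": "ERP-014", "sku": "BRG-2207",
     "desc": "SKF 6204-2RSH deep groove ball bearing 20mm bore"},
    {"erp": "ERP-231", "sku": "417-BB20",
     "desc": "ball bearing, sealed, ID 20 mm, OD 47 mm"},
    {"erp": "ERP-102", "sku": "HX-0099",
     "desc": "hex bolt M8 x 40 zinc plated"},
]

corpus_embeddings = model.encode([p["desc"] for p in parts], convert_to_tensor=True)

def search(query: str, top_k: int = 2):
    """Match a plain-language description against technical part descriptions."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=top_k)[0]
    return [(parts[h["corpus_id"]], round(h["score"], 3)) for h in hits]

# The same sealed bearing surfaces from both ERP systems despite very
# different technical phrasings, which stumped the NLP of a decade ago.
for part, score in search("a sealed bearing with a 20 millimeter hole"):
    print(score, part["erp"], part["sku"], part["desc"])
```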
Tapping Untapped Sources Of Ideas

Many leaders who pushed the boundaries of technology capabilities years ago are in senior executive positions today. Their institutional and historical knowledge is fodder for thinking about AI applications that do more than automate a discrete part of an existing process. Indeed, the most transformational opportunities with GenAI and agentic systems may already have been identified.

With this perspective, the next steps take place in the C-suite. Executives at leading companies are already collaborating to pursue AI value. In deliberations over strategic goals, capital deployment and technology transformation, leaders should think back on prior technology demonstrations and pilots, perhaps even at another enterprise, and reconsider whether those good ideas still hold merit. In addition to consulting their own experiences, executives can turn to their business units to solicit ideas from career employees and review documentation from previous projects. Board members may also be a source of opportunities and ideas, informed by their knowledge of the business and their own leadership experiences.

The core insight for leaders is to keep an eye on technology advances. We are in a transformational moment with the growing maturity of GenAI and agentic systems. Valuable ideas come from everywhere, including the past.


Newsweek
03-07-2025
Culture War on Harvard Spells Disaster for America's AI Future
The battle between the White House and Harvard University over a $2.2 billion federal funding freeze and demands to bar the university from enrolling international students is no isolated attack. It is part of a broader war on liberal higher education, and a harbinger of a wider global struggle. A federal court ruling may have temporarily blocked the student ban, but the message is clear: these attacks are ideological, deliberate, and dangerous.

The 24 universities backing Harvard's lawsuit know this is bigger than campus politics. Undermining academia weakens one of the last independent institutions shaping AI's impact on society. By weakening the institutions that embed human knowledge and ethical reasoning into AI, we risk creating a vacuum where technological power advances without meaningful checks, shaped by those with the most resources, not necessarily the best intentions.

The language used in discussions about ethical AI, terms like "procedural justice," "informed consent," and "structural bias," originates not in engineering labs but in the humanities and social sciences. In the 1970s, philosopher Tom Beauchamp helped author the Belmont Report, the basis for modern medical ethics. Legal scholar Alan Westin's work at Columbia shaped the federal Privacy Act of 1974 and the very notion that individuals should control their own data. This intellectual infrastructure now underpins the world's most important AI governance frameworks. Liberal arts scholars helped shape the EU's Trustworthy AI initiative and the OECD's 2019 AI Principles, global standards for rule of law, transparency, and accountability. U.S. universities have briefed lawmakers, scored AI companies on ethics, and championed democratized access to datasets through the bipartisan CREATE AI Act.

But American universities face an onslaught. Since his inauguration, Trump has moved to ban international students, slashed humanities and human rights programs, and frozen more than $5 billion in federal funding to leading universities like Harvard. These policies are driving us into a future shaped by those who move fastest and break the most. Left to their own devices, private AI companies pay lip service to ethical safeguards but tend not to implement them. And several, including Google, Meta, and Amazon, are covertly lobbying against government regulation.

Harvard banners hang in front of Widener Library during the 374th Harvard Commencement in Harvard Yard in Cambridge, Massachusetts, on May 29, 2025. Rick Friedman / AFP/Getty Images

This is already creating real-world harm. Facial recognition software routinely discriminates against women and people of color. Denmark's AI-powered welfare system discriminates against the most vulnerable. In Florida, a 14-year-old boy died by suicide after bonding with a chatbot that reportedly included sexual content. The risks compound when AI intersects with disinformation, militarization, or ideological extremism. Around the world, state and non-state actors are exploring how AI can be harnessed for influence and control, sometimes beyond public scrutiny. The Muslim World League (MWL) has also warned that groups like ISIS are using AI to recruit a new generation of terrorists.
Just last month, the FBI warned of scammers using AI-generated voice clones to impersonate senior U.S. officials.

What's needed is a broader, more inclusive AI ecosystem: one that fuses technical knowledge with ethical reasoning, diverse cultural voices, and global cooperation. Such models already exist. The Vatican's Rome Call for AI Ethics unites tech leaders and faith groups around shared values. In Latin America and Africa, grassroots coalitions like the Mozilla Foundation have helped embed community voices into national AI strategies. For instance, MWL Secretary-General Mohammad Al-Issa recently signed a landmark long-term memorandum of understanding with the president of Duke University, aimed at strengthening interfaith academic cooperation on shared global challenges. During the visit, Al-Issa also delivered a keynote speech on education, warning of the risks posed by extremists exploiting AI. Drawing on his work confronting digital radicalization by groups like ISIS, he has emerged as one of the few global religious figures urging faith leaders to be directly involved in shaping the ethical development of AI.

The United States has long been a global AI leader because it draws on diverse intellectual and cultural resources. But that edge is fading. China has tripled its number of universities since 1998 and poured billions into state-led AI research. The EU's newly passed AI Act is already reshaping the global regulatory landscape.

The world needs not just engineers but ethicists; not just coders but critics. The tech industry may have the tools to build AI, but academia holds the moral compass to guide it. If America continues undermining its universities, it won't just lose the tech race. It will forfeit its ability to lead the future of AI.

Professor Yu Xiong is Associate Vice President at the University of Surrey and founder of the Surrey Academy for Blockchain and Metaverse Applications. He chaired the advisory board of the UK All-Party Parliamentary Group on Metaverse and Web 3.0. The views expressed in this article are the writer's own.