Latest news with #responsibleAI


Fox News
3 days ago
- Business
- Fox News
Trump praised by faith leaders for AI leadership as they warn of technology's 'potential peril'
Evangelical leaders praised President Donald Trump for his leadership on artificial intelligence (AI) in an open letter published last week, while cautioning him to ensure the technology is developed responsibly. Dubbing Trump the "AI President," the religious leaders wrote that they believe he was placed there by "Divine Providence" to guide the world on the future of AI.

The signatories said they are "pro-science" and fully support the advancement of technology that benefits their own ministries around the world. "We are also pro-economic prosperity and economic leadership for America and our friends. We do not want to see the AI revolution slowing, but we want to see the AI revolution accelerating responsibly," the letter says.

The faith leaders warned that the technology is advancing at an out-of-control pace that could bring "potential peril" for mankind. They cited concerns raised by industry leaders Elon Musk, Bill Gates and Sam Altman, warning that AI would take jobs away in most industries and could eventually cause human suffering. The U.S. should not hesitate in its efforts to "win the AI race," the pastors told Trump, but they cautioned that victory should not be pursued at any cost.

"As people of faith, we believe we should rapidly develop powerful AI tools that help cure diseases and solve practical problems, but not autonomous smarter-than-human machines that nobody knows how to control," the letter states. "The spiritual implications of creating intelligence that may one day surpass human capabilities raises profound theological and ethical questions that must be thoughtfully considered with wisdom. One does not have to be religious to recognize religion as a type of compounding wisdom over the centuries, and virtually all religious traditions warn against a world where work is no longer necessary or where human beings can live their lives without any guardrails," the leaders wrote.

They urged Trump to create an advisory council, or to delegate authority to an existing agency or council, "which would convene leaders who will pay attention especially not only to what AI CAN do but also what it SHOULD do." A group of 18 pastors and faith leaders signed the letter, which was spearheaded by prominent Christian leaders Rev. Johnnie Moore, president of the Congress of Christian Leaders, and Rev. Samuel Rodriguez, president of the National Hispanic Christian Leadership Council.

The letter comes weeks after Pope Leo XIV compared the advancements in AI to the Industrial Revolution and called on the Catholic Church to confront the challenges AI poses to human dignity, labor and society. In April, Trump signed an executive order to implement AI education in the classroom to create "educational and workforce development opportunities for America's youth." The order, Trump's latest pro-AI measure, established a White House task force on AI and education that will work with federal agencies and the private sector to help draft AI programs for schools.


Forbes
6 days ago
- Business
- Forbes
The Future Is Explainability – Why AI Must Earn Our Trust
As enterprises shift from AI experimentation to scaled implementation, one principle will separate hype from impact: explainability. This evolution requires implementing responsible AI frameworks that effectively manage deployment while minimizing associated risks. The responsible AI approach, which the industry has come to call "explainability," creates a balanced methodology that is ethical, pragmatic, and deliberate when integrating AI technologies into core business functions.

Responsible AI moves past generative AI's buzz (LLMs, voice and image generators) by harmonizing AI applications with corporate objectives, values, and risk tolerance. This approach typically features purpose-built systems with clearly defined outcomes. Forward-thinking companies making sustained investments prioritize automating routine tasks to decrease human dependency, letting AI manage repetitive processes while humans remain informed of system changes and actively oversee them. And in my view, this is the key to maturing AI.

Explainability helps demystify AI decision-making, a concern that has become essential as businesses pursue AI's promised cost savings and increased automation. Business leaders overseeing analytics need visibility into why an AI system makes certain recommendations. This transparency is key as organizations scale their AI deployments and seek to build internal trust. According to McKinsey & Company, explainability increases user engagement and confidence, which are vital ingredients for successful, enterprise-wide adoption. As businesses embrace automation to drive efficiency and cost savings, interpretability becomes essential for governance, compliance, and decision support.

Explainability agents are a new class of AI models designed to interpret and communicate the reasoning behind complex AI decisions, particularly in black-box systems such as deep neural networks. These agentic AI assistants are autonomous, goal-driven, and capable of adapting to changing conditions in real time. Take, for example, a manufacturer managing MRO (maintenance, repair, and operations) inventory. An explainability agent can continuously reassess stocking levels by analyzing supply, demand, asset usage, and work orders. It can then suggest dynamic adjustments and explain the rationale behind each one. This improves efficiency and empowers supply chain leaders to make informed, confident decisions.

As enterprises grow more sophisticated in their AI adoption, they recognize the limits of generic, pre-trained models. Instead, they're embracing purpose-built AI. The goal is to improve timelines, cut costs, and increase productivity, responsibly and at scale.

Responsible AI also involves rigorous risk management. A recent National Institute of Standards and Technology (NIST) report highlights how AI systems trained on evolving data can behave unpredictably, creating legal, reputational, or operational vulnerabilities. Responsible AI means designing systems that are explainable, testable, and aligned with human oversight, not just accurate. For example, responsible AI systems can segment sensitive data to prevent it from being processed by third-party large language models (LLMs). In another case, a supply chain AI platform might explain every recommendation with data-backed context, allowing users to see what the AI suggests and why it matters.
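The data-segmentation idea can be made concrete with a short sketch. The article names no specific platform or API, so the field names and the allow-list policy below are illustrative assumptions, not a real product's interface:

```python
# Hypothetical field-level policy: only allow-listed fields may leave the
# organizational boundary; everything else fails closed.
ALLOWED_FIELDS = {"part_id", "category", "avg_weekly_demand", "lead_time_weeks"}

def build_llm_payload(record: dict) -> dict:
    """Strip sensitive fields before a record is sent to a third-party LLM.

    Anything not explicitly allow-listed is dropped, so newly added
    sensitive fields are excluded by default rather than leaking.
    """
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "part_id": "PUMP-SEAL-014",
    "category": "seals",
    "avg_weekly_demand": 6.0,
    "lead_time_weeks": 3.0,
    "customer_name": "Acme Corp",   # sensitive: never leaves the boundary
    "contract_price": 112.50,       # sensitive: never leaves the boundary
}
print(build_llm_payload(record))
```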
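Likewise, here is a minimal sketch of what an explainability agent's recommendation-with-rationale output for MRO inventory might look like. The article describes the pattern but publishes no implementation, so the signal names, the reorder heuristic, and the safety factor are all assumptions chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class PartSnapshot:
    """Hypothetical inventory signals for one MRO part (illustrative only)."""
    part_id: str
    on_hand: int
    avg_weekly_demand: float
    supplier_lead_time_weeks: float
    open_work_orders: int

def recommend_stock_level(snap: PartSnapshot, safety_factor: float = 1.5) -> dict:
    """Suggest a stocking level and return the data-backed rationale with it.

    The point of the pattern: every recommendation carries the inputs and
    reasoning that produced it, so a supply chain leader can audit the "why".
    """
    # Expected demand over the supplier lead time, plus units already
    # committed to open work orders.
    lead_time_demand = snap.avg_weekly_demand * snap.supplier_lead_time_weeks
    committed = snap.open_work_orders
    target = round(safety_factor * lead_time_demand + committed)

    rationale = [
        f"Average demand is {snap.avg_weekly_demand:.1f}/week over a "
        f"{snap.supplier_lead_time_weeks:.1f}-week lead time "
        f"(~{lead_time_demand:.1f} units before replenishment can arrive).",
        f"{committed} unit(s) are already committed to open work orders.",
        f"A safety factor of {safety_factor}x covers demand variability.",
        f"Current stock is {snap.on_hand}; recommended level is {target}.",
    ]
    action = "increase" if target > snap.on_hand else "hold or reduce"
    return {"part": snap.part_id, "recommended_level": target,
            "action": action, "rationale": rationale}

rec = recommend_stock_level(PartSnapshot("PUMP-SEAL-014", on_hand=20,
                                         avg_weekly_demand=6.0,
                                         supplier_lead_time_weeks=3.0,
                                         open_work_orders=4))
print(rec["action"], "->", rec["recommended_level"])
for reason in rec["rationale"]:
    print(" -", reason)
```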
This transparency builds user trust, facilitates informed decision-making, and accelerates execution by ensuring stakeholders align with AI-driven strategies. Ultimately, it empowers organizations to unlock AI's full potential, without losing control. AI doesn't need to be mysterious. With explainability agents and purpose-built systems, businesses can harness the power of AI in a transparent, ethical, and results-driven way. Enterprise users shouldn't just use AI—they should be able to understand and trust it. In the next phase of AI adoption, companies that prioritize responsible, agentic AI will reap long-term value while remaining resilient, agile, and accountable.


Harvard Business Review
09-05-2025
- Harvard Business Review
Inside Adobe's Approach to Assessing AI Risk
While concerns about the risks of generative AI persist, people are increasingly embracing AI's power and practical applications in everyday life. But despite sharing common underlying technology, AI applications range from the mundane to the complex. This broad spectrum of use cases makes it challenging to determine which guardrails are appropriate to support responsible AI innovation.