Latest news with #ResponsibleAI


Forbes
2 days ago
- Business
- Forbes
How Amazon, Citi, And C3 Demonstrate Responsible AI Leadership
Artificial intelligence is quickly reshaping the retail and financial services landscape. Firms like Amazon, Citi, and C3 are integrating AI into core operations for efficient payments, personalized offers, and effective fraud detection. With AI becoming increasingly critical to business strategy, responsible deployment is no longer a choice; it's a necessity.

Bhavnish Walia, who leads AI Risk Management at Amazon and serves as Senior Risk Manager for the company's Responsible AI initiatives, says: "There's never been a more critical time to define the future of finance. Generative AI has moved beyond theory. It's reshaping risk, and our role is to ensure it does so responsibly."

Throughout the retail industry, AI drives quicker checkouts, easier seller onboarding, dynamic pricing, real-time fraud detection, and personalized promotions. Amazon infuses AI into almost every aspect of its payments and risk infrastructure, from seller onboarding to anti-fraud measures, enhancing both speed and security. But as adoption increases, so do the stakes.

Walia has set global benchmarks for the responsible implementation of artificial intelligence in high-risk e-commerce systems. He created Amazon's first Anti–Money Laundering AI Governance Framework and Model Risk Management Policy to evaluate large language models prior to production deployment, with a focus on mitigating both customer and operational risks. The framework integrates regulatory scorecards, human-in-the-loop controls, and shadow testing environments, providing a composite evaluation metric to ensure that AI systems in payments and anti–money laundering are compliant, explainable, and fair by design.
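To make the idea concrete, here is a minimal sketch of the kind of pre-deployment gate the article describes: per-dimension risk scores are folded into a composite metric, and any candidate that misses the bar is routed to human review rather than production. The class, weights, thresholds, and stage names below are illustrative assumptions, not Amazon's actual framework.

```python
# Hypothetical sketch of a composite pre-deployment evaluation gate.
# All names, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EvalResult:
    hallucination: float  # 0.0 (clean) .. 1.0 (severe); lower is better
    bias: float
    toxicity: float

def composite_score(r: EvalResult, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted composite risk score; lower is safer."""
    return (weights[0] * r.hallucination
            + weights[1] * r.bias
            + weights[2] * r.toxicity)

def deployment_gate(r: EvalResult, threshold: float = 0.2) -> str:
    """Decide the next lifecycle stage for a model candidate."""
    if composite_score(r) <= threshold:
        return "promote_to_shadow_testing"  # still monitored before full launch
    return "route_to_human_review"          # human-in-the-loop control

candidate = EvalResult(hallucination=0.05, bias=0.10, toxicity=0.02)
print(deployment_gate(candidate))  # promote_to_shadow_testing
```

Note that even a passing model goes to shadow testing first, mirroring the article's point that pre-deployment checks and post-deployment monitoring are separate controls.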
"As we adopt AI at scale across retail and fraud detection, we can no longer treat these systems as black boxes," Walia states. Aligned with regulations and frameworks such as the EU AI Act, NIST's AI Risk Management Framework, and the White House's Blueprint for an AI Bill of Rights, his approach integrates stringent pre-deployment testing to detect and mitigate model hallucinations, bias, and toxicity, ensuring that LLMs are safe to use. He has also built auditable post-deployment monitoring systems that continuously assess algorithmic behavior, enabling ongoing compliance and transparency.

As Walia's work at Amazon demonstrates, integrating AI responsibly into online retail systems requires more than technical expertise; it demands structural accountability. This ethos is now influencing the broader industry, with banking institutions implementing similar governance-first approaches to ensure AI not only delivers results but also earns customers' trust.

Seth Rubin, formerly VP of Lending Marketing Analytics at Citibank, led transformative efforts in applying AI to pricing optimization and customer experience across multiple marketing channels. His team developed machine learning models to predict customer lifetime value and price elasticity, enabling data-informed decisions that weighed business growth against customer trust. "AI enables us to personalize at scale, but every model we bring to production must meet a high bar for fairness, transparency, and business relevance," Rubin emphasizes. "It's not just about what works; it's about being able to explain why it works, to both stakeholders and regulators." Rubin's approach exemplifies a growing movement across the finance sector: embedding ethical AI governance throughout the modeling lifecycle, from experimentation to real-world deployment.
Meanwhile, enterprise AI firm C3 enables online retailers and financial institutions to detect anomalies, manage credit risk, and maintain regulatory compliance at scale. Senior AI/ML Software Engineer Swaroop Rath develops generative AI for enterprise applications. His work involves incorporating models like ChatGPT into mission-critical finance and online retail systems, designing them to be not only performant but also secure, traceable, and auditable. "Enterprise AI must be explainable and robust," Rath says. "It's not just what the model predicts, but why, and whether you can trace it back for regulators, auditors, or customers." By creating AI workflows that record model lineage and explain decisions, Rath is bridging the gap between innovation and compliance.

The potential of AI in retail and finance is clear: more intelligent decision-making, quicker implementation, and more targeted customer experiences. Yet the dangers, particularly in critical areas such as payments, fraud, pricing, and customer eligibility, call for prudent governance. Executives such as Walia, Rubin, and Rath show us that responsible AI isn't just a technical goal; it's a strategic necessity. With regulatory pressure mounting and customer expectations changing, the winners will be the ones that develop AI systems that aren't just powerful, but principled.

For more like this on Forbes, check out What Is Agentic AI And What Will It Mean For Financial Services? and AI's Growing Role In Financial Security And Fraud Prevention.
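The audit-trail pattern Rath describes, recording the model version, input, decision, and explanation for every prediction so it can be traced later, can be sketched in a few lines. This is an illustrative toy, not C3's or Amazon's actual code; the class, the stand-in fraud rule, and all field names are assumptions.

```python
# Illustrative sketch of an audit-logging wrapper for model lineage and
# decision traceability. All names and the toy model are hypothetical.
import datetime
import hashlib
import json

class AuditedModel:
    def __init__(self, model_fn, model_version: str, log: list):
        self.model_fn = model_fn           # the underlying predictor
        self.model_version = model_version # lineage: which model made the call
        self.log = log                     # append-only audit log

    def predict(self, features: dict) -> dict:
        decision = self.model_fn(features)
        self.log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": self.model_version,
            "input_hash": hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()).hexdigest(),
            "decision": decision["label"],
            "explanation": decision["reason"],  # record the why, not just the what
        })
        return decision

# Toy rule standing in for a real fraud model:
def toy_fraud_model(features: dict) -> dict:
    flagged = features["amount"] > 10_000
    return {"label": "review" if flagged else "approve",
            "reason": f"amount={features['amount']} vs threshold 10000"}

audit_log = []
model = AuditedModel(toy_fraud_model, "fraud-v1.2", audit_log)
print(model.predict({"amount": 12_500})["label"])  # review
```

Hashing the input rather than storing it raw is one way such a log can stay auditable without retaining sensitive customer data verbatim.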


Zawya
12-06-2025
- Business
- Zawya
The Responsible AI Institute appoints Matthew Martin as Global Advisor
Matthew brings over two decades of cybersecurity expertise to help organizations navigate evolving regulatory landscapes and deploy responsible AI with confidence.

Texas, U.S. – The Responsible AI Institute (RAI Institute), a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations, has appointed Matthew Martin, founder and CEO of Two Candlesticks and an international leader in cybersecurity, as a member of its Global Advisory Board. Matthew's extensive cybersecurity expertise will be leveraged to help organizations strengthen AI governance, enhance transparency, and scale innovation responsibly.

With over 25 years of experience in the cybersecurity industry, Matthew has led and implemented security operations at Fortune 100 financial services companies. As CEO of Two Candlesticks, he provides high-level cybersecurity consultancy, strategy, and frameworks to underserved markets and regions. He will apply this expertise at the RAI Institute to build awareness of transparent AI practices and help organizations overcome critical technological, ethical, and regulatory challenges.

'AI has the power to truly transform the world. If done correctly, it democratizes a lot of capabilities that used to be reserved just for developed markets. This is exactly why industries need organizations like the RAI Institute,' said Matthew Martin, Global Advisor at the RAI Institute and CEO of Two Candlesticks. 'I'm proud to be a part of such a forward-thinking institute that's leading the way in advancing responsible AI innovation across diverse markets. Its mission directly aligns with my passion for playing an active role in establishing a resilient, future-ready cybersecurity foundation for all.'

Through its global network of responsible AI experts, the RAI Institute offers valuable insights to practitioners, policymakers, and regulators.
With over 34,000 members and collaborators, its community spans technology, finance, healthcare, academia, and government agencies. Its goal is to operationalize responsible AI through education, benchmarking, verification, and third-party risk assessments.

'We are so pleased to have Matthew on board as a Global Advisor for the RAI Institute. His drive for serving the underserved in cybersecurity makes him a perfect addition to the board as we advance responsible AI across the entire ecosystem,' said Manoj Saxena, Chairman and Founder of the Responsible AI Institute. 'Trusted AI foundations lead to sustainable and scalable AI solutions. It's through the expert contributions of industry leaders like Matthew that we can strengthen our mission to ensure a secure future for AI.'

In addition to his role at the RAI Institute, Matthew holds advisory positions on the boards of Ironscales, Trustwise, Stealth, and Surge Ventures. Through his work at Two Candlesticks, he is making robust cybersecurity strategies accessible, efficient, and impactful across Africa, Asia, Europe, the Middle East, and the Americas.

About Responsible AI Institute (RAI Institute)

Founded in 2016, the Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations. Members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial, and many others dedicated to bringing responsible AI to all industry sectors.


Fast Company
05-06-2025
- Business
- Fast Company
AI isn't coming for your job—it's coming for your company
Debate about whether artificial intelligence can replicate the intellectual labor of doctors, lawyers, or PhDs overlooks a deeper, looming concern: entire companies—not just individual jobs—may be rendered obsolete by the accelerating pace of AI adoption.

Reports suggesting OpenAI will charge $20,000 per month for agents trained at a PhD level spun up the ongoing debate about whose job is safe from AI and whose is not. 'I've not seen it be that impressive yet, but it's likely not far off,' James Villarrubia, head of digital innovation and AI at NASA CAS, told me. Sean McGregor, founder of the Responsible AI Collaborative, who earned a PhD in computer science, pointed out that many jobs are about more than just a set of skills: 'Current AI technology is not sufficiently robust to allow unsupervised control of hazardous chemistry equipment, human experimentation, or other domains where human PhDs are currently required.'

The big reason I polled the audience on this one was that I wanted to broaden my perspective on which jobs would be eliminated. Instead, it changed my perspective.

AI needs to outperform the system, not the role

Suzanne Rabicoff, founder of The Pie Grower, a human-agency think tank and fractional practice, gave me reading assignments from her work instead of a quote. Her work showed me that these times are unprecedented. But something clicked when she wrote that she liked the angle of more efficient companies rising, rather than jobs being replaced at companies carrying heavy tech and human capital debt. Her response when I put that framing to her? 'Exactly my bet.'

Sure, this is the first time a robot is doing the homework for some college students. However, there is more precedent for robots moving market share than for replacing the same job function across an entire sector.
Fortune 500 companies—especially those bloated with legacy processes and redundant labor—are always vulnerable to decline as newer, more nimble competitors rise. Not because any single job is replaced, but because the foundational economics of their business models no longer hold. AI doesn't need to outperform every employee to render an enterprise obsolete. It only needs to outperform the system.

Case study: The auto industry

Take, for example, the decline of American car manufacturers in the late 20th century. In the 1950s, American automakers had a stranglehold on the car industry, not unlike today's tech giants. In 1950, the U.S. produced about 75% of the world's cars. But in the 1970s, Japanese automakers pioneered the use of robotics in auto manufacturing. These companies produced higher-quality vehicles at better value thanks to leaner, more precise operations. Firms like GM struggled to keep up, burdened by outdated factories and excessive human capital costs—including bloated pensions.

The seismic shift in the decades that followed paints a picture of what could be in store for large companies now. In 1960, the U.S. produced about 48% of the world's cars, while Japan accounted for just 5%. By 1980, Japan had captured around 29% of the market, while the U.S. had fallen to 23%.

Today's AI shakeup could look similar. Decades from now, we could look at Apple the way we look at Ford now. AI startups with more agile structures are poised to eat market share. On top of that, startups can focus on solving specialized problems, sharpening their competitive edge.

Will your company shrivel and die?

The fallout has already begun. Gartner surveyed organizations in late 2023, finding that about half were developing their own AI tools. By the end of 2024, that figure had dropped to 20%.
As hype around generative AI cools, Gartner notes that many chief information officers are instead using outside vendors—either large language model providers or traditional software sellers with AI-enhanced offerings. In 2024, AI startups received nearly half of the $209 billion in global venture funding. If only 20% of legacy organizations currently feel confident competing with these upstarts, how many will feel that confidence as the startups mature?

While headlines continue to fixate on whether AI can match PhD-level expertise, the deeper risk remains largely unspoken: giant companies will shrivel, and some may die. And when they do, your job is at risk whether you greet customers at the front desk or hold a PhD in an engineering discipline.

But there are ways to stay afloat. One of the most impactful pieces of advice I ever received came from Jonathan Rosenberg, former SVP of products at Google and current advisor to Alphabet, when I visited the company's campus in college: 'You can't just be great at what you do; you have to catch a great wave. Early people think it's about the company, then the job, then the industry. It's actually industry, company, job…'

So, how do you catch the AI wave? Ankur Patel, CEO of Multimodal, advises workers to learn how to do their current jobs using AI tools that enhance productivity. He also notes that soft skills—mobilizing people, building relationships, leading teams—will become increasingly valuable as AI takes over more technical or routine tasks. 'You can't have AI be a group leader or team leader, right? I just don't see that happening, even in the next generation forward,' Patel said. 'So I think that's a huge opportunity…to grow and learn from.'


Skift
30-05-2025
- Business
- Skift
Can AI Change Travel Without Sacrificing Trust? Insights from Amadeus' Head of Research
Rodrigo Acuna Agost of Amadeus explains how AI is shaping the future of travel while building trust through responsible data practices.

Amadeus powers much of the global travel ecosystem, processing around 3 billion flight searches every day. As Head of Research and AI Center of Excellence, Rodrigo Acuna Agost leads efforts to turn this vast data into smarter travel experiences. At the upcoming Skift Data + AI Summit, Rodrigo will share how Amadeus uses artificial intelligence to personalize journeys, improve operations, and enhance customer satisfaction. His work highlights both the potential and the challenges of AI in travel. Balancing innovation with ethics, Amadeus is pioneering a Responsible AI Framework to ensure technology is applied transparently and fairly, maintaining trust with customers and partners worldwide.

What's one of the most exciting ways you're using data right now to make business or customer decisions?

'No data, no AI!'

'Amadeus connects many travel industry participants, allowing us to process large amounts of data. This facilitates innovation and the improvement of experiences for both travelers and customers.'

'Today, we already operate at scale thanks to this. To give some examples, Amadeus processes approximately 3 billion flight search requests every day! Additionally, our solutions enable our customers to make well-informed decisions through the use of advanced insights, including artificial intelligence. We strive to ensure a seamless and efficient experience throughout their journey.'

'We firmly believe that AI has the potential to greatly enhance every stage of a traveler's journey: from personalized content that sparks imagination and encourages them to explore new destinations, through enhanced traveler-centric search and shopping experiences and personalized customer operations, to the evaluation of customer sentiment post-trip.'

How are you balancing innovation with responsible data governance?
'Any new technology comes with opportunities and challenges. While AI brings exciting options, it also introduces potential risks such as bias, data privacy issues, intellectual property conflicts, and cybersecurity threats. To manage those risks, we are adapting our processes and policies to implement a Responsible AI Framework that will enable us to adhere to the European AI Act, as well as any other emerging AI regulations relevant to our global operations.'

'Is it a constraint? Not at all! It ensures we comply with our values and ethical principles and reinforces the trust we have built with our customers, partners, and employees, while pushing the boundaries of AI knowledge in travel with a drive to improve the overall travel experience.'

What emerging trends in AI or data are you watching most closely right now?

'AI is progressing rapidly, with research updates and announcements occurring almost daily. Our Research team tracks these trends and runs exploratory projects. These programs help us identify where to focus while continuing to monitor progress.'

'In 2025, we are focusing on two major areas:

Agentic AI: Agentic AI refers to artificial intelligence systems that can act autonomously, make decisions, and perform tasks without human intervention. In the travel industry, agentic AI could significantly enhance customer service by providing personalized travel recommendations, handling bookings, and managing itineraries. It could also streamline operations for travel companies through automated processes such as baggage handling, flight scheduling, and real-time updates, ultimately improving efficiency and customer satisfaction.

Responsible AI: Responsible AI emphasizes the ethical and accountable use of technology, ensuring AI systems operate fairly and transparently.
For the travel industry, responsible AI can build trust with consumers by safeguarding privacy and data security, making unbiased travel suggestions, and ensuring equitable access to travel resources. Responsible AI can also help mitigate potential negative impacts, such as over-tourism and environmental degradation, by promoting sustainable travel options.'

Hear More from Amadeus at Skift Data + AI Summit

Rodrigo Acuna Agost offers a compelling case for how Amadeus is not only innovating at scale but doing so with a deep sense of responsibility. From agentic AI to ethical frameworks, his work sits at the cutting edge of travel technology. To hear more from Rodrigo and other data and AI leaders reshaping the future of travel, join us at the Skift Data + AI Summit. Discover how industry pioneers are turning data into smarter decisions and AI into meaningful experiences.


News18
22-05-2025
- Business
- News18
Indo-American Vaniya Agrawal, Ex-Microsoft Employee, Disrupts Company's Event Again
Vaniya Agrawal, an Indian-American techie, disrupted Microsoft's Build 2025 event, protesting the company's cloud contracts with Israel. This is her second protest since April.

Indian-American techie Vaniya Agrawal played a key role in disrupting a Microsoft event for the second time since April. Microsoft's annual developer conference, Build 2025, has been hit by a series of back-to-back disruptions from pro-Palestine activists, with the former Microsoft engineer leading the latest high-profile protest.

Agrawal hit the headlines in April by interrupting Microsoft's 50th anniversary celebration. This week, she disrupted a Build 2025 session on AI security. Alongside ex-Microsoft employee Hossam Nasr, she shouted down Neta Haiby, Microsoft's Head of Security for AI, and Sarah Bird, who heads Responsible AI, in protest against the company's cloud contracts with the Israeli government.

The protest marks the third straight day of unrest at Build 2025. Earlier, on May 19, an employee interrupted CEO Satya Nadella's keynote by shouting 'Free Palestine' and demanding accountability for Microsoft's tech deals with Israel. The following day, a Palestinian tech worker stormed executive Jay Parikh's Azure AI presentation, declaring, 'Cut ties! No Azure for apartheid!'

A former employee in the company's AI division, Agrawal had earlier accused Microsoft leadership—including Bill Gates and Satya Nadella—of complicity in civilian deaths in Gaza, saying '50,000 Palestinians in Gaza have been murdered with Microsoft technology.' She was fired shortly after the April protest without serving her notice period. Microsoft has not publicly commented on the protests or the employees' dismissals.
The repeated disturbances reflect growing internal dissent over Microsoft's role in supplying cloud infrastructure to the Israeli Ministry of Defense, amid heightened scrutiny over the humanitarian toll of the Gaza conflict.

First Published: May 22, 2025, 16:59 IST