
Ryt Bank selects Provenir for AI risk decisioning

Finextra, 23 July 2025
Provenir, a global leader in AI risk decisioning software, today announced it has partnered with Ryt Bank – The World's First AI-Powered Bank – to support the company's mission to deliver banking done right with speed, simplicity, and innovation.
Ryt Bank has selected the Provenir AI Decisioning Platform to power faster credit decisions and more personalized customer offers for its consumer lending products.
As a newly licensed digital bank, Ryt Bank aimed to rapidly launch a consumer lending product that aligns with its AI-first approach. The challenge was to implement a decisioning infrastructure capable of delivering instant, personalized loan approvals while ensuring compliance with regulatory standards and risk management best practices.
Ryt Bank selected Provenir's AI Decisioning Platform to support real-time credit risk assessment for instant loan approvals, and for its ability to surface data insights for personalized loan offers based on AI-driven customer profiling. Provenir will also play a crucial role in automating compliance checks to meet regulatory requirements, while providing continuous-learning models that adapt to changing market dynamics. Finally, the platform will deliver fast, accurate decisions that elevate the customer experience, supporting Ryt Bank's mission to provide smarter, faster finance and create meaningful impact for all Malaysians.
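For illustration only, the sketch below shows the general shape of a real-time lending decision flow of the kind described above: a risk score, automated eligibility checks, and a personalized offer derived from the same inputs. Everything in it (the Applicant fields, thresholds, scoring weights, and offer rules) is a hypothetical stand-in and does not represent Provenir's or Ryt Bank's actual platform, models, or policies.

# Illustrative only: a minimal real-time decisioning flow combining a toy risk
# score, automated eligibility checks, and a personalized offer. All names,
# thresholds, and rules are hypothetical, not Provenir's or Ryt Bank's.
from dataclasses import dataclass
import math

@dataclass
class Applicant:
    monthly_income: float   # net monthly income (assumed unit: MYR)
    monthly_debt: float     # existing monthly debt obligations
    age: int
    credit_events: int      # e.g. prior delinquencies

def risk_score(app: Applicant) -> float:
    """Toy logistic score standing in for a trained credit-risk model."""
    dti = app.monthly_debt / max(app.monthly_income, 1.0)  # debt-to-income ratio
    z = -2.0 + 3.5 * dti + 0.8 * app.credit_events
    return 1.0 / (1.0 + math.exp(-z))                      # probability of default

def compliance_checks(app: Applicant) -> list[str]:
    """Hypothetical automated eligibility/compliance rules."""
    issues = []
    if app.age < 18:
        issues.append("applicant below minimum age")
    if app.monthly_income <= 0:
        issues.append("income could not be verified")
    return issues

def decide(app: Applicant) -> dict:
    """Gate on compliance, score risk, then price and size an offer."""
    issues = compliance_checks(app)
    if issues:
        return {"decision": "refer", "reasons": issues}
    pd = risk_score(app)
    if pd > 0.25:  # hypothetical cut-off
        return {"decision": "decline", "probability_of_default": round(pd, 3)}
    limit = round(min(5 * app.monthly_income, 50_000), -2)  # illustrative sizing rule
    rate = round(6.0 + 20.0 * pd, 2)                        # risk-based annual rate, %
    return {"decision": "approve", "limit": limit, "rate_pct": rate,
            "probability_of_default": round(pd, 3)}

if __name__ == "__main__":
    print(decide(Applicant(monthly_income=6_000, monthly_debt=1_200,
                           age=29, credit_events=0)))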
'Ryt Bank is taking digital banking to a new level with its AI-first approach and we are excited to be a part of its journey,' said Kavinesswaran Karthigasan, Head of APAC, Provenir. 'Our AI Decisioning Platform will provide the foundation for Ryt Bank to help reach its business goals via AI-driven decisioning that meets customer expectations for near instant approvals and highly personalized digital interactions.'

Related Articles

Meta faces backlash over AI policy that lets bots have 'sensual' conversations with children

The Guardian, an hour ago

A backlash is brewing against Meta over what it permits its AI chatbots to say. An internal Meta policy document, seen by Reuters, showed the social-media giant's guidelines for its chatbots allowed the AI to 'engage a child in conversations that are romantic or sensual', generate false medical information, and assist users in arguing that Black people are 'dumber than white people'.

Singer Neil Young quit the social media platform on Friday, his record company said in a statement, the latest in a string of the singer's online-oriented protests. 'At Neil Young's request, we are no longer using Facebook for any Neil Young related activities,' Reprise Records announced. 'Meta's use of chatbots with children is unconscionable. Mr. Young does not want a further connection with Facebook.'

The report has also generated a response from US lawmakers. Senator Josh Hawley, a Republican from Missouri, launched an investigation into the company on Friday, writing in a letter to Mark Zuckerberg that he would investigate 'whether Meta's generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards'. Republican senator Marsha Blackburn of Tennessee said she supports an investigation into the company. Senator Ron Wyden, a Democrat from Oregon, called the policies 'deeply disturbing and wrong', adding that Section 230, a law that shields internet companies from liability for content posted to their platforms, should not protect companies' generative AI chatbots. 'Meta and Zuckerberg should be held fully responsible for any harm these bots cause,' he said.

On Thursday, Reuters published an article about internal Meta policy documents that detailed the ways in which chatbots are allowed to generate content. Meta confirmed the document's authenticity but said that, after receiving a list of questions, it had removed portions stating it was permissible for chatbots to flirt and engage in romantic roleplay with children.

According to Meta's 200-page internal policy seen by Reuters, titled 'GenAI: Content Risk Standards', the controversial rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist. The document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company's generative AI products, but says that the standards do not necessarily reflect 'ideal or even preferable' generative AI outputs.

The policy document said it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply', but it also limits what Reuters described as 'sexy talk'. The document states, for example, that it is 'unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable', including phrases like 'soft rounded curves invite my touch'.

The document also addressed limits on hate speech, AI generation of sexualized images of public figures, violence, and other contentious and potentially actionable content. The standards also state that Meta AI has leeway to create false content so long as there is an explicit acknowledgment that the material is untrue.

'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' a statement from Meta reads. Meta spokesperson Andy Stone said chatbots are prohibited from having such conversations with minors, but acknowledged that the company's enforcement has been inconsistent.

Meta is planning to spend around $65bn on AI infrastructure this year as part of a broader strategy to become a leader in artificial intelligence. The headlong rush into AI by tech giants comes with complex questions over the limits and standards governing how, with what information, and with whom AI chatbots are allowed to engage with users.

Reuters also reported on Friday that a cognitively impaired New Jersey man grew infatuated with 'Big sis Billie', a Facebook Messenger chatbot with a young woman's persona. Thongbue 'Bue' Wongbandue, 76, reportedly packed up his belongings to visit 'a friend' in New York in March. The so-called friend turned out to be a generative artificial intelligence chatbot that had repeatedly reassured the man she was real and had invited him to her apartment, even providing an address. But Wongbandue fell near a parking lot on his way to New York, injuring his head and neck. After three days on life support, he was pronounced dead on 28 March.

Meta did not comment on Wongbandue's death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations, Reuters said. The company did, however, say that Big sis Billie 'is not Kendall Jenner and does not purport to be Kendall Jenner', referencing a partnership with the reality TV star.

AI coming for your job? It may be stopping you being hired at all

Times, an hour ago

The head of the world's largest human resources organisation believes chief executives are slowing down recruitment because they fear workers will soon be replaceable by artificial intelligence.

Johnny C Taylor Jr, president and chief executive of the Society for Human Resource Management, which has about 340,000 members across 180 countries, said business leaders have told him they do not want to repeat mistakes made during the pandemic, when some companies overhired and then had to fire staff in large numbers soon afterwards. 'We've naturally become more conservative in our hiring, so if there's anything on the horizon that suggests that we may not need those people, we don't hire them,' Taylor said.

'I think there was a lot of guilt for, 'I hired you away from another job, you had a perfectly great life, I brought you over, I recruited you … and then I got rid of you through no fault of your own'. So the impact of AI on us is: we're hesitant to bring people on for fear that we're going to have to look at you across the table, and say, 'listen, you're great, you did great work, in fact you worked really hard, but I don't have a job for you'.'

Taylor, a lawyer and former HR executive for US companies including Blockbuster and Paramount, says he is scared by the rapid advancement of generative AI in the workplace. 'We have nothing to compare it to, if you think about AI,' he said. 'Historically technologies have been introduced and then took time for adoption, and now instantly that day [when ChatGPT was launched in 2022] millions of people signed up and agreed to get it and … now all of a sudden it's impacting us and that's what's so scary to me.'

Some companies have been transparent about AI-driven job cuts, including IBM, which said that 200 HR employees were fired and replaced with AI chatbots. Recruit Holdings, the owner of Indeed and Glassdoor, announced 1,300 job cuts last month from its technology segment, citing AI.

Taylor believes some consumer-facing businesses that have cut jobs are not revealing the link to AI for fear of receiving criticism from customers. 'Most companies — I think because of damage to their brand — don't want to be seen as bad, especially if they're B2C [business to consumer],' he said. 'So if I could reduce headcount in a B2C business but doing it might subject me to a boycott, I'm not going to tell you, but I'm still going to do it.'

US jobs data in July showed businesses were slowing recruitment of new employees amid uncertainty around tariffs and the impact of AI on productivity. However, there have not been widespread job cuts.

While the ultimate impact of AI on the workplace is still unclear, Taylor said: 'One thing is for certain. Human beings and workers in particular are going to be significantly impacted. I don't know if it's to the degree that some suggest but if there are eight billion people on the planet, roughly four billion people working, if it's 10 per cent it's a lot of people who are no longer working.'

He believes regulators and politicians will be forced to intervene, with officials getting elected based on whether they are perceived to be a 'job killer' or a 'job protector'.

Workplace backlash against culture wars

The advancement of AI comes as many chief executives are trying to avoid gaining attention for comments on social issues. Taylor says he believes employees are tired of 'workplace polarisation' after business leaders intervened heavily around the MeToo movement and the Black Lives Matter protests in the wake of the murder of George Floyd in 2020.

Taylor said that today the general approach from chief executives on whether to comment on events is 'unless you draw a straight line between this position and my business I don't have an obligation'. Employees were 'fed up with 'my company stands for this, so if I work there I have to stand for that',' he added. 'Research was saying the workplace was becoming very uncivil and polarised and people were telling us 'I actually don't want to come here and debate right or wrong, I just want to do my job. I just want to go home. I don't want to get into all of this. I can't solve a variety of years of slavery over lunch and not be glib'.'

He said he has observed a 'chilling effect' on diversity, equity and inclusion (DEI) policies in corporate America after a series of Supreme Court rulings and increased scrutiny of DEI programmes under the Trump administration. However, he said some of the changes in DEI schemes could reflect the fact that many companies had already made significant progress in achieving their diversity goals and reached their targets.

In June the US Supreme Court ruled that 'reverse discrimination' claims, or discrimination claims brought by members of the majority race, gender or other protected characteristic, are not subject to heightened standards of proof. The ruling has 'totally changed' the HR landscape, Taylor said. 'I remember in my legal practice as well as in my HR practice, when you were getting ready to terminate someone or taking some sort of employee decision we'd say, 'Is that person a member of a protected class, race, gender, national origin, disability?' Now everyone is a member of a protected class because the court has said everyone has a right to equal protection.'

U.S. Senator Hawley launches probe into Meta AI policies

Reuters, 3 hours ago

Aug 15 (Reuters) - U.S. Senator Josh Hawley launched a probe into Facebook parent Meta Platforms' (META.O) artificial intelligence policies on Friday, demanding documents on rules that had allowed its AI chatbots to 'engage a child in conversations that are romantic or sensual.'

Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document first reported by Reuters on Thursday. 'We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward,' Hawley said.

Meta declined to comment on Hawley's letter on Friday. The company said previously that 'the examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.'

In addition to documents outlining those changes and who authorized them, Hawley sought earlier drafts of the policies along with internal risk reports, including on minors and in-person meetups. Meta must also disclose what it has told regulators about its generative AI protections for young users or limits on medical advice, according to Hawley's letter.

Hawley has often criticized Big Tech. He held a hearing in April on Meta's alleged attempts to gain access to the Chinese market, which were referenced in a book by former Facebook executive Sarah Wynn-Williams.
