WNS Recognized as a Leader in Mortgage and Loan Services by NelsonHall

Mumbai, Maharashtra, India & London, United Kingdom & New York, United States – Business Wire India WNS (Holdings) Limited (NYSE: WNS), a digital-led business transformation and services company, today announced that it has been recognized as a 'Leader' in NelsonHall NEAT's 2025 Mortgage and Loan Services evaluation.
WNS' domain and industry expertise acquired through serving regional and local lenders across multiple markets, and its unique Mortgage-as-a-Service model of operations with embedded intelligent automation, AI, and Gen AI, have been recognized as key company strengths. The report highlights the company's offerings across origination, servicing, title, and escrow, which leverage WNS-proprietary IP for processing, underwriting, administration, and collections.
"We are thrilled to be recognized as a Leader by NelsonHall in Mortgage and Loan services. WNS' unique ability to 'co-create' solutions that combine deep domain expertise with AI-led technology platforms enables our clients to adapt to rapidly changing regulatory requirements, improve efficiency and insights, and deliver differentiated outcomes," said Keshav R. Murugesh, Group CEO, WNS.
"WNS' mortgage and loan services enable transformation with intelligent automation and BPS services to improve operational performance," said Andy Efstathiou, Program Director for Banking at NelsonHall. "Its title and collateral management services enable lenders to reduce operational risk while also reducing the cost of delivery."

WNS' banking and financial services group employs more than 14,000 people, including approximately 5,000 transformation experts, who deliver technology and operations services to over 100 financial institutions worldwide. The Mortgage and Loan practice at WNS was started in 2006 and has since grown to support more than 600 processes across financial industry clients.
About NelsonHall NelsonHall is the leading global analyst firm dedicated to helping organizations understand the 'art of the possible' in digital operations transformation. With analysts in the U.S., Europe, and India, NelsonHall provides buy-side organizations with detailed, critical information on markets and vendors (including NEAT assessments) that helps them make fast and highly informed sourcing decisions. And for vendors, NelsonHall provides deep knowledge of market dynamics and user requirements to help them hone their go-to-market strategies. NelsonHall's analysis is based on rigorous, primary research, and is widely respected for the quality and depth of its insight.
About WNS WNS (Holdings) Limited (NYSE: WNS) is a digital-led business transformation and services company. WNS combines deep domain expertise with talent, technology, and AI to co-create innovative solutions for over 700 clients across various industries. WNS delivers an entire spectrum of solutions including industry-specific offerings, customer experience services, finance and accounting, human resources, procurement, and research and analytics to re-imagine the digital future of businesses. As of June 30, 2025, WNS had 66,085 professionals across 65 delivery centers worldwide including facilities in Canada, China, Costa Rica, India, Malaysia, the Philippines, Poland, Romania, South Africa, Sri Lanka, Turkey, the United Kingdom, and the United States.
For more information, visit www.wns.com or follow us on Facebook, Twitter, LinkedIn, and Instagram.
Safe Harbor Provision This document includes information which may constitute forward-looking statements made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995, the accuracy of which are necessarily subject to risks, uncertainties, and assumptions as to future events. Factors that could cause actual results to differ materially from those expressed or implied are discussed in our most recent Form 10-K and other filings with the Securities and Exchange Commission. WNS undertakes no obligation to update or revise any forward-looking statements, whether as a result of new information, future events, or otherwise.
(Disclaimer: The above press release comes to you under an arrangement with Business Wire India and PTI takes no editorial responsibility for the same.)
First Published: August 12, 2025, 18:00 IST

Related Articles

Many Australians secretly use AI at work, a new report shows. Clearer rules could reduce ‘shadow AI'

Mint

Melbourne, Aug 16 (The Conversation) Australian workers are secretly using generative artificial intelligence (Gen AI) tools – without the knowledge or approval of their bosses, a new report shows. The 'Our Gen AI Transition: Implications for Work and Skills' report from the federal government's Jobs and Skills Australia points to several studies showing that between 21 per cent and 27 per cent of workers (particularly in white-collar industries) use AI behind their manager's back.

Why do some people still hide it? The report says people commonly said they:

- 'feel that using AI is cheating'
- have a 'fear of being seen as lazy'
- and a 'fear of being seen as less competent'.

What's most striking is that this rise in unapproved 'shadow use' of AI is happening even as the federal treasurer and Productivity Commission urge Australians to make the most of AI. The new report's results highlight gaps in how we govern AI use at work, leaving workers and employers in the dark about the right thing to do. As I've seen in my work – both as a legal researcher looking at AI governance and as a practising lawyer – there are some jobs where the rules for using AI at work change as soon as you cross a state border within Australia.

Risks and benefits of AI 'shadow use'

The 124-page Jobs and Skills Australia report covers many issues, including early and uneven adoption of AI, how AI could help in future work, and how it could affect job availability. Among its most interesting findings was that workers are using AI in secret – which is not always a bad thing. The report found those using AI in the shadows are sometimes hidden leaders, 'driving bottom-up innovation in some sectors'. However, it also comes with serious risks: "Worker-led 'shadow use' is an important part of adoption to date. A significant portion of employees are using Gen AI tools independently, often without employer oversight, indicating grassroots enthusiasm but also raising governance and risk concerns."
The report recommends harnessing this early adoption and experimentation, but warns: "In the absence of clear governance, shadow use may proliferate. This informal experimentation, while a source of innovation, can also fragment practices that are hard to scale or integrate later. It also increases risks around data security, accountability and compliance, and inconsistent outcomes."

Real-world risks from AI failures

The report calls for national stewardship of Australia's Gen AI transition through a coordinated national framework, centralised capability, and a whole-of-population boost in digital and AI skills. This mirrors my research showing Australia's AI legal framework has blind spots, and that our systems of knowledge, from law to legal reporting, need a fundamental rethink. Even in some professions where clearer rules have emerged, too often they have come only after serious failures. In Victoria, a child protection worker entered sensitive details into ChatGPT about a court case concerning sexual offences against a young child. The Victorian information commissioner has banned the state's child protection staff from using AI tools until November 2026. Lawyers have also been found to misuse AI, from the United States and the United Kingdom to Australia. Yet another example – involving misleading information created by AI for a Melbourne murder case – was reported just yesterday. But even for lawyers, the rules are patchy and differ from state to state. (The Federal Court is among those still developing its rules.) For example, a lawyer in New South Wales is now clearly not allowed to use AI to generate the content of an affidavit, including 'altering, embellishing, strengthening, diluting or rephrasing a deponent's evidence'. However, no other state or territory has adopted this position as clearly.

Clearer rules at work and as a nation

Right now, using AI at work lies in a governance grey zone.
Most organisations are running without clear policies, risk assessments or legal safeguards. Even if everyone's doing it, the first one caught will face the consequences. In my view, national uniform legislation for AI would be preferable. After all, the AI technology we're using is the same whether you're in New South Wales or the Northern Territory – and AI knows no physical borders. But that's not looking likely yet.

If employers don't want workers using AI in secret, what can they do? If there are obvious risks, start by giving workers clearer policies and training. One example is what the legal profession is doing now (in some states) to give clear, written guidance. While it's not perfect, it's a step in the right direction. But it's still arguably not good enough, especially because the rules aren't the same nationally. We need more proactive national AI governance – with clearer policies, training, ethical guidelines, a risk-based approach and compliance monitoring – to clarify the position for both workers and employers. Without a national AI governance policy, employers are being left to navigate a fragmented and inconsistent regulatory minefield, courting breaches at every turn. Meanwhile, the very workers who could be at the forefront of our AI transformation may be driven to use AI in secret, fearing they will be judged as lazy cheats. (The Conversation)

Agentic AI rises: 86% see higher risks, only 2% meet responsible AI gold standards

Time of India

Infosys Knowledge Institute (IKI), the research arm of Infosys (NSE, BSE, NYSE: INFY), a global leader in next-generation digital services and consulting, today unveiled critical insights into the state of responsible AI (RAI) implementation across enterprises, particularly with the advent of agentic AI. The report, Responsible Enterprise AI in the Agentic Era, surveyed over 1,500 business executives and interviewed 40 senior decision-makers across Australia, France, Germany, the UK, the US, and New Zealand. The findings show that while 78% of companies see RAI as a business growth driver, only 2% have adequate RAI controls in place to safeguard against reputational risk and financial loss. The report analyzed the effects of risks from poorly implemented AI, such as privacy violations, ethical violations, bias or discrimination, regulatory non-compliance, and inaccurate or harmful predictions, among others. It found that 77% of organizations reported financial loss, and 53% of organizations have suffered reputational impact, from such AI-related incidents.

Key findings include:

AI risks are widespread and can be severe

- 95% of C-suite and director-level executives report AI-related incidents in the past two years.
- 39% characterize the damage experienced from such AI issues as 'severe' or 'extremely severe'.
- 86% of executives aware of agentic AI believe it will introduce new risks and compliance issues.
Responsible AI (RAI) capability is patchy and inefficient for most enterprises

- Only 2% of companies (termed 'RAI leaders') met the full standards set in the Infosys RAI capability benchmark, termed 'RAISE BAR', with 15% ('RAI followers') meeting three-quarters of the standards.
- The 'RAI leader' cohort experienced 39% lower financial losses and 18% lower severity from AI incidents.
- Leaders do several things better to achieve these results, including developing improved AI explainability, proactively evaluating and mitigating against bias, rigorously testing and validating AI initiatives, and having a clear incident response plan.

Executives view RAI as a growth driver

- 78% of senior leaders see RAI as aiding their revenue growth, and 83% say that future AI regulations would boost, rather than inhibit, future AI deployments.
- On average, companies believe they are underinvesting in RAI by 30%.

With the scale of enterprise AI adoption far outpacing readiness, companies must urgently shift from treating RAI as a reactive compliance obligation to embracing it proactively as a strategic advantage. To help organizations build scalable, trusted AI systems that fuel growth while mitigating risk, Infosys recommends the following actions:

- Learn from the leaders: Study the practices of high-maturity RAI organizations who have already faced diverse incident types and developed robust defenses.
- Balance product agility with platform governance: Combine decentralized product innovation with centralized RAI guardrails.
- Embed RAI guardrails into secure AI platforms: Use platform-based environments that enable AI agents to operate within preapproved data and tool boundaries.
- Establish a proactive RAI office: Create a centralized function to monitor risk, set policy, and scale governance with tools like Infosys' AI3S (Scan, Shield, Steer).
Balakrishna D.R., EVP – Global Services Head, AI and Industry Verticals, Infosys, said, "Drawing from our extensive experience working with clients on their AI journeys, we have seen firsthand how delivering more value from enterprise AI use cases would require enterprises to first establish a responsible foundation built on trust, risk mitigation, data governance, and sustainability. This also means emphasizing ethical, unbiased, safe, and transparent model development. To realize the promise of this technology in the agentic AI future, leaders should strategically focus on platform and product-centric enablement, and proactive vigilance of their data estate. Companies should not discount the important role a centralized RAI office plays as enterprise AI scales, and new regulations come into force."

Jeff Kavanaugh, Head of Infosys Knowledge Institute, Infosys, said, "Today, enterprises are navigating a complex landscape where AI's promise of growth is accompanied by significant operational and ethical risks. Our research clearly shows that while many are recognizing the importance of Responsible AI, there's a substantial gap in practical implementation. Companies that prioritize robust, embedded RAI safeguards will not only mitigate risks and potentially reduce financial losses but also unlock new revenue streams and thrive as we transition into the transformative agentic AI era."

This is like Mark Zuckerberg telling GenAI employees they had failed and ...: Tension at Meta over Zuckerberg's 'big money' offers as they upset existing AI researchers

Time of India

It appears that Meta's aggressive AI talent hiring spree is not only upsetting its rivals but also creating issues within the company. As reported by Business Insider, Meta CEO Mark Zuckerberg's drive to dominate the field of artificial intelligence is creating internal unrest. The huge compensation packages offered by Mark Zuckerberg to hire AI talent from rival companies have sparked resentment among existing employees. This latest controversy follows the recent failed attempt by the Meta CEO to hire Mira Murati, former OpenAI CTO and founder of Thinking Machines Lab, with a reported $1 billion offer. After Murati declined the offer, Meta launched a 'full-scale raid' on her startup, offering compensation packages ranging from $200 million to $500 million to other researchers there.

'This is like Zuckerberg telling GenAI employees they had failed'

As reported by Business Insider, some insiders told the publication that the huge compensation packages offered by the Meta CEO to hire AI talent have created a morale crisis within the existing AI teams at Meta. One employee described the situation as feeling like 'Zuckerberg told GenAI employees they had failed', implying that internal talent was being sidelined in favor of high-profile recruits. Meanwhile, a senior executive at a rival AI firm reportedly said that Meta's present AI staff 'largely didn't meet their hiring bar,' adding, 'Meta is the Washington Commanders of tech companies. They massively overpay for okay-ish AI scientists and then civilians think those are the best AI scientists in the world because they are paid so much'.

Meta's AI gamble and Superintelligence mission

Meta CEO Mark Zuckerberg has already committed more than $10 billion annually to AGI development.
As part of this, the company will focus on building custom chips, data centres and a fleet of 600,000 GPUs. However, despite all these efforts, Meta's AI models, including Llama 4, still lag behind rivals such as OpenAI's GPT-5 and Anthropic's Claude 3.5. On the other hand, the company has successfully managed to hire some top talent from rival firms, including Shengjia Zhao, co-creator of ChatGPT, and Alexandr Wang of Scale AI. Zuckerberg has also defended his decision to hire top AI talent at huge compensation packages. The Meta CEO argued that the massive spending on top talent is a small fraction of the company's overall investment in AI infrastructure. He has also reportedly said that a small, elite team is the most effective way to build a breakthrough AI system, suggesting that "you actually kind of want the smallest group of people who can fit the whole thing in their head." However, this strategy has created a divide within the company: as the new hires are brought in for a "Manhattan Project"-style effort, many long-standing employees in other AI departments feel sidelined, with their projects losing momentum and their roles becoming uncertain.
