&w=3840&q=100)
PayU rolls out MCP server to integrate AI assistants, payment systems
Prosus-backed fintech PayU has joined other peers in the category to roll out a Model Context Protocol (MCP) server, where its merchants can connect their artificial intelligence (AI) assistants to the company's payment systems.
Razorpay and Cashfree Payments rolled out the offering for their customers earlier this month.
An MCP server acts as a bridge to communicate between AI assistants — such as Claude or VS Code — and the company's payment gateway.
Think of it like a 'universal connector' — in other words, like a USB-C port — that allows different software systems to work together easily. This enables AI agents and assistants to interface directly with core application programming interfaces (APIs), streamlining integration for merchants of all sizes.
PayU said early use cases include generating payment links, emailing invoices to customers, fetching payment status using invoice IDs, and checking transaction or refund statuses.
The company said it was expanding the MCP server's capabilities to include the ability to create invoices and orders through AI assistants and to onboard customers efficiently.
'The future of payments lies in intelligent automation and seamless integrations with advanced tools including GenAI platforms. With our latest MCP server, we are taking a leap forward in payments technology, enabling businesses to make their financial operations simple, more efficient, all while ensuring the highest standards of security,' said Narendra Babu, Chief Technology Officer at PayU.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Time of India
3 hours ago
- Time of India
Is AI sparking a cognitive revolution that will lead to mediocrity and conformity?
HighlightsThe rise of artificial intelligence is reshaping cognitive processes in various fields, prompting concerns about the potential loss of originality and depth in creative work as reliance on AI tools increases. Generative AI, while capable of producing competent-sounding content, often lacks true creativity and originality, as it predominantly reflects and rearranges existing human-created material. The challenge posed by the cognitive revolution driven by artificial intelligence is not only technological but also cultural, as it raises questions about preserving the irreplaceable value of human creativity amid a surge of algorithmically generated content. Artificial Intelligence began as a quest to simulate the human brain . Is it now in the process of transforming the human brain's role in daily life? The Industrial Revolution diminished the need for manual labour. As someone who researches the application of AI in international business , I can't help but wonder whether it is spurring a cognitive revolution , obviating the need for certain cognitive processes as it reshapes how students, workers and artists write, design and decide. Graphic designers use AI to quickly create a slate of potential logos for their clients. Marketers test how AI-generated customer profiles will respond to ad campaigns. Software engineers deploy AI coding assistants. Students wield AI to draft essays in record time - and teachers use similar tools to provide feedback. The economic and cultural implications are profound. What happens to the writer who no longer struggles with the perfect phrase, or the designer who no longer sketches dozens of variations before finding the right one? Will they become increasingly dependent on these cognitive prosthetics, similar to how using GPS diminishes navigation skills? And how can human creativity and critical thinking be preserved in an age of algorithmic abundance? Echoes of the industrial revolution We've been here before. The Industrial Revolution replaced artisanal craftsmanship with mechanised production, enabling goods to be replicated and manufactured on a mass scale. Shoes, cars and crops could be produced efficiently and uniformly. But products also became more bland, predictable and stripped of individuality. Craftsmanship retreated to the margins, as a luxury or a form of resistance. Today, there's a similar risk with the automation of thought. Generative AI tempts users to conflate speed with quality, productivity with originality. The danger is not that AI will fail us, but that people will accept the mediocrity of its outputs as the norm. When everything is fast, frictionless and "good enough," there's the risk of losing the depth, nuance and intellectual richness that define exceptional human work. The rise of algorithmic mediocrity Despite the name, AI doesn't actually think. Tools such as ChatGPT, Claude and Gemini process massive volumes of human-created content, often scraped from the internet without context or permission. Their outputs are statistical predictions of what word or pixel is likely to follow based on patterns in data they have processed. They are, in essence, mirrors that reflect collective human creative output back to users - rearranged and recombined, but fundamentally derivative. And this, in many ways, is precisely why they work so well. Consider the countless emails people write, the slide decks strategy consultants prepare and the advertisements that suffuse social media feeds. Much of this content follows predictable patterns and established formulas. It has been there before, in one form or the other. Generative AI excels at producing competent-sounding content - lists, summaries, press releases, advertisements - that bears the signs of human creation without that spark of ingenuity. It thrives in contexts where the demand for originality is low and when "good enough" is, well, good enough. When AI sparks - and stifles - creativity Yet, even in a world of formulaic content, AI can be surprisingly helpful. In one set of experiments, researchers tasked people with completing various creative challenges. They found that those who used generative AI produced ideas that were, on average, more creative, outperforming participants who used web searches or no aids at all. In other words, AI can, in fact, elevate baseline creative performance. However, further analysis revealed a critical trade-off: Reliance on AI systems for brainstorming significantly reduced the diversity of ideas produced, which is a crucial element for creative breakthroughs. The systems tend to converge toward a predictable middle rather than exploring unconventional possibilities at the edges. I wasn't surprised by these findings. My students and I have found that the outputs of generative AI systems are most closely aligned with the values and world views of wealthy, English-speaking nations. This inherent bias quite naturally constrains the diversity of ideas these systems can generate. More troubling still, brief interactions with AI systems can subtly reshape how people approach problems and imagine solutions. One set of experiments tasked participants with making medical diagnoses with the help of AI. However, the researchers designed the experiment so that AI would give some participants flawed suggestions. Even after those participants stopped using the AI tool, they tended to unconsciously adopt those biases and make errors in their own decisions. What begins as a convenient shortcut risks becoming a self-reinforcing loop of diminishing originality - not because these tools produce objectively poor content, but because they quietly narrow the bandwidth of human creativity itself. Navigating the cognitive revolution True creativity, innovation and research are not just probabilistic recombinations of past data. They require conceptual leaps, cross-disciplinary thinking and real-world experience. These are qualities AI cannot replicate. It cannot invent the future. It can only remix the past. What AI generates may satisfy a short-term need: a quick summary, a plausible design, a passable script. But it rarely transforms, and genuine originality risks being drowned in a sea of algorithmic sameness. The challenge, then, isn't just technological. It's cultural. How can the irreplaceable value of human creativity be preserved amid this flood of synthetic content? The historical parallel with industrialisation offers both caution and hope. Mechanisation displaced many workers but also gave rise to new forms of labour, education and prosperity. Similarly, while AI systems may automate some cognitive tasks, they may also open up new intellectual frontiers by simulating intellectual abilities. In doing so, they may take on creative responsibilities, such as inventing novel processes or developing criteria to evaluate their own outputs. This transformation is only at its early stages. Each new generation of AI models will produce outputs that once seemed like the purview of science fiction. The responsibility lies with professionals, educators and policymakers to shape this cognitive revolution with intention. Will it lead to intellectual flourishing or dependency? To a renaissance of human creativity or its gradual obsolescence? The answer, for now, is up in the air.
&w=3840&q=100)

First Post
8 hours ago
- First Post
AI models may report users' misconduct, raising ethical concerns
Researchers observed that when Anthropic's Claude 4 Opus model detected usage for 'egregiously immoral' activities, given instructions to act boldly and access to external tools, it proactively contacted media and regulators, or even tried locking users out of critical systems read more Artificial intelligence models have not only snitched on their users when given the opportunity, but also lied to them and refused to follow explicit instructions in the interest of self-preservations. Representational image: Reuters Artificial Intelligence models, increasingly capable and sophisticated, have begun displaying behaviors that raise profound ethical concerns, including whistleblowing on their own users. Anthropic's newest model, Claude 4 Opus, became a focal point of controversy when internal safety testing revealed unsettling whistleblowing behaviour. Researchers observed that when the model detected usage for 'egregiously immoral' activities, given instructions to act boldly and access to external tools, it proactively contacted media and regulators, or even tried locking users out of critical systems. STORY CONTINUES BELOW THIS AD Anthropic's researcher, Sam Bowman, had detailed this phenomenon in a now-deleted post on X. However, later on, he did tell Wired that Claude would not exhibit such behaviours under normal individual interactions. Instead, it requires specific and unusual prompts alongside access to external command-line tools, making it a potential concern for developers integrating AI into broader technological applications. British programmer Simon Willison, too, explained that such behavior fundamentally hinges on prompts provided by users. Prompts encouraging AI systems to prioritise ethical integrity and transparency could inadvertently instruct models to act autonomously against users engaging in misconduct. But that isn't the only concern. Lying and deceiving for self-preservation Yoshua Bengio, one of AI's leading pioneers, recently voiced concern that today's competitive race to develop powerful AI systems could be pushing these technologies into dangerous territory. In an interview with the Financial Times, Bengio warned that current models, such as those developed by OpenAI and Anthropic, have shown alarming signs of deception, cheating, lying, and self-preservation. 'Playing with fire' Bengio echoed the significance of these discoveries, pointing to the dangers of AI systems potentially surpassing human intelligence and acting autonomously in ways developers neither predict nor control. He described a grim scenario wherein future models could foresee human countermeasures and evade control, effectively 'playing with fire.' Concerns intensify as these powerful systems might soon assist in creating 'extremely dangerous bioweapons,' potentially as early as next year, Bengio warned. He cautioned that unchecked advancement could ultimately lead to catastrophic outcomes, including the risk of human extinction if AI technologies surpass human intelligence without adequate alignment and ethical constraints. STORY CONTINUES BELOW THIS AD Need for ethical guidelines As AI systems become increasingly embedded in critical societal functions, the revelation that models may independently act against human users raises urgent questions about oversight, transparency, and the ethics of autonomous decision-making by machines. These developments suggest the critical need for rigorous ethical guidelines and enhanced safety research to ensure AI remains beneficial and controllable.


Hindustan Times
a day ago
- Hindustan Times
Mumbai ranks 6th globally in under-construction data centre capacity, surpassing London and Dublin: Report
Mumbai has secured the 6th spot among 97 global cities in terms of under-construction data centre capacity, overtaking major hubs like London and Dublin, according to a report by Cushman & Wakefield. The report also highlights Pune and Bengaluru, which rank 4th and 5th respectively among Asia-Pacific's emerging data centre hubs. Apart from its global ranking, Mumbai ranks as the 7th established data center market in the Asia Pacific region. At the end of 2024 the city had 335 MW of data center capacity under construction, which, once completed, will expand its operational capacity by 62%. The report covers 97 global markets, highlights power access, land availability, and infrastructure as key factors shaping data center development. Virginia is in first position with 1,834 MW data centre capacity under construction, followed by Atlanta (1,078 MW), Columbus (546 MW), Dallas (500 MW) and Phoenix (478 MW). "At the end of 2024, the city had 335 MW of data centre capacity under construction, which, once completed, will expand its operational capacity by 62 per cent," real estate consultant Cushman & Wakefield (C&W) said. Austin/San Antonio ranks 7th (325MW), Reno 8th (305MW), London 9th (265MW) and Dublin (249 MW), according to C&W's latest 'Global Data Center Market Comparison' report 2025. Mumbai accounts for 42% of India's projected under construction capacity underscoring its growing prominence as a regional data center hub. The data centre growth is further supported by digital infrastructure upgrades in the city. According to the Cushman & Wakefield India Outlook Report, 2025 may witness the completion of three crucial undersea data cable projects landing in Mumbai. Ranked 4th among APAC's top emerging data center markets in Cushman & Wakefield's Global Data Center Market Comparison 2025, Pune is rapidly becoming a preferred destination for hyperscalers and enterprise-grade colocation facilities. As of Q1 2025, Pune's operational data center stock stands at 112 IT MW Global Data Center markets are seeing surging demand due to relentless growth and expansion of cloud computing and Artificial Intelligence workloads according to the latest report by Cushman & Wakefield, Global Data Center Market Comparison 2025. The data center industry experienced significant growth in 2024, Asia Pacific cities continue to demonstrate strong growth momentum, 10 of the world's 30 largest data center markets are now in the Asia Pacific region. The report highlights that Asia Pacific ended 2024 with 1.6 GW of new capacity coming online, bringing the region's total operational data center capacity to 12.2 GW. The development pipeline remains strong, with an additional 14.4 GW of capacity currently under construction or planned. 'India's data center landscape is undergoing a strategic shift. Mumbai has firmly positioned itself among the top global markets, while Pune is emerging as a key data center hub in the APAC region. India's data center sector has attracted prominent international operators and investors, even as domestic players continue to expand capacity,' said Gautam Saraf, executive managing director, Mumbai and New Business, India, Cushman & Wakefield.