
Decoding Gen AI, Cloud, and VDI: A Candid Conversation with Rajeev Ranjan Kumar of Wipro
Here's a glimpse into an insightful chat with him.
Why are companies increasingly embracing Gen AI-powered solutions?
Rajeev explained that the industry is witnessing a major shift towards the convergence of data forms: text, video, image, and audio. Gen AI leverages all of these together, making it vastly more efficient and intelligent. He pointed out how tools like ChatGPT offer a stark contrast to traditional search engines: instead of returning a list of options, Gen AI delivers precise, context-aware answers, improving productivity and saving time. According to him, this convergence is what's driving mass adoption across industries.
How are Cloud and VDI connected to Gen AI in today's tech ecosystem?
To break it down, Rajeev categorized the AI ecosystem into three key layers: infrastructure, cloud, and access. First, the infrastructure layer is crucial because AI models need high computing power. But that kind of investment isn't feasible for smaller organizations, which is where cloud platforms come in. With the cloud's pay-as-you-go model, anyone, even a student, can access powerful AI tools without buying expensive hardware. Then comes VDI (Virtual Desktop Infrastructure), which allows people to securely access their workspace from any device, anywhere. Together, these elements form a robust ecosystem that makes AI scalable and democratized.
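To make the pay-as-you-go idea concrete, here is a minimal sketch (in Python) of how a student or a small team might consume a cloud-hosted model over a metered API instead of buying GPU hardware. The endpoint URL, request fields, and the ask_model helper are illustrative placeholders rather than any specific provider's API.

```python
import os
import requests

# Hypothetical hosted-model endpoint and key. Any pay-as-you-go provider
# exposes something similar; the URL and field names here are placeholders,
# not a specific vendor's API.
API_URL = "https://api.example-ai-provider.com/v1/generate"
API_KEY = os.environ["AI_API_KEY"]

def ask_model(prompt: str) -> str:
    """Send a prompt to a metered, cloud-hosted model and return its reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 256},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]

if __name__ == "__main__":
    # You pay only for the usage this call consumes; there is no GPU to buy or maintain.
    print(ask_model("Summarize what Virtual Desktop Infrastructure is."))
```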
What are the biggest challenges in adopting Gen AI?
Rajeev outlined three major concerns: data privacy, ethical use, and hallucinations. In sectors like healthcare and automotive, data sensitivity is extremely high. Any leak can lead to serious consequences, including loss of trust or exposure of business strategy. Ethical use is another concern, especially with the rise of deepfakes and voice cloning. Rajeev stressed the importance of governance frameworks and audits to ensure responsible use. Lastly, he pointed to the problem of hallucinations: cases where AI outputs something that seems accurate but is factually incorrect. In high-risk industries, even one such error can be catastrophic.
Are AI-powered vehicles, like those from Tesla, really safe?
On the topic of autonomous driving, Rajeev admitted that the adoption rate remains low, primarily due to data reliability concerns. These vehicles rely entirely on AI models for decision-making, and if even a single command is wrong, the outcome can be dangerous. Hallucinations, where answers look accurate but aren't, are especially risky here. This is why full automation is still being rolled out with caution.
Will AI eventually replace human jobs?
Rajeev acknowledged that AI will partially replace roles, particularly in areas like technical documentation and basic coding. Generative AI can produce text and code with impressive accuracy, reducing the number of people required for such tasks. However, he emphasized the continued importance of the human-in-the-loop approach. AI still lacks instinct and the spontaneity of human judgement, and it will take years to truly mature. As he put it during the conversation, "You can think of it as the 'AI-fication of humans' already happening, but the 'humanification of AI' is still far off."
Does frequent use of AI tools hamper human creativity?
'No, it actually enhances creativity,' Rajeev said firmly. He shared a story from a gastroenterology summit, where a doctor failed to diagnose cancer in a patient early on. Years later, when the patient's previous records were uploaded into an AI system, it accurately predicted the cancer risk that was overlooked. The experience reinforced Rajeev's belief that AI complements human effort and helps professionals work smarter.
To quote Rajeev, "The patient had second-stage cancer. The doctor felt guilty because the person had come to him three times and he was not able to diagnose that he was developing it. He then asked this person to give him all the current applications he has which are based on artificial intelligence, and he uploaded the entire data. The first year's data was showing that there is a 50% probability that cancer happens after two to three years. The second year's data was telling that there is a 70% probability that he will be diagnosed with cancer in the next one year. So he was surprised: okay, he has so much experience, and this technology came just now and it is replacing me. But he was also thinking, I could have used this technology two years back and I could have saved his life much earlier."
In creative fields like media, he said AI can fast-track execution, allowing professionals more time for vision and innovation.
What about the environmental cost of training large AI models?
Rajeev acknowledged the concern about AI's water and energy consumption, but said the industry is responding. Companies like NVIDIA are creating more energy-efficient hardware, and Small Language Models (SLMs) are emerging as lightweight alternatives to Large Language Models (LLMs), consuming less power with similar performance for specific use cases. He added that regions such as the EU are already implementing Responsible AI frameworks, which include environmental considerations.
How do SLMs compare to LLMs, and where should each be used?
SLMs, according to Rajeev, are ideal for task-specific applications such as call centers or IT helpdesks, where the questions are predictable and datasets are limited. LLMs, on the other hand, are better suited for complex, multimodal tasks, like processing audio, text, and images together in healthcare diagnostics or creative media. At Wipro, the choice between an SLM and an LLM is based entirely on client requirements and the scope of the project.
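As a rough illustration of that decision logic, rather than a description of Wipro's actual process, the sketch below routes a request either to a small task-specific model or to a larger multimodal one; the Request class, model names, and route_request helper are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    has_audio: bool = False
    has_image: bool = False
    domain: str = "general"  # e.g. "helpdesk", "healthcare", "media"

# Hypothetical model identifiers; in practice these would map to whatever
# SLM and LLM deployments a given project actually has available.
SLM_MODEL = "small-helpdesk-model"
LLM_MODEL = "large-multimodal-model"

def route_request(req: Request) -> str:
    """Pick the SLM for narrow, predictable workloads and the LLM otherwise."""
    multimodal = req.has_audio or req.has_image
    narrow_domain = req.domain in {"helpdesk", "call-center", "it-support"}
    if narrow_domain and not multimodal:
        return SLM_MODEL  # cheaper and lower-power, good enough for routine queries
    return LLM_MODEL      # complex or multimodal work goes to the larger model

print(route_request(Request("Reset my VPN password", domain="helpdesk")))               # -> SLM
print(route_request(Request("Review this scan", has_image=True, domain="healthcare")))  # -> LLM
```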
Can AI misuse be prevented, especially by people with malicious intent?
Rajeev explained that modern AI systems have three protective layers: the user interface, the data processing layer, and the guardrail layer.
The guardrail layer monitors queries to detect and block inappropriate or unethical ones. Usage patterns are also constantly tracked, and feedback from these interactions is used to strengthen the model over time, improving not only security but also response quality.
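As a simplified sketch of what such a guardrail layer might look like, and not Wipro's or any vendor's actual implementation, a pre-processing filter can sit between the user interface and the model, block policy-violating queries, and log every interaction so the feedback can be fed back into the system; the keyword list and helper functions below are placeholders.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

# Placeholder policy: real guardrails use trained classifiers and far richer
# policies than a simple keyword list.
BLOCKED_TOPICS = {"deepfake someone", "clone this voice", "leak patient records"}

def log_usage(query: str, allowed: bool) -> None:
    """Track usage patterns so interactions can later improve security and quality."""
    logging.info("%s | allowed=%s | %s",
                 datetime.now(timezone.utc).isoformat(), allowed, query)

def call_model(query: str) -> str:
    """Stand-in for the data-processing / model layer."""
    return f"[model response to: {query}]"

def guardrail(query: str) -> str:
    """Block queries that match the policy; pass the rest on to the model."""
    lowered = query.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        log_usage(query, allowed=False)
        return "This request violates the usage policy and cannot be processed."
    log_usage(query, allowed=True)
    return call_model(query)

print(guardrail("Summarize today's support tickets"))
print(guardrail("Help me clone this voice for a prank call"))
```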
Tell us a little about your personal journey into AI. Was it always part of your plan?
Rajeev shared that his journey into AI began by chance. Two years ago, AI was still emerging and most people were chasing more established tech roles. But he saw an opening and decided to take a leap. A turning point came during his MBA at IIM Kozhikode, when Professor Raju told him, 'The next decade belongs to data. If you control data, you control the world.' That advice inspired him to pivot, and it turned out to be a defining decision in his career.
Was there a specific moment that confirmed you made the right choice?
Yes. Rajeev recalled a friend who struggled to manually sift through 1,000 job applications. In just five days, Rajeev built a tool that could score resumes against job descriptions. To make it more robust, he implemented cosine similarity to detect AI-generated or overly similar resumes, helping to remove redundant applications. That moment made him realize how powerful and practical AI can be.
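As a rough sketch of the technique described here, with implementation details that are assumptions rather than Rajeev's actual tool, resumes and a job description can be turned into TF-IDF vectors and compared with cosine similarity; near-duplicate or template-generated resumes then surface as pairs with unusually high similarity scores.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data engineer with Python, SQL and cloud experience."
resumes = [
    "Python developer, built SQL pipelines on AWS cloud.",
    "Python developer, built SQL pipelines on AWS cloud platforms.",  # near-duplicate
    "Graphic designer experienced in branding and illustration.",
]

# Vectorize the job description together with the resumes so they share a vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + resumes)

# Score each resume against the job description (row 0 of the matrix).
jd_scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for i, score in enumerate(jd_scores):
    print(f"Resume {i + 1}: relevance {score:.2f}")

# Flag resume pairs that are suspiciously similar to each other.
pairwise = cosine_similarity(matrix[1:])
for i in range(len(resumes)):
    for j in range(i + 1, len(resumes)):
        if pairwise[i, j] > 0.8:
            print(f"Resumes {i + 1} and {j + 1} look nearly identical ({pairwise[i, j]:.2f})")
```

In a production screening pipeline the vectorizer and similarity threshold would need tuning against real data; this toy example only shows the shape of the approach.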
AI-generated resumes are becoming common. Could the best candidates be overlooked?
Rajeev said it's a real concern. Many candidates now tailor their resumes to pass AI filters using keywords and tools. While this helps visibility, it also leads to over-standardization, which might mask real talent. He advised applicants to be strategic: 'Use AI to enhance your resume, but remember that authentic skills and substance still matter most.'
Rajeev concluded that AI is a tool, not a threat. If used responsibly, it has the potential to enhance human capabilities, not replace them. The key is to stay ethical, curious, and collaborative. "AI is here to stay. The question is: how responsibly and creatively will we use it?"
Interview by: Gyanisha Mallick
Guest: Rajeev Ranjan Kumar, Senior Leader & AI Specialist, Wipro
Platform: The Hans India; TechTalk Podcast
Related Articles


Time of India | 4 hours ago
AI, But Verify: Navigating Future Of Learning
New Delhi: At a time when artificial intelligence is reshaping how we learn, teach and think, IIT Delhi took a leap into the future. Last year, IIT Delhi formed a dedicated committee to explore how generative AI tools could be ethically and effectively integrated into teaching, research and examinations. After months of collecting detailed feedback from both students and faculty, the committee released a comprehensive report covering short-term goals that revealed some eye-opening trends. A significant 80% of students reported using generative AI (GenAI) tools, with 81% of them engaging with these technologies several times a week. Around 10% of the respondents have even invested in paid subscriptions, citing the limitations and inaccuracy of free versions as a major concern. Based on these insights, the institute released a set of guidelines. A cornerstone of these new rules is the mandatory disclosure of any content generated or assisted by GenAI tools for transparency and academic integrity.

"We mandated that all academic programmes include exposure to artificial intelligence and machine learning. Most departments have already incorporated the elements, ensuring that all graduates from IIT Delhi will soon possess strong proficiency in AI. This committee was formed on April 9, 2024, to guide this transition and determine the best approach for integrating AI," said Rangan Banerjee, director of IIT Delhi.

In the long term, the committee was tasked with providing clear guidance on the use of GenAI tools while monitoring advancements in the field, identifying the necessary training and support infrastructure to ensure their effective and equitable use, and recommending a governance structure and AI policies that align with the existing policy framework.

The report states, "While generative AI tools offer potential benefits, such as personalised, real-time feedback and customized learning experiences, they also raise significant concerns. These tools can produce inaccurate or misleading responses, and excessive reliance on them may undermine essential learning outcomes, particularly the development of critical thinking skills. It is imperative for educational institutions to establish clear guidelines that educate and sensitise both students and faculty."

The genesis of this initiative reflects the growing recognition that GenAI tools are no longer futuristic concepts but active collaborators in academic life. They offer undeniable benefits: personalised, real-time feedback, efficient writing assistance, and novel ways to visualise and explore complex topics. Yet, their rapid adoption has also brought challenges and concerns that the committee sought to understand firsthand by engaging with the campus community.

A survey circulated among faculty and students revealed a nuanced picture of AI's current role at IIT Delhi. According to the feedback of 427 student respondents, AI facilitated their learning by simplifying concepts, creating mind maps, and simulating scenarios to deepen understanding. However, they also identified significant limitations: inaccuracies in AI responses, difficulties with complex or context-specific questions, and poor performance in mathematical reasoning and code debugging.
Some students even voiced concerns about data privacy and unequal access to paid AI subscriptions, highlighting the socio-economic dimensions of AI equity. When it came to ethics, the students were divided: just over half saw no professional ethical issues with AI use, while others worried about risks to intellectual integrity and the potential erosion of critical thinking skills. This group cautioned that overreliance on AI might stifle personal agency and authentic engagement with learning, thereby impeding intellectual growth.

Faculty responses, drawn from 88 participants, echoed many of these insights. Around 77% of faculty members use GenAI tools, with half doing so regularly, reflecting a broad acceptance of AI's utility in academic workflows. Faculty members harness these tools for writing assistance, summarising research literature, preparing instructional materials, and streamlining administrative tasks, thereby enhancing productivity. However, they also raised serious concerns about intellectual integrity, the impact on students' critical thinking development, and the challenges of fair grading when AI-generated content is involved. The lack of reliable detection tools for AI-authored work further complicates assessment fairness.

Recognising these complexities, the committee's recommendations offer a balanced framework designed to harness AI's benefits while safeguarding academic values. At the individual level, students, researchers, and faculty are urged to disclose AI use transparently, whether in text, images, or data visualisations, to uphold academic integrity. They must take full responsibility for verifying and fact-checking AI-generated content, ensuring it is original and free from plagiarism. Users are also advised to protect privacy by avoiding the input of sensitive or personally identifiable information into AI tools.

For the institution, the committee recommends regular workshops developed in collaboration with academic units to educate and sensitise both students and faculty on ethical, responsible, and effective AI use. Faculty development programmes should include dedicated modules on AI's applications and implications. Additionally, the institute is encouraged to secure campus-wide licences for premium AI tools, ensuring equitable access and levelling the playing field for all departments and users.

Specific guidance for academic units stresses the need to embed AI education into core curricula, including professional ethics and domain-specific GenAI applications. Learning outcomes, both at the programme and course level, should be redesigned to reflect AI's transformative impact, emphasising skills like critical thinking, original analysis and knowledge application beyond AI's reach. A curated and regularly updated resource compendium on ethical and constructive AI use should be developed to support students and faculty alike.

Acknowledging that outright bans on AI use are impractical, the committee advises that plagiarism policies be revised to explicitly address these tools. These policies should encourage transparency and responsible use, penalise unethical behaviour, such as submitting AI-generated work without meaningful personal input, and reward honest engagement with AI technologies. Faculty members are also urged to review assessment outcomes where AI tools are involved, to refine assignments, identify academic dishonesty and raise awareness of AI's role in education.
The path forward involves not only adapting to technological change but doing so thoughtfully, ensuring that AI enhances the critical thinking, creativity and integrity at the heart of learning and discovery.


Time of India | 9 hours ago
Electric cars: Still needing a push?
People have been dreaming about electric cars for a long time. In fact, the first electric car, called the Electrobat, was already driving around in 1894! And in 1909, an electric car even beat a petrol (gasoline) car in a traffic race. So why aren't electric cars everywhere by now?

Right now, only about 25 out of every 100 cars sold in the world are electric. That may sound like a lot, but most of these cars are made and bought in just one country: China. That means in the rest of the world, most people still use petrol or diesel cars, called ICE cars (short for internal combustion engine).

Even Tesla, the most famous electric car company, is struggling. It's selling fewer cars in Europe, and it's making less money. One big reason Tesla made so much money before was by selling something called carbon credits, basically rewards from the government for making clean cars. But those rewards are starting to go away, especially in the US. And soon, people in the US will also lose a big $7,500 discount they got for buying electric cars.

China also spent a huge amount of money, around $231 billion, to help its electric car industry grow. Now, Chinese carmakers are building more cars than people want to buy, which is causing problems. This raises a big question: if electric cars are really the future, why do they still need so much help from governments?

When Henry Ford built his Model T car in 1908, it cost $850. Within a few years, he made it so efficiently that it cost only $300, and millions of people bought it. He didn't need any government help to do that. Electric cars are amazing for the planet. But maybe, just maybe, they need to learn how to survive without help and roll forward on their own four wheels.

Disclaimer: Views expressed above are the author's own.


Indian Express | 10 hours ago
TCS layoffs signal rising strain on Indian IT as AI disruption, US economic woes trigger uncertainty
The recent decision by tech major Tata Consultancy Services (TCS) to lay off 2 per cent of its workforce highlights the growing pressures on India's IT sector, driven by the fast-paced adoption of new technologies like artificial intelligence (AI) and ongoing economic uncertainty in the US, a key market for Indian tech companies. In the first quarter of FY26, a considerable number of IT companies posted weak top-line performance and a squeeze in margins due to the tariff-related uncertainties.

Last week, IT bellwether TCS said that it will be laying off 12,000 employees, which is 2 per cent of its global workforce. The move is going to impact employees from the mid and senior levels. Framed as a push toward building a 'future-ready generation' through 'skilling and redeployment,' TCS's move is, in effect, a sweeping cost-cutting exercise.

Analysts warn that as the use of AI continues to grow across the IT industry, a significant number of jobs could be at risk. With AI increasingly taking over tasks that were once handled manually, such as coding, data analysis and customer support, companies are likely to reassess workforce needs, potentially leading to widespread layoffs. Experts also point out that roles involving repetitive or process-driven functions are especially vulnerable, unless employees upskill or transition into areas where human oversight and creativity remain essential.

'Aggregate headcount saw a modest quarter-on-quarter increase in Q1 FY26, but several IT companies announced workforce reductions,' BNP Paribas Securities India said in a report. 'TCS laid off nearly 2 per cent of its employees, while HCL Technologies is adjusting its talent deployment outside India, particularly scaling down in the automotive engineering and R&D segment. Wipro incurred a restructuring charge of Rs 247 crore linked to severance payouts in Europe.' Understandably, the employee retrenchment has started the debate about GenAI beginning to impact the workforce, it said.

Layoffs in the Indian IT sector are becoming increasingly common, mainly due to skill mismatches and deployment challenges. 'With growing pressure to reduce costs and align talent with AI-driven models, tech majors are slowing fresher hiring and trimming staff, signalling a structural shift in workforce strategy,' said Arun Kailasan, research analyst – Fundamental Research, Geojit Investments Ltd. Rather than going for lateral hiring, IT firms are focusing on upskilling their existing workforce in emerging areas like AI and generative AI to take care of project execution going ahead.

Besides AI, other important factors for layoffs in the IT sector are the macroeconomic headwinds in the US due to tariff-related uncertainty and the delay in rate cuts by the US Federal Reserve, resulting in slower execution of projects by clients. These factors will affect the margins of domestic IT companies.

'During our April 2025 earnings call, we had called out delays in decision-making and project starts with respect to discretionary investments. This trend has continued and intensified to some extent in this quarter,' TCS chief executive officer and managing director, K Krithivasan, said during the Q1 FY26 earnings call. 'Global businesses were disrupted due to conflicts, economic uncertainties and supply chain issues. We saw cost pressures in our customers causing previously unseen project pauses, deferrals and decision delays that resulted in less than expected revenue conversion,' he said.
In its recent policy announced on July 30, the Federal Open Market Committee (FOMC) kept the interest rate unchanged at 4.25-4.5 per cent. 'At the beginning of the year, there was an expectation that the US Fed would reduce rates by 50-100 basis points. This cut has been consistently getting extended. When interest rates are high, spending in the US gets impacted, including on IT. This has a bearing on the contracts awarded to Indian IT firms,' said an analyst.

Analysts say that due to weak demand, IT companies are likely to slow down their hiring in the near future. 'With muted demand and tighter budgets, companies are focusing on optimising existing talent rather than expanding headcount. Hiring remains subdued, while utilisation rates are rising and attrition has stabilised. The shift is towards value-based deployment and reskilling for AI-driven roles, setting the stage for long-term workforce transformation,' Kailasan of Geojit Investments said.

IT analysts said that domestic IT companies are likely to see soft earnings for the rest of 2025 amid volatile and uncertain geopolitical conditions. 'The main challenge remains the slowdown in decision-making among major US clients,' said Ashish Gupta, chief investment officer at Axis Mutual Fund. 'There's a lot of uncertainty around the outlook: questions about retail spending, how consumers will respond to potentially higher interest rates, and whether the US economy can maintain its momentum. The broader economic picture remains unclear.'

A report by Nuvama Research said that the demand environment is expected to remain challenging for the next one to two quarters for the IT sector due to macro (tariff-related) uncertainty. 'In the near term, we expect lack of clarity on macro to continue until most of the trade deals are announced. In general, a large part of the impact of delays was felt in Q1 FY26. The second quarter of FY26 can have some residual impact of the delays. If there are no further delays, Q2 FY26 will be at least better than the first quarter,' said Sumit Pokharna, vice president (Fundamental Research), Kotak Securities. IT sector experts anticipate recovery in 2026 as clarity on the US tariffs emerges and potential rate cuts by the US Federal Reserve help revive demand.