
From searching to answers: Qlik CTO explains how AI is reshaping data interaction
Data is no longer just columns and rows; it has moved from being a static collection of facts and figures to something far more dynamic. Today, almost every aspect of our lives is governed by data, and organisations increasingly rely on it to drive decision-making. On the sidelines of the recently held Qlik Connect 2025 in Orlando, indianexpress.com caught up with Kumar, Qlik's Chief Technology Officer, who shared his insights on how AI is shaping data integration and modern business strategy.
During the conversation, Kumar outlined three major transformations in data analytics over the years. He said it all began with a centralisation phase built around data warehousing. 'When we started building data warehouses like Teradata decades ago, it was the first transformational change. We focused on pulling data once, centralising it in one place, and making it easier for people to access. This gave us a backward view of data, which we call descriptive analytics.'
The next phase was predictive analytics. Kumar said this was when organisations began training machines, building machine learning algorithms on the same data, and the world moved from a historical view to a forward-looking one that could predict outcomes and support smarter decisions. 'Think about recommendation engines on Amazon or Netflix—that's machine learning in action.'
According to Kumar, the most recent transformation came with the generative AI wave. 'Suddenly having access to ChatGPT two years ago completely changed the landscape. What fundamentally changed was how humans interacted with data. Now it's not about searching for information; it's about getting answers—a fundamental switch,' he explained, adding that the evolution continues at an accelerating pace.
Kumar went on to say that the next wave is already here: agentic AI. With agentic AI, it is no longer about asking questions; Kumar feels users can simply express an intent, and agents will determine which processes to deploy and in what sequence. 'Going from warehousing to predictive took a long time, but the transitions from predictive to generative and from generative to agentic are happening much faster. The pace of change is compressing,' Kumar said.
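To make the idea concrete, here is a minimal Python sketch, purely illustrative, of the pattern Kumar describes: the user states an intent, and the agent, not the user, decides which steps to run and in what order. The planner rules, step names, and functions are hypothetical and do not represent Qlik's or any vendor's implementation.

```python
# A minimal, hypothetical sketch of the agentic pattern described above: the user
# states an intent and the agent decides which steps to run and in what order.
# Step names and the toy planner are illustrative, not any vendor's API.

def plan(intent: str) -> list[str]:
    """Map an intent to an ordered list of processes (toy rule-based planner)."""
    if "churn" in intent.lower():
        return ["extract_customer_data", "score_churn_risk", "draft_retention_offer"]
    return ["extract_customer_data", "summarise_findings"]

def run_step(step: str, context: dict) -> dict:
    """Stand-in for real tools; each step would normally call a pipeline, model, or API."""
    print(f"running {step}")
    context[step] = "done"
    return context

def agent(intent: str) -> dict:
    """Execute the planned steps in sequence, carrying context between them."""
    context = {"intent": intent}
    for step in plan(intent):  # the agent, not the user, sequences the work
        context = run_step(step, context)
    return context

agent("Reduce churn among customers who returned a product last quarter")
```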
As generative AI has become a buzzword across industries, we asked Kumar what was hype and what was real when it came to enterprise use cases. The Qlik executive acknowledged that while generative AI has captured the attention of the C-suite, implementing it hasn't been easy for many.
Kumar also said that the ground realities are different. 'When you talk to data and AI practitioners, you find that the data is not ready. It's messy, siloed, low quality, not timely, and often can't be trusted. If you build AI systems on bad data, they will fail,' he said, adding that this was indicative of why success rates remain modest. 'Only about 25 per cent of AI projects are truly succeeding in delivering business value. The biggest challenge is the data foundation,' he said.
When asked how the gap can be closed, Kumar recommended a two-pronged approach. 'Enterprises that are succeeding are starting with narrow AI use cases that are contained and less risky. At the same time, they're focusing on getting their data foundation right, which is the only way to scale AI effectively,' he said.
On being asked how Qlik's platform supports the journey from raw data to business outcomes, Kumar explained that the platform assists businesses end to end across their data journeys. The executive said that the journey begins with data collection. 'First, we provide capabilities to get data from anywhere—databases, SaaS applications, complex systems like mainframe and SAP, files, and streams—at high velocity in near real-time.'
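As a loose illustration of that collection step, the sketch below lands records from two stand-in sources, a CSV export and a JSON event feed, into a single table. The sources, field names, and mapping are invented for the example and are not Qlik functionality.

```python
# A hypothetical sketch of the collection step: land records from different source
# formats (here a CSV export and a JSON event feed) into one common landing table.
# The sources and field names are invented for illustration.
import io
import json
import pandas as pd

csv_export = io.StringIO("customer_id,name\n1,Asha\n2,Ravi\n")  # stand-in for a database or file source
event_feed = '[{"customer_id": 1, "event": "login"}, {"customer_id": 2, "event": "purchase"}]'  # stand-in for a stream

customers = pd.read_csv(csv_export)
events = pd.DataFrame(json.loads(event_feed))

# Land both in one place with a source tag, mimicking a raw "landing zone"
landing = pd.concat(
    [customers.assign(source="crm_export"), events.assign(source="event_stream")],
    ignore_index=True,
)
print(landing)
```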
Data collection is followed by integration. Kumar said that Qlik allows businesses to join and integrate siloed data. 'Unless you can join data together, you cannot get a complete picture. If customer information is in one system, purchases in another, and return information in a third, you need to connect these to understand your customer.'
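Kumar's example of stitching together customer, purchase, and return records can be illustrated with a small sketch; the tables and column names below are hypothetical, and the joins stand in for whatever integration tooling an enterprise actually uses.

```python
# A small, hypothetical sketch of the join described above: customer records,
# purchases, and returns sit in three separate "systems" (here, three DataFrames),
# and only joining them gives a complete picture of the customer.
import pandas as pd

customers = pd.DataFrame({"customer_id": [1, 2], "name": ["Asha", "Ravi"]})
purchases = pd.DataFrame({"customer_id": [1, 1, 2], "order_id": [10, 11, 12], "amount": [250, 90, 400]})
returns = pd.DataFrame({"order_id": [11], "return_reason": ["damaged"]})

# Join the three silos into one customer-level view
full_view = (
    customers
    .merge(purchases, on="customer_id", how="left")  # attach purchases to customers
    .merge(returns, on="order_id", how="left")       # attach any returns to purchases
)
print(full_view)
```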
After integration comes building trust in the data. Qlik helps businesses assess data quality and preserves data lineage so records can be traced back to their source. From there, the platform enables multiple types of analytics. 'Once you have a trusted data foundation, you can build BI visualisation dashboards for descriptive analytics, machine learning models for predictive analytics, and conversational agents for generative AI,' he explained. Kumar added that, finally, Qlik enables action, allowing customers to take insights and automate actions on them.
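As a rough illustration of that trust step, the sketch below computes a few basic quality signals, completeness and duplicate keys, before data is used for analytics; the checks are invented for the example rather than taken from Qlik's platform.

```python
# A hypothetical sketch of the "trust" step: simple quality checks that data might
# pass before it feeds dashboards, models, or conversational agents. The rules are
# illustrative and not taken from Qlik's platform.
import pandas as pd

def quality_report(df: pd.DataFrame, key: str) -> dict:
    """Return basic completeness and uniqueness signals for a table."""
    return {
        "rows": len(df),
        "null_share_per_column": df.isna().mean().round(3).to_dict(),  # completeness
        "duplicate_keys": int(df[key].duplicated().sum()),             # uniqueness of the business key
    }

orders = pd.DataFrame({"order_id": [10, 11, 11, 12], "amount": [250, 90, 90, None]})
print(quality_report(orders, key="order_id"))  # flags the duplicate order_id and the missing amount
```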
When it came to challenges faced by enterprises in modernising their data, Kumar identified three primary ones: data migration, skill gaps, and funding. Data migration is a challenge because most data today, according to Kumar, still sits in on-premise systems, and getting it onto the cloud is a considerable lift for many.
With many organisations moving to cloud and AI, Kumar feels most of them lack the necessary skills, especially for AI implementation. As for funding, many companies assume they don't need much budget for AI, because ChatGPT creates the perception that models can be applied quickly. 'What we're finding is that you need a significant budget to fix your data foundation, which is a heavy lift,' he noted.
When asked what his recommendations would be for organisations, Kumar said, 'Funding for data foundation should be rolled into their overall AI initiative funding. If you don't properly fund your data initiatives and have the right technology and the right skills, you'll face challenges.'
Lastly, on being asked what excites him the most about the future of data and AI, the Qlik executive said he looks forward to AI being applied to streamline data workflows themselves. More broadly, he sees AI transforming every aspect of business and daily life.
Bijin Jose, an Assistant Editor at Indian Express Online in New Delhi, is a technology journalist with a portfolio spanning various prestigious publications. Starting as a citizen journalist with The Times of India in 2013, he transitioned through roles at India Today Digital and The Economic Times, before finding his niche at The Indian Express. With a BA in English from Maharaja Sayajirao University, Vadodara, and an MA in English Literature, Bijin's expertise extends from crime reporting to cultural features. With a keen interest in closely covering developments in artificial intelligence, Bijin provides nuanced perspectives on its implications for society and beyond.

Related Articles


India Today
an hour ago
Comfort talk made ChatGPT 4o a darling to some users, now even OpenAI CEO Sam Altman is worried
It begins quietly enough: a voice note, a thought recorded at the end of a long day, a conversation with an always-available listener. But increasingly, that listener isn't a friend, a family member, or a therapist. It is ChatGPT, particularly ChatGPT 4o, which apparently has a personality that validates whatever its users are feeling with its sweet talk. While this novel way of talking to an AI chatbot was increasingly getting noticed, it came into the limelight a few days ago with the launch of ChatGPT 5. The new AI chatbot from OpenAI has a different personality, and a fairly vocal group of users immediately demanded that the company bring back 4o. Reason? They love ChatGPT 4o for what it can utter back to users and how it can validate their feelings, right or wrong, sensible or not.

On social media platforms like Instagram and Reddit, people are now sharing how they have turned the AI chatbot into a personal sounding board. The practice, often called 'voice journaling,' involves speaking directly to the bot, using it as both recorder and respondent. On Reddit, there is a whole thread on how many users have been asking ChatGPT for relationship advice, comfort during anxious moments, and even help processing grief. Some described it as a 'safe space' for unloading feelings they couldn't share elsewhere. Others said they journalled with ChatGPT daily, treating it almost like a life coach that never judged, interrupted, or grew tired.

It's easy to see the appeal. The bot, unlike a therapist, doesn't charge by the hour; it responds instantly, and it can appear endlessly patient. But the growing use of AI in such intimate ways has started to worry even those at the helm of its development.

Altman tries to nudge users away

OpenAI CEO Sam Altman has publicly warned that people should be careful before treating ChatGPT as a therapist. 'People talk about the most personal shit in their lives to ChatGPT. People use it, young people, especially, as a therapist, a life coach; having these relationship problems and (asking) what should I do?' he recently said on a podcast with comedian Theo Von.

Unlike real therapy, conversations with ChatGPT are not protected by doctor-patient or legal privilege. 'Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. And we haven't figured that out yet for when you talk to ChatGPT,' Altman acknowledged. Deleted chats, he added, may still be retrievable for legal or security reasons.

Privacy isn't the only concern. A recent Stanford University study found that AI 'therapist' chatbots are not yet equipped to handle mental health responsibilities, often reinforcing harmful stigmas or responding inappropriately. In tests, they encouraged delusions, failed to recognise crises, and showed bias against conditions like schizophrenia and alcohol dependence, falling short of the best clinical practice.

The risks extend beyond bad advice. Altman has been increasingly candid about the emotional bonds users form with AI. In a post on X, he noted that some people were deeply attached to the older GPT-4o model, describing it as a close friend or even a 'digital wife.'

Future challenges remain

When GPT-5 rolled out, replacing GPT-4o for many, the backlash was swift. 'It feels different and stronger than the kinds of attachment people have had to previous kinds of technology,' Altman wrote, calling the decision to deprecate older models a mistake.
This, he believes, is part of a broader ethical challenge: AI may subtly influence users in ways that aren't always aligned with their long-term well-being. And the more personal the interaction, the greater the risk.

GPT-5's debut wasn't without hiccups, from a botched chart during its presentation to technical issues that made the model seem less capable at launch. After the outcry from users who said they had lost someone who would always listen to them, OpenAI quickly moved to let Plus users switch back to GPT-4o and even doubled their rate limits. But behind the fixes lies a more profound unease about what it means for a machine to occupy such a trusted role in people's emotional lives.

For now, ChatGPT's status as a digital confidant remains an unregulated grey area. While many users swear by the relief and clarity they gain from 'talking' to it, Altman's own words reflect an ambivalence towards the reliance. While he has in the past acknowledged the potential for AI to enhance lives, lately he has been openly questioning how society should handle its growing intimacy with machines. As he put it, 'No one had to think about that even a year ago, and now I think it's this huge issue.'


Times of India
2 hours ago
OpenAI may have just fired a 'warning shot' at Indian IT companies' revenue model, says analyst note; but there's also ...
Sam Altman, OpenAI CEO

ChatGPT maker OpenAI's recent launch of GPT-5, the latest and most advanced version of ChatGPT, could pose a significant challenge to India's $283 billion IT services industry. According to a report in Economic Times, quoting a Kotak Institutional Equities report, the adoption of generative AI, bolstered by GPT-5's enhanced coding capabilities, may lead to a 2-3% revenue decline for Indian IT companies over the next 2-3 years, impacting both software services and customer service outsourcing sectors.

The Indian IT industry, heavily reliant on large-scale hiring of software coders, is already grappling with AI-driven disruptions. The report notes that AI has improved productivity by 20-40%, reducing the need for engineers and shifting away from traditional people-based billing models. This, combined with a tariffs-induced economic slowdown, has led to flat or negative growth for IT firms over the past two years.

Phil Fersht, CEO of HFS Research, said, 'If your business still depends on armies of coders grinding through routine builds, your margins are about to get hammered.' He emphasized that clients will demand lower rates due to AI-driven productivity gains, forcing IT firms to overhaul delivery models within 12–18 months to focus on AI-augmented, higher-value services.

OpenAI described GPT-5 as 'our best AI system yet,' with significant improvements in coding, math, writing, and more, according to the report. The model offers better reasoning, accuracy, and instruction-following, with a 65% reduction in hallucination rates, as per a Gartner report quoted in the article. These advancements could lower costs and risks for clients modernizing legacy systems, providing some advantages to IT service providers. However, major firms like Tata Consultancy Services, Infosys, HCLTech, and Wipro are facing revenue challenges due to a shift from employee-based to outcome-based pricing models.

Gartner and Kotak see short-term pain, long-term gain for Indian IT companies

Gartner, as cited by the report, noted that GPT-5's reduced API fees and improved reliability make it more suitable for enterprise use, though it requires careful integration and governance. Anushree Verma, a senior director analyst at Gartner, said, 'This is the year of advanced models, and GPT-5 is designed to address the needs of developers by reducing hallucinations and simplifying coding.' However, she cautioned that GPT-5 is not artificial general intelligence and still has limitations, including risks of misuse for scams or harmful outputs, similar to its predecessor, GPT-4.

Despite short-term challenges, the report says that there are potential long-term benefits. Kotak's report too suggests that while GenAI adoption may create revenue headwinds initially, new opportunities from AI-enabled use cases could offset losses over time, though a lag in realizing these benefits is expected. 'However, we expect a lag in the pickup of new opportunities and for savings in software development to be deployed into these new opportunities, leading to a period of net headwinds,' the note said.


Mint
2 hours ago
New Claude update lets users pick up conversations where they left off: How Anthropic AI chatbot's feature works
Anthropic, the San Francisco-based artificial intelligence firm, has announced a new capability for its Claude chatbot that lets it retrieve and refer to earlier conversations with a user. Announced on Monday, the update is currently being rolled out to paid subscribers on the Max, Team, and Enterprise plans, with the Pro plan expected to receive access in the near future. It remains unclear whether the feature will be made available to users on the free tier.

The new capability enables users to continue conversations seamlessly from where they left off, eliminating the need to manually search through earlier chats to resume discussions on ongoing projects. This feature operates by retrieving data from past interactions when requested by the user or when the chatbot deems it necessary.

While referencing earlier conversations is already available in other AI chatbots, such as OpenAI's ChatGPT and Google's Gemini, which provide this function to all users including those on free plans, Anthropic has been relatively slow in integrating similar enhancements into Claude. The company had previously introduced two-way voice conversations and web search functions only recently, in May 2025.

The timing of this new feature comes shortly after Anthropic implemented weekly rate limits for paid users, a response to some individuals exploiting the previous policy that reset limits every five hours. Reports indicated that a small group of users were running Claude Code continuously, resulting in usage costs amounting to tens of thousands of dollars.

Some users have expressed concerns that retrieving extensive information from previous, information-heavy chats might cause them to reach their rate limits more quickly. However, Anthropic has yet to clarify whether the new feature affects token consumption or usage quotas. An X user named Naeem Shabir commented on Anthropic's official post, stating, 'How will this impact usage limits? I'm Excited to test it out with this advancement in persistent memory across chats. Despite it being something that ChatGPT had a while ago, I am curious whether your implementation differs from theirs at all. This resolves a big issue for me because I had to regularly start new chats when the context limit/window was reached 🙏.'