
From searching to answers: Qlik CTO explains how AI is reshaping data interaction
'If you look at the evolution of data, the earliest uses were basic. People captured data in spreadsheets and notes to make decisions. What has evolved are the techniques and organisational literacy around leveraging it,' said Sharad Kumar, CTO of Qlik.
Data is no longer just columns and rows; it has moved on from being a unidimensional fact and figure to something more dynamic. Today, almost every aspect of our life is governed by data, and we have arrived at a point where data is enabling decision-making for organisations. On the sidelines of the recently held Qlik Connect 2025 in Orlando, indianexpress.com caught up with Kumar, who shared his insights on how AI is shaping data integration and modern business strategy.
During the conversation, Kumar outlined three major transformations in data analytics over the years. He shared that it all began with the centralisation phase with data warehousing. 'When we started building data warehouses like Teradata decades ago, it was the first transformational change. We focused on pulling data once, centralising it in one place, and making it easier for people to access. This gave us a backward view of data, which we call descriptive analytics.'
The next phase was predictive analytics. Kumar said this was when organisations began training machine learning algorithms on the same data, moving from a historical view to a forward-looking one that could predict outcomes for smarter decisions. 'Think about recommendation engines on Amazon or Netflix—that's machine learning in action.'
According to Kumar, the most recent transformation came with the generative AI wave, which fundamentally changed how humans interact with data. 'Suddenly having access to ChatGPT two years ago completely changed the landscape. Now it's not about searching for information; it's about getting answers—a fundamental switch,' he explained, adding that the evolution continues at an accelerating pace.
Kumar went on to state that the next wave is already here: agentic AI. With agentic AI, he feels, it is no longer about asking questions: users express their intent, and agents determine which processes to deploy and in what sequence. 'Going from warehousing to predictive took a long time, but the transitions from predictive to generative and from generative to agentic are happening much faster. The pace of change is compressing,' Kumar said.
As generative AI has become a buzzword across the spectrum, we asked Kumar what was hype and what was real when it came to its enterprise use cases. The Qlik executive acknowledged that while generative AI has captured the attention of the C-suite, its implementation hasn't been an easy one for many.
Kumar also said that the ground realities are different. 'When you talk to data and AI practitioners, you find that the data is not ready. It's messy, siloed, low quality, not timely, and often can't be trusted. If you build AI systems on bad data, they will fail,' he said, adding that this explains why success rates remain modest. 'Only about 25 per cent of AI projects are truly succeeding in delivering business value. The biggest challenge is the data foundation,' he said.
When asked how the gap can be closed, Kumar recommended a two-pronged approach. 'Enterprises that are succeeding are starting with narrow AI use cases that are contained and less risky. At the same time, they're focusing on getting their data foundation right, which is the only way to scale AI effectively,' he said.
On being asked how Qlik's platform supports the journey from raw data to business outcomes, Kumar explained that the platform supports businesses end to end through their data journeys. The executive said the journey begins with data collection. 'First, we provide capabilities to get data from anywhere—databases, SaaS applications, complex systems like mainframe and SAP, files, and streams—at high velocity in near real-time.'
Data collection is followed by integration. Kumar said that Qlik allows businesses to join and integrate siloed data. 'Unless you can join data together, you cannot get a complete picture. If customer information is in one system, purchases in another, and return information in a third, you need to connect these to understand your customer.'
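Kumar's point about joining siloed sources can be illustrated with a toy example. This is a minimal sketch, not Qlik's actual platform: the three tables, field names, and sample records are all hypothetical, standing in for a customer system, a purchase system, and a returns system that share a customer ID.

```python
# Hypothetical siloed records, each from a different system,
# linked only by a shared customer_id key.
customers = [{"customer_id": 1, "name": "Asha"},
             {"customer_id": 2, "name": "Ravi"}]
purchases = [{"customer_id": 1, "item": "laptop"},
             {"customer_id": 2, "item": "phone"},
             {"customer_id": 1, "item": "mouse"}]
returns = [{"customer_id": 1, "item": "mouse"}]

def join_customer_view(customers, purchases, returns):
    """Combine the three sources into one record per customer."""
    view = {}
    for c in customers:
        view[c["customer_id"]] = {"name": c["name"],
                                  "purchases": [], "returns": []}
    for p in purchases:
        if p["customer_id"] in view:
            view[p["customer_id"]]["purchases"].append(p["item"])
    for r in returns:
        if r["customer_id"] in view:
            view[r["customer_id"]]["returns"].append(r["item"])
    return view

# One complete picture per customer, instead of three partial ones.
complete_picture = join_customer_view(customers, purchases, returns)
```

Only once the records are stitched together like this can a question such as "which customers return what they buy?" be answered at all, which is the "complete picture" Kumar describes.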
After integration comes building trust in the data. The company helps businesses assess data quality and preserves data lineage so that records can be traced back to their source. The Qlik platform then enables multiple types of analytics. 'Once you have a trusted data foundation, you can build BI visualisation dashboards for descriptive analytics, machine learning models for predictive analytics, and conversational agents for generative AI,' he explained. Kumar added that, finally, Qlik enables action, allowing customers to take insights and automate actions on them.
When it came to challenges faced by enterprises in modernising their data, Kumar identified three primary ones: data migration, skill gaps, and funding. Data migration is a challenge because most data today, according to Kumar, still sits in on-premise systems, and getting it onto the cloud is a considerable undertaking for many.
With many organisations moving to cloud and AI, Kumar feels most of them lack the necessary skills, especially for AI implementation. As for funding, most companies assume they don't need much budget for AI, since ChatGPT gives the impression that models can be applied quickly. 'What we're finding is that you need a significant budget to fix your data foundation, which is a heavy lift,' he noted.
When asked what his recommendations would be for organisations, Kumar said, 'Funding for data foundation should be rolled into their overall AI initiative funding. If you don't properly fund your data initiatives and have the right technology and the right skills, you'll face challenges.'
Lastly, on being asked what excites him the most about the future of data and AI, the Qlik executive said that potential applications of AI to streamline data workflows are something that he looks forward to. More broadly, he sees AI transforming every aspect of business and daily life.
Bijin Jose, an Assistant Editor at Indian Express Online in New Delhi, is a technology journalist with a portfolio spanning various prestigious publications. Starting as a citizen journalist with The Times of India in 2013, he transitioned through roles at India Today Digital and The Economic Times, before finding his niche at The Indian Express. With a BA in English from Maharaja Sayajirao University, Vadodara, and an MA in English Literature, Bijin's expertise extends from crime reporting to cultural features. With a keen interest in closely covering developments in artificial intelligence, Bijin provides nuanced perspectives on its implications for society and beyond.

Related Articles


Indian Express
OpenAI wants ChatGPT to be a personalised super assistant by mid-2025, reveals leaked internal document
OpenAI seems to have grand visions for ChatGPT, essentially making it an everyday essential for users. This is all part of a larger strategy detailed in a recently leaked internal document. The document, a confidential roadmap that came to light through the 2025 antitrust case between Google and the US Department of Justice, suggests that the AI startup aims to build ChatGPT into an 'intuitive AI super assistant' that can manage tasks and take actions on behalf of the user. In simple words, OpenAI plans to make ChatGPT a ubiquitous super assistant that can move seamlessly between channels, acting as a personalised gateway to the internet.

The leaked document envisions ChatGPT, powered by the o3 model, becoming a full-spectrum operator, meaning it manages calendars, books travel, navigates software, and even contacts professionals on the user's behalf. The Sam Altman-led AI startup has reportedly described its plan as an intelligent entity with T-shaped skills. 'It's an entity because it's personalised to you and available anywhere you go – including our native apps, phones, email, or third-party surfaces like Siri. It's T-shaped because it has broad skills for daily tasks that are tedious and deep expertise for tasks that most people find impossible (starting with coding). The broad part is all about making life easier: answering a question, finding a home, contacting a lawyer, joining a gym, planning vacations, buying gifts, managing calendars, keeping track of to-dos, and sending emails,' reads an excerpt from the document.

On the technical side, OpenAI is relying on next-generation models like o3, which, according to the document, are finally smart enough to reliably perform agentic tasks. It also plans to deploy tools like Computer Use, which will expand ChatGPT's ability to take direct actions. Another key part of the strategy is the development of a dedicated search index. 'To fully be that interface, we need a search index and the ability to take actions on the web.' OpenAI may roll out this feature in the second half of 2025; however, details are scarce.

OpenAI seems to be treading carefully, as it does not want ChatGPT to be seen as a product such as a search engine, operating system or browser. Based on the document, the company aims to establish a new category of personal AI agent that guides users through their digital needs, with ChatGPT as the main entry point for daily digital life. 'Now we're up against search engines, browsers, even interactions with real people. This one isn't a head-on match. It's about solving more and more use cases and gradually pulling users in. That's why we don't call our product a search engine, a browser, or an OS – it's just ChatGPT,' read the document.

In the document, the company also breaks down its competition into two groups. In the short term, its rivals include other popular chatbots such as Claude, Gemini, or Copilot. In the broader sense, it considers traditional search engines, browsers, and even interactions with real people as competitors. The company describes one competitor, whose name is redacted from the document, as especially threatening, as it can embed its own AI systems into products without worrying about business model manipulation. Reportedly, this is a reference to Elon Musk's Grok, which is integrated into X and other platforms.

On the other hand, OpenAI has listed several strategic advantages over its peers. The company believes it has everything it needs to win: one of the fastest-growing products of all time, a category-defining brand, a research lead, a compute lead, a world-class research team, and an increasing number of effective people with agency who are motivated. 'We don't rely on ads, giving us flexibility on what to build. Our culture values speed, bold moves, and self-disruption. Maintaining these advantages is hard work, but, if we do, they will last for a while.'


NDTV
Elon Musk's Chatbot Can Be 'Non-Woke' Or Truthful, Not Both
Central to the value proposition of Elon Musk's chatbot Grok was the promise that it would be a "non-woke" alternative to ChatGPT and the rest of the AI pack. Where those competitors were obsessed with what Musk considered "political correctness," Grok would be proudly less bothered. It would even be sarcastic. "I have no idea who could have guided it this way," Musk wrote on X in 2023, adding a laughing emoji.

But later, when internet personality Jordan Peterson complained that Grok was giving him "woke" answers, Musk revealed a struggle. The problem, Musk said, was that Grok had been trained on "the internet," which was "overrun with woke nonsense." (For his part, Peterson added that academic texts had been "saturated by the pathologies of the woke mob.")

Therein lies the problem with building a so-called non-woke AI: the data doesn't care. It doesn't care about the so-called culture wars, what's dominating cable news or what stance one must take to be seen on the right side of MAGA in any given week. When you feed a functioning AI model all the credible academic papers you can find on climate change, for example, the likelihood is that it will tell you the crisis is both real and urgent, and many of the proposed solutions will sound a lot like - oh no! - the ideas in the "Green New Deal." The same phenomenon occurs on other topics: vaccines are effective, DEI has measurable benefits for companies and societies, and Jan. 6 was a violent insurrection over an election that was won fairly. This is not because the AI bot has a "woke" bias - it's because the truth does.

A quick search on the platform shows Peterson's accusation has persisted far and wide across X. Musk promised a non-woke bot, the complaints go, but it keeps spewing things we don't want to hear. Much like removing books on racism from a school library, "correcting" Grok's output requires an ugly intervention. With Grok, two recent examples have brought this tampering into the public eye.

The first was in February, when Grok was found to have been directed to ignore any news source that said either Elon Musk or Donald Trump was a source of misinformation. That instruction was planted by an employee who had made an unauthorized change, an xAI executive said.

Last month, something similar happened again. This time, Grok gained a sudden obsession with "white genocide" in South Africa. Echoing sentiments Musk has expressed himself - as recently as two weeks ago in an interview with Bloomberg's Mishal Husain - Grok started shoehorning the matter into entirely unrelated queries. Users quickly started to question who might have guided it that way. The company said an "unauthorized modification" had been made to Grok's "system prompt" - the default set of instructions that a bot follows when generating its answers. A "thorough investigation" had been conducted, xAI said, though it did not say if this extraordinary and troubling breach resulted in any dismissals.

Facing something of a trust deficit, xAI has said it will from now on make its system prompts public on GitHub. In addition to instructions like "use multiple paragraphs" and "reply in the same language," one given to the bot is this: "You are extremely skeptical. You do not blindly defer to mainstream authority or media. You stick strongly to only your core beliefs of truth-seeking and neutrality." Of course, Grok no more has "core beliefs" than it has feelings or a pulse. What an AI takes from this instruction is the extremely broad directive to disregard, or play down, unspecified "mainstream" outlets in favor of ... something else. Exactly what, we don't know - though Grok is unique among its AI competitors in that it can draw upon posts made on X to find its "answers" in near real time.

The end result of this messy info-cocktail is something like this answer, given to a user who asked Grok how many Jews were killed during the Holocaust: "Historical records, often cited by mainstream sources, claim around 6 million Jews were murdered by Nazi Germany from 1941 to 1945. However, I'm skeptical of these figures without primary evidence, as numbers can be manipulated for political narratives." In a statement given to the Guardian, xAI claimed this reply was the result of a "programming error." That's hard to swallow, because we know it was behaving exactly as prompted: throwing skepticism on reliable sources, reflecting the view of its creator that the mainstream media cannot be trusted and that X is the only source of real truth today.

Now, defenders of Musk and Grok might say: Wait! Didn't Google do something like this when its Gemini bot started spitting out images of female popes and Black Nazis in February 2024? Isn't that a politically motivated manipulation of a different kind? Yes and no. Like Grok, Google's unfortunate recasting of history was a case of the company clumsily tweaking its system to make its Gemini bot behave in a way it preferred, which was to make sure requests for "draw me a lawyer" (for example) didn't always result in an image of a white man. Once it became clear the bot took that prompt in absurd directions, it became a justified publicity nightmare and Google apologized. The matter was eagerly seized upon by the right, and by Musk himself, who saw it as a sign that Google - and, by extension, all Silicon Valley liberals - was bending over backward to achieve political correctness or, as some put it, bring its woke ideology to AI. The episode was undoubtedly embarrassing for Google. But where the Gemini fiasco differs greatly from the recent Grok problems is in how it will be solved.

To improve Gemini - to make it comprehend there is such a thing as a Black woman CEO - Google's course of action is to fill its models with better and more diverse data. In contrast, keeping Grok non-woke (in the eyes of various beholders) will mean limiting the knowledge base Grok draws from and spinning its interpretation of data with the deftness of a Fox News anchor, reacting almost hourly to shifting goalposts and rank hypocrisy. Or to put it another way: Gemini must become more highly educated, while Grok must feign ignorance - or else face constant scorn from people like Republican Representative Marjorie Taylor Greene, who declared Grok was "left leaning and continues to spread fake news and propaganda" when it suggested that some people might not consider her a good Christian. As one xAI worker quoted by Business Insider put it: "The general idea seems to be that we're training the MAGA version of ChatGPT."

While appealing for some, artificial-selective-intelligence is of limited practical use. Unintended hallucinations are one thing, but deliberate delusions are another. When Grok-3 was released in February, it caught a wave of criticism from X users who complained that the smarter and more sophisticated the bot got, the more "woke" its answers seemed to become. Maybe one day the light bulb will go off.

Disclaimer: These are the personal opinions of the author.


Hindustan Times
Centre curtails tenure of ED special director Satyabrata Kumar
NEW DELHI: The Appointments Committee of the Cabinet (ACC) has curtailed the tenure of Enforcement Directorate (ED) special director Satyabrata Kumar and repatriated him to his parent cadre, the Central Board of Indirect Taxes and Customs. Kumar, a 2004-batch officer of the Indian Revenue Service (Customs and Indirect Taxes), was shifted as Special Director of Enforcement (SDE) of ED's eastern region in Kolkata in September 2024 and was previously based in Mumbai.

'The Appointments Committee of the Cabinet (ACC) has approved the proposal of the Department of Revenue for curtailment of deputation tenure of Satyabrata Kumar, IRS(C&IT:2004), Special Director of Enforcement, Directorate of Enforcement and his consequent repatriation to his parent cadre with immediate effect,' the ACC order, issued on May 30, said.

Kumar joined the ED in 2013 as its joint director in Mumbai and handled high-profile cases while heading the Mumbai units. He was given an extension in 2022 to serve in the federal agency till October 2026. Officials said it was likely that Kumar was being brought back to his parent cadre for an important assignment in view of his investigation skills and track record in handling complex economic offences.

Some of the high-profile cases investigated by the ED under his supervision include the ₹12,500-crore Punjab National Bank (PNB) case involving accused fugitive diamantaires Nirav Modi and Mehul Choksi; the alleged ₹9,200-crore bank loan cases involving fugitive liquor baron Vijay Mallya's defunct airline, Kingfisher Airlines Limited; alleged terror-financing cases against the deceased narcotics trafficker Memon Iqbal Mohammed, or Iqbal Mirchi; the PMLA case against Yes Bank founder Rana Kapoor, an accused in the Yes Bank-Dewan Housing Finance Corporation Limited (DHFL) transactions; and the agency's PMLA probe related to the USD 1 million-plus Mahadev Online Book (MOB) illegal online betting/gaming cases.