Qlik unveils agentic experience to simplify data workflows, aid decisions


Qlik, a leading player in data integration, data quality, analytics, and artificial intelligence, on Wednesday introduced its new agentic experience to drive faster decisions and boost productivity by bringing new simplicity to complex data-driven workflows.
The company also launched Qlik Open Lakehouse, a fully managed Apache Iceberg solution built into Qlik Talend Cloud.
Both products were unveiled at Qlik Connect 2025 in Orlando.
The agentic experience provides a single, conversational interface that lets users across the enterprise interact naturally with data, using specialised AI agents to uncover insights quickly, drive faster decisions, and boost productivity.
At the heart of this continuous innovation is the Qlik engine, a unique technology that indexes relationships across data, enabling the discovery of unexpected connections.
The new agentic experience is about removing the distance between data, decisions, and outcomes, Qlik CEO Mike Capone said at the event.
"People want a seamless, conversational way to engage with their data, one that fits naturally into their work and delivers clear, trusted answers in context. We've built this experience to reflect how decisions actually get made in a business," he said.
As enterprises face unpredictable market conditions and increasing pressure to make critical decisions rapidly, investments in AI have grown, he said, adding that with its agentic experience, Qlik is focused on helping customers turn data into timely, high-quality decisions and results.
Qlik's agentic experience, to be rolled out this summer, is specifically designed to help teams accelerate both decisions and productivity in rapidly changing environments, he added.
Designed for enterprises under pressure to scale faster and spend less, Capone said, Qlik Open Lakehouse delivers real-time ingestion, automated optimisation, and multi-engine interoperability, without vendor lock-in or operational overhead.
As organisations accelerate AI adoption, he said, the cost and rigidity of traditional data warehouses have become unsustainable.
Qlik Open Lakehouse offers a new path: a fully managed lakehouse architecture powered by Apache Iceberg that delivers 2.5 to 5 times faster query performance and up to 50 per cent lower infrastructure costs, while maintaining full compatibility with the most widely used analytics and machine learning engines, he said.
"With Qlik Open Lakehouse, enterprises gain real-time scale, full control over their data, and the freedom to choose the tools that work best for them. We built this to meet the demands of AI and analytics at enterprise scale, without compromise," he said.
Qlik Open Lakehouse is built from the ground up to meet the scale, flexibility, and performance demands of modern enterprises, without the tradeoffs, he added.
During Qlik Connect 2025, it was also highlighted that despite record AI investment, most enterprises remain stuck in the lab.
According to recent IDC research, while 80 per cent plan to deploy agentic AI workflows, only 12 per cent feel ready to support autonomous decision-making at scale.
Trust in outputs is eroding amid growing concerns around hallucinations, bias, and regulatory scrutiny, the report said.
And as models become commoditised, competitive advantage is shifting, not to those with the most advanced models, but to those who can operationalise AI with speed, integrity, and confidence, it added.
The Qlik AI Council emphasised that trust must be designed in, not added later. Execution is the new differentiator, and it only works when the data, infrastructure, and outputs are verifiable, explainable, and actionable.
In today's environment, the companies that pull ahead won't be the ones that test the most; they'll be the ones that deliver, the Council said.
Observing that the market is short on execution, Capone said companies aren't losing ground because they lack access to powerful models.
"They're losing because they haven't embedded trusted AI into the fabric of their operations. That's why at Qlik, we've built a platform focused on decisive, scalable action. If your data isn't trusted, your AI isn't either. And if your AI can't be trusted, it won't be used," he added.
(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

Related Articles

Bhargav Kumar Konidena: Effective Contributions to Open Source: Transforming DevOps and Data Workflows with Apache Projects

India.com | 30-07-2025

Open source is not just a technology approach; it is a movement that unites engineers, researchers, and creators across technology-driven industries around a common goal: building the platforms that make up present-day enterprise computing. Among the many contributors to this ecosystem is Bhargav Kumar Konidena, an experienced DevOps engineer and Java developer whose work on leading Apache Foundation projects showcases both his technical skill and his commitment to collective innovation. Bhargav has worked in DevOps, cloud automation, and enterprise systems for more than ten years, and with practical knowledge of AWS, Kubernetes, Docker, and similar tools, he has contributed to a number of open source projects that are widely used by enterprises. His contributions span data lake systems, data pipeline orchestration, and containerized application deployment, areas that today affect software teams and companies on every continent.

Enhancing Data Lakes with Apache Iceberg

Bhargav has made significant contributions to Apache Iceberg, a high-performance table format for big analytic data. Iceberg is built to support data lakes at massive scale and now anchors the data infrastructure stacks of companies such as Netflix, Apple, and LinkedIn. As of July 2025, the Iceberg GitHub repository had been starred more than 4,300 times, indicating strong developer interest and uptake. Bhargav's work on Iceberg includes restructuring the SQL and DataFrame queries behind the Time Travel feature, which lets users query historical snapshots of datasets. This entailed metadata processing and performance improvements, the factors most crucial to fast, stable access to versioned data as analytics workflows evolve. The work directly shapes how teams interact with historical data, improving reproducibility and auditability in machine learning and BI activities.
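To make the Time Travel feature concrete, here is a minimal sketch of how historical snapshots can be queried from Spark, using both the SQL syntax (Spark 3.3+) and the DataFrame read option. The catalog name demo, the table demo.db.events, and the snapshot id are hypothetical placeholders, and the sketch illustrates the feature itself rather than Bhargav's specific patches.

```python
from pyspark.sql import SparkSession

# Assumes a Spark session already configured with an Iceberg catalog
# named "demo"; catalog, table, and snapshot id are hypothetical.
spark = SparkSession.builder.appName("iceberg-time-travel").getOrCreate()

# Time travel by wall-clock time (Spark SQL syntax, Spark 3.3+):
spark.sql("""
    SELECT *
    FROM demo.db.events
    TIMESTAMP AS OF '2025-07-01 00:00:00'
""").show()

# Time travel by snapshot id via the DataFrame read option, which pins
# the read to one immutable snapshot:
df = (
    spark.read.format("iceberg")
    .option("snapshot-id", 5937117119577207000)
    .load("demo.db.events")
)
df.show()
```

Pinning a read to a snapshot id gives exact reproducibility, while the timestamp form answers "what did the table look like at that moment"; together these are what make ML and BI workloads auditable.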
Making Data Pipelines Leaner with Apache Beam

Another important area of Bhargav's work is Apache Beam, a unified model for defining both batch and streaming data-parallel processing pipelines. Beam is an abstraction framework that runs on top of engines such as Apache Flink, Google Cloud Dataflow, and Apache Spark. The project, which currently has more than 6,400 stars on GitHub, is widely used in verticals that need real-time analytics and complex event processing. In one Beam code submission, Bhargav fixed a shortcoming in the handling of user-specified providers passed through command-line arguments, such as custom FileSystem implementations or external transforms. His change ensures that whenever users supply configurations through flags such as --filesystem or --transform-providers, Beam dynamically prioritizes them and incorporates them into the runtime environment. The update was vital to improving Beam's flexibility, especially where developers deploy the platform in hybrid environments or rely on proprietary extensions, allowing more customization and less friction when adapting Beam to a distinct organizational context.
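As background on how Beam surfaces command-line configuration to a running pipeline, here is a minimal sketch using the Python SDK's PipelineOptions mechanism. The --transform_provider flag is a hypothetical example for illustration, not the actual flag from the contribution described above; the pipeline runs on the default DirectRunner.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

class CustomOptions(PipelineOptions):
    """Pipeline options extended with a user-specified provider flag."""

    @classmethod
    def _add_argparse_args(cls, parser):
        # Hypothetical flag naming a provider to prioritize at runtime.
        parser.add_argument(
            "--transform_provider",
            default="builtin",
            help="Provider the pipeline should prefer at runtime.",
        )

# Flags would normally arrive via sys.argv; they are inlined here so the
# sketch is self-contained.
options = CustomOptions(["--transform_provider=my_company_provider"])
provider = options.view_as(CustomOptions).transform_provider

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["a", "b", "c"])
        | "Tag" >> beam.Map(lambda record: f"{provider}:{record}")
        | "Print" >> beam.Map(print)
    )
```

Because PipelineOptions is backed by argparse, any flag registered this way can be set at launch time; that is the general mechanism by which user-specified configuration reaches the runtime environment.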
Clarifying Configuration Precedence in Helm

Bhargav's DevOps expertise shows in his contribution to Helm, the package manager for Kubernetes and the most widely used tool for simplifying application deployment. Helm has more than 25,000 stars on GitHub, making it an essential part of the CI/CD toolchains of teams working with containerized microservices. Bhargav added enhancements to how Helm processes the precedence of its configuration input options, namely --set, --set-file, and --values. These inputs let users override default configuration at deployment time, but inconsistent precedence made the results unreliable, which could lead to badly configured deployments. His update makes explicit that --set and --set-file arguments take precedence over --values, aligning the tool's behavior with its documentation and its users' intentions. He also improved error handling for malformed inputs, reducing the risk of broken production deployments. These changes help Helm users avoid configuration drift and guarantee repeatable, trustworthy application provisioning across development, staging, and production environments.

The Impact of the Apache Contributions

Contributing to Apache is not only a technical exercise; it is a responsibility that shapes software ecosystems worldwide. As a contributor, Bhargav belongs to the group of engineers who build and maintain the tools relied on by Fortune 500 companies, data scientists, cloud architects, and DevOps teams. Apache Foundation projects supply elementary building blocks for platforms run by the major cloud providers, including AWS, Azure, and Google Cloud, and every code submission is rigorously subjected to peer review, architectural review, and integration testing before it is merged into the main codebase. Through sound engineering across three varied and technically demanding projects, Bhargav has demonstrated both coding mastery and a strong grasp of the needs of many user communities. His contributions are publicly viewable in repositories anyone can inspect, creating a visible, verifiable record of impact. The popularity of Iceberg, Beam, and Helm shows that his code is not experimental: it runs in production across finance, media, telecommunications, and healthcare.

A Practitioner's Approach to Shared Innovation

Bhargav's open source work reflects a wider philosophy: build scalable, efficient, modular systems that emphasize user experience, maintainability, and long-term performance. His solutions draw on enterprise DevOps experience supporting leading insurers, telecom operators, and medical platforms. For example, his earlier work in container orchestration, infrastructure-as-code with Terraform, and CI/CD automation applies directly to his efforts in Helm and Beam, while his familiarity with metadata handling, schema evolution, and performance bottlenecks in large-scale datasets is reflected in his improvements to Apache Iceberg. This synergy between professional practice and open source contribution ensures that the features he develops address real problems faced by engineering teams.

Conclusion

Bhargav Kumar Konidena's work on Apache Iceberg, Beam, and Helm is an example of what it means to be a valuable member of the open source community. These are not small bug fixes or occasional patches; they are architectural advancements and feature improvements that affect thousands of users and run in mission-critical applications worldwide. By working within the Apache ecosystem, Bhargav has placed himself at the center of global innovation, collaborating with first-rate engineers, subjecting his code to open peer review, and helping to define the future of data engineering and DevOps.

One nation, one KYC: Sebi shows the way, other regulators should follow

Business Standard | 27-07-2025

Other regulators should follow Sebi's lead in explicitly stating that once a client completes KYC with one intermediary, they shouldn't have to repeat the process with another.

Harsh Roongta

It wasn't a shootout or a sting operation that brought down Al Capone. It was a ledger. In The Untouchables (1987), a bespectacled accountant in Eliot Ness's squad suggests prosecuting Capone not for bootlegging or murder, but for failing to file tax returns. Ness (played by Kevin Costner) initially scoffs; Capone had outmanoeuvred every case by eliminating witnesses. But the money trail didn't lie. Records of undeclared income led to Capone's 1932 conviction for tax evasion and over a decade in prison. (Al Capone was a real-life American mafia boss active in the 1920s and 1930s.) That's the power of a

From searching to answers: Qlik CTO explains how AI is reshaping data interaction

Indian Express | 21-05-2025

'If you look at the evolution of data, the earliest uses were basic. People captured data in spreadsheets and notes to make decisions. What has evolved are the techniques and organisational literacy around leveraging it,' said Sharad Kumar, CTO of Qlik, describing the evolution of data. Data is no longer just columns and rows; it has moved on from being a unidimensional fact and figure to something more dynamic. Today, almost every aspect of our lives is governed by data, and we have arrived at a point where data is enabling decision-making for organisations. On the sidelines of the recently held Qlik Connect 2025 in Orlando, the Indian Express caught up with Kumar, who shared his insights on how AI is shaping data integration and modern business strategy.

During the conversation, Kumar outlined three major transformations in data analytics over the years. It all began, he said, with the centralisation phase of data warehousing. 'When we started building data warehouses like Teradata decades ago, it was the first transformational change. We focused on pulling data once, centralising it in one place, and making it easier for people to access. This gave us a backward view of data, which we call descriptive analytics.'

The next phase was predictive analytics, when machine learning algorithms were being trained on that same data. The world moved from a historical view to a forward-looking view that could predict outcomes for smarter decisions. 'Think about recommendation engines on Amazon or Netflix; that's machine learning in action.'

According to Kumar, the most recent transformation came with the generative AI wave. 'Suddenly having access to ChatGPT two years ago completely changed the landscape. What fundamentally changed was how humans interacted with data. Now it's not about searching for information; it's about getting answers, a fundamental switch,' he explained, adding that the evolution continues at an accelerating pace.

Kumar went on to say that the next wave is already here: agentic AI. With agentic AI, it is no longer about asking questions; one can express an intent, and agents will determine which processes to deploy and in what sequence. 'Going from warehousing to predictive took a long time, but the transitions from predictive to generative and from generative to agentic are happening much faster. The pace of change is compressing,' Kumar said.

As generative AI has become a buzzword across the spectrum, we asked Kumar what was hype and what was real in its enterprise use cases. He acknowledged that while generative AI has captured the attention of the C-suite, implementation hasn't been easy for many, and the ground realities are different. 'When you talk to data and AI practitioners, you find that the data is not ready. It's messy, siloed, low quality, not timely, and often can't be trusted. If you build AI systems on bad data, they will fail,' he said, adding that this is why success rates remain modest. 'Only about 25 per cent of AI projects are truly succeeding in delivering business value. The biggest challenge is the data foundation,' he said.

When asked how the gap can be closed, Kumar recommended a two-pronged approach. 'Enterprises that are succeeding are starting with narrow AI use cases that are contained and less risky.
At the same time, they're focusing on getting their data foundation right, which is the only way to scale AI effectively,' he said.

On being asked how Qlik's platform supports the journey from raw data to business outcomes, Kumar explained that the platform assists businesses end to end through their data journeys. The journey begins with data collection. 'First, we provide capabilities to get data from anywhere: databases, SaaS applications, complex systems like mainframe and SAP, files, and streams, at high velocity in near real time.' Data collection is followed by integration, where Qlik allows businesses to join siloed data. 'Unless you can join data together, you cannot get a complete picture. If customer information is in one system, purchases in another, and return information in a third, you need to connect these to understand your customer.' After integration comes building trust in data: Qlik helps businesses assess data quality and preserves data lineage so that data can be traced to its roots. The platform then enables multiple types of analytics. 'Once you have a trusted data foundation, you can build BI visualisation dashboards for descriptive analytics, machine learning models for predictive analytics, and conversational agents for generative AI,' he explained. Finally, Kumar added, Qlik enables action, allowing customers to take insights and automate actions on them.

On the challenges enterprises face in modernising their data, Kumar pointed to three primary ones: data migration, skill gaps, and funding. Data migration is a challenge because most data today, according to Kumar, still sits in on-premise systems, and getting it onto the cloud is a considerable undertaking for many. And as organisations move to cloud and AI, most of them lack the necessary skills, especially for AI implementation. Finally, on funding, many companies assume they don't need much budget for AI, because ChatGPT gives the perception that models can be applied quickly. 'What we're finding is that you need a significant budget to fix your data foundation, which is a heavy lift,' he noted.

Asked for his recommendations for organisations, Kumar said, 'Funding for data foundation should be rolled into their overall AI initiative funding. If you don't properly fund your data initiatives and have the right technology and the right skills, you'll face challenges.'

Lastly, on what excites him most about the future of data and AI, the Qlik executive said he looks forward to the potential applications of AI to streamline data workflows. More broadly, he sees AI transforming every aspect of business and daily life.

Bijin Jose, an Assistant Editor at Indian Express Online in New Delhi, is a technology journalist with a portfolio spanning various prestigious publications. Starting as a citizen journalist with The Times of India in 2013, he transitioned through roles at India Today Digital and The Economic Times before finding his niche at The Indian Express. With a BA in English from Maharaja Sayajirao University, Vadodara, and an MA in English Literature, Bijin's expertise extends from crime reporting to cultural features. With a keen interest in closely covering developments in artificial intelligence, he provides nuanced perspectives on its implications for society and beyond.
