
Latest news with #ApacheIceberg™

Qlik Expands Integration with the Databricks Data Intelligence Platform

Business Wire

13 hours ago



SAN FRANCISCO--(BUSINESS WIRE)-- Qlik®, a global leader in data integration, data quality, analytics, and artificial intelligence, today announced a series of new capabilities for customers of Databricks, the Data and AI company, built on the Databricks Data Intelligence Platform. These enhancements include streaming real-time data into Unity Catalog's UniForm tables via change data capture (CDC), automated Apache Iceberg™ optimization through Qlik Open Lakehouse, and the creation of high-quality data products. Together, they give data teams greater flexibility across open formats, improve operational performance with Delta and Iceberg data, and accelerate the path to trusted, AI-ready architectures without compromising Databricks-native governance or performance.

New capabilities include:

● Real-Time Data Streaming to Databricks UniForm Tables via CDC: Qlik Replicate® now streams continuous CDC from enterprise data sources directly into Unity Catalog's managed Iceberg tables, enabling low-latency ingestion that supports strict business SLAs for both Delta and Iceberg formats.

● Adaptive Iceberg Optimization: As data is ingested into Apache Iceberg tables by Qlik Talend Cloud®, Qlik Open Lakehouse's fully automated optimizer intelligently handles compaction, partitioning, and pruning, reducing the storage footprint and delivering faster queries. Optimized Iceberg tables will be queryable via Databricks Photon or any Iceberg-compatible engine with consistently low-latency performance.

● High-Quality, AI-Ready Data Products: Data teams can build governed data products and push down data quality computation for Databricks assets, including Delta Live Tables, ensuring products remain trusted, accurate, and ready for AI use cases.

● Spark-Aware Studio Roadmap Enhancements: Qlik will soon introduce new developer-focused capabilities, including schema inference, Databricks notebook import, and native Spark debugging, empowering teams to manage governed, self-service data pipelines within their existing Databricks workflows.

'Databricks customers continue to push the boundaries of what's possible with open data formats and AI,' said Ariel Amster, Director, Strategic Technology Partners at Databricks. 'By delivering real-time change data capture into UniForm tables and its native integration with Mosaic AI, Qlik is helping our joint customers simplify and accelerate innovation on the Databricks Data Intelligence Platform.'

'From ingestion to insight, Databricks customers are demanding more speed, flexibility, and trust across their data estate,' said David Zember, Senior Vice President of Worldwide Channels and Alliances at Qlik. 'These new capabilities allow teams to do more with their Databricks investment—especially around governance, interoperability, and AI readiness.'

The new capabilities, including UniForm table CDC and Iceberg optimization, are now available in private preview. Qlik's planned Open Lakehouse integration for Databricks is under development, with timing to be announced. To learn more, request early access, or see a demo, visit Qlik at booth #521 during Databricks Data + AI Summit or visit our website.

About Qlik

Qlik converts complex data landscapes into actionable insights, driving strategic business outcomes. Serving over 40,000 global customers, our portfolio provides advanced, enterprise-grade AI/ML, data integration, and analytics. Our AI/ML tools, both practical and scalable, lead to better decisions, faster. We excel in data integration and governance, offering comprehensive solutions that work with diverse data sources. Intuitive analytics from Qlik uncover hidden patterns, empowering teams to address complex challenges and seize new opportunities. As strategic partners, our platform-agnostic technology and expertise make our customers more competitive.

© 2025 QlikTech International AB. All rights reserved. All company and/or product names may be trade names, trademarks and/or registered trademarks of the respective owners with which they are associated.
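Conceptually, change data capture keeps a target table in step with a source by replaying a stream of row-level change events (inserts, updates, deletes) in order. A minimal, self-contained sketch of that idea in Python; the event shape and field names here are illustrative assumptions, not Qlik Replicate's actual wire format:

```python
# Toy model of applying CDC events to a target table keyed by "id".
# Real CDC tools also handle ordering, schema changes, and exactly-once
# delivery; this only shows the core replay semantics.

def apply_cdc_events(table, events):
    """Apply a stream of CDC events to an in-memory table keyed by 'id'."""
    for event in events:
        op, row = event["op"], event["row"]
        if op == "insert":
            table[row["id"]] = row
        elif op == "update":
            # Merge changed columns into the existing row.
            table[row["id"]] = {**table.get(row["id"], {}), **row}
        elif op == "delete":
            table.pop(row["id"], None)
    return table

events = [
    {"op": "insert", "row": {"id": 1, "name": "Ada", "tier": "gold"}},
    {"op": "update", "row": {"id": 1, "tier": "platinum"}},
    {"op": "insert", "row": {"id": 2, "name": "Grace", "tier": "silver"}},
    {"op": "delete", "row": {"id": 2}},
]

table = apply_cdc_events({}, events)
print(table)  # {1: {'id': 1, 'name': 'Ada', 'tier': 'platinum'}}
```

The low-latency claim in the announcement comes from streaming these events continuously as they occur, rather than re-extracting the whole source table on a batch schedule.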

Qlik Adds Native Support for Snowflake Managed Iceberg Tables, Expands Open Lakehouse Options for Snowflake Customers

Business Wire

02-06-2025



SAN FRANCISCO--(BUSINESS WIRE)-- Qlik®, a global leader in data integration, data quality, analytics, and artificial intelligence, today announced at Snowflake's annual user conference, Snowflake Summit 2025, the launch of native support for Snowflake-managed Apache Iceberg™ tables, enabling fast, open-format data pipelines directly into Snowflake's highly performant, governed environment. Alongside this, Qlik is introducing additional capabilities that allow customers to leverage Qlik Open Lakehouse, powered by Apache Iceberg, in conjunction with Snowflake for greater architectural flexibility and AI scalability. These advancements are designed to help Snowflake customers reduce latency, optimize storage and compute efficiency, and accelerate the development of AI-powered applications, including retrieval-augmented generation (RAG) via Snowflake Cortex AI.

Newly announced capabilities include:

● Native Streaming to Snowflake-Managed Iceberg Tables: Qlik Talend Cloud® now supports continuous change data capture (CDC) from enterprise systems directly into Snowflake-managed Iceberg tables, enabling low-latency ingestion that supports strict business SLAs for analytics and AI use cases.

● Qlik Open Lakehouse Optimization & Mirroring: Qlik Open Lakehouse combines low-latency ingestion into Apache Iceberg tables with an automated optimizer that manages compaction, partitioning, and pruning in S3, delivering faster queries and a reduced storage footprint without manual tuning. It also mirrors Iceberg data back into Snowflake for downstream transformations without duplicating data.

● One-Click Data Products with In-Snowflake Quality Execution: Qlik data products can be generated directly within customers' Snowflake ecosystems, leveraging the Qlik Talend Trust Score™ to push down data quality computation in Snowflake, enabling teams to produce governed, high-quality outputs that elevate the value of curated assets.

● Knowledge Mart for RAG on Snowflake Cortex: Qlik's Knowledge Mart transforms structured and unstructured content, including PDFs, call transcripts, and relational records, into AI-ready vectorized assets in Snowflake, powering retrieval-augmented generation pipelines through Cortex with full explainability and governance.

'Open standards like Apache Iceberg are foundational to an interoperable data stack, including both Qlik and Snowflake,' said Saurin Shah, Senior Product Manager, Data Engineering at Snowflake. 'By combining real-time ingestion, automated optimization, and Cortex-ready AI pipelines, Qlik, together with Snowflake, helps customers accelerate time to insight while maximizing the value of their data investments.'

'The integration between Qlik and Snowflake has transformed how we manage and operationalize data,' said Michael Benassi, Vice President of Enterprise Analytics at United Federal Credit Union. 'By operationalizing near real-time data ingestion and streamlined engineering pipelines, we're able to scale insights across the business and support faster, more trusted AI initiatives.'

'This launch gives our joint customers the power to do more with their Snowflake investment,' said David Zember, Senior Vice President of Worldwide Channels and Alliances at Qlik. 'By combining Qlik's real-time ingestion and Iceberg optimization with native Snowflake governance, we're unlocking a smarter path to analytics and AI that's as open as it is scalable.'

The new Qlik capabilities are now available in private preview, with general availability targeted for July 2025. To request early access, see a live demo, or speak with Qlik product experts, visit booth #1219 at Snowflake Summit 2025 or visit us online.

About Snowflake

Snowflake makes enterprise AI easy, efficient, and trusted. More than 11,000 companies around the globe, including hundreds of the world's largest, use Snowflake's AI Data Cloud to share data, build applications, and power their business with AI. The era of enterprise AI is here. Learn more at (NYSE: SNOW).
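"Pushing down" data quality computation means the checks execute as SQL inside the warehouse, next to the data, instead of extracting rows to an external quality engine. A rough sketch of that pattern; the rule expressions and table name are hypothetical, and this is generic SQL generation, not the actual Qlik Talend Trust Score engine:

```python
# Toy pushdown: compile declarative quality rules into one aggregate query
# that the warehouse runs in place, returning violation counts per rule.

def quality_check_sql(table, rules):
    """Build a single aggregate query that counts violations per rule."""
    selects = [
        f"SUM(CASE WHEN NOT ({expr}) THEN 1 ELSE 0 END) AS {name}_violations"
        for name, expr in rules.items()
    ]
    return "SELECT\n  " + ",\n  ".join(selects) + f"\nFROM {table}"

# Hypothetical rules over a hypothetical orders table.
rules = {
    "email_present": "email IS NOT NULL",
    "amount_positive": "amount > 0",
}
print(quality_check_sql("analytics.orders", rules))
```

The payoff is that only small aggregate results leave the warehouse, so governance and access controls on the underlying rows are never bypassed.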

Confluent Enhances Tableflow with Apache Iceberg and Delta Lake Support

TECHx

19-03-2025


Confluent, Inc., the data streaming company, has announced key advancements in its Tableflow platform, providing enhanced access to operational data from data lakes and warehouses. With these updates, including full support for Apache Iceberg™ and the launch of an Early Access Program for Delta Lake in partnership with Databricks, Tableflow enables businesses to unlock new possibilities for real-time analytics, artificial intelligence (AI), and next-generation applications.

The new updates to Tableflow allow data engineers and data scientists to access streaming data in popular open table formats, empowering AI-driven decision-making and simplifying the integration of operational data into analytical systems. With the general availability of Apache Iceberg support, teams can now seamlessly represent Apache Kafka® topics as Iceberg tables for real-time and batch processing use cases. This development significantly reduces the maintenance burden of tasks like table compaction, giving data engineers more time to focus on driving business value.

'At Confluent, we're all about making your data work for you, whenever you need it and in whatever format is required,' said Shaun Clowes, Chief Product Officer at Confluent. 'With Tableflow, we're bringing our expertise of connecting operational data to the analytical world. Now, data scientists and data engineers have access to a single, real-time source of truth across the enterprise, making it possible to build and scale the next generation of AI-driven applications.'

Tableflow also introduces the Early Access Program for Delta Lake, a widely used open-format storage layer pioneered by Databricks. Delta Lake processes over 10 exabytes of data daily, making it a key enabler for AI-driven applications. Through this integration, customers can now access a unified view of real-time data across operational and analytic applications, speeding up AI-driven decision-making and allowing for smarter, more agile business processes. Interested users can apply for the Early Access Program to explore these capabilities.

To offer more flexibility, Tableflow now supports the Bring Your Own Storage feature, enabling customers to store Iceberg or Delta tables once and reuse them multiple times with their preferred storage solutions. This added flexibility gives businesses full control over their data storage and compliance requirements, ensuring that data governance needs are met without sacrificing performance.

Confluent has further enhanced Tableflow's capabilities with seamless integrations with AWS Glue Data Catalog and Snowflake's Open Catalog, ensuring easy management of Iceberg tables and providing access to popular analytical engines like Amazon Athena, AWS EMR, and Amazon Redshift. This integration streamlines data accessibility for a range of data lake and warehouse solutions, including Snowflake, Dremio, Imply, and others.

With support from global and regional system integrators, including Tata Consultancy Services (TCS), Onibex, GoodLabs Studio, and Psyncopate, Confluent is positioning Tableflow as a critical tool for enterprises seeking to drive AI innovation and scale real-time analytics. The continued development of Tableflow underscores Confluent's commitment to providing cutting-edge tools that bridge the gap between operational data and analytical systems, enabling businesses to accelerate their AI-driven digital transformation.


'Tableflow makes it possible to build and scale the next generation of AI-driven applications.' – Shaun Clowes, Confluent

Tahawul Tech

19-03-2025



Confluent has formally announced the general availability of Tableflow, which brings real-time business context to analytical systems to make AI and next-generation applications enterprise-ready. With Tableflow, all streaming data in Confluent Cloud can be accessed in popular open table formats, unlocking limitless possibilities for advanced analytics, real-time artificial intelligence (AI), and next-generation applications. Support for Apache Iceberg™ is now generally available (GA). And as a result of an expanded partnership with Databricks, a new early access program for Delta Lake is now open. Additionally, Tableflow now offers enhanced data storage flexibility and seamless integrations with leading catalog providers, including AWS Glue Data Catalog and Snowflake's managed service for Apache Polaris™, Snowflake Open Catalog.

'At Confluent, we're all about making your data work for you, whenever you need it and in whatever format is required,' said Shaun Clowes, Chief Product Officer at Confluent. 'With Tableflow, we're bringing our expertise of connecting operational data to the analytical world. Now, data scientists and data engineers have access to a single, real-time source of truth across the enterprise, making it possible to build and scale the next generation of AI-driven applications.'

Bridging the Data Gap for Enterprise-Ready AI

Tableflow simplifies the integration between operational data and analytical systems. It continuously updates tables used for analytics and AI with the exact same data from business applications connected to Confluent Cloud. Within Confluent, processing and governance happen as data is generated, shifting these tasks upstream to ensure that only high-quality, consistent data is used to feed data lakes and warehouses. This is a breakthrough for AI, as it's only as powerful as the data that shapes it.

Today, Confluent announces significant updates to Tableflow:

● Support for Apache Iceberg is ready for production workloads. Teams can now instantly represent Apache Kafka® topics as Iceberg tables to feed any data warehouse, data lake, or analytics engine for real-time or batch processing use cases. Expensive and error-prone table maintenance tasks, such as compaction, are automatically handled by Tableflow, giving time back to data engineers to deliver more business value. It also provides a single source of truth for one of the most widely adopted open-format storage options, enabling data scientists and data engineers to scale AI innovation and next-generation applications.

● New Early Access Program for Delta Lake is now open. This open-format storage layer, pioneered by Databricks, processes more than 10 exabytes of data daily and is used alongside many popular AI engines and tools. With this integration, customers will have a consistent view of real-time data across operational and analytic applications, enabling faster, smarter AI-driven decision-making. Apply for the Tableflow Early Access Program here.

● Increased flexibility through Bring Your Own Storage. Store fresh, up-to-date Iceberg or Delta tables once and reuse them many times with the freedom to choose a storage bucket. Customers now have full control over storage and compliance to meet their unique data ownership needs.

● Enhanced data accessibility and governance with partners. Direct integrations with Amazon SageMaker Lakehouse via AWS Glue Data Catalog (GA) and Snowflake Open Catalog (GA) enable seamless catalog management for Tableflow's Iceberg tables. They also streamline access for analytical engines such as Amazon Athena, AWS EMR, and Amazon Redshift, and leading data lake and warehouse solutions including Snowflake, Dremio, Imply, Onehouse, and Starburst.

Additionally, Confluent has strengthened enterprise adoption for Tableflow with support from global and regional system integrators, including GoodLabs Studio, Onibex, Psyncopate, and Tata Consultancy Services (TCS).
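The compaction that Tableflow automates is, at its core, rewriting many small data files into fewer files near a target size, so query engines open fewer files per scan. A toy illustration of that binning step; sizes are in MB, and the 128 MB target and greedy grouping are illustrative assumptions, not Tableflow's actual policy:

```python
# Toy compaction planner: greedily group small files so each group's total
# size stays near a target. Each group would then be rewritten as one file.

def compact(file_sizes, target_mb=128):
    """Greedily pack small files into groups of roughly target_mb each."""
    groups, current, current_size = [], [], 0
    for size in sorted(file_sizes):
        if current and current_size + size > target_mb:
            groups.append(current)
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        groups.append(current)
    # Return the size of each rewritten output file.
    return [sum(g) for g in groups]

# Eight small files produced by streaming ingestion, in MB.
small_files = [4, 8, 8, 16, 16, 32, 60, 90]
print(compact(small_files))  # [84, 60, 90]
```

Streaming ingestion naturally produces many small files (one per flush), which is why table formats like Iceberg treat periodic compaction as routine maintenance rather than an optional optimization.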
