
Latest news with #Fivetran

Fivetran expands SDK to simplify building custom data connectors

Techday NZ

6 days ago



Fivetran has expanded its Connector SDK to enable custom connectors for any data source. The update allows developers to build pipelines connecting even unique or internally developed systems, facilitating the centralisation of company data for analytics, artificial intelligence, and business decision-making.

With the Connector SDK, data teams now have the ability to build secure, reliable pipelines for a range of sources, from various applications and internal APIs to legacy systems. Developers write integration logic in Python, while Fivetran manages infrastructure elements such as deployment, orchestration, scaling, monitoring, and error handling. The process is designed to allow most connectors to be built and deployed within several hours, removing the need for DevOps support or dedicated infrastructure development.

Anjan Kundavaram, Chief Product Officer at Fivetran, discussed the approach companies often take when a prebuilt connector is unavailable. He stated: "When there isn't a prebuilt connector, most teams end up building and maintaining custom pipelines themselves. That DIY approach may seem flexible at first, but it often becomes a long-term burden with hidden costs in reliability, security, and maintenance. The Connector SDK changes that. Now, any engineer can build a custom connector for any source and run it with the same infrastructure, performance, and reliability as Fivetran's native connectors. It gives companies the flexibility they need without the tradeoffs."

The SDK offers the same infrastructure that supports Fivetran's managed connectors, handling automatic retries, monitoring, and alerting to ensure the accurate delivery of data to destinations such as BigQuery, Databricks, Snowflake, and other platforms.

Babacar Seck, Head of Data Integration at Saint-Gobain, shared his perspective on their experience with the Connector SDK. He said: "The SDK was a huge surprise in the best way. We expected to keep using Azure Data Factory for APIs because it was the only option. But once we saw what we could do with Fivetran's Connector SDK, everything changed. We can now build custom connectors in-house and respond to business needs much faster — all while seamlessly delivering data into Snowflake on Azure."

The company noted that the Connector SDK is being demonstrated to the public, with a focus on allowing data engineers to build custom connectors for moving data into cloud destinations tailored for analytics and artificial intelligence workloads.

Fivetran is known for working with organisations across various industries, enabling them to centralise data from software-as-a-service applications, databases, files, and additional sources into cloud destinations such as data lakes. The company's approach emphasises high-performance pipelines, security, and interoperability to help organisations enhance or modernise their data infrastructure.
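The division of labour described here (developers supply only the extraction logic; the platform supplies orchestration, retries and state tracking) can be sketched in plain Python. All names below, including `update` and `run_sync`, are hypothetical illustrations of the pattern rather than the actual Connector SDK API:

```python
def update(configuration, state):
    """Developer-written logic: yield records newer than the saved cursor."""
    cursor = state.get("cursor", 0)
    for record in configuration["source"]:      # any internal API or legacy system
        if record["id"] > cursor:
            yield {"table": "events", "data": record}
            state["cursor"] = record["id"]      # checkpoint progress as we go

def run_sync(update_fn, configuration, destination, state, max_retries=3):
    """Platform-style driver: handles retries, delivery and state persistence."""
    for attempt in range(max_retries):
        try:
            for operation in update_fn(configuration, state):
                destination.setdefault(operation["table"], []).append(operation["data"])
            return state                        # sync succeeded; state is saved
        except Exception:
            continue                            # a real driver would back off and alert
    raise RuntimeError("sync failed after retries")

# Two syncs against a growing source; the second resumes from the saved cursor.
dest, state = {}, {}
rows = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
state = run_sync(update, {"source": rows}, dest, state)
state = run_sync(update, {"source": rows + [{"id": 3, "v": "c"}]}, dest, state)
print(len(dest["events"]), state["cursor"])     # 3 3
```

The checkpointed state is what lets an incremental sync resume where it left off rather than re-reading the whole source on every run.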

Fivetran Expands Connector SDK to Support Any Source With Native-Grade Reliability and Performance

Associated Press

7 days ago



OAKLAND, Calif.--(BUSINESS WIRE)--Jun 2, 2025-- Fivetran, the global leader in data movement, today announced that its Connector SDK now supports custom connectors for any data source. With this update, developers can build reliable pipelines for even the most specialized or homegrown systems, eliminating data gaps and making it possible to centralize all of a company's data in one place. This ensures that critical information — regardless of where it lives — can be used to drive analytics, AI, and business decisions with confidence.

With the Connector SDK, data teams can build reliable, secure pipelines for virtually any application, internal API, or legacy system. Developers write the logic in Python, and Fivetran manages the infrastructure, including deployment, orchestration, scaling, monitoring, and error handling. Most connectors can be built and deployed in just a few hours, without the need for DevOps support or custom infrastructure.

'When there isn't a prebuilt connector, most teams end up building and maintaining custom pipelines themselves,' said Anjan Kundavaram, Chief Product Officer at Fivetran. 'That DIY approach may seem flexible at first, but it often becomes a long-term burden with hidden costs in reliability, security, and maintenance. The Connector SDK changes that. Now, any engineer can build a custom connector for any source and run it with the same infrastructure, performance, and reliability as Fivetran's native connectors. It gives companies the flexibility they need without the tradeoffs.'

The SDK uses the same infrastructure that powers Fivetran's fully managed connectors. It handles retries, monitoring, and alerting to ensure data is delivered where it is needed, whether that is BigQuery, Databricks, Snowflake or another destination.

'The SDK was a huge surprise in the best way. We expected to keep using Azure Data Factory for APIs because it was the only option. But once we saw what we could do with Fivetran's Connector SDK, everything changed,' said Babacar Seck, Head of Data Integration at Saint-Gobain. 'We can now build custom connectors in-house and respond to business needs much faster — all while seamlessly delivering data into Snowflake on Azure.'

Fivetran is showcasing the Connector SDK this week at Snowflake Summit 2025 (Booth #1809) and next week at Databricks Data + AI Summit 2025 (Booth #E4519), highlighting how data engineers can build custom connectors to move data into their cloud destination for analytics and AI workloads. To learn more about how developers are using the Connector SDK to build custom data pipelines, visit the Fivetran website.

About Fivetran

Fivetran, the global leader in data movement, is trusted by companies like OpenAI, LVMH, Pfizer, Verizon, and Spotify to centralize data from SaaS applications, databases, files, and other sources into cloud destinations, including data lakes. With high-performance pipelines, seamless interoperability, and enterprise-grade security, Fivetran empowers organizations to modernize their data infrastructure, power analytics and AI, ensure compliance, and achieve transformative business outcomes. Learn more at fivetran.com.

View source version on businesswire.com.

CONTACT: Matias Cavallin [email protected]

KEYWORD: CALIFORNIA UNITED STATES NORTH AMERICA

INDUSTRY KEYWORD: SOFTWARE TECHNOLOGY ARTIFICIAL INTELLIGENCE DATA MANAGEMENT

SOURCE: Fivetran

Copyright Business Wire 2025.

PUB: 06/02/2025 09:00 AM/DISC: 06/02/2025 08:59 AM

Teradata & Fivetran team up to streamline data integration

Techday NZ

30-05-2025



Teradata has entered into a partnership with Fivetran to offer automated data integration solutions for enterprises seeking to centralise information from multiple sources. The two companies have integrated their technologies to allow customers to streamline the movement of data from a diverse range of enterprise sources into the Teradata VantageCloud platform. According to Teradata, this enables businesses to use data for complex artificial intelligence (AI) workloads and supports scaling trusted AI initiatives.

A recent Fivetran survey highlighted that nearly half of enterprise AI projects, including generative and agentic AI, fail due to inadequate data readiness, underscoring the importance of a robust data foundation for successful deployments. Reliable and continuously updated data sourced from a broad mix of systems is essential for responsible data usage and effective AI operations. Both companies say their integration is intended to deliver a scalable, automated, and cost-effective approach for organisations to process large amounts of data using Teradata's environment.

"With AI innovation accelerating at an unprecedented pace, transforming data pipelines through automation has become critical for businesses to stay competitive. By leveraging data automation, enterprises can streamline data integration, reduce errors, and enhance decision-making processes, enabling them to adapt swiftly to market changes and drive continuous improvement," said Dan Spurling, SVP Product Management at Teradata. He added, "Teradata's integration with Fivetran allows our joint cloud and hybrid customers to automate complex data movement at scale into our harmonised analytics and data platform, ensuring they have reliable, real-time data to power trusted AI, analytics, and strategic decision-making."

The integration brings together Fivetran's fully managed, end-to-end data movement platform and Teradata's centralised analytics environment. This combination is expected to allow joint customers to accelerate business insights and improve operational efficiency through several key components: automated data integration, real-time data synchronisation, and broader data accessibility for analysts and business users.

The demand for automated data integration is increasing as businesses encounter growing volumes and complexity of information. Tools such as Fivetran are designed to make the extract, transform, load (ETL) process more efficient and scalable. The integration provides real-time data synchronisation, enabling current and relevant information to be available in Teradata for analysis. Features supporting data democratisation mean that users across business functions can access and analyse data without requiring specialised engineering skills.

"Fivetran's mission is to make access to data as simple and reliable as electricity—regardless of where it lives or where it needs to go," said Mark Van De Wiel, Field CTO, Fivetran. He continued, "With the addition of Teradata as a destination via the Partner-Built program, customers can now seamlessly move data into Teradata using the same automated, fully managed pipelines they expect from Fivetran. This expands our ecosystem and gives joint customers the ability to centralise their data for AI and analytics without added engineering overhead."

Engineers using the integration have the capability to transfer data from over 700 sources into Teradata. These sources include SaaS applications such as Salesforce and HubSpot, databases including MySQL, PostgreSQL and Oracle, ERP systems such as SAP and NetSuite, files in formats like CSV and JSON, and event streams such as Kafka. This range of connectivity facilitates the unification and transformation of data in Teradata for comprehensive querying and analysis alongside other business information.
The Teradata destination connector is being developed with Fivetran's Partner SDK and is due for availability to joint cloud customers in June 2025. The connector is built and maintained by Teradata under Fivetran's Partner-Built programme and operates on Fivetran's fully managed infrastructure, allowing customers to move data without managing pipelines or additional systems. Both companies have stated that customers will benefit from reduced data migration effort, simplified operational workflows, and expedited insights thanks to the partnership's approach to data integration and synchronisation.
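The unification step described above, where rows from dissimilar sources (CSV files, SaaS API responses and so on) end up in one queryable shape, can be illustrated with a minimal sketch. The formats, field names and lineage tag below are invented for illustration; a production pipeline maps each connector's schema explicitly:

```python
import csv
import io
import json

def rows_from_csv(text):
    """A file-style source: the header row becomes the column names."""
    return list(csv.DictReader(io.StringIO(text)))

def rows_from_json(text):
    """A SaaS-API-style source: already a list of objects."""
    return json.loads(text)

def load(destination_table, rows, source_name):
    """Tag each row with its origin so analysts can trace lineage."""
    for row in rows:
        row["_source"] = source_name
        destination_table.append(row)

table = []
load(table, rows_from_csv("sku,qty\nA1,3\nB2,5"), "warehouse_csv")
load(table, rows_from_json('[{"sku": "A1", "qty": 2}]'), "orders_api")

# All rows now share one shape and can be queried together.
total = sum(int(r["qty"]) for r in table)
print(len(table), total)  # 3 10
```

Once every source lands in a common tabular form, queries in the destination can join and aggregate across systems without caring where each row originated.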

Fivetran appoints Monica Ohara as Chief Marketing Officer

Techday NZ

19-05-2025



Fivetran has appointed Monica Ohara as its new Chief Marketing Officer, tasking her with leading the company's global marketing strategy and brand development efforts. Ohara brings over two decades of experience in marketing, growth, and leadership roles at technology companies. She will report to Chief Operating Officer Taylor Brown and become a member of Fivetran's executive team, overseeing functions including brand, customer, partner, field and product marketing, public relations, analyst relations, and events.

Her previous roles include Vice President of Global Marketing at Shopify and marketing leadership positions, including at HackerRank. In addition, Ohara co-founded the growth marketing startup DataScore, which was acquired by Lyft, where she subsequently led rider and driver acquisition efforts through the company's initial public offering.

"Monica brings a rare combination of entrepreneurial drive, deep marketing expertise, and a proven ability to scale some of the world's most respected technology brands. Her experience leading growth at both early-stage startups and global enterprises will be critical as Fivetran continues to expand our market leadership. We are confident Monica will help propel Fivetran's next chapter of innovation and growth," Taylor Brown, Chief Operating Officer at Fivetran, commented on the appointment.

This leadership change follows a range of recent executive appointments at Fivetran, including Suresh Seshadri as Chief Financial Officer, Anand Mehta as Chief People Officer, and Simon Quinton as General Manager for EMEA. These additions accompany the company's acquisition of Census, which introduces Reverse ETL capabilities to its platform and is intended to solidify Fivetran's position as a provider of end-to-end data movement solutions.

"Throughout my career, I've been drawn to companies that make complex technology accessible and enable transformative outcomes for their customers. Fivetran's ability to automate data movement and enable real-time decision-making is fundamentally changing how businesses operate. I'm excited to help amplify Fivetran's impact globally and build deeper connections with the customers and partners who rely on us every day," Monica Ohara said.

Prior to this appointment, Ohara held the position of Vice President of Growth Marketing at Shopify. She also served as Chief Marketing Officer at HackerRank and led growth strategy at DataScore before its acquisition by Lyft. Ohara holds a Bachelor of Arts in English from UCLA and studied Principles of Engineering Design through Johns Hopkins University's Center for Talented Youth.

Fivetran is currently used by companies including OpenAI, LVMH, Pfizer, Verizon, and Spotify to centralise data from various sources such as SaaS applications, databases, files, and other systems into cloud destinations, including data lakes. Its high-performance pipelines and interoperability are aimed at supporting analytics, artificial intelligence, compliance, and business objectives.

Qlik Widens Interoperable Data Platform With Open Lakehouse

Forbes

15-05-2025



Software comes in builds. When source code is compiled and combined with its associated libraries into an executable format, a build is ready to run, in basic terms. The construction analogy here extends directly to the data architecture that the code is composed of and draws upon. Because data architectures today are as diverse as the software application types above them, data integration specialists now have to work across complex data landscapes and remain alert to subsidence, fragilities and leakage. These software and data construct realities drive us towards a point where data integration, data quality control and data analytics start to blend.

Key players in this market include Informatica, SnapLogic, Rivery, Boomi, Fivetran, Tibco, Oracle with its Data Integrator service and Talend, the latter now being part of Qlik. Key differentiators in the data analytics and integration space generally manifest themselves in terms of how complex the platform is to set up and install (Informatica is weighty, but commensurately complex), how flexible the tools are from a customization perspective (Fivetran is fully managed, but less flexible as a result), how natively aligned the service is to the environment it has to run in (no surprise, Microsoft Azure Data Factory is native with Microsoft ecosystem technologies) and how far the data integration and analytics services on offer can be used by less technical businesspeople.

As this vast marketplace also straddles business intelligence, there are wider reputable forces at play here from firms including Salesforce's Tableau, Microsoft's Power BI, Google's Looker and ThoughtSpot for its easy-to-use natural language data visualizations.
Where one vendor will tell us its dashboards are simplicity itself, another will stress how comprehensively end-to-end its technology proposition is. Generally cloud-based and often with a good open source heritage, the data integration, data quality and data analytics space is a noisy but quite happy place.

Looking specifically at Qlik, the company is known for its 'associative' data engine, which offers freeform data analytics that highlight relationships between data sets in non-linear directions without the need for predefined queries. It also offers real-time data pipelines and data analytics dashboards. The organization's central product set includes Qlik Sense, an AI-fuelled data analytics platform service with interactive dashboards that also offers 'guided analytics' to align users towards a standard business process or workflow; QlikView, a business intelligence service with dynamic dashboards and reports; and Qlik Data Integration (the clue is in the name) for data integration and data quality controls, with a web-based user interface that supports both on-premises and cloud deployments.

Qlik champions end-to-end data capabilities; this means the tools here extend from the raw data ingestion stage all the way through to so-called 'actionable insights' (that term data analytics vendors swoon over), which are now underpinned and augmented by a new set of AI services. The company's AI-enhanced analytics and self-service AI services enable users to build customized AI models, which help identify key drivers and trends in their data. Not as historically dominant in open source code community contribution involvement as some others (although a keen advocate of open data and open APIs, with news on open source Apache Iceberg updates in the wings), Qlik has been called out for its pricing structure complexity.
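The 'associative' behaviour described above can be caricatured in a few lines: selecting a value in one field partitions every other field's values into associated and excluded sets, with no predefined query path. This is a toy model with invented data, not Qlik's actual engine:

```python
# Toy "associative" model over one table: a selection in any field
# propagates to every other field as associated vs excluded values.
rows = [
    {"region": "EU", "product": "A"},
    {"region": "EU", "product": "B"},
    {"region": "US", "product": "B"},
]

def associate(rows, field, value):
    """Split every other field's values into associated and excluded sets."""
    selected = [r for r in rows if r[field] == value]
    result = {}
    for other in rows[0]:
        if other == field:
            continue
        assoc = {r[other] for r in selected}          # co-occurs with the selection
        excluded = {r[other] for r in rows} - assoc   # never co-occurs with it
        result[other] = {"associated": sorted(assoc), "excluded": sorted(excluded)}
    return result

print(associate(rows, "region", "US"))
# {'product': {'associated': ['B'], 'excluded': ['A']}}
```

The point of the excluded set is that it surfaces what the data does *not* contain for a given selection, which is the non-linear exploration style the article describes.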
From a wider perspective, the company's associative engine and its more unified approach to both data analytics and data integration (plus its self-service analytics capabilities) are probably the factors that set it apart.

'Qlik's analytics-centric origins and methodical, iterative portfolio development has made it the BI platform for data geeks and data scientists alike, but thankfully hasn't made it overly conservative. The company has accelerated its product strategy in the past four years, adding data quality with the Talend acquisition and 'AI for BI' with the AutoML acquisition (originally Big Squid). These, plus modernization capabilities for customers who need it - Qlik Sense for accessibility to broader user bases, Qlik Cloud for an as-a-Service model… and the tools to migrate to them, make Qlik worth watching in today's increasingly data-driven and visualization-driven, AI-empowered enterprise market,' explained Guy Currier, a technology analyst at the Futurum Group.

Looking to extend its data platform proposition right now, Qlik Open Lakehouse is a new and fully-managed Apache Iceberg solution built into Qlik Talend Cloud. As explained here, a data lakehouse combines the structure, management and querying capabilities of a data warehouse with the low-cost benefits of a data lake. Apache Iceberg is an open source format technology for managing large datasets in data lakes with data consistency. Designed for enterprises under pressure to scale faster, the company says its Qlik Open Lakehouse delivers real-time ingestion, automated optimization and multi-engine interoperability.

'Performance and cost should no longer be a tradeoff in modern data architectures,' said Mike Capone, CEO of Qlik. 'With Qlik Open Lakehouse, enterprises gain real-time scale, full control over their data and the freedom to choose the tools that work best for them. We built this to meet the demands of AI and analytics at enterprise scale and without compromise.'
Capone has detailed the company's progression when talking to press and analysts this month. He explained that for many years, Qlik has been known for its visual data analytics services and indeed, the organization still gets customer wins on that basis.

'But a lot has happened in recent times and the conversation with users has really come around to gravitate on data with a more concerted focus. With data quality [spanning everything from deduplication to analysis tools to validate the worth of a team's data model] being an essential part of that conversation - and the old adage of garbage in, garbage out still very much holding true - the icing on the cake for us was the Talend acquisition [for its data integration, quality and governance capabilities] because customers clearly found it really expensive to cobble all the parts of their data estate together. Now we can say that all the component parts of our own technology proposition come together with precision-engineered fit and performance characteristics better than ever before,' said Capone.

Keen to stress the need for rationalized application of technologies so that the right tool is used for the appropriate job, Capone says that the Qlik platform enables users to custom align services for specific tasks i.e. software engineering and data management teams need not use a super-expensive compute function when the use case is suited to a more lightweight set of functions. He also notes that the company's application of agentic AI technology pervades 'throughout the entire Qlik platform'; this means that not only can teams use natural language queries to perform business intelligence and business integration tasks, they can also ask questions in natural language related to data quality to ensure an organization's data model's veracity, timeliness and relevance is also on target.
But does he really mean any data tool openness in a way that enables customers the 'freedom to choose the tools' that work best for them?

'Absolutely. If a company wants to use some Tableau, some Informatica and some Tibco, then we think they should be able to work with all those toolsets and also deploy with us at whatever level works for the business to be most successful. Obviously I'm going to tell you that those customers will naturally gravitate to use more Qlik as they experience our functionality and cost-performance advantage without being constrained by vendor lock-in, but that's how good technology should work,' underlined Capone.

Freedom to choose your own big data tools and analytics engines sounds appealing, but why do organizations need this scope and does it just introduce complexity from a management perspective? David Navarro, data domain architect at Toyota Motor Europe, thinks this is 'development worth keenly watching' right now. This is because large corporations like his need interoperability between different (often rather diverse) business units and between different partners, each managing its own technology stack with different data architects, different data topographies and all with their own data sovereignty stipulations.

'Apache Iceberg is emerging as the key to zero-copy data sharing across vendor-independent lakehouses and Qlik's commitment to delivering performance and control in these complex, dynamic landscapes is precisely what the industry requires,' said Navarro, when asked to comment on this recent product news.

Qlik tells us that all these developments are an evolution of modern data architectures in this time of AI adoption. It's a period where the company says that the cost and rigidity of traditional data warehouses have become unsustainable. Qlik Open Lakehouse offers a different path i.e.
it is a fully managed lakehouse architecture powered by Apache Iceberg to offer 2.5x–5x faster query performance and up to 50% lower infrastructure costs. The company says that it achieves this while maintaining full compatibility with the most widely used analytics and machine learning engines. Qlik Open Lakehouse is built for scale, flexibility and performance… and it combines real-time ingestion, intelligent optimization and ecosystem interoperability in a single, fully managed platform.

Capabilities here include real-time ingestion at enterprise scale, so (for example) a customer could ingest millions of records per second from hundreds of sources (e.g. cloud apps, SaaS, ERP suites and mainframes) and plug that data directly into Iceberg tables with low latency and high throughput. Qlik's Adaptive Iceberg Optimizer handles compaction, clustering and 'pruning' (removing irrelevant, redundant and often low-value data from a dataset) automatically, with no tuning required. Users can access data in Iceberg tables using a variety of Iceberg-compatible engines without replatforming or reprocessing, including Snowflake, Amazon Athena, Apache Spark, Trino and SageMaker.

'Although clearly fairly proficient across a number of disciplines including data integration, analytics and data quality controls, one of the challenges of Qlik and similar platforms is the limited scope for truly advanced analytics capabilities,' said Jerry Yurchisin, senior data science strategist at Gurobi, a company known for its mathematical optimization decision intelligence technology. 'This can mean that users have to take on extra configuration responsibilities or make use of an extended set of third-party tools. Data scientists, programmers, analysts and others really want one place to do all of their work, so it's important for all platforms to move in that direction.
This starts with data integrity, visualization and all parts of the analytics spectrum - not just descriptive and predictive, but also prescriptive - which is arguably the holy grail for data management at this level.'

Director of research, analytics and data at ISG Software Research, Matt Aslett spends a lot of time analyzing data lakehouse architectures in a variety of cloud computing deployment scenarios. He suggests that products like Qlik Open Lakehouse, which use open standards such as Apache Iceberg, are 'well-positioned' to meet the growing demand for real-time data access and multi-engine interoperability.

'This enables enterprises to harness the full potential of their data for AI and analytics initiatives,' said Aslett. 'As AI workloads demand faster access to broader, fresher datasets, open formats like Apache Iceberg are becoming the new foundation. Qlik Open Lakehouse responds to this shift by making it effortless to build and manage Iceberg-based architectures, without the need for custom code or pipeline babysitting. It also runs within the customer's own AWS environment, ensuring data privacy, cost control and full operational visibility.'

In line with what currently appears to drive every single enterprise technology vendor's roadmap bar none, Qlik has also tabled new agentic AI functions in its platform this year. Here we find a conversational interface designed to give users an avenue to 'interact naturally' with data. If none of us can ever claim to have had a real world natural data interaction, in this case the term refers to data exploration with the Qlik engine to uncover indexed relationships across data. The agentic functions on offer work across the Qlik Cloud platform and so offer data integration, data quality and analytics. It's all about giving businesspeople a more intuitive visibility into data analytics for decision making.

Also new is an expanded set of capabilities in Qlik Cloud Analytics.
These include functions to detect anomalies, forecast complex trends, prepare data faster and take action through what the company calls 'embedded decision workflows' today.

'While organizations continue to invest heavily in AI and data, most still struggle to turn insight into impact. Dashboards pile up, but real-time execution remains elusive. Only 26% of enterprises have deployed AI at scale and fewer still have embedded it into operational workflows. The problem isn't access to static intelligence, it's the ability to act on it. Dashboards aren't decision engines and predictive models alone won't prevent risk or drive outcomes. What businesses need is intelligence that anticipates, explains, and enables action without added tools, delays, or friction. Discovery agent, multivariate time series forecasting, write table, and table recipe work in concert to solve a singular problem: how to move from fragmented insight to seamless execution, at scale,' said the company, in a product statement that promises to target 'critical enterprise bottlenecks' and close the gap between data and decisions.

The data integration, data quality, data analytics and AI-powered data services market continues to expand, but we can perhaps pick up on some defining trends here. An alignment towards essentially open source technologies, protocols and standards is key, especially in a world of open cloud-native Kubernetes. Provision of self-service functionalities is also fundamental, whether they manifest themselves as developer self-service tools or as 'citizen user' abstractions that allow businesspeople to use deep tech… or both. A direct embrace of AI-driven services is, of course, a prerequisite now, as is the ability to provide more unified technology services (all firms have too many enterprise apps… and they know it) that work across as wide an end-to-end transept as is physically and technically possible.
Qlik is getting a lot of that right, but it seems no single vendor in this space can perfect everything, so there will always be a need for data integration, even across and between the data integration space itself.
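The 'pruning' mentioned in connection with the Adaptive Iceberg Optimizer is, in table formats like Apache Iceberg, typically metadata-based: each data file carries per-column min/max statistics in the table's manifests, and the query planner skips files whose value range cannot match the predicate. A simplified sketch of that idea, with an invented file list and stats:

```python
# Each "file" records min/max stats for a timestamp column,
# as Iceberg manifests do for every data file in a table.
files = [
    {"path": "data/f1.parquet", "min_ts": 100, "max_ts": 199},
    {"path": "data/f2.parquet", "min_ts": 200, "max_ts": 299},
    {"path": "data/f3.parquet", "min_ts": 300, "max_ts": 399},
]

def prune(files, lo, hi):
    """Keep only files whose [min, max] range overlaps the query range."""
    return [f for f in files if f["max_ts"] >= lo and f["min_ts"] <= hi]

# A query for ts in [250, 320] needs to scan only two of the three files.
scanned = prune(files, 250, 320)
print([f["path"] for f in scanned])
# ['data/f2.parquet', 'data/f3.parquet']
```

Because pruning consults only the metadata, whole data files can be ruled out without ever being opened, which is where much of the query speed-up claimed for lakehouse engines tends to come from.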
