Most fintechs fail API security, risking sensitive payment data

Techday NZ, 03-07-2025
New research conducted by Raidiam highlights significant weaknesses in API security across fintech companies, SaaS platforms, payments firms, and other enterprises operating outside regulated environments such as Open Banking.
The report, which assessed security practices at 68 organisations, reveals that 84% remain vulnerable due to insufficient API protections, even when dealing with sensitive or high-value data.
Widespread vulnerabilities
The research indicates that 85% of the surveyed organisations handle either payment data or special category personal data, yet only one organisation met the benchmark for modern, cryptographic API protection. The study found that outdated or insufficient controls, such as static API keys and basic OAuth secrets, prevail among most firms, leaving them open to exploitation. "We've all read the recent headlines; API security should not be an afterthought. The gap between the sensitivity of data and the strength of controls is a board-level risk – not just a technical issue," said David Oppenheim, Head of Enterprise Strategy at Raidiam.
Of the organisations surveyed, 57 out of 68 use bare API keys or basic OAuth credentials, mechanisms with well-known security weaknesses. Fewer than half conduct regular API-specific penetration testing or runtime anomaly monitoring, measures deemed essential for identifying and addressing potential attack vectors in real time.
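For illustration, the weak pattern the research flags is a long-lived bearer secret sent with every call. A minimal sketch in Python follows; the endpoint and key are hypothetical, not taken from the report:

```python
# The weak pattern flagged by the research: a long-lived, bearer-style
# API key sent with every request. Anyone who obtains the key can
# replay it from anywhere; nothing binds it to a specific caller.
# (Endpoint and key below are hypothetical.)
import requests

API_KEY = "sk_live_..."  # static secret, easily leaked via logs or source control

resp = requests.get(
    "https://api.example-payments.com/v1/accounts",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
print(resp.status_code)
```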
Real-world consequences
The report points to the 2023 Dell partner API breach as evidence that attackers are already actively exploiting these weak points in enterprise systems. These incidents underscore a growing risk for any entity exposing sensitive APIs without robust protective measures in place.
According to the report, a Security vs Sensitivity Matrix mapping exercise revealed a severe misalignment between the sensitivity of the data held and the strength of security controls implemented. This mismatch increases the likelihood and potential impact of security incidents. "We found that even firms handling payment and personal data still rely on static API keys and basic secrets. In today's threat landscape, that's the digital equivalent of leaving the vault door open," Oppenheim added. "In regulated environments like Open Banking, stronger controls like mutual TLS and certificate-bound tokens are already standard. Outside those frameworks, there's a gaping hole."
API risk in unregulated environments is becoming a prominent concern in the industry. In early 2025, the Chief Information Security Officer at JPMorgan Chase issued a public warning about rising vulnerabilities linked to third-party platforms, advocating for a shift towards prioritising security over rapid development.
Gartner statistics cited in the report indicate that API breaches tend to leak 10 times more data than traditional attacks. The report states, "This isn't theoretical — attackers are already in."
Recommendations for addressing risk
The report provides a four-step action plan for organisations seeking to bridge the gap between data sensitivity and protection. It recommends elevating API security to a board-level priority, modernising controls through cryptographic methods such as mutual TLS (mTLS) and sender-constrained access tokens, increasing investment in developer awareness and security testing, and working with trusted partners to accelerate adoption of proven standards and infrastructure.
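As a rough illustration of the cryptographic controls the report recommends: under mutual TLS the client presents its own certificate during the TLS handshake, and access tokens can then be bound to that certificate (RFC 8705), so a stolen token is useless without the matching private key. A minimal sketch using Python's requests library, with hypothetical file paths and endpoint:

```python
# Mutual TLS with Python requests: the client presents its own
# certificate, which the server verifies during the TLS handshake.
# Access tokens can then be certificate-bound (RFC 8705), so replaying
# a stolen token fails without the client's private key.
# (File paths and endpoint are hypothetical.)
import requests

resp = requests.get(
    "https://api.example-bank.com/v1/payments",
    cert=("client.crt", "client.key"),  # client certificate + private key
    verify="ca_bundle.pem",             # CA bundle used to validate the server
    timeout=10,
)
print(resp.status_code)
```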
Raidiam is making its expertise in secure digital data-sharing ecosystems available to help enterprise organisations bring API security standards up to date and close the gaps identified by this research.
Related Articles

Workspace 365 launches unified search to cut digital workplace clutter

Techday NZ, 3 days ago
Workspace 365 has introduced a new feature, Search .Simplified, that allows employees to run a single query across all their business applications and data sources directly from their digital workplace platform.

The capability was developed in response to findings from Workspace 365's recent research into workplace digital tools. The study found that 49% of employees would prefer to have all their tools, apps, and documents accessible in one location to avoid switching between different systems, while 44% said that reducing the number of logins or systems would improve their productivity at work. The research also highlighted that 32% of hybrid workers consider managing multiple tools their biggest challenge, while 59% of the overall workforce, and 65% in the largest organisations, believe employers should prioritise simplifying digital workplaces to improve productivity.

"Our research painted a clear picture - employees are losing time, focus and engagement because information is fragmented across multiple systems," explained Erik Nicolai, Chief Executive Officer and Co-founder of Workspace 365. "Search .Simplified aims to address that by bringing every source of information and connected app - from emails and documents, to tickets and tasks - into one intelligent search bar, and making the right information available in seconds."

Search .Simplified operates through the Workspace 365 Integration Builder, enabling organisations to connect all of their business applications, legacy systems, and modern SaaS platforms via APIs. The platform requires no additional development work when integrating systems: if an application contains important data and permits integration, Workspace 365 can connect with it.

The company is positioning itself to move beyond simple app integration, aiming to develop a broader Ecosystem. According to Workspace 365, this ecosystem unifies information and tools, connecting employee experiences and supporting focused work within a single workplace interface. Among the key features highlighted are the ability to search across every connected application without switching between them, permission-aware results so users only see information relevant and accessible to them, and improved efficiency, with relevant information often promised in under 10 seconds.

The introduction of Search .Simplified comes after Workspace 365's release of Communication .Simplified, which is intended to enhance employee engagement. Together, these features are core elements of the company's broader transition towards a unified Ecosystem, integrating tools, information, and workflows into a single user experience.

Commenting on the direction of the product strategy, Nicolai said: "Our product roadmap has a clear objective - to simplify everything. Search .Simplified is the next step towards delivering on that mission; by unlocking intelligent search across all data points, with results filtered by relevance, access rights, and context, and making them available directly from the digital workplace."

Workspace 365 was founded in the Netherlands in 2010 and has developed a platform designed to bring together all the digital tools, applications, and information employees need in a single interface.
The solution aims to reduce digital clutter, streamline collaboration, and centralise access for distributed and hybrid workforces. The company has been recognised for its contributions in workplace technology, having been listed on Groeibedrijven Top 250 in 2023 and TechRound's SaaS66 in 2024.
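The permission-aware, multi-source search described above follows a general federated-search pattern: fan the query out to per-application connectors, filter results by the user's entitlements, and rank what remains. A minimal sketch of that pattern in Python; all connector names and fields are invented for illustration, and this is not Workspace 365's implementation:

```python
# Permission-aware federated search: fan a query out to several
# hypothetical connectors, then keep only results the user may see.
# The general pattern only - not Workspace 365's actual code.
from dataclasses import dataclass

@dataclass
class Result:
    source: str
    title: str
    allowed_groups: set[str]  # groups permitted to view this item
    score: float

def search_email(q: str) -> list[Result]:    # stand-in connector
    return [Result("email", f"Mail about {q}", {"staff"}, 0.9)]

def search_tickets(q: str) -> list[Result]:  # stand-in connector
    return [Result("tickets", f"Ticket: {q}", {"support"}, 0.7)]

CONNECTORS = [search_email, search_tickets]

def federated_search(q: str, user_groups: set[str]) -> list[Result]:
    hits = [r for conn in CONNECTORS for r in conn(q)]
    # Drop anything the user's groups cannot access, then rank.
    visible = [r for r in hits if r.allowed_groups & user_groups]
    return sorted(visible, key=lambda r: r.score, reverse=True)

print(federated_search("vpn outage", {"staff"}))
```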

How optimisation is helping to tackle the data centre efficiency challenge

Techday NZ, 5 days ago
In the era of cloud adoption and AI, demand for data centre bandwidth has skyrocketed, driving the exponential sprawl of data centres worldwide. However, new data centres are running up against sustainability, space and budget constraints. Policymakers recognise the benefits of data centres for productivity, economic growth and research, but tension remains over their impact on local communities and their water and electricity use. Our cities, our consumer products and our world are going to become more digital, and we will need more compute to keep up. Optimising the infrastructure we already have to unlock more performance, while staying mindful of those limits, is the best way data centres can turn constraints into a competitive advantage.

Why data centre optimisation matters

CIOs and IT leaders increasingly face calls to provide a high-performance foundational compute infrastructure across their businesses and handle new, more demanding use cases while balancing sustainability commitments, space and budget constraints. Many have sought to build new data centres outright to meet demand and pair them with energy-efficient technologies to minimise their environmental impact. For example, the LUMI (Large Unified Modern Infrastructure) supercomputer, one of the most powerful in Europe, runs entirely on carbon-free hydroelectric energy, and its waste heat is reused to heat homes in the nearby town of Kajaani, Finland. There are many other examples like LUMI showing the considerable progress the data centre industry has made in addressing the need for energy efficiency.

Yet energy efficiency alone won't be enough to power the growing demands of AI, which are expected to swell data centre storage capacity. Global IT data centre capacity is forecast to grow from 180 Gigawatts (GW) in 2024 to 296 GW in 2028, a 12.3% CAGR, while electricity consumption grows at a higher rate of 23.3%, from 397 Terawatt hours (TWh) to 915 TWh in 2028. AI's greater energy requirements will also demand more energy-efficient designs to ensure scalability and meet environmental goals, and with data centre square footage, land and power grids nearing capacity, one way to optimise design is to upgrade from old servers.

Data centres are expensive investments, and some CIOs and IT leaders try to recoup costs by running their hardware for as long as possible. As a result, most data centres are still using hardware that is 10 years old (Dell) and only expand compute when absolutely necessary. While building new data centres might be necessary for some, there are significant opportunities to upgrade existing infrastructure. Upgrading to newer systems means data centres can achieve the same tasks more efficiently. For ageing data centres, that can translate to fewer racks and systems to manage while maintaining the same bandwidth. It leaves significant room for future IT needs and also makes room for experimentation, which AI workloads currently demand. Operators can use the space to build less expensive proof-of-concept half racks before committing to bigger build-outs, and use new hyper-efficient chips to reduce energy consumption and cooling requirements, recouping investment more quickly.
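As a quick sanity check on the consumption figures cited above (397 TWh in 2024 growing to 915 TWh in 2028), the compound annual growth rate follows directly from the start value, end value, and number of years:

```python
# Compound annual growth rate (CAGR) check for the electricity
# consumption figures cited above: 397 TWh (2024) to 915 TWh (2028).
def cagr(start: float, end: float, years: int) -> float:
    """CAGR = (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

rate = cagr(397, 915, 2028 - 2024)
print(f"Electricity consumption CAGR: {rate:.1%}")  # ~23.2%, in line with the cited 23.3%
```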
What to look for in an upgrade

There are many factors to consider in a server upgrade, and there is no one-size-fits-all solution to data centre needs. It's not just about buying the most powerful chip that can be afforded. The significance of a good chip for energy efficiency cannot be overstated, but each data centre has different needs that will shape the hardware and software stack it requires to operate most efficiently. Leading South Korean cloud provider Kakao Enterprise needed servers that could deliver high performance across a wide range of workloads to support its expansive range of offerings. By deploying a mixed fleet of 3rd and 4th Gen AMD EPYC processors, the company was able to reduce the servers required for its total workload to 40 percent of its original fleet while increasing performance by 30 percent, with a 50 percent reduction in total cost of ownership.

Much like Kakao Enterprise, IT decision makers should look for providers that can deliver end-to-end data centre infrastructure at scale, combining high-performance chips, networking, software and systems design expertise. For example, the right physical racks make it easy to swap in new kit as needs evolve, and open software is equally important for getting the different pieces of the software stack from different providers talking to each other. In addition, providers that continually invest in world-class systems design and AI systems capabilities will be best positioned to accelerate enterprise AI hardware and software roadmaps. AMD, for example, recently achieved a 38× improvement in node-level energy efficiency for AI training and HPC over just five years. This translates to a 97% reduction in energy for the same performance, empowering providers and end users alike to innovate more sustainably and at scale.

Advancing the data centre

As our reliance on digital technologies continues to grow, so too does our need for computing power. It is important to balance the need for more compute real estate with sustainability goals, and the way forward is making the most of the real estate we already have. This is a big opportunity to think smartly and turn an apparent tension into a massive advantage. By using the right computational architecture, data centres can achieve the same tasks more efficiently, making room for the future technologies that will transform businesses and lives.
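The consolidation and efficiency claims above can be sanity-checked with simple arithmetic; a brief sketch:

```python
# Sanity-check the two efficiency claims above with simple arithmetic.

# Kakao Enterprise: 40% of the original fleet delivers 130% of the
# original performance, so per-server throughput rose roughly:
per_server_gain = 1.30 / 0.40
print(f"Per-server throughput gain: {per_server_gain:.2f}x")  # ~3.25x

# AMD's 38x node-level energy-efficiency improvement: the same work
# now takes 1/38 of the energy, i.e. a ~97% reduction.
energy_reduction = 1 - 1 / 38
print(f"Energy reduction at equal performance: {energy_reduction:.1%}")  # ~97.4%
```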

Dell unveils AI Data Platform upgrades with NVIDIA & Elastic

Techday NZ, 6 days ago
Dell Technologies has announced enhancements to the Dell AI Data Platform, expanding its support across the full lifecycle of artificial intelligence workloads with new hardware and software collaborations.

The updates to the Dell AI Data Platform aim to address the challenges enterprises face with massive, rapidly growing, and unstructured data pools. Much of this data is unsuitable for generative AI applications unless it can be properly indexed and retrieved in real time. The latest advancements are designed to streamline data ingestion, transformation, retrieval, and computing tasks within enterprise environments.

Lifecycle management

The Dell AI Data Platform now provides improved automation for data preparation, enabling enterprises to move more quickly from experimental phases to deployment in production environments. The architecture is anchored by specialised storage and data engines, designed to connect AI agents directly to quality enterprise data for analytics and inferencing. The platform incorporates the NVIDIA AI Data Platform reference architecture, providing a validated, GPU-accelerated solution that combines storage, compute, networking, and AI software for generative AI workflows.

New partnerships

An important component of the update is the introduction of an unstructured data engine, the result of collaboration with Elastic. This engine offers customers advanced vector search, semantic retrieval, and hybrid keyword search capabilities, underpinned by built-in GPU acceleration for improved inferencing and analytics performance. The unstructured data engine operates alongside other data tools, including a federated SQL engine for querying structured data, a large-scale processing engine for data transformation, and fast-access AI-ready storage. The array of tools is designed to turn large, disparate datasets into actionable insights for AI applications.

Server integration

Supporting these software advancements are the new Dell PowerEdge R7725 and R770 servers, fitted with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. Dell claims these air-cooled servers provide improved price-to-performance for enterprise AI workloads, supporting a diverse range of use cases from data analytics and visual computing to AI inferencing and simulation. The NVIDIA RTX PRO 6000 GPU supports up to six times the token throughput for large language model inference, offers double the capacity for engineering simulations, and can handle four times the number of concurrent users compared to the previous generation. The integration of these GPUs in a 2U server chassis is positioned to make high-density AI calculations more accessible to a wider base of enterprise users.

The Dell PowerEdge R7725 will be the first 2U server platform to deliver the NVIDIA AI Data Platform reference design, allowing organisations to deploy a unified hardware and software solution without the need for in-house architecture and testing. This is expected to enable enterprises to accelerate inferencing, achieve more responsive semantic searching, and support larger and more complex AI operations.

Industry perspectives

"The key to unlocking AI's full potential lies in breaking down silos and simplifying access to enterprise data," said Arthur Lewis, president, Infrastructure Solutions Group, Dell Technologies. "Collaborating with industry leaders like NVIDIA and Elastic to advance the Dell AI Data Platform will help organisations accelerate innovation and scale AI with confidence."
Justin Boitano, Vice President of Enterprise AI at NVIDIA, added, "Enterprises worldwide need infrastructure that handles the growing scale and complexity of AI workloads. With NVIDIA RTX PRO 6000 GPUs in new 2U Dell PowerEdge servers, organisations now have a power efficient, accelerated computing platform to power AI applications and storage on NVIDIA Blackwell."

Ken Exner, Chief Product Officer at Elastic, commented, "Fast, accurate, and context-aware access to unstructured data is key to scaling enterprise AI. With Elasticsearch vector database at the heart of the Dell AI Data Platform's unstructured data engine, Elastic will bring vector search and hybrid retrieval to a turnkey architecture, enabling natural language search, real-time inferencing, and intelligent asset discovery across massive datasets. Dell's deep presence in the enterprise makes them a natural partner as we work to help customers deploy AI that's performant, precise, and production-ready."

Availability

The unstructured data engine for the Dell AI Data Platform is scheduled for availability later in the year. The Dell PowerEdge R7725 and R770 servers with NVIDIA RTX PRO 6000 GPUs will also become globally available in the same period.
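The hybrid retrieval Exner describes combines dense vector similarity with keyword relevance in a single query. A minimal sketch using the Elasticsearch 8.x Python client follows; the endpoint, index name, field names, and embedding step are hypothetical, and this illustrates the general technique rather than the Dell AI Data Platform's actual engine:

```python
# Hybrid (vector + keyword) retrieval sketch with the Elasticsearch
# 8.x Python client: approximate kNN over embeddings combined with a
# BM25 keyword match in one request. Index, fields, and endpoint are
# hypothetical - a sketch of the technique, not Dell's engine.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200")  # hypothetical endpoint

def hybrid_search(question: str, query_vector: list[float], k: int = 10):
    """Run one query that scores by both vector similarity and keywords."""
    return es.search(
        index="enterprise-docs",           # hypothetical index
        knn={                              # dense-vector leg (semantic)
            "field": "embedding",          # a dense_vector field
            "query_vector": query_vector,  # embedding of the question
            "k": k,
            "num_candidates": 100,
        },
        query={                            # keyword leg (BM25)
            "match": {"body": question}
        },
        size=k,
    )
```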
