Latest news with #TempestaTechnologies


Forbes
4 days ago
- Business
How To Build Scalable And Affordable DDoS Protection
Alexander Krizhanovsky is CEO of Tempesta Technologies and the architect of Tempesta FW.

DDoS attacks are growing significantly. DDoS protection is now a commodity service, and you can protect your web application from DDoS attacks almost entirely, or even fully, for free. However, as your business and traffic grow, scaling DDoS protection becomes increasingly difficult.

Infrastructure: Cloud Or On-Premises

With modern cloud and CDN providers, you can get solid and affordable DDoS protection in most cases. Recently, the Forbes Technology Council Expert Panel discussed "Why On-Prem Data Centers Still Matter In The Cloud Era." Rather than repeat that conversation, let's consider several additional cases.

Big, respected vendors protect many thousands of clients, some of them quite large. This makes these vendors attractive targets for attackers looking to fine-tune their tools: once attackers succeed, they can potentially compromise many companies at once. Just look at the LWN case, which involved scraping bots, or review the technical details published by a company offering bot services that can bypass Cloudflare.

One of our clients, a digital bank, operates under strict security requirements and cannot share its TLS certificates with any third-party organization. The bank contracted with a DDoS scrubbing center and routed its network traffic through it. However, because the scrubber had no access to the bank's TLS certificates, the bank had to provide access logs for the scrubber to analyze in order to block application-level (L7) DDoS attacks. Like many others, the scrubbing center relied on a set of Python-based machine learning scripts to classify the log records, a process that took at least three minutes. For the bank, even a few minutes of downtime was unacceptable, and it ultimately adopted an on-premises solution.

DDoS protection services can be always-on or on-demand.
In an always-on setup, traffic is permanently routed through the mitigation infrastructure, ensuring minimal reaction time. But if you have plenty of traffic, e.g., if you distribute video content, it can cost a fortune. On-demand protection is more affordable and is typically triggered by traffic anomalies or alerts, but its response time can be unacceptably slow, often worse than the three-minute reaction time in the always-on scenario mentioned earlier.

To build your own DDoS protection, you need two key components: a robust infrastructure capable of handling massive incoming traffic, and filtering nodes. The cornerstone of a DDoS-resilient infrastructure is Anycast, which allows multiple nodes across the internet to share the same IP address. This effectively splits a large DDoS botnet into many smaller parts, with each protection node receiving only a portion of the malicious traffic. You can either build your own Anycast-enabled network or purchase Anycast services at relatively affordable prices.

Filtering Nodes: Proprietary Or Open Source

In theory, if you need to defend against a 1 Tbps DDoS attack, 10 nodes with 100 Gbps uplinks and Anycast IPs should suffice. In real-world scenarios, however, traffic is often concentrated in specific regions or networks, so you'll need more nodes and higher-capacity connections. The cost of such a setup, using DDoS protection appliances from vendors like F5, Fortinet or Imperva, can be substantial, especially since you'll need to upgrade the appliances regularly to keep up with the growing scale of DDoS attacks.

The alternative is a free, open-source software (FOSS) solution. This not only lowers costs but also lets you repurpose existing hardware for the protection setup; later, when it's time to scale up, you can reuse the DDoS filtering servers for other tasks.

The first layer to build is volumetric DDoS protection.
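The traffic-anomaly triggers mentioned above need not be complicated to be useful. As a rough sketch (not from the article; all names and thresholds are illustrative), an exponentially weighted moving average (EWMA) over per-second packet counts can flag a volumetric spike against a learned baseline:

```c
#include <stdbool.h>

/* Illustrative EWMA spike detector: keeps a smoothed baseline of
 * packets-per-second samples and flags a sample that exceeds the
 * baseline by a configurable factor. Names are hypothetical. */
struct ewma_detector {
    double baseline;   /* smoothed pps estimate */
    double alpha;      /* smoothing factor, 0 < alpha <= 1 */
    double threshold;  /* spike factor, e.g. 4.0 means 4x baseline */
    bool primed;       /* has the baseline been initialized? */
};

/* Feed one per-second packet count; returns true on a suspected spike. */
static bool ewma_feed(struct ewma_detector *d, double pps)
{
    if (!d->primed) {
        d->baseline = pps;
        d->primed = true;
        return false;
    }
    if (pps > d->baseline * d->threshold)
        return true; /* spike: don't fold attack traffic into the baseline */
    d->baseline = d->alpha * pps + (1.0 - d->alpha) * d->baseline;
    return false;
}
```

Since you know your own seasonal patterns, the `alpha` and `threshold` knobs can be tuned far more aggressively than a generic vendor default.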
This layer defends against SYN floods, UDP floods, amplification attacks and similar threats. These attacks don't require completed TCP or TLS handshakes, allowing attackers to send large volumes of packets while spoofing source IP addresses. This is why such attacks usually come with high traffic volumes and are not easy to block. However, protection against them can be accelerated using commodity hardware. For example, an XDP (eXpress Data Path) filter on a mid-level x86-64 server with a capable NIC can operate at 100 to 400 Gbps. A typical XDP filtering module, offering functionality comparable to a proprietary appliance, consists of just a few thousand lines of C/eBPF code.

Basic statistical methods are also effective at detecting huge traffic spikes, eliminating the need for advanced machine learning, at least initially. Moreover, since you understand your own traffic patterns, seasonal variations and baseline metrics, you're in a much better position to build an effective analytics engine than a vendor who isn't focused on your specific use case.

While volumetric attacks target link capacity and basic operating system functionality, application-level (L7) DDoS attacks focus on application servers, such as web servers, web applications and database servers. These attacks involve more sophisticated logic and typically operate at a much lower rate in terms of bits per second. There are plenty of online guides on how to tune Nginx, HAProxy and other common web proxies for L7 DDoS protection. However, modern L7 DDoS attacks can be extremely powerful. For example, Google observed the HTTP/2 Rapid Reset technique delivering 398 million requests per second. Mitigating such an attack would require thousands of servers running Nginx or HAProxy. To defend against this scale of threat, you need a web proxy with a more advanced network stack, such as one built on DPDK (kernel bypass) or an in-kernel OS-level solution.
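The core of such an XDP module is a per-packet verdict function. As a simplified userspace sketch of the kind of decision an XDP program returns to the kernel (this is not the article's code; the port list and helper names are illustrative), consider dropping UDP packets whose source port matches a well-known amplification vector:

```c
#include <stdint.h>

/* Verdicts mirroring the spirit of XDP's XDP_PASS / XDP_DROP codes. */
enum verdict { PASS, DROP };

/* Simplified per-packet decision: drop UDP packets whose source port
 * matches a classic amplification vector (DNS, NTP, LDAP, memcached).
 * Purely illustrative: a real filter would exempt flows initiated by
 * your own resolvers, e.g., via a BPF map of outstanding queries. */
static enum verdict udp_amplification_verdict(uint16_t src_port)
{
    static const uint16_t amp_ports[] = { 53, 123, 389, 11211 };
    unsigned i;

    for (i = 0; i < sizeof(amp_ports) / sizeof(amp_ports[0]); i++)
        if (src_port == amp_ports[i])
            return DROP;
    return PASS;
}
```

In a real deployment this logic lives in an eBPF program attached to the NIC driver, so the drop happens before the kernel allocates any socket buffers, which is what makes the 100+ Gbps figures achievable on commodity hardware.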
Conclusion

Building your own infrastructure and filtering nodes using FOSS is a sophisticated process that may involve some programming. But it offers stronger security, freedom from vendor lock-in, greater control over service quality for your clients and more predictable scaling in terms of cost.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?


Forbes
23-06-2025
Understanding And Improving Web Security Performance
Alexander Krizhanovsky is CEO of Tempesta Technologies and the architect of Tempesta FW.

The cornerstone of a secure web architecture is a web application firewall (WAF). A WAF is essentially a web proxy that sits in front of your web application, detecting and blocking web attacks and application-layer DDoS attacks. Although some vendors focus exclusively on web API security, modern next-generation WAFs typically include API protection as well. For simplicity, "WAFs" here refers to solutions that also handle API security.

Performance Issues In Web Security

Although WAFs are designed to protect web applications, they often suffer from low performance, leading not only to scalability limitations but also to security issues. A high-quality, deep-inspection WAF might process only 1,000 to 5,000 requests per second (RPS), depending on the vendor and workload. Under heavy load, latency at the 99.9th percentile and above can exceed one second. Given that an average web page triggers dozens of HTTP requests, WAF-induced latency may add 10 seconds or more to total page load time. In contrast, open-source web proxies like NGINX or HAProxy can easily handle 100,000 RPS with just a few milliseconds of latency at the 99.9th percentile.

The reason WAFs are so much slower is the intensive logic applied to every request and response: normalization of input data, hundreds of complex regular expressions, validation routines for various data types, logic tailored to specific frontend/backend frameworks and dozens of other things. Scaling a WAF to 100,000 RPS and beyond can be cost-prohibitive, especially for high-traffic infrastructures. Yet even smaller web services (100 to 1,000 RPS) can experience traffic surges from DDoS or AI-feeding bots.

Web Bots And DDoS Attacks

Advanced proxy services now help web bots (e.g., scrapers) avoid detection.
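The scaling gap is easy to quantify with back-of-envelope arithmetic using the figures cited above (the wave-based page-load model below is a simplifying assumption, not a measurement):

```c
/* Back-of-envelope math for the figures above (illustrative only). */

/* WAF instances needed to sustain a target request rate, rounded up. */
static int instances_needed(int target_rps, int per_instance_rps)
{
    return (target_rps + per_instance_rps - 1) / per_instance_rps;
}

/* Rough page-load latency if a page's requests are fetched in waves
 * of `parallelism` concurrent connections, each paying the per-request
 * latency. A crude model, but it shows how tail latency compounds. */
static double page_latency_ms(int requests_per_page, double per_request_ms,
                              int parallelism)
{
    int waves = (requests_per_page + parallelism - 1) / parallelism;
    return waves * per_request_ms;
}
```

At 5,000 RPS per instance, matching a single 100,000-RPS proxy takes 20 WAF instances; and 60 requests fetched 6 at a time behind a one-second-latency WAF yields roughly 10 seconds of page load time, consistent with the estimate above.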
Although bots often aren't intended to cause DDoS, their inaccurate logic in fetching information can have that effect. For instance, LWN recently experienced such a case. The AI industry's demand for data has also fueled the growth of bot-avoidance tools.

WAFs' DDoS Weakness

Any slow logic is a potential DDoS vector, WAFs included. A well-known example is regular expression denial of service (ReDoS), where inefficient regular expressions, or regexes, are exploited to spike CPU usage.

Rate limits are a basic but effective mitigation tool. However, they're hard to configure safely so they don't affect normal users, are vulnerable to misjudged thresholds and can be inflexible with unexpected marketing surges or application degradations. Thus, more sophisticated traffic analysis, often powered by machine learning (ML), is required. But ML-based systems need time (often several minutes) to observe and classify traffic before reacting. This means DDoS attacks can succeed temporarily before protection kicks in. In LWN's case, bots used a proxy network distributing requests across millions of IP addresses, with each address sending a single browser-mimicking request. Such attacks are hard to detect and often slip through defenses.

Architectural Limits Of Traditional WAFs

Most WAFs are built on general-purpose web servers like NGINX or Envoy, which are optimized for everyday workloads, not for aggressively filtering malicious traffic with minimal resource consumption. Common inefficiencies include redundant copies and repeated inspections of the same data. Also, the event-handling model of a web server isn't well suited to heavy request processing. Cloudflare, for example, has blogged extensively about optimizing and reworking NGINX internals to improve WAF performance.

Improving Web Security Performance

One solution is to introduce a lightweight application-level load balancer in front of the WAF.
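For reference, the rate limits discussed above are typically implemented as a token bucket: a counter refilled at a steady rate up to a burst capacity, decremented per request. A minimal sketch (illustrative; production limiters are usually per-client and lock-free, with names invented here):

```c
#include <stdbool.h>

/* Minimal token-bucket rate limiter. `rate` tokens are refilled per
 * second up to the `burst` capacity; each allowed request costs one. */
struct token_bucket {
    double tokens;   /* currently available tokens */
    double rate;     /* refill rate, tokens per second */
    double burst;    /* bucket capacity (allowed burst size) */
    double last_ts;  /* time of last refill, in seconds */
};

/* Returns true if a request arriving at time `now` should be allowed. */
static bool tb_allow(struct token_bucket *tb, double now)
{
    double elapsed = now - tb->last_ts;

    tb->last_ts = now;
    tb->tokens += elapsed * tb->rate;
    if (tb->tokens > tb->burst)
        tb->tokens = tb->burst;
    if (tb->tokens < 1.0)
        return false; /* over limit: drop, delay or challenge the request */
    tb->tokens -= 1.0;
    return true;
}
```

The two knobs map directly onto the configuration pain described above: `rate` encodes the misjudged-threshold risk, and `burst` is what a legitimate marketing surge blows through.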
This component classifies requests and forwards to the WAF only those needing inspection, such as requests for dynamic content, and offloads the WAF by caching responses. Ideally, the load balancer should be a lightweight WAF itself, able to block trivial attacks early. It should also come from a different vendor to reduce the risk of shared weaknesses that attackers could exploit to bypass protection.

It should also include caching to reduce backend load. To resist cache-targeted attacks (e.g., web cache deception), the balancer must support secure caching logic. Keeping only a small, hot subset of resources in RAM helps ensure resilience to (semi-)random-URL DDoS attacks by reducing disk I/O bottlenecks.

If your WAF uses ML for detection, it may benefit from visibility into all traffic, not just what passes through the cache. In such cases, extended access logs from the balancer should be fed into the WAF's ML pipeline. Lastly, the load balancer should include a programmable rules engine, allowing WAF rules to be offloaded when safe and improving efficiency by blocking bad traffic earlier.

Conclusion

Web security and performance are tightly linked aspects of any web application: It's impossible to achieve strong security on an underperforming infrastructure. The performance and scalability of WAFs are crucial for effective DDoS resistance. This becomes even more critical with the growth and diversity of AI-driven businesses employing advanced bots for scraping: these bots are hard to detect and may cause severe service degradation. Web accelerators not only mitigate the performance penalties introduced by WAFs but also enhance DDoS protection, enable cost-effective scalability and even improve the overall security of architectures that use WAFs.
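The classification step in front of the WAF can start as simply as a path check. A toy sketch of the forward-or-cache decision (all names and suffixes are hypothetical, not from the article):

```c
#include <string.h>
#include <stdbool.h>

/* Toy request classifier for a front load balancer: static assets are
 * served from the cache and never reach the WAF; everything else
 * (dynamic content, APIs) is forwarded for deep inspection. */
enum route { SERVE_FROM_CACHE, FORWARD_TO_WAF };

static bool has_suffix(const char *s, const char *suffix)
{
    size_t slen = strlen(s), plen = strlen(suffix);

    return slen >= plen && strcmp(s + slen - plen, suffix) == 0;
}

static enum route classify(const char *path)
{
    static const char *static_suffixes[] = {
        ".css", ".js", ".png", ".jpg", ".woff2"
    };
    unsigned i;

    for (i = 0; i < sizeof(static_suffixes) / sizeof(static_suffixes[0]); i++)
        if (has_suffix(path, static_suffixes[i]))
            return SERVE_FROM_CACHE;
    return FORWARD_TO_WAF; /* dynamic content: needs inspection */
}
```

Note that a suffix-only check is exactly the pattern web cache deception exploits (e.g., /account/profile/fake.css), which is why the article stresses secure caching logic: a real classifier must also validate the origin's cacheability headers and content type before trusting the URL.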