How a global botnet attacked a trading platform

  • Fabian Sinner
  • October 30, 2025


A pattern appeared in the monitoring dashboard that indicated a classic DDoS attack. A trading platform was the target of massive web requests: there were two distinct traffic spikes in quick succession, both large enough to bring down an unprotected website. Let’s take a closer look at the larger of the two traffic spikes. 

The measured peak values were around 16 million requests per minute with a bandwidth of 160 gigabytes per minute, or about 21 Gbit/s. The requests were successfully repelled. However, the attack pattern reveals how sophisticated today’s botnets have become and how seemingly minor details, such as the user agent, can play a crucial role.
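
As a quick sanity check, the two figures are consistent with each other when the per-minute bandwidth is measured in bytes. A minimal sketch (the input values come from the article; the conversion itself is ours):

```python
# Sanity check of the reported peak figures. The request rate is taken
# from the article; the bandwidth conversion assumes the 160-per-minute
# figure is in gigaBYTES, which matches the stated ~21 Gbit/s.
requests_per_minute = 16_000_000
gigabytes_per_minute = 160

requests_per_second = requests_per_minute / 60        # ~266,667 req/s
gigabits_per_second = gigabytes_per_minute * 8 / 60   # GB/min -> Gbit/s

print(f"{requests_per_second:,.0f} requests/s")
print(f"{gigabits_per_second:.1f} Gbit/s")  # 21.3 Gbit/s
```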

No lightning attack, but a calculated ramp-up

The traffic did not surge all at once; instead, it followed a characteristic curve, rising slowly and steadily over several minutes. This controlled escalation is typical of attackers who first probe their targets to ascertain whether a site is protected by mitigation systems.

Such “soft” starts are more sophisticated than they appear: a sudden peak is conspicuous, but a slow, gradual increase can trick systems that only respond to abrupt spikes in volume. The attack lasted less than half an hour.
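
A defense that only alarms on absolute spikes would miss exactly this pattern. A minimal sketch of detecting a sustained ramp instead (function name and thresholds are illustrative, not any real product’s API):

```python
def sustained_ramp(samples, window=5, growth=1.15):
    """Return True when traffic grew by at least `growth`x in each of
    the last `window` intervals -- a slow escalation that a fixed
    spike threshold would never trip on. Thresholds are illustrative."""
    if len(samples) < window + 1:
        return False
    tail = samples[-(window + 1):]
    return all(later >= earlier * growth
               for earlier, later in zip(tail, tail[1:]))

# Requests per minute climbing ~20% per interval: no single jump looks
# like a spike, yet the cumulative ramp more than doubles the load.
traffic = [100_000, 120_000, 145_000, 175_000, 210_000, 255_000]
print(sustained_ramp(traffic))          # True
print(sustained_ramp([100_000] * 10))   # False: flat traffic
```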

A botnet that spans the globe

The source IP addresses were distributed across multiple continents, including Indonesia, the US, China, Germany, India, Mexico, Brazil, Peru, and Russia. In total, more than 5,000 unique IPs were involved in the attack.

This broad spectrum suggests a large-scale, well-organized botnet. It was evident that cloud resources or powerful servers were used in several cases, as indicated by the high data throughput per host. It was particularly striking that a significant proportion of the requests came from the networks of large telecommunications providers. This suggests that compromised devices or servers within normal consumer networks were also involved. This is a classic feature of modern, “mixed” botnets. 
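
Profiling the sources of such an attack boils down to counting unique IPs and mapping their geographic spread. A minimal sketch, assuming some IP-to-country resolver is available (the `geo_lookup` callable here is a placeholder, not a specific library):

```python
from collections import Counter

def profile_sources(log_entries, geo_lookup):
    """Summarize where an attack comes from: the number of unique IPs
    and their spread across countries. `geo_lookup` stands in for any
    IP-to-country resolver (e.g. a local GeoIP database) -- it is an
    assumption here, not a named product."""
    ips = {entry["ip"] for entry in log_entries}
    by_country = Counter(geo_lookup(ip) for ip in ips)
    return len(ips), dict(by_country)

# Toy records standing in for real access logs.
logs = [{"ip": "203.0.113.7"}, {"ip": "198.51.100.2"}, {"ip": "203.0.113.7"}]
geo = {"203.0.113.7": "ID", "198.51.100.2": "US"}.get
unique_ips, spread = profile_sources(logs, geo)
print(unique_ips, spread)  # 2 unique IPs across 2 countries
```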

Attack on the root domain

The attackers focused their attack on the entire website, i.e., the root domain, which is the home page and the central access point. The apparent aim was to make the entire website inaccessible. 

The defense systems responded efficiently: suspicious IPs were automatically moved to a quarantine zone and blocked immediately, conserving resources and keeping latency low. Most requests received an HTTP 403 response. This shows that the defense at the application level worked consistently and effectively.
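
The key point of a quarantine zone is that a blocked IP is rejected before any expensive application work is done. A minimal sketch of the idea (not the platform’s actual implementation; names and the timeout value are assumptions):

```python
import time

QUARANTINE_SECONDS = 600            # illustrative value
quarantined: dict[str, float] = {}  # ip -> release timestamp

def handle_request(ip: str, is_suspicious) -> int:
    """Return an HTTP status code. Quarantined IPs get a cheap 403
    before any application logic runs, which is what keeps latency
    low for legitimate traffic."""
    now = time.monotonic()
    if quarantined.get(ip, 0) > now:
        return 403                  # already in the quarantine zone
    if is_suspicious(ip):
        quarantined[ip] = now + QUARANTINE_SECONDS
        return 403                  # newly quarantined
    return 200

print(handle_request("198.51.100.2", lambda ip: True))   # 403
print(handle_request("198.51.100.2", lambda ip: False))  # 403 (still quarantined)
print(handle_request("203.0.113.9", lambda ip: False))   # 200
```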

When the user agent becomes a signature

One of the characteristics of this attack was the uniformity of the so-called user agents, i.e., the identifiers that browsers send to servers to identify themselves as “Chrome,” “Safari,” or “Firefox,” for example. 

In this case, almost all requests used the same user agent string, a clear red flag. 

  • Real users use a wide variety of browsers, versions, and devices—in other words, many different user agents. 
  • If, on the other hand, thousands of IPs appear with the same user agent, this is an indication of automated requests, for example from a botnet or script. 
  • Modern protection systems recognize such patterns and block identical user agents when they come from many sources at the same time. 
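
One way to quantify the first bullet is Shannon entropy over the observed user agent strings. A short sketch (the sample strings are invented for illustration):

```python
import math
from collections import Counter

def ua_entropy(user_agents):
    """Shannon entropy (in bits) of the user agent distribution.
    Genuine traffic mixes many browsers, versions, and devices (high
    entropy); a botnet reusing one string collapses toward zero."""
    counts = Counter(user_agents)
    total = len(user_agents)
    return sum((c / total) * math.log2(total / c) for c in counts.values())

human_mix = ["Chrome/120", "Firefox/115", "Safari/17", "Edge/120"] * 25
botnet = ["Chrome/90"] * 100
print(f"{ua_entropy(human_mix):.2f} bits")  # 2.00 bits
print(f"{ua_entropy(botnet):.2f} bits")     # 0.00 bits
```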

Disguised attacks vary user agents dynamically

Today, attackers can easily generate lists of thousands of realistic-looking user agent strings, many of which are publicly available. In an advanced version of the attack, each bot would use a different, plausible user agent string, making it far harder to detect or block the traffic based on that signature.

In this case, however, the attackers stuck with two static user agents and thus failed at the first line of defense.

Why uniformity helps, but is also dangerous

For defenders, such uniformity is a stroke of luck. Simple rules such as “block all identical user agents with more than X requests per minute from different IPs” can work effectively. 
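
The quoted rule translates almost directly into code. A minimal sketch over one minute of (IP, user agent) records (the thresholds are illustrative placeholders for "X"):

```python
from collections import defaultdict

def blocked_user_agents(minute_log, max_rpm=10_000, min_ips=100):
    """Apply the rule above to one minute of (ip, user_agent) records:
    block a user agent that exceeded `max_rpm` requests AND came from
    at least `min_ips` distinct sources. Thresholds are illustrative."""
    requests = defaultdict(int)
    sources = defaultdict(set)
    for ip, ua in minute_log:
        requests[ua] += 1
        sources[ua].add(ip)
    return [ua for ua in requests
            if requests[ua] > max_rpm and len(sources[ua]) >= min_ips]

# 12,000 requests spread over many IPs, all with one identical string,
# next to a trickle of ordinary browser traffic.
log = [(f"10.1.{i % 200}.{i % 250}", "BotUA/1.0") for i in range(12_000)]
log += [("192.0.2.1", "Mozilla/5.0 Firefox/115.0")] * 30
print(blocked_user_agents(log))  # ['BotUA/1.0']
```

Requiring both conditions at once is what keeps the rule safe: a single busy proxy (many requests, one IP) and a popular browser (many IPs, moderate rate) each fail one half of the test.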

But the threat landscape is rapidly evolving. Botnet frameworks are already in circulation that use randomized browser signatures, fake headers, or even simulated mouse movements to appear like real users. 

This transforms the user agent from a simple identifying feature to one of the most important indicators of bot intelligence. 

Precise detection & lightning-fast mitigation

Learn more about a GDPR-compliant, cloud-based, and patented DDoS protection that delivers what it promises.

Rebalancing during the attack

The traffic graphs revealed short pauses or drops in data volume. It is likely that the botnet was being restructured during these moments. The providers may have blocked suspicious IPs, devices may have lost connection, or the attackers may have switched to new address ranges. 

This dynamic underscores the fact that this is a highly automated infrastructure. Botnet operators recognize in real time which parts have been blocked and automatically replace them with new resources. This requires technical control and resources. This was not the work of “script kiddies,” but rather a calculated, large-scale attack with global reach.

What can companies learn from this example?

This attack may seem less harmful compared to complex zero-day exploits, but it impressively demonstrates that even simple mechanisms – when scaled globally – can cause significant disruption. 

For web platform operators, this means: 

  • Bot management is a must, not an option. Only those who intelligently detect automated access can separate legitimate users from malicious bots and protect system resources. 
  • User agent analyses provide valuable early warning signals. Uniform browser identifiers across many IPs almost always point to automation. 
  • Quarantine and rate limit rules prevent waste of resources. Suspicious IPs should be taken out of circulation at an early stage. 
  • Secure forensic data. Short packet captures and log excerpts help analyze patterns and trace attack paths. 
  • Keep an eye on cloud providers. Since many attacks exploit virtual machines in public clouds, targeted monitoring is critical for early detection of abuse. 

The underestimated ally: the user agent

What began as a classic DDoS attack became a lesson in automation, reusability, and identity on the internet. The user agent, which is often overlooked or ignored, was the key to understanding and successfully defending against the attack in this case. 

It showed that millions of requests were not genuine user activity, but rather the coordinated actions of a globally synchronized botnet. Modern internet security depends not only on bandwidth or hardware, but also on artificial intelligence, pattern recognition, and the continuous learning of defense systems.

Are your protection systems ready for the next wave?

We help companies reliably detect botnet traffic, refine user agent-based detection mechanisms, and develop flexible defense strategies. 

Contact us for a no-obligation security analysis before the next spike hits. 

Contact us now >>

 
