A pattern appeared in the monitoring dashboard that indicated a classic DDoS attack. A trading platform was the target of a massive flood of web requests: two distinct traffic spikes hit in quick succession, each large enough to bring down an unprotected website. Let’s take a closer look at the larger of the two.
The measured peak values were around 16 million requests per minute with a bandwidth of 160 gigabytes per minute, i.e., 160 × 8 gigabits spread over 60 seconds, or about 21 Gbit/s. The requests were successfully repelled. However, the attack pattern reveals how sophisticated today’s botnets have become and how seemingly minor details, such as the user agent, can play a crucial role.
The traffic did not surge all at once; instead, it followed a characteristic curve, rising slowly and steadily over several minutes. This controlled escalation is typical of attackers who first probe their targets to find out whether a site is protected by mitigation systems.
Such “soft” starts are more sophisticated than they appear: a sudden peak is immediately noticeable, but a slow, gradual increase can slip past mitigation systems that only respond to abrupt spikes in volume. The attack lasted less than half an hour.
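To make the difference concrete, here is a minimal Python sketch of why a naive jump detector misses a gradual ramp while a comparison against a pre-ramp baseline catches it. All numbers and function names are illustrative assumptions, not values from the incident:

```python
def gradual_ramp(minutes=20, base=10_000, growth=1.4):
    """Simulated attack traffic: requests per minute rising ~40% each minute."""
    return [int(base * growth ** m) for m in range(minutes)]

def abrupt_only(history, jump=5.0):
    """Naive rule: alert only on a sudden minute-over-minute jump."""
    return [t for t in range(1, len(history))
            if history[t] >= jump * history[t - 1]]

def vs_baseline(history, warmup=5, factor=10.0):
    """Alert once traffic exceeds a baseline learned before the ramp began."""
    baseline = sum(history[:warmup]) / warmup
    return [t for t in range(warmup, len(history))
            if history[t] >= factor * baseline]

traffic = gradual_ramp()
print(abrupt_only(traffic))   # [] -- no single 5x jump, the ramp stays invisible
print(vs_baseline(traffic))   # [10, 11, ..., 19] -- fires as the curve leaves the band
```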
The source IP addresses were distributed across multiple continents including Indonesia, the US, China, Germany, India, Mexico, Brazil, Peru, and Russia. In total, more than 5,000 unique IPs were involved in the attack.
This broad spread points to a large-scale, well-organized botnet. The high data throughput per host indicates that cloud resources or powerful servers were involved in several cases. Particularly striking was that a significant proportion of the requests came from the networks of large telecommunications providers, suggesting that compromised devices or servers within ordinary consumer networks played a part as well. This is a classic feature of modern, “mixed” botnets.
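One way such provider-level patterns surface is by aggregating traffic per originating network (ASN) rather than per IP. A minimal sketch; the `lookup_asn` helper, the sample addresses, and the ASN labels are hypothetical stand-ins for a real ASN/GeoIP database:

```python
from collections import Counter

def lookup_asn(ip: str) -> str:
    """Hypothetical stand-in for a real ASN/GeoIP lookup."""
    demo = {
        "203.0.113.7": "AS64500 (example telecom ISP)",
        "198.51.100.9": "AS64501 (example cloud provider)",
    }
    return demo.get(ip, "unknown")

def top_source_networks(requests, n=10):
    """Aggregates request counts per originating network instead of per IP."""
    return Counter(lookup_asn(ip) for ip, _bytes in requests).most_common(n)

sample = [("203.0.113.7", 1_500), ("198.51.100.9", 52_000), ("203.0.113.7", 900)]
print(top_source_networks(sample))
```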
The attackers focused on the root domain, i.e., the home page and the central access point, with the apparent aim of making the entire website inaccessible.
The defense systems responded efficiently: suspicious IPs were automatically moved to a quarantine zone and blocked immediately, conserving resources and keeping latency low. Most requests received an HTTP 403 response, showing that the defense at the application level worked consistently and effectively.
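Conceptually, this two-stage defense (a quarantine set plus an application-level 403) fits in a few lines. The `score_ip` function and its threshold are assumptions for illustration, not the platform’s actual logic:

```python
QUARANTINE = set()

def handle_request(ip, score_ip):
    """Quarantined IPs are rejected before any expensive application work,
    which conserves resources and keeps latency low for legitimate users."""
    if ip in QUARANTINE:
        return 403, "Forbidden"
    if score_ip(ip) > 0.9:       # reputation/behavior score, assumed provided
        QUARANTINE.add(ip)       # all future requests from this IP are cut off
        return 403, "Forbidden"
    return 200, "OK"

# Example: a toy scorer that distrusts one address
print(handle_request("203.0.113.7", lambda ip: 1.0))   # (403, 'Forbidden')
print(handle_request("198.51.100.9", lambda ip: 0.1))  # (200, 'OK')
```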
One of the characteristics of this attack was the uniformity of the so-called user agents, i.e., the identifiers that browsers send to servers to identify themselves as “Chrome,” “Safari,” or “Firefox,” for example.
In this case, almost all requests used one of just two static user agent strings, a clear red flag.
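Checking for this kind of uniformity is cheap: if a single user agent accounts for an implausibly large share of traffic, something is wrong. A minimal sketch; the 90% dominance threshold and the sample UA strings are illustrative assumptions:

```python
from collections import Counter

def dominant_user_agent(user_agents, threshold=0.9):
    """Returns (ua, share) if one UA string dominates the traffic mix.
    Organic traffic spreads across many browsers and versions."""
    ua, hits = Counter(user_agents).most_common(1)[0]
    share = hits / len(user_agents)
    return (ua, share) if share >= threshold else None

uas = ["BotAgent/1.0"] * 9_500 + ["Mozilla/5.0 (various)"] * 500
print(dominant_user_agent(uas))  # ('BotAgent/1.0', 0.95) -> clearly not organic
```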
Today, attackers can easily generate lists of thousands of realistic-looking user agent strings, many of which are publicly available. In a more advanced version of this attack, each bot would use a different, plausible user agent string, making it far harder to detect or block the traffic based on its signature.
In this case, however, the attackers stuck with two static user agents and thus failed at the first line of defense.
For defenders, such uniformity is a stroke of luck. Simple rules such as “block all identical user agents with more than X requests per minute from different IPs” can work effectively.
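That rule translates almost directly into code. A sketch over one minute of access-log entries; the thresholds `max_rpm` and `min_ips` are illustrative placeholders for the X in the rule:

```python
from collections import defaultdict

def suspicious_user_agents(log_minute, max_rpm=10_000, min_ips=100):
    """log_minute: iterable of (ip, user_agent) pairs from one minute of logs.
    Returns UA strings requested too often AND from too many distinct IPs."""
    hits = defaultdict(int)   # requests per UA string
    ips = defaultdict(set)    # distinct source IPs per UA string
    for ip, ua in log_minute:
        hits[ua] += 1
        ips[ua].add(ip)
    return [ua for ua in hits
            if hits[ua] > max_rpm and len(ips[ua]) >= min_ips]
```

In production, such a rule would run over a sliding window in the log pipeline or WAF rather than as a batch job, but the core logic stays this simple.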
But the threat landscape is rapidly evolving. Botnet frameworks are already in circulation that use randomized browser signatures, fake headers, or even simulated mouse movements to appear like real users.
This transforms the user agent from a simple identifying feature into one of the most important indicators of bot intelligence.
The traffic graphs revealed short pauses or drops in data volume. It is likely that the botnet was being restructured during these moments. The providers may have blocked suspicious IPs, devices may have lost connection, or the attackers may have switched to new address ranges.
This dynamic underscores how highly automated the infrastructure is: botnet operators recognize in real time which parts have been blocked and automatically replace them with new resources. That degree of coordination requires considerable technical control and resources. This was not the work of “script kiddies,” but rather a calculated, large-scale attack with global reach.
This attack may seem less harmful compared to complex zero-day exploits, but it impressively demonstrates that even simple mechanisms – when scaled globally – can cause significant disruption.
For web platform operators, the takeaway is clear: what began as a classic DDoS attack became a lesson in automation, reusability, and identity on the internet. The user agent, often overlooked or ignored, was the key to understanding and successfully defending against this attack.
It showed that millions of requests were not genuine user activity, but rather the coordinated actions of a globally synchronized botnet. Modern internet security depends not only on bandwidth or hardware, but also on artificial intelligence, pattern recognition, and the continuous learning of defense systems.
We help companies reliably detect botnet traffic, refine user agent-based detection mechanisms, and develop flexible defense strategies.
Contact us for a no-obligation security analysis before the next spike hits.