We have been observing an unusual pattern in the IT environment of a public sector customer for quite some time now. The number of unique IP addresses has jumped from a peak-time level of 2,000 to 2,500 to a regular range of 5,000 to 8,000. With two to three times as many apparent visitors, one would expect requests and bandwidth to grow roughly in proportion. Instead, request volume and bandwidth are increasing only moderately (by a factor of 1.3 to 1.7), while the number of unique IPs stays persistently at this elevated level.
This is more than a minor statistical detail. It points to a bot-driven pattern with an unexpected structure.
Since the cutoff date, the customer has experienced a sustained increase in unique IP addresses; this is not a short-lived spike.
The metrics at a glance: the number of unique IPs has tripled at times, yet requests are up only 50 to 70 percent and bandwidth only 30 to 70 percent. At the same time, the origin server’s response times deteriorate: higher averages, more peaks, and occasional 500 errors. Notably, there are no signs of bulk content downloads (e.g. images or scripts), and the shift in the traffic profile is subtle and partly contradictory.
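The divergence becomes obvious once the raw counters are put in relation to each other. The following sketch uses illustrative placeholder numbers (not the customer’s actual exports) to show the two ratios worth watching, requests per unique IP and bytes per request:

```python
# Sketch: relate unique IPs, requests and bandwidth to each other.
# The numbers below are illustrative placeholders, not measured customer data.

daily_metrics = [
    # (period, unique_ips, requests, bytes_transferred)
    ("baseline", 2_300, 1_150_000, 46_000_000_000),
    ("observed", 6_500, 1_900_000, 64_000_000_000),
]

for period, ips, requests, traffic in daily_metrics:
    print(
        f"{period:>8}: {requests / ips:7.0f} requests/IP, "
        f"{traffic / requests:7.0f} bytes/request"
    )

# A drop in requests per IP combined with a drop in bytes per request is the
# signature described above: many new clients that each do very little and
# mostly fetch lightweight (HTML-only) responses.
```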
When the number of real users increases, request volume and bandwidth typically rise in proportion. A browser loads not only HTML pages, but also images, fonts, stylesheets, scripts, and videos. This results in a significantly higher load per visit.
Bots behave differently. They are usually interested in the HTML – the “raw text” of the page – and ignore everything necessary for visual rendering.
This explains why the number of unique IP addresses increases dramatically without the bandwidth increasing at the same rate.
Although bots that only load HTML generate minimal traffic compared to real browsers, when thousands of them act simultaneously, they can have a significant impact on server resources.
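One way to make this visible in practice is to check, per client IP, whether any static assets were requested at all. A minimal sketch over a standard combined access log; the field layout, asset suffixes, and the minimum-hit threshold are assumptions to adapt to the actual environment:

```python
import re
from collections import defaultdict

# Sketch: flag IPs that only ever request HTML/dynamic endpoints and never
# load images, stylesheets, scripts or fonts.

ASSET_RE = re.compile(r"\.(?:css|js|png|jpe?g|gif|svg|woff2?|ico)(?:\?|$)", re.I)
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)')

def html_only_ips(log_lines):
    asset_hits = defaultdict(int)
    total_hits = defaultdict(int)
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, path = m.groups()
        total_hits[ip] += 1
        if ASSET_RE.search(path):
            asset_hits[ip] += 1
    # Require a few hits to ignore one-off visitors (arbitrary threshold);
    # IPs that never touched a single asset are likely headless clients.
    return [ip for ip, total in total_hits.items()
            if total >= 5 and asset_hits[ip] == 0]

with open("access.log", encoding="utf-8", errors="replace") as fh:
    suspects = html_only_ips(fh)
print(f"{len(suspects)} IPs requested HTML only")
```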
The combination of many unique IP addresses and a relatively small increase in bandwidth is consistent with the hypothesis of a botnet that primarily sends spartan HTTP requests, such as search queries or form submissions.
Possible Variants:
Whichever variant applies, the result is thousands of short-lived connections that transfer very little data but still consume significant resources.
Even a simple botnet can produce the effect of a Slowloris attack through the sheer number of connections alone.
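A rough back-of-the-envelope calculation shows why the payload size barely matters. The figures below (worker-pool size, service time, per-bot request rate) are hypothetical assumptions, not measured values:

```python
# Sketch: how many lightweight clients does it take to saturate the origin?
# All figures are illustrative assumptions.

worker_slots = 400       # concurrent requests the origin can process
service_time_s = 0.25    # average time a dynamic (search/form) request occupies a slot

capacity_rps = worker_slots / service_time_s    # sustainable requests per second
print(f"Origin saturates at roughly {capacity_rps:.0f} requests/s")

bots = 6_000
requests_per_bot_per_minute = 20                # a very modest rate per source
offered_rps = bots * requests_per_bot_per_minute / 60
print(f"{bots} bots at {requests_per_bot_per_minute}/min offer "
      f"{offered_rps:.0f} requests/s ({offered_rps / capacity_rps:.0%} of capacity)")
```

Even at twenty requests per minute and a few hundred bytes per response, a few thousand sources can fully occupy the origin’s processing capacity.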
Real browsers are “cooperative.” They keep connections open, bundle requests, use HTTP/2 multiplexing, and efficiently process content. Bots, on the other hand, open many small sessions, often without keep-alive or compression, and only request bare HTML pages.
The result is a high connection volume with low bandwidth, which matches exactly what we see in the customer’s logs. Notably, many of these clients retrieve neither images nor scripts, and they do not interact with forms or cookies. This confirms that these are not real users but automated crawlers or attack scripts that systematically scan endpoints or run stress tests.
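This connection behaviour can be quantified if the web server logs connection reuse; nginx, for example, exposes `$connection` and `$connection_requests` as log variables. A sketch, assuming a custom log format whose last three whitespace-separated fields are client IP, connection id, and the request counter on that connection:

```python
from collections import defaultdict
from statistics import mean

# Sketch: average requests per TCP connection, per client IP.
# Assumes each log line ends with "<client ip> <connection id> <requests so far>".

max_reqs_per_conn = defaultdict(dict)   # ip -> {connection id: highest counter seen}

with open("access_conn.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        try:
            ip, conn_id, conn_reqs = line.rsplit(maxsplit=3)[-3:]
            conn_reqs = int(conn_reqs)
        except ValueError:
            continue
        seen = max_reqs_per_conn[ip]
        seen[conn_id] = max(seen.get(conn_id, 0), conn_reqs)

# Browsers reuse connections (high requests/connection); many bots sit near 1.
for ip, conns in sorted(max_reqs_per_conn.items(),
                        key=lambda kv: mean(kv[1].values()))[:20]:
    print(f"{ip:>15}: {mean(conns.values()):5.1f} requests/connection "
          f"over {len(conns)} connections")
```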
The current monitoring data only provides trends; it lacks detail about the behaviour of individual connections. For example, it is unclear whether requests were short-lived or long-lasting, how much data was transferred per session, and whether the clients behaved like humans or like machines.
Understanding this requires detailed network observation, i.e., collecting metrics that record individual sessions, request types, and response times. Additionally, short traffic recordings (packet captures) or higher-resolution flow data help identify patterns and repetitions.
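What such an evaluation can look like: the sketch below aggregates exported flow records into per-client session profiles. The CSV column names and the thresholds for “machine-like” are assumptions; map them to whatever the collector (NetFlow/IPFIX/sFlow) actually provides:

```python
import csv
from collections import defaultdict
from statistics import mean, median

# Sketch: per-client session profile from exported flow records.

profiles = defaultdict(list)   # src ip -> list of (duration, bytes, packets)

with open("flows.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        if row["dst_port"] not in ("80", "443"):
            continue
        profiles[row["src_ip"]].append(
            (float(row["duration_s"]), int(row["bytes"]), int(row["packets"]))
        )

for ip, flows in profiles.items():
    durations, sizes, packets = zip(*flows)
    # Many very short, very small sessions per IP is the machine-like pattern;
    # human browsing produces fewer, longer, heavier sessions.
    if len(flows) > 50 and median(sizes) < 20_000 and median(durations) < 2.0:
        print(f"{ip}: {len(flows)} flows, "
              f"median {median(sizes)} bytes / {median(durations):.2f} s, "
              f"mean {mean(packets):.1f} packets")
```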
Only then can one determine with certainty whether the activity is the result of coordinated bots, misconfigurations, or targeted stress tests. Bot management systems or web application firewalls (WAFs) could help, as they detect bots based on characteristics such as JavaScript execution, browser behaviour, and cookie usage.
This is essential to reliably distinguish human access from automated traffic.
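The signals such systems evaluate can be approximated with a simple score. The sketch below is a hypothetical per-request heuristic, not a description of any particular WAF product; the checks and weights are arbitrary illustrations:

```python
# Sketch: a naive bot score per request, approximating the signals a WAF or
# bot-management system evaluates. Thresholds and weights are illustrative.

def bot_score(request) -> int:
    score = 0
    headers = {k.lower(): v for k, v in request["headers"].items()}

    if "cookie" not in headers:                 # never returns session cookies
        score += 2
    if "accept-language" not in headers:        # real browsers send this
        score += 1
    if not request.get("js_challenge_passed"):  # no JavaScript execution observed
        score += 3
    if request.get("assets_loaded", 0) == 0:    # HTML only, no images/CSS/JS
        score += 2
    ua = headers.get("user-agent", "")
    if not ua or "python" in ua.lower() or "curl" in ua.lower():
        score += 2
    return score                                # e.g. >= 5 -> challenge or rate-limit


example = {
    "headers": {"User-Agent": "curl/8.5.0", "Accept": "*/*"},
    "js_challenge_passed": False,
    "assets_loaded": 0,
}
print(bot_score(example))   # 2 + 1 + 3 + 2 + 2 = 10: clearly machine-like
```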
The following pragmatic steps are recommended to stabilize the situation and gain insights:
This case represents a new hybrid bot phenomenon. It combines a high number of sources, low bandwidth per request and sustained stress on dynamic endpoints over a period of weeks. Classic threshold alarms are insufficient, and an adaptive monitoring and protection concept is required.
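One consequence for monitoring: alert on relationships between metrics rather than on absolute counters. A minimal sketch of such a ratio-based check, with window length, sensitivity, and sample data chosen purely for illustration:

```python
from collections import deque
from statistics import mean, stdev

# Sketch: adaptive alerting on the requests-per-unique-IP ratio instead of a
# fixed request threshold. Window length and sensitivity are assumptions.

class RatioWatch:
    def __init__(self, window: int = 24, sensitivity: float = 3.0):
        self.history = deque(maxlen=window)   # e.g. 24 hourly samples
        self.sensitivity = sensitivity

    def observe(self, requests: int, unique_ips: int) -> bool:
        ratio = requests / max(unique_ips, 1)
        alert = False
        if len(self.history) >= 8:
            mu, sigma = mean(self.history), stdev(self.history)
            # Alert when the ratio collapses well below its recent baseline:
            # many new sources that each do very little.
            alert = sigma > 0 and (mu - ratio) > self.sensitivity * sigma
        self.history.append(ratio)
        return alert

watch = RatioWatch()
baseline = [(1_100_000, 2_250), (1_210_000, 2_400), (1_150_000, 2_300)] * 4
for requests, ips in baseline + [(1_900_000, 6_500)]:
    if watch.observe(requests, ips):
        print("ratio anomaly: far more sources than the request volume explains")
```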
This incident demonstrates that network resilience is not achieved through rigid rules. It requires the ability to recognize behavioural patterns, trigger automated countermeasures, and adapt those measures dynamically.
Contact us if you want to understand how resilient your services are against such subtle bot activity. We support you with the analysis and the design of protection strategies so that your infrastructure remains stable even under quiet but persistent attacks.