DDoS PPS vs Gbps explained: why packet rate matters
Learn why a DDoS attack can be dangerous at low Gbps but high PPS, and how packet rate changes capacity planning for routers, firewalls, servers and Anti-DDoS platforms.
Gbps is the visible number in most DDoS discussions, but PPS often explains why a service collapses. A flood can be “small” in bandwidth and still overload packet processing, interrupts, firewall state or routing logic.
Teams that buy Anti-DDoS protection should read both metrics. Gbps tells how much capacity is consumed; PPS tells how many packet decisions must be made every second. A credible design needs headroom for both.
Gbps measures the amount of data per second. PPS measures the number of packets per second. During DDoS, those two numbers can move independently: large packets create volume, small packets create processing pressure.
A 5 Gbps attack with tiny packets can be harder on a server than a 50 Gbps attack made of larger packets, because each packet triggers parsing, queueing, counter updates, ACL checks or state decisions.
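The gap between the two metrics is simple arithmetic. A minimal sketch (the packet sizes are illustrative, and the model ignores Ethernet preamble, FCS and inter-frame gap, so real wire-rate PPS for small packets is somewhat lower):

```python
def pps(gbps: float, packet_bytes: int) -> float:
    """Packets per second implied by a bandwidth and a packet size.

    Simplified model: packet_bytes * 8 bits per packet, no layer-1
    framing overhead.
    """
    bits_per_packet = packet_bytes * 8
    return gbps * 1e9 / bits_per_packet

# 5 Gbps of 64-byte packets vs 50 Gbps of 1400-byte packets
small_flood = pps(5, 64)      # ~9.77 Mpps
large_flood = pps(50, 1400)   # ~4.46 Mpps
print(f"{small_flood / 1e6:.2f} Mpps vs {large_flood / 1e6:.2f} Mpps")
```

The "small" 5 Gbps flood forces more than twice as many per-packet decisions per second as the 50 Gbps one, which is exactly why bandwidth graphs alone can mislead.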
PPS matters because routers, firewalls, NIC queues and kernels all have packet processing limits. Once those limits are reached, latency rises, packet loss appears and legitimate sessions fail even if the uplink is not full.
For gaming, the symptom can look like lag. For hosting, it can look like random VPS outages. For transit customers, it can create unexpected CPU pressure on equipment that was sized only by bandwidth.
Capacity planning should combine port speed, filtering throughput, packet-rate limits, queue layout and upstream relief. Looking only at bandwidth leads to overconfidence.
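One way to make that combination concrete is to check an attack profile against both limits at once. A minimal sketch with hypothetical device figures (the 100 Gbps port and 30 Mpps filtering budget are illustrative, not vendor data):

```python
def first_bottleneck(attack_gbps: float, attack_mpps: float,
                     port_gbps: float, filter_mpps: float) -> str:
    """Report which limit an attack profile exhausts first.

    Utilisation is attack load divided by the corresponding limit;
    whichever ratio is higher is the limit that gives way first.
    """
    bw_util = attack_gbps / port_gbps
    pps_util = attack_mpps / filter_mpps
    if bw_util >= 1 or pps_util >= 1:
        worst = "bandwidth" if bw_util >= pps_util else "packet rate"
        return f"overloaded: {worst} limit exceeded"
    return f"ok (bandwidth {bw_util:.0%}, packet rate {pps_util:.0%})"

# Hypothetical box: 100 Gbps port, 30 Mpps filtering budget
print(first_bottleneck(8, 40, 100, 30))   # small in Gbps, fails on PPS
print(first_bottleneck(80, 10, 100, 30))  # large in Gbps, PPS is fine
```

A plan that only checks the first ratio is the overconfidence the paragraph above warns about.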
High-PPS filtering benefits from early drops, simple hot paths, upstream FlowSpec or ACL help when useful, and clear separation between volumetric mitigation and deeper service logic.
Peeryx treats Gbps and PPS as two different risk indicators. Volumetric traffic must be reduced before it fills links, while high-PPS noise must be handled before it burns CPU on the protected endpoint.
This distinction matters for protected IP transit, dedicated protected servers and gaming proxies, because each model has a different bottleneck and a different clean-traffic delivery path.
A customer sees only 8 Gbps on graphs but the firewall becomes unstable. The real problem is 12 Mpps of small UDP packets. Buying a bigger port alone would not fix the firewall path; filtering must happen earlier and with less stateful work.
Another customer receives 80 Gbps of larger packets. The port is the first bottleneck, so upstream capacity and filtering traffic before it reaches the port matter more than local CPU tuning.
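The two cases can be told apart from graphs alone: dividing observed bandwidth by observed packet rate gives the average packet size, which shows whether the flood is a volume problem or a processing problem. A small sketch using the first customer's figures:

```python
def avg_packet_bytes(gbps: float, mpps: float) -> float:
    """Average packet size implied by observed bandwidth and packet rate."""
    return gbps * 1e9 / (mpps * 1e6 * 8)

# First case: 8 Gbps at 12 Mpps
print(avg_packet_bytes(8, 12))   # ~83 bytes: a small-packet flood
```

An average near the minimum frame size points at the firewall and CPU path; an average near the MTU points at the port and upstream capacity.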
The first mistake is to advertise only Tbps and ignore Mpps. The second is to test with synthetic large packets and assume the result applies to real attack traffic.
The third is to place a stateful firewall in front of everything. Stateful devices are useful, but during high-PPS floods they can become the bottleneck that attackers wanted to hit.
The best SEO-friendly answer is also the best engineering answer: explain the attack type, show the operational impact and choose the mitigation model that matches the real service.
Peeryx is designed around upstream relief, clean traffic delivery and practical handoff models, not only a marketing capacity number.
The same platform can protect transit, dedicated infrastructure, VPS-like services and gaming flows with different delivery paths.
The objective is to keep a service usable during an attack, with rules and topology that operators can actually understand.
Is a DDoS attack dangerous only at high Gbps?
No. Smaller high-PPS or protocol-specific attacks can break services even when bandwidth looks acceptable.
Can clean traffic be delivered to existing infrastructure?
Often yes. Depending on routing and topology, clean traffic can be delivered through tunnel, cross-connect, protected IP path or proxy.
Do gaming services need specialized filtering?
Yes. Game protocols often use UDP and latency-sensitive queries, so generic filtering can break legitimate players.
Should you choose protected transit or a protected server?
Protected transit fits networks and prefixes; a protected server or VPS is simpler when you want hosted infrastructure with protection included.