
How do you mitigate a DDoS attack above 100Gbps?

A credible “mitigation ddos 100gbps” design is not just a capacity number. It has to account for link saturation, packet-rate pressure, CPU limits, upstream pre-filtering, the filtering server and clean traffic handoff.

100Gbps changes the problem

At that point the question is no longer only “can I filter?”, but “can my architecture stay alive?”

Gbps is not the whole story

Attacks may break a service through bandwidth, packet rate or CPU cost before the app can react.

Upstream relief matters

Pre-filtering exists to do coarse reduction so that more intelligent layers remain stable.

Good mitigation is layered

Upstream relief, dedicated filtering, clean handoff and more specific logic behind it should work together.

The query “mitigation ddos 100gbps” attracts serious readers because it touches a real breakpoint. Above 100Gbps, many theoretical designs stop looking credible. A headline capacity number is no longer enough. You need to explain how traffic enters the protected path, where it is reduced, what is filtered more precisely, and how legitimate traffic is handed back to production.

At that level, the real question is not only “how many Gbps can you absorb?”. The real question is “how do you keep the service usable when the noise becomes massive?”. That is where architecture makes the difference.

Why 100Gbps is such a strong psychological boundary

The 100Gbps mark is psychological because everybody instantly translates it into infrastructure risk: a single 100G port can saturate, transit capacity can fill, and teams can no longer think only in terms of “server + firewall”.

Even if the real outcome depends on bursts, packet mix and topology, that threshold forces the conversation toward ports, handoff, upstream mitigation and clean traffic return. That is where serious buyers start separating marketing from real design.

Volumetric and application-layer attacks do not belong to the same layer

A volumetric attack first targets the network path: bandwidth, buffers, flow handling and packet rate. An application-layer attack aims to exhaust service logic, proxies or application resources. Both can coexist, but they should not be handled in the same place.

When talking about more than 100Gbps, the first priority is usually to survive the volume and the PPS. If the link collapses before that, your smartest L7 logic never gets a chance to work.

  • Volumetric pressure has to be handled first at absorption level.
  • Application-aware filtering comes afterwards and needs more context.
  • Mixing both too early leads either to collateral damage or wasted CPU.

Link saturation, PPS saturation and CPU saturation are three different failures

A DDoS does not break a service in only one way. It may fill the link, overwhelm the path with packets per second or push too much CPU cost into the mitigation logic. That is why a simple capacity number says very little without an architecture behind it.

Link saturation

The port or transit fills before deeper analysis can happen.

PPS saturation

Packet rate becomes the real killer even when Gbps still looks manageable.

CPU saturation

The filtering stack sees the traffic but burns too many cycles to stay stable.
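The second failure mode is the easiest to underestimate. A quick back-of-the-envelope calculation shows why packet rate can become the real constraint long before the Gbps figure looks scary:

```python
# Rough packet-rate math: why PPS, not Gbps, is often the real constraint.
# On the Ethernet wire, each frame carries 20 extra bytes of overhead
# (7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap).
WIRE_OVERHEAD = 20

def pps_at_line_rate(gbps: float, frame_bytes: int) -> float:
    """Packets per second needed to fill a `gbps` link with `frame_bytes` frames."""
    bytes_on_wire = frame_bytes + WIRE_OVERHEAD
    return gbps * 1e9 / 8 / bytes_on_wire

# 100Gbps of minimum-size 64-byte floods vs full 1500-byte streams:
print(f"{pps_at_line_rate(100, 64) / 1e6:.1f} Mpps")    # ~148.8 Mpps
print(f"{pps_at_line_rate(100, 1500) / 1e6:.1f} Mpps")  # ~8.2 Mpps
```

Same 100Gbps, roughly an 18x difference in per-packet work. A stack sized for large packets can collapse on small-packet floods well below its advertised bandwidth.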

The role of upstream pre-filtering

Upstream pre-filtering exists to do coarse reduction. Its job is not to decide all legitimacy by itself, but to remove patterns that are obvious enough so that massive noise does not hit the most expensive stages.

That is often the best cost/performance point: send less raw noise to the filtering server, preserve link headroom and keep room for the traffic that actually needs intelligence.
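As a rough illustration, coarse pre-filtering reduces to a few cheap, unambiguous rules. The sketch below is hypothetical: the `Packet` fields, service ports and rules are illustrative assumptions, not a real Peeryx interface.

```python
# Hypothetical coarse upstream decision: drop only the obvious noise,
# pass everything ambiguous down to the more precise filtering stage.
from dataclasses import dataclass

@dataclass
class Packet:
    proto: str        # "udp" / "tcp" / "icmp"
    dst_port: int
    is_fragment: bool

# Only the ports the protected service actually uses (assumed example values).
SERVICE_PORTS = {443, 27015}

def coarse_decision(pkt: Packet) -> str:
    """Return 'drop' for obvious noise, 'pass' for everything else.
    Legitimacy is NOT fully decided here; ambiguous traffic always passes."""
    if pkt.is_fragment:                                    # fragment floods: cheap drop
        return "drop"
    if pkt.proto == "udp" and pkt.dst_port not in SERVICE_PORTS:
        return "drop"                                      # UDP to unused ports
    return "pass"                                          # let the filtering server decide

print(coarse_decision(Packet("udp", 53, False)))   # drop
print(coarse_decision(Packet("tcp", 443, False)))  # pass
```

Note the asymmetry: rules here only drop what is obviously wrong for this service. Anything requiring context survives to the next stage, which is exactly what keeps upstream relief cheap and safe.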

The role of a filtering server

The filtering server is the more precise stage. It receives traffic that has already been reduced, applies sharper signatures, keeps useful visibility and prepares clean delivery back to production.

It is also where you can connect more specific logic: custom pre-filtering, an XDP engine, a DPDK dataplane or filtering before an application proxy. Used well, the filtering server is not just a relay. It is the hinge between network mitigation and real service continuity.
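One example of the “sharper” stateful logic a filtering server can afford once upstream relief has cut the volume is a per-source rate budget. The sketch below is a simplified Python model of the idea; a real dataplane would implement it in XDP or DPDK, and the threshold is an assumption.

```python
# Simplified per-source rate limiter: each source gets a packets-per-second
# budget over one-second windows. Stateful logic like this is too expensive
# at raw flood volume, but affordable after coarse upstream reduction.
class SourceRateLimiter:
    def __init__(self, max_pps: int):
        self.max_pps = max_pps
        self.windows = {}  # src -> [window_start, count]

    def allow(self, src: str, now: float) -> bool:
        start, count = self.windows.get(src, (now, 0))
        if now - start >= 1.0:          # start a fresh one-second window
            start, count = now, 0
        if count >= self.max_pps:
            return False                 # over budget: drop
        self.windows[src] = [start, count + 1]
        return True

limiter = SourceRateLimiter(max_pps=2)
print([limiter.allow("198.51.100.7", now=t) for t in (0.0, 0.1, 0.2)])
# → [True, True, False]
```

The design choice worth noting: state is per source, so the table grows with the number of attacking sources, not with packet rate. That is why this belongs behind the coarse stage, not in front of it.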

The role of tunnels and clean traffic return

Mitigation alone is never enough. Clean traffic still has to be injected where the customer needs it, without forcing a full rebuild. That is where GRE, IPIP, VXLAN, BGP over GRE or cross-connect handoff models matter.

The right model depends on the context: an existing dedicated server, a backbone, a cluster, a proxy or the need to preserve public IPs. The key is not the tunnel name. The key is whether the handoff is coherent with the real topology.

  • A good handoff avoids a full migration just to get protected.
  • The tunnel is part of the operating model, not only a transport choice.
  • Clean traffic return must be planned before the attack, not after saturation.
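One concrete detail behind “planned before the attack”: a plain GRE-over-IPv4 return path shrinks the usable MTU, so TCP MSS clamping has to be sized accordingly. A small calculation, assuming a standard 1500-byte underlay and a basic GRE header with no optional key or checksum fields:

```python
# GRE overhead math: why the clean traffic return path needs planning.
# A basic GRE header adds 4 bytes, plus a new 20-byte outer IPv4 header,
# so the inner MTU shrinks by 24 bytes versus the underlay MTU.
GRE_OVERHEAD = 20 + 4     # outer IPv4 + GRE header (no key/checksum options)
TCP_IP_HEADERS = 40       # inner IPv4 (20) + TCP (20)

def inner_mtu(underlay_mtu: int) -> int:
    """Largest inner packet that fits in the tunnel without fragmentation."""
    return underlay_mtu - GRE_OVERHEAD

def clamped_mss(underlay_mtu: int) -> int:
    """TCP MSS to advertise so full segments fit inside the tunnel."""
    return inner_mtu(underlay_mtu) - TCP_IP_HEADERS

print(inner_mtu(1500), clamped_mss(1500))   # 1476 1436
```

If nobody clamps the MSS, full-size segments either fragment or get black-holed by path-MTU issues, and the customer perceives the mitigation itself as the outage.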

A realistic Peeryx scenario

Take a gaming service already running in production on an existing dedicated server with 2x10G on the customer side. When an attack goes beyond 100Gbps, the goal is not to understand every detail immediately. The goal is to stop the noise from reaching the final production path.

A viable scenario is to bring prefixes or protected IPs into Peeryx, relieve the most obvious patterns upstream, then pass the residual flow through a dedicated filtering server. That server applies more precise logic and returns clean traffic through GRE or BGP over GRE to the customer server. The final proxy or custom engine handles whatever still needs deeper context.

1. Protected ingress

Customer prefixes or IPs enter the protected infrastructure.

2. Upstream relief

The most obvious noise is reduced before expensive stages.

3. Dedicated filtering

A filtering server refines the decision and prepares clean delivery.

4. Clean return

Legitimate traffic is handed back to production through the right model.
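The four steps can be sketched as a linear pipeline. The stage rules below are placeholders, not a real Peeryx configuration:

```python
# Illustrative pipeline for the four steps above: protected ingress feeds
# coarse upstream relief, then dedicated filtering, then clean return.
def mitigation_pipeline(packets, upstream_filter, fine_filter):
    """Yield only traffic that survives both stages, ready for clean return."""
    for pkt in packets:                # step 1: enters the protected path
        if not upstream_filter(pkt):   # step 2: coarse upstream relief
            continue
        if not fine_filter(pkt):       # step 3: dedicated filtering server
            continue
        yield pkt                      # step 4: hand back via GRE / BGP over GRE

flood = [{"proto": "udp", "port": 123}, {"proto": "tcp", "port": 443}]
clean = list(mitigation_pipeline(
    flood,
    upstream_filter=lambda p: p["proto"] == "tcp",   # assumed coarse rule
    fine_filter=lambda p: p["port"] in {443},        # assumed precise rule
))
print(clean)   # [{'proto': 'tcp', 'port': 443}]
```

The ordering is the whole point: the cheap rule runs first so the expensive rule only ever sees the residual flow.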

Common mistakes

  • Thinking 100Gbps is only a bandwidth topic.
  • Trying to place all mitigation logic in one layer.
  • Ignoring clean traffic return until production can no longer accept the flow.
  • Filtering too aggressively upstream without a real legitimate traffic baseline.
  • Buying a mitigation promise without understanding the handoff model.

FAQ

Does a DDoS attack above 100Gbps automatically mean an outage?

No, but it requires a prepared architecture. Without absorption, coarse reduction and clean return, the risk rises very quickly.

Can a single filtering server be enough?

Not always. It can be extremely useful, but without upstream relief it may become the next bottleneck.

Why focus so much on clean traffic return?

Because mitigation only creates value if the customer gets usable legitimate traffic back.

Can an existing infrastructure be preserved?

Yes, very often. That is exactly why the delivery model matters so much.

Conclusion

A serious “mitigation ddos 100gbps” strategy does not rely on one magic box. It relies on a coherent chain: absorption, upstream coarse reduction, dedicated filtering and clean return to the right place.

The best signal is therefore not only a capacity number, but the ability to keep the service usable when the noise becomes massive. That is where serious architectures stand apart.


Need a credible design above 100Gbps?

Peeryx can help define a readable design with upstream protection, a filtering server, the right handoff model and clean traffic returned either to existing production or to a custom layer.