When traceroute starts lying: ICMP tunneling across an MPLS hop from London to Cape Town

Every network engineer learns early on to trust traceroute. It is the pocket‑knife of path‑finding: increment a TTL, read the ICMP “time exceeded,” repeat. Hop after hop accumulates latency and, if something is broken, loss shows up exactly where the trouble is.

Drop the same tool onto an MPLS backbone and the output turns surreal. Latency jumps to a far‑away country, every hop inside the label‑switched core reports the same numbers, and loss appears everywhere at once. This article walks through a London‑to‑Cape Town trace and shows why ICMP tunneling makes traceroute look drunk.


The reference run over plain IP

Imagine a direct IP path that leaves the London PE, crosses a Paris P router, hops the Atlantic to Johannesburg, lands on the Cape Town PE and reaches the server.

 #  Hostname   Loss%   Snt   Last    Avg   Best   Wrst
 1. LDN-PE      0.0%    10    1.0    1.0    1.0    1.0
 2. PAR-P       0.0%    10   16.0   16.0   16.0   16.0
 3. JNB-P       0.0%    10  166.0  166.0  166.0  166.0
 4. CPT-PE     40.0%    10  186.0  186.0  186.0  186.0
 5. DST-CPT    40.0%    10  187.0  187.0  187.0  187.0

Hop-by-hop delay rises in a straight line, and the 40 percent loss appears only after Johannesburg, exactly where the undersea segment is having trouble.
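With a clean per-hop trace like this, the useful signal is the latency each segment adds, not the absolute numbers. A minimal sketch (hop names and values taken from the table above) that computes the per-segment delta:

```python
# Per-hop analysis of the reference trace: the latency *added* by each
# segment localizes where delay and loss begin.
hops = [
    # (hostname, loss %, avg RTT in ms) from the mtr report above
    ("LDN-PE",   0.0,   1.0),
    ("PAR-P",    0.0,  16.0),
    ("JNB-P",    0.0, 166.0),
    ("CPT-PE",  40.0, 186.0),
    ("DST-CPT", 40.0, 187.0),
]

prev_rtt = 0.0
for name, loss, rtt in hops:
    delta = rtt - prev_rtt  # latency this segment contributes
    print(f"{name:8s}  +{delta:6.1f} ms  loss {loss:4.1f}%")
    prev_rtt = rtt
```

The Johannesburg hop adds 150 ms (the transatlantic and undersea legs), and loss first appears at CPT-PE, so the suspect segment is unambiguous.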


The same path carried inside an MPLS VPN

Now the provider turns up an MPLS L3VPN over the same route. London and Cape Town sit at the edges as PE routers; Paris and Johannesburg live in the core as P routers. TTL propagation is on, but the P routers have no IP route back to the probe's source, because that address lives inside a VPN the core knows nothing about. When a core router needs to send "time exceeded," it wraps the ICMP error in the same label stack it saw on the original probe and ships it forward down the LSP. Only the Cape Town PE, which does hold the VPN route, converts the error back into plain IP and returns it to London.

 #  Hostname   Loss%   Snt   Last    Avg   Best   Wrst
 1. LDN-PE      0.0%    10    1.0    1.0    1.0    1.0
 2. PAR-P      40.0%    10  186.0  186.0  186.0  186.0
 3. JNB-P      40.0%    10  186.0  186.0  186.0  186.0
 4. CPT-PE     40.0%    10  186.0  186.0  186.0  186.0
 5. DST-CPT    40.0%    10  187.0  187.0  187.0  187.0

The moment the probes enter the MPLS domain, latency and loss snap to the far edge. Paris, Johannesburg and Cape Town all swear they live in the same place and suffer the same 40 percent packet loss. Traceroute’s neat mile-marker illusion is gone.


What actually happens inside the label stack

  1. Your probe with TTL 1 reaches Paris. The TTL expires. Paris needs to tell you but has no route to the source.
  2. Paris builds the ICMP reply, slaps on the same VPN label that points to Cape Town and ships it down the LSP.
  3. Cape Town PE pops the label, finally finds a normal IP route back to London, and forwards the ICMP packet.
  4. The round trip between Cape Town and London sets the latency and any loss that you see. Every later hop inside the MPLS core repeats the sequence, so every reply inherits exactly the same timing and loss.
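The steps above can be sketched as a toy model. The one-way delays below are illustrative assumptions chosen to roughly match the traces in this article, not measured values; the point is only that every tunneled reply's round trip collapses to the egress PE's round trip:

```python
# Toy model of ICMP tunneling (illustrative numbers, not a protocol
# implementation): one-way delay from the source to each node, in ms.
one_way_ms = {"LDN-PE": 0.5, "PAR-P": 8.0, "JNB-P": 83.0, "CPT-PE": 93.0}
egress = "CPT-PE"  # the PE that finally routes the ICMP error back

def reported_rtt(node, tunneled):
    """RTT traceroute sees for the hop that answers at `node`.

    Plain IP: the ICMP error returns directly, so RTT = 2 * one-way delay.
    Tunneled: the error first rides the LSP forward to the egress PE, then
    returns from there, so RTT = source-to-node + node-to-egress +
    egress-back-to-source, which always equals 2 * the egress PE's delay.
    """
    if not tunneled:
        return 2 * one_way_ms[node]
    forward_to_egress = one_way_ms[egress] - one_way_ms[node]
    return one_way_ms[node] + forward_to_egress + one_way_ms[egress]

for node in one_way_ms:
    print(f"{node:7s}  plain {reported_rtt(node, False):6.1f} ms"
          f"   tunneled {reported_rtt(node, True):6.1f} ms")
```

However far into the core the TTL expires, the tunneled column prints the same figure: the Cape Town round trip.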

Because the ICMP replies all share the same tail path, traceroute projects Cape Town’s numbers backward onto Paris and Johannesburg. The MPLS cloud has effectively muted its own middle.


Reading the tea leaves

A jump to a fixed latency followed by identical figures on each hop is the telltale sign that ICMP tunneling is in play. The problem is probably on the far side of the MPLS cloud, not in the middle where the trace seems to accuse everything. Knowing this spares hours chasing ghosts inside a provider’s core.
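That signature is easy to check for mechanically. A hypothetical heuristic (the tolerance and run length are arbitrary choices, not from any tool): flag a trace when several consecutive hops report near-identical average RTT and identical loss:

```python
# Hypothetical detector for the ICMP-tunneling signature: a run of hops
# whose average RTT and loss figures are (near-)identical.
def looks_tunneled(hops, rtt_tol_ms=1.0, min_run=3):
    """hops: list of (avg_rtt_ms, loss_pct) tuples, in path order."""
    run = 1
    for (rtt_a, loss_a), (rtt_b, loss_b) in zip(hops, hops[1:]):
        if abs(rtt_a - rtt_b) <= rtt_tol_ms and loss_a == loss_b:
            run += 1
            if run >= min_run:
                return True
        else:
            run = 1
    return False

plain_ip = [(1.0, 0.0), (16.0, 0.0), (166.0, 0.0), (186.0, 40.0), (187.0, 40.0)]
mpls_vpn = [(1.0, 0.0), (186.0, 40.0), (186.0, 40.0), (186.0, 40.0), (187.0, 40.0)]

print(looks_tunneled(plain_ip))  # steadily rising RTT: not tunneled
print(looks_tunneled(mpls_vpn))  # three identical core hops: tunneled
```

Against the two traces in this article, the plain-IP run comes back clean and the MPLS run trips the detector, which is exactly the cue to look past the cloud rather than inside it.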


Traceroute never lies. It speaks in the accent of the network underneath. Over MPLS that accent can be thick. Once you know how ICMP tunneling works, the odd rhythm of those traces starts to make perfect sense.
