Traffic Shaping

Some time ago I noticed that our cable modem connection suffered very badly when uploading files. Although the file uploaded at a reasonable rate, anything else that tried to use the connection at the same time (SSH, IRC, ping, web, ...) performed extremely poorly.

For example, normal ping results to a well-connected machine look like this:

60 packets transmitted, 60 packets received, 0% packet loss
round-trip min/avg/max = 16.252/31.075/73.105 ms

But when uploading a 1Mb file, they looked like this:

60 packets transmitted, 56 packets received, 6% packet loss
round-trip min/avg/max = 20.624/720.626/1517.038 ms

The mere fact of uploading a file caused round-trip times to go from a few tens of milliseconds to around 0.7 seconds. Interactive protocols become very painful to use under these conditions.

This isn't inevitable, and it isn't the way things are supposed to be. As far as we can tell it is a side-effect of badly implemented rate limiting.

Background

Our ISP only wants us to upload at a certain rate (128Kbit/s, here). Whichever component implements this (our modem or their router), there are two ways it could do it: drop any packets that would cause the average rate to exceed the limit, or queue excess packets up and release them only at the required rate.

The latter sounds superficially attractive - dropping packets is bad, right? Wrong: the end systems' TCP implementations depend on packets being dropped in order to estimate the bandwidth available and to share it equitably between all the hosts and applications using it. (They don't have direct access to information from each other: all they have to go on is latency and loss rates.) Provided they use these estimates correctly, they limit the rate at which they send packets, and so overall the modem shouldn't ever have to drop very many packets.

Now the effect of a queue is that, if the link is under heavy use, a packet might sit in the queue for a long time before being transmitted: the latency of the link increases as usage increases. It's as if your Internet connection physically stretched out the more you tried to use it. Put like that, it seems like a mad thing to do.
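To put a rough number on it: at 128Kbit/s the link drains about 16KB of queued data per second, so the 1.5 second worst-case round trip seen above corresponds to something like 24KB sitting in the modem's queue. (This is only a back-of-the-envelope estimate; the actual queue size isn't visible from here.)

128Kbit/s / 8 = 16KB/s
1.5s * 16KB/s = 24KB of queued data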

Solution

It turns out that if you front-end your Internet connection with your own suitably intelligent router there is actually something you can do about this. For instance, we have a 486 running Linux which acts as a router and firewall for our house ethernet, and Linux has some traffic shaping features which have proved sufficient to eliminate the problem for us.

The command I use on the firewall is:

tc qdisc add dev eth1 root tbf rate 127kbit burst 2048 latency 50ms

See tc(8) and tc-tbf(8) to interpret this command. Once you are happy with it (having modified it for your own configuration), make sure your firewall runs it at boot time, and make sure you have the right kernel options enabled (for tbf that typically means CONFIG_NET_SCHED and CONFIG_NET_SCH_TBF).
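For example (assuming, as here, that eth1 is the interface facing the modem), you can check that the qdisc is in place and see how much traffic it has handled with:

tc qdisc show dev eth1
tc -s qdisc show dev eth1

and remove it again with:

tc qdisc del dev eth1 root

How it gets run at boot depends on your system; one simple approach is to append the tc command to whatever local startup script your distribution already runs (/etc/rc.local, where it exists).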

With traffic shaping in force, round-trip times during an upload are not quite as good as when the link is idle, but they are still much better than without traffic shaping:

60 packets transmitted, 52 packets received, 13% packet loss
round-trip min/avg/max = 18.329/86.267/187.397 ms

Upload speed remains about the same - you're not sacrificing bandwidth for latency, you're just getting lower latency for almost free.
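One crude way to convince yourself of this is to time the same upload with and without the qdisc in place and compare; the file name and destination below are just placeholders:

time scp bigfile user@host.example.com:

The elapsed times should come out about the same.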

(I know none of these numbers are very scientific. In practice the proof of the pudding is in the eating: the difference in the performance of more interactive protocols is very noticeable.)

Notes

If you ever change which interface corresponds to which network, remember to tell tc about it. I forgot and became very confused by the resulting poor performance to my internal network.
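If that happens, the fix is just to delete the qdisc from the interface it's no longer needed on and recreate it on the right one; eth1 and eth0 below stand for the old and new external interfaces, so substitute your own:

tc qdisc del dev eth1 root
tc qdisc add dev eth0 root tbf rate 127kbit burst 2048 latency 50ms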

Links

The Linux Advanced Routing & Traffic Control HOWTO.


Copyright © 2004 Richard Kettlewell
