Recently I wanted to test a decentralised application that uses UDP multicast to communicate with the other peers in its network. In particular, I wanted to see how the application handles packet loss, and I initially looked at the excellent pumba chaos testing tool to introduce that packet loss.
Pumba leverages tc netem to apply network effects to the outgoing traffic of the container under test. Because these effects are applied to outgoing traffic only, and the application under test uses IP multicast, a dropped packet is missed by every peer in the network at once, which creates synchronised behaviour among the peers that missed that information. To make more realistic scenarios possible I wanted to apply packet loss to the IP traffic coming into my container, which was not possible with pumba at that time.
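For comparison, the egress effect that pumba applies through tc netem corresponds roughly to the rule below, which drops 20% of outgoing packets on eth0 (a sketch only; the exact netem parameters depend on the pumba options used):
tc qdisc add dev eth0 root netem loss 20%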
Exploring other options to create such a test network, I found that the iptables statistic module can be used to add loss to incoming IP traffic on my container. This method requires the iptables package to be installed on the containers under test, similar to how pumba requires iproute2 to be installed on the container under test, or on a sidecar, for using the tc command. The containers under test also need the NET_ADMIN capability.
Ingress packet dropping can then be activated on the containers under test by adding an iptables rule to these containers.
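As an example, a container under test could be started with the extra capability as follows (my-app is just a placeholder for your own image):
docker run --rm -it --cap-add NET_ADMIN my-app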
An example of such an iptables rule, randomly dropping 20% of incoming UDP packets to a specific port, can be found below:
iptables -I INPUT -p udp --dport 5001 -i eth0 -m statistic --mode random --probability 0.2 -j DROP
Another example of an iptables rule, randomly dropping 5% of incoming UDP packets to a specific multicast address, can be seen below:
iptables -I INPUT -p udp -d 239.1.2.3 -i eth0 -m statistic --mode random --probability 0.05 -j DROP
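Once such a rule is in place you can verify that it actually matches traffic by listing the INPUT chain with its packet counters, and remove it again afterwards by repeating the rule with -D instead of -I:
iptables -L INPUT -v -n
iptables -D INPUT -p udp -d 239.1.2.3 -i eth0 -m statistic --mode random --probability 0.05 -j DROP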
Using iptables this way seemed like a great addition to pumba, so I put together a merge request to add this feature. This contribution has now been merged by Alexei Ledenev, the maintainer of pumba. As of pumba version 0.11.0 you can use it to add a packet-loss effect to incoming network traffic of your Docker container. This effect is not limited to UDP multicast but can also be used for any other type of UDP, TCP or ICMP traffic.
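To follow along with the steps below you need at least version 0.11.0 of pumba; assuming the pumba binary is on your PATH, the installed version can be checked with:
pumba --version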
You can test the new pumba feature as follows.
1. Create a docker network for your experiment
docker network create iperf
2. On that network we create an iperf server, named server, that listens for incoming multicast traffic
docker run --rm -it --network=iperf --name server \
sk278/iperf -s -u -B 226.94.1.1 -i 1
3. We add another container to that network that runs an iperf client generating UDP multicast traffic.
docker run --rm -it --network=iperf --name client \
sk278/iperf -c 226.94.1.1 -u -T 32 -t 3 -i 1
4. We take note of the output on our server and see that no packets are lost
[ 1] local 226.94.1.1 port 5001 connected with 172.18.0.3 port 56090
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 1] 0.00-1.00 sec 131 KBytes 1.07 Mbits/sec 0.014 ms 0/91 (0%)
[ 1] 1.00-2.00 sec 128 KBytes 1.05 Mbits/sec 0.029 ms 0/89 (0%)
[ 1] 2.00-3.00 sec 128 KBytes 1.05 Mbits/sec 0.017 ms 0/89 (0%)
[ 1] 0.00-3.02 sec 389 KBytes 1.06 Mbits/sec 0.016 ms 0/271 (0%)
5. Now we fire up pumba to start a sidecar container that adds 20% packet loss to our server container for 1 minute.
pumba iptables --iptables-image rancher/mirrored-kube-vip-kube-vip-iptables:v0.8.9 \
-d 1m -p udp --dport 5001 loss --probability 0.2 server
Note that this command will download an image from Docker Hub to provide the iptables command.
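While pumba is active you can check that the rule has actually been inserted into the network namespace of the server container; if the image of your container under test happens to ship the iptables binary, listing the INPUT chain is enough:
docker exec server iptables -L INPUT -v -n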
6. Once again we start our client container to generate UDP multicast traffic
docker run --rm -it --network=iperf --name client \
sk278/iperf -c 226.94.1.1 -u -T 32 -t 3 -i 1
7. On the server container output you can now witness the magic happening, as the iptables statistic module randomly drops 20% of the incoming traffic matching the rule set by pumba. Because the drops are random, the loss observed in each interval varies around the configured 20%.
[ 2] local 226.94.1.1 port 5001 connected with 172.18.0.3 port 36262
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 2] 0.00-1.00 sec 115 KBytes 941 Kbits/sec 0.025 ms 11/91 (12%)
[ 2] 1.00-2.00 sec 109 KBytes 894 Kbits/sec 0.014 ms 13/89 (15%)
[ 2] 2.00-3.00 sec 97.6 KBytes 800 Kbits/sec 0.021 ms 21/89 (24%)
[ 2] 0.00-3.02 sec 324 KBytes 880 Kbits/sec 0.020 ms 45/271 (17%)
Happy testing!




