VirtualBox network performance

Some time ago I ran into network performance issues with a VirtualBox guest and was able to solve them by switching to a different NIC type. That made me want to find out how the different NIC types perform, and whether the network mode makes a difference too. And yes, it does! :-)
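Both the NIC type and the network mode can be changed per adapter with VBoxManage; a minimal sketch, assuming a VM named vm0 and its first adapter:

```shell
# Switch adapter 1 of VM "vm0" to the Intel PRO/1000 MT Desktop emulation
# and attach it to a host-only network (the VM must be powered off).
VBoxManage modifyvm vm0 --nictype1 82540EM --nic1 hostonly

# Verify the change took effect.
VBoxManage showvminfo vm0 | grep "NIC 1"
```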

Results I

After some test runs, here are the results:
 HOST: Debian/GNU Linux 8.2 / Kernel 4.2.0 x86_64 (vanilla) / VirtualBox 5.0.4
GUEST: Debian/GNU Linux unstable / Kernel 4.2.0-trunk-amd64

Am79C970A / hostonly  580 Mbits/sec
Am79C970A / bridged  473 Mbits/sec
Am79C970A / natnetwork  640 Kbits/sec 1)
Am79C970A / nat  396 Mbits/sec

Am79C973 / hostonly  569 Mbits/sec
Am79C973 / bridged  285 Mbits/sec
Am79C973 / natnetwork  640 Kbits/sec
Am79C973 / nat  438 Mbits/sec

82540EM / hostonly  1.89 Gbits/sec
82540EM / bridged  1.86 Gbits/sec
82540EM / natnetwork  640 Kbits/sec
82540EM / nat  449 Mbits/sec

82543GC / hostonly  1.85 Gbits/sec
82543GC / bridged  1.91 Gbits/sec
82543GC / natnetwork  640 Kbits/sec
82543GC / nat  357 Mbits/sec

82545EM / hostonly  1.85 Gbits/sec
82545EM / bridged  1.90 Gbits/sec
82545EM / natnetwork  640 Kbits/sec
82545EM / nat  389 Mbits/sec

virtio / hostonly  705 Mbits/sec
virtio / bridged  682 Mbits/sec
virtio / natnetwork  640 Kbits/sec
virtio / nat  129 Mbits/sec
The clear winner appears to be the 82543GC (Intel PRO/1000 T Server) for bridged mode and the 82540EM (Intel PRO/1000 MT Desktop) for hostonly mode.
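For context, throughput numbers like these are typically gathered with iperf between guest and host; a rough sketch of one such run (the guest address 192.168.56.101 is an assumption):

```shell
# On the guest: start an iperf server (listens on TCP port 5001 by default).
iperf -s

# On the host: run a 60-second TCP throughput test against the guest.
iperf -c 192.168.56.101 -t 60
```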

Results II

And again on a (slower) Mac OS X host:
 HOST: Mac OS X 10.10.5 / x86_64 / VirtualBox 5.0.4
GUEST: Debian/GNU Linux 8.0 / Kernel 4.1

NIC: Am79C970A / MODE: hostonly  29.6 MBytes/sec
NIC: Am79C970A / MODE: bridged  29.9 MBytes/sec
NIC: Am79C970A / MODE: natnetwork  25.2 MBytes/sec
NIC: Am79C970A / MODE: nat  25.8 MBytes/sec

NIC: Am79C973 / MODE: hostonly  28.7 MBytes/sec
NIC: Am79C973 / MODE: bridged  30.0 MBytes/sec
NIC: Am79C973 / MODE: natnetwork  1.38 MBytes/sec
NIC: Am79C973 / MODE: nat  23.4 MBytes/sec

NIC: 82540EM / MODE: hostonly  45.4 MBytes/sec
NIC: 82540EM / MODE: bridged  38.2 MBytes/sec
NIC: 82540EM / MODE: natnetwork  61.3 MBytes/sec
NIC: 82540EM / MODE: nat  47.0 MBytes/sec

NIC: 82543GC / MODE: hostonly  43.0 MBytes/sec
NIC: 82543GC / MODE: bridged  44.7 MBytes/sec
NIC: 82543GC / MODE: natnetwork  64.7 MBytes/sec
NIC: 82543GC / MODE: nat  49.3 MBytes/sec

NIC: 82545EM / MODE: hostonly - (VM would not start)
NIC: 82545EM / MODE: bridged - (VM would not start)
NIC: 82545EM / MODE: natnetwork - (VM would not start)
NIC: 82545EM / MODE: nat - (VM would not start)

NIC: virtio / MODE: hostonly  43.3 MBytes/sec
NIC: virtio / MODE: bridged  46.6 MBytes/sec
NIC: virtio / MODE: natnetwork  10.9 MBytes/sec
NIC: virtio / MODE: nat  13.8 MBytes/sec
Here, the winner appears to be virtio for bridged mode and again the 82540EM (Intel PRO/1000 MT Desktop) for hostonly mode. This time both nat and natnetwork worked, albeit with very different performance patterns.

Results III

On a different system, the iperf results varied greatly and I decided to run the test script longer and multiple times:
for a in {1..10}; do
    echo "### $a -- $(date)"
    ~/bin/vbox-nic-bench.sh vm0 300 2>&1 | tee vbox_nic_"$a".log
done
Looking at the report files, we can already see that the "hostonly" network mode was the fastest, so let's run the report function over all the output files and sort by the fastest NIC:
$ for a in vbox_nic_*.log; do
    ~/bin/vbox-nic-bench.sh report "$a" | grep hostonly | sort -u
done | sort -nk6 | tail -5
NIC: 82540EM / MODE: hostonly  228 MBytes/sec
NIC: 82540EM / MODE: hostonly  228 MBytes/sec
NIC: 82545EM / MODE: hostonly  228 MBytes/sec
NIC: 82543GC / MODE: hostonly  229 MBytes/sec
NIC: 82540EM / MODE: hostonly  231 MBytes/sec
So, either of these NICs (82540EM or 82543GC) should be the fastest in our setup.
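Instead of eyeballing the last five lines, the per-NIC average across all runs can be computed with a short awk sketch (the report line format shown above is assumed):

```shell
# Average hostonly throughput per NIC type across all report logs;
# in each report line, field 2 is the NIC name and field 6 the MBytes/sec figure.
grep -h hostonly vbox_nic_*.log |
awk '{ sum[$2] += $6; n[$2]++ }
     END { for (nic in sum) printf "%s %.1f MBytes/sec\n", nic, sum[nic]/n[nic] }' |
sort -k2 -n
```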
1) For some reason, I couldn't get the new natnetwork mode to work on Linux: iperf reported a bandwidth of "640 Kbits/sec" while in fact no data was transferred:
HOST$ iperf -t 3 -c 127.0.0.1 -p 15001
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 15001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 127.0.0.1 port 51056 connected with 127.0.0.1 port 15001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-18.5 sec  3.06 MBytes  1.39 Mbits/sec


GUEST$ sudo tcpdump -ni eth2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 262144 bytes
17:05:36.569862 IP 192.168.0.104.51056 > 192.168.15.4.5001: Flags [S], seq 6583, win 32768, options [mss 1460], length 0
17:05:39.574354 IP 192.168.0.104.51056 > 192.168.15.4.5001: Flags [S], seq 6583, win 32768, options [mss 1460], length 0
17:05:42.579472 IP 192.168.0.104.51056 > 192.168.15.4.5001: Flags [S], seq 6583, win 32768, options [mss 1460], length 0
17:05:45.584319 IP 192.168.0.104.51056 > 192.168.15.4.5001: Flags [S], seq 6583, win 32768, options [mss 1460], length 0
17:05:48.589318 IP 192.168.0.104.51056 > 192.168.15.4.5001: Flags [S], seq 6583, win 32768, options [mss 1460], length 0
17:05:51.593294 IP 192.168.0.104.51056 > 192.168.15.4.5001: Flags [S], seq 6583, win 32768, options [mss 1460], length 0
17:05:54.594851 IP 192.168.0.104.51056 > 192.168.15.4.5001: Flags [S], seq 6583, win 32768, options [mss 1460], length 0
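The capture shows the SYNs do reach the guest, but no SYN-ACK ever makes it back, so the connection never completes. Since a natnetwork guest is only reachable from the host through an explicit port-forwarding rule, listing and re-creating that rule is a useful first check; a sketch, assuming the network is named NatNetwork and using the addresses from the capture above:

```shell
# Forward host TCP port 15001 to the guest's iperf server on port 5001.
VBoxManage natnetwork modify --netname NatNetwork \
    --port-forward-4 "iperf:tcp:[]:15001:[192.168.15.4]:5001"

# Inspect the configured NAT networks and their forwarding rules.
VBoxManage natnetwork list
```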