Single CPU core bottleneck caused by high site-to-site traffic
Samuel Holland
samuel at sholland.org
Sat Mar 9 01:16:33 CET 2019
On 03/01/19 22:24, Xiaozhou Liu wrote:
> Hi Jason and the list,
>
> Here at our corporate network we run some internal site-to-site VPNs using
> WireGuard. Thank you for giving such a beautiful piece of software to the world.
>
> Recently we encountered noticeable network latency during peak traffic
> hours. Although the traffic volume is quite large, the WireGuard box is far
> from exhausting any of its resources: CPU, memory, network bandwidth, etc.
>
> It turns out that the bottleneck is the single UDP flow between the sites,
> which RSS cannot spread across CPU cores on receive. The total CPU usage
> is not high, but one of the cores can reach 100%.
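For anyone hitting the same symptom, this is easy to confirm from per-core
softirq load and per-queue NIC counters (a rough check, assuming the sysstat
tools and an RSS-capable NIC named eth0; exact statistic names vary by driver):
$ mpstat -P ALL 1                  # one core pegged in %soft, the rest mostly idle
$ ethtool -S eth0 | grep rx_queue  # packets concentrated in a single rx queue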
>
> Maybe we can improve this by:
>
> embedding multiple endpoints in one peer so that the VPN tunnel can run
> multiple UDP flows instead of one. The single huge UDP flow is then
> effectively broken into several smaller ones, which can be received by
> multiple NIC queues and then processed by more CPU cores. This will not
> break current users, because the single UDP connection is still provided
> as the default configuration.
While a native solution would be nice, you should be able to do this today with
nftables. Dynamically rewrite the port number on a portion of the outgoing
packets, and redirect that additional port to the main port on the receive side.
This (untested) is based on the examples in the nftables wiki[1]:
$ nft add rule nat output ip daddr $OTHER_PEER udp dport $MAIN_PORT \
      dnat to $OTHER_PEER : numgen inc mod 2 map { \
          0 : $MAIN_PORT, \
          1 : $ALT_PORT \
      }
$ nft add rule nat prerouting ip daddr $MY_IP udp dport $ALT_PORT \
      redirect to :$MAIN_PORT
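Note that both rules assume a nat table with base chains already hooked into
output and prerouting. If your ruleset does not have them yet, something like
this (equally untested) should create them first:
$ nft add table ip nat
$ nft add chain ip nat output { type nat hook output priority -100 \; }
$ nft add chain ip nat prerouting { type nat hook prerouting priority -100 \; }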
[1]: https://wiki.nftables.org/wiki-nftables/index.php/Load_balancing
> It is also possible to set up multiple wg interfaces and additional
> connections explicitly, but that would make network administration much
> more complex.
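For comparison, the multi-interface variant is doable today, though I agree
it gets unwieldy. Roughly (untested; assumes two tunnels wg0/wg1 on distinct
ports with matching peers on the far side, $PUBKEY0/$PUBKEY1 and
$REMOTE_SUBNET standing in for your real values, and ECMP to spread the
inner flows across the two tunnels):
$ wg set wg0 listen-port 51820 peer $PUBKEY0 endpoint $OTHER_PEER:51820 \
      allowed-ips $REMOTE_SUBNET
$ wg set wg1 listen-port 51821 peer $PUBKEY1 endpoint $OTHER_PEER:51821 \
      allowed-ips $REMOTE_SUBNET
$ ip route add $REMOTE_SUBNET nexthop dev wg0 weight 1 nexthop dev wg1 weight 1
Each inner flow then hashes onto one of the tunnels, so the outer traffic
becomes two UDP flows that RSS can split across cores.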
>
> We are planning to make a working demo of this idea, but we would like to
> hear from you first.
>
> Any idea or comment is appreciated.
>
>
> Thanks,
> Xiaozhou
Regards,
Samuel