Early Feedback on Container Networking, Resilience, Json Config and AcceptedIPs
raulbe at gmail.com
Sat Jul 8 04:26:28 CEST 2017
First, this is a stunning piece of work.
I am working on a container networking project using VXLAN and PeerVPN at L2
and Quagga at L3, with a smattering of nearly all the Linux networking
tunnels, and testing WireGuard has been a revelation.
VXLAN is not useful without multicast, BGP may be overkill for simple route
management, and most other tunnels require intensive management at scale.
PeerVPN is nice and has autodiscovery of nodes, but like most current
encrypted tunnels it carries a performance penalty; IPsec doesn't, but adds
configuration complexity. WireGuard brings something fresh to the table.
My early WireGuard iperf tests show near line speed: 942 Mbps without and
870-890 Mbps with WireGuard, versus around 120 Mbps for PeerVPN. CPU usage is
around 100% on one core for PeerVPN; for WireGuard it is around 50-60% across
4 cores when sending and 35-40% when receiving. These tests are on low-power
1.5 GHz quad-core Apollo Lake Celeron J3455 clusters on Ubuntu Xenial.
You have been able to hide a tremendous amount of complexity to expose a
simple, flexible but powerful front end and I am certain this must have
taken extraordinary talent, experience, time and engineering.
I have been running a number of scenarios, mainly around container
networking and service discovery across multiple clouds and systems, and am
glad to report it is much more robust than I expected and just works.
There are 4 things I wanted to bring up.
1. Do you anticipate a situation where differing WireGuard versions across
distributions and kernels do not interoperate? This would introduce a huge
amount of complexity to managing clusters. I faced this with Gluster, and it
is a significant pain and blocker.
2. Would you consider supporting a JSON configuration file? The current
WireGuard INI format has duplicate '[Peer]' section entries, and the Python
INI parser, for instance, does not support duplicate section headers.
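To illustrate the parser limitation (the config content below is made up, with
placeholder values rather than real keys), Python's stdlib configparser
rejects a WireGuard-style file with repeated [Peer] sections outright:

```python
import configparser

# Hypothetical WireGuard-style config with two [Peer] sections
# (placeholder values, not real key material).
WG_CONF = """
[Interface]
PrivateKey = PLACEHOLDER_SERVER_KEY
ListenPort = 51820

[Peer]
PublicKey = PLACEHOLDER_PEER1_KEY
AllowedIPs = 10.0.0.2/32

[Peer]
PublicKey = PLACEHOLDER_PEER2_KEY
AllowedIPs = 10.0.0.3/32
"""

parser = configparser.ConfigParser()
try:
    parser.read_string(WG_CONF)
except configparser.DuplicateSectionError as err:
    # The stdlib parser refuses duplicate section headers in strict mode.
    print("parse failed:", err)
```

A JSON config (or an array of peer objects) would sidestep this entirely for
tooling written in languages whose stock INI parsers assume unique sections.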
3. While the AllowedIPs construct is powerful, it can sometimes feel a bit
unintuitive. The way I understand it, AllowedIPs in the WireGuard server's
config describes the networks each client shares, while on the client it
describes the networks the client accepts. This is so the server knows where
to route (and thus there cannot be duplicate or overlapping IPs or subnets in
the server config), and the client will only receive traffic from the
AllowedIPs recorded in its own config.
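To make the asymmetry concrete, here is a sketch of what I mean, with made-up
keys and addresses (10.0.0.0/24 as the tunnel network and 172.16.1.0/24 as a
container network behind the client):

```
# Server side: AllowedIPs lists what this peer shares, i.e. what
# the server will route toward it.
[Peer]
PublicKey = CLIENT_PUBLIC_KEY
AllowedIPs = 10.0.0.2/32, 172.16.1.0/24

# Client side: AllowedIPs lists what the client accepts from (and
# routes via) the server.
[Peer]
PublicKey = SERVER_PUBLIC_KEY
Endpoint = server.example.com:51820
AllowedIPs = 10.0.0.0/24
```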
My mindset may be a bit too container-oriented at the moment, with hosts
sharing internal container networks across systems, and WireGuard has a much
broader use case, but a share/accept construct seems slightly more intuitive,
though I could be wrong.
4. The WireGuard server is a single point of failure in a star topology. If
the server host goes down, your network goes down with it. How can we add
more resilience in a simple way? A backup server on the same L2 segment with
identical keys and a floating internal IP?
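Just to sketch the floating-IP idea (assuming keepalived/VRRP, with made-up
addresses and interface names; whether two hosts can safely share WireGuard
keys is exactly my question):

```
# keepalived.conf on the primary WireGuard host;
# the backup host uses "state BACKUP" and a lower priority.
vrrp_instance WG_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24    # floating endpoint the peers point at
    }
}
```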
Thanks for WireGuard!