multiple wireguard interfaces and kworker resources
nicolas prochazka
prochazka.nicolas at gmail.com
Wed Jun 14 15:50:16 CEST 2017
At the moment we are running 3000 WireGuard tunnels on a single
WireGuard interface, but now we want to split the tunnels across
interfaces, one per group of our clients, so that we can manage QoS
per WireGuard interface and handle some other tasks.
With a single interface this works well, but a test with 3000
interfaces causes trouble with CPU usage, load average, and the
performance of the VM.
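
To illustrate the setup, here is a minimal sketch of how the per-group
interfaces might be created (the interface names, the number of groups,
and the use of iproute2 from Python are assumptions for the example,
not our actual tooling):

    import subprocess

    NUM_GROUPS = 10  # hypothetical number of client groups

    def create_group_interface(group_id: int) -> str:
        """Create one WireGuard interface for a client group via iproute2."""
        name = f"wg-group{group_id}"  # hypothetical naming scheme
        subprocess.run(["ip", "link", "add", "dev", name, "type", "wireguard"],
                       check=True)
        subprocess.run(["ip", "link", "set", name, "up"], check=True)
        return name

    if __name__ == "__main__":
        # One interface per group; peers and a per-interface QoS policy
        # (for example a tc qdisc) would then be attached to each one.
        for gid in range(NUM_GROUPS):
            print("created", create_group_interface(gid))
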
Regards,
Nicolas
2017-06-14 9:52 GMT+02:00 nicolas prochazka <prochazka.nicolas at gmail.com>:
> Hello,
> after creating the wg interfaces, the kworker threads do not return
> to a normal state in my case; the kernel threads continue to consume
> a lot of CPU. I have to delete the WireGuard interfaces for the
> kworker load to decrease.
>
> Nicolas
>
> 2017-06-13 23:47 GMT+02:00 Jason A. Donenfeld <Jason at zx2c4.com>:
>> Hi Nicolas,
>>
>> It looks to me like some resources are indeed expended in adding those
>> interfaces. Not so much that it would be problematic -- are you seeing
>> a problematic case? -- but still a non-trivial amount.
>>
>> I tracked it down to WireGuard's instantiation of xt_hashlimit, which
>> does some ugly vmalloc, and its call into the power state
>> notification system, which uses a naive O(n) algorithm for insertion.
>> I might have a way of amortizing on module insertion, which would
>> speed things up. But I wonder -- what is the practical detriment of
>> spending a few extra cycles on `ip link add`? What's your use case
>> where this would actually be a problem?
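>>
>> For a rough sense of the cost, here is a simplified Python model of
>> such a priority-ordered insertion (this is not the kernel notifier
>> code, and the numbers are only illustrative):
>>
>>     # Each registration walks the chain to find its slot: O(n) per
>>     # insert, so n one-by-one registrations cost about n*(n-1)/2 steps.
>>     def register(chain, entry, priority=0):
>>         i = 0
>>         while i < len(chain) and chain[i][0] >= priority:
>>             i += 1
>>         chain.insert(i, (priority, entry))
>>
>>     chain = []
>>     n = 3000  # one registration per interface
>>     for k in range(n):
>>         register(chain, f"wg{k}")
>>     # About n*(n-1)/2, roughly 4.5 million comparisons in total, versus
>>     # a single registration if the work happened once at module
>>     # insertion instead of once per interface.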
>>
>> Thanks,
>> Jason