CPU round-robin and isolated cores
Charles-François Natali
cf.natali at gmail.com
Tue Mar 29 22:16:58 UTC 2022
Hi!
We've run into an issue where wireguard doesn't play nice with
isolated cores (`isolcpus` kernel parameter).
Basically we use `isolcpus` to isolate cores and explicitly bind our
low-latency processes to those cores, in order to minimize latency due
to the kernel and userspace.
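For context, the pinning itself is nothing exotic; it's roughly the following (the core number and `isolcpus` range are made-up examples):
```
/* Rough sketch: pin the current process to a single isolated core,
 * e.g. core 47 with something like isolcpus=40-47 on the kernel
 * command line. The numbers are just examples.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(47, &set);
	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		return 1;
	}
	/* ... latency-sensitive work runs here ... */
	return 0;
}
```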
It worked great until we started using wireguard; the problem seems to
come from the way work is distributed to the workqueues created here:
https://github.com/torvalds/linux/blob/ae085d7f9365de7da27ab5c0d16b12d51ea7fca9/drivers/net/wireguard/device.c#L335
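For reference, if I read device.c correctly, the workqueue in question is created along these lines (quoting approximately, the exact flags may differ):
```
/* drivers/net/wireguard/device.c (wg_newlink), approximately: */
wg->packet_crypt_wq = alloc_workqueue("wg-crypt-%s",
		WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 0, dev->name);
```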
I'm not familiar with the wireguard code at all, so I might be missing
something, but looking at e.g.
https://github.com/torvalds/linux/blob/ae085d7f9365de7da27ab5c0d16b12d51ea7fca9/drivers/net/wireguard/receive.c#L575
and https://github.com/torvalds/linux/blob/ae085d7f9365de7da27ab5c0d16b12d51ea7fca9/drivers/net/wireguard/queueing.h#L176
it seems that the RX path dispatches packets round-robin to all online
CPUs, including the isolated ones:
```
static inline int wg_queue_enqueue_per_device_and_peer(
	struct crypt_queue *device_queue, struct prev_queue *peer_queue,
	struct sk_buff *skb, struct workqueue_struct *wq, int *next_cpu)
{
	[...]
	/* Then we queue it up in the device queue, which consumes the
	 * packet as soon as it can.
	 */
	cpu = wg_cpumask_next_online(next_cpu);
	if (unlikely(ptr_ring_produce_bh(&device_queue->ring, skb)))
		return -EPIPE;
	queue_work_on(cpu, wq, &per_cpu_ptr(device_queue->worker, cpu)->work);
	return 0;
}
```
Where `wg_cpumask_next_online` is defined like this:
```
static inline int wg_cpumask_next_online(int *next)
{
	int cpu = *next;

	while (unlikely(!cpumask_test_cpu(cpu, cpu_online_mask)))
		cpu = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
	*next = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
	return cpu;
}
```
This is a problem for us because it causes significant latency. See for
example this ftrace output showing a kworker, bound to an isolated core,
spending over 240us inside wg_packet_decrypt_worker (we've seen much
higher, up to 500us or even more):
```
kworker/47:1-2373323 [047] 243644.756405: funcgraph_entry: |
process_one_work() {
kworker/47:1-2373323 [047] 243644.756406: funcgraph_entry: |
wg_packet_decrypt_worker() {
[...]
kworker/47:1-2373323 [047] 243644.756647: funcgraph_exit: 0.591 us | }
kworker/47:1-2373323 [047] 243644.756647: funcgraph_exit: ! 242.655 us | }
```
If this were, for example, a physical NIC, we would typically set its
IRQ affinity to avoid the isolated cores, which would also keep the
corresponding softirqs off those cores and avoid this kind of latency.
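For completeness, that kind of steering is just a matter of writing a housekeeping-only CPU mask into the IRQ's smp_affinity file; the IRQ number and mask below are made up:
```
/* Sketch: restrict a (hypothetical) NIC IRQ to housekeeping cores by
 * writing a CPU mask to /proc/irq/<irq>/smp_affinity. IRQ 123 and the
 * mask 0xff (CPUs 0-7) are just examples.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *mask = "ff";
	int fd = open("/proc/irq/123/smp_affinity", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, mask, strlen(mask)) != (ssize_t)strlen(mask)) {
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}
```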
However, it seems that there's currently no way to tell wireguard to
avoid those cores.
I was wondering if it would make sense for wireguard to ignore
isolated cores to avoid this kind of issue. As far as I can tell it should
be a matter of replacing uses of `cpu_online_mask` with
`housekeeping_cpumask(HK_TYPE_DOMAIN)` or even
`housekeeping_cpumask(HK_TYPE_DOMAIN | HK_TYPE_WQ)`; a rough sketch follows below.
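To make this concrete, here is a minimal sketch of what I have in mind for `wg_cpumask_next_online` (assuming the `HK_TYPE_DOMAIN` housekeeping API; the exact flag names depend on the kernel version, and a real patch would probably also want to intersect the mask with `cpu_online_mask` to handle CPU hotplug):
```
#include <linux/sched/isolation.h>

/* Sketch only: round-robin over housekeeping CPUs instead of all
 * online CPUs, so isolated cores are never picked.
 */
static inline int wg_cpumask_next_online(int *next)
{
	const struct cpumask *mask = housekeeping_cpumask(HK_TYPE_DOMAIN);
	int cpu = *next;

	while (unlikely(!cpumask_test_cpu(cpu, mask)))
		cpu = cpumask_next(cpu, mask) % nr_cpumask_bits;
	*next = cpumask_next(cpu, mask) % nr_cpumask_bits;
	return cpu;
}
```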
We could potentially run with a patched kernel but would very much
prefer using an upstream fix if that's acceptable.
Thanks in advance!
Charles