[PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback
Paul E. McKenney
paulmck at kernel.org
Thu Jun 13 18:13:52 UTC 2024
On Thu, Jun 13, 2024 at 07:58:17PM +0200, Uladzislau Rezki wrote:
> On Thu, Jun 13, 2024 at 10:45:59AM -0700, Paul E. McKenney wrote:
> > On Thu, Jun 13, 2024 at 07:38:59PM +0200, Uladzislau Rezki wrote:
> > > On Thu, Jun 13, 2024 at 08:06:30AM -0700, Paul E. McKenney wrote:
> > > > On Thu, Jun 13, 2024 at 03:06:54PM +0200, Uladzislau Rezki wrote:
> > > > > On Thu, Jun 13, 2024 at 05:47:08AM -0700, Paul E. McKenney wrote:
> > > > > > On Thu, Jun 13, 2024 at 01:58:59PM +0200, Jason A. Donenfeld wrote:
> > > > > > > On Wed, Jun 12, 2024 at 03:37:55PM -0700, Paul E. McKenney wrote:
> > > > > > > > On Wed, Jun 12, 2024 at 02:33:05PM -0700, Jakub Kicinski wrote:
> > > > > > > > > On Sun, 9 Jun 2024 10:27:12 +0200 Julia Lawall wrote:
> > > > > > > > > > Since SLOB was removed, it is not necessary to use call_rcu
> > > > > > > > > > when the callback only performs kmem_cache_free. Use
> > > > > > > > > > kfree_rcu() directly.
> > > > > > > > > >
> > > > > > > > > > The changes were done using the following Coccinelle semantic patch.
> > > > > > > > > > This semantic patch is designed to ignore cases where the callback
> > > > > > > > > > function is used in another way.
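For illustration, a minimal sketch of the transformation the series performs; the
struct, cache, and callback names here are hypothetical, not taken from any of the
14 patches:

    struct foo {
            struct rcu_head rcu;
            /* ... payload ... */
    };

    /* Before: a dedicated RCU callback whose only job is to free the object. */
    static void foo_free_rcu(struct rcu_head *head)
    {
            struct foo *f = container_of(head, struct foo, rcu);

            kmem_cache_free(foo_cache, f);
    }
            ...
            call_rcu(&f->rcu, foo_free_rcu);

    /* After: with SLOB gone, kfree() can free objects from any kmem_cache,
     * so the callback can be dropped and the object queued directly. */
            kfree_rcu(f, rcu);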
> > > > > > > > >
> > > > > > > > > How does the discussion on:
> > > > > > > > > [PATCH] Revert "batman-adv: prefer kfree_rcu() over call_rcu() with free-only callbacks"
> > > > > > > > > https://lore.kernel.org/all/20240612133357.2596-1-linus.luessing@c0d3.blue/
> > > > > > > > > reflect on this series? IIUC we should hold off..
> > > > > > > >
> > > > > > > > We do need to hold off for the ones in kernel modules (such as 07/14)
> > > > > > > > where the kmem_cache is destroyed during module unload.
> > > > > > > >
> > > > > > > > OK, I might as well go through them...
> > > > > > > >
> > > > > > > > [PATCH 01/14] wireguard: allowedips: replace call_rcu by kfree_rcu for simple kmem_cache_free callback
> > > > > > > > Needs to wait, see wg_allowedips_slab_uninit().
> > > > > > >
> > > > > > > Also, notably, this patch additionally needs:
> > > > > > >
> > > > > > > diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
> > > > > > > index e4e1638fce1b..c95f6937c3f1 100644
> > > > > > > --- a/drivers/net/wireguard/allowedips.c
> > > > > > > +++ b/drivers/net/wireguard/allowedips.c
> > > > > > > @@ -377,7 +377,6 @@ int __init wg_allowedips_slab_init(void)
> > > > > > >
> > > > > > > void wg_allowedips_slab_uninit(void)
> > > > > > > {
> > > > > > > - rcu_barrier();
> > > > > > > kmem_cache_destroy(node_cache);
> > > > > > > }
> > > > > > >
> > > > > > > Once kmem_cache_destroy has been fixed to be deferrable.
> > > > > > >
> > > > > > > I assume the other patches are similar -- an rcu_barrier() can be
> > > > > > > removed. So some manual meddling of these might be in order.
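For context, the uninit path Jason refers to looks roughly like this today; the
barrier is what guarantees that every queued callback (and hence every pending
kmem_cache_free()) has run before the cache is torn down:

    void wg_allowedips_slab_uninit(void)
    {
            /* Wait for all in-flight RCU callbacks freeing node_cache
             * objects to complete before destroying the cache. */
            rcu_barrier();
            kmem_cache_destroy(node_cache);
    }

Once kmem_cache_destroy() can defer itself until the cache is empty, the explicit
barrier becomes unnecessary, which is what the diff above removes.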
> > > > > >
> > > > > > Assuming that the deferrable kmem_cache_destroy() is the option chosen,
> > > > > > agreed.
> > > > > >
> > > > > <snip>
> > > > > void kmem_cache_destroy(struct kmem_cache *s)
> > > > > {
> > > > >         int err = -EBUSY;
> > > > >         bool rcu_set;
> > > > >
> > > > >         if (unlikely(!s) || !kasan_check_byte(s))
> > > > >                 return;
> > > > >
> > > > >         cpus_read_lock();
> > > > >         mutex_lock(&slab_mutex);
> > > > >
> > > > >         rcu_set = s->flags & SLAB_TYPESAFE_BY_RCU;
> > > > >
> > > > >         s->refcount--;
> > > > >         if (s->refcount)
> > > > >                 goto out_unlock;
> > > > >
> > > > >         err = shutdown_cache(s);
> > > > >         WARN(err, "%s %s: Slab cache still has objects when called from %pS",
> > > > >              __func__, s->name, (void *)_RET_IP_);
> > > > >         ...
> > > > >         cpus_read_unlock();
> > > > >         if (!err && !rcu_set)
> > > > >                 kmem_cache_release(s);
> > > > > }
> > > > > <snip>
> > > > >
> > > > > So we have the SLAB_TYPESAFE_BY_RCU flag, which defers freeing of slab pages
> > > > > and of the cache itself by a grace period. A similar flag could be added, say
> > > > > SLAB_DESTROY_ONCE_FULLY_FREED; in that case a worker would rearm itself
> > > > > if there are still objects that should be freed.
> > > > >
> > > > > Any thoughts here?
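To make the proposal concrete, a purely hypothetical sketch of such a self-rearming
worker follows; the deferred_destroy work member and the cache_fully_freed() helper
are invented for illustration and do not exist in mainline, and locking is
deliberately omitted:

    static void kmem_cache_deferred_destroy(struct work_struct *work)
    {
            struct kmem_cache *s = container_of(to_delayed_work(work),
                                                struct kmem_cache,
                                                deferred_destroy);

            /* HYPOTHETICAL: poll until every outstanding object has been
             * returned to the cache, then finish the destroy. */
            if (!cache_fully_freed(s)) {
                    schedule_delayed_work(&s->deferred_destroy, HZ);
                    return;
            }

            shutdown_cache(s);
    }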
> > > >
> > > > Wouldn't we also need some additional code to later check for all objects
> > > > being freed to the slab, whether or not that code is initiated from
> > > > kmem_cache_destroy()?
> > > >
> > > The same way as SLAB_TYPESAFE_BY_RCU is handled in the kmem_cache_destroy() function.
> > > It checks that flag and, if it is set, an extra worker is scheduled to perform a
> > > deferred destroy (instead of destroying right away) after rcu_barrier() finishes.
> >
> > Like this?
> >
> > SLAB_DESTROY_ONCE_FULLY_FREED
> >
> > Instead of adding a new kmem_cache_destroy_rcu()
> > or kmem_cache_destroy_wait() API member, add a
> > SLAB_DESTROY_ONCE_FULLY_FREED flag that can be passed to the
> > existing kmem_cache_destroy() function. Use of this flag would
> > suppress any warnings that would otherwise be issued if there
> > was still slab memory yet to be freed, and it would also spawn
> > workqueues (or timers or whatever) to do any needed cleanup work.
> >
> >
> The flag is passed like all the others when creating a cache:
>
> slab = kmem_cache_create(name, size, ..., SLAB_DESTROY_ONCE_FULLY_FREED | OTHER_FLAGS, NULL);
>
> The rest of the description looks correct to me.
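Putting the two halves of the proposal together, caller-side usage would presumably
look something like this (again hypothetical, since neither the flag nor the
deferred behavior exists yet; names are placeholders):

    /* At init time: opt in to deferred destruction. */
    cache = kmem_cache_create("example_cache", sizeof(struct example),
                              0, SLAB_DESTROY_ONCE_FULLY_FREED, NULL);

    /* At module unload: no rcu_barrier() needed; kmem_cache_destroy()
     * would defer itself until all outstanding objects are freed. */
    kmem_cache_destroy(cache);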
Good catch, fixed, thank you!
Thanx, Paul