[PATCH 00/14] replace call_rcu by kfree_rcu for simple kmem_cache_free callback
Jason A. Donenfeld
Jason at zx2c4.com
Mon Jun 17 17:04:34 UTC 2024
On Mon, Jun 17, 2024 at 06:38:52PM +0200, Vlastimil Babka wrote:
> On 6/17/24 6:33 PM, Jason A. Donenfeld wrote:
> > On Mon, Jun 17, 2024 at 6:30 PM Uladzislau Rezki <urezki at gmail.com> wrote:
> >> Here if an "err" is less then "0" means there are still objects
> >> whereas "is_destroyed" is set to "true" which is not correlated
> >> with a comment:
> >>
> >> "Destruction happens when no objects"
> >
> > The comment is just poorly written. But the logic of the code is right.
> >
> >>
> >> > out_unlock:
> >> > mutex_unlock(&slab_mutex);
> >> > cpus_read_unlock();
> >> > diff --git a/mm/slub.c b/mm/slub.c
> >> > index 1373ac365a46..7db8fe90a323 100644
> >> > --- a/mm/slub.c
> >> > +++ b/mm/slub.c
> >> > @@ -4510,6 +4510,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
> >> > return;
> >> > trace_kmem_cache_free(_RET_IP_, x, s);
> >> > slab_free(s, virt_to_slab(x), x, _RET_IP_);
> >> > + if (s->is_destroyed)
> >> > + kmem_cache_destroy(s);
> >> > }
> >> > EXPORT_SYMBOL(kmem_cache_free);
> >> >
> >> > @@ -5342,9 +5344,6 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
> >> > if (!slab->inuse) {
> >> > remove_partial(n, slab);
> >> > list_add(&slab->slab_list, &discard);
> >> > - } else {
> >> > - list_slab_objects(s, slab,
> >> > - "Objects remaining in %s on __kmem_cache_shutdown()");
> >> > }
> >> > }
> >> > spin_unlock_irq(&n->list_lock);
> >> >
> >> Anyway, it looks like doing it in the kmem_cache_free() function
> >> was not welcome, due to performance reasons.
> >
> > "was not welcome" - Vlastimil mentioned *potential* performance
> > concerns before I posted this. I suspect he might have a different
> > view now, maybe?
> >
> > Vlastimil, this is just checking a boolean (which could be
> > unlikely()'d), which should have pretty minimal overhead. Is that
> > alright with you?
>
> Well I doubt we can just set and check it without any barriers? The
> completion of the last pending kfree_rcu() might race with
> kmem_cache_destroy() in a way that will leave the cache there forever, no?
> And once we add barriers it becomes a perf issue?
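To spell out the window I think you mean (if I'm reading the diff above
right, it's the failed __kmem_cache_shutdown() that ends up setting the
flag):

    kmem_cache_destroy()                last kfree_rcu() callback
    --------------------                -------------------------
    __kmem_cache_shutdown() fails,
    the last object is still in use
                                        kmem_cache_free() frees that
                                        object; s->is_destroyed is
                                        still false, so it does not
                                        call kmem_cache_destroy()
    sets s->is_destroyed = true and
    returns, leaving the destroy to
    a last free that already happened

    => nothing is left to destroy the cache, it lingers forever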
Hm, yea, you might be right about barriers being required. But might
this point toward a larger problem with whichever approach is chosen,
polling or event? The current rule is that kmem_cache_free() must never
race with kmem_cache_destroy(), because users have always made diligent
use of call_rcu()/rcu_barrier() and such. Once we let the two race with
each other - either by my check above or by polling - we're potentially
going to get into trouble and need some barriers anyway.
I think?
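For the sake of argument, here is very roughly what the flag hand-off
might have to look like to close that window - untested sketch only,
cache_empty() and actually_destroy() are made-up stand-ins for the real
shutdown path, and note the flag gets set before rechecking emptiness
rather than after a failed shutdown:

/* kmem_cache_destroy() side */
static void sketch_destroy(struct kmem_cache *s)
{
        WRITE_ONCE(s->is_destroyed, true);
        smp_mb();   /* order the flag store against the emptiness check */
        if (cache_empty(s))
                actually_destroy(s);
}

/* tail of kmem_cache_free(), after slab_free() has freed the object */
static void sketch_free_tail(struct kmem_cache *s)
{
        smp_mb();   /* order the preceding free against the flag load */
        if (unlikely(READ_ONCE(s->is_destroyed)) && cache_empty(s))
                actually_destroy(s);
}

Even then, both sides can observe each other's update and reach
actually_destroy(), so whoever wins would have to claim it atomically
(cmpxchg or similar) to avoid a double destroy - which is more or less
the "some barriers anyway" worry above.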
Jason