[PATCH AUTOSEL 6.2 09/30] cpumask: fix incorrect cpumask scanning result checks
Sasha Levin
sashal at kernel.org
Mon Mar 20 00:52:34 UTC 2023
From: Linus Torvalds <torvalds at linux-foundation.org>
[ Upstream commit 8ca09d5fa3549d142c2080a72a4c70ce389163cd ]
It turns out that commit 596ff4a09b89 ("cpumask: re-introduce
constant-sized cpumask optimizations") exposed a number of cases of
drivers not checking the result of "cpumask_next()" and friends
correctly.
The documented correct check for "no more cpus in the cpumask" is to
check for the result being equal or larger than the number of possible
CPU ids, exactly _because_ we've always done those constant-sized
cpumask scans using a widened type before. So the return value of a
cpumask scan should be checked with
	if (cpu >= nr_cpu_ids)
		...
because the cpumask scan did not necessarily stop exactly *at* that
maximum CPU id.
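As a minimal sketch of that calling convention (the "mask" below and the
per-CPU work are made-up placeholders, not code from this patch), a scan
loop would look roughly like

	/* Illustration only: walk a hypothetical mask and stop once the
	 * scan runs past the last possible CPU id.
	 */
	unsigned int cpu;

	for (cpu = cpumask_first(mask); cpu < nr_cpu_ids;
	     cpu = cpumask_next(cpu, mask)) {
		/* per-CPU work goes here */
	}

where the only guaranteed contract is that an exhausted scan returns
some value >= nr_cpu_ids; the exact out-of-range value is an
implementation detail.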
But a few cases ended up instead using checks like
	if (cpu == nr_cpumask_bits)
		...
which used that internal "widened" number of bits. And that used to
work pretty much by accident (ok, in this case "by accident" is simply
because it matched the historical internal implementation of the cpumask
scanning, so it was more a case of "intentionally using implementation
details" than an accident).
But the extended constant-sized optimizations then did that internal
implementation differently, and now that code that did things wrong but
matched the old implementation no longer worked at all.
Which then causes subsequent odd problems, because the caller ends up
using an invalid CPU ID.
Most of these cases require either unusual hardware or special uses to
hit, but the random.c one triggers quite easily.
All you really need is to have a sufficiently small CONFIG_NR_CPUS value
for the bit scanning optimization to be triggered, but not enough CPUs
to then actually fill that widened cpumask. At that point, the cpumask
scanning will return the NR_CPUS constant, which is _not_ the same as
nr_cpumask_bits.
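To make that concrete (the numbers here are picked purely for
illustration, not taken from any reported system): with CONFIG_NR_CPUS=8
on a box that boots with only 4 CPUs, nr_cpu_ids is 4 while the
constant-sized scan still runs over all 8 bits.

	/*
	 * Illustration only -- example values, not from a real report:
	 *   CONFIG_NR_CPUS == 8, 4 CPUs present  =>  nr_cpu_ids == 4
	 *
	 *   cpu = cpumask_next(3, &mask);  // no further bits set: the
	 *                                  // constant-sized scan returns 8
	 *   if (cpu == nr_cpumask_bits)    // nr_cpumask_bits is no longer 8,
	 *                                  // so this check never fires
	 *   if (cpu >= nr_cpu_ids)         // 8 >= 4, so this one does
	 */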
This just does the mindless fix with

	sed -i 's/== nr_cpumask_bits/>= nr_cpu_ids/'

to fix the incorrect uses.
The ones in the SCSI lpfc driver in particular could probably be fixed
more cleanly by just removing that repeated pattern entirely, but I am
not emotionally invested enough in that driver to care.
Reported-and-tested-by: Guenter Roeck <linux at roeck-us.net>
Link: https://lore.kernel.org/lkml/481b19b5-83a0-4793-b4fd-194ad7b978c3@roeck-us.net/
Reported-and-tested-by: Geert Uytterhoeven <geert+renesas at glider.be>
Link: https://lore.kernel.org/lkml/CAMuHMdUKo_Sf7TjKzcNDa8Ve+6QrK+P8nSQrSQ=6LTRmcBKNww@mail.gmail.com/
Reported-by: Vernon Yang <vernon2gm at gmail.com>
Link: https://lore.kernel.org/lkml/20230306160651.2016767-1-vernon2gm@gmail.com/
Cc: Yury Norov <yury.norov at gmail.com>
Cc: Jason A. Donenfeld <Jason at zx2c4.com>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Sasha Levin <sashal at kernel.org>
---
arch/powerpc/xmon/xmon.c | 2 +-
drivers/char/random.c | 2 +-
drivers/net/wireguard/queueing.h | 2 +-
drivers/scsi/lpfc/lpfc_init.c | 14 +++++++-------
4 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 0da66bc4823d4..3b4e2475fc4ef 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -1277,7 +1277,7 @@ static int xmon_batch_next_cpu(void)
while (!cpumask_empty(&xmon_batch_cpus)) {
cpu = cpumask_next_wrap(smp_processor_id(), &xmon_batch_cpus,
xmon_batch_start_cpu, true);
- if (cpu == nr_cpumask_bits)
+ if (cpu >= nr_cpu_ids)
break;
if (xmon_batch_start_cpu == -1)
xmon_batch_start_cpu = cpu;
diff --git a/drivers/char/random.c b/drivers/char/random.c
index ce3ccd172cc86..253f2ddb89130 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1311,7 +1311,7 @@ static void __cold try_to_generate_entropy(void)
/* Basic CPU round-robin, which avoids the current CPU. */
do {
cpu = cpumask_next(cpu, &timer_cpus);
- if (cpu == nr_cpumask_bits)
+ if (cpu >= nr_cpu_ids)
cpu = cpumask_first(&timer_cpus);
} while (cpu == smp_processor_id() && num_cpus > 1);
diff --git a/drivers/net/wireguard/queueing.h b/drivers/net/wireguard/queueing.h
index 583adb37ee1e3..125284b346a77 100644
--- a/drivers/net/wireguard/queueing.h
+++ b/drivers/net/wireguard/queueing.h
@@ -106,7 +106,7 @@ static inline int wg_cpumask_choose_online(int *stored_cpu, unsigned int id)
{
unsigned int cpu = *stored_cpu, cpu_index, i;
- if (unlikely(cpu == nr_cpumask_bits ||
+ if (unlikely(cpu >= nr_cpu_ids ||
!cpumask_test_cpu(cpu, cpu_online_mask))) {
cpu_index = id % cpumask_weight(cpu_online_mask);
cpu = cpumask_first(cpu_online_mask);
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 25ba20e428255..3fbd3bec26fc1 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -12507,7 +12507,7 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
goto found_same;
new_cpu = cpumask_next(
new_cpu, cpu_present_mask);
- if (new_cpu == nr_cpumask_bits)
+ if (new_cpu >= nr_cpu_ids)
new_cpu = first_cpu;
}
/* At this point, we leave the CPU as unassigned */
@@ -12521,7 +12521,7 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
* selecting the same IRQ.
*/
start_cpu = cpumask_next(new_cpu, cpu_present_mask);
- if (start_cpu == nr_cpumask_bits)
+ if (start_cpu >= nr_cpu_ids)
start_cpu = first_cpu;
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
@@ -12557,7 +12557,7 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
goto found_any;
new_cpu = cpumask_next(
new_cpu, cpu_present_mask);
- if (new_cpu == nr_cpumask_bits)
+ if (new_cpu >= nr_cpu_ids)
new_cpu = first_cpu;
}
/* We should never leave an entry unassigned */
@@ -12575,7 +12575,7 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
* selecting the same IRQ.
*/
start_cpu = cpumask_next(new_cpu, cpu_present_mask);
- if (start_cpu == nr_cpumask_bits)
+ if (start_cpu >= nr_cpu_ids)
start_cpu = first_cpu;
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
@@ -12648,7 +12648,7 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
goto found_hdwq;
}
new_cpu = cpumask_next(new_cpu, cpu_present_mask);
- if (new_cpu == nr_cpumask_bits)
+ if (new_cpu >= nr_cpu_ids)
new_cpu = first_cpu;
}
@@ -12663,7 +12663,7 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
goto found_hdwq;
new_cpu = cpumask_next(new_cpu, cpu_present_mask);
- if (new_cpu == nr_cpumask_bits)
+ if (new_cpu >= nr_cpu_ids)
new_cpu = first_cpu;
}
@@ -12674,7 +12674,7 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
found_hdwq:
/* We found an available entry, copy the IRQ info */
start_cpu = cpumask_next(new_cpu, cpu_present_mask);
- if (start_cpu == nr_cpumask_bits)
+ if (start_cpu >= nr_cpu_ids)
start_cpu = first_cpu;
cpup->hdwq = new_cpup->hdwq;
logit:
--
2.39.2