cpuidle: Fix ct_idle_*() usage
The whole disable-RCU, enable-IRQS dance is very intricate since
changing IRQ state is traced, which depends on RCU.

Add two helpers for the cpuidle case that mirror the entry code:

  ct_cpuidle_enter()
  ct_cpuidle_exit()

And fix all the cases where the enter/exit dance was buggy.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.130014793@infradead.org
commit a01353cf18
parent 0c5ffc3d7b
committed by Ingo Molnar
@@ -622,9 +622,13 @@ struct cpumask *tick_get_broadcast_oneshot_mask(void)
 	 * to avoid a deep idle transition as we are about to get the
 	 * broadcast IPI right away.
 	 */
-int tick_check_broadcast_expired(void)
+noinstr int tick_check_broadcast_expired(void)
 {
+#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
+	return arch_test_bit(smp_processor_id(), cpumask_bits(tick_broadcast_force_mask));
+#else
 	return cpumask_test_cpu(smp_processor_id(), tick_broadcast_force_mask);
+#endif
 }
 
 /*