twx-linux/kernel/locking
Paul E. McKenney af5f6e27d5 locktorture: Count lock readers
Currently, the lock_is_read_held variable is bool, so that a reader sets
it to true just after lock acquisition and then to false just before
lock release.  This works in a rough statistical sense, but can result
in false negatives just after one of a pair of concurrent readers has
released the lock.  This approach does have low overhead, but at the
expense that the store of true might never leave the reader's store
buffer, thus resulting in an unconditional false negative.
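
As a rough sketch of that bool-based scheme (the lock acquisition and
release calls are paraphrased as comments, and the writer-side check and
the torture_*_sketch function names are illustrative, not the actual
locktorture.c code):

    #include <linux/kernel.h>
    #include <linux/types.h>

    static bool lock_is_read_held;

    static void torture_reader_sketch(void)
    {
            /* ... acquire the lock under test in read mode ... */
            lock_is_read_held = true;   /* may sit unseen in this CPU's store buffer */
            /* ... hold the lock for roughly 10 microseconds ... */
            lock_is_read_held = false;  /* one reader's false masks another still-present reader */
            /* ... release the lock under test ... */
    }

    static void torture_writer_check_sketch(void)
    {
            /* A false negative here lets a concurrent reader go unreported. */
            if (lock_is_read_held)
                    pr_err("lock-torture: writer ran concurrently with a reader\n");
    }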

This commit therefore converts this variable to atomic_t and makes
the reader use atomic_inc() just after acquisition and atomic_dec()
just before release.  This does increase overhead, but this increase is
negligible compared to the 10-microsecond lock hold time.
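
The same sketch after the conversion, using the kernel's atomic_t
counter operations (again with paraphrased acquisition/release and
illustrative function names):

    #include <linux/atomic.h>
    #include <linux/kernel.h>

    static atomic_t lock_is_read_held = ATOMIC_INIT(0);

    static void torture_reader_sketch(void)
    {
            /* ... acquire the lock under test in read mode ... */
            atomic_inc(&lock_is_read_held);  /* count this reader in */
            /* ... hold the lock for roughly 10 microseconds ... */
            atomic_dec(&lock_is_read_held);  /* count this reader out before release */
            /* ... release the lock under test ... */
    }

    static void torture_writer_check_sketch(void)
    {
            /* Any nonzero count indicates at least one reader currently holds the lock. */
            if (atomic_read(&lock_is_read_held))
                    pr_err("lock-torture: writer ran concurrently with a reader\n");
    }

Because every reader increments on entry and decrements on exit, one
reader's decrement can no longer hide a second reader that still holds
the lock.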

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-07-27 11:39:30 -07:00
irqflag-debug.c lockdep: Noinstr annotate warn_bogus_irq_restore() 2021-02-10 14:44:39 +01:00
lock_events_list.h locking/rwsem: Remove reader optimistic spinning 2020-12-09 17:08:48 +01:00
lock_events.c locking/lock_events: Don't show pvqspinlock events on bare metal 2019-04-10 10:56:05 +02:00
lock_events.h locking/lock_events: Use raw_cpu_{add,inc}() for stats 2019-06-03 12:32:56 +02:00
lockdep_internals.h lockdep: Allow tuning tracing capacity constants. 2021-04-05 20:33:57 +09:00
lockdep_proc.c locking/lockdep: Fix meaningless /proc/lockdep output of lock classes on !CONFIG_PROVE_LOCKING 2021-07-05 10:44:52 +02:00
lockdep_states.h locking/lockdep: Rework FS_RECLAIM annotation 2017-08-10 12:29:03 +02:00
lockdep.c Merge branch 'core-rcu-2021.07.04' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu 2021-07-04 12:58:33 -07:00
locktorture.c locktorture: Count lock readers 2021-07-27 11:39:30 -07:00
Makefile locking/rtmutex: Move debug functions as inlines into common header 2021-03-29 15:57:04 +02:00
mcs_spinlock.h locking: Fix typos in comments 2021-03-22 02:45:52 +01:00
mutex-debug.c locking/mutex: clear MUTEX_FLAGS if wait_list is empty due to signal 2021-05-18 12:53:51 +02:00
mutex-debug.h locking/mutex: clear MUTEX_FLAGS if wait_list is empty due to signal 2021-05-18 12:53:51 +02:00
mutex.c sched: Change task_struct::state 2021-06-18 11:43:09 +02:00
mutex.h locking/mutex: clear MUTEX_FLAGS if wait_list is empty due to signal 2021-05-18 12:53:51 +02:00
osq_lock.c locking: Fix typos in comments 2021-03-22 02:45:52 +01:00
percpu-rwsem.c locking/percpu-rwsem: Use this_cpu_{inc,dec}() for read_count 2020-09-16 16:26:56 +02:00
qrwlock.c locking/qrwlock: Cleanup queued_write_lock_slowpath() 2021-05-06 15:33:49 +02:00
qspinlock_paravirt.h Revert "locking/pvqspinlock: Don't wait if vCPU is preempted" 2019-09-25 10:22:37 +02:00
qspinlock_stat.h treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157 2019-05-30 11:26:37 -07:00
qspinlock.c x86/kvm: Add "nopvspin" parameter to disable PV spinlocks 2020-07-08 16:21:57 -04:00
rtmutex_common.h locking/rtmutex: Move debug functions as inlines into common header 2021-03-29 15:57:04 +02:00
rtmutex.c sched: Change task_struct::state 2021-06-18 11:43:09 +02:00
rwsem.c sched: Change task_struct::state 2021-06-18 11:43:09 +02:00
semaphore.c kernel: delete repeated words in comments 2021-02-26 09:41:03 -08:00
spinlock_debug.c lockdep: Introduce wait-type checks 2020-03-21 16:00:24 +01:00
spinlock.c locking: Fix typos in comments 2021-03-22 02:45:52 +01:00
test-ww_mutex.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 9 2019-05-21 11:28:40 +02:00