author    Matthew Dillon <dillon@dragonflybsd.org>
          Thu, 1 Jun 2006 19:02:39 +0000 (19:02 +0000)
committer Matthew Dillon <dillon@dragonflybsd.org>
          Thu, 1 Jun 2006 19:02:39 +0000 (19:02 +0000)
commit    bbb31c5d6f229977dcb6949ccc712d119dfe9a34
tree      023c06ae51909b07349278fb9b325d6c00f9e496
parent    cc1033d4f2409b2a7c97c91060097816ae9a8b96
Since we can only hold one shared spinlock at a time anyway, change the
gd_spinlocks_rd counter into a gd_spinlock_rd pointer.  This will improve
performance for potentially contested exclusive spinlocks.  An exclusive
acquirer can now test the per-cpu spinlock pointer directly against the
specific spinlock being acquired, instead of testing a counter which might
represent any shared spinlock held on that cpu.

This also has the effect of relaxing the requirement that further
exclusive spinlocks cannot be acquired while holding a shared spinlock,
but for now we are going to leave the requirement intact.
sys/kern/kern_spinlock.c
sys/kern/lwkt_thread.c
sys/kern/usched_bsd4.c
sys/sys/globaldata.h
sys/sys/spinlock2.h