## Lockmgr

Lockmgr locks used to be heavy-weight locks but have since been optimized; they now run almost as fast as spinlocks and faster than mtx locks in the critical path. Lockmgr locks block formally and will not be as fast as spinlocks for very short, highly contended bits of code. Lockmgr locks are the go-to lock for DragonFly and should be used if you are not sure what to use.

### Internals

* Uses atomic_cmpset*().
* Both shared and exclusive locks are supported.
* Locks are held across blocking conditions.
* Exclusive locks have priority over shared locks for SMP performance.
* Always a procedure call, but fast-paths operations internally.
* Will formally block when acquiring (unless LK_NOWAIT is used).
* Can unblock and return an error when a signal is received.
* Has many general-purpose features.
* Easy to debug.
* Easy wakeup() semantics with numerous cpu management features.
* Most portable.

----

## Spinlocks

Kernel spinlocks are optimized for fast inlining but should generally only be used if tokens, lockmgr locks, or mtx locks are not appropriate. The most common use case is when you are holding a token but do not want to break its atomicity by potentially blocking on a deeper lock. Issues with spinlocks are far more difficult to debug than issues with blockable locks.

### Internals

* Non-recursive by design.
* Both shared and exclusive spinlocks are supported.
* A contending thread spins to wait; it does not sleep or block.
* Uses atomic_cmpset*(); the fast path is inlined.
* Any held tokens will not be temporarily released.
* An automatic critical section is in effect while a spinlock is held.
* Exclusive spinlocks have priority over shared spinlocks to reduce SMP lag.
* Should only be used for very short segments of code.
* Should not wrap long or complex pieces of code, or anything that can block.
* Should not wrap kernel calls, even simple ones.
* Nesting is not recommended.
* Do not use unless you are very familiar with how spinlocks work, as misuse can lead to a kernel deadlock.

----

## LWKT serializing tokens

LWKT tokens are commonly used to loosely lock very broad, deep, or complex sections of code which might block at times. You can acquire tokens in any order without worrying about deadlocks; however, any blocking condition (including blocking while acquiring additional tokens) will temporarily release ALL held tokens while the thread is blocked. A held token serializes execution of the code it covers, but only while the thread is runnable.

Preemption by another thread (typically an interrupt thread) will not break the tokens you are holding. Preemption is still allowed; tokens do not enter a critical section. If the preempting thread tries to obtain the same token, it will reschedule normally and control will return to the original thread.

The kernel thread scheduler will try to spin a bit on a token that cannot be acquired before giving up and blocking for real.

### Internals

* Uses atomic_cmpset*() internally.
* Spins in the scheduler instead of blocking, but counts as 'blocking' because the scheduler is allowed to switch to other threads.
* Easy to debug; all held tokens are tracked on a thread-by-thread basis.
* Recursion is always allowed.
* Both exclusive and shared tokens are supported.
* Shared tokens must be used with caution; mixed shared/exclusive use of the same token in the same thread will panic.
* Optimal for ping-ponging between two threads.

----

## MTX

Mtx locks in DragonFly should not be confused with mtx locks in FreeBSD; in DragonFly they are a very different beast. Generally these locks work similarly to lockmgr locks but provide two additional features, which also make them heavier-weight in the face of contention.

### Internals

* Also uses atomic_cmpset*() internally.
* Implements lock chaining internally, round-robin for exclusive locks.
* Exclusive locks have priority over shared locks for SMP performance.
* Always recursive.
* The fast path is inlined.
* Easy to debug due to the internal chaining structures.
* Locks are held across blocking conditions.
* Supports shared and exclusive locks and most features of lockmgr locks.
* Supports callback-based asynchronous lock acquisition.
* Supports lock aborts (for both synchronous and asynchronous acquisition).

----

## MPLock

The mplock has been deprecated and should no longer be used. It is still used in a few non-critical places in DragonFly but has been removed from nearly all major and minor kernel subsystems.

### Internals
* The API is a wrapper for mp_token.

----

## LWKT Messages

DragonFly implements a lightweight message-and-port subsystem which is primarily used to shove packets around in bulk in the network subsystem and to cpu-localize network operations. The disk subsystem also uses LWKT messages to sequence disk probes and synchronize devfs.

### Internals

* Message passing and queueing.
* End-point mechanics can be customized.
* Very lightweight.

----

## Critical Sections

Critical sections are a form of cpu-localized locking that prevents a thread from being preempted by another thread or by an IPI or clock interrupt.

Critical sections do not protect a thread from threads or interrupts which might run on other cpus, and should generally not be used to protect the frontend of a chipset driver from the backend (since the frontend can run on any cpu). Critical sections are primarily used to protect against IPIs, allowing cpu-localized code to run very efficiently.

A critical section can be used if you want to avoid code latency, but is no longer recommended for this purpose.
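As a non-runnable, kernel-context sketch of this cpu-localized pattern: crit_enter() and crit_exit() bracket an update to per-cpu state so that nothing else can run on the local cpu in between. The structure and field names here are hypothetical, not taken from the kernel source.

```c
#include <sys/thread2.h>        /* crit_enter(), crit_exit() */

/* hypothetical per-cpu bookkeeping structure */
struct mydata {
        long    counter;
};

static void
example_bump_local_counter(struct mydata *md)
{
        crit_enter();           /* no preemption; IPIs/clock ints deferred */
        ++md->counter;          /* safe against same-cpu interference only */
        crit_exit();            /* any deferred interrupts now run */
}
```

Note that this protects only against activity on the local cpu; synchronizing with other cpus still requires one of the locks described on this page.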
A critical section cannot prevent SMIs from occurring, nor does it prevent the actual hard interrupt from interrupting the cpu; it simply causes the interrupt to return immediately and delays its handling until the critical section is exited.

### Internals

* Prevents preemption by an interrupt thread.
* Prevents preemption by an IPI or clock interrupt.
* The critical section holder can still block if desired; the critical section is simply set aside and then restored when the thread unblocks.

----

## Condvars

Condvars are only used when porting code from FreeBSD and should generally not be used in native DragonFly code.

### Internals

* Generally used for simple interlocked event/wait loops, but tends to obfuscate what is happening.
* Uses a spinlock internally.
* Not inline-optimized.
* Not optimized for heavy SMP contention.
* Lockmgr locks are generally easier to understand and should be used instead.

----

## Serializers

DragonFly currently implements a LWKT serializer abstraction which should not be used generally. This abstraction is heavily integrated into the network interface device driver subsystem.
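Since lockmgr is described above as the go-to DragonFly lock, a hedged sketch of its basic shared/exclusive usage may help. This is a non-runnable, kernel-context fragment: the lock name and helper functions are illustrative, and the calls (lockinit(), lockmgr() with LK_EXCLUSIVE/LK_SHARED/LK_RELEASE) are shown as this page describes them, not as a verified, buildable unit.

```c
#include <sys/lock.h>

static struct lock my_lk;               /* illustrative name */

static void
example_init(void)
{
        /* wmesg "mylk" identifies threads blocked on this lock */
        lockinit(&my_lk, "mylk", 0, 0);
}

static void
example_writer(void)
{
        lockmgr(&my_lk, LK_EXCLUSIVE);  /* may formally block */
        /* ... modify the protected structure ... */
        lockmgr(&my_lk, LK_RELEASE);
}

static void
example_reader(void)
{
        lockmgr(&my_lk, LK_SHARED);     /* many readers may hold it */
        /* ... read the protected structure ... */
        lockmgr(&my_lk, LK_RELEASE);
}
```

Passing LK_EXCLUSIVE | LK_NOWAIT instead would return an error rather than block if the lock cannot be acquired immediately.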
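Similarly, the LWKT token behaviour described earlier (all held tokens are temporarily released across any blocking condition) can be sketched as follows. Again a non-runnable, kernel-context fragment with illustrative names; `some_event` is a hypothetical wait channel.

```c
#include <sys/thread.h>

static struct lwkt_token my_tok;        /* illustrative name */
static int some_event;                  /* hypothetical wait channel */

static void
example_token_user(void)
{
        lwkt_gettoken(&my_tok);         /* serialized while runnable */
        /* ... walk a token-protected global list ... */

        /* blocking here temporarily releases ALL held tokens */
        tsleep(&some_event, 0, "wait", hz);

        /* tokens were reacquired by the scheduler before we resumed */
        lwkt_reltoken(&my_tok);
}
```

Because the token was dropped across the tsleep(), any state examined before blocking must be revalidated afterward.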