From 5042f74dda34ca4cca002c8a21288262a96ad756 Mon Sep 17 00:00:00 2001
From: Matthew Dillon
Date: Sat, 21 Oct 2017 15:02:05 -0700
Subject: [PATCH] kernel - Cleanup token code, add simple exclusive priority (2)

* The priority mechanism revealed an issue with lwkt_switch()'s
  fall-back code in dealing with contended tokens.  The code was
  refusing to schedule a lower-priority thread requesting an exclusive
  lock when another thread on the same cpu was requesting a shared
  lock.  This creates a problem for the exclusive priority feature.
  More pointedly, it also creates a general fairness problem in the
  mixed lock type use case.

* Change the mechanism to allow any thread polling on tokens to be
  scheduled.  The scheduler will still iterate in priority order.
  This imposes a little extra overhead on userspace returns, as a
  thread might be scheduled that then tries to return to userland
  without being the designated user thread.

* This also fixes another bug that cropped up recently where a 32-way
  threaded program would sometimes not quickly schedule to all 32
  cpus, occasionally leaving one or two cpus idle for a few seconds.
---
 sys/kern/lwkt_thread.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/sys/kern/lwkt_thread.c b/sys/kern/lwkt_thread.c
index b5529f637f..25b955816c 100644
--- a/sys/kern/lwkt_thread.c
+++ b/sys/kern/lwkt_thread.c
@@ -722,14 +722,22 @@ lwkt_switch(void)
	 * See if we can switch to another thread.
	 *
	 * We generally don't want to do this because it represents a
-	 * priority inversion.  Do not allow the case if the thread
-	 * is returning to userland (not a kernel thread) AND the thread
-	 * has a lower upri.
+	 * priority inversion, but contending tokens on the same cpu can
+	 * cause real problems if we don't, now that we have an exclusive
+	 * priority mechanism over shared for tokens.
+	 *
+	 * The solution is to allow threads with pending tokens to compete
+	 * for them (a lower priority thread will get less cpu once it
+	 * returns from the kernel anyway).  If a thread does not have
+	 * any contending tokens, we go by td_pri and upri.
	 */
	while ((ntd = TAILQ_NEXT(ntd, td_threadq)) != NULL) {
-	    if (ntd->td_pri < TDPRI_KERN_LPSCHED && upri > ntd->td_upri)
-		break;
-	    upri = ntd->td_upri;
+	    if (TD_TOKS_NOT_HELD(ntd) &&
+		ntd->td_pri < TDPRI_KERN_LPSCHED && upri > ntd->td_upri) {
+		continue;
+	    }
+	    if (upri < ntd->td_upri)
+		upri = ntd->td_upri;

	    /*
	     * Try this one.
-- 
2.41.0