From: Matthew Dillon
Date: Wed, 8 Oct 2003 00:10:56 +0000 (+0000)
Subject: Disable background bitmap writes.  They appear to cause at least two race
X-Git-Tag: v2.0.1~12871
X-Git-Url: https://gitweb.dragonflybsd.org/dragonfly.git/commitdiff_plain/27c2f783473045f3d08ad5aedfea12d35ace5d5c

Disable background bitmap writes.  They appear to cause at least two race
conditions:  First, on MP systems even an LK_NOWAIT lock may block,
invalidating flags checks done just prior to the lock attempt.  Second, on
both MP and UP systems, the original buffer (origbp) may be modified during
the completion of a background write without its lock being held, and these
modifications can race against mainline code that is also modifying the
same buffer with the lock held.

Eventually the problem that background bitmap writes solved will be solved
more generally by implementing page COWing during device I/O to avoid
stalls on pages undergoing write I/O.
---

diff --git a/sys/kern/vfs_bio.c b/sys/kern/vfs_bio.c
index 1da5a43c0b..a09817fe2a 100644
--- a/sys/kern/vfs_bio.c
+++ b/sys/kern/vfs_bio.c
@@ -12,7 +12,7 @@
  * John S. Dyson.
  *
  * $FreeBSD: src/sys/kern/vfs_bio.c,v 1.242.2.20 2003/05/28 18:38:10 alc Exp $
- * $DragonFly: src/sys/kern/vfs_bio.c,v 1.14 2003/08/27 01:43:07 dillon Exp $
+ * $DragonFly: src/sys/kern/vfs_bio.c,v 1.15 2003/10/08 00:10:56 dillon Exp $
  */
 
 /*
@@ -143,6 +143,16 @@ SYSCTL_INT(_vfs, OID_AUTO, buffreekvacnt, CTLFLAG_RW,
 SYSCTL_INT(_vfs, OID_AUTO, bufreusecnt, CTLFLAG_RW,
 	&bufreusecnt, 0, "");
 
+/*
+ * Disable background writes for now.  There appear to be races in the
+ * flags tests and locking operations as well as races in the completion
+ * code modifying the original bp (origbp) without holding a lock, assuming
+ * splbio protection when there might not be splbio protection.
+ */
+static int dobkgrdwrite = 0;
+SYSCTL_INT(_debug, OID_AUTO, dobkgrdwrite, CTLFLAG_RW, &dobkgrdwrite, 0,
+	"Do background writes (honoring the BV_BKGRDWRITE flag)?");
+
 static int bufhashmask;
 static LIST_HEAD(bufhashhdr, buf) *bufhashtbl, invalhash;
 struct bqueues bufqueues[BUFFER_QUEUES] = { { 0 } };
@@ -630,7 +640,8 @@ bwrite(struct buf * bp)
 	 * This optimization eats a lot of memory.  If we have a page
 	 * or buffer shortfull we can't do it.
 	 */
-	if ((bp->b_xflags & BX_BKGRDWRITE) &&
+	if (dobkgrdwrite &&
+	    (bp->b_xflags & BX_BKGRDWRITE) &&
 	    (bp->b_flags & B_ASYNC) &&
 	    !vm_page_count_severe() &&
 	    !buf_dirty_count_severe()) {
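
The first race described above is the classic check-then-lock window: the
buffer flags are tested without the buffer lock held, and by the time even a
"non-blocking" LK_NOWAIT acquisition completes on an MP system, the flags may
no longer reflect reality.  The following is a minimal user-space sketch of
that window, not DragonFly kernel code: buf_dirty, writer() and completion()
are hypothetical stand-ins for the b_flags tests, the bwrite() path, and the
background-write completion path.

/*
 * Illustrative sketch only -- build with: cc -pthread race.c
 * Thread A checks a flag, then takes a lock, assuming the flag is still
 * accurate once the lock is held.  Thread B changes the flag in the window
 * between A's check and A's lock acquisition, so A acts on stale state.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;
static volatile bool buf_dirty = true;	/* hypothetical stand-in for a b_flags bit */

/* Mimics the producer path: unlocked flags check, then lock. */
static void *writer(void *arg)
{
	(void)arg;
	if (buf_dirty) {			/* unlocked flags check */
		/*
		 * Window: completion() may clear buf_dirty before this lock
		 * is actually held, so the decision made above is stale.
		 */
		pthread_mutex_lock(&buf_lock);
		if (!buf_dirty)
			printf("check-then-lock race: flag changed before lock was held\n");
		buf_dirty = false;		/* pretend the buffer was written */
		pthread_mutex_unlock(&buf_lock);
	}
	return NULL;
}

/* Mimics a completion path clearing state under the lock. */
static void *completion(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&buf_lock);
	buf_dirty = false;
	pthread_mutex_unlock(&buf_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, writer, NULL);
	pthread_create(&t2, NULL, completion, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

Note that the optimization stays compiled in: it is gated by the new
debug.dobkgrdwrite sysctl, which defaults to 0, so background bitmap writes
can be re-enabled for testing with sysctl debug.dobkgrdwrite=1.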