From: Matthew Dillon
Date: Thu, 8 Oct 2009 21:20:13 +0000 (-0700)
Subject: kernel - Major performance changes to VM page management.
X-Git-Tag: v2.7.1~506
X-Git-Url: https://gitweb.dragonflybsd.org/dragonfly.git/commitdiff_plain/0e8bd897b2ebcf1a575536f3bfdd88fe2377cc27

kernel - Major performance changes to VM page management.

This commit significantly changes the way the kernel caches VM pages.
Essentially, vnodes and VM pages which are accessed often now wind up in
the VM active queue and are last in line for recycling, while vnodes and
VM pages which are accessed only once or twice wind up in the VM inactive
queue and are inserted into the middle of the list for recycling.

Previously vnodes were recycled in an essentially LRU fashion, and due to
algorithmic design issues VM pages associated with files scanned via
open()/read() also wound up being recycled in a LRU fashion.  This caused
relatively often-used data to get recycled far too early in the face of
large filesystem scans (tar, rdist, cvs, etc).

In the new scheme vnodes and VM pages are split into two camps: those
which are used often and those which are used only once or twice.  The
ones used often wind up in the VM active queue (and their vnodes are last
on the list of vnodes which can be recycled), while the ones used only
once or twice wind up in the VM inactive queue.  The cycling of a large
number of files from single-use scans (tar, rdist, cvs, etc. on large
data sets) now recycles only within the inactive set and does not touch
the active set AT ALL.  So, for example, files often accessed by a shell
or other programs tend to remain cached permanently.

Permanence here is a relative term.  Given enough memory pressure such
files WILL be recycled, but single-use scans even of huge data sets will
not create this sort of memory pressure.  Examples of how active VM pages
and vnodes will get recycled include:

(1) Too many pages or vnodes wind up being marked as active.

(2) Memory pressure created by anonymous memory from running processes.

Technical Description of changes:

* The buffer cache is limited.  For example, on a 3G system the buffer
  cache only manages around 200MB.  The VM page cache, on the other hand,
  can cover all available memory.  This means that data can cycle in and
  out of the buffer cache at a much higher rate than it would from the
  VM page cache.

* VM pages were losing their activity history (m->act_count) when wired
  to back buffer cache pages.  Because the buffer cache only manages
  around 200MB, VM pages were being cycled in and out of the buffer cache
  over a shorter time period than they would otherwise be able to survive
  in the VM page queues.  This caused VM pages to be recycled in more of
  a LRU fashion instead of based on usage, particularly the VM pages for
  files accessed with open()/read().  VM pages now retain their activity
  history, and it is also updated while the pages are owned by the buffer
  cache (see the sketch below).

* Files accessed just once, for example in a large 'tar', 'find', or
  'ls', could cause vnodes for files accessed numerous times to get
  kicked out of the vnode free list.  This could occur due to an edge
  case when many tiny files are iterated (such as in a cvs update) on
  machines with 2G or more of memory.  In these cases the vnode cache
  would reach its maximum number of vnodes without the VM page cache ever
  coming under pressure, forcing the VM system to throw away vnodes.
  The VM system invariably chose vnodes with small numbers of cached VM
  pages (which is what we desire), but wound up choosing them in strict
  LRU order regardless of whether the vnode was for a file accessed just
  once or for a file accessed many times.
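To make the activity tracking concrete, here is a minimal stand-alone
C sketch of the idea (a user-space model, NOT the kernel code): the
buffer inherits the highest act_count of the pages wired behind it,
the count advances each time the buffer is reused, and on release each
page gets its history back and is placed on the active or inactive
queue depending on whether the count crossed the cycle point.  The
macro names mirror the patch (ACT_INIT, ACT_ADVANCE, ACT_MAX,
vm_cycle_point), but the numeric values and the toy_* scaffolding are
illustrative assumptions only.

    #include <stdio.h>

    /* Illustrative tuning values mirroring the vm_page act_count macros. */
    #define ACT_INIT       5
    #define ACT_ADVANCE    3
    #define ACT_MAX        64
    #define VM_CYCLE_POINT (ACT_INIT + ACT_ADVANCE * 6)

    struct toy_page { int act_count; int queue; };  /* 0=inactive, 1=active */
    struct toy_buf  { int b_act_count; struct toy_page *pages[4]; int npages; };

    /* Wiring a page behind the buffer: the buffer inherits its history. */
    static void toy_wire(struct toy_buf *bp, struct toy_page *m)
    {
        bp->pages[bp->npages++] = m;
        if (bp->b_act_count < m->act_count)
            bp->b_act_count = m->act_count;
    }

    /* Each reuse of the buffer bumps its activity, like buf_act_advance(). */
    static void toy_act_advance(struct toy_buf *bp)
    {
        if (bp->b_act_count > ACT_MAX - ACT_ADVANCE)
            bp->b_act_count = ACT_MAX;
        else
            bp->b_act_count += ACT_ADVANCE;
    }

    /* On release, hand the history back to the pages and pick the queue. */
    static void toy_release(struct toy_buf *bp)
    {
        for (int i = 0; i < bp->npages; ++i) {
            struct toy_page *m = bp->pages[i];
            m->act_count = bp->b_act_count;
            m->queue = (bp->b_act_count >= VM_CYCLE_POINT);
        }
        bp->npages = 0;
    }

    int main(void)
    {
        struct toy_page m = { .act_count = ACT_INIT, .queue = 0 };
        struct toy_buf bp = { .b_act_count = ACT_INIT };

        toy_wire(&bp, &m);
        for (int i = 0; i < 8; ++i)     /* buffer reused repeatedly */
            toy_act_advance(&bp);
        toy_release(&bp);

        printf("act_count=%d queue=%s\n", m.act_count,
               m.queue ? "active" : "inactive");
        return 0;
    }

Running it shows a repeatedly reused buffer pushing its page over the
cycle point and onto the active queue, while a buffer used only once
would release its page to the inactive queue.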
More technical Description of changes:

* The buffer cache now inherits the highest m->act_count from the VM
  pages backing it, and updates its tracking b_act_count whenever the
  buffer is getblk()'d (HAMMER does this manually for buffers it attaches
  to internal structures).

* VAGE in the vnode->v_flag field has been changed to VAGE0 and VAGE1
  (a 2-bit counter).  Vnodes start out marked as fully aged (a count of
  3) and the count is decremented every time the vnode is opened.  A
  small stand-alone sketch of this counter follows the list below.

* When a vnode is placed on the vnode free list, aged vnodes are now
  inserted into the middle of the list while non-aged vnodes are inserted
  at the end, so aged vnodes get recycled first.

* VM pages returned from the buffer cache are now placed in the inactive
  queue or the active queue based on m->act_count.  This works properly
  now that we do not lose the activity state when wiring and unwiring
  the VM page for buffer cache backings.

* The VM system now sets a much larger inactive page target, 1/4 of
  available memory.  This, combined with the vnode reclamation algorithm
  which reclaims 1/10 of the active vnodes in the system, is now
  responsible for regulating the distribution of 'active' versus
  'inactive' pages.

  It is important to note that the inactive page target and the vnode
  reclamation algorithm set a minimum size for the pages and vnodes
  intended to be on the inactive side of the ledger.  Memory pressure
  from having too many active pages or vnodes will cause VM pages to
  move to the inactive side.  But, as already mentioned, the simple
  one-time cycling of files such as in a tar, rdist, or other file scan
  will NOT create this sort of memory pressure.

Negative aspects of the patch:

* Very large data sets which might previously have fit in memory, but do
  not fit in e.g. 1/2 of available memory, will no longer be fully
  cached.  This is an either-or type of deal: we can't prevent active
  pages from getting recycled unless we reduce the amount of data we
  allow to be cached from 'one-time' uses before starting to recycle
  that data.
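As a companion, here is a stand-alone sketch of the 2-bit VAGE0/VAGE1
age counter (again a user-space model, not kernel code): new vnodes
start fully aged, each open() decrements the count, and only vnodes
still carrying age bits are inserted ahead of the free-list midpoint.
The bit values and the decrement logic follow the patch below; the
toy_* helpers and the printed strings are invented for illustration.

    #include <stdio.h>

    /* The two flag bits form a 2-bit age counter, as in sys/vnode.h. */
    #define VAGE0  0x00000400
    #define VAGE1  0x00000800

    /* New vnodes start fully aged (count of 3): both bits set. */
    static unsigned int toy_alloc_flags(void)
    {
        return (VAGE0 | VAGE1);
    }

    /* Decrement 3-2-1-0 on each open; never goes below 0. */
    static void toy_open(unsigned int *v_flag)
    {
        if (*v_flag & VAGE0) {
            *v_flag &= ~VAGE0;
        } else if (*v_flag & VAGE1) {
            *v_flag &= ~VAGE1;
            *v_flag |= VAGE0;
        }
    }

    /*
     * Free-list placement: vnodes still carrying age bits go before the
     * midpoint marker, fully de-aged (often-opened) vnodes go to the tail.
     */
    static const char *toy_free_position(unsigned int v_flag)
    {
        return (v_flag & (VAGE0 | VAGE1)) ?
               "before midpoint (recycled sooner)" :
               "tail (recycled last)";
    }

    int main(void)
    {
        unsigned int flag = toy_alloc_flags();

        printf("stat'd only : %s\n", toy_free_position(flag));
        toy_open(&flag);            /* first open:  3 -> 2 */
        toy_open(&flag);            /* second open: 2 -> 1 */
        toy_open(&flag);            /* third open:  1 -> 0 */
        printf("opened 3x   : %s\n", toy_free_position(flag));
        return 0;
    }

A vnode that is merely stat()'d keeps all of its age bits and is offered
for recycling early; one opened three or more times ends up at the tail
of the free list and survives single-use scans.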
-Matt --- diff --git a/sys/emulation/linux/i386/linprocfs/linprocfs_subr.c b/sys/emulation/linux/i386/linprocfs/linprocfs_subr.c index 6debe5761d..8cda14030d 100644 --- a/sys/emulation/linux/i386/linprocfs/linprocfs_subr.c +++ b/sys/emulation/linux/i386/linprocfs/linprocfs_subr.c @@ -151,7 +151,7 @@ loop: (VREAD|VEXEC) >> 3 | (VREAD|VEXEC) >> 6; vp->v_type = VDIR; - vp->v_flag = VROOT; + vp->v_flag |= VROOT; break; case Pself: /* /proc/self = lr--r--r-- */ diff --git a/sys/kern/vfs_bio.c b/sys/kern/vfs_bio.c index 6d75070cc0..cdf7d68e7e 100644 --- a/sys/kern/vfs_bio.c +++ b/sys/kern/vfs_bio.c @@ -129,6 +129,7 @@ static int bd_request; /* locked by needsbuffer_spin */ static int bd_request_hw; /* locked by needsbuffer_spin */ static u_int bd_wake_ary[BD_WAKE_SIZE]; static u_int bd_wake_index; +static u_int vm_cycle_point = ACT_INIT + ACT_ADVANCE * 6; static struct spinlock needsbuffer_spin; static struct thread *bufdaemon_td; @@ -146,6 +147,8 @@ SYSCTL_INT(_vfs, OID_AUTO, lorunningspace, CTLFLAG_RW, &lorunningspace, 0, "Minimum amount of buffer space required for active I/O"); SYSCTL_INT(_vfs, OID_AUTO, hirunningspace, CTLFLAG_RW, &hirunningspace, 0, "Maximum amount of buffer space to usable for active I/O"); +SYSCTL_UINT(_vfs, OID_AUTO, vm_cycle_point, CTLFLAG_RW, &vm_cycle_point, 0, + "Recycle pages to active or inactive queue transition pt 0-64"); /* * Sysctls determining current state of the buffer cache. */ @@ -1557,6 +1560,8 @@ bqrelse(struct buf *bp) return; } + buf_act_advance(bp); + spin_lock_wr(&bufspin); if (bp->b_flags & B_LOCKED) { /* @@ -1628,11 +1633,24 @@ vfs_vmio_release(struct buf *bp) for (i = 0; i < bp->b_xio.xio_npages; i++) { m = bp->b_xio.xio_pages[i]; bp->b_xio.xio_pages[i] = NULL; + /* - * In order to keep page LRU ordering consistent, put - * everything on the inactive queue. + * This is a very important bit of code. We try to track + * VM page use whether the pages are wired into the buffer + * cache or not. While wired into the buffer cache the + * bp tracks the act_count. + * + * We can choose to place unwired pages on the inactive + * queue (0) or active queue (1). If we place too many + * on the active queue the queue will cycle the act_count + * on pages we'd like to keep, just from single-use pages + * (such as when doing a tar-up or file scan). 
*/ - vm_page_unwire(m, 0); + if (bp->b_act_count < vm_cycle_point) + vm_page_unwire(m, 0); + else + vm_page_unwire(m, 1); + /* * We don't mess with busy pages, it is * the responsibility of the process that @@ -1659,7 +1677,10 @@ vfs_vmio_release(struct buf *bp) if (bp->b_flags & B_DIRECT) { vm_page_try_to_free(m); } else if (vm_page_count_severe()) { + m->act_count = bp->b_act_count; vm_page_try_to_cache(m); + } else { + m->act_count = bp->b_act_count; } } } @@ -2009,6 +2030,7 @@ restart: bp->b_bcount = 0; bp->b_xio.xio_npages = 0; bp->b_dirtyoff = bp->b_dirtyend = 0; + bp->b_act_count = ACT_INIT; reinitbufbio(bp); KKASSERT(LIST_FIRST(&bp->b_dep) == NULL); buf_dep_init(bp); @@ -3163,6 +3185,8 @@ allocbuf(struct buf *bp, int size) vm_page_wire(m); bp->b_xio.xio_pages[bp->b_xio.xio_npages] = m; ++bp->b_xio.xio_npages; + if (bp->b_act_count < m->act_count) + bp->b_act_count = m->act_count; } crit_exit(); diff --git a/sys/kern/vfs_lock.c b/sys/kern/vfs_lock.c index 221ce768ad..e80d56abd9 100644 --- a/sys/kern/vfs_lock.c +++ b/sys/kern/vfs_lock.c @@ -78,7 +78,13 @@ static struct sysref_class vnode_sysref_class = { } }; -static TAILQ_HEAD(freelst, vnode) vnode_free_list; /* vnode free list */ +/* + * The vnode free list hold inactive vnodes. Aged inactive vnodes + * are inserted prior to the mid point, and otherwise inserted + * at the tail. + */ +static TAILQ_HEAD(freelst, vnode) vnode_free_list; +static struct vnode vnode_free_mid; int freevnodes = 0; SYSCTL_INT(_debug, OID_AUTO, freevnodes, CTLFLAG_RD, @@ -86,6 +92,11 @@ SYSCTL_INT(_debug, OID_AUTO, freevnodes, CTLFLAG_RD, static int wantfreevnodes = 25; SYSCTL_INT(_debug, OID_AUTO, wantfreevnodes, CTLFLAG_RW, &wantfreevnodes, 0, ""); +#ifdef TRACKVNODE +static ulong trackvnode; +SYSCTL_ULONG(_debug, OID_AUTO, trackvnode, CTLFLAG_RW, + &trackvnode, 0, ""); +#endif /* * Called from vfsinit() @@ -94,6 +105,7 @@ void vfs_lock_init(void) { TAILQ_INIT(&vnode_free_list); + TAILQ_INSERT_HEAD(&vnode_free_list, &vnode_free_mid, v_freelist); } /* @@ -107,21 +119,32 @@ static __inline void __vbusy(struct vnode *vp) { +#ifdef TRACKVNODE + if ((ulong)vp == trackvnode) + kprintf("__vbusy %p %08x\n", vp, vp->v_flag); +#endif TAILQ_REMOVE(&vnode_free_list, vp, v_freelist); freevnodes--; - vp->v_flag &= ~(VFREE|VAGE); + vp->v_flag &= ~VFREE; } static __inline void __vfree(struct vnode *vp) { - if (vp->v_flag & (VAGE|VRECLAIMED)) +#ifdef TRACKVNODE + if ((ulong)vp == trackvnode) { + kprintf("__vfree %p %08x\n", vp, vp->v_flag); + print_backtrace(); + } +#endif + if (vp->v_flag & VRECLAIMED) TAILQ_INSERT_HEAD(&vnode_free_list, vp, v_freelist); + else if (vp->v_flag & (VAGE0 | VAGE1)) + TAILQ_INSERT_BEFORE(&vnode_free_mid, vp, v_freelist); else TAILQ_INSERT_TAIL(&vnode_free_list, vp, v_freelist); freevnodes++; - vp->v_flag &= ~VAGE; vp->v_flag |= VFREE; } @@ -129,6 +152,10 @@ static __inline void __vfreetail(struct vnode *vp) { +#ifdef TRACKVNODE + if ((ulong)vp == trackvnode) + kprintf("__vfreetail %p %08x\n", vp, vp->v_flag); +#endif TAILQ_INSERT_TAIL(&vnode_free_list, vp, v_freelist); freevnodes++; vp->v_flag |= VFREE; @@ -204,7 +231,6 @@ vdrop(struct vnode *vp) KKASSERT(vp->v_sysref.refcnt != 0 && vp->v_auxrefs > 0); atomic_subtract_int(&vp->v_auxrefs, 1); if ((vp->v_flag & VCACHED) && vshouldfree(vp)) { - /*vp->v_flag |= VAGE;*/ vp->v_flag &= ~VCACHED; __vfree(vp); } @@ -434,7 +460,6 @@ void vx_put(struct vnode *vp) { if ((vp->v_flag & VCACHED) && vshouldfree(vp)) { - /*vp->v_flag |= VAGE;*/ vp->v_flag &= ~VCACHED; __vfree(vp); } @@ -485,6 +510,8 @@ 
allocfreevnode(void) * XXX NOT MP SAFE */ vp = TAILQ_FIRST(&vnode_free_list); + if (vp == &vnode_free_mid) + vp = TAILQ_NEXT(vp, v_freelist); if (vx_lock_nonblock(vp)) { KKASSERT(vp->v_flag & VFREE); TAILQ_REMOVE(&vnode_free_list, vp, v_freelist); @@ -492,6 +519,10 @@ allocfreevnode(void) vp, v_freelist); continue; } +#ifdef TRACKVNODE + if ((ulong)vp == trackvnode) + kprintf("allocfreevnode %p %08x\n", vp, vp->v_flag); +#endif /* * With the vnode locked we can safely remove it @@ -556,6 +587,10 @@ allocfreevnode(void) /* * Obtain a new vnode from the freelist, allocating more if necessary. * The returned vnode is VX locked & refd. + * + * All new vnodes set the VAGE flags. An open() of the vnode will + * decrement the (2-bit) flags. Vnodes which are opened several times + * are thus retained in the cache over vnodes which are merely stat()d. */ struct vnode * allocvnode(int lktimeout, int lkflags) @@ -614,7 +649,7 @@ allocvnode(int lktimeout, int lkflags) panic("Clean vnode still on hash tree!"); KKASSERT(vp->v_mount == NULL); #endif - vp->v_flag = 0; + vp->v_flag = VAGE0 | VAGE1; vp->v_lastw = 0; vp->v_lasta = 0; vp->v_cstart = 0; diff --git a/sys/kern/vfs_mount.c b/sys/kern/vfs_mount.c index e228de68d6..13429a28cd 100644 --- a/sys/kern/vfs_mount.c +++ b/sys/kern/vfs_mount.c @@ -107,6 +107,10 @@ struct vmntvnodescan_info { struct vnode *vp; }; +struct vnlru_info { + int pass; +}; + static int vnlru_nowhere = 0; SYSCTL_INT(_debug, OID_AUTO, vnlru_nowhere, CTLFLAG_RD, &vnlru_nowhere, 0, @@ -403,7 +407,7 @@ vfs_setfsid(struct mount *mp, fsid_t *template) * not a good candidate, 1 if it is. */ static __inline int -vmightfree(struct vnode *vp, int page_count) +vmightfree(struct vnode *vp, int page_count, int pass) { if (vp->v_flag & VRECLAIMED) return (0); @@ -415,6 +419,29 @@ vmightfree(struct vnode *vp, int page_count) return (0); if (vp->v_object && vp->v_object->resident_page_count >= page_count) return (0); + + /* + * XXX horrible hack. Up to four passes will be taken. Each pass + * makes a larger set of vnodes eligible. For now what this really + * means is that we try to recycle files opened only once before + * recycling files opened multiple times. + */ + switch(vp->v_flag & (VAGE0 | VAGE1)) { + case 0: + if (pass < 3) + return(0); + break; + case VAGE0: + if (pass < 2) + return(0); + break; + case VAGE1: + if (pass < 1) + return(0); + break; + case VAGE0 | VAGE1: + break; + } return (1); } @@ -499,10 +526,18 @@ vtrytomakegoneable(struct vnode *vp, int page_count) * * This routine is a callback from the mountlist scan. The mount point * in question will be busied. + * + * NOTE: The 1/10 reclamation also ensures that the inactive data set + * (the vnodes being recycled by the one-time use) does not degenerate + * into too-small a set. This is important because once a vnode is + * marked as not being one-time-use (VAGE0/VAGE1 both 0) that vnode + * will not be destroyed EXCEPT by this mechanism. VM pages can still + * be cleaned/freed by the pageout daemon. */ static int vlrureclaim(struct mount *mp, void *data) { + struct vnlru_info *info = data; struct vnode *vp; lwkt_tokref ilock; int done; @@ -532,6 +567,7 @@ vlrureclaim(struct mount *mp, void *data) done = 0; lwkt_gettoken(&ilock, &mntvnode_token); count = mp->mnt_nvnodelistsize / 10 + 1; + while (count && mp->mnt_syncer) { /* * Next vnode. Use the special syncer vnode to placemark @@ -559,7 +595,7 @@ vlrureclaim(struct mount *mp, void *data) * check, and then must check again after we lock the vnode. 
*/ if (vp->v_type == VNON || /* syncer or indeterminant */ - !vmightfree(vp, trigger) /* critical path opt */ + !vmightfree(vp, trigger, info->pass) /* critical path opt */ ) { --count; continue; @@ -631,6 +667,7 @@ static void vnlru_proc(void) { struct thread *td = curthread; + struct vnlru_info info; int done; EVENTHANDLER_REGISTER(shutdown_pre_sync, shutdown_kproc, td, @@ -665,7 +702,19 @@ vnlru_proc(void) continue; } cache_cleanneg(0); - done = mountlist_scan(vlrureclaim, NULL, MNTSCAN_FORWARD); + + /* + * The pass iterates through the four combinations of + * VAGE0/VAGE1. We want to get rid of aged small files + * first. + */ + info.pass = 0; + done = 0; + while (done == 0 && info.pass < 4) { + done = mountlist_scan(vlrureclaim, &info, + MNTSCAN_FORWARD); + ++info.pass; + } /* * The vlrureclaim() call only processes 1/10 of the vnodes diff --git a/sys/kern/vfs_vopops.c b/sys/kern/vfs_vopops.c index a3cf635fe8..8c4aabea50 100644 --- a/sys/kern/vfs_vopops.c +++ b/sys/kern/vfs_vopops.c @@ -226,6 +226,9 @@ vop_old_mknod(struct vop_ops *ops, struct vnode *dvp, return(error); } +/* + * NOTE: VAGE is always cleared when calling VOP_OPEN(). + */ int vop_open(struct vop_ops *ops, struct vnode *vp, int mode, struct ucred *cred, struct file *fp) @@ -233,6 +236,16 @@ vop_open(struct vop_ops *ops, struct vnode *vp, int mode, struct ucred *cred, struct vop_open_args ap; int error; + /* + * Decrement 3-2-1-0. Does not decrement beyond 0 + */ + if (vp->v_flag & VAGE0) { + vp->v_flag &= ~VAGE0; + } else if (vp->v_flag & VAGE1) { + vp->v_flag &= ~VAGE1; + vp->v_flag |= VAGE0; + } + ap.a_head.a_desc = &vop_open_desc; ap.a_head.a_ops = ops; ap.a_vp = vp; diff --git a/sys/sys/buf.h b/sys/sys/buf.h index 7392e3796e..cc370890e3 100644 --- a/sys/sys/buf.h +++ b/sys/sys/buf.h @@ -162,7 +162,8 @@ struct buf { struct bio b_bio_array[NBUF_BIO]; /* BIO translation layers */ u_int32_t b_flags; /* B_* flags. */ unsigned short b_qindex; /* buffer queue index */ - unsigned short b_unused01; + unsigned char b_act_count; /* similar to vm_page act_count */ + unsigned char b_unused01; struct lock b_lock; /* Buffer lock */ buf_cmd_t b_cmd; /* I/O command */ int b_bufsize; /* Allocated buffer size. */ diff --git a/sys/sys/buf2.h b/sys/sys/buf2.h index 0ce799e050..510d640174 100644 --- a/sys/sys/buf2.h +++ b/sys/sys/buf2.h @@ -63,6 +63,9 @@ #ifndef _SYS_VNODE_H_ #include #endif +#ifndef _VM_VM_PAGE_H_ +#include +#endif /* * Initialize a lock. @@ -180,6 +183,28 @@ bioq_first(struct bio_queue_head *bioq) return (TAILQ_FIRST(&bioq->queue)); } +/* + * Adjust buffer cache buffer's activity count. This + * works similarly to vm_page->act_count. + */ +static __inline void +buf_act_advance(struct buf *bp) +{ + if (bp->b_act_count > ACT_MAX - ACT_ADVANCE) + bp->b_act_count = ACT_MAX; + else + bp->b_act_count += ACT_ADVANCE; +} + +static __inline void +buf_act_decline(struct buf *bp) +{ + if (bp->b_act_count < ACT_DECLINE) + bp->b_act_count = 0; + else + bp->b_act_count -= ACT_DECLINE; +} + /* * biodeps inlines - used by softupdates and HAMMER. 
*/ diff --git a/sys/sys/vnode.h b/sys/sys/vnode.h index bd3a241006..3395316c52 100644 --- a/sys/sys/vnode.h +++ b/sys/sys/vnode.h @@ -214,7 +214,7 @@ struct vnode { struct bio_track v_track_write; /* track I/O's in progress */ struct mount *v_mount; /* ptr to vfs we are in */ struct vop_ops **v_ops; /* vnode operations vector */ - TAILQ_ENTRY(vnode) v_freelist; /* vnode freelist */ + TAILQ_ENTRY(vnode) v_freelist; /* vnode freelist/cachelist */ TAILQ_ENTRY(vnode) v_nmntvnodes; /* vnodes for mount point */ struct buf_rb_tree v_rbclean_tree; /* RB tree of clean bufs */ struct buf_rb_tree v_rbdirty_tree; /* RB tree of dirty bufs */ @@ -284,12 +284,12 @@ struct vnode { #define VMAYHAVELOCKS 0x00000080 /* maybe posix or flock locks on vp */ #define VPFSROOT 0x00000100 /* may be a pseudo filesystem root */ /* open for business 0x00000200 */ -/* open for business 0x00000400 */ -/* open for business 0x00000800 */ +#define VAGE0 0x00000400 /* Age count for recycling - 2 bits */ +#define VAGE1 0x00000800 /* Age count for recycling - 2 bits */ #define VCACHED 0x00001000 /* No active references but has cache value */ #define VOBJBUF 0x00002000 /* Allocate buffers in VM object */ #define VINACTIVE 0x00004000 /* The vnode is inactive (did VOP_INACTIVE) */ -#define VAGE 0x00008000 /* Insert vnode at head of free list */ +/* open for business 0x00008000 */ #define VOLOCK 0x00010000 /* vnode is locked waiting for an object */ #define VOWANT 0x00020000 /* a process is waiting for VOLOCK */ #define VRECLAIMED 0x00040000 /* This vnode has been destroyed */ diff --git a/sys/vfs/hammer/hammer.h b/sys/vfs/hammer/hammer.h index 7f05c99bbf..6492eb2db9 100644 --- a/sys/vfs/hammer/hammer.h +++ b/sys/vfs/hammer/hammer.h @@ -1178,6 +1178,7 @@ void hammer_io_init(hammer_io_t io, hammer_volume_t volume, enum hammer_io_type type); int hammer_io_read(struct vnode *devvp, struct hammer_io *io, hammer_off_t limit); +void hammer_io_advance(struct hammer_io *io); int hammer_io_new(struct vnode *devvp, struct hammer_io *io); int hammer_io_inval(hammer_volume_t volume, hammer_off_t zone2_offset); struct buf *hammer_io_release(struct hammer_io *io, int flush); diff --git a/sys/vfs/hammer/hammer_io.c b/sys/vfs/hammer/hammer_io.c index bc3743300b..537b9ec120 100644 --- a/sys/vfs/hammer/hammer_io.c +++ b/sys/vfs/hammer/hammer_io.c @@ -249,6 +249,17 @@ hammer_io_new(struct vnode *devvp, struct hammer_io *io) return(0); } +/* + * Advance the activity count on the underlying buffer because + * HAMMER does not getblk/brelse on every access. + */ +void +hammer_io_advance(struct hammer_io *io) +{ + if (io->bp) + buf_act_advance(io->bp); +} + /* * Remove potential device level aliases against buffers managed by high level * vnodes. 
Aliases can also be created due to mixed buffer sizes or via diff --git a/sys/vfs/hammer/hammer_ondisk.c b/sys/vfs/hammer/hammer_ondisk.c index c0b3391d33..1cb9bd1b57 100644 --- a/sys/vfs/hammer/hammer_ondisk.c +++ b/sys/vfs/hammer/hammer_ondisk.c @@ -543,6 +543,7 @@ again: */ if (buffer->ondisk && buffer->io.loading == 0) { *errorp = 0; + hammer_io_advance(&buffer->io); return(buffer); } @@ -660,6 +661,7 @@ found: } else { *errorp = 0; } + hammer_io_advance(&buffer->io); return(buffer); } @@ -1117,6 +1119,7 @@ again: hammer_ref(&node->lock); if (node->ondisk) { *errorp = 0; + hammer_io_advance(&node->buffer->io); } else { *errorp = hammer_load_node(trans, node, isnew); trans->flags |= HAMMER_TRANSF_DIDIO; diff --git a/sys/vfs/nfs/nfs_vfsops.c b/sys/vfs/nfs/nfs_vfsops.c index e1152bac3b..37bd8f306b 100644 --- a/sys/vfs/nfs/nfs_vfsops.c +++ b/sys/vfs/nfs/nfs_vfsops.c @@ -664,7 +664,7 @@ nfs_mountroot(struct mount *mp) * Since the swap file is not the root dir of a file system, * hack it to a regular file. */ - vp->v_flag = 0; + vp->v_flag &= ~VROOT; vref(vp); nfs_setvtype(vp, VREG); swaponvp(td, vp, nd->swap_nblks); @@ -1258,7 +1258,7 @@ nfs_root(struct mount *mp, struct vnode **vpp) } if (vp->v_type == VNON) nfs_setvtype(vp, VDIR); - vp->v_flag = VROOT; + vp->v_flag |= VROOT; if (error) vput(vp); else diff --git a/sys/vm/vm_pageout.c b/sys/vm/vm_pageout.c index b46b8b0332..e0c8d24b39 100644 --- a/sys/vm/vm_pageout.c +++ b/sys/vm/vm_pageout.c @@ -1451,15 +1451,23 @@ vm_pageout(void) else vmstats.v_free_target = 2 * vmstats.v_free_min + vmstats.v_free_reserved; + /* + * NOTE: With the new buffer cache b_act_count we want the default + * inactive target to be a percentage of available memory. + * + * The inactive target essentially determines the minimum + * number of 'temporary' pages capable of caching one-time-use + * files when the VM system is otherwise full of pages + * belonging to multi-time-use files or active program data. + */ if (vmstats.v_free_count > 2048) { vmstats.v_cache_min = vmstats.v_free_target; vmstats.v_cache_max = 2 * vmstats.v_cache_min; - vmstats.v_inactive_target = (3 * vmstats.v_free_target) / 2; } else { vmstats.v_cache_min = 0; vmstats.v_cache_max = 0; - vmstats.v_inactive_target = vmstats.v_free_count / 4; } + vmstats.v_inactive_target = vmstats.v_free_count / 4; if (vmstats.v_inactive_target > vmstats.v_free_count / 3) vmstats.v_inactive_target = vmstats.v_free_count / 3; diff --git a/test/debug/vnodeinfo.c b/test/debug/vnodeinfo.c index 5279b317ac..563232861b 100644 --- a/test/debug/vnodeinfo.c +++ b/test/debug/vnodeinfo.c @@ -245,8 +245,20 @@ dumpvp(kvm_t *kd, struct vnode *vp, int whichlist) #endif if (vn.v_flag & VOBJBUF) printf(" VOBJBUF"); - if (vn.v_flag & VAGE) - printf(" VAGE"); + switch(vn.v_flag & (VAGE0 | VAGE1)) { + case 0: + printf(" VAGE0"); + break; + case VAGE0: + printf(" VAGE1"); + break; + case VAGE1: + printf(" VAGE2"); + break; + case VAGE0 | VAGE1: + printf(" VAGE3"); + break; + } if (vn.v_flag & VOLOCK) printf(" VOLOCK"); if (vn.v_flag & VOWANT) diff --git a/usr.sbin/pstat/pstat.c b/usr.sbin/pstat/pstat.c index c2fe784123..ef3fb6c5f9 100644 --- a/usr.sbin/pstat/pstat.c +++ b/usr.sbin/pstat/pstat.c @@ -425,7 +425,7 @@ vnode_print(struct vnode *avnode, struct vnode *vp) #endif if (flag & VOBJBUF) *fp++ = 'V'; - if (flag & VAGE) + if (flag & (VAGE0 | VAGE1)) *fp++ = 'a'; if (flag & VOLOCK) *fp++ = 'l';