kernel - Cut buffer cache related pmap invalidations in half
author    Matthew Dillon <dillon@apollo.backplane.com>
          Mon, 25 Jul 2016 04:52:26 +0000 (21:52 -0700)
committer Matthew Dillon <dillon@apollo.backplane.com>
          Mon, 25 Jul 2016 04:52:26 +0000 (21:52 -0700)
commit    2d1b280fd7290751e4da3171a396ae301829790f
tree      14a1191943b86e3518398e837d1802674d39f036
parent    d0f59917b31c9ce70fc3a52aa387163871f88f4b

* Do not bother to invalidate the TLB when tearing down a buffer
  cache buffer.  On the flip side, always invalidate the TLB
  (the page range in question) when entering pages into a buffer
  cache buffer.  Only applicable to normal VMIO buffers.
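
  The rule above can be sketched with a toy page-table/TLB model
  (illustrative only; the names buf_enter_pages(), buf_teardown_pages(),
  and translate() are hypothetical and do not appear in vfs_bio.c):
  teardown clears the PTEs but deliberately leaves stale TLB entries,
  and the entry path's unconditional invalidation makes that safe.

```c
#include <stdint.h>
#include <string.h>

#define NPAGES 8
#define NOPAGE 0xFFFFFFFFu

/* Toy model: one page table and one TLB, indexed by virtual page. */
static uint32_t pte[NPAGES];    /* virtual page -> physical page */
static uint32_t tlb[NPAGES];    /* cached translations */

static void tlb_invalidate_range(int start, int n)
{
    for (int i = start; i < start + n; ++i)
        tlb[i] = NOPAGE;
}

/*
 * Entering pages into a buffer cache buffer: install the PTEs and
 * always invalidate the TLB for just that page range (the commit's
 * "always invalidate on entry" half of the rule).
 */
static void buf_enter_pages(int start, int n, uint32_t phys_base)
{
    for (int i = 0; i < n; ++i)
        pte[start + i] = phys_base + (uint32_t)i;
    tlb_invalidate_range(start, n);
}

/*
 * Tearing down a buffer cache buffer: clear the PTEs but skip the
 * TLB invalidation entirely, leaving possibly-stale TLB entries.
 * They are harmless because any re-entry invalidates first.
 */
static void buf_teardown_pages(int start, int n)
{
    for (int i = 0; i < n; ++i)
        pte[start + i] = NOPAGE;
}

/* Translate a virtual page, filling the TLB on a miss. */
static uint32_t translate(int vpage)
{
    if (tlb[vpage] == NOPAGE)
        tlb[vpage] = pte[vpage];
    return tlb[vpage];
}

/* Reset both tables to "no mapping" (0xFF bytes == NOPAGE words). */
static void model_reset(void)
{
    memset(pte, 0xFF, sizeof(pte));
    memset(tlb, 0xFF, sizeof(tlb));
}
```

  In this model the teardown path does half the invalidation work, yet a
  buffer page re-entered at a new physical address still translates
  correctly, because buf_enter_pages() flushed the stale TLB slot.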

* Significantly improves buffer cache / filesystem performance with
  no real risk.

* Significantly improves performance for tmpfs teardowns on unmount
  (which typically have to tear down a lot of buffer cache buffers).
sys/kern/vfs_bio.c