kernel - Numerous VM MPSAFE fixes
* Remove most critical sections from the VM subsystem; they are no longer
  needed (the vm_token covers the access).
* _pmap_allocpte() for x86-64 - Conditionalize the zeroing of the vm_page
  after the grab. The grab can race other threads and return a page that
  has already been zeroed AND populated with pte's, so we cannot blindly
  zero it.
  Use m->valid to determine whether the page is actually newly allocated.
NOTE: The 32 bit code already properly zeros the page by detecting whether
the pte has already been entered or not. The 64-bit code couldn't
do this neatly so we used another method.
* Hold the pmap vm_object in pmap_release() and pmap_object_init_pt() for
  the x86-64 pmap code. This prevents related loops from blocking on the
  pmap vm_object when freeing VM pages, which the code does not expect.
* pmap_copy() for x86-64 needs the vm_token, critical sections are no longer
sufficient.
* Assert that PG_MANAGED is set when clearing pte's out of a pmap via the
PV entries. The pte's must exist in this case and it's a critical panic
if they don't.
* pmap_replacevm() for x86-64 - Adjust newvm->vm_sysref prior to assigning
it to p->p_vmspace to handle any potential MP races with other sysrefs
on the vmspace.
* faultin() needs p->p_token, not proc_token.
* swapout_procs_callback() needs p->p_token.
* Deallocate the VM object associated with a vm_page after freeing the
  page instead of before freeing the page. This fixes a potential
  use-after-free if an MP race occurs after the object's refs transition
  to 0.