Sascha Wildner [Sun, 2 Jun 2019 17:23:22 +0000 (19:23 +0200)]
Remove lint checks in system headers.
Modern checkers no longer (need to) pass this to the build.
Tomohiro Kusumi [Sun, 2 Jun 2019 05:54:18 +0000 (14:54 +0900)]
sys/vfs/hammer: Rename hammer_flush_inode_done() -> hammer_sync_inode_done()
It's a counterpart for hammer_sync_inode().
François Tigeot [Sun, 2 Jun 2019 05:16:11 +0000 (07:16 +0200)]
drm: Sync drm_gem_object_lookup() with Linux 4.7.10
Sascha Wildner [Sat, 1 Jun 2019 15:03:30 +0000 (17:03 +0200)]
libm: Fix some -Wredundant-decls. signgam is in <math.h> already.
Sascha Wildner [Sat, 1 Jun 2019 08:23:09 +0000 (10:23 +0200)]
Correct some casts in printf arguments in various utilities.
Peeter Must [Sat, 1 Jun 2019 06:56:05 +0000 (09:56 +0300)]
kernel/evdev: create input devices with UID_ROOT and GID_WHEEL
* In preparation to follow the same scheme as in drm devices:
access rights will be set using the devfs system, see commit
82aec1d31805500239e50c9b6ed8d25802b0a17c.
Sepherosa Ziehau [Sun, 3 Mar 2019 12:51:53 +0000 (20:51 +0800)]
em/emx/igb: Merge Intel em-7.7.4 and igb-2.5.6
Most noticeably, this adds I219 V6/V7 support for em/emx.
Sascha Wildner [Fri, 31 May 2019 15:36:37 +0000 (17:36 +0200)]
resident(8): Remove a.out support.
Sascha Wildner [Fri, 31 May 2019 15:36:08 +0000 (17:36 +0200)]
ldd(1): Remove unneeded inclusion of <a.out.h>.
Sascha Wildner [Fri, 31 May 2019 15:36:03 +0000 (17:36 +0200)]
gcore(1): Stop depending on <a.out.h>.
Sascha Wildner [Fri, 31 May 2019 15:32:08 +0000 (17:32 +0200)]
Remove symorder(1). It's no longer useful.
Sascha Wildner [Fri, 31 May 2019 13:58:40 +0000 (15:58 +0200)]
crunchide(1): Remove a.out, elf32 and ECOFF traces.
François Tigeot [Fri, 31 May 2019 05:23:05 +0000 (07:23 +0200)]
drm/radeon: Stop naming the kernel module "radeonkms"
It is named "radeon" in Linux.
Matthew Dillon [Fri, 31 May 2019 04:38:29 +0000 (23:38 -0500)]
vkernel - Adjust use of GDF_VIRTUSER
* Don't clear GDF_VIRTUSER until after exiting the critical section,
otherwise hardclock() will tick sys instead of user.
A user-bound program should now show more reasonable values for
user%.
* Scrap some debug counters that are no longer applicable, cleaning
up a few cache line bounces.
Matthew Dillon [Fri, 31 May 2019 03:41:32 +0000 (22:41 -0500)]
dummynet - Poll only if operational, change default freq for vkernel
* Default frequency for vkernel is now 100 hz instead of 1000 hz.
The frequency can be changed at any time with a sysctl.
* The systimer is only configured when there are pipes or flows
present, and will be deconfigured when there aren't. This
saves us from doing any unnecessary polling.
* dummynet still has some considerable holes related to SMP
operation. It really needs a rewrite to some degree.
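The configure-on-demand behavior described above can be modeled as a tiny state machine: arm the periodic systimer when the first pipe/flow appears, tear it down when the last one goes away. A minimal userland sketch; the names (dn_pipe_count, dn_timer_armed, and the helper functions) are illustrative, not the actual dummynet symbols, and the real systimer calls appear only as comments:

```c
#include <assert.h>

static int dn_pipe_count;	/* pipes + flows currently configured */
static int dn_timer_armed;	/* nonzero while the poll systimer runs */

/* Reconcile timer state with demand: poll only while work exists. */
static void
dn_timer_update(void)
{
	if (dn_pipe_count > 0 && !dn_timer_armed)
		dn_timer_armed = 1;	/* systimer_init_periodic(...) */
	else if (dn_pipe_count == 0 && dn_timer_armed)
		dn_timer_armed = 0;	/* systimer_del(...) */
}

static void
dn_pipe_add(void)
{
	dn_pipe_count++;
	dn_timer_update();
}

static void
dn_pipe_del(void)
{
	dn_pipe_count--;
	dn_timer_update();
}
```

This avoids per-tick polling on an idle dummynet, which matters even more on a vkernel running at a reduced tick rate.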
François Tigeot [Thu, 30 May 2019 15:42:44 +0000 (17:42 +0200)]
drm/linux: Implement i2c_bit_add_bus()
Sascha Wildner [Thu, 30 May 2019 13:03:40 +0000 (15:03 +0200)]
sysctl.8: Revert
39d69daecef529eb49d36fefa429c8ac08e7cbc1.
The manpage just says whether it's a number, a string or something
else, not the exact type.
Sascha Wildner [Thu, 30 May 2019 11:25:44 +0000 (13:25 +0200)]
<sys/sysctl.h>: Remove the unused CTL_HW_NAMES define.
Reported-by: zrj
Sascha Wildner [Thu, 30 May 2019 10:56:52 +0000 (12:56 +0200)]
kernel/sysctl: Switch kern.osrevision to showing __DragonFly_version.
It was tied to a historic define (BSD) that started as 199506 and was
sporadically bumped in the past until 200708. Revert the define back
to 199506, as it is not supposed to be bumped, and add a comment about
this (taken from NetBSD). We cannot remove these defines completely
because at least some are used by ports.
François Tigeot [Thu, 30 May 2019 08:06:06 +0000 (10:06 +0200)]
drm/linux: Add wait_event_interruptible_locked()
François Tigeot [Thu, 30 May 2019 07:51:56 +0000 (09:51 +0200)]
drm/linux: Add vmalloc() and vzalloc()
François Tigeot [Thu, 30 May 2019 07:49:40 +0000 (09:49 +0200)]
drm/linux: Add set_memory_uc()
François Tigeot [Thu, 30 May 2019 07:48:47 +0000 (09:48 +0200)]
drm/linux: Add gcd()
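A gcd() helper in a Linux compatibility layer is typically the iterative Euclid algorithm; a minimal userland sketch (the actual drm/linux implementation may differ in naming and type width):

```c
#include <assert.h>

/* Iterative Euclid's algorithm; by convention gcd(0, n) == n. */
static unsigned long
gcd(unsigned long a, unsigned long b)
{
	while (b != 0) {
		unsigned long t = b;

		b = a % b;
		a = t;
	}
	return a;
}
```

DRM code commonly uses such a helper to reduce clock and timing ratios to lowest terms.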
François Tigeot [Thu, 30 May 2019 07:47:17 +0000 (09:47 +0200)]
drm/linux: Add list_prev_entry()
Obtained-from: FreeBSD
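list_prev_entry() is the mirror of list_next_entry(): follow the node's ->prev pointer, then recover the containing structure. A minimal C sketch of the idiom; Linux's real macro omits the explicit type argument by using typeof(), and struct item / demo_prev_val() here are illustrative only:

```c
#include <assert.h>
#include <stddef.h>

/* Classic intrusive doubly-linked list node, as in the Linux list_head. */
struct list_head {
	struct list_head *next, *prev;
};

/* Recover the parent structure from a pointer to its embedded member. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define list_entry(ptr, type, member)	container_of(ptr, type, member)

/* Step back one element: follow ->prev, then convert node -> entry.
 * (Linux spells this without the 'type' argument, via typeof().) */
#define list_prev_entry(pos, type, member) \
	list_entry((pos)->member.prev, type, member)

struct item {
	int val;
	struct list_head node;
};

/* Self-check: on a two-element circular list, the entry before 'b' is 'a'. */
static int
demo_prev_val(void)
{
	struct item a = { .val = 1 }, b = { .val = 2 };

	a.node.next = &b.node; a.node.prev = &b.node;
	b.node.next = &a.node; b.node.prev = &a.node;
	return list_prev_entry(&b, struct item, node)->val;
}
```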
François Tigeot [Thu, 30 May 2019 07:40:22 +0000 (09:40 +0200)]
drm/linux: Fix pci_map_page() arguments
Matthew Dillon [Wed, 29 May 2019 21:38:21 +0000 (14:38 -0700)]
vkernel - Restore vkernel build
* Finish cleaning up the vkernel pmap code so the build works again.
Tested-by: dillon, tested with an NFS boot.
Matthew Dillon [Wed, 29 May 2019 21:33:07 +0000 (14:33 -0700)]
kernel - Don't block in tstop() with locks held
* There are several places where the kernel improperly blocks on a
STOP signal while locks might be held. This is a particular problem
when PCATCH is specified e.g. in the middle of the NFS code. It is
meant to catch INTR but it also improperly allowed STOP to function
and left the vnode lock held.
Several other places in the kernel also use PCATCH and don't expect
the kernel to actually block indefinitely on a STOP.
* Don't block in STOP in these situations. Simply mark the thread as
stopped and wait until it tries to return to userland before actually
stopping.
Any kernel subsystems which desire to act on the STOP in-line instead
of upon return to userland can do so manually, as long as they release
all locks for the duration.
Sascha Wildner [Wed, 29 May 2019 12:41:04 +0000 (14:41 +0200)]
<sys/syslimits.h>: Clean up inclusion check and warning.
Sascha Wildner [Tue, 28 May 2019 19:17:34 +0000 (21:17 +0200)]
Clean up a few math-related manual pages.
* In frexp.3, change the library to libc, because it is part of libc,
not libm (anymore).
* In fpclassify.3 and signbit.3, remove the LIBRARY section, because
all these are macros from <math.h>.
Matthew Dillon [Tue, 28 May 2019 18:57:59 +0000 (11:57 -0700)]
kernel - Build 'evdev' into the kernel
* Build evdev into the kernel along with its EVDEV_SUPPORT option.
Requested-by: peeter
Matthew Dillon [Mon, 27 May 2019 00:20:48 +0000 (17:20 -0700)]
kernel - Refactor scheduler weightings part 2/2.
* Change the default fork()ing mechanic from 0x80 (random cpu) to
0x20 (best cpu). We no longer need to mix it up on fork because
weight4 now works.
* The best cpu algorithm has a number of desirable characteristics
for fork() and fork()/exec().
- Will generally start the child topologically 'close' to the parent,
reducing fork/exec/exit/wait overheads, but still spread the children
out while machine load is light. If the child sticks around for
long enough, it will get spread out even more optimally. If not,
closer is better.
- Will not stack children up on the same cpu unnecessarily (e.g. parent
fork()s a bunch of times at once).
- Will optimize heavy and very-heavy load situations. If the child
has nowhere else reasonable to go, this will schedule it on a
hyper-thread sibling or even on the same cpu as the parent,
depending on the load.
* Gives us around a 15% improvement in fork/exec/exit/wait performance.
* Once a second we clear the td_wakefromcpu hint on the currently
running thread. This allows a thread which has become cpu-bound
to start to 'wander' afield (though the scheduler will still try to
avoid moving it too far away, topologically).
Sascha Wildner [Mon, 27 May 2019 16:09:05 +0000 (18:09 +0200)]
kernel/pmap: Clean up no longer used MALLOC_DEFINE.
Sascha Wildner [Sun, 26 May 2019 17:43:17 +0000 (19:43 +0200)]
<sys/cdefs.h>: Remove the old unused __DF_VISIBLE.
Nothing depends on it anymore and nothing sets _DRAGONFLY_SOURCE or
_NETBSD_SOURCE. We handle all non-POSIX visibility with __BSD_VISIBLE
quite well.
Pointed-out-by: zrj
Sascha Wildner [Sun, 26 May 2019 13:30:40 +0000 (15:30 +0200)]
Remove expand(1) from the bootstrap tools.
Sascha Wildner [Sun, 26 May 2019 12:54:53 +0000 (14:54 +0200)]
route(8): Simplify the keywords handling.
Adapted from FreeBSD. This eliminates paste(1) as a bootstrap tool.
Sascha Wildner [Sun, 26 May 2019 12:02:07 +0000 (14:02 +0200)]
dump(8): Remove some unneeded defines.
Matthew Dillon [Sun, 26 May 2019 16:49:02 +0000 (09:49 -0700)]
kernel - VM rework part 21 - Start resynchronizing the vkernel
* Fix some minor syntax errors.
* The vkernel is still not operational in master. It will be a little
while. Even though the vkernel retains the old pmap mechanism, there
are some i's to dot and t's to cross in the expectations the mainline
kernel has of the APIs.
Reported-by: swildner
Matthew Dillon [Sun, 26 May 2019 16:47:29 +0000 (09:47 -0700)]
kernel - Backout 'Reduce token backoff'
* Return the backoff to 4096. Basically there are multiple situations
here where a smaller backoff works better, and multiple situations
where a larger backoff works better. For now, return the setting to
its former glory and don't mess with it.
Matthew Dillon [Sun, 26 May 2019 16:37:51 +0000 (09:37 -0700)]
kernel - Refactor scheduler weightings part 1/2.
* Refactor the scheduler's weightings and fix a few issues that
have cropped up due to breaking previous tunings. This gets
our pgbench results back to normal.
* There will probably be a follow-up commit with a bit more
tuning work, particularly with regard to resetting the
td_wakefromcpu field which we currently do not do at all.
* Increase weight1 (keep thread on current CPU) slightly,
implement weight4, and re-tune the algorithm.
* Break-out the IPC pairing control fields into two new
sysctls, kern.usched_dfly.ipc_smt and kern.usched_dfly.ipc_same,
with the default set to -1 (auto).
ipc_smt Tries to schedule IPC pairings onto sibling
hyperthreads to avoid cache mastership changes
when the load is greater than (ncpus / 2).
ipc_same Tries to schedule IPC pairings onto the same
logical cpu to avoid both cache mastership changes
AND IPIs when the load is greater than (ncpus).
* Keep in mind that the scheduler cannot perfectly predict program
behavior. In particular, these IPC pairings can work better or worse
depending on the mix between local cpu use within each process,
versus the amount of data being transferred between them. By default
we try to localize IPC pairings to nearby cores but we do not try
to schedule them to sibling hyperthreads unless the load is high
enough for it to make sense.
* The main IPC weighting is weight2, whereas the fairness metric is
weight4. Generally speaking, weight4 should be somewhat smaller
than weight2 but still high enough to ensure that available CPUs
in the system are reasonably well utilized. Also note that the
fairness metric (weight4) is based on priority-weighted load.
Matthew Dillon [Sat, 25 May 2019 18:49:46 +0000 (11:49 -0700)]
kernel - pipe locks are not needed in the kqueue event code
* The kqueue event code locks the knote itself, and this should be
sufficient to interlock any race between the filter and the
other side.
Remove the token locks from the event filters and add a little code
to handle any invalid kn_data values (due to not being locked).
Testing-with: sysutils/pv (via zrj)
Matthew Dillon [Sat, 25 May 2019 18:45:35 +0000 (11:45 -0700)]
kernel - Reduce token backoff
* Reduce lwkt.token_backoff_max from 4096 to 128. 4096 was just too
long and results in poor performance when heavy token contention is
present.
Testing-with: sysutils/pv (via zrj)
Sascha Wildner [Sat, 25 May 2019 07:31:30 +0000 (09:31 +0200)]
kernel/netmap: Move headers into <net/netmap/...>.
Note that netmap isn't hooked into the build right now, and because of
that, this commit results in removing them from their current location.
Some dports like net/libpcap started breaking after
2c68437386f4be2ed45a4
because configure found a compilable netmap_user.h and decided that we have
a current and usable netmap.
Reported-by: zrj
Matthew Dillon [Thu, 23 May 2019 16:24:41 +0000 (09:24 -0700)]
kernel - Enhance indefinite wait buffer console message
* Enhance debug info for indefinite wait buffers.
Sascha Wildner [Thu, 23 May 2019 07:43:14 +0000 (09:43 +0200)]
kernel: Remove two more unneeded .PATHs.
Sascha Wildner [Thu, 23 May 2019 07:17:09 +0000 (09:17 +0200)]
kernel/smbus: Remove an unneeded .PATH in a Makefile.
Sascha Wildner [Thu, 23 May 2019 06:49:31 +0000 (08:49 +0200)]
Move <sys/fd_set.h> to <sys/_fd_set.h>.
It is only supposed to be included by other headers. Normal code
should use <sys/select.h>.
Sascha Wildner [Thu, 23 May 2019 06:41:49 +0000 (08:41 +0200)]
Don't include the full <sys/signal.h> in headers that just need sigset_t.
Namely, <select.h> and <spawn.h>.
Split it out into a separate header, <sys/_sigset_t.h> and include that
in <spawn.h> and <sys/select.h>.
This cleans up these two headers' name space considerably.
Thanks to zrj for testing with a dports bulk build.
Matthew Dillon [Thu, 23 May 2019 06:27:57 +0000 (23:27 -0700)]
kernel - Reduce acpi_ec timeout after failure, silence errors
* Reduce the acpi_ec timeout from 750ms to 100ms after a
failure.
* Automatically silence... well, all acpi error messages,
after 10 acpi_ec errors.
* This allows the dell xps-13 to boot in a more reasonable
period of time and not spew EC errors to the console all
the time, by default, without us having to disable the EC manually.
Matthew Dillon [Thu, 23 May 2019 00:43:47 +0000 (17:43 -0700)]
dhclient - Allow 'start' keyword
* There's something weird in our rc scripts that is causing dhclient
to be called with 'start <interface>' on the wlan, so temporarily
hack allowing the 'start' keyword.
François Tigeot [Wed, 22 May 2019 21:40:59 +0000 (23:40 +0200)]
drm: Reduce differences with Linux in ioctl code
Matthew Dillon [Wed, 22 May 2019 07:16:17 +0000 (00:16 -0700)]
kernel - VM rework part 20 - Fix vmmeter_neg_slop_cnt
* Fix some serious issues with the vmmeter_neg_slop_cnt calculation.
The main problem is that this calculation was then causing
vmstats.v_free_min to be recalculated to a much higher value
than it should have been, resulting in systems starting
to page far earlier than they should.
For example, the 128G TR started paging tmpfs data with 25GB of
free memory, which was not intended. The correct target for that
amount of memory is more around 3GB.
* Remove vmmeter_neg_slop_cnt entirely and refactor the synchronization
code to be smarter. It will now synchronize vmstats fields whose
adjustments exceed -1024, but only if paging would actually be
needed in the worst-case scenario.
* This algorithm needs low-memory testing and might require more
tuning.
Matthew Dillon [Wed, 22 May 2019 03:12:34 +0000 (20:12 -0700)]
kernel - Reduce/refactor nbuf and maxvnodes calculations.
* The prime motivation for this commit is to target about 1/20
(5%) of physical memory for use by the kernel. These changes
significantly reduce kernel memory usage on systems with less
than 4GB of ram (and more specifically on systems with less
than 1TB of ram), and also emplace more reasonable caps on
systems with 128GB+ of ram.
These changes return 100-200MB of ram to userland on systems
with 1GB of ram, and return around 6.5GB of ram on systems
with 128G of ram.
* The nbuf calculation and related code documentation was a bit
crufty, still somewhat designed for an earlier era and was
calculating about twice the stated 5% target. For systems with
128GB of ram or less the calculation was simply creating too many
filesystem buffers, allowing as much as 10% of physical memory to
be locked up by the buffer cache.
Particularly on small systems, this 10% plus other kernel overheads
left a lot less memory available for user programs than we would
have liked. This work gets us closer to the 5% target.
* Change the base calculation from 1/10 of physical memory to 1/20
of physical memory, cutting the number of buffers in half on
most systems. The code documentation stated 1/20 but was actually
calculating 1/10.
* On large memory systems > 100GB the number of buffers is now capped
at around 400000 or so (allowing the buffer cache to use around
6.5 GBytes). This cap was previously based on a relatively
disconnected parameter relating to available memory in early boot,
and when triggered it actually miscalculated nbufs to be double
the intended number.
The new cap is based on a fixed maximum of 500MB worth of
struct bufs, roughly similar to the original intention. This
change reduces the number of buffers reserved on system with
more than around 100GB of ram from around 12GB worth of data
down to 6.5GB.
* With the BKVABIO work eliminating most SMP invltlbs on buffer
recyclement, there is no real reason to need a huge buffer
cache. Just make sure it's big enough on large-memory machines
to fully cache the likely live datasets for things like bulk
compiles and such.
* For kern.maxvnodes (which can be changed at run-time if you
desire), the base calculation on systems with less than 1GB
of ram has been cut in half (~60K vnodes to ~30K vnodes). It
will ramp up more slowly until it roughly matches the prior
calculation at 4GB of system memory. On systems with enough
memory, maxvnodes is now explicitly capped at 4M.
There generally is no need to allow an excessive number of vnodes
to be cached.
For HAMMER1 you can set vfs.hammer.double_buffer=1 to cause it
to cache data from the underlying device, allowing it to utilize
all available free(ish) memory regardless of the maxvnodes setting.
HAMMER2 caches disk blocks in the underlying device by default.
The vnode-based vm_object caches decompressed data, so we want
to have enough vnodes for nominal heavily parallel bulk operations
to avoid unnecessary re-lookups of the vnode as well as avoid having
to decompress the same thing over and over again.
In both cases an excessively high kern.maxvnodes actually wastes
memory on both HAMMER1 and HAMMER2... or at least makes the pageout
daemon's job more difficult.
* Remove vfs.maxmallocbufspace. It is no longer connected to
anything.
Matthew Dillon [Wed, 22 May 2019 02:24:20 +0000 (19:24 -0700)]
kernel - VM rework part 19 - Cleanup
* vmpageinfo breaks down the kernel load size, vm_page_array
size, and buffer headers for the buffer cache, all of which
are major boot-time wired kernel memory.
Note that the vm_page_array[] uses 3.1% of physical memory.
It's a lot, but there is no convenient way to make it less.
Matthew Dillon [Tue, 21 May 2019 20:55:43 +0000 (13:55 -0700)]
kernel - VM rework part 18 - Cleanup
* Significantly reduce the zone limit for pvzone (for pmap
pv_entry structures). pv_entry's are no longer allocated
on a per-page basis so the limit can be made much smaller.
This also has the effect of reducing the per-cpu cache limit
which ultimately stabilizes wired memory use for the zone.
* Also reduce the generic per-cpu cache limit for zones.
This only really affects the pvzone.
* Make pvzone, mapentzone, and swap_zone __read_mostly.
* Enhance vmstat -z, report current structural use and actual
total memory use.
* Also cleanup the copyright statement for vm/vm_zone.c. John Dyson's
original copyright was slightly different than the BSD copyright and
stipulated no changes, so separate out the DragonFly addendum.
Sascha Wildner [Tue, 21 May 2019 18:43:42 +0000 (20:43 +0200)]
<net/netmap_user.h>: s/<malloc.h>/<stdlib.h>/.
It is not used in base and in fact the netmap we have in the tree is
not hooked in, but it seems at least one port stumbles over this.
Reported-by: zrj
Matthew Dillon [Tue, 21 May 2019 00:35:57 +0000 (17:35 -0700)]
kernel - VM rework part 17 - Cleanup
* Adjust kmapinfo and vmpageinfo in /usr/src/test/debug.
Enhance the code to display more useful information.
* Get pmap_page_stats_*() working again.
* Change systat -vm's 'VM' reporting. Replace VM-rss with PMAP and
VMRSS. Relabel VM-swp to SWAP and SWTOT.
PMAP - Amount of real memory faulted into user pmaps.
VMRSS - Sum of all process RSS's in the system. This is
the 'virtual' memory faulted into user pmaps and
includes shared pages.
SWAP - Amount of swap space currently in use.
SWTOT - Total amount of swap installed.
* Redocument vm_page.h.
* Remove dead code from pmap.c (some left over cruft from the
days when pv_entry's were used for PTEs).
Matthew Dillon [Mon, 20 May 2019 16:37:12 +0000 (09:37 -0700)]
kernel - VM rework part 16 - Optimization & cleanup pass
* Adjust __exclusive_cache_line to use 128-byte alignment as
per suggestion by mjg. Use this for the global vmstats.
* Add the vmmeter_neg_slop_cnt global, which is a more generous
dynamic calculation versus -VMMETER_SLOP_COUNT. The idea is to
reduce how often vm_page_alloc() synchronizes its per-cpu statistics
with the global vmstats.
Matthew Dillon [Mon, 20 May 2019 16:29:43 +0000 (09:29 -0700)]
kernel - VM rework part 15 - Core pmap work, refactor PG_*
* Augment PG_FICTITIOUS. This takes over some of PG_UNMANAGED's previous
capabilities. In addition, the pmap_*() API will work with fictitious
pages, making mmap() operation (e.g. of the GPU) more consistent.
* Add PG_UNQUEUED. This prevents a vm_page from being manipulated in
the vm_page_queues[] in any way. This takes over another feature
of the old PG_UNMANAGED flag.
* Remove PG_UNMANAGED
* Remove PG_DEVICE_IDX. This is no longer relevant. We use PG_FICTITIOUS
for all device pages.
* Refactor vm_contig_pg_alloc(), vm_contig_pg_free(),
vm_page_alloc_contig(), and vm_page_free_contig().
These functions now set PG_FICTITIOUS | PG_UNQUEUED on the returned
pages, and properly clear the bits upon free or if/when a regular
(but special contig-managed) page is handed over to the normal paging
system.
This, combined with making the pmap*() functions work better with
PG_FICTITIOUS, is the primary 'fix' for some of DRM's hacks.
Matthew Dillon [Mon, 20 May 2019 01:48:30 +0000 (18:48 -0700)]
kernel - VM rework part 14 - Core pmap work, stabilize for X/drm
* Don't gratuitously change the vm_page flags in the drm code.
The vm_phys_fictitious_reg_range() code in drm_vm.c was clearing
PG_UNMANAGED. It was only luck that this worked before, but
because these are faked pages, PG_UNMANAGED must be set or the
system will implode trying to convert the physical address back
to a vm_page in certain routines.
The ttm code was setting PG_FICTITIOUS in order to prevent the
page from getting into the active or inactive queues (they had
a conditional test for PG_FICTITIOUS). But ttm never cleared
the bit before freeing the page. Remove the hack and instead
fix it in vm_page.c
* in vm_object_terminate(), allow the case where there are still
wired pages in a OBJT_MGTDEVICE object that has wound up on a
queue (don't complain about it). This situation arises because the
ttm code uses the contig malloc API which returns wired pages.
NOTE: vm_page_activate()/vm_page_deactivate() are allowed to mess
with wired pages. Wired pages are not anything 'special' to
the queues, which allows us to avoid messing with the queues
when pages are assigned to the buffer cache.
Matthew Dillon [Sun, 19 May 2019 19:59:49 +0000 (12:59 -0700)]
kernel - VM rework part 13 - Core pmap work, stabilize & optimize
* Refactor the vm_page_hash hash again to get a better distribution.
* I tried to only hash shared objects but this resulted in a number of
edge cases where program re-use could miss the optimization.
* Add a sysctl vm.page_hash_vnode_only (default off). If turned on,
only vm_page's associated with vnodes will be hashed. This should
generally not be necessary.
* Refactor vm_page_list_find2() again to avoid all duplicate queue
checks. This time I mocked the algorithm up in userland and twisted
it until it did what I wanted.
* VM_FAULT_QUICK_DEBUG was accidentally left on, turn it off.
* Do not remove the original page from the pmap when vm_fault_object()
must do a COW. And just in case this is ever added back in later,
don't do it using pmap_remove_specific() !!! Use pmap_remove_pages()
to avoid the backing scan lock.
vm_fault_page() will now do this removal (for procfs rwmem); the normal
vm_fault will of course replace the page anyway, and the umtx code
uses different recovery mechanisms now and should be ok.
* Optimize vm_map_entry_shadow() for the situation where the old
object is no longer shared. Get rid of an unnecessary transient
kmalloc() and vm_object_hold_shared().
Matthew Dillon [Sun, 19 May 2019 16:53:12 +0000 (09:53 -0700)]
kernel - VM rework part 12 - Core pmap work, stabilize & optimize
* Add tracking for the number of PTEs mapped writeable in md_page.
Change how PG_WRITEABLE and PG_MAPPED are cleared in the vm_page
to avoid clear/set races. This problem occurs because we would
have otherwise tried to clear the bits without hard-busying the
page. This allows the bits to be set with only an atomic op.
Procedures which test these bits universally do so while holding
the page hard-busied, and now call pmap_mapped_sync() first to
properly synchronize the bits.
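The set-only-with-atomics discipline can be illustrated with C11 atomics. This is a sketch under assumed names (fake_page, page_set_flags(), demo_flags()), not the actual vm_page/md fields or the kernel's own atomic_set_int() API:

```c
#include <assert.h>
#include <stdatomic.h>

#define PG_MAPPED	0x01u
#define PG_WRITEABLE	0x02u

struct fake_page {
	atomic_uint flags;
};

/* Set bits with a single atomic OR; a concurrent update cannot be lost. */
static void
page_set_flags(struct fake_page *m, unsigned bits)
{
	atomic_fetch_or_explicit(&m->flags, bits, memory_order_relaxed);
}

/* Clear bits with a single atomic AND-NOT. */
static void
page_clear_flags(struct fake_page *m, unsigned bits)
{
	atomic_fetch_and_explicit(&m->flags, ~bits, memory_order_relaxed);
}

/* Self-check: set both bits, then drop only PG_WRITEABLE. */
static unsigned
demo_flags(void)
{
	struct fake_page m = { 0 };

	page_set_flags(&m, PG_MAPPED | PG_WRITEABLE);
	page_clear_flags(&m, PG_WRITEABLE);
	return atomic_load(&m.flags);
}
```

The kernel additionally requires the page to be hard-busied before the bits are tested, which is what makes the lock-free set safe.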
* Fix bugs related to various counters: pm_stats.resident_count,
wiring counts, vm_page->md.writeable_count, and
vm_page->md.pmap_count.
* Fix bugs related to synchronizing removed pte's with the vm_page.
Fix one case where we were improperly updating (m)'s state based
on a lost race against a pte swap-to-0 (pulling the pte).
* Fix a bug related to the page soft-busying code when the
m->object/m->pindex race is lost.
* Implement a heuristic version of vm_page_active() which just
updates act_count unlocked if the page is already in the
PQ_ACTIVE queue, or if it is fictitious.
* Allow races against the backing scan for pmap_remove_all() and
pmap_page_protect(VM_PROT_READ). Callers of these routines for
these cases expect full synchronization of the page dirty state.
We can identify when a page has not been fully cleaned out by
checking vm_page->md.pmap_count and vm_page->md.writeable_count.
In the rare situation where this happens, simply retry.
* Assert that the PTE pindex is properly interlocked in pmap_enter().
We still allow PTEs to be pulled by other routines without the
interlock, but multiple pmap_enter()s of the same page will be
interlocked.
* Assert additional wiring count failure cases.
* (UNTESTED) Flag DEVICE pages (dev_pager_getfake()) as being
PG_UNMANAGED. This essentially prevents all the various
reference counters (e.g. vm_page->md.pmap_count and
vm_page->md.writeable_count), PG_M, PG_A, etc from being
updated.
The vm_page's aren't tracked in the pmap at all because there
is no way to find them; they are 'fake', so without a pv_entry,
we can't track them. Instead we simply rely on the vm_map_backing
scan to manipulate the PTEs.
* Optimize the new vm_map_entry_shadow() to use a shared object
token instead of an exclusive one. OBJ_ONEMAPPING will be cleared
with the shared token.
* Optimize single-threaded access to pmaps to avoid pmap_inval_*()
complexities.
* Optimize __read_mostly for more globals.
* Optimize pmap_testbit(), pmap_clearbit(), pmap_page_protect().
Pre-check vm_page->md.writeable_count and vm_page->md.pmap_count
for an easy degenerate return; before real work.
* Optimize pmap_inval_smp() and pmap_inval_smp_cmpset() for the
single-threaded pmap case, when called on the same CPU the pmap
is associated with. This allows us to use simple atomics and
cpu_*() instructions and avoid the complexities of the
pmap_inval_*() infrastructure.
* Randomize the page queue used in bio_page_alloc(). This does not
appear to hurt performance (e.g. heavy tmpfs use) on large many-core
NUMA machines and it makes the vm_page_alloc()'s job easier.
This change might have a downside for temporary files, but for more
long-lasting files there's no point allocating pages localized to a
particular cpu.
* Optimize vm_page_alloc().
(1) Refactor the _vm_page_list_find*() routines to avoid re-scanning
the same array indices over and over again when trying to find
a page.
(2) Add a heuristic, vpq.lastq, for each queue, which we set if a
_vm_page_list_find*() operation had to go far-afield to find its
page. Subsequent finds will skip to the far-afield position until
the current CPUs queues have pages again.
(3) Reduce PQ_L2_SIZE from an extravagant 2048 entries per queue down
to 1024. The original 2048 was meant to provide 8-way
set-associativity for 256 cores but wound up reducing performance
due to longer index iterations.
* Refactor the vm_page_hash[] array. This array is used to shortcut
vm_object locks and locate VM pages more quickly, without locks.
The new code limits the size of the array to something more reasonable,
implements a 4-way set-associative replacement policy using 'ticks',
and rewrites the hashing math.
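A 4-way set-associative table with a ticks-based replacement policy can be sketched as below. This is a userland model, not the kernel's vm_page_hash[]; the set count, the multiplicative hash constant, and all names are assumptions:

```c
#include <assert.h>
#include <stdint.h>

#define SET_ASSOC	4
#define NSETS		256		/* must be a power of 2 */

struct hash_ent {
	uintptr_t key;			/* 0 == empty slot */
	int	  last_used;		/* 'ticks' at last hit/insert */
};

static struct hash_ent hash_table[NSETS][SET_ASSOC];

/* Multiplicative (Fibonacci-style) hash, taking the high bits. */
static unsigned
hash_mix(uintptr_t key)
{
	uint64_t h = (uint64_t)key * 0x9e3779b97f4a7c15ULL;

	return (unsigned)(h >> 32) & (NSETS - 1);
}

/* Insert: reuse an empty/matching way, else evict the stalest one. */
static void
hash_insert(uintptr_t key, int ticks)
{
	struct hash_ent *set = hash_table[hash_mix(key)];
	struct hash_ent *victim = &set[0];
	int i;

	for (i = 0; i < SET_ASSOC; i++) {
		if (set[i].key == 0 || set[i].key == key) {
			victim = &set[i];
			break;
		}
		if (set[i].last_used < victim->last_used)
			victim = &set[i];
	}
	victim->key = key;
	victim->last_used = ticks;
}

/* Lookup: scan the 4 ways; refresh the timestamp on a hit. */
static int
hash_lookup(uintptr_t key, int ticks)
{
	struct hash_ent *set = hash_table[hash_mix(key)];
	int i;

	for (i = 0; i < SET_ASSOC; i++) {
		if (set[i].key == key) {
			set[i].last_used = ticks;
			return 1;
		}
	}
	return 0;
}
```

On a miss the caller falls back to the locked lookup and inserts the result, evicting the least recently used way of the set.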
* Effectively remove pmap_object_init_pt() for now. In current tests
it does not actually improve performance, probably because it may
map pages that are not actually used by the program.
* Remove vm_map_backing->refs. This field is no longer used.
* Remove more of the old now-stale code related to use of pv_entry's
for terminal PTEs.
* Remove more of the old shared page-table-page code. This worked but
could never be fully validated and was prone to bugs. So remove it.
In the future we will likely use larger 2MB and 1GB pages anyway.
* Remove pmap_softwait()/pmap_softhold()/pmap_softdone().
* Remove more #if 0'd code.
Matthew Dillon [Sat, 18 May 2019 06:18:11 +0000 (23:18 -0700)]
kernel - VM rework part 11 - Core pmap work to remove terminal PVs
* Remove pv_entry_t belonging to terminal PTEs. The pv_entry's for
PT, PD, PDP, and PML4 remain. This reduces kernel memory use for
pv_entry's by 99%.
The pmap code now iterates vm_object->backing_list (of vm_map_backing
structures) to run-down pages for various operations.
* Remove vm_page->pv_list. This was one of the biggest sources of
contention for shared faults. However, in this first attempt I
am leaving all sorts of ref-counting intact so the contention has
not been entirely removed yet.
* Current hacks:
- Dynamic page table page removal currently disabled because the
vm_map_backing scan needs to be able to deterministically
run-down PTE pointers. Removal only occurs at program exit.
- PG_DEVICE_IDX probably isn't being handled properly yet.
- Shared page faults not yet optimized.
* So far minor improvements in performance across the board.
This is relatively unoptimized. The buildkernel test improves
by 2% and the zero-fill fault test improves by around 10%.
Kernel memory use is improved (reduced) enormously.
Matthew Dillon [Fri, 17 May 2019 18:55:14 +0000 (11:55 -0700)]
kernel - VM rework part 10 - Precursor work for terminal pv_entry removal
* Effectively remove pmap_track_modified(). Turn it into an assertion.
The normal pmap code should NEVER EVER be called with any range inside
the clean map.
This assertion, and the routine in its entirety, will be removed in a
later commit.
* The purpose of the original code was to prevent buffer cache kvm mappings
from being misinterpreted as contributing to the underlying vm_page's
modified state. Normal paging operation synchronizes the modified bit and
then transfers responsibility to the buffer cache. We didn't want
manipulation of the buffer cache to further affect the modified bit for
the page.
In modern times, the buffer cache does NOT use a kernel_object based
mapping for anything and there should be no chance of any kernel related
pmap_enter() (entering a managed page into the kernel_pmap) from messing
with the space.
Matthew Dillon [Fri, 17 May 2019 17:03:35 +0000 (10:03 -0700)]
kernel - VM rework part 9 - Precursor work for terminal pv_entry removal
* Cleanup the API a bit
* Get rid of pmap_enter_quick()
* Remove unused procedures.
* Document that vm_page_protect() (and thus the related
pmap_page_protect()) must be called with a hard-busied page. This
ensures that the operation does not race a new pmap_enter() of the page.
Sascha Wildner [Mon, 20 May 2019 19:16:55 +0000 (21:16 +0200)]
<assert.h>: Sync comments a bit with FreeBSD.
Ed Schouten [Sun, 9 Jan 2011 21:39:46 +0000 (21:39 +0000)]
<assert.h>: add missing __dead2 to __assert().
__assert() is called when an assertion fails. After printing an error
message, it will call abort(). abort() never returns, hence it has the
__dead2 attribute. Also add this attribute to __assert().
Taken-from: FreeBSD (r217207)
Submitted-by: Jan Beich
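The effect of the attribute can be seen in a small standalone version; __dead2 expands to GCC's noreturn attribute, and the my_assert_fail()/MY_ASSERT names below are illustrative, not the libc implementation:

```c
#include <stdio.h>
#include <stdlib.h>

/* Marking the failure path as never-returning tells the compiler (and
 * static analyzers) that control does not continue past a failed
 * assertion.  __dead2 in <sys/cdefs.h> expands to this attribute. */
static void my_assert_fail(const char *expr, const char *file, int line)
    __attribute__((__noreturn__));

static void
my_assert_fail(const char *expr, const char *file, int line)
{
	fprintf(stderr, "assertion \"%s\" failed: file \"%s\", line %d\n",
	    expr, file, line);
	abort();		/* never returns, matching the attribute */
}

#define MY_ASSERT(e) \
	((e) ? (void)0 : my_assert_fail(#e, __FILE__, __LINE__))
```

With the attribute in place, code that asserts before using a variable no longer triggers spurious 'may be used uninitialized' or missing-return warnings.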
Sascha Wildner [Mon, 20 May 2019 06:46:31 +0000 (08:46 +0200)]
pam_ftpusers.8: Remove reference to ftpusers.5.
Sascha Wildner [Mon, 20 May 2019 06:36:48 +0000 (08:36 +0200)]
sys/boot: Clean up btxld's manual page.
It is a host tool only and not installed to base.
Sascha Wildner [Mon, 20 May 2019 06:26:53 +0000 (08:26 +0200)]
i386 removal, part 72/x: Remove i386 specific ed.4 manpage references.
This was missing from
09ab7e4ea7d3a5476ab60148ed6fa1b8a0e61b0c.
Sascha Wildner [Mon, 20 May 2019 06:03:06 +0000 (08:03 +0200)]
bsd-family-tree: Sync with FreeBSD (add OpenBSD 6.5).
François Tigeot [Sun, 19 May 2019 16:48:03 +0000 (18:48 +0200)]
drm: Do not report PRIME as supported
This fixes kernel panics with the Ravenports graphics stack
Matthew Dillon [Sat, 18 May 2019 16:31:26 +0000 (09:31 -0700)]
kernel - Remove improper direct user-space access
* chroot_kernel() (a privileged system call) was improperly
calling kprintf() with a direct user address. Just remove
the kprintf().
Reported-by: tdfbsd
Sascha Wildner [Sat, 18 May 2019 15:11:58 +0000 (17:11 +0200)]
kernel: Don't include <sys/user.h> in kernel code.
There is really no point in doing that because its main purpose is to
expose kernel structures to userland. In the majority of cases it wasn't
needed at all, and the rest required only a couple of other includes.
Sascha Wildner [Sat, 18 May 2019 12:53:17 +0000 (14:53 +0200)]
mandoc(1): Use base recallocarray().
Sascha Wildner [Sat, 18 May 2019 12:57:28 +0000 (14:57 +0200)]
Merge branch 'vendor/MDOCML'
Sascha Wildner [Sat, 18 May 2019 12:55:15 +0000 (14:55 +0200)]
Remove the compat recallocarray() on the vendor branch.
Sascha Wildner [Fri, 17 May 2019 14:20:45 +0000 (16:20 +0200)]
makedb: Fix apropos database generation better across release/
The apropos database format used by our new man(1) is different from,
and incompatible with, the one used by our old man(1). The files are
also named differently: mandoc.db (new) and whatis (old).
So it makes no sense to use the old makewhatis on new systems or the
new makewhatis on old systems. If the desired makewhatis does not
exist, then we just don't generate the db, because the build system
doesn't have the makewhatis needed to generate it.
Once installed, the database will be updated regularly as per weekly
periodic.
Matthew Dillon [Fri, 17 May 2019 01:55:32 +0000 (18:55 -0700)]
Revert "kernel - Clean up direction flag on syscall entry"
Actually not needed, the D flag is cleared via the mask
set in MSR_SF_MASK. Revert.
This reverts commit
cea0e49dc0b2e5aea1b929d02f12d00df66528e2.
Matthew Dillon [Fri, 17 May 2019 01:44:28 +0000 (18:44 -0700)]
kernel - Implement support for SMAP and SMEP security (3)
* Issue clac after the push on all traps, interrupts, and
exceptions.
* Improve code documentation.
Matthew Dillon [Fri, 17 May 2019 01:43:20 +0000 (18:43 -0700)]
kernel - Clean up direction flag on syscall entry
* Make sure the direction flag is clear on syscall entry. Don't
trust userland.
Matthew Dillon [Fri, 17 May 2019 00:37:48 +0000 (17:37 -0700)]
kernel - Implement support for SMAP and SMEP security (2)
* Oops. Do the CR4 initialization in the correct place, so it is
applied to all CPUs.
Matthew Dillon [Fri, 17 May 2019 00:14:58 +0000 (17:14 -0700)]
kernel - Implement support for SMAP and SMEP security
* Implement support for SMAP security. This prevents accidental
accesses to user address space from the kernel. When available,
we wrap intentional user-space accesses from the kernel with
the 'stac' and 'clac' instructions.
We use a NOP replacement policy to implement the feature. The wrapper
is initially a 'nop %eax' (3-byte NOP), and is replaced by 'stac' and
'clac' via a .section iteration when the feature is supported.
* Implement support for SMEP security. This prevents accidental
execution of user code from the kernel and simply requires
turning the bit on in CR4.
* Reports support in dmesg via the 'CPU Special Features Installed:'
line.
Matthew Dillon [Thu, 16 May 2019 18:11:35 +0000 (11:11 -0700)]
kernel - Implement retpoline for kernel
* Now that we have gcc-8 operational, we can turn on retpoline (software
spectre protection against the return stack buffer). Turn it on via
-mindirect-branch=thunk-inline
* No discernable performance loss with a generic buildkernel test:
Xeon e5-2620v4 x 2
time make -j 32 nativekernel (all tmpfs)
BEFORE 1717.427u 323.662s 2:28.49 1374.5% 9582+721k 200842+0io 4870pf+0w
BEFORE 1720.130u 338.635s 2:30.21 1370.5% 9555+720k 199720+0io 4804pf+0w
BEFORE 1722.395u 341.508s 2:30.71 1369.4% 9559+720k 199720+0io 4804pf+0w
AFTER 1720.271u 329.492s 2:28.27 1382.4% 9578+721k 200842+0io 4870pf+0w
AFTER 1736.268u 344.874s 2:30.90 1379.1% 9555+720k 199720+0io 4804pf+0w
AFTER 1726.056u 348.324s 2:31.14 1372.4% 9543+719k 199720+0io 4804pf+0w
Sascha Wildner [Wed, 15 May 2019 20:34:17 +0000 (22:34 +0200)]
Don't include "internal" headers outside of regular headers.
Include files like <sys/_timespec.h> and so on contain small parts
such as struct timespec that are supposed to be provided by multiple
regular headers. They should only be included by other headers, not
by *.c files.
None of these was actually needed except for the libtelnet one
(replaced with <stddef.h>).
Sascha Wildner [Wed, 15 May 2019 18:27:38 +0000 (20:27 +0200)]
Update the pciconf(8) database.
May 14, 2019 snapshot from https://pci-ids.ucw.cz
Matthew Dillon [Wed, 15 May 2019 00:33:39 +0000 (17:33 -0700)]
kernel - Add MDS mitigation support for Intel side-channel attack
* Add MDS (Microarchitectural Data Sampling) attack mitigation to
the kernel. This is an attack against Intel CPUs made from 2011
to date. The attack is not currently known to work against AMD CPUs.
With an Intel microcode update the mitigation can be enabled with
sysctl machdep.mds_mitigation=MD_CLEAR
* Without the Intel microcode update, only disabling hyper-threading
gives you any protection. Older architectures might not get
support. If sysctl machdep.mds_support does not show support,
then the currently loaded microcode does not have support for the
feature.
* DragonFlyBSD only supports the MD_CLEAR mode, and it will only
be available with a microcode update from Intel.
Updating the microcode alone does not protect against the attack.
The microcode must be updated AND the mode must be turned on in
DragonFlyBSD to protect against the attack.
This mitigation burns around 250ns of additional latency on kernel->user
transitions (system calls and interrupts primarily). The additional
latency will not be present if the microcode has support but it is disabled
in the kernel, so you should be able to safely update your microcode
even if you do not intend to use the mitigation.
* It is unclear whether the microcode + mitigation completely protects
the machine. The attack is supposedly a sibling hyper-thread
attack and it may be that the only way to completely protect your
machine is to disable hyper-threading entirely. Or buy AMD.
Templated-from: NetBSD
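To make the setting persistent, the sysctl from the commit can go in /etc/sysctl.conf (a sketch; the names are as given above, and the setting only takes effect if machdep.mds_support shows the loaded microcode advertises the feature):

```
# /etc/sysctl.conf -- enable the MD_CLEAR MDS mitigation at boot.
# Check "sysctl machdep.mds_support" first; without microcode support
# this setting has no effect.
machdep.mds_mitigation=MD_CLEAR
```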
Matthew Dillon [Tue, 14 May 2019 03:35:51 +0000 (20:35 -0700)]
kernel - VM rework part 8 - Precursor work for terminal pv_entry removal
* Adjust structures so the pmap code can iterate backing_ba's with
just the vm_object spinlock.
Add a ba.pmap back-pointer.
Move entry->start and entry->end into the ba (ba.start, ba.end).
This is replicative of the base entry->ba.start and entry->ba.end,
but local modifications are locked by individual objects to allow
pmap ops to just look at backing ba's iterated via the object.
Remove the entry->map back-pointer.
Remove the ba.entry_base back-pointer.
* ba.offset is now an absolute offset and not additive. Adjust all code
that calculates and uses ba.offset (fortunately it is all concentrated
in vm_map.c and vm_fault.c).
* Refactor ba.start/offset/end modifications to be atomic with
the necessary spin-locks to allow the pmap code to safely iterate
the vm_map_backing list for a vm_object.
* Test VM system with full synth run.
Peeter Must [Mon, 13 May 2019 09:31:47 +0000 (12:31 +0300)]
kernel/evdev: Synchronize event codes with Linux 4.16
Taken-from: FreeBSD, Linux
Matthew Dillon [Sun, 12 May 2019 16:26:54 +0000 (09:26 -0700)]
rtld-elf - Notify thread state to optimize relocations (2)
* Remove write() prototype in dlfcn.c that was only used for
debugging.
Reminded-by: swildner
Sascha Wildner [Sun, 12 May 2019 16:19:52 +0000 (18:19 +0200)]
pathconf.2/sysconf.3: Add some related references to SEE ALSO.
Matthew Dillon [Sun, 12 May 2019 04:01:55 +0000 (21:01 -0700)]
rtld-elf - Notify thread state to optimize relocations
* Add shims to allow libthread_xu to notify rtld when threading
is being used.
* Requires weak symbols in libc which are overriden by rtld-elf.
* Implement the feature in rtld-elf and use it to avoid making calls
to lwp_gettid(). When threaded, use tls_get_tcb() (which does not
require a system call) instead of lwp_gettid(). When not threaded,
just use a constant.
NOTE: We cannot use tls_get_tcb() unconditionally because the tcb
is not setup during early relocations. So do this whack-a-mole
to make it work.
* This leaves just the sigprocmask wrappers around rtld-elf (which
are needed to prevent stacked relocations from signal handlers).
Poked-by: mjg
Matthew Dillon [Sat, 11 May 2019 05:39:53 +0000 (22:39 -0700)]
kernel - VM rework part 7 - Initial vm_map_backing index
* Implement a TAILQ and hang vm_map_backing structures off
of the related object. This feature is still in progress
and will eventually be used to allow pmaps to manipulate
vm_page's without pv_entry's.
At the same time, remove all sharing of vm_map_backing.
For example, clips no longer share the vm_map_backing. We
can't share the structures if they are being used to
itemize areas for pmap management.
TODO - reoptimize this at some point.
TODO - not yet quite deterministic enough for pmap
searches (due to clips).
* Refactor vm_object_reference_quick() to again allow
operation on any vm_object whose ref_count is already
at least 1, or which belongs to a vnode. The ref_count
is no longer being used for complex vm_object collapse,
shadowing, or migration code.
This allows us to avoid a number of unnecessary token
grabs on objects during clips, shadowing, and forks.
* Cleanup a few fields in vm_object. Name TAILQ_ENTRY()
elements blahblah_entry instead of blahblah_list.
* Fix an issue with a.out binaries (that are still supported but
nobody uses) where the object refs on the binaries were not
being properly accounted for.
Matthew Dillon [Sat, 11 May 2019 01:41:08 +0000 (18:41 -0700)]
kernel - VM rework part 6 - Stabilize
* Fix a case and situations where VPAGETABLE won't work.
Matthew Dillon [Fri, 10 May 2019 18:37:00 +0000 (11:37 -0700)]
kernel - VM rework part 5 - Cleanup
* Cleanup vm_map_entry_shadow()
* Remove (unused) vmspace_president_count()
Remove (barely used) struct lwkt_token typedef.
* Cleanup the vm_map_aux, vm_map_entry, vm_map, and vm_object
structures
* Adjustments to in-code documentation
Sascha Wildner [Sat, 11 May 2019 22:08:34 +0000 (00:08 +0200)]
Clean up some Makefiles.
* WARNS?=6 is usually not needed because upper-level Makefile.inc's
already have it (such as usr.bin/Makefile.inc).
* Remove an unneeded SRCS in ndis_events(8).
Sascha Wildner [Sat, 11 May 2019 21:56:20 +0000 (23:56 +0200)]
libutil: Raise WARNS to 6.
Matthew Dillon [Sat, 11 May 2019 18:39:22 +0000 (11:39 -0700)]
kcollect - Adjust Mops right hand on graph
* Adjust the Mops cap based on ncpus.
Matthew Dillon [Sat, 11 May 2019 16:06:43 +0000 (09:06 -0700)]
kernel - Restore kern.cam.da.X.trim_enabled sysctl
* This sysctl was not always being properly installed due to an
ordering and timing issue.
* The code was not setting the trim flag in the correct structure.
Matthew Dillon [Sat, 11 May 2019 16:04:25 +0000 (09:04 -0700)]
kernel - VM rework (fix introduced bug)
* Fix a null-pointer dereferencing bug in vm_object_madvise() introduced
in recent commits.
Matthew Dillon [Fri, 10 May 2019 01:37:10 +0000 (18:37 -0700)]
kernel - VM rework part 4 - Implement vm_fault_collapse()
* Add the function vm_fault_collapse(). This function simulates
faults to copy all pages from backing objects into the front
object, allowing the backing objects to be disconnected
from the map entry.
This function is called under certain conditions from the
vmspace_fork*() code prior to a fork to potentially collapse
the entry's backing objects into the front object. The
caller then disconnects the backing objects, truncating the
list to a single object (the front object).
This optimization is necessary to prevent the backing_ba list
from growing in an unbounded fashion. In addition, being able
to disconnect the graph allows redundant backing store to
be freed more quickly, reducing memory use.
* Add sysctl vm.map_backing_shadow_test (default enabled).
The vmspace_fork*() code now does a quick all-shadowed test on
the first backing object and calls vm_fault_collapse()
if it comes back true, regardless of the chain length.
* Add sysctl vm.map_backing_limit (default 5).
The vmspace_fork*() code calls vm_fault_collapse() when the
ba.backing_ba list exceeds the specified number of entries.
* Performance is a tad faster than the original collapse
code.