Peter Avalos [Thu, 4 Aug 2016 01:25:04 +0000 (18:25 -0700)]
Remove most local modifications from OpenSSH.
This primarily removes the HPN patches. It's become too cumbersome to
maintain these patches as demonstrated by the fact that we haven't
updated OpenSSH in quite some time. If people want additional
functionality in their OpenSSH, it's available in dports
(security/openssh).
Instead of just silently ignoring removed options in people's
configurations, I decided to treat these as errors so that the admin
will need to either remove them from the configuration or install the
dport to get the functionality back.
Sascha Wildner [Thu, 4 Aug 2016 06:53:34 +0000 (08:53 +0200)]
make upgrade: Remove no longer existing manpages after the OpenSSL upgrade.
Sascha Wildner [Thu, 4 Aug 2016 06:53:21 +0000 (08:53 +0200)]
strftime.3: Remove extra whitespace.
Matthew Dillon [Thu, 4 Aug 2016 02:38:11 +0000 (19:38 -0700)]
kernel - Fix lwp_fork/exit race (2) (vkernel)
* Fix same race as before, in vkernel also.
Peter Avalos [Wed, 3 Aug 2016 20:22:16 +0000 (13:22 -0700)]
libcrypto(3): Set CC variable for Perl scripts.
This detects assembler/compiler capabilities.
Obtained-from: FreeBSD
François Tigeot [Wed, 3 Aug 2016 19:35:34 +0000 (21:35 +0200)]
drm/linux: kernel_ulong_t is defined in linux/mod_devicetable.h
Peter Avalos [Wed, 3 Aug 2016 19:16:05 +0000 (12:16 -0700)]
Fix typo.
Peter Avalos [Wed, 3 Aug 2016 19:11:29 +0000 (12:11 -0700)]
libcrypto(3): Remove some cruft from when we supported 32-bit.
Peter Avalos [Wed, 3 Aug 2016 19:05:00 +0000 (12:05 -0700)]
Add IDEA support for libcrypto(3).
The patent expired years ago.
Sascha Wildner [Wed, 3 Aug 2016 18:09:42 +0000 (20:09 +0200)]
vkernel: Add a simple pagezero() macro (unbreaks build).
Peter Avalos [Wed, 3 Aug 2016 09:52:16 +0000 (02:52 -0700)]
Update files for OpenSSL-1.0.2h import.
Make the jump to 1.0.2, because support for 1.0.1 ends at the end of
this year.
Peter Avalos [Wed, 3 Aug 2016 08:18:34 +0000 (01:18 -0700)]
Merge branch 'vendor/OPENSSL'
Peter Avalos [Wed, 3 Aug 2016 08:02:32 +0000 (01:02 -0700)]
Import OpenSSL-1.0.2h.
Matthew Dillon [Wed, 3 Aug 2016 05:28:54 +0000 (22:28 -0700)]
kernel - Fix lwp_fork/exit race
* In a multi-threaded program it is possible for the exit sequence to
deadlock if one thread is trying to exit (exit the entire process)
while another thread is simultaneously creating a new thread.
* Fix the issue by having the new thread check for the exit condition and
send a SIGKILL to itself, and kprintf() a message when it happens.
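The shape of the fix can be sketched in userland with pthreads (all names here are illustrative, not the kernel's): a newly created thread re-checks a process-wide exit flag before doing any work, where the kernel version instead sends SIGKILL to itself.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

static atomic_int proc_exiting;   /* set when the process starts exiting */
static atomic_int worker_bailed;  /* records that the new thread gave up */

/* Entry point for a newly forked thread: re-check the process-wide
 * exit condition before proceeding, mirroring the kernel fix. */
static void *new_thread(void *arg)
{
    (void)arg;
    if (atomic_load(&proc_exiting)) {
        atomic_store(&worker_bailed, 1);
        return NULL;
    }
    /* ... normal thread body would run here ... */
    return NULL;
}
```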
Sascha Wildner [Wed, 3 Aug 2016 03:39:50 +0000 (05:39 +0200)]
librt/aio: #ifndef notyet -> #if 0 /* not yet */
Sascha Wildner [Wed, 3 Aug 2016 02:59:16 +0000 (04:59 +0200)]
Adjust a couple of manual pages to the recent header changes.
Sascha Wildner [Wed, 3 Aug 2016 02:48:08 +0000 (04:48 +0200)]
<sys/types.h>: Use __BSD_VISIBLE instead of !_POSIX_SOURCE.
Sascha Wildner [Wed, 3 Aug 2016 02:47:13 +0000 (04:47 +0200)]
<sys/stat.h>: Clean up the POSIX namespace.
* Use __BSD_VISIBLE instead of !_POSIX_SOURCE.
* Reduce visibility of some BSD specific stuff.
* Expand visibility of fchmod() and lstat().
Sascha Wildner [Wed, 3 Aug 2016 02:46:23 +0000 (04:46 +0200)]
<sys/aio.h>: Remove unneeded includes (cleans up namespace).
Sascha Wildner [Wed, 3 Aug 2016 02:45:14 +0000 (04:45 +0200)]
<sys/ipc.h>: Some POSIX adjustments.
* Use standard types.
* Reduce visibility of some BSD specific stuff.
* While here, add missing parentheses around a macro argument.
Sascha Wildner [Wed, 3 Aug 2016 02:43:40 +0000 (04:43 +0200)]
<sys/shm.h>: Some POSIX adjustments.
* Define pid_t, size_t and time_t as required.
* Put BSD specific stuff under __BSD_VISIBLE.
Sascha Wildner [Wed, 3 Aug 2016 02:42:14 +0000 (04:42 +0200)]
<sys/sem.h>: Some POSIX adjustments.
* Define pid_t, size_t and time_t as required.
* Put BSD specific stuff under __BSD_VISIBLE.
* Use standard types.
Sascha Wildner [Wed, 3 Aug 2016 02:41:14 +0000 (04:41 +0200)]
<sys/msg.h>: Some POSIX adjustments.
* Define msglen_t and msgqnum_t and use them (no size change).
* Define pid_t, size_t, ssize_t and time_t as required.
* Put BSD specific stuff under __BSD_VISIBLE.
Sascha Wildner [Wed, 3 Aug 2016 02:39:47 +0000 (04:39 +0200)]
<netinet/tcp.h>: Clean up the POSIX namespace a bit.
Sascha Wildner [Wed, 3 Aug 2016 02:36:12 +0000 (04:36 +0200)]
Clean up whitespace in a few headers (no functional change).
<netinet/tcp.h>
<sys/aio.h>
<sys/ipc.h>
<sys/msg.h>
<sys/sem.h>
<sys/shm.h>
<sys/stat.h>
<sys/types.h>
In preparation for namespace cleanup.
Matthew Dillon [Wed, 3 Aug 2016 00:41:08 +0000 (17:41 -0700)]
kernel - Remove PG_ZERO and zeroidle (page-zeroing) entirely
* Remove the PG_ZERO flag and remove all page-zeroing optimizations,
entirely. After doing a substantial amount of testing, these
optimizations, which existed all the way back to CSRG BSD, no longer
provide any benefit on a modern system.
- Pre-zeroing a page only takes 80ns on a modern cpu. vm_fault overhead
in general is at least ~1 microsecond.
- Pre-zeroing a page leads to a cold-cache case on use, forcing the fault
source (e.g. a userland program) to actually get the data from main
memory in its likely immediate use of the faulted page, reducing
performance.
- Zeroing the page at fault-time is actually more optimal because it does
not require any reading of dynamic ram and leaves the cache hot.
- Multiple synth and build tests show that active idle-time zeroing of
pages actually reduces performance somewhat and incidental allocations
of already-zeroed pages (from page-table tear-downs) do not affect
performance in any meaningful way.
* Remove bcopyi() and obbcopy() -> collapse into bcopy(). These other
versions existed because bcopy() used to be specially-optimized and
could not be used in all situations. That is no longer true.
* Remove bcopy function pointer argument to m_devget(). It is no longer
used. This function existed to help support ancient drivers which might
have needed a special memory copy to read and write mapped data. It has
long been supplanted by BUSDMA.
Matthew Dillon [Mon, 1 Aug 2016 20:03:52 +0000 (13:03 -0700)]
kernel - Cleanup vm_page_pcpu_cache()
* Remove the empty vm_page_pcpu_cache() function and related call. Page
affinity is handled by the vm_page_queues[] array now.
François Tigeot [Sun, 31 Jul 2016 15:45:29 +0000 (17:45 +0200)]
drm/linux: Add linux/kobject.h
Matthew Dillon [Sun, 31 Jul 2016 03:40:06 +0000 (20:40 -0700)]
kernel - Refactor cpu localization for VM page allocations (3)
* Instead of iterating the cpus in the mask starting at cpu #0, iterate
starting at mycpu to the end, then from 0 to mycpu - 1.
This fixes random masked wakeups from favoring lower-numbered cpus.
* The user process scheduler (usched_dfly) was favoring lower-numbered
cpus due to a bug in the simple selection algorithm, causing forked
processes to initially weight improperly. A high fork or fork/exec
rate skewed the way the cpus were loaded.
Fix this by correctly scanning cpus from the (scancpu) rover.
* For now, use a random 'previous' affinity for initially scheduling a
fork.
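The rotated scan described above can be reduced to one line of index arithmetic: visit mycpu..ncpus-1 first, then 0..mycpu-1, so no starting point systematically favors low-numbered cpus. A minimal sketch (names illustrative):

```c
/* i'th cpu visited when scanning ncpus cpus starting at mycpu:
 * mycpu, mycpu+1, ..., ncpus-1, then 0, 1, ..., mycpu-1. */
static int scan_cpu(int mycpu, int ncpus, int i)
{
    return (mycpu + i) % ncpus;
}
```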
François Tigeot [Sun, 31 Jul 2016 06:06:36 +0000 (08:06 +0200)]
drm/linux: Add vmap()
Sascha Wildner [Sat, 30 Jul 2016 20:19:24 +0000 (22:19 +0200)]
Sync ACPICA with Intel's version
20160729.
* Restructured and standardized the C library configuration for
ACPICA.
* AML interpreter: Allows for execution of so-called "executable"
AML code outside of control methods, not just at the module level
(top level) but also within any scope declared outside of a
control method - Scope{}, Device{}, Processor{}, PowerResource{},
and ThermalZone{}. Lv Zheng.
* iASL: Add full support for the RASF ACPI table (RAS Features Table).
* iASL: Allows for compilation/disassembly of so-called "executable"
AML code (see above).
For a more detailed list, please see sys/contrib/dev/acpica/changes.txt.
Matthew Dillon [Sat, 30 Jul 2016 19:30:24 +0000 (12:30 -0700)]
kernel - Refactor cpu localization for VM page allocations (2)
* Finish up the refactoring. Localize backoffs for search failures
by doing a masked domain search. This avoids bleeding into non-local
page queues until we've completely exhausted our local queues,
regardless of the starting pg_color index.
* We try to maintain 16-way set associativity for VM page allocations
even if the topology does not allow us to do it perfectly. So, for
example, a 4-socket x 12-core (48-core) opteron can break the 256
queues into 4 x 64 queues, then split the 12-cores per socket into
sets of 3 giving 16 queues (the minimum) to each set of 3 cores.
* Refactor the page-zeroing code to only check the localized area.
This fixes a number of issues related to the zeroed pages in the
queues winding up severely unbalanced. Other cpus in the local
group can help replenish a particular cpu's pre-zeroed pages but
we intentionally allow a heavy user to exhaust the pages.
* Adjust the cpu topology code to normalize the physical package id.
Some machines start at 1, some machines start at 0. Normalize
everything to start at 0.
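The 16-way split worked out in the example above is straight integer division; a small sketch using the commit's own numbers (256 queues, 4 sockets, 12 cores per socket in sets of 3):

```c
/* Queues handed to each core-set under the scheme described above:
 * total queues are divided by socket, then by the number of core-sets
 * in each socket. */
static int queues_per_set(int total_queues, int sockets,
                          int cores_per_socket, int cores_per_set)
{
    int per_socket = total_queues / sockets;      /* 256 / 4 = 64 */
    int sets = cores_per_socket / cores_per_set;  /* 12 / 3  = 4  */
    return per_socket / sets;                     /* 64 / 4  = 16 */
}
```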
Matthew Dillon [Sat, 30 Jul 2016 19:27:09 +0000 (12:27 -0700)]
kernel - cleanup vfs_cache debugging
* Remove the deep namecache recursion warning, we've taken care of it
properly for a while now so we don't need to know when it happens any
more.
* Augment the cache_inval_internal warnings with more information.
Imre Vadász [Sat, 30 Jul 2016 10:32:26 +0000 (12:32 +0200)]
if_iwm - Fix iwm_poll_bit() usage in iwm_stop_device().
* The iwm(4) iwm_poll_bit() returns 1 on success and 0 on failure,
whereas iwl_poll_bit() in Linux iwlwifi returns >= 0 on success and
< 0 on failure.
Matthew Dillon [Sat, 30 Jul 2016 00:03:22 +0000 (17:03 -0700)]
kernel - Refactor cpu localization for VM page allocations
* Change how cpu localization works. The old scheme was extremely unbalanced
in terms of vm_page_queue[] load.
The new scheme uses cpu topology information to break the vm_page_queue[]
down into major blocks based on the physical package id, minor blocks
based on the core id in each physical package, and then by 1's based on
(pindex + object->pg_color).
If PQ_L2_SIZE is not big enough such that 16-way operation is attainable
by physical and core id, we break the queue down only by physical id.
Note that the core id is a real core count, not a cpu thread count, so
an 8-core/16-thread x 2 socket xeon system will just fit in the 16-way
requirement (there are 256 PQ_FREE queues).
* When a particular queue does not have a free page, iterate nearby queues
starting at +/- 1 (before we started at +/- PQ_L2_SIZE/2), in an attempt to
retain as much locality as possible. This won't be perfect but it should
be good enough.
* Also fix an issue with the idlezero counters.
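The +/- 1 probe order can be sketched as a small index generator (names illustrative): when queue q is empty, try q+1, q-1, q+2, q-2, ... modulo the queue count, keeping the search as local as possible.

```c
/* i'th queue probed when queue q comes up empty: q, q+1, q-1, q+2,
 * q-2, ... (modulo nqueues), replacing the old +/- PQ_L2_SIZE/2 jumps. */
static int nearby_queue(int q, int nqueues, int i)
{
    int off = (i == 0) ? 0 : ((i & 1) ? (i + 1) / 2 : -(i / 2));
    /* double-mod keeps the result non-negative for negative offsets */
    return ((q + off) % nqueues + nqueues) % nqueues;
}
```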
Matthew Dillon [Fri, 29 Jul 2016 21:59:15 +0000 (14:59 -0700)]
systat - Adjust extended vmstats display
* When the number of devices is small enough (or you explicitly specify
just a few disk devices, or one), there is enough room for the
extended vmstats display. Make some adjustments to this display.
* Display values in bytes (K, M, G, etc) instead of pages like the other
fields.
* Rename zfod to nzfod and subtract-away ozfod when displaying nzfod
(only in the extended display), so the viewer doesn't have to do the
subtraction in his head.
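The two display changes amount to simple arithmetic, sketched below (assuming 4 KB pages; names illustrative): subtract ozfod once so the viewer doesn't have to, and scale pages to bytes for the K/M/G columns.

```c
/* nzfod as displayed: total zfod minus the optimized (ozfod) portion. */
static long nzfod_pages(long zfod, long ozfod)
{
    return zfod - ozfod;
}

/* Pages scaled to bytes for the byte-based columns (4 KB pages assumed). */
static long pages_to_bytes(long pages)
{
    return pages * 4096;
}
```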
Matthew Dillon [Fri, 29 Jul 2016 20:29:03 +0000 (13:29 -0700)]
kernel - Reduce memory testing and early-boot zeroing.
* Reduce the amount of memory testing and early-boot zeroing that
we do, improving boot times on systems with large amounts of memory.
* Fix race in the page zeroing count.
* Refactor the VM zeroidle code. Instead of having just one kernel thread,
have one on each cpu.
This significantly increases the rate at which the machine can eat up
idle cycles to pre-zero pages in the cold path, improving performance
in the hot-path (normal) page allocations which request zeroed pages.
* On systems with a lot of cpus there is usually a little idle time (e.g.
0.1%) on a few of the cpus, even under extreme loads. At the same time,
such loads might also imply a lot of zfod faults requiring zeroed pages.
On our 48-core opteron we see a zfod rate of 1.0 to 1.5 GBytes/sec and
a page-freeing rate of 1.3 - 2.5 GBytes/sec. Distributing the page
zeroing code and eating up these minuscule bits of idle improves the
kernel's ability to provide a pre-zeroed page (vs having to zero it in
the hot path) significantly.
Under the synth test load the kernel was still able to provide 400-700
MBytes/sec worth of pre-zeroed pages whereas before this change the kernel
was only able to provide 20 MBytes/sec worth of pre-zeroed pages.
Matthew Dillon [Fri, 29 Jul 2016 17:22:53 +0000 (10:22 -0700)]
kernel - Cleanup namecache stall messages on console
* Report the proper elapsed time and also include td->td_comm
in the printed output on the console.
Matthew Dillon [Fri, 29 Jul 2016 17:02:50 +0000 (10:02 -0700)]
kernel - Fix rare tsleep/callout race
* Fix a rare tsleep/callout race. The callout timer can trigger before
the tsleep() releases its lwp_token (or if someone else holds the
calling thread's lwp_token).
This case was detected, but the code failed to adjust lwp_stat before
descheduling and switching away. This resulted in an endless sleep.
zrj [Fri, 29 Jul 2016 07:12:39 +0000 (10:12 +0300)]
mktemp.3: Improve the manpage, add mklinks.
Fix SYNOPSIS, remove outdated information and clarify availability.
Taken-from: FreeBSD
zrj [Fri, 29 Jul 2016 07:27:22 +0000 (10:27 +0300)]
mdoc.local: Add DragonFly 4.6 for future reference.
Sepherosa Ziehau [Fri, 29 Jul 2016 08:56:10 +0000 (16:56 +0800)]
hyperv/vmbus: Passthrough interrupt resource allocation to nexus
This greatly simplifies interrupt allocation, and re-enables the
"interrupt resource not found" warning in acpi.
Matthew Dillon [Fri, 29 Jul 2016 01:05:42 +0000 (18:05 -0700)]
libthread_xu - Don't override vfork()
* Allow vfork() to operate normally in a threaded environment. The kernel
can handle multiple concurrent vfork()s by different threads (only the
calling thread blocks, same as how Linux deals with it).
Sascha Wildner [Thu, 28 Jul 2016 20:16:33 +0000 (22:16 +0200)]
mktemp.3: Fix a typo and bump .Dd
Matthew Dillon [Thu, 28 Jul 2016 17:12:39 +0000 (10:12 -0700)]
kernel - Be nicer to pthreads in vfork()
* When vfork()ing, give the new sub-process's lwp the same TID as the one
that called vfork(). Even though user processes are not supposed to do
anything sophisticated inside a vfork() prior to exec()ing, some things
such as fileno() having to lock in a threaded environment might not be
apparent to the programmer.
* By giving the sub-process the same TID, operations done inside the
vfork() prior to exec that interact with pthreads will not confuse
pthreads and cause corruption due to e.g. TID 0 clashing with TID 0
running in the parent that is running concurrently.
Sascha Wildner [Thu, 28 Jul 2016 17:10:40 +0000 (19:10 +0200)]
ed(1): Sync with FreeBSD.
Sascha Wildner [Thu, 28 Jul 2016 17:18:46 +0000 (19:18 +0200)]
ed(1): Remove handling of non-POSIX environment.
Matthew Dillon [Thu, 28 Jul 2016 17:03:08 +0000 (10:03 -0700)]
libc - Fix more popen() issues
* Fix a file descriptor leak between popen() and pclose() in a threaded
environment. The control structure is removed from the list, then the
list is unlocked, then the file is closed. This can race a popen
in between the unlock and the closure.
* Do not use fileno() inside vfork, it is a complex function in a threaded
environment which could lead to corruption since the vfork()'s lwp id may
clash with one from the parent process.
Matthew Dillon [Thu, 28 Jul 2016 16:39:57 +0000 (09:39 -0700)]
kernel - Fix getpid() issue in vfork() when threaded
* upmap->invfork was a 0 or 1, but in a threaded program it is possible
for multiple threads to be in vfork() at the same time. Change invfork
to a count.
* Fixes improper getpid() return when concurrent vfork()s are occurring in
a threaded program.
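The flag-to-count change can be sketched in a few lines (names illustrative): with a count, the vfork state only clears when the last thread leaves, whereas a 0/1 flag would be wiped by the first thread to finish its vfork().

```c
/* invfork as a count rather than a 0/1 flag: supports multiple threads
 * sitting in vfork() at the same time. */
static int invfork_count;

static void vfork_enter(void) { invfork_count++; }
static void vfork_leave(void) { invfork_count--; }
static int  in_vfork(void)    { return invfork_count > 0; }
```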
François Tigeot [Thu, 28 Jul 2016 06:56:12 +0000 (08:56 +0200)]
drm/linux: Clean-up pci_resource_start()
Making it less verbose
Matthew Dillon [Wed, 27 Jul 2016 23:22:11 +0000 (16:22 -0700)]
systat - Restrict %rip sampling to root
* Only allow root to sample the %rip and %rsp on all cpus. The sysctl will
not sample and return 0 for these fields if the uid is not root.
This is for security, as %rip sampling can be used to break cryptographic
keys.
* systat -pv 1 will not display the sampling columns if the sample value
is 0.
Matthew Dillon [Wed, 27 Jul 2016 18:22:56 +0000 (11:22 -0700)]
test - Add umtx1 code
* Add umtx1 code - fast context switch tests
* Make blib.c thread-safe.
Matthew Dillon [Wed, 27 Jul 2016 18:13:44 +0000 (11:13 -0700)]
libc - Fix numerous fork/exec*() leaks, also add mkostemp() and mkostemps().
* Use O_CLOEXEC in many places to prevent temporary descriptors from leaking
into fork/exec'd code (e.g. in multi-threaded situations).
* Note that the popen code will close any other popen()'d descriptors in
the child process that it forks just prior to exec. However, there was
a descriptor leak where another thread issuing popen() at the same time
could leak the descriptors into their exec.
Use O_CLOEXEC to close this hole.
* popen() now accepts the 'e' flag (i.e. "re") to retain O_CLOEXEC in the
returned descriptor. Normal "r" (etc) will clear O_CLOEXEC in the returned
descriptor.
Note that normal "r" modes are still fine for most use cases since popen
properly closes other popen()d descriptors in the fork(). BUT!! If the
threaded program calls exec*() in other ways, such descriptors may
unintentionally be passed onto sub-processes. So consider using "re".
* Add mkostemp() and mkostemps() to allow O_CLOEXEC to be passed in,
closing a thread race that would otherwise leak the temporary descriptor
into other fork/exec()s.
Taken-from: FreeBSD (mostly)
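A minimal use of the new mkostemp() interface (the template path below is just an example): passing O_CLOEXEC at creation sets FD_CLOEXEC atomically, with no racy fcntl() window in which a concurrent fork/exec could inherit the descriptor.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Create a temporary file whose descriptor has close-on-exec set from
 * the start, so it cannot leak into a concurrently exec'd child. */
static int make_private_temp(char *templ)
{
    return mkostemp(templ, O_CLOEXEC);
}
```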
Matthew Dillon [Tue, 26 Jul 2016 23:24:14 +0000 (16:24 -0700)]
kernel - Disable lwp->lwp optimization in thread switcher
* Put #ifdef around the existing lwp->lwp switch optimization and then
disable it. This optimization tries to avoid reloading %cr3 and avoid
pmap->pm_active atomic ops when switching to a lwp that shares the same
process.
This optimization is no longer applicable on multi-core systems as such
switches are very rare. LWPs are usually distributed across multiple cores
so rarely does one switch to another on the same core (and in cpu-bound
situations, the scheduler will already be in batch mode). The conditionals
in the optimization, on the other hand, did measurably (just slightly)
reduce performance for normal switches. So turn it off.
* Implement an optimization for interrupt preemptions, but disable it for
now. I want to keep the code handy but so far my tests show no improvement
in performance with huge interrupt rates (from nvme devices), so it is
#undef'd for now.
Matthew Dillon [Tue, 26 Jul 2016 20:12:51 +0000 (13:12 -0700)]
kernel - Minor cleanup swtch.s
* Minor cleanup
Matthew Dillon [Tue, 26 Jul 2016 20:01:27 +0000 (13:01 -0700)]
kernel - Fix namecache race & panic
* Properly lock and re-check the parent association when iterating its
children, fixing a bug in a code path associated with unmounting
filesystems.
The code improperly assumed that there could be no races because there
were no accessors left. In fact, under heavy loads, the namecache
scan in this routine can race against the negative-name-cache management
code.
* Generally speaking, this can only happen when lots of mounts and unmounts are
done under heavy loads (for example, tmpfs mounts during a poudriere or
synth run).
Matthew Dillon [Tue, 26 Jul 2016 19:56:31 +0000 (12:56 -0700)]
kernel - Reduce atomic ops in switch code
* Instead of using four atomic 'and' ops and four atomic 'or' ops, use
one atomic 'and' and one atomic 'or' when adjusting the pmap->pm_active.
* Store the array index and simplified cpu mask in the globaldata structure
for the above operation.
Matthew Dillon [Tue, 26 Jul 2016 19:53:39 +0000 (12:53 -0700)]
kernel - refactor CPUMASK_ADDR()
* Refactor CPUMASK_ADDR(), removing the conditionals and just indexing the
array as appropriate.
Matthew Dillon [Tue, 26 Jul 2016 00:06:52 +0000 (17:06 -0700)]
kernel - Fix VM bug introduced earlier this month
* Adding the yields to the VM page teardown and related code was a great
idea (~Jul 10th commits), but it also introduced a bug where the page
could get torn-out from under the scan due to the vm_object's token being
temporarily lost.
* Re-check page object ownership and (when applicable) its pindex before
acting on the page.
Matthew Dillon [Mon, 25 Jul 2016 23:05:40 +0000 (16:05 -0700)]
systat - Refactor memory displays for systat -vm
* Report paging and swap activity in bytes and I/Os instead of pages and
I/Os (I/Os usually matched pages).
* Report zfod and cow in bytes instead of pages.
* Replace the REAL and VIRTUAL section with something that makes a bit
more sense.
Report active memory (this is just active pages), kernel memory
(currently just wired but we can add more stuff later), Free
(inactive + cache + free is considered free/freeable memory), and
total system memory as reported at boot time.
Report total RSS - basically how many pages the system is mapping to
user processes. Due to sharing this can be a large value.
Do not try to report aggregate VSZ as there's no point in doing so
any more.
Report swap usage on the main -vm display as well as total swap
allocated.
* Fix display bug in systat -sw display.
* Add "nvme" device type match for the disk display.
Imre Vadász [Sun, 24 Jul 2016 19:11:29 +0000 (21:11 +0200)]
if_iwm - Fix inverted logic in iwm_tx().
The PROT_REQUIRE flag should be set for data frames above a certain
length, but we were setting it for !data frames above a certain length,
which makes no sense at all.
Taken-From: OpenBSD, Linux iwlwifi
Matthew Dillon [Mon, 25 Jul 2016 18:31:04 +0000 (11:31 -0700)]
kernel - Fix mountctl() / unmount race
* kern_mountctl() now properly checks to see if an unmount is in-progress
and returns an error, fixing a later panic.
Sascha Wildner [Mon, 25 Jul 2016 19:46:01 +0000 (21:46 +0200)]
sysconf.3: Fix typo.
Sascha Wildner [Mon, 25 Jul 2016 18:43:03 +0000 (20:43 +0200)]
libc/strptime: Return NULL, not 0, since the function returns char *.
While here, accept 'UTC' for %Z as well.
Taken-from: FreeBSD
Matthew Dillon [Mon, 25 Jul 2016 18:18:57 +0000 (11:18 -0700)]
mountd, mount - Change how mount signals mountd, reduce mountd spam
* mount now signals mountd with SIGUSR1 instead of SIGHUP.
* mountd now recognizes SIGUSR1 as requesting an incremental update.
Instead of wiping all exports on all mounts and then re-scanning
the exports file and re-adding from the exports file, mountd will
now only wipe the export(s) on mounts it finds in the exports file.
* Greatly reduces unnecessary mountlist scans and commands due to
mount_null and mount_tmpfs operations, while still preserving our
ability to export such filesystems.
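The new signal protocol can be sketched with plain handlers (logic illustrative, not mountd's actual code): SIGHUP keeps meaning "full reload", while SIGUSR1 requests only the incremental update described above.

```c
#include <signal.h>

static volatile sig_atomic_t want_full_reload;   /* SIGHUP: old behavior  */
static volatile sig_atomic_t want_incremental;   /* SIGUSR1: new behavior */

static void on_sighup(int sig)  { (void)sig; want_full_reload = 1; }
static void on_sigusr1(int sig) { (void)sig; want_incremental = 1; }

/* Install both handlers; mount(8) now sends SIGUSR1 for the cheap path. */
static void install_mountd_signals(void)
{
    signal(SIGHUP, on_sighup);
    signal(SIGUSR1, on_sigusr1);
}
```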
Matthew Dillon [Mon, 25 Jul 2016 04:55:00 +0000 (21:55 -0700)]
kernel - Close a few SMP holes
* Don't trust the compiler when loading refs in cache_zap(). Make sure
it doesn't reorder or re-use the memory reference.
* In cache_nlookup() and cache_nlookup_maybe_shared(), do a full re-test
of the namecache element after locking instead of a partial re-test.
* Lock the namecache record in two situations where we need to set a
flag. Almost all other flag cases require similar locking. This fixes
a potential SMP race in a very thin window during mounting.
* Fix unmount / access races in sys_vquotactl() and, more importantly, in
sys_mount(). We were disposing of the namecache record after extracting
the mount pointer, then using the mount pointer. This could race an
unmount and result in a corrupt mount pointer.
Change the code to dispose of the namecache record after we finish using
the mount point. This is somewhat more complex than I'd like, but it
is important to unlock the namecache record across the potentially
blocking operation to prevent a lock chain from propagating upwards
towards the root.
* Enhanced debugging for the namecache teardown case when nc_refs changes
unexpectedly.
* Remove some dead code (cache_purgevfs()).
Matthew Dillon [Mon, 25 Jul 2016 04:52:26 +0000 (21:52 -0700)]
kernel - Cut buffer cache related pmap invalidations in half
* Do not bother to invalidate the TLB when tearing down a buffer
cache buffer. On the flip side, always invalidate the TLB
(the page range in question) when entering pages into a buffer
cache buffer. Only applicable to normal VMIO buffers.
* Significantly improves buffer cache / filesystem performance with
no real risk.
* Significantly improves performance for tmpfs teardowns on unmount
(which typically have to tear-down a lot of buffer cache buffers).
Matthew Dillon [Mon, 25 Jul 2016 04:49:57 +0000 (21:49 -0700)]
kernel - Add some more options for pmap_qremove*()
* Add pmap_qremove_quick() and pmap_qremove_noinval(), allowing pmap
entries to be removed without invalidation under carefully managed
circumstances by other subsystems.
* Redo the virtual kernel a little to work the same as the real kernel
when entering new pmap entries. We cannot assume that no invalidation
is needed when the prior contents of the pte are 0, because there are
several ways it could have become 0 without a prior invalidation.
Also use an atomic op to clear the entry.
Matthew Dillon [Mon, 25 Jul 2016 04:44:33 +0000 (21:44 -0700)]
kernel - cli interlock with critcount in interrupt assembly
* Disable interrupts when decrementing the critical section count
and gd_intr_nesting_level, just prior to jumping into doreti.
This prevents a stacking interrupt from occurring in this roughly
10-instruction window.
* While limited stacking is not really a problem, this closes a very
small and unlikely window where multiple device interrupts could
stack excessively and run the kernel thread out of stack space.
(unlikely that it has ever happened in real life, but becoming more
likely as some modern devices are capable of much higher interrupt
rates).
Sascha Wildner [Sun, 24 Jul 2016 22:45:46 +0000 (00:45 +0200)]
sysconf.3: Document _SC_PAGE_SIZE and _SC_PHYS_PAGES.
Taken-from: FreeBSD
Submitted-by: Sevan Janiyan
Dragonfly-bug: <https://bugs.dragonflybsd.org/issues/2929>
Matthew Dillon [Sun, 24 Jul 2016 21:02:10 +0000 (14:02 -0700)]
drm - Fix subtle plane masking bug.
* Index needs to be 1 << index.
Reported-by: davshao
Found-by: Matt Roper - https://patchwork.kernel.org/patch/7889051/
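The bug fits in one line: a plane mask must carry bit (1 << index), whereas assigning the raw index value yields the wrong bits, and for plane 0 even an empty mask.

```c
/* Correct contribution of plane `index` to a plane mask. The buggy
 * form used the raw index value instead of shifting. */
static unsigned plane_bit(unsigned index)
{
    return 1u << index;
}
```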
zrj [Wed, 20 Jul 2016 16:59:28 +0000 (19:59 +0300)]
cpumask.9: Add short manpage.
zrj [Tue, 19 Jul 2016 16:35:16 +0000 (19:35 +0300)]
cpumask.h: Turn CPUMASK_ELEMENTS as implementation defined.
No functional change intended.
zrj [Tue, 19 Jul 2016 07:07:45 +0000 (10:07 +0300)]
sys: Extract CPUMASK macros to new <machine/cpumask.h>
There are already enough CPUMASK macros for them to warrant their own header.
So far the only userspace users are powerd(8), usched(8) and kern_usched.c (VKERNEL64).
After the recent change exposing kernel-internal CPUMASK macros, they became
available to userland code even through the <time.h> header. It is better to avoid that.
This also reduces POSIX namespace pollution and keeps the cpu/types.h header slim.
For now, leave CPUMASK_ELEMENTS (not sure about the ASSYM() macro handling the _ prefix)
and the cpumask_t typedef (a forward declaration of struct cpumask would be better in prototypes).
Matthew Dillon [Sun, 24 Jul 2016 07:56:04 +0000 (00:56 -0700)]
kernel - Fix atomic op comparison
* The sequence was testing a signed integer and then testing the same
variable using atomic_fetchadd_int(&var, 0). Unfortunately, the
atomic-op returns an unsigned value so the result is that when the
buffer count was exhausted, the program would hard-loop without
calling tsleep.
* Fixed by casting the atomic op.
* Should fix the hardlock issue once and for all.
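The pitfall generalizes: comparing an unsigned return value against 0 with `<=` silently becomes an unsigned comparison, so negative counts wrap to huge positive values and the test never fires. A stand-in with the same wart as atomic_fetchadd_int() (stub only, not the kernel primitive):

```c
/* Mimics the interface shape of atomic_fetchadd_int(): returns the old
 * value of *var as unsigned, then applies the addend. Not atomic --
 * this stub only demonstrates the signedness pitfall. */
static unsigned fetchadd_stub(int *var, int add)
{
    unsigned old = (unsigned)*var;
    *var += add;
    return old;
}
```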
Matthew Dillon [Sun, 24 Jul 2016 02:19:46 +0000 (19:19 -0700)]
kernel - Refactor Xinvltlb a little, turn off the idle-thread invltlb opt
* Turn off the idle-thread invltlb optimization. This feature can be
turned on with a sysctl (default-off) machdep.optimized_invltlb. It
will be turned on by default when we've life-tested that it works
properly.
* Remove excess critical sections and interrupt disablements. All entries
into smp_invlpg() now occur with interrupts already disabled and the
thread already in a critical section. This also defers critical-section
1->0 transition handling away from smp_invlpg() and into its caller.
* Refactor the Xinvltlb APIs a bit. Have Xinvltlb enter the critical
section (it didn't before). Remove the critical section from
smp_inval_intr(). The critical section is now handled by the assembly,
and by any other callers.
* Add additional tsc-based loop/counter debugging to try to catch problems.
* Move inner-loop handling of smp_invltlb_mask to act on invltlbs a little
faster.
* Disable interrupts a little later inside pmap_inval_smp() and
pmap_inval_smp_cmpset().
Matthew Dillon [Sun, 24 Jul 2016 02:17:24 +0000 (19:17 -0700)]
hammer - remove commented out code, move a biodone()
* Remove commented-out code which is no longer applicable.
* Move the biodone() call in hammer_io_direct_write_complete() to after
the token-release, reducing stacking of tokens in biodone().
Matthew Dillon [Sun, 24 Jul 2016 02:09:26 +0000 (19:09 -0700)]
hammer - Try to fix improper DATA CRC error
* Under heavy I/O loads HAMMER has an optimization (similar to UFS) where
the logical buffer is used to issue a write to the underlying device,
rather than copying the logical buffer to a device buffer. This
optimization is earmarked by a hammer2_record.
* If the logical buffer is discarded just after it is written, and then
re-read, hammer may go through a path which calls
hammer_ip_resolve_data(). This code failed to check whether the record
was still in-progress, and in fact the write to the device may not have
even been initiated yet, and there could also have been a device buffer
alias in the buffer cache for the device for the offset.
This caused the followup read to access the wrong data, causing HAMMER
to report a DATA CRC error. The actual media receives the correct data
eventually and a umount/remount would show an uncorrupted file.
* Try to fix the problem by calling hammer_io_direct_wait() on the record
in this path to wait for the operation to complete (and also to
invalidate the related device buffer) before trying to re-read the block
from the media.
Matthew Dillon [Sun, 24 Jul 2016 02:06:42 +0000 (19:06 -0700)]
kernel - Enhance indefinite wait buffer error message
* Enhance the error message re: indefinite wait buffer notifications.
Matthew Dillon [Sun, 24 Jul 2016 01:59:33 +0000 (18:59 -0700)]
kernel - Fix TDF_EXITING bug, instrument potential live loops
* Fix a TDF_EXITING bug. lwkt_switch_return() is called to fixup
the 'previous' thread, meaning turning off TDF_RUNNING and handling
TDF_EXITING.
However, if TDF_EXITING is not set, the old thread can be used or
acted upon / exited on by some other cpu the instant we clear
TDF_RUNNING. In this situation it is possible that the other cpu
will set TDF_EXITING in the small window of opportunity just before
we check ourselves, leading to serious thread management corruption.
* The new pmap_inval*() code runs on Xinvltlb instead of as a IPIQ
and can easily create significant latency between the two tests,
whereas the old code ran as an IPIQ and could not due to the critical
section.
Matthew Dillon [Sun, 24 Jul 2016 01:57:15 +0000 (18:57 -0700)]
kernel - Add vfs.repurpose_enable, adjust B_HASBOGUS
* Add vfs.repurpose_enable, default disabled. If this feature is turned on
the system will try to repurpose the VM pages underlying a buffer on
re-use instead of allowing the VM pages to cycle into the VM page cache.
Designed for high I/O-load environments.
* Use the B_HASBOGUS flag to determine if a pmap_qenter() is required,
and devolve the case to a single call to pmap_qenter() instead of one
for each bogus page.
Sascha Wildner [Sat, 23 Jul 2016 20:05:49 +0000 (22:05 +0200)]
Add a realquickkernel target, analogous to realquickworld.
It skips the recently added depend step, so it behaves like
quickkernel did before
521f740e8971df6fdb1b63933cb534746e86bfae.
Sascha Wildner [Sat, 23 Jul 2016 19:15:13 +0000 (21:15 +0200)]
Fix VKERNEL64 build.
François Tigeot [Sat, 23 Jul 2016 18:20:48 +0000 (20:20 +0200)]
kernel: Fix compilation
Sascha Wildner [Sat, 23 Jul 2016 17:15:24 +0000 (19:15 +0200)]
bsd-family-tree: Sync with FreeBSD.
François Tigeot [Sat, 23 Jul 2016 10:16:31 +0000 (12:16 +0200)]
drm/i915/gem: Reduce differences with Linux 4.4
François Tigeot [Sat, 23 Jul 2016 09:12:44 +0000 (11:12 +0200)]
drm: Sync a few headers with Linux 4.4
Sascha Wildner [Sat, 23 Jul 2016 07:40:11 +0000 (09:40 +0200)]
dmesg.8: Improve markup a bit and fix a typo (dumnr -> dumpnr).
Matthew Dillon [Sat, 23 Jul 2016 04:58:59 +0000 (21:58 -0700)]
kernel - Fix excessive ipiq recursion (4)
* Possibly the smoking gun. There was a case where the lwkt_switch()
code could wind up looping excessively calling lwkt_getalltokens()
if td_contended went negative, and td_contended on interrupt threads
could in fact go negative.
This stopped IPIs in their tracks.
* Fix by making td_contended unsigned, causing the comparisons to work
in all situations. And add a missing assignment to 0 for the
preempted thread case.
Matthew Dillon [Sat, 23 Jul 2016 01:22:17 +0000 (18:22 -0700)]
kernel - Fix excessive ipiq recursion (3)
* Third try. I'm not quite sure why we are still getting hard locks. These
changes (so far) appear to fix the problem, but I don't know why. It
is quite possible that the problem is still not fixed.
* Setting target->gd_npoll will prevent *all* other cpus from sending an
IPI to that target. This should have been ok because we were in a
critical section and about to send the IPI to the target ourselves, after
setting gd_npoll. The critical section does not prevent Xinvltlb, Xsniff,
Xspuriousint, or Xcpustop from running, but of these only Xinvltlb does
anything significant and it should theoretically run at a higher level
on all cpus than Xipiq (and thus complete without causing a deadlock of
any sort).
So in short, it should have been ok to allow something like an Xinvltlb
to interrupt the cpu in between setting target->gd_npoll and actually
sending the Xipiq to the target. But apparently it is not ok.
* Only clear mycpu->gd_npoll when we either (1) EOI and take the IPIQ
interrupt or (2) If the IPIQ is made pending via reqflags, when we clear
the flag. Previously we were clearing gd_npoll in the IPI processing
loop itself, potentially racing new incoming interrupts before they get
EOId by our cpu. This also should have been just fine, because interrupts
are enabled in the processing loop so nothing should have been able to
back-up in the LAPIC.
I can conjecture that possibly there was a race when we cleared gd_npoll
multiple times, potentially clearing it a second (or later) time,
allowing multiple incoming IPIs to be queued from multiple cpu sources but
then cli'ing and entering, e.g., an Xinvltlb processing loop before our cpu
could acknowledge any of them. And then, possibly, trying to issue an IPI
with the system in this state.
I don't really see how this can cause a hard lock because I did not observe
any loop/counter error messages on the console which should have been
triggered if other cpus got stuck trying to issue IPIs. But LAPIC IPI
interactions are not well documented so... perhaps they were being issued
but blocked our local LAPIC from accepting a Xinvltlb due to having one
extra unacknowledged Xipiq pending? But then, our Xinvltlb processing loop
*does* enable interrupts for the duration, so it should have drained if
this were so.
In any case, we no longer gratuitously clear gd_npoll in the processing
loop. We only clear it when we know there isn't one in-flight heading to
our cpu and none queued on our cpu. What will happen now is that a second
IPI can be sent to us once we've EOI'd the first one, and wind up in
reqflags, but will not be acted upon until our current processing loop
returns.
I will note that the gratuitous clearing we did before *could* have allowed
substantially all other cpus to try to Xipiq us at nearly the same time,
so perhaps the deadlock was related to that type of situation.
* When queueing an ipiq command from mycpu to a target, interrupts were
enabled between our entry into the ipiq fifo, the setting of our cpu bit
in the target gd_ipimask, the setting of target->gd_npoll, and our
issuing of the actual IPI to the target. We now disable interrupts across
these four steps.
It should have been ok for interrupts to have been left enabled across
these four steps. It might still be, but I am not taking any chances now.
Sascha Wildner [Fri, 22 Jul 2016 19:17:54 +0000 (21:17 +0200)]
build.7: Mention that KERNCONF can have more than one config.
Sascha Wildner [Fri, 22 Jul 2016 19:17:29 +0000 (21:17 +0200)]
Run make depend in quickkernel, too.
It is much cleaner to do that, just as it is done in quickworld.
At the price of a small increase in build time, quickkernel will now
continue working when a new kernel header is added, which broke it
before this commit because the header would not be copied to the right
place in /usr/obj.
Matthew Dillon [Fri, 22 Jul 2016 18:22:32 +0000 (11:22 -0700)]
drm - Stabilize broadwell and improve skylake
* The issue was primarily that the bitops on longs were all wrong. '1 << N'
returns an integer (even if N is a long), so those had to be 1L or 1LU.
There were also some missing parentheses in the bit test code.
* Throw in one fix from Linux, but I think it's basically a NOP when DMAPs
are used (and we use DMAPs).
* Add some code to catch a particular failure condition by locking up X
in a while/tsleep loop instead of crashing outright, allowing a remote
login to kgdb the live system.
Matthew Dillon [Tue, 19 Jul 2016 01:27:12 +0000 (18:27 -0700)]
kernel - repurpose buffer cache entries under heavy I/O loads
* At buffer-cache I/O loads > 200 MBytes/sec (newbuf instantiations, not
cached buffer use), the buffer cache will now attempt to repurpose the
VM pages in the buffer it is recycling instead of returning the pages
to the VM system.
* sysctl vfs.repurposedspace may be used to adjust the I/O load limit.
* The repurposing code attempts to free the VM page then reassign it to
the logical offset and vnode of the new buffer. If this succeeds, the
new buffer can be returned to the caller without having to run any
SMP tlb operations. If it fails, the pages will be either freed or
returned to the VM system and the buffer cache will act as before.
* The I/O load limit has a secondary beneficial effect which is to reduce
the allocation load on the VM system to something the pageout daemon can
handle while still allowing new pages up to the I/O load limit to transfer
to VM backing store. Thus, this mechanism ONLY affects systems with I/O
load limits above 200 MBytes/sec (or whatever programmed value you decide
on).
* Pages already in the VM page cache do not count towards the I/O load limit
when reconstituting a buffer.
Matthew Dillon [Mon, 18 Jul 2016 18:44:11 +0000 (11:44 -0700)]
kernel - Refactor buffer cache code in preparation for vm_page repurposing
* Keep buffer_map but no longer use vm_map_findspace/vm_map_delete to manage
buffer sizes. Instead, reserve MAXBSIZE of unallocated KVM for each buffer.
* Refactor the buffer cache management code. bufspace exhaustion now has
hysteresis, bufcount works just about the same.
* Start work on the repurposing code (currently disabled).
Matthew Dillon [Fri, 22 Jul 2016 05:48:10 +0000 (22:48 -0700)]
hammer2 - Fix deadlocks, bad assertion, improve flushing.
* Fix a deadlock in checkdirempty(). We must release the lock on oparent
before following a hardlink. If after re-locking chain->parent != oparent,
return EAGAIN to the caller.
* When doing a full filesystem flush, pre-flush the vnodes with a normal
transaction to try to soak-up all the compression time and avoid stalling
user process writes for too long once we get inside the formal flush.
* Fix a flush bug. Flushing a deleted chain is allowed if it is an inode.
Matthew Dillon [Thu, 21 Jul 2016 02:29:06 +0000 (19:29 -0700)]
nvme - Fix BUF_KERNPROC() SMP race
* BUF_KERNPROC() must be issued before we submit the request. The subq
lock is not sufficient to interlock request completion (which only needs
the comq lock).
* Only occurs under extreme loads, probably due to an IPI or Xinvltlb
causing enough of a pause that the completion can run. NVMe is so fast,
probably no other controller would hit this particular race condition.
* Also fix a bio queueing race which can leave a bio hanging. If no
requests are available (which can only happen under very heavy I/O
loads), the signaling to the admin thread on the next I/O completion
can race the queueing of the bio. Fix the race by making sure the
admin thread is signalled *after* queueing the bio.
François Tigeot [Thu, 21 Jul 2016 10:13:58 +0000 (12:13 +0200)]
drm/i915: Mark a DragonFly-specific change as such
zrj [Fri, 20 May 2016 15:54:04 +0000 (18:54 +0300)]
drm/i915: Re-apply lost intel_dp.c diff.
Bring back intel_dp.c part of
9c52345db761baa0a08634b3e93a233804b7a91b
Also reduce spam on laptops with eDP panels on i915 load.
Great opportunity to use the just-implemented DRM_ERROR_RATELIMITED()
macro, which uses krateprintf().
Issue is still there.
Sascha Wildner [Thu, 21 Jul 2016 06:52:54 +0000 (08:52 +0200)]
<sys/param.h>: Fix comments.