Matthew Dillon [Wed, 26 Sep 2012 18:04:20 +0000 (11:04 -0700)]
Merge branches 'hammer2' and 'master' of ssh://crater.dragonflybsd.org/repository/git/dragonfly into hammer2
Matthew Dillon [Wed, 26 Sep 2012 18:03:32 +0000 (11:03 -0700)]
kernel - Fix i386 wire_count panics (2)
* Optimize wakeup case.
Matthew Dillon [Wed, 26 Sep 2012 17:36:13 +0000 (10:36 -0700)]
kernel - Fix i386 wire_count panics
* Tracked down to a situation where a pmap structure is being dtor'd by
the objcache simultaneously with a vm_page_protect() operation on
a page table page's vm_page_t.
(1) vm_page_protect() begins running, finds page table page to remove,
removes the related pv_entry, but then gets stuck waiting for the
pmap->pm_pteobj (vm_object token).
(2) Exit on another thread simultaneously removes all remaining VM
pages from the pmap. However, due to #(1), there is still an
active page table page in pmap->pm_pteobj that the exit code has
no visibility to.
(3) The related pmap is then dtor'd due to heavy fork/exec/exit load
on the system. The VM page is still present, vm_page_protect()
is still stuck on the token (or hasn't gotten cpu back).
(4) Nominal vm_object_terminate() destroys the page table page.
(5) vm_page_protect() unblocks and tries to destroy the page.
(6) BOOM.
* This fix places a barrier between the normal process exit code and the
dtor which will block while a vm_page_protect() is active on the pmap.
* This time for sure, but if not we still know that the problem is related
to this exit race.
Samuel J. Greear [Wed, 26 Sep 2012 04:51:10 +0000 (22:51 -0600)]
Merge branch 'master' of ssh://crater.dragonflybsd.org/repository/git/dragonfly
Samuel J. Greear [Wed, 26 Sep 2012 04:50:33 +0000 (22:50 -0600)]
dloader - Add user_scheduler kenv tunable sample
Matthew Dillon [Wed, 26 Sep 2012 02:13:39 +0000 (19:13 -0700)]
kernel - usched_dfly revamp (8), add reschedule hints
* Add reschedule hints when issuing a read() on a pipe or socket, or
issuing a blocking kevent() call.
* usched_dfly will force a reschedule after the round-robin count has
passed the half-way point if it detects a scheduling hint. This is
an attempt to avoid rescheduling in the middle of some critical user
operation (e.g. postgres server holding internal locks).
* Add kern.usched_dfly.fast_resched which allows the scheduler to avoid
interrupting a less desirable process with a more desirable process
as long as the priority difference is not too great.
However, it defaults to 0 (disabled), because enabling it has
consequences for interactive responsiveness.
* When running pgbench we recommend leaving fast_resched disabled and
instead running pgbench at idprio 15 to work around issues where
the postgres server process(es) get interrupted by the pgbench processes,
which causes the postgres server process(es) to hit internal lock conflicts
more quickly and enter a semaphore wait more often (when both pgbench and
the postgres servers are running on the same machine).
This is really an issue with postgres server scaling. Because the pgbench
processes use so much less cpu than the postgres server processes, they
are given a more desirable priority and thus can interrupt the postgres
server processes. We can't really 'fix' this in the scheduler without
seriously messing up normal interactive responsiveness for the system.
Example:
idprio pgbench -j 80 -c 80 -T 60 -S bench
Matthew Dillon [Tue, 25 Sep 2012 18:53:58 +0000 (11:53 -0700)]
kernel - usched_dfly revamp (7), bring back td_release, sysv_sem, weights
* Bring back the td_release kernel priority adjustment.
* sysv_sem now attempts to delay wakeups until after releasing its token.
* Tune default weights.
* Do not depress priority until we've become the uschedcp.
* Fix priority sort for LWKT and usched_dfly to avoid context-switching
across all runnable threads twice.
Sascha Wildner [Tue, 25 Sep 2012 15:21:52 +0000 (17:21 +0200)]
make.conf.5: Add some words about WANT_NETGRAPH7.
Sascha Wildner [Tue, 25 Sep 2012 05:37:07 +0000 (07:37 +0200)]
arcmsr(4): Sync with FreeBSD (Areca's driver version 1.20.00.25).
Some bug fixes and added support for ARC-1213, ARC-1223 and ARC-1882.
Thanks to ftigeot for giving it some testing.
Matthew Dillon [Tue, 25 Sep 2012 01:24:22 +0000 (18:24 -0700)]
kernel - usched_dfly revamp (6), reimplement shared spinlocks & misc others
* Rename gd_spinlocks_wr to just gd_spinlocks.
* Reimplement shared spinlocks and optimize the shared spinlock path.
Contended exclusive spinlocks are less optimal with this change.
* Use shared spinlocks for all file descriptor accesses. This includes
not only most IO calls like read() and write(), but also callbacks
from kqueue to double-check the validity of a file descriptor.
* Use getnanouptime() instead of nanouptime() in kqueue_sleep() and
kern_kevent(), removing a hardware I/O serialization (to read the HPET)
from the critical path.
* These changes significantly reduce kernel spinlock contention when running
postgres/pgbench benchmarks.
Matthew Dillon [Mon, 24 Sep 2012 21:34:41 +0000 (14:34 -0700)]
kernel - Add PC sampling for x86-64
* Xtimer interrupt (lapic timer) now samples the %rip value and stores
it in the globaldata structure. Sampling occurs whether the machine is
in a critical section or not.
* Used for debugging.
Matthew Dillon [Mon, 24 Sep 2012 20:43:50 +0000 (13:43 -0700)]
kernel - usched_dfly revamp (5), correct default in last commit
* Doh. weight2 should be 120, not 1200.
Matthew Dillon [Mon, 24 Sep 2012 20:32:11 +0000 (13:32 -0700)]
kernel - usched_dfly revamp (4), improve tail
* Improve tail performance (many more cpu-bound processes than available
cpus).
* Experiment with removing the LWKT priority adjustments for kernel vs user.
Instead give LWKT a hint about the user scheduler when scheduling a thread.
LWKT's round-robin is left unhinted to hopefully round-robin starved LWKTs
running in kernel mode.
* Implement a better calculation for the per-thread uload than the priority.
Instead, use estcpu.
* Adjust default weightings for the new uload calculation scale.
Matthew Dillon [Mon, 24 Sep 2012 17:31:47 +0000 (10:31 -0700)]
systat - Ensure vmmeter output separates fields by at least one space.
* Ensure vmmeter output separates fields by at least one space.
* Fix minor formatting issue with token names.
Matthew Dillon [Mon, 24 Sep 2012 17:27:32 +0000 (10:27 -0700)]
systat - Display colliding token
* Display the colliding token when a non-zero token collision count is
reported. This is somewhat statistical but should still provide good
information on MP bottlenecks.
Matthew Dillon [Mon, 24 Sep 2012 17:26:28 +0000 (10:26 -0700)]
kernel - Add v_token_name to gd_cnt
* Copy the tok->t_desc field into the gd_cnt.v_token_name buffer
when a token collides so systat -pv 1 can pick it up.
Sascha Wildner [Mon, 24 Sep 2012 12:45:04 +0000 (12:45 +0000)]
usr.sbin/Makefile: Remove an obsolete comment.
Sascha Wildner [Mon, 24 Sep 2012 12:44:16 +0000 (12:44 +0000)]
pnpinfo(8): Don't build/install for x86_64.
It crashes and isn't really useful. FreeBSD did so, too.
Sepherosa Ziehau [Sun, 23 Sep 2012 09:12:52 +0000 (17:12 +0800)]
vlan: Dispatch mbuf to be sent to physical interface's start cpu
Sepherosa Ziehau [Sun, 23 Sep 2012 09:08:22 +0000 (17:08 +0800)]
bridge: Utilize netisr to run physical interface's if_start
Sepherosa Ziehau [Sun, 23 Sep 2012 09:07:01 +0000 (17:07 +0800)]
aue/lgue: Utilize netisr to run if_start
Sepherosa Ziehau [Sun, 23 Sep 2012 09:04:48 +0000 (17:04 +0800)]
if: Defer if_start to netisr instead of ifnet for further processing
Sepherosa Ziehau [Sun, 23 Sep 2012 07:59:03 +0000 (15:59 +0800)]
ifpoll: Use u_long for statistics
Sepherosa Ziehau [Sun, 23 Sep 2012 07:48:31 +0000 (15:48 +0800)]
ifpoll: Reorder iopoll fields a little bit
Sepherosa Ziehau [Sun, 23 Sep 2012 07:42:14 +0000 (15:42 +0800)]
ifpoll: Simplify TX polling logic
Matthew Dillon [Sun, 23 Sep 2012 02:07:02 +0000 (19:07 -0700)]
top - Fix -t / -S
* -t to show threads now just shows threaded processes (all LWPs),
and no longer also shows system processes.
* -S to show system threads now works as expected.
Matthew Dillon [Sun, 23 Sep 2012 02:00:12 +0000 (19:00 -0700)]
top - Adjust top to account for kernel changes
* ccpu no longer exists
* If top is out of sync with the system, fix a seg-fault which can occur
when it tries to use lwp_stat as an array index.
Matthew Dillon [Sun, 23 Sep 2012 01:59:26 +0000 (18:59 -0700)]
ps - Adjust ps to account for kernel changes
* ccpu no longer exists.
* pctcpu is now accurate regardless of the lwp's state.
Matthew Dillon [Sun, 23 Sep 2012 01:57:06 +0000 (18:57 -0700)]
kernel - usched_dfly revamp (3), fix estcpu
* Fix the estcpu calculation, which previously assumed only a single
runq (in usched_dfly there is a runq per cpu).
* Add a global atomic int accounting for all running and runnable lwp's.
* Fix cpu-hogging issues for bursty processes by creating a fast-decay-mode
for estcpu when a thread first starts up, or after it has been asleep
for more than 1 second.
Sepherosa Ziehau [Sat, 22 Sep 2012 13:45:43 +0000 (21:45 +0800)]
emx: Allow user to specify RX/TX processing CPU's offset
Matthew Dillon [Sat, 22 Sep 2012 01:34:09 +0000 (18:34 -0700)]
kernel - usched_dfly revamp (2), reduce token collisions
* Add wakeup_start_delayed() and wakeup_end_delayed(). These functions
will attempt to delay any wakeup() calls made between them.
Use the functions in the unix domain socket send code.
* This removes a lot of volatility from monster's 48:48 pgbench tests by
delaying the wakeup()s related to a unix domain socket write until after
the pool token has been released.
* Adjust usched_dfly parameters. In particular, weight2 can be higher now.
Matthew Dillon [Fri, 21 Sep 2012 23:53:37 +0000 (16:53 -0700)]
kernel - Add vmmeter counter for token collisions
* Add vmmeter counter for token collisions.
* Add token collisions to systat -pv's display.
Matthew Dillon [Fri, 21 Sep 2012 23:09:25 +0000 (16:09 -0700)]
kernel - usched_dfly revamp
* NOTE: This introduces a few regressions at high loads. They've been
identified and will be fixed in another iteration.
We've identified an issue with weight2. When weight2 successfully
schedules a process pair on the same cpu it can lead to inefficiencies
elsewhere in the scheduler related to user-mode and kernel-mode
priority switching. In this situation testing pgbench/postgres pairs
(e.g. -j $ncpus -c $ncpus) we sometimes see some serious regressions on
multi-socket machines, and other times see remarkably high performance.
* Fix a reported panic.
* Revamp the weights and algorithms significantly. Fix algorithmic errors
and improve the accuracy of weight3. Add weight4 which basically tells
the scheduler to try harder to find a free cpu to schedule the lwp on
when the current cpu is busy doing something else.
Matthew Dillon [Fri, 21 Sep 2012 23:07:00 +0000 (16:07 -0700)]
kdump - cleanup cpu-stamp formatting
* %2d instead of %d so columns align.
Matthew Dillon [Fri, 21 Sep 2012 23:05:50 +0000 (16:05 -0700)]
ps - Show cpu# even when process is sleeping
* Show the cpu# the process is scheduled on even when it is sleeping.
* Add a 'Q' flag for 'R'unnable processes, indicating that the process is
runnable but is on the user scheduler runq and does not actually have
the cpu at the moment.
Matthew Dillon [Fri, 21 Sep 2012 20:16:56 +0000 (13:16 -0700)]
sysctl - Allow integers to use hex
* Allow integers to be specified in hex using a 0x or 0X prefix.
Sascha Wildner [Thu, 20 Sep 2012 20:45:13 +0000 (22:45 +0200)]
nrelease: Remove the obsolete -scmgit-gui option from the build.
Its presence was breaking snapshot building.
It is no longer necessary to prevent gitk from being built along
with git using '-scmgit-gui', because pkgsrc now has a separate
scmgit-gitk package for it.
Since we used to build our GUI distribution without -scmgit-gui
(meaning with gitk), add scmgit-gitk to the GUI build.
Matthew Dillon [Thu, 20 Sep 2012 18:35:21 +0000 (11:35 -0700)]
kernel - Improve regressions in usched_dfly (2)
* Allow various fork() behaviors to be supported via
kern.usched_dfly.features.
* Set the default to place the newly forked process on
a random cpu instead of the current cpu.
The bsd4 scheduler had a global queue and could just signal
a random helper to pick up the thread. The dfly scheduler
has per-cpu queues and must actually enqueue the thread to
another cpu.
The bsd4 scheduler is still slightly superior here because
if the parent running on the current cpu immediately waits
for the child, the child is able to run on the current cpu.
However, randomization works quite well and this removes
nearly all of the make -j N regression.
Matthew Dillon [Thu, 20 Sep 2012 18:33:52 +0000 (11:33 -0700)]
kdump - Add options to print physical cpu
* -c option adds the physical cpu to the output
* -a option enables humanform output. This option turns on -c and -R.
Matthew Dillon [Thu, 20 Sep 2012 18:33:03 +0000 (11:33 -0700)]
kernel - Include physical cpu in ktrace header
* Record which cpu the system call (etc) is running on
in the ktrace header.
Samuel J. Greear [Thu, 20 Sep 2012 07:52:58 +0000 (01:52 -0600)]
wmesg - Increase to 8 chars from 7
* Increase top from 7 to 8 and use the WMESGLEN define in ps (previously 6).
Matthew Dillon [Thu, 20 Sep 2012 07:37:40 +0000 (00:37 -0700)]
kernel - Add usched_dfly algorithm, set as default for now (9)
* Code cleanup, remove bits that shouldn't matter any more.
Matthew Dillon [Thu, 20 Sep 2012 07:31:43 +0000 (00:31 -0700)]
kernel - Improve regressions in usched_dfly (1)
* The new scheduler is MP locked at a very fine-grain. The old scheduler
had a global spinlock which effectively serialized competing cores during
exit/wait sequences.
With the new scheduler this serialization is gone, which resulted in a
vfork performance regression due to a fallback tsleep loop in the
reaper.
* This fixes the problem with an explicit signal bit for tsleep/wakeup.
The sequence is avoided if the reaper determines the thread has
already completed its exit.
Matthew Dillon [Thu, 20 Sep 2012 04:37:18 +0000 (21:37 -0700)]
kernel - Add usched_dfly algorithm, set as default for now (8)
* Fix additional edge cases, in particular improving the process pairing
algorithm to reduce flapping.
* Reorder conditionals in dd->uschedcp assignment to improve the hot path.
* Rewrite the balancing rover. The rover will now move one process per
tick from a very heavily loaded cpu queue to a lightly loaded cpu queue.
Each cpu target is iterated by the rover, one target per tick.
* Reformulate dfly_chooseproc_locked() and friends. Add a capability to
choose the 'worst' process (from the end of the queue), which is used
by the rover.
* When pulling a random thread we require the queue it is taken from to
be MUCH more heavily loaded than our own queue, which avoids ping-ponging
processes back and forth when the load is not balanced against the number
of cpu cores (e.g. 6 servers, 4 cores).
Matthew Dillon [Thu, 20 Sep 2012 04:33:54 +0000 (21:33 -0700)]
kernel - Add lwkt_yield_quick()
* Add a quick version of lwkt_yield() which does not try to round-robin
LWKT threads at the same priority.
Matthew Dillon [Thu, 20 Sep 2012 04:33:03 +0000 (21:33 -0700)]
kernel - Don't call lwkt_user_yield() in uiomove() unless xfer is big
* Only call lwkt_user_yield() in uiomove() when the xfer is big.
Matthew Dillon [Wed, 19 Sep 2012 18:25:09 +0000 (11:25 -0700)]
kernel - Add usched_dfly algorithm, set as default for now (7)
* Reenable weight2 (the process pairing heuristic) and fix the
edge cases associated with it.
* Change the process pulling behavior. Now we pull the 'worst' thread
from some other cpu instead of the best (duh!), we only pull when a
cpu winds up with no designated user threads, or we pull via a
schedulerclock-implemented rover.
The schedulerclock-implemented rover will allow ONE cpu to pull the
'worst' thread across all cpus (with some locality) once per
round-robin interval (4 scheduler ticks).
The rover is responsible for taking excess processes that are unbalancing
one or more cpu's (for example, you have 6 running batch processes and
only 4 cpus) and slowly moving them between cpus. If we did not do this
the 'good' processes running on the unbalanced cpus are put at an unfair
disadvantage.
* This should fix all known edge cases, including ramp-down edge cases.
Matthew Dillon [Wed, 19 Sep 2012 12:10:21 +0000 (05:10 -0700)]
kernel - Add usched_dfly algorithm, set as default for now (6)
* Fix an edge case where the pairing could cause flapping.
* Fix an edge case where user processes were interrupting each other
when they were in the same queue, which could cause a synchronous
process like a postgres server to lose cpu while holding internal
locks during a short operation.
Matthew Dillon [Wed, 19 Sep 2012 11:41:27 +0000 (04:41 -0700)]
kernel - Add usched_dfly algorithm, set as default for now (5)
* Do a better job pushing threads to the correct cpu. Keep the load
factor live even when the thread goes to sleep, until some other thread
tries to go to sleep on the same cpu.
* Handle an edge case where a cpu-bound thread needs to be moved to
another cpu.
* Pull once a second and on-demand.
Matthew Dillon [Wed, 19 Sep 2012 03:55:18 +0000 (20:55 -0700)]
kernel - Add usched_dfly algorithm, set as default for now (4)
* Fix fork regression with usched_dfly. Most fork/exec sequences involve
the parent waiting. The new scheduler was placing the newly forked
process on another cpu which is non-optimal if the parent is going
to immediately wait.
Instead if there is nothing else waiting to run on the current cpu,
leave the forked process on the current cpu initially. If the parent
waits quickly the forked process will get cpu, otherwise it will get
scheduled away soon enough. If the parent forks additional children
then we find there is something on the queue now (the first child) and
put the additional children on other cpus.
Reported-by: thesjg
Matthew Dillon [Tue, 18 Sep 2012 21:18:24 +0000 (14:18 -0700)]
kernel - Increase machdep.cpu_idle_repeat from 4 to 750
* Increase machdep.cpu_idle_repeat from 4 to 750. It now takes longer
before the kernel will move from HLT/MONITOR/MWAIT to ACPI-based halting.
* Improves benchmark performance significantly on recent cpus without
eating up too much extra power, but laptop tests are still pending.
* Laptop users can always set it back to 4.
Matthew Dillon [Tue, 18 Sep 2012 20:58:11 +0000 (13:58 -0700)]
usched - Add usched utility
* Currently must run as root
* usched {bsd4,dfly} program args...
François Tigeot [Tue, 18 Sep 2012 20:38:59 +0000 (22:38 +0200)]
tuning(7): shm_use_phys is now enabled by default
Matthew Dillon [Tue, 18 Sep 2012 18:45:19 +0000 (11:45 -0700)]
kernel - Account for file reads that take the VM shortcut
* Account for file reads that take the VM shortcut in hammer's statistics.
Sascha Wildner [Tue, 18 Sep 2012 18:18:35 +0000 (20:18 +0200)]
kernel/usched_dfly: Small UP compilation fix.
Matthew Dillon [Tue, 18 Sep 2012 18:01:35 +0000 (11:01 -0700)]
kernel - Add usched_dfly algorithm, set as default for now (3)
* Add a field to the thread structure, td_wakefromcpu. All wakeup()
family calls will load this field with the cpu the thread was woken
up FROM.
* Use this field in usched_dfly to weight scheduling such that pairs
of synchronously-dependent threads (for example, a pgbench thread
and a postgres server process) are placed closer to each other in
the cpu topology.
* Weighting:
- Load matters the most
- Current cpu thread is scheduled on is next
- Synchronous wait/wakeup weighting is last
* Tests on monster yield better all-around results with a new all-time
high w/ pgbench -j 40 -c 40 -T 60 -S bench:
25% idle at 40:40 tps = 215293.173300 (excluding connections establishing)
Without the wait/wakeup weighting (but with load and current cpu
weighting):
41% idle at 40:40 tps = 162352.813046 (excluding connections establishing)
Without wait/wakeup or current-cpu weighting. Load balancing only:
43% idle at 40:40 tps = 159047.440641 (excluding connections establishing)
Sascha Wildner [Tue, 18 Sep 2012 17:01:42 +0000 (19:01 +0200)]
Mention KTR_USCHED_DFLY in the manpage and in the LINTs.
Sascha Wildner [Tue, 18 Sep 2012 17:01:11 +0000 (19:01 +0200)]
kernel/usched_dfly: #if 0 all unused KTR_INFOs (fixes build with KTR).
Sascha Wildner [Tue, 18 Sep 2012 16:57:11 +0000 (18:57 +0200)]
kernel/usched_bsd4: Declare the KTR_INFO_MASTER(usched) as extern.
It is shared with usched_dfly.
Matthew Dillon [Tue, 18 Sep 2012 16:25:03 +0000 (09:25 -0700)]
kernel - add usched_dfly algorithm, set as default for now (3)
* UP compile fixes.
Reported-by: swildner
Matthew Dillon [Tue, 18 Sep 2012 06:54:07 +0000 (23:54 -0700)]
kernel - add usched_dfly algorithm, set as default for now (2)
* Bug fix to the load accounting code, which affects cpu selection.
Matthew Dillon [Tue, 18 Sep 2012 06:17:51 +0000 (23:17 -0700)]
kernel - add usched_dfly algorithm, set as default for now
* Fork usched_bsd4 for continued development.
* Rewrite the bsd4 scheduler to use per-cpu spinlocks and queues.
* Reformulate the cpu selection algorithm using the topology info.
We now do a top-down iteration instead of a bottom-up iteration
to calculate the best cpu node to schedule something to.
Implements both thread push to remote queue and pull from remote queue.
* Track a load factor on a per-cpu basis.
Sepherosa Ziehau [Tue, 18 Sep 2012 01:08:57 +0000 (09:08 +0800)]
ifpoll: Setup if_start cpuid for NPOLLING properly
Sascha Wildner [Mon, 17 Sep 2012 21:13:26 +0000 (23:13 +0200)]
ixgbe.4: Use .Dx
Matthew Dillon [Mon, 17 Sep 2012 16:32:53 +0000 (09:32 -0700)]
kernel - usched_bsd4 algorithm fixes & improvements
* Fix a bug in the checks loop where the loop counter would be reset
whenever it moved to a new queue.
* Improve the min_level_lwp selection code by also testing lwp_priority.
* Add code to kick the helper threads for the processes that weren't
selected.
* Clean up some code syntax.
Sascha Wildner [Mon, 17 Sep 2012 14:51:34 +0000 (16:51 +0200)]
msgport.9: Some mdoc and typo fixes.
Sascha Wildner [Mon, 17 Sep 2012 13:49:58 +0000 (15:49 +0200)]
em.4: Mention TSO support.
François Tigeot [Tue, 11 Sep 2012 20:25:27 +0000 (22:25 +0200)]
ixgbe: Remove the link handler tasklet
There's no need for it; the job can be done just as well in regular
interrupt threads.
Sascha Wildner [Mon, 17 Sep 2012 12:22:02 +0000 (14:22 +0200)]
kernel: Remove some unused variables.
Sascha Wildner [Mon, 17 Sep 2012 12:20:13 +0000 (14:20 +0200)]
kernel/ipx: Remove #ifdef lint checks (and add #endif comments).
Sepherosa Ziehau [Mon, 17 Sep 2012 09:30:09 +0000 (17:30 +0800)]
ifpoll: Field renaming; if_qpoll -> if_npoll
Consistent w/ IFF_NPOLLING flag
Sepherosa Ziehau [Mon, 17 Sep 2012 09:13:32 +0000 (17:13 +0800)]
ifpoll: Don't limit number of CPUs that perform polling
Nuno Antunes [Mon, 17 Sep 2012 05:16:52 +0000 (06:16 +0100)]
msgport.9: Catch up with recent changes to lwkt_initport_spin().
Sepherosa Ziehau [Mon, 17 Sep 2012 01:29:28 +0000 (09:29 +0800)]
msgport: Always save owner thread for threads' msgports
This unbreaks the assertion in dropmsg for spin msgport. Also for shared
spin msgport don't allow dropmsg.
While I'm here, add a comment for mp_dropmsg and adjust the comment about mpu_td.
Reported-by: pavalos@
Sascha Wildner [Mon, 17 Sep 2012 00:55:36 +0000 (02:55 +0200)]
fortune(6)/mutex.9: s/is is/is/
Sascha Wildner [Mon, 17 Sep 2012 00:26:06 +0000 (02:26 +0200)]
em.4: Add some words about emx(4) and create MLINKS.
Sascha Wildner [Sun, 16 Sep 2012 15:25:58 +0000 (17:25 +0200)]
Update the pciconf(8) database.
September 14, 2012 snapshot from http://pciids.sourceforge.net/
Matthew Dillon [Sun, 16 Sep 2012 03:52:45 +0000 (20:52 -0700)]
kernel - Add vm.read_shortcut_enable
* Add vm.read_shortcut_enable (disabled by default for now). Set to 1 to
enable this feature.
This enables a helper function which HAMMER1 now uses to short-cut read()
operations on files. This feature only works on x86-64.
* When enabled this feature allows file read() requests to be satisfied
directly from the VM page cache using lwbuf's, completely bypassing the
buffer cache and also bypassing most of the VFS's VOP_READ code.
The result is an approximate doubling of read() performance in cases
where the buffer cache is too small to fit the hot data set, but the VM
page cache is not.
This feature avoids the buffer cache and thus prevents buffer
cycling within it, which, due to the constant installation and
deinstallation of pages in KVM, causes a great deal of SMP page
table page invalidation.
Matthew Dillon [Sun, 16 Sep 2012 01:43:26 +0000 (18:43 -0700)]
hammer - Adjust record and dirtybuf limits to handle large buffer caches
* Adjust record and dirtybuf limits such that they don't blow up a hammer
volume if the system's buffer cache is very large.
Matthew Dillon [Sun, 16 Sep 2012 01:22:52 +0000 (18:22 -0700)]
Merge branches 'hammer2' and 'master' of ssh://crater.dragonflybsd.org/repository/git/dragonfly into hammer2
John Marino [Sat, 15 Sep 2012 21:36:39 +0000 (23:36 +0200)]
rtld: Don't call process_nodelete with NULL object pointer
If object loading and relocation fail, the obj pointer will be NULL when
the process_nodelete function is reached. A crash will occur if the
function is called with a null pointer, so ensure that it doesn't.
Taken-from: FreeBSD SVN 239470 (20 Aug 2012)
Matthew Dillon [Sat, 15 Sep 2012 20:50:38 +0000 (13:50 -0700)]
kernel - fix builds
* Fix a few kprintf()'d %d -> %ld for nbufs.
Reported-by: vsrinivas
Matthew Dillon [Sat, 15 Sep 2012 17:04:30 +0000 (10:04 -0700)]
systat - remove bounds on buffer cache nbuf count for 64-bit
* Adjust systat to the new kernel reality.
Matthew Dillon [Sat, 15 Sep 2012 17:00:54 +0000 (10:00 -0700)]
kernel - remove bounds on buffer cache nbuf count for 64-bit
* Remove arbitrary 1GB buffer cache limitation
* Adjusted numerous 'int' fields to 'long'. Even though nbuf is not
likely to exceed 2 billion buffers, byte calculations using the
variable began overflowing, so convert it and various other
variables to long.
* Make sure we don't blow-out the temporary valloc() space in early boot
due to nbufs being too large.
* Unbound 'kern.nbuf' specifications in /boot/loader.conf as well.
Matthew Dillon [Sat, 15 Sep 2012 07:11:04 +0000 (00:11 -0700)]
ipcs - Fix kvm accesses for new semid structures
* semid_ds -> semid_pool, primarily.
Matthew Dillon [Sat, 15 Sep 2012 06:44:10 +0000 (23:44 -0700)]
kernel - Implement segment pmap optimizations for x86-64 (6)
* Improve process exit. When the last process referencing a shared
anonymous memory VM object exits the kernel destroys the object
and its shared pmap.
Removal of pages from the shared pmap was causing the system to IPI
EVERY cpu for EACH pte. Needless to say this caused a process to take
~2 minutes to remove a ~6GB shared segment. Optimize this case by
not bothering to do the IPI/invlpg invalidations since the pmap is not
actually active.
* This also applies to any exiting process. When cleaning out the pmap
we no longer invlpg each pte, since nobody is referencing the pmap except
the current thread in the kernel doing the exit. It will simply issue
a cpu_invltlb() when it is all done.
Matthew Dillon [Sat, 15 Sep 2012 06:43:04 +0000 (23:43 -0700)]
kernel - Enhance sysv semaphore performance (2)
* Change SEMMAP default from 30 to 128. Also note that most other
semaphore-related defaults were increased significantly in prior
commits.
Matthew Dillon [Sat, 15 Sep 2012 05:05:13 +0000 (22:05 -0700)]
kernel - Enhance sysv semaphore performance
* Make the locks used by the semaphore module significantly more
fine-grained.
* Reorganize the semaphore related structures significantly to
reduce locking conflicts.
* Reduce overhead and improve performance for handling SEM_UNDO semops.
Matthew Dillon [Sat, 15 Sep 2012 05:03:39 +0000 (22:03 -0700)]
kernel - Add kern.gettimeofday_quick sysctl
* Add a sysctl that forces gettimeofday() to return a coarse timestamp
instead of a fine-grained timestamp.
This sysctl is mainly intended for performance debugging.
Matthew Dillon [Fri, 14 Sep 2012 17:13:39 +0000 (10:13 -0700)]
kernel - Use pool tokens to protect unix domain PCBs (2)
* Fix mismatched token unlock in last commit.
Matthew Dillon [Fri, 14 Sep 2012 16:10:06 +0000 (09:10 -0700)]
kernel - Use pool tokens to protect unix domain PCBs
* The read, status, and write paths now use per-pcb pool tokens
instead of the global unp_token. The global token is still used
for accept, connect, disconnect, etc.
* General semantics for making this SMP safe is to obtain a pointer
to the unp from so->so_pcb, then obtain the related pool token,
then re-check that so->so_pcb still equals unp.
* Pool token protects the peer pointer, unp->unp_conn. Any change
to unp->unp_conn requires both the pool token and the global token.
* This should improve concurrent reading and writing w/unix domain
sockets.
Matthew Dillon [Fri, 14 Sep 2012 08:47:19 +0000 (01:47 -0700)]
kernel - Fix unix domain socket portfn routing
* sonewconn_faddr() / sonewconn() was improperly overriding the sync_port
setting for unix domain sockets, causing unnecessary netmsg traffic to
the netisr threads.
* This should significantly improve unix domain socket performance.
With-help-from: sephe
Sepherosa Ziehau [Fri, 14 Sep 2012 01:48:28 +0000 (09:48 +0800)]
pci/mptable: Let parent route the interrupt before using the intline
Tested-by: swildner@
Matthew Dillon [Fri, 14 Sep 2012 00:51:13 +0000 (17:51 -0700)]
ls - Add -I to reverse -A
* ls implies -A when run as root. Add the -I option
which disables this behavior.
* Note that -A and -I will override each other on the
command line.
Matthew Dillon [Thu, 13 Sep 2012 20:47:03 +0000 (13:47 -0700)]
kernel - Implement segment pmap optimizations for x86-64 (5)
* Fix self-deadlock in pmap_remove_*() sequence. The sequence calls
pmap_remove_callback() -> pmap_release_pv(proc_pt_pv) but the caller
may already be holding the parent, proc_pd_pv, locked. If
pmap_release_pv() needs to get the parent it deadlocks.
Fixed by passing the parent into pmap_release_pv() for this case.
Matthew Dillon [Thu, 13 Sep 2012 18:39:11 +0000 (11:39 -0700)]
kernel - Implement segment pmap optimizations for x86-64 (4)
* Fix pmap_pte_quick() when it is called on a VM object's simple pmap.
Fixes a panic during postgres init w/ postgres/mmap. Simple pmaps
do not have PDP or PML4 pages or pv_entry's, only from PD on down.
* Do some minor API work on the pte-indexing functions.
Matthew Dillon [Thu, 13 Sep 2012 18:38:22 +0000 (11:38 -0700)]
Merge branch 'master' of ssh://crater.dragonflybsd.org/repository/git/dragonfly
Matthew Dillon [Thu, 13 Sep 2012 17:58:19 +0000 (10:58 -0700)]
kernel - Implement segment pmap optimizations for x86-64 (3)
* Fix pmap optimization bugs triggered by XORG (startx) and postgres/mmap
* The simple-mode pmaps embedded in VM objects do not have the PML4 or PDP
layer. This caused pmap_scan() to miss pages, resulting in an assertion
and panic during object frees if the objects were large enough.
* Improve postgres 9.2/mmap, still more work to go.
Sascha Wildner [Thu, 13 Sep 2012 17:06:15 +0000 (19:06 +0200)]
Sync zoneinfo database with tzdata2012f from ftp://ftp.iana.org/tz/releases
* australasia (Pacific/Fiji): Fiji DST is October 21 through January 20
this year. (Thanks to Steffen Thorsen.)
* Theory: Correct a typo.
Nuno Antunes [Thu, 13 Sep 2012 07:07:08 +0000 (08:07 +0100)]
Expand a comment in lwkt_switch().