# DragonFly BSD 4.2

* Version 4.2.0 released 29 June 2015.

Version 4.2 of DragonFly brings significant updates to i915 and Radeon support, a move to GCC 5 (making DragonFly the first BSD to do so), a replacement for Sendmail, and numerous other changes including OpenSSL updates, a new boot screen, improved sound, and improved USB support.

The details of all commits between the 4.0 and 4.2 branches are available in the associated commit messages for [4.2RC](http://lists.dragonflybsd.org/pipermail/commits/2015-June/418748.html) and [4.2.0](http://lists.dragonflybsd.org/pipermail/commits/2015-June/418867.html).

## Big-ticket items

### New base compiler

GCC-5 (gcc-5.1.1) is now the system base compiler as of this release. The backup compiler is gcc-4.7.4. This gives us significantly better C++ support, which is needed for package building.

### Improved graphics support

Significant progress continues in the drm (graphics) subsystem. Both the radeon and i915 drivers have been updated, with i915 support seeing the most improvements.

### Sendmail replaced by DMA

Sendmail has been replaced by the home-grown DragonFly Mail Agent (DMA) in the base system. DMA is not a full-featured MTA (Mail Transfer Agent); it only accepts mail from local MUAs (Mail User Agents) and delivers it immediately, either locally or remotely. DMA does not listen for network connections on port 25.

People who still need a full-featured MTA must install one from dports. OpenSMTPD, Postfix and Sendmail itself are available as binary packages. An [MTA wiki page](http://www.dragonflybsd.com/docs/docs/newhandbook/mta/) has been written to explain in detail how to switch from DMA to a full-featured MTA on DragonFly 4.2 and more recent versions.

## Changes since DragonFly 4.0

### Kernel

* Fix an exec() optimization race that could deadlock processes.
* Add reapctl() system call for reaper and sub-process management.
* Improve slab cleanup performance.
* Increase default MAXTSIZ from 128M to 256M.
* Add lock cancelation features.
* Implement a new callout*() core.
* Fix callout deadlocks in u4b.
* Fix a panic on upmap/kpmap access from procfs.
* Fix a pmap panic in vkernel64.
* Implement kqueue write filtering for U4B, required by some apps.
* Fix a major pagetable memory leak that could fill up swap.
* Swapcache cleaning is allowed to proceed even if swapcache is disabled.
* pxeboot - work around some BIOS breakage.
* usb-u4b - synchronize from FreeBSD as of March 2015.
* Refactor kernel message buffer (dmesg) code to fix lost text.
* Fix a panic in situations where a chroot is broken.
* Fix O_CLOEXEC race in open() and fhopen().
* Add pipe2() system call.
* Add chflagsat() system call.
* Refactor bdirty() handling to fix possible fsync races.
* Add cpu C-state ACPI parsing and settings.
* Ext2fs code cleanup.
* Tmpfs code cleanup.
* Add utimensat(2) and futimens(2).

### Graphics

Significant progress continues in the drm (graphics) subsystem, with full acceleration (2D, 3D) supported on most Intel and some AMD GPUs.

The kernel drm code has been partially updated to Linux 3.14 in order to follow the evolution of the i915 and radeon drivers. Many Linux programming interfaces and data structures have been implemented in order to make the drm code and its drivers run with as few modifications as possible. From the point of view of the graphics subsystem, the DragonFly kernel can thus be considered a BSD-licensed implementation of Linux.

DragonFly now has an experimental KMS (frame buffer) console. Putting 'kern.kms_console=1' in /boot/loader.conf and either starting X or just loading the i915kms or radeonkms module enables support. Backlight control is also generally available for i915 and radeon, useful for laptops.
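For example, assuming an Intel GPU, a minimal setup using the tunable and module names mentioned above might look like this (substitute radeonkms for supported AMD hardware):

    # In /boot/loader.conf:
    kern.kms_console=1

    # Then, as root, load the matching driver module (or simply start X):
    kldload i915kms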
#### i915

The drm/i915 driver has been updated to the Linux 3.14 version, bringing among other things support for Broadwell GPUs (but without acceleration for now).

* Most GEM code paths are now similar to the Linux ones, increasing stability and performance. This change has been greatly helped by the study of the OpenBSD code.
* Many bug fixes. The driver is now more robust and handles GPU hangs better.
* Power savings and framebuffer compression are now enabled by default, depending on the GPU family used.
* Power management has generally been greatly improved.
* HDMI 4K monitors are now supported, as well as 3D/stereo displays.
* 10-bit color displays should now work out of the box.
* Monitor hot-plugging support has been improved and should now be more robust.
* 2D and 3D acceleration up through the Haswell chipsets is now stable.
* New members of the Haswell GPU family are now supported.
* Initial support for Broadwell GPUs (without acceleration).
* Frequency changes and overclocking greatly improved on Sandy Bridge through Haswell.
* The giant 128MB cache is now enabled when available on Haswell GPUs.
* The VECS engine is now enabled on Haswell GPUs and can be used by libva for video post-processing tasks.

#### radeon

The drm/radeon driver has been updated to the Linux 3.11 version. The most important things this change brings are:

* Richland APU support
* Oland, Hainan and CIK chip family support
* HDMI sound support (still experimental)
* Power saving improvements

It is also now possible to read temperature sensor information.

### Audio stack

The sound subsystem has been updated to the audio stack of FreeBSD 11 (development version) from January 2015. This change brings improved hardware support as well as enhanced sound quality, the new stack using high-fidelity conversion and resampling algorithms. Newer desktop and laptop audio chipsets since the Ivy Bridge CPU generation are now supported, and it has become possible to send audio data over DisplayPort and HDMI links.

Many drivers with restrictive licenses or requiring the use of binary blobs have been removed. A new driver has also been added to manage the sound devices of the Acer Chromebook C720 laptop family. This driver is not present in FreeBSD.

From a typical user's point of view, the immediate benefit of these changes is that HTML5 videos can now be played without any special manipulation.
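To see what the updated stack has detected, the usual sndstat node can be consulted, and the default playback device can be switched with the hw.snd.default_unit sysctl (names inherited from the FreeBSD sound code this was synced from; the output below is only illustrative):

    $ cat /dev/sndstat
    pcm0: <Intel Haswell HDA HDMI> (play)
    pcm1: <Realtek ALC269 HDA> (play/rec) default
    # Switch the default output device if needed (as root):
    sysctl hw.snd.default_unit=0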
### Networking

The SCTP protocol (an alternative to TCP and UDP) has been removed. Its code was originally written in the early 2000s and, having never been updated since then, was starting to become a problem for the general evolution of the network stack. Since the protocol had no known users in 15 years, its removal was an obvious choice.

IPv4 support on IPv6 sockets has been removed. This change has helped to greatly simplify the network stack and remove many potential problems. OpenBSD refused to support IPv4-mapped IPv6 addresses a long time ago, mostly on security grounds. An [IETF draft](https://tools.ietf.org/html/draft-itojun-v6ops-v4mapped-harmful-02) already recommended avoiding IPv4-mapped IPv6 addresses 12 years ago.

The ICMP code is now able to work asynchronously and process data in parallel on many CPUs.

Other changes:

* TCP path MTU discovery is now enabled by default.
* Numerous IPv6 fixes and features.
* Work has been started to make the IPv6 and ALTQ code multi-processor friendly.
* e1000 (em, emx, ig_hal) - sync with Intel em-7.4.2.
* if_bridge improvements.
* mountd now properly supports IPv6.

### Packet Filter (pf)

* ipfw3 - ported from FreeBSD (called ipfw2 in FreeBSD).
* if_lagg improvements.

### Mobile devices

* Synchronize 802.11 infrastructure with FreeBSD.

### RAS features

#### ECC and temperature sensors

The dimm(4), ecc(4), coretemp(4) and memtemp(4) drivers have been created or updated in order to manage hardware sensor information from CPU cores and memory modules. Temperature and ECC error rate data is tagged with hardware topology information in order to quickly identify problematic components.

An acceptable error rate can be specified for the ecc(4) driver. If the effective error rate exceeds it, a sysadmin-visible error is generated via the devctl(4) reporting mechanism.

Sensor information is visible directly under the hw.sensors and hw.dimminfo sysctl trees:

    hw.sensors.cpu5.temp0: 44.00 degC (node0 core1 temp), OK
    hw.sensors.dimm0.ecc0: 0 (node0 chan0 DIMM0 ecc), OK

    $ sysctl hw.dimminfo
    hw.dimminfo.dimm0.node: 0
    hw.dimminfo.dimm0.chan: 0
    hw.dimminfo.dimm0.slot: 0
    hw.dimminfo.dimm0.ecc_thresh: 10
    hw.dimminfo.dimm1.node: 0
    hw.dimminfo.dimm1.chan: 1
    hw.dimminfo.dimm1.slot: 0
    hw.dimminfo.dimm1.ecc_thresh: 10

The ecc(4) and memtemp(4) drivers support the memory controllers of Intel Xeon E3, Xeon E3v2, Xeon E3v3, Xeon E5v2 and Xeon E5v3, as well as Intel Haswell Core i3/i5/i7 processors.

#### Watchdogs

The ichwd(4) driver has been updated and now supports the Intel Coleto Creek (Xeon EP Ivy Bridge), Lynx Point and Wildcat Point chipsets.

A new ipmi(4) driver has been added; it supports the watchdog hardware present in various IPMI 2.0 systems.

### Userland

* Many manual page cleanups.
* date -R (RFC 2822 date and time output format; see the example after this list).
* patch - add dry-run alias.
* sed - add unbuffered output option (-u).
* camcontrol -b for the camcontrol devlist op.
* rcrun status and its shortcut rcstatus show the status of an rc script.
* blacklist support removed from ssh (EOL for that old Debian bug).
* A simple in-base-system sshlockout now uses a PF table instead of IPFW.
* tail -q (quiet mode, removes filename headers).
* Add 'idle', 'standby' and 'sleep' directives.
* Fix a seg-fault in jls.
* Add 'ifconsole' option to /etc/ttys to enable serial ports only if designated as the console.
* rtld-elf - save/restore fp scratch regs for the dynamic linker.
* rtld-elf - minor bug fixes, synced with FreeBSD.
* powerd enhanced.
* Major version updates for many dports.
* Fix a resource leak in libc/db and a memory leak in libc/regex.
* ssh now correctly sets xauth's path.
* libm augmented with functions from FreeBSD (6 functions) and NetBSD (16 complex functions).
* GNU Info pages are no longer provided with the system.
* Symbol versioning activated on 7 libraries: z, ncurses, lzma, edit, archive, md, bz2.
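As a small illustration of two of the items above (sshd is just an example rc script name; the output shown is illustrative):

    $ date -R                    # RFC 2822 formatted date
    Mon, 29 Jun 2015 10:00:00 +0200
    $ rcstatus sshd              # shortcut for 'rcrun status sshd'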
### Various tools have been upgraded in the base system:

* openssh 6.7p1
* file 5.22
* ftp 1.205 from NetBSD
* sh - sync to FreeBSD d038ee76 (mostly fixes that affect poudriere)
* mdocml 1.13.1
* byacc 2014-10-06
* less 471
* mpc 1.0.3 (internal)
* bmake 2014-11-11
* binutils 2.25 (primary)
* GCC 5.1.1
* OpenSSL 1.0.1o

### Removed from the base system:

* GCC 4.4
* Sendmail 8.14
* Binutils 2.21
* texinfo 4.13

### HAMMER improvements

* Extensive code and documentation cleanups.
* Huge number of minor fixes.
* Most issues fixed were only visible on dedicated file servers under high loads.
* New "hammer abort-cleanup" command added.
* NFS export of slave filesystems is now possible.

### Other improvements

* Boot menu refreshed: color by default, and a new Blue Fred logo is added and displayed by default.
* Building mechanism improved: more parallelism for world and kernel builds; avoid rebuilding major libraries by changing dependency requirements; add missing start/completion messages to buildworld and buildkernel; make stage4 use the recently built rpcgen instead of the host rpcgen; don't build the unused linker in the ctools stage.
* The GCC-5 base compiler embeds DT_RUNPATH rather than DT_RPATH (which was used previously) in built executables. The dynamic linker reacts differently when DT_RUNPATH is present: it will check LD_LIBRARY_PATH before the rpath in that case (see the example below).
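To check which of the two tags a given binary carries, its dynamic section can be inspected with readelf from the base binutils (the binary path and output line below are just an example):

    $ readelf -d /usr/local/bin/someprogram | grep -E 'RPATH|RUNPATH'
     0x000000000000001d (RUNPATH)       Library runpath: [/usr/local/lib]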
### Hammer2 Status

Hammer2 is not ready for release, but progress continues apace. The core non-clustered code is about 95% operational for single-image operation. This includes all standard frontend operations, snapshots, compression, and the bulk free scan.

In the words of Matthew Dillon:

Work is progressing on the clustering piece. Since the clustering is really the whole point, I am not going to release HAMMER2 until it is operational. Recent developments are as follows:

I buckled under and bumped the blockref descriptor from 64 bytes to 128 bytes. This was needed to properly support per-directory quota and inode/data-use statistics, but it also has the fringe benefit of allowing up to 512-bit check codes to be used. The quota management is an extremely powerful tool in HAMMER2, so I deemed it worth doing despite the added bloat. I could not implement the fields in the inode, due to the presence of indirect blocks, without significantly adding to the complexity of the software, which is why it had to go into the blockref.

The use of very large check codes makes non-verified de-duplication for non-critical data possible. (Non-verified dedup is de-duplication based only on the check code, without validating the actual data content, so collisions are possible where the data does not match. However, it also means the de-duplication can be done several orders of magnitude more quickly, needing only a meta-data scan with no data I/O.) This is the only sort of dedup that really works well on insanely huge filesystems.

The 1KB H2 inode is able to embed 512 bytes of direct data. Once file data exceeds 512 bytes, that area in the inode is able to embed up to 4 blockrefs (it used to be 8), representing up to 4 x 64KB = 256KB of file data. Since HAMMER2 uses 64KB logical blocks (actual physical blocks can be smaller, down to 64 bytes), the blockref overhead is at worst 128 bytes per 64KB, or 0.2% of storage. Hammer2 itself implements a radix tree for the block table. Larger block sizes are possible but not convenient, due to buffer-cache buffer limitations and the need to calculate and test check codes.

My original attempt to implement the clustering mechanic was to use the calling context and both asynchronous locks and asynchronous I/O, all in-context, acting on all cluster nodes to prevent stalls due to dead or dying nodes. It had the advantage of being very parallel (concurrency scales with process threads). But this quickly became too complex algorithmically and I've given up on that approach.

The new approach is to replicate the request from the user thread to multiple kernel threads, one per cluster node, which then execute the operation on each node synchronously (much easier), independent of the user process, and then aggregate/consolidate the results back to the user process. The user process will be able to detach the operation and return the instant it gets a definitive result. This means that stalled or dying nodes will not slow down or stall the frontend VOP. The kernel node threads themselves can be multiplied out for additional concurrency.

It should be noted that frontend operations which operate on cached data will be able to complete in-context and will not have to issue replicated requests to these kernel threads. This includes core inode meta-data and, most especially, read or write operations for data already cached in the VM page cache / buffer cache.