3 .\" The DragonFly Project. All rights reserved.
5 .\" Redistribution and use in source and binary forms, with or without
6 .\" modification, are permitted provided that the following conditions
9 .\" 1. Redistributions of source code must retain the above copyright
10 .\" notice, this list of conditions and the following disclaimer.
11 .\" 2. Redistributions in binary form must reproduce the above copyright
12 .\" notice, this list of conditions and the following disclaimer in
13 .\" the documentation and/or other materials provided with the
15 .\" 3. Neither the name of The DragonFly Project nor the names of its
16 .\" contributors may be used to endorse or promote products derived
17 .\" from this software without specific, prior written permission.
19 .\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
20 .\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
21 .\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
22 .\" FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
23 .\" COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
24 .\" INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES (INCLUDING,
25 .\" BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
26 .\" LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
27 .\" AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 .\" OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
29 .\" OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.Nd HAMMER file system
.Sh SYNOPSIS
To compile this driver into the kernel,
place the following line in your
kernel configuration file:
.Bd -ragged -offset indent
.Cd options HAMMER
.Ed
.Pp
Alternatively, to load the driver as a
module at boot time, place the following line in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hammer_load="YES"
.Ed
.Pp
To mount a
.Nm
file system via
.Xr fstab 5 :
.Bd -literal -offset indent
/dev/ad0s1d[:/dev/ad1s1d:...] /mnt hammer rw 2 0
.Ed
.Sh DESCRIPTION
The
.Nm
file system provides facilities to store file system data onto disk devices
and is intended to replace
.Xr ffs 5
as the default file system for
.Dx .
.Pp
Among its features are instant crash recovery,
large file systems spanning multiple volumes,
data integrity checking,
data deduplication,
fine grained history retention and snapshots,
pseudo-filesystems (PFSs),
mirroring capability, and
an unlimited number of files and links.
.Pp
All functions related to managing
.Nm
file systems are provided by the
.Xr newfs_hammer 8 ,
.Xr mount_hammer 8 ,
.Xr hammer 8 and
.Xr undo 1
utilities.
.Pp
For a more detailed introduction refer to the paper and slides listed in the
.Sx SEE ALSO
section.
For some common usages of
.Nm
see the
.Sx EXAMPLES
section below.
.Ss Instant Crash Recovery
After a non-graceful system shutdown,
.Nm
file systems will be brought back into a fully coherent state
when mounting the file system, usually within a few seconds.
.Pp
If a
.Nm
mount fails due to redo recovery (stage 2 recovery) being corrupted, a
workaround to skip this stage can be applied by setting the following tunable:
.Bd -literal -offset indent
vfs.hammer.skip_redo=<value>
.Ed
.Pp
Possible values are:
.Bl -tag -width indent
.It Cm 0
Run redo recovery normally and fail to mount in the case of error (default).
.It Cm 1
Run redo recovery but continue mounting if an error appears.
.It Cm 2
Completely bypass redo recovery.
.El
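.Pp
For example, to bypass redo recovery entirely (value 2 above), the tunable
can be set in
.Xr loader.conf 5 :
.Bd -literal -offset indent
vfs.hammer.skip_redo=2
.Ed
.Pp
Note that skipping redo recovery may lose the most recent file system
activity, so it should only be used as a last resort.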
.Ss Large File Systems & Multi Volume
A
.Nm
file system can be up to 1 Exabyte in size.
It can span up to 256 volumes;
each volume occupies a
.Dx
disk slice or partition, or another special file,
and can be up to 4096 TB in size.
The minimum recommended
.Nm
file system size is 50 GB.
For volumes over 2 TB in size
.Xr gpt 8
and
.Xr disklabel64 8
normally need to be used.
.Ss Data Integrity Checking
.Nm
has a high focus on data integrity;
CRC checks are made for all major structures and data.
.Nm
snapshots implement features to make data integrity checking easier:
the atime and mtime fields are locked to the ctime
for files accessed via a snapshot.
The
.Fa st_dev
field is based on the PFS
.Cm shared-uuid
and not on any real device.
This means that archiving the contents of a snapshot with e.g.\&
.Xr tar 1
and piping it to something like
.Xr md5 1
will yield a consistent result.
The consistency is also retained on mirroring targets.
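.Pp
For example, a checksum over a snapshot's contents can be generated and
later re-checked (the snapshot path is illustrative):
.Bd -literal -offset indent
tar -cf - -C /snaps/snap1 . | md5
.Ed
.Pp
Because the snapshot's data and timestamps are frozen, repeating this
command will produce the same checksum, both on the original file system
and on any mirroring target.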
.Ss Data Deduplication
To save disk space, data deduplication can be used.
Data deduplication will identify data blocks which occur multiple times
and only store one copy; multiple references will be made to this copy.
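.Pp
Deduplication is performed offline with the
.Xr hammer 8
.Cm dedup
directive; the
.Cm dedup-simulate
directive estimates the possible saving first (the mount point below is
illustrative):
.Bd -literal -offset indent
hammer dedup-simulate /home
hammer dedup /home
.Ed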
.Ss Transaction Ids
The
.Nm
file system uses 64-bit transaction ids to refer to historical
file or directory data.
Transaction ids used by
.Nm
are monotonically increasing over time.
I.e.\& when a transaction is made,
.Nm
will always use higher transaction ids for following transactions.
A transaction id is given in hexadecimal format, e.g.\&
.Li 0x00000001061a8ba6 .
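.Pp
For example, the transaction id representing the current state of a file
system can be queried with the
.Xr hammer 8
.Cm synctid
directive, and a prior version of a file can then be addressed directly
(paths and ids are illustrative):
.Bd -literal -offset indent
hammer synctid /home
cat /home/user/file@@0x00000001061a8ba6
.Ed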
.Ss History & Snapshots
History metadata on the media is written with every sync operation, so that
by default the resolution of a file's history is 30-60 seconds until the next
prune operation.
Prior versions of files and directories are generally accessible by appending
.Li @@
and a transaction id to the name.
The common way of accessing history, however, is by taking snapshots.
.Pp
Snapshots are softlinks to prior versions of directories and their files.
Their data will be retained across prune operations for as long as the
softlink exists.
Removing the softlink enables the file system to reclaim the space
again upon the next prune & reblock operations.
.Pp
Version 3+ snapshots are also maintained as file system meta-data.
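.Pp
On version 3+ file systems this snapshot meta-data can be listed with the
.Xr hammer 8
.Cm snapls
directive (the mount point is illustrative):
.Bd -literal -offset indent
hammer snapls /home
.Ed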
.Ss Pruning & Reblocking
Pruning is the act of deleting file system history.
By default only history used by the given snapshots
and history from after the latest snapshot will be retained.
By setting the per PFS parameter
.Cm prune-min ,
history is guaranteed to be saved for at least this time interval.
All other history is deleted.
Reblocking will reorder all elements and thus defragment the file system and
free space for reuse.
After pruning, a file system must be reblocked to recover all available space.
Reblocking is needed even when using the
.Cm prune-everything
directive.
.Ss Pseudo-Filesystems (PFSs)
A pseudo-filesystem, PFS for short, is a sub file system in a
.Nm
file system.
Each PFS has independent inode numbers.
The disk space in a
.Nm
file system is shared between all PFSs in it,
so each PFS is free to use all remaining space.
A
.Nm
file system supports up to 65536 PFSs.
The root of a
.Nm
file system is PFS# 0; it is called the root PFS and is always a master PFS.
.Pp
A PFS can be either master or slave.
Slaves are always read-only,
so they can't be updated by normal file operations, only by
.Xr hammer 8
operations like mirroring and pruning.
Upgrading slaves to masters and downgrading masters to slaves are supported.
.Pp
It is recommended to use a
.Xr null 4
mount to access a PFS, except for the root PFS;
this way no tools are confused by the PFS root being a symlink
and inodes not being unique across a
.Nm
file system.
Many
.Xr hammer 8
operations operate per PFS;
this includes mirroring, offline deduping, pruning, reblocking and rebalancing.
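.Pp
For example, a PFS can be created and then accessed through a
.Xr null 4
mount as follows (paths are illustrative):
.Bd -literal -offset indent
hammer pfs-master /hammer/pfs/data
mount_null /hammer/pfs/data /hammer/data
.Ed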
.Ss Mirroring and Streams
Mirroring is copying of all data in a file system, including snapshots
and other historical data.
In order to allow inode numbers to be duplicated on the slaves
the
.Nm
mirroring feature uses PFSs.
A master or slave PFS can be mirrored to a slave PFS.
I.e.\& multiple slaves per master are supported,
but multiple masters per slave are not.
Mirroring is implemented by the
.Cm mirror-read ,
.Cm mirror-read-stream ,
.Cm mirror-write and
.Cm mirror-copy
directives of
.Xr hammer 8 .
.Ss Fsync Flush Modes
The
.Nm
file system implements several different
.Xr fsync 2
flush modes; the mode used is set via the
.Va vfs.hammer.flush_mode
sysctl.
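.Pp
For example, to select a different flush mode at runtime (the value shown
is purely illustrative):
.Bd -literal -offset indent
sysctl vfs.hammer.flush_mode=1
.Ed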
.Ss Unlimited Number of Files and Links
There is no limit on the number of files or links in a
.Nm
file system, apart from available disk space.
.Ss NFS Export
.Nm
file systems support NFS export.
NFS export of PFSs is done using
.Xr null 4
mounts (for a file/directory in the root PFS a
.Xr null 4
mount is not needed).
For example, to export the PFS
.Pa /hammer/pfs/data ,
null mount it to e.g.\&
.Pa /hammer/data
and export the latter path.
.Pp
Don't export a directory containing a PFS (e.g.\&
.Pa /hammer/pfs
above); only the
.Xr null 4
mount (e.g.\&
.Pa /hammer/data
above) should be exported (subdirectory may be escaped if exported).
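.Pp
A minimal
.Xr exports 5
sketch for this setup (network and flags are illustrative):
.Bd -literal -offset indent
/hammer/data -ro -network 10.0.0.0 -mask 255.0.0.0
.Ed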
.Ss File System Versions
As new features have been introduced to
.Nm
a version number has been bumped.
Each
.Nm
file system has a version, which can be upgraded to support new features.
The version can be upgraded using
.Nm hammer
.Cm version-upgrade ;
see
.Xr hammer 8 .
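.Pp
For example, to display the current version of a file system and upgrade
it (mount point and target version are illustrative):
.Bd -literal -offset indent
hammer version /home
hammer version-upgrade /home 6
.Ed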
.Sh EXAMPLES
.Ss Preparing the File System
To create and mount a
.Nm
file system, use the
.Xr newfs_hammer 8
and
.Xr mount_hammer 8
commands.
Note that all
.Nm
file systems must have a unique name on a per-machine basis.
.Bd -literal -offset indent
newfs_hammer -L HOME /dev/ad0s1d
mount_hammer /dev/ad0s1d /home
.Ed
.Pp
Similarly, multi volume file systems can be created and mounted by
specifying additional arguments.
.Bd -literal -offset indent
newfs_hammer -L MULTIHOME /dev/ad0s1d /dev/ad1s1d
mount_hammer /dev/ad0s1d /dev/ad1s1d /home
.Ed
.Pp
Once created and mounted,
.Nm
file systems need periodic cleanup (taking snapshots, pruning and
reblocking) in order to retain access to history and to keep the file
system from filling up.
For this it is recommended to use the
.Xr periodic 8
command, which runs
.Nm hammer Cm cleanup
by default.
It is also possible to perform these operations individually via
.Xr crontab 5 .
For example, to reblock the
.Pa /home
file system every night at 2:15 for up to 5 minutes:
.Bd -literal -offset indent
15 2 * * * hammer -c /var/run/HOME.reblock -t 300 reblock /home \e
	>/dev/null 2>&1
.Ed
.Ss Snapshots
The
.Xr hammer 8
command provides several ways of taking snapshots.
They all assume a directory where snapshots are kept.
.Bd -literal -offset indent
mkdir /snaps
hammer snapshot /home /snaps/snap1
(...after some changes in /home...)
hammer snapshot /home /snaps/snap2
.Ed
.Pp
The snapshot softlinks
point to the state of the
.Pa /home
directory at the time each snapshot was taken, and could now be used to copy
the data somewhere else for backup purposes.
.Pp
By default,
.Xr periodic 8
is set up to create nightly snapshots of all
.Nm
file systems
and to keep them for 60 days.
.Pp
A snapshot directory is also the argument to the
.Nm hammer
.Cm prune
command, which frees historical data from the file system that is not
pointed to by any snapshot link and is not from after the latest snapshot:
.Bd -literal -offset indent
hammer prune /snaps
.Ed
.Ss Mirroring
Mirroring is set up using
.Nm
pseudo-filesystems (PFSs).
To associate the slave with the master, its shared UUID should be set to
the master's shared UUID as output by the
.Nm hammer Cm pfs-master
command:
.Bd -literal -offset indent
hammer pfs-master /home/pfs/master
hammer pfs-slave /home/pfs/slave shared-uuid=<master's shared uuid>
.Ed
.Pp
The
.Pa /home/pfs/slave
link is unusable for as long as no mirroring operation has taken place.
.Pp
To mirror the master's data, either pipe a
.Cm mirror-read
command into a
.Cm mirror-write
command
or, as a short-cut, use the
.Cm mirror-copy
command (which works across a
.Xr ssh 1
connection).
The initial mirroring operation has to be done to the PFS path (as
.Xr mount_null 8
can't access it yet):
.Bd -literal -offset indent
hammer mirror-copy /home/pfs/master /home/pfs/slave
.Ed
.Pp
It is also possible to have the target PFS auto created
by just issuing the same
.Cm mirror-copy
command; if the target PFS doesn't exist, you will be prompted
whether you would like to create it.
You can even skip the prompting by using the
.Fl y
flag:
.Bd -literal -offset indent
hammer -y mirror-copy /home/pfs/master /home/pfs/slave
.Ed
.Pp
After this initial step a
.Xr null 4
mount can be set up for
.Pa /home/pfs/slave .
Further operations can use the
.Xr null 4
mounted paths:
.Bd -literal -offset indent
mount_null /home/pfs/master /home/master
mount_null /home/pfs/slave /home/slave

hammer mirror-copy /home/master /home/slave
.Ed
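.Pp
For continuous rather than batch mirroring, the
.Cm mirror-stream
directive can be used in the same way; it keeps running and copies new
transactions as they are made:
.Bd -literal -offset indent
hammer mirror-stream /home/master /home/slave
.Ed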
.Ss NFS Export
To NFS export from the
.Nm
file system
.Pa /hammer
the directory
.Pa /hammer/non-pfs
without PFSs, and the PFS
.Pa /hammer/pfs/data ,
null mount the PFS to
.Pa /hammer/data
and add the following to
.Pa /etc/fstab
(see
.Xr fstab 5 ) :
.Bd -literal -offset indent
/hammer/pfs/data /hammer/data null rw
.Ed
.Pp
Then add the exported paths to
.Pa /etc/exports
(see
.Xr exports 5 ) :
.Bd -literal -offset indent
/hammer/non-pfs
/hammer/data
.Ed
.Sh DIAGNOSTICS
.Bl -diag
.It "hammer: System has insuffient buffers to rebalance the tree. nbuf < %d"
Rebalancing a
.Nm
PFS uses quite a bit of memory and
can't be done on low memory systems.
It has been reported to fail on 512MB systems.
Rebalancing isn't critical for
.Nm
file system operation;
it is done by
.Nm hammer Cm rebalance ,
often as part of
.Nm hammer Cm cleanup .
.El
.Sh SEE ALSO
.Xr md5 1 ,
.Xr tar 1 ,
.Xr undo 1 ,
.Xr null 4 ,
.Xr exports 5 ,
.Xr ffs 5 ,
.Xr fstab 5 ,
.Xr loader.conf 5 ,
.Xr hammer 8 ,
.Xr mount_hammer 8 ,
.Xr mount_null 8 ,
.Xr newfs_hammer 8 ,
.Xr periodic 8
.Rs
.%T "The HAMMER Filesystem"
.%O http://www.dragonflybsd.org/hammer/hammer.pdf
.Re
.Rs
.%T "Slideshow from NYCBSDCon 2008"
.%O http://www.dragonflybsd.org/hammer/nycbsdcon/
.Re
.Rs
.%T "Slideshow for a presentation held at KIT (http://www.kit.edu)"
.%O http://www.ntecs.de/sysarch09/HAMMER.pdf
.Re
.Sh FILESYSTEM PERFORMANCE
The
.Nm
file system has a front-end which processes VNOPS and issues necessary
block reads from disk, and a back-end which handles meta-data updates
on-media and performs all meta-data write operations.
Bulk file write operations are handled by the front-end.
Because
.Nm
defers meta-data updates, virtually no meta-data read operations will be
issued by the front-end while writing large amounts of data to the file
system, or even when creating new files or directories.
Even though the kernel prioritizes reads over writes, the fact that writes
are cached by the drive itself tends to lead to excessive priority being
given to writes.
.Pp
There are four bioq sysctls, shown below with default values,
which can be adjusted to give reads a higher priority:
.Bd -literal -offset indent
kern.bioq_reorder_minor_bytes: 262144
kern.bioq_reorder_burst_bytes: 3000000
kern.bioq_reorder_minor_interval: 5
kern.bioq_reorder_burst_interval: 60
.Ed
.Pp
If a higher read priority is desired, it is recommended that the
.Va kern.bioq_reorder_minor_interval
be increased to 15, 30, or even 60, and the
.Va kern.bioq_reorder_burst_bytes
be decreased to 262144 or 524288.
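.Pp
For example, applying the suggested adjustment with
.Xr sysctl 8 :
.Bd -literal -offset indent
sysctl kern.bioq_reorder_minor_interval=30
sysctl kern.bioq_reorder_burst_bytes=262144
.Ed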
.Sh HISTORY
The
.Nm
file system first appeared in
.Dx 1.11 .
.Sh AUTHORS
The
.Nm
file system was designed and implemented by
.An Matthew Dillon Aq dillon@backplane.com ,
data deduplication was added by
.An Ilya Dryomov .
This manual page was written by
.An Sascha Wildner
and
.An Thomas Nikolajsen .
.Sh CAVEATS
Data deduplication is considered experimental.