2 HAMMER2 DESIGN DOCUMENT
14 Current Status as of document date
16 * Filesystem Core - operational
17 - bulkfree - operational
18 - Compression - operational
19 - Snapshots - operational
20 - Deduper - live operational, batch specced
21 - Subhierarchy quotas - scrapped (still possible on a limited basis)
22 - Logical Encryption - not specced yet
23 - Copies - not specced yet
24 - fsync bypass - not specced yet
25 - FS consistency - operational
28 - Network msg core - operational
29 - Network blk device - operational
30 - Error handling - under development
31 - Quorum Protocol - under development
32 - Synchronization - under development
33 - Transaction replay - not specced yet
34 - Cache coherency - not specced yet
36 Recent Document Changes
38 * Reorganized the feature list to indicate currently operational features
39 first, moving the future features to another section (since they
40 are taking so long to implement).
44 * Standard filesystem semantics with full hardlink and softlink support.
45 64-bit hardlink count field.
47 * The topology is indexed with a dynamic radix tree rooted in several
48 places: The super-root, the PFS root inode, and any inode boundary.
49 Index keys are 64-bits. Each element is referenced with a blockref
50 structure (described below) that is capable of referencing a power-of-2
51 sized block. The block size is currently capped at 64KB to play
52 nice(r) with the buffer cache and SSDs.
54 The dynamic radix tree pushes elements into new indirect blocks only
55 when the current level fills up, and will delete empty indirect blocks
56 when a level is cleaned out.
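The fill/collapse behavior described above can be illustrated with a rough sketch.
The fixed fan-out, names, and logic below are illustrative only and are not the
actual HAMMER2 structures or algorithm: a level only splits into a deeper indirect
level when it is already full, and an indirect level is collapsed when it empties.

    #include <stdbool.h>
    #include <stdint.h>

    #define FANOUT  4                   /* e.g. the 4 blockrefs in an inode */

    struct level {
        uint64_t keys[FANOUT];          /* 64-bit radix keys */
        void    *refs[FANOUT];          /* children: data or indirect blocks */
        int      count;
    };

    /* Returns false when the level is full; only then does the caller
     * create a new indirect block and push the existing elements into it. */
    static bool
    level_insert(struct level *lv, uint64_t key, void *ref)
    {
        if (lv->count == FANOUT)
            return false;
        lv->keys[lv->count] = key;
        lv->refs[lv->count] = ref;
        lv->count++;
        return true;
    }

    /* Returns true when the deletion empties the level, telling the caller
     * that this indirect block can be deleted from its parent. */
    static bool
    level_delete(struct level *lv, uint64_t key)
    {
        for (int i = 0; i < lv->count; i++) {
            if (lv->keys[i] == key) {
                lv->keys[i] = lv->keys[lv->count - 1];
                lv->refs[i] = lv->refs[lv->count - 1];
                lv->count--;
                break;
            }
        }
        return (lv->count == 0);
    }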
58 * Block-copy-on-write filesystem mechanism for both the main topology
59 and for the freemap. Media-level block frees are deferred and flushes
60 rotate between (up to) 4 volume headers (capped at 4 if the filesystem
61 is > ~8GB). Recovery will choose the most recent fully-valid volume
62 header and can thus work around failures which cause partial volume header corruption.
65 Modifications issue copy-on-write updates up to the volume root.
67 * Utilizes a fat blockref structure (128 bytes) which can store up to
68 64 bytes (512 bits) of check code data for each referenced block.
69 In the original implementation I had gone with 64 byte blockrefs,
70 but I eventually decided that I wanted to support up to a 512-bit
71 hash (which eats 64 bytes), so I bumped it up to 128 bytes. This
72 turned out to be fortuitous because it made it possible to store
73 most directory entries directly in the blockref structure without
74 having to reference a separate data block.
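As a concrete illustration, a 128-byte blockref with a 64-byte check area can be
laid out roughly as follows. This is a simplified sketch: the field names, packing,
and the embedded-dirent variant are illustrative and do not reproduce the actual
hammer2_blockref_t definition.

    #include <stdint.h>

    /*
     * Illustrative 128-byte block reference.  The first 64 bytes carry
     * typing, the 64-bit radix key, addressing, and transaction ids; the
     * second 64 bytes (512 bits) hold either check-code data or, for a
     * short filename, the directory entry's name.
     */
    struct blockref_sketch {
        uint8_t   type;             /* inode, indirect, data, dirent, ... */
        uint8_t   methods;          /* check and compression methods */
        uint8_t   copyid;           /* copies support (future) */
        uint8_t   keybits;          /* #significant key bits (ranged recursion) */
        uint32_t  flags;
        uint64_t  key;              /* radix key (e.g. filename hash) */
        uint64_t  data_off;         /* device offset, low bits = size radix */
        uint64_t  mirror_tid;       /* media-level flush transaction id */
        uint64_t  modify_tid;       /* cluster-level modification id */
        uint64_t  update_tid;       /* cluster synchronization id */
        union {                     /* 16 bytes of type-dependent data */
            uint8_t  reserved[16];
            struct {
                uint64_t inum;      /* directory entry: target inode */
                uint8_t  namelen;
                uint8_t  obj_type;
            } dirent;
        } embed;
        union {                     /* 64 bytes */
            uint8_t  check[64];     /* up to a 512-bit check code */
            char     filename[64];  /* dirent filenames <= 64 bytes */
        } u;
    };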
76 * 1KB 'fat' inode structure. The inode structure directly embeds four
77 blockrefs so small files and directories can be represented without
78 requiring an indirect block to be allocated. The inode structure can
79 also overload the same space to store up to 512 bytes of direct
80 file data (for files which are <= 512 bytes long).
82 The super-root and PFS root inodes are directly represented in the
83 topology, without the use of directory entries. A combination of
84 normal directory entries and separately-indexed inodes is implemented under each PFS root inode.
87 Normal filesystem inodes (other than inode 1) are indexed under the PFS
88 root inode by their inode number. Directory entries are indexed under the
89 same PFS root by their filename hash. Bit 63 is used to distinguish and
90 partition the two. Filename hash collisions are handled by incrementing
91 reserved low bits in the filename hash code.
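The two points above can be sketched as follows. The structure layout and the
key-partitioning constants are illustrative assumptions (including which side of
the partition gets bit 63 and how many low bits are reserved for collisions), not
the actual HAMMER2 definitions.

    #include <stdint.h>

    /*
     * Illustrative 1KB inode: 512 bytes of meta-data plus a 512-byte area
     * that holds either four embedded 128-byte blockrefs or, for files
     * <= 512 bytes, the file data itself.
     */
    struct inode_sketch {
        uint8_t meta[512];              /* inum, size, uid/gid, mode, ... */
        union {
            uint8_t blockset[4][128];   /* four embedded blockrefs */
            uint8_t direct_data[512];   /* tiny-file payload */
        } u;
    };

    /*
     * Keys under a PFS root: bit 63 partitions filename-hash keys from
     * inode-number keys, and a few reserved low bits of the hash absorb
     * filename hash collisions by being incremented.
     */
    #define KEY_DIRENT_BIT      (1ULL << 63)    /* assumption: set for dirents */
    #define HASH_COLLISION_BITS 10              /* illustrative width */

    static inline uint64_t
    inode_key(uint64_t inum)
    {
        return (inum & ~KEY_DIRENT_BIT);
    }

    static inline uint64_t
    dirent_key(uint64_t name_hash, unsigned collision)
    {
        uint64_t key = name_hash & ~((1ULL << HASH_COLLISION_BITS) - 1);

        return (KEY_DIRENT_BIT | key | collision);
    }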
93 * Directory entries representing filenames that are less than 64 bytes
94 long are directly stored AS blockrefs. This means that an inode
95 representing a small directory can store up to 4 directory entries in
96 the inode itself before resorting to indirect blocks, and then those
97 indirect blocks themselves can directly embed up to 512 directory entries.
98 Directory entries with long filenames reference an indirect data block
99 to hold the filename instead of directly-embedding the filename.
101 This results in *very* compact directories in terms of I/O bandwidth.
102 Not as compact as e.g. UFS's variable-length directory entries, but still
103 very good with a nominal 128 real bytes per directory entry.
105 Because directory entries are represented using a dynamic radix tree via
106 its blockrefs, directory entries can be randomly looked up without having
107 to scan the whole directory.
109 * Multiple PFSs. In HAMMER2, all PFSs are implemented the same way, with
110 the kernel choosing a default PFS name for the mount if none is specified.
111 For example, "ROOT" is the default PFS name for a root mount. You can
112 create as many PFSs as you like and you can specify the PFS name in the
113 mount command using the <device_path>@<pfs_name> notation.
115 * Snapshots are implemented as PFSs. Due to the copy-on-write nature of
116 the filesystem, taking a snapshot is a trivial operation requiring only
117 a normal filesystem sync and copying of the PFS root inode (1KB), and that's it.
120 On the minus side, snapshots can complicate the bulkfree operation that is responsible
121 for freeing up disk space. It can take significantly longer when many
122 snapshots are present.
124 * SNAPSHOTS ARE READ-WRITE. You can mount any PFS read-write, including
125 snapshots. For example, you can revert to an earlier 'root' that you
126 made a snapshot of simply by changing what the system mounts as the root filesystem.
129 * Full filesystem coherency at both the radix tree level and the filesystem
130 semantics level. This is true for all filesystem syncs, recovery after
131 a crash, and snapshots.
133 The filesystem syncs fully vfsync the buffer cache for the files
134 that are part of the sync group, and keep track of dependencies to
135 ensure that all inter-dependent inodes are flushed in the same sync
136 group. Atomic filesystem ops such as write()s are guaranteed to remain
137 atomic across a sync, snapshot, and crash.
139 * Flushes and syncs are almost entirely asynchronous and will run concurrently
140 with frontend operations. This feature is implemented by adding inodes
141 to the sync group currently being flushed on-the-fly as new dependencies
142 are created, and reordering inodes in the sync queue to prioritize inodes
143 which the frontend is stalled on.
145 By reprioritizing inodes in the syncq, frontend stalls are minimized.
147 The only synchronous disk operation is the final sync of the volume
148 header which updates the ultimate root of the filesystem. A disk flush
149 command is issued synchronously, then the write of the volume header is
150 issued synchronously. All other writes to the disk, regardless of the
151 complexity of the dependencies, occur asynchronously and can make very
152 good use of high-speed I/O and SSD bandwidth.
154 * Low memory footprint. Except for the volume header, the buffer cache
155 is completely asynchronous and dirty buffers can be retired by the OS
156 directly to backing store with no further interactions with the filesystem.
158 * Compression support. Multiple algorithms are supported and can be
159 configured on a subdirectory hierarchy or individual file basis.
160 Block compression up to 64KB will be used. Only compression ratios at
161 powers of 2 that are at least 2:1 (e.g. 2:1, 4:1, 8:1, etc) will work in
162 this scheme because physical block allocations in HAMMER2 are always power-of-2 sized.
165 Modest compression can be achieved with low overhead, is turned on
166 by default, and is compatible with deduplication.
168 Compression is extremely useful and often gives you anywhere from 25%
169 to 400% more logical storage than the physical blocks actually used, depending on the data.
170 Of course, .tgz and other pre-compressed files cannot be compressed
171 further by the filesystem.
173 The usefulness should not be underestimated; our users are constantly
174 being surprised at things the filesystem is able to compress that just
175 make life a lot easier. For example, 30GB core dumps tend to contain
176 a great deal of highly compressible data. Source trees, web files,
177 executables, general data... this is why HAMMER2 turns modest compression
178 on by default. It just works.
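The power-of-2 requirement mentioned above reduces to simple size arithmetic,
sketched below. The function names are illustrative and this is not the actual
HAMMER2 compression path; inputs are assumed to be at most the 64KB logical
block size.

    #include <stddef.h>

    /* Round a compressed result up to the power-of-2 physical block that
     * would actually be allocated (1KB minimum, 64KB maximum). */
    static size_t
    physical_alloc_size(size_t compressed_bytes)
    {
        size_t psize = 1024;

        while (psize < compressed_bytes)
            psize <<= 1;
        return psize;
    }

    /* Compression only pays off when the rounded size is smaller than the
     * logical block, i.e. when at least a 2:1 reduction was achieved.  For
     * example, a 64KB block that compresses to anything between 16KB+1 and
     * 32KB is stored in a 32KB physical block. */
    static size_t
    choose_block_size(size_t logical_block, size_t compressed_bytes)
    {
        size_t psize = physical_alloc_size(compressed_bytes);

        return (psize < logical_block) ? psize : logical_block;
    }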
180 * De-duplication support. HAMMER2 uses a relatively simple freemap
181 scheme that allows the filesystem to discard block references
182 asynchronously. The same scheme allows essentially unlimited references
183 to the same data block in the hierarchy. Thus, both live de-duplication
184 and bulk deduplication are relatively easy to implement.
186 HAMMER2 currently implements only live de-duplication. This means that
187 typical operations such as copying files or whole directory hierarchies
188 will naturally de-duplicate. Simply reading filesystem data in makes
189 it available for deduplication later. HAMMER2 will index a potentially
190 very large number of blocks in memory, even beyond what the buffer cache
191 can hold, for deduplication purposes.
193 * Zero-fill detection on write (writing all-zeros), which requires the data
194 buffer to be scanned, is fully supported. This allows the writing of 0's to create holes.
197 Generally speaking, pre-writing zeroed blocks to reserve space doesn't work
198 well on copy-on-write filesystems. However, if both compression and
199 check codes are disabled on a file, H2 will also disable zero-detection,
200 allowing the file blocks to be pre-reserved (by actually zeroing them and
201 reusing them later on), and allow data overwrites to write to the same
202 sector. Please be aware that DISABLING THE CHECK CODE IN THIS MANNER ALSO
203 MEANS THAT SNAPSHOTS WILL NOT WORK. The snapshot will contain the latest
204 data for the file and not the data as-of the snapshot. This is NOT turned
205 on by default in HAMMER2 and is not recommended except in special
206 well-controlled circumstances.
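A minimal sketch of the detection itself (illustrative, not the actual HAMMER2
code): the write path scans the logical block and, if it is entirely zero,
records a blockref with no physical allocation; reads of such a block simply
return zeros.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Logical blocks are power-of-2 sized (>= 1KB), so scanning in 8-byte
     * strides is safe. */
    static bool
    block_is_zero(const void *data, size_t bytes)
    {
        const uint64_t *p = data;
        size_t n = bytes / sizeof(*p);

        for (size_t i = 0; i < n; i++) {
            if (p[i] != 0)
                return false;
        }
        return true;
    }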
208 * Multiple supporting kernel threads, breaking up frontend VOP operations
209 from backend I/O, compression, and decompression operations. Buffer cache
210 I/O and VOP ops message the backend. Actual I/O is handled by the backend
211 and not by the frontend, which will theoretically allow us to survive
212 stalled devices and nodes when implementing multi-node support.
215 Planned features (not yet implemented)
217 * Constructing a filesystem across multiple nodes. Each low-level H2 device
218 would be able to accommodate nodes belonging to multiple cluster components
219 as well as nodes that are simply local to the device or machine.
221 CURRENT STATUS: Not yet operational.
223 * Incremental synchronization via highest-transaction id propagation
224 within the radix tree. This is a queueless, incremental design.
226 CURRENT STATUS: Due to the flat inode hierarchy now being employed,
227 the current synchronization code which silently recurses indirect nodes
228 will be inefficient due to the fact that all the inodes are at the
229 same logical level in the topology. To fix this, the code will need
230 to explicitly iterate indirect nodes and keep track of the related
231 key ranges to match them up on an indirect-block basis, which would
232 be incredibly efficient.
234 * Background synchronization and mirroring occurs at the logical layer
235 rather than the physical layer. This allows cluster components to
236 have differing storage arrangements.
238 In addition, this mechanism will fully correct any out of sync nodes
239 in the cluster as long as a sufficient number of other nodes agree on
240 what the proper state should be.
242 CURRENT STATUS: Not yet operational.
244 * Encryption. Whole-disk encryption is supported by another layer, but I
245 intend to give H2 an encryption feature at the logical layer which works
246 approximately as follows:
248 - Encryption controlled by the client on an inode/sub-tree basis.
249 - Server has no visibility to decrypted data.
250 - Encrypt filenames in directory entries. Since the filename[] array
251 is 256 bytes wide, the client can add random bytes after the normal
252 terminator to make it virtually impossible for an attacker to figure out the actual length of the filename.
254 - Encrypt file size and most inode contents.
255 - Encrypt file data (holes are not encrypted).
256 - Encryption occurs after compression, with random filler.
257 - Check codes calculated after encryption & compression (not before).
259 - Blockrefs are not encrypted.
260 - Directory and File Topology is not encrypted.
261 - Encryption is not sub-topology validation. The client would have to keep
262 track of that itself. The server or other clients can still e.g. remove or rename files.
265 In particular, note that even though the file size field can be encrypted,
266 the server does have visibility on the block topology and thus has a pretty
267 good idea how big the file is. However, a client could add junk blocks
268 at the end of a file to make this less apparent, at the cost of space.
270 If a client really wants a fully validated H2-encrypted space the easiest
271 solution is to format a filesystem within an encrypted file by treating it
272 as a block device, but I digress.
274 CURRENT STATUS: Not yet operational.
276 * Device ganging, copies for redundancy, and file splitting.
278 Device ganging - The idea here is not to gang devices into a single
279 physical volume but to instead format each device independently
280 and allow crossover-references in the blockref to other devices in the gang.
283 One of the things we want to accomplish is to ensure that a failed
284 device does not prevent access to radix tree elements in other devices
285 in the gang, and that the failed device can be reconstructed. To do
286 this, each device implements complete reachability from the node root
287 to all elements underneath it. When a device fails, the synchronization
288 code can theoretically reconstruct the missing material in other
289 devices making up the gang. New devices can be added to the gang and
290 existing devices can be removed from the gang.
292 Redundant copies - This is actually a fairly tough problem. The
293 solution I would like to implement is to use the device ganging feature
294 to also implement redundancy, that way if a device fails within the
295 gang there's a good chance that it can still remain completely functional
296 without having to resynchronize. But making this work is difficult to say the least.
299 CURRENT STATUS: Not yet operational.
301 * MESI Cache coherency for multi-master/multi-client clustering operations.
302 The servers hosting the MASTERs are also responsible for keeping track of the cache state.
305 This is a feature that we would need in order to implement coherent cross-machine
306 multi-threading and migration.
308 CURRENT STATUS: Not yet operational.
310 * Implement unverified de-duplication (where only the check code is tested,
311 avoiding having to actually read data blocks to validate a de-duplication).
312 This would make use of the blockref structure's widest check field (512 bits).
315 Out of necessity this type of feature would be settable on a file or
316 recursive directory tree basis, but should only be used when the data
317 is throw-away or can be reconstructed since data corruption (mismatched
318 duplicates with the same hash) is still possible even with a 512-bit hash.
321 The unverified dedup feature is intended only for those files where
322 occasional corruption is ok, such as in a web-crawler data store or
323 other situations where the data content is not critically important
324 or can be externally recovered if it becomes corrupt.
326 CURRENT STATUS: Not yet operational.
330 HAMMER2 generally implements a copy-on-write block design for the filesystem,
331 which is very different from HAMMER1's B-Tree design. Because the design
332 is copy-on-write it can be trivially snapshotted simply by making a copy
333 of the block table we desire to snapshot. Snapshotting the root inode
334 effectively snapshots the entire filesystem, whereas snapshotting a file
335 inode only snapshots that one file. Snapshotting a directory inode is
336 generally unhelpful since it only contains directory entries and the
337 underlying files are not arranged under it in the radix tree.
339 The copy-on-write design implements a block table as a radix-tree,
340 with a small fan-out in the volume header and inode (typically 4x) and
341 a large fan-out for indirect blocks (typically 128x and 512x depending).
342 The table is built bottom-up. Intermediate radii are only created when
343 necessary so small files and directories will have a much shallower radix tree.
346 HAMMER2 implements several space optimizations:
348 1. Directory entries with filenames <= 64 bytes will fit entirely
349 in the 128-byte blockref structure and do not require additional data
350 block references. Since blockrefs are the core elements making up
351 block tables, most directories should have good locality of reference.
354 Filenames > 64 bytes require a 1KB data-block reference, which
355 is clearly less optimal, but very few filenames in a filesystem tend
356 to be longer than 64 bytes so it works out. This also simplifies
357 the handling of long filenames, as we can allow filenames up to
358 1023 bytes long with this mechanism with no major changes to the rest of the design.
361 2. Inodes embed 4 blockrefs, so files up to 256KB and directories with
362 up to four directory entries (not including "." or "..") can be
363 accommodated without requiring any indirect blocks.
365 3. Indirect blocks can be sized to any power of two up to 65536 bytes,
366 and H2 typically uses 16384 and 65536 bytes. The smaller size is
367 used for initial indirect blocks to reduce storage overhead for
368 medium-sized files and directories.
370 4. The File inode itself can directly hold the data for small
371 files <= 512 bytes in size, overloading the space also used
372 by its four 128-byte blockrefs (which are not needed if the
373 file is <= 512 bytes in size). This works out great for small
374 files and directories.
376 5. The last block in a file will have a storage allocation in powers
377 of 2 from 1KB to 64KB as needed. Thus a small file in excess of
378 512 bytes but less than 64KB will not waste a full 64KB block.
380 6. When compression is enabled, small physical blocks will be allocated
381 when possible. However, only reductions in powers of 2 are supported.
382 So if a 64KB data block can be compressed to (16KB+1) to 32KB, then
383 a 32KB block will be used. This gives H2 modest compression at very
384 low cost without too much added complexity.
386 7. Live de-dup will attempt to share data blocks when file copying is
387 detected, significantly reducing actual physical writes to storage
388 and the storage used. Bulk de-dup (when implemented), will catch
389 other cases of de-duplication.
391 Directories contain directory entries which are indexed using a hash of
392 their filename. The hash is carefully designed to maintain some natural
393 sort ordering. The directory entries are implemented AS blockrefs. So
394 an inode can contain up to 4 before requiring an indirect block, and
395 each indirect block can contain up to 512 entries, with further data block
396 references required for any directory entry whose filename is > 64 bytes.
397 Because the directory entries are blockrefs, random access lookups are
398 maximally efficient. The directory hash is designed to very loosely try
399 to retain some alphanumeric sorting to bundle similarly-named files together
400 and reduce random lookups.
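As an illustration only (this is NOT the actual HAMMER2 directory hash algorithm;
the constants and mixing below are made up), a hash with the properties described
above could look like this: the top bits preserve a loose lexical ordering, the
middle bits spread the full name, and the low bits are reserved for collision
resolution as described earlier.

    #include <stddef.h>
    #include <stdint.h>

    #define DIRHASH_VISIBLE_BIT (1ULL << 63)    /* dirent-key partition bit */
    #define DIRHASH_LOW_BITS    10              /* reserved for collisions */

    static uint64_t
    dirhash_sketch(const char *name, size_t len)
    {
        uint64_t key = 0;
        uint64_t mix = 0;

        /* loose alphanumeric ordering from the leading characters */
        for (size_t i = 0; i < 5 && i < len; i++)
            key |= (uint64_t)(uint8_t)name[i] << (55 - i * 8);

        /* spread the whole name into the remaining middle bits */
        for (size_t i = 0; i < len; i++)
            mix = mix * 31 + (uint8_t)name[i];
        key |= (mix & 0x1FFF) << DIRHASH_LOW_BITS;

        /* low bits stay zero; collisions increment them */
        return (key | DIRHASH_VISIBLE_BIT);
    }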
402 The copy-on-write nature of the filesystem means that any modification
403 whatsoever will have to eventually synchronize new disk blocks all the way
404 to the super-root of the filesystem and then to the volume header itself.
405 This forms the basis for crash recovery and also ensures that recovery
406 occurs on a completed high-level transaction boundary. All disk writes are
407 to new blocks except for the volume header (which cycles through 4 copies),
408 thus allowing all writes to run asynchronously and concurrently prior to
409 and during a flush, and then just doing a final synchronization and volume
410 header update at the end. Many of HAMMER2's features are enabled by this mechanism.
413 The Freemap is also implemented using a radix tree via a set of pre-reserved
414 blocks (approximately 4MB for every 2GB of storage), and also cycles through
415 multiple copies to ensure that crash recovery can restore the state of the
416 filesystem quickly at mount time.
418 HAMMER2 tries to maintain a small footprint and one way it does this is
419 by using the normal buffer cache for data and meta-data, and allowing the
420 kernel to asynchronously flush device buffers at any time (even during
421 synchronization). The volume root is flushed separately, separated from
422 the asynchronous flushes by a synchronizing BUF_CMD_FLUSH op. This means
423 that HAMMER2 has very low resource overhead from the point of view of the
424 operating system and is very much unlike HAMMER1 which had to lock dirty
425 buffers into memory for long periods of time. HAMMER2 has no such requirement.
428 Buffer cache overhead is very well bounded and can handle filesystem
429 operations of any complexity, even on boxes with very small amounts
430 of physical memory. Buffer cache overhead is significantly lower with H2
431 than with H1 (and orders of magnitude lower than ZFS).
433 At some point I intend to implement a shortcut to make fsync()'s run fast,
434 and that is to allow deep updates to blockrefs to shortcut to auxiliary
435 space in the volume header to satisfy the fsync requirement. The related
436 blockref is then recorded when the filesystem is mounted after a crash and
437 the update chain is reconstituted when a matching blockref is encountered
438 again during normal operation of the filesystem.
440 FILESYSTEM SYNC SEQUENCING
442 HAMMER2 implements a filesystem sync mechanism that allows the frontend
443 to continue doing modifying operations concurrent with the sync. The
444 general sync mechanism operates in four phases:
446 1. Individual file and directory inodes are fsync()d to disk,
447 the blockrefs in the parent block above each inode are updated, and
448 the inodes are removed from the syncq.
450 Once removed from the syncq, the frontend can do a modifying
451 operation on these file and directory inodes without further
452 affecting the filesystem sync. These modifications will be
453 flushed to disk on the next filesystem sync.
455 To reduce frontend stall times, an inode blocked on by the frontend
456 which is on the syncq will be reordered to the front of the syncq
457 to give the syncer a shot at it more quickly, in order to unstall the frontend as soon as possible.
460 If a frontend operation creates an unavoidable dependency between
461 an inode on the syncq and an inode not on the syncq, both inodes
462 are placed on (or back onto) the syncq as needed to ensure filesystem
463 consistency for the filesystem sync. This can extend the filesystem
464 sync time, but even under heavy loads syncs are still able to be completed in a reasonable amount of time.
467 2. The PFS ROOT is fsync()d to storage along with the subhierarchy
468 representing the inode index (whose inodes were flushed in (1)).
469 This brings the block copy-on-write up to the root inode.
471 3. The SUPER-ROOT inode is fsync()d to storage along with the
472 subhierarchy representing the PFS ROOTs for the volume.
474 4. Finally, a physical disk flush command is issued to the storage
475 device, and then the volume header is written to disk. All
476 I/O prior to this step occurred asynchronously. This is the only
477 step which must occur synchronously.
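The four phases above can be summarized in a short sketch. The helper functions
are illustrative stand-ins for the real flush code, not actual HAMMER2 interfaces.

    #include <stddef.h>

    struct inode;                                   /* opaque for this sketch */

    /* Illustrative stand-ins for the real flush helpers. */
    static struct inode *syncq_pop_front(void) { return NULL; }
    static void flush_inode_and_update_parent(struct inode *ip) { (void)ip; }
    static void flush_pfs_root(void) { }
    static void flush_super_root(void) { }
    static void issue_disk_flush_sync(void) { }     /* BUF_CMD_FLUSH */
    static void write_volume_header_sync(void) { }

    void
    filesystem_sync_sketch(void)
    {
        struct inode *ip;

        /* Phase 1: flush each inode in the sync group.  Inodes the
         * frontend is stalled on are reordered to the front of the syncq,
         * and an inode leaving the syncq may be modified again freely. */
        while ((ip = syncq_pop_front()) != NULL)
            flush_inode_and_update_parent(ip);

        /* Phase 2: flush the PFS root and the inode-index subhierarchy. */
        flush_pfs_root();

        /* Phase 3: flush the super-root and its PFS root blockrefs. */
        flush_super_root();

        /* Phase 4: the only synchronous steps - a device cache flush
         * followed by the volume header write that commits the sync. */
        issue_disk_flush_sync();
        write_volume_header_sync();
    }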
479 MIRROR_TID, MODIFY_TID, UPDATE_TID
481 In HAMMER2, the core block reference is a 128-byte structure called a blockref.
482 The blockref contains various bits of information including the 64-bit radix
483 key (typically a directory hash if a directory entry, inode number if a
484 hidden hardlink target, or file offset if a file block), number of significant
485 key bits for ranged recursion of indirect blocks, a 64-bit device seek that
486 encodes the radix of the physical block size in the low bits (physical block
487 size can be different from logical block size due to compression),
488 three 64-bit transaction ids, type information, and up to 512 bits worth
489 of check data for the block being referenced, which can be anything from
490 a simple CRC to a strong cryptographic hash.
492 mirror_tid - This is a media-centric (as in physical disk partition)
493 transaction id which tracks media-level updates. The mirror_tid
494 can be different at the same point on different nodes in a
497 Whenever any block in the media topology is modified, its
498 mirror_tid is updated with the flush id and will propagate
499 upward during the flush all the way to the volume header.
501 mirror_tid is monotonic. It is primarily used for on-mount
502 recovery and volume root validation. The name is historical
503 from H1; it is not used for nominal mirroring.
505 modify_tid - This is a cluster-centric (as in across all the nodes used
506 to build a cluster) transaction id which tracks filesystem-level modifications.
509 modify_tid is updated when the front-end of the filesystem makes
510 a change to an inode or data block. It does NOT propagate upward during flushes.
513 update_tid - This is a cluster synchronization transaction id. Modifications
514 made to the topology will clear this field to 0 as they propagate
515 up to the root. This gives the synchronizer an easy way to
516 determine what needs revalidation.
518 The synchronizer revalidates the cluster bottom-up by validating
519 a sub-topology and propagating the highest modify_tid in the
520 validated sub-topology up via the update_tid field.
522 Updates to this field may be optimized by the HAMMER2 VFS to
523 avoid the double-transition.
525 The synchronization code updates an out-of-sync node bottom-up and will
526 dynamically set update_tid as it goes, but media flushes can occur at any
527 time and these flushes will use mirror_tid for flush and freemap management.
528 The mirror_tid for each flush propagates upward to the volume header on each
529 flush. modify_tid is set for any chains modified by a cluster op but does
530 not propagate up, instead serving as a seed for update_tid.
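A compact sketch of the rules described above (the structure and function names
are illustrative, not the actual HAMMER2 chain code):

    #include <stdint.h>

    struct tids {
        uint64_t mirror_tid;    /* media flush id, propagates to volume header */
        uint64_t modify_tid;    /* cluster modification id, does not propagate */
        uint64_t update_tid;    /* cluster synchronization id */
    };

    /* Frontend modification within cluster transaction ctid. */
    static void
    on_modify(struct tids *bref, uint64_t ctid)
    {
        bref->modify_tid = ctid;
        bref->update_tid = 0;   /* marks the sub-tree as needing revalidation */
    }

    /* Media flush: mirror_tid propagates upward toward the volume header. */
    static void
    on_flush(struct tids *parent, uint64_t flush_tid)
    {
        if (parent->mirror_tid < flush_tid)
            parent->mirror_tid = flush_tid;
    }

    /* Synchronizer: once a sub-topology validates, seed update_tid with the
     * highest modify_tid found within it (bottom-up). */
    static void
    on_synchronized(struct tids *bref, uint64_t highest_modify_tid)
    {
        bref->update_tid = highest_modify_tid;
    }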
532 * The synchronization code is able to determine that a sub-tree is
533 synchronized simply by observing the update_tid at the root of the sub-tree,
534 on an inode-by-inode basis and also on a data-block-by-data-block basis.
536 * The synchronization code is able to do an incremental update of an
537 out-of-sync node simply by skipping elements with a matching update_tid.
540 * The synchronization code can be interrupted and restarted at any time,
541 and is able to pick up where it left off with very low overhead.
543 * The synchronization code does not inhibit media flushes. Media flushes
544 can occur (and must occur) while synchronization is ongoing.
546 There are several other stored transaction ids in HAMMER2. There is a
547 separate freemap_tid in the volume header that is used to allow freemap
548 flushes to be deferred, and inodes have a pfs_psnap_tid which is used in
549 conjunction with CHECK_NONE to allow blocks without a check code which do
550 not violate the most recent snapshot to be overwritten in-place.
552 Remember that since this is a copy-on-write filesystem, we can propagate
553 a considerable amount of information up the tree to the volume header
554 without adding to the I/O we already have to do.
556 DIRECTORIES AND INODES
558 Directories are hashed. In HAMMER2, the PFS ROOT directory (aka inode 1 for
559 a PFS) can contain a mix of directory entries AND embedded inodes. This was
560 actually a design mistake, so the code to deal with the index of inodes
561 vs the directory entries is slightly convoluted (but not too bad).
563 In the first iteration of HAMMER2 I tried really hard to embed actual
564 inodes AS the directory entries, but it created a mass of problems for
565 implementing NFS export support and dealing with hardlinks, so in a later
566 iteration I implemented small independent directory entries (that wound up
567 mostly fitting in the blockref structure, so WIN WIN!). However, 'embedded'
568 inodes AS the directory entries still survive for the SUPER-ROOT and the
569 PFS-ROOTs under the SUPER-ROOT. They just aren't used in the individual
570 filesystem that each PFS represents.
572 Hardlinks are now implemented normally, with multiple directory entries
573 referencing the same inode and that inode containing a nlinks count.
577 H2 allows freemap flushes to lag behind topology flushes. This improves
578 filesystem sync performance. The freemap flush tracks a separate
579 transaction id (via mirror_tid) in the volume header.
581 On mount, HAMMER2 will first locate the highest-sequenced check-code-validated
582 volume header from the 4 copies available (if the filesystem is big enough,
583 e.g. > ~8GB or so, there will be 4 copies of the volume header).
585 HAMMER2 will then run an incremental scan of the topology for mirror_tid
586 transaction ids between the last freemap flush tid and the last topology
587 flush tid in order to synchronize the freemap. Because this scan is
588 incremental the time it takes to run will be relatively short and well-bounded
589 at mount-time. This is NOT an fsck. Freemap flushes can be avoided for any
590 number of normal topology flushes but should still occur frequently enough
591 to avoid long recovery times in case of a crash.
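A sketch of this mount-time scan, under illustrative assumptions (the node
structure and helper are stand-ins; the real code walks on-media blockrefs rather
than an in-memory tree): sub-trees whose mirror_tid is not newer than the last
freemap flush are skipped outright, which is what keeps the scan incremental and
short.

    #include <stddef.h>
    #include <stdint.h>

    struct mnode {
        uint64_t       mirror_tid;
        struct mnode **children;
        size_t         nchildren;
    };

    /* Stand-in: record this block as allocated in the freemap. */
    static void freemap_mark_allocated(const struct mnode *node) { (void)node; }

    static void
    freemap_recover(struct mnode *node, uint64_t freemap_tid, uint64_t topo_tid)
    {
        if (node == NULL || node->mirror_tid <= freemap_tid)
            return;         /* freemap already covers this sub-tree */
        if (node->mirror_tid > topo_tid)
            return;         /* belongs to a flush newer than the chosen header */

        freemap_mark_allocated(node);
        for (size_t i = 0; i < node->nchildren; i++)
            freemap_recover(node->children[i], freemap_tid, topo_tid);
    }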
593 The filesystem is then ready for use.
595 DISK I/O OPTIMIZATIONS
597 The freemap implements a 1KB allocation resolution. Each 2MB segment managed
598 by the freemap is zoned and has a tendency to collect inodes, small data,
599 indirect blocks, and larger data blocks into separate segments. The idea is
600 to greatly improve I/O performance (particularly by laying inodes down next
601 to each other which has a huge effect on directory scans).
603 The current implementation of HAMMER2 implements a fixed 64KB physical block
604 size in order to allow the mapping of hammer2_dio's in its IO subsystem
605 to consumers that might desire different sizes. This way we don't have to
606 worry about matching the buffer cache / DIO cache to the variable block
607 size of underlying elements. In addition, 64KB I/Os allow compatibility
608 with physical sector sizes up to 64KB in the underlying physical storage
609 with no change in the byte-by-byte format of the filesystem. The DIO
610 layer also prevents ordering deadlocks between unrelated portions of the
611 filesystem hierarchy whose logical blocks wind up in the same physical block.
613 The biggest issue we are avoiding by having a fixed 64KB I/O size is not
614 a nominal front-end access issue but rather the
615 complexity of having to deal with mixed block sizes in the buffer cache,
616 particularly when blocks are freed and then later reused with a different
617 block size. HAMMER1 had to have specialized code to check for and
618 invalidate buffer cache buffers in the free/reuse case. HAMMER2 does not need any such code.
621 That said, HAMMER2 places no major restrictions on mixing logical block
622 sizes within a 64KB block. The only restriction is that a logical HAMMER2
623 block cannot cross a 64KB boundary. The soft restrictions the block
624 allocator puts in place exist primarily for performance reasons (i.e. to
625 try to collect 1K inodes together). The 2MB freemap zone granularity
626 should work very well in this regard.
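The mapping of a variably-sized logical block into a fixed 64KB device buffer can
be sketched as follows. The width of the radix field packed into the low bits of
data_off is an assumption here; the point is only that one masking operation yields
the 64KB-aligned buffer, the offset within it, and the block size.

    #include <stdint.h>

    #define DIO_SIZE    65536ULL    /* fixed physical I/O size */
    #define RADIX_MASK  0x3FULL     /* low bits of data_off = size radix (assumed) */

    /* A logical block never crosses a 64KB boundary, so a single DIO buffer
     * always covers it. */
    static void
    dio_locate(uint64_t data_off, uint64_t *dio_base,
               uint64_t *offset_in_dio, uint64_t *bytes)
    {
        unsigned radix = (unsigned)(data_off & RADIX_MASK);
        uint64_t media_off = data_off & ~RADIX_MASK;

        *bytes = 1ULL << radix;                     /* 1KB..64KB, power of 2 */
        *dio_base = media_off & ~(DIO_SIZE - 1);    /* 64KB-aligned buffer */
        *offset_in_dio = media_off - *dio_base;
    }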
628 HAMMER2 also utilizes OS support for ganging 64KB buffers together into even
629 larger blocks for I/O (OS buffer cache 'clustering'), OS-supported read-ahead,
630 OS-driven asynchronous retirement, and other performance features typically
631 provided by the OS at the block-level to ensure smooth system operation.
633 By avoiding wiring buffers/memory and allowing the OS's buffer cache to
634 run normally, HAMMER2 winds up with very low OS overhead.
638 The freemap is stored in the reserved blocks situated in the ~4MB reserved
639 area at the base of every ~1GB level-1 zone of physical storage. The current
640 implementation reserves 8 copies of every freemap block and cycles through
641 them in order to make the freemap operate in a copy-on-write fashion.
643 - Freemap is copy-on-write.
644 - Freemap operations are transactional, same as everything else.
645 - All backup volume headers are consistent on-mount.
647 The Freemap is organized using the same radix blockmap algorithm used for
648 files and directories, but with fixed radix values. For a maximally-sized
649 filesystem the Freemap will wind up being a 5-level-deep radix blockmap,
650 but the top-level is embedded in the volume header so insofar as performance
651 goes it is really just a 4-level blockmap.
653 The freemap radix allocation mechanism is also the same, meaning that it is
654 bottom-up and will not allocate unnecessary intermediate levels for smaller
655 filesystems. The number of blockmap levels not including the volume header
656 for various filesystem sizes is as follows:
658 up-to #of freemap levels
666 The Freemap has bitmap granularity down to 16KB and a linear iterator that
667 can linearly allocate space down to 1KB. Due to fragmentation it is possible
668 for the linear allocator to become marginalized, but it is relatively easy
669 to reallocate small blocks every once in a while (like once a year if you
670 care at all) and once the old data cycles out of the snapshots, or you also
671 rewrite the snapshots (which you can do), the freemap should wind up
672 relatively optimal again. Generally speaking I believe that algorithms can
673 be developed to make this a non-problem without requiring any media structure
674 changes. However, touching all the freemaps will replicate meta-data whereas
675 the meta-data was mostly shared in the original snapshot. So this is a
676 problem that needs solving in HAMMER2.
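The relationship between the 16KB bitmap granularity and the 1KB linear iterator
can be sketched as follows. This is illustrative only; the real allocator is
considerably more involved and the names are made up.

    #include <stdint.h>

    #define GRAIN_BYTES     16384ULL    /* freemap bitmap granularity */
    #define LINEAR_QUANTUM  1024ULL     /* smallest linear allocation */

    struct linear_state {
        uint64_t grain_base;    /* media offset of the currently open grain */
        uint64_t linear;        /* next free byte within that grain
                                 * (initialize to GRAIN_BYTES to force a
                                 * grain to be opened on first use) */
    };

    /* Stand-in for the bitmap allocator: returns a free 16KB grain and
     * marks it allocated. */
    static uint64_t take_free_grain(void) { return 0; }

    static uint64_t
    linear_alloc(struct linear_state *ls, uint64_t bytes)
    {
        uint64_t off;

        bytes = (bytes + LINEAR_QUANTUM - 1) & ~(LINEAR_QUANTUM - 1);
        if (ls->linear + bytes > GRAIN_BYTES) {
            /* open a new grain; any unused tail of the old grain remains
             * allocated until its blocks are freed and bulkfree runs */
            ls->grain_base = take_free_grain();
            ls->linear = 0;
        }
        off = ls->grain_base + ls->linear;
        ls->linear += bytes;
        return off;
    }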
678 In order to implement fast snapshots (and writable snapshots for that
679 matter), HAMMER2 does NOT ref-count allocations. All the freemap does is
680 keep track of 100% free blocks plus some extra bits for staging the bulkfree
681 scan. The lack of ref-counting makes it possible to:
683 - Completely trivialize HAMMER2's snapshot operations.
684 - Completely trivialize HAMMER2's de-dup operations.
685 - Allows any volume header backup to be used trivially.
686 - Allows whole sub-trees to be destroyed without having to scan them.
687 Deleting PFSs and snapshots is instant (though space recovery still
688 requires two bulkfree scans).
689 - Simplifies normal crash recovery operations by not having to reconcile a ref-count structure.
691 - Simplifies catastrophic recovery operations for the same reason.
693 Normal crash recovery is simply a matter of doing an incremental scan
694 of the topology between the last flushed freemap TID and the last flushed
695 topology TID. This usually takes only a few seconds and allows:
697 - Freemap flushes to be deferred for any number of topology flush
698 cycles (with some care to ensure that all four volume headers remain valid).
700 - The freemap to be skipped entirely during fsync, reducing fsync overhead.
704 Blocks are freed via a bulkfree scan, which is a two-stage meta-data scan.
705 Blocks are first marked as being possibly free and then finalized in the
706 second scan. Live filesystem operations are allowed to run during these
707 scans and any freemap block that is allocated or adjusted after the first
708 scan will simply be re-marked as allocated and the second scan will not
709 transition it to being free.
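The two-stage transition can be sketched with a tiny state machine (the states
and function names are illustrative):

    /* A block only reaches FREE if both passes find it unreferenced and
     * nothing re-allocated it in between. */
    enum fm_state { FM_FREE, FM_POSSIBLY_FREE, FM_ALLOCATED };

    /* Pass 1: allocated blocks not found by the meta-data scan are
     * downgraded to possibly-free. */
    static enum fm_state
    bulkfree_pass1(enum fm_state cur, int referenced)
    {
        if (cur == FM_ALLOCATED && !referenced)
            return FM_POSSIBLY_FREE;
        return cur;
    }

    /* Any live allocation or adjustment between the passes simply re-marks
     * the block as allocated. */
    static enum fm_state
    live_allocate(enum fm_state cur)
    {
        (void)cur;
        return FM_ALLOCATED;
    }

    /* Pass 2: only blocks still possibly-free and still unreferenced are
     * finally freed. */
    static enum fm_state
    bulkfree_pass2(enum fm_state cur, int referenced)
    {
        if (cur == FM_POSSIBLY_FREE && !referenced)
            return FM_FREE;
        return cur;
    }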
711 The cost of not doing ref-count tracking is that HAMMER2 must perform two
712 bulkfree scans of the meta-data to determine which blocks can actually be
713 freed. This can be complicated by the volume header backups and snapshots
714 which cause the same meta-data topology to be scanned over and over again,
715 but mitigated somewhat by keeping a cache of higher-level nodes to detect
716 when we would scan a sub-topology that we have already scanned. Due to the
717 copy-on-write nature of the filesystem, such detection is easy to implement.
719 Part of the ongoing design work is finding ways to reduce the scope of this
720 meta-data scan so the entire filesystem's meta-data does not need to be
721 scanned (though in tests with HAMMER1, even full meta-data scans have
722 turned out to be fairly low cost). In other words, it's an area where
723 improvements can be made without any media format changes.
725 Another advantage of operating the freemap like this is that some future
726 version of HAMMER2 might decide to completely change how the freemap works
727 and would be able to make the change with relatively low downtime.
731 Clustering, as always, is the most difficult bit but we have some advantages
732 with HAMMER2 that we did not have with HAMMER1. First, HAMMER2's media
733 structures generally follow the kernel's filesystem hierarchy, which allows
734 cluster operations to use topology cache and lock state. Second,
735 HAMMER2's writable snapshots make it possible to implement several forms
736 of multi-master clustering.
738 The mount device path you specify serves to bootstrap your entry into
739 the cluster. This is typically local media. It can even be a ram-disk
740 that only contains placemarkers that help HAMMER2 connect to a fully networked cluster.
743 With HAMMER2 you mount a directory entry under the super-root. This entry
744 will contain a cluster identifier that helps HAMMER2 identify and integrate
745 with the nodes making up the cluster. HAMMER2 will automatically integrate
746 *all* entries under the super-root when you mount one of them. You have to
747 mount at least one for HAMMER2 to integrate the block device in the larger cluster.
750 For cluster servers every HAMMER2-formatted partition has a "LOCAL" MASTER
751 which can be mounted in order to make the rest of the elements under the
752 super-root available to the network. (In a prior specification I emplaced
753 the cluster connections in the volume header's configuration space but I no longer do so.)
756 Connecting to the wider networked cluster involves setting up the /etc/hammer2
757 directory with appropriate IP addresses and keys. The user-mode hammer2
758 service daemon maintains the connections and performs graph operations.
761 Node types within the cluster:
763 DUMMY - Used as a local placeholder (typically in ramdisk)
764 CACHE - Used as a local placeholder and cache (typically on a SSD)
765 SLAVE - A SLAVE in the cluster, can source data on quorum agreement.
766 MASTER - A MASTER in the cluster, can source and sink data on quorum agreement.
768 SOFT_SLAVE - A SLAVE in the cluster, can source data locally without
769 quorum agreement (must be directly mounted).
770 SOFT_MASTER - A local MASTER but *not* a MASTER in the cluster. Can source
771 and sink data locally without quorum agreement, intended to
772 be synchronized with the real MASTERs when connectivity
773 allows. Operations are not coherent with the real MASTERS
774 even when they are available.
776 NOTE: SNAPSHOT, AUTOSNAP, etc represent sub-types, typically under a
777 SLAVE. A SNAPSHOT or AUTOSNAP is a SLAVE sub-type that is no longer
778 synchronized against current masters.
780 NOTE: Any SLAVE or other copy can be turned into its own writable MASTER
781 by giving it a unique cluster id, taking it out of the cluster that
782 originally spawned it.
784 There are four major protocols:
788 Quorum protocol - This protocol is used between MASTER nodes to vote on operations
789 and resolve deadlocks.
791 This protocol is used between SOFT_MASTER nodes in a sub-cluster
792 to vote on operations, resolve deadlocks, determine what the latest
793 transaction id for an element is, and to perform commits.
797 Cache sub-protocol - This is the MESI sub-protocol which runs under the Quorum
798 protocol. This protocol is used to maintain cache state for
799 sub-trees to ensure that operations remain cache coherent.
801 Depending on administrative rights this protocol may or may
802 not allow a leaf node in the cluster to hold a cache element
803 indefinitely. The administrative controller may preemptively
804 downgrade a leaf with insufficient administrative rights
805 without giving it a chance to synchronize any modified state back to the cluster.
810 Proxy protocol - The Quorum and Cache protocols only operate between MASTER
811 and SOFT_MASTER nodes. All other node types must use the
812 Proxy protocol to perform similar actions. This protocol
813 differs in that proxy requests are typically sent to just
814 one adjacent node and that node then maintains state and
815 forwards the request or performs the required operation.
816 When the link is lost to the proxy, the proxy automatically
817 forwards a deletion of the state to the other nodes based on
818 what it has recorded.
820 If a leaf has insufficient administrative rights it may not
821 be allowed to actually initiate a quorum operation and may only
822 be allowed to maintain partial MESI cache state or perhaps none
823 at all (since cache state can block other machines in the
824 cluster). Instead a leaf with insufficient rights will have to
825 make do with a preemptive loss of cache state and any allowed
826 modifying operations will have to be forwarded to the proxy which
827 continues forwarding it until a node with sufficient administrative
828 rights is encountered.
830 To reduce issues and give the cluster more breadth, sub-clusters
831 made up of SOFT_MASTERs can be formed in order to provide full
832 cache coherency within a subset of machines and yet still tie them
833 into a greater cluster that they normally would not have such
834 access to. This effectively makes it possible to create a two
835 or three-tier fan-out of groups of machines which are cache-coherent
836 within the group, but perhaps not between groups, and use other
837 means to synchronize between the groups.
841 Media protocol - This is basically the physical media protocol.
843 MASTER & SLAVE SYNCHRONIZATION
845 With HAMMER2 I really want to be hard-nosed about the consistency of the
846 filesystem, including the consistency of SLAVEs (snapshots, etc). In order
847 to guarantee consistency we take advantage of the copy-on-write nature of
848 the filesystem by forking consistent nodes and using the forked copy as the
849 source for synchronization.
851 Similarly, the target for synchronization is not updated on the fly but instead
852 is also forked and the forked copy is updated. When synchronization is
853 complete, forked sources can be thrown away and forked copies can replace
854 the original synchronization target.
856 This may seem complex, but 'forking a copy' is actually a virtually free
857 operation. The top-level inode (under the super-root), on-media, is simply
858 copied to a new inode and poof, we have an unchanging snapshot to work with.
860 - Making a snapshot is fast... almost instantaneous.
862 - Snapshots are used for various purposes, including synchronization
863 of out-of-date nodes.
865 - A snapshot can be converted into a MASTER or some other PFS type.
867 - A snapshot can be forked off from its parent cluster entirely and
868 turned into its own writable filesystem, either as a single MASTER
869 or this can be done across the cluster by forking a quorum+ of
870 existing MASTERs and transferring them all to a new cluster id.
872 More complex is reintegrating the target once the synchronization is complete.
873 For SLAVEs we just delete the old SLAVE and rename the copy to the same name.
874 However, if the SLAVE is mounted and not optioned as a static mount (that is
875 the mounter wants to see updates as they are synchronized), a reconciliation
876 must occur on the live mount to clean up the vnode, inode, and chain caches
877 and shift any remaining vnodes over to the updated copy.
879 - A mounted SLAVE can track updates made to the SLAVE but the
880 actual mechanism is that the SLAVE PFS is replaced with an
881 updated copy, typically every 30-60 seconds.
883 Reintegrating a MASTER which has fallen out of the quorum due to being out
884 of date is also somewhat more complex. The same updating mechanic is used;
885 we actually have to throw the 'old' MASTER away once the new one has been
886 updated. However, if the cluster is undergoing heavy modifications the
887 updated MASTER will be out of date almost the instant its source is
888 snapshotted. Reintegrating a MASTER thus requires a somewhat more complex approach:
891 - If a MASTER is really out of date we can run one or more
892 synchronization passes concurrent with modifying operations.
893 The quorum can remain live.
895 - A final synchronization pass is required with quorum operations
896 blocked to reintegrate the now up-to-date MASTER into the cluster.
901 Quorum operations can be broken down into HARD BLOCK operations and NETWORK
902 operations. If your MASTERs are all local mounts, then failures and
903 sequencing are easy to deal with.
905 Quorum operations on a networked cluster are more complex. The problems:
907 - Masters cannot rely on clients to moderate quorum transactions.
908 Apart from the reliance being unsafe, the client could also
909 lose contact with one or more masters during the transaction and
910 leave one or more masters out-of-sync without the master(s) knowing
911 they are out of sync.
913 - When many clients are present, we do not want a flaky network
914 link from one of them to cause one or more masters to go out of
915 synchronization and potentially stall the whole works.
917 - Normal hammer2 mounts allow a virtually unlimited number of modifying
918 transactions between actual flushes. The media flush rolls everything
919 up into a single transaction id per flush. Detection of 'missing'
920 transactions in a concurrent multi-client setup when one or more client
921 temporarily loses connectivity is thus difficult.
923 - Clients have a limited amount of time to reconnect to a cluster after
924 a network disconnect before their MESI cache states are lost.
926 - Clients may proceed with several transactions before knowing for sure
927 that earlier transactions were completely successful. Performance is
928 important, we won't be waiting for a full quorum-verified synchronous
929 flush to media before allowing a system call to return.
931 - Masters can decide that a client's MESI cache states were lost (i.e.
932 that the transaction was too slow) as well.
934 The solutions (for modifying transactions):
936 - Masters handle quorum confirmation amongst themselves and do not rely
937 on the client for that purpose.
939 - A client can connect to one or more masters regardless of the size of
940 the quorum and can submit modifying operations to a single master if
941 desired. The master will take care of the rest.
943 A client must still validate the quorum (and obtain MESI cache states)
944 when doing read-only operations in order to present the correct data
945 to the user process for the VOP.
947 - Masters will run a 2-phase commit amongst themselves, often concurrent
948 with other non-conflicting transactions, and will serialize operations
949 and/or enforce synchronization points for 2-phase completion on
950 serialized transactions from the same client or when cache state
951 ownership is shifted from one client to another.
953 - Clients will usually allow operations to run asynchronously and return
954 from system calls more or less ASAP once they own the necessary cache
955 coherency locks. The client can select the validation mode to wait for:
958 (1) Fully async (mount -o async)
959 (2) Wait for phase-1 ack (mount)
960 (3) Wait for phase-2 ack (mount -o sync) (fsync - wait p2ack)
961 (4) Wait for flush (mount -o sync) (fsync - wait flush)
963 Modifying system calls cannot be told to wait for a full media
964 flush, as full media flushes are prohibitively expensive. You
965 still have to fsync().
967 The fsync wait mode for network links can be selected, either to
968 return after the phase-2 ack or to return after the media flush.
969 The default is to wait for the phase-2 ack, which at least guarantees
970 that a network failure after that point will not disrupt operations
971 issued before the fsync.
973 - Clients must adjust the chain state for modifying operations prior to
974 releasing chain locks / returning from the system call, even if the
975 masters have not finished the transaction. A late failure by the
976 cluster will result in desynchronized state which requires erroring
977 out the whole filesystem or resynchronizing somehow.
979 - Clients can opt to keep a record of transactions through the phase-2
980 ack or the actual media flush on the masters.
982 However, replaying/revalidating the log cannot necessarily guarantee
983 success. If the masters lose synchronization due to network issues
984 between masters (or if the client was mounted fully-async), or if enough
985 masters crash simultaneously such that a quorum fails to flush even
986 after the phase-2 ack, then it is possible that by the time a client
987 is able to replay/revalidate, some other client has squeezed in and
988 committed something that would conflict.
990 If the client crashes it works similarly to a crash with a local storage
991 mount... many dirty buffers might be lost. And the same happens in the clustered case.
996 Keeping a short-term transaction log, much less being able to properly replay
997 it, is fraught with difficulty and I've made it a separate development task.
998 For now HAMMER2 does not have one.