HAMMER2 DESIGN DOCUMENT

Current Status as of document date

* Filesystem Core         - operational
  - bulkfree              - operational
  - Compression           - operational
  - Snapshots             - operational
  - Subhierarchy quotas   - specced
  - Logical Encryption    - not specced yet
  - Copies                - not specced yet
  - fsync bypass          - not specced yet
* Clustering core         - operational
  - Network msg core      - operational
  - Network blk device    - operational
  - Error handling        - under development
  - Quorum Protocol       - under development
  - Synchronization       - under development
  - Transaction replay    - not specced yet
  - Cache coherency       - not specced yet
* Block topology (both the main topology and the freemap) uses a copy-on-write
  design.  Media-level block frees are delayed and flushes rotate between
  4 volume headers (maxes out at 4 if the filesystem is > ~8GB).  Flushes
  will allocate new blocks up to the root in order to propagate block table
  changes and transaction ids.

* Incremental synchronization is queueless and trivial by design.
* Multiple roots, with many features.  This is implemented via the super-root
  concept.  When mounting a HAMMER2 filesystem you specify a device path and
  a directory name in the super-root.  (HAMMER1 had only one root).

* All cluster types and multiple PFSs (belonging to the same or different
  clusters) can be mixed on one physical filesystem.

  This allows independent cluster components to be configured within a
  single formatted H2 filesystem.  Each component is a super-root entry,
  a cluster identifier, and a unique identifier.  The network protocol
  integrates the component into the cluster when it is created.
* Roots are really no different from snapshots (HAMMER1 distinguished between
  its root mount and its PFS's.  HAMMER2 does not).

* I/O and chain locking thread separation.  I/O stalls and lock stalls can
  cause any filesystem which purports to operate over multiple physical and
  network devices to implode.  HAMMER2 incorporates a frontend/backend design
  which separates media operations into support threads and allows the
  frontend to validate the cluster, proceed with an operation, and disconnect
  any remaining running operation even when backend ops have not completed
  on all nodes.  This allows the frontend to return 'early' (so to speak).
* Early return on best data-path supported by virtue of the above.  In a
  multi-master system, frontend ops will issue I/O on all cluster elements
  concurrently and will return the instant incoming data validates the
  cluster.

* Snapshots are writable (in HAMMER1 snapshots were read-only).
* Snapshots are explicit but trivial to create.  In HAMMER1 snapshots were
  both explicit and fine-grained/automatic.  HAMMER2 does not implement
  automatic fine-grained snapshots.  H2 snapshots are cheap enough that you
  can create fine-grained snapshots if you desire.

* HAMMER2 formalizes a synchronization point for the flush, does a pre-flush
  that does not update the volume root, then waits for all running modifying
  operations to complete to memory (not to disk) while temporarily stalling
  new modifying operation initiations.  The final flush is then executed.

  At the moment we do not allow concurrent modifying operations during the
  final flush phase.  Ultimately I would like to, but doing so can be complex.

* HAMMER2 flushes and synchronization points do not bisect VOPs (system calls).
  (HAMMER1 flushes could wind up bisecting VOPs).  This means the H2 flushes
  leave the filesystem in a far more consistent state than H1 flushes did.

* Directory sub-hierarchy-based quotas for space and inode usage tracking.
  Any directory can be used.

* Low memory footprint.  Except for the volume header, the buffer cache
  is completely asynchronous and dirty buffers can be retired by the OS
  directly to backing store with no further interactions with the filesystem.
* Background synchronization and mirroring occur at the logical level.
  When a failure occurs or a normal validation scan comes up with
  discrepancies, the synchronization thread will use the quorum to figure
  out which information is not correct and update accordingly.

* Support for multiple compression algorithms configured on a subdirectory
  tree basis and on a file basis.  Block compression up to 64KB will be used.
  Only compression ratios at powers of 2 that are at least 2:1 (e.g. 2:1,
  4:1, 8:1, etc) will work in this scheme because physical block allocations
  in HAMMER2 are always power-of-2.  Modest compression can be achieved with
  low overhead, is turned on by default, and is compatible with deduplication.
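  As a rough illustration of the power-of-2 rule just described (a
  hypothetical helper, not HAMMER2's actual allocator code): a 64KB logical
  block that compresses to fewer than 32KB is stored in the smallest
  power-of-2 allocation that fits, otherwise it is stored raw.

      #include <stddef.h>

      /*
       * Round a compressed result up to a power-of-2 allocation, or
       * fall back to the raw 64KB block when 2:1 is not achieved.
       * Sketch only.
       */
      static size_t
      comp_alloc_size(size_t comp_bytes)
      {
          size_t asize = 1024;          /* minimum allocation size */

          while (asize < comp_bytes)
              asize <<= 1;              /* round up to a power of 2 */
          if (asize > 65536 / 2)        /* did not achieve 2:1 */
              return (65536);
          return (asize);
      }

  For example, a block that compresses to 20000 bytes rounds up to 32768
  (a 2:1 ratio) and is stored compressed; one that only compresses to
  40000 bytes is stored uncompressed.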
* Encryption.  Whole-disk encryption is supported by another layer, but I
  intend to give H2 an encryption feature at the logical layer which works
  approximately as follows:

  - Encryption controlled by the client on an inode/sub-tree basis.
  - Server has no visibility to decrypted data.
  - Encrypt filenames in directory entries.  Since the filename[] array
    is 256 bytes wide, client can add random bytes after the normal
    terminator to make it virtually impossible for an attacker to figure
    out the filename.
  - Encrypt file size and most inode contents.
  - Encrypt file data (holes are not encrypted).
  - Encryption occurs after compression, with random filler.
  - Check codes calculated after encryption & compression (not before).

  - Blockrefs are not encrypted.
  - Directory and File Topology is not encrypted.
  - Encryption is not sub-topology validation.  Client would have to keep
    track of that itself.  Server or other clients can still e.g. remove
    files, rename, etc.
  In particular, note that even though the file size field can be encrypted,
  the server does have visibility on the block topology and thus has a pretty
  good idea how big the file is.  However, a client could add junk blocks
  at the end of a file to make this less apparent, at the cost of space.

  If a client really wants a fully validated H2-encrypted space the easiest
  solution is to format a filesystem within an encrypted file by treating it
  as a block device, but I digress.
* Zero detection on write (writing all-zeros), which requires the data
  buffer to be scanned, is fully supported.  This allows the writing of 0's
  to create holes.
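  A minimal sketch of the detection itself (a hypothetical helper, not the
  actual write path, which would record a hole rather than allocate a data
  block when this returns true):

      #include <stdint.h>
      #include <stddef.h>

      /*
       * Returns non-zero if the logical buffer is all zeros.  Assumes
       * 'bytes' is a multiple of 8, which holds for H2's power-of-2
       * logical block sizes.
       */
      static int
      buffer_is_zero(const void *data, size_t bytes)
      {
          const uint64_t *p = data;
          size_t i, n = bytes / sizeof(*p);

          for (i = 0; i < n; ++i) {
              if (p[i] != 0)
                  return (0);
          }
          return (1);
      }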
* Copies support for redundancy within a single physical filesystem.
  Up to 256 physical disks and/or partitions can be ganged to form a
  single physical filesystem.  If you use a disk or RAID aggregation
  layer then the actual number of physical disks that can be associated
  with a single H2 filesystem is unbounded.

  H2 puts an 8-bit copyid in the blockref structure to represent potentially
  multiple copies of a block.  The copyid corresponds to a configuration
  specification in the volume header.  The full algorithm has not been
  specced yet.

  Copies support is implemented by having multiple blockref entries for
  the same key, each with a different copyid.  The copyid represents which
  of the 256 slots is used.  Meta-data is also subject to the copies
  mechanism.  However, for both meta-data and data, each copy should be
  identical so the check fields in the blockref for all copies should wind
  up being the same, and any valid copy can be used by the block-level
  hammer2_chain code to access the filesystem.  File accesses will attempt
  to use the same copy.  If an I/O read error occurs, a different copy will
  be chosen.  Modifying operations must update all copies and/or create
  new copies as needed.  If a write error occurs on a copy and other copies
  are available, the errored target will be taken offline.
  It is possible to configure H2 to write out fewer copies on-write and then
  use a background scan to beef-up the number of copies to improve real-time
  throughput.
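  As a sketch of the read-side fallback just described (bref_t and
  read_block() are illustrative placeholders, not the actual hammer2_chain
  API):

      #include <stddef.h>

      #define NCOPIES 256

      typedef struct bref {
          int copyid;                   /* which of the 256 copy slots */
          /* ... media offset, check code, etc ... */
      } bref_t;

      static int read_block(bref_t *bref, void *buf);   /* placeholder */

      /*
       * Try the preferred copy first; on an I/O error fall back to any
       * other valid copy.  Returns 0 on success.
       */
      static int
      read_with_copies(bref_t *copies[NCOPIES], int preferred, void *buf)
      {
          int i;

          if (copies[preferred] && read_block(copies[preferred], buf) == 0)
              return (0);
          for (i = 0; i < NCOPIES; ++i) {
              if (i == preferred || copies[i] == NULL)
                  continue;
              if (read_block(copies[i], buf) == 0)
                  return (0);           /* any valid copy will do */
          }
          return (-1);                  /* all copies failed */
      }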
* MESI Cache coherency for multi-master/multi-client clustering operations.
  The servers hosting the MASTERs are also responsible for keeping track of
  the cache state.
* Hardlinks and softlinks are supported.  Hardlinks are somewhat complex to
  deal with and there is still an edge case.  I am trying to avoid storing
  the hardlinks at the root level because that messes up my concept for
  sub-tree quotas and is unnecessarily burdensome in terms of SMP collisions
  under heavy loads.
* The media blockref structure is now large enough to support up to a 192-bit
  check value, which would typically be a cryptographic hash of some sort.
  Multiple check value algorithms will be supported with the default being
  a simple 32-bit iSCSI CRC.

* Fully verified deduplication will be supported and automatic (and
  necessary in many respects).
* Unverified de-duplication will be supported as a configurable option on a
  file or subdirectory tree.  Unverified deduplication must use the largest
  available check code (192 bits).  It will not verify that data content with
  the same check code is actually identical during the dedup pass, resulting
  in approximately 100x to 1000x the deduplication performance but at the cost
  of potentially corrupting some data.

  The unverified dedup feature is intended only for those files where
  occasional corruption is ok, such as in a web-crawler data store or
  other situations where the data content is not critically important
  or can be externally recovered if it becomes corrupt.
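  To make the trade-off concrete, a hypothetical sketch of the two match
  tests (illustrative types and names only):

      #include <stdint.h>
      #include <string.h>

      typedef struct check192 {
          uint64_t w[3];                /* 192-bit check code */
      } check192_t;

      /*
       * Verified dedup compares the block contents after a check-code
       * match; unverified dedup trusts the matching check code alone,
       * which is far faster but can (very rarely) alias distinct blocks.
       */
      static int
      dedup_match(const check192_t *c1, const void *d1,
                  const check192_t *c2, const void *d2,
                  size_t bytes, int verified)
      {
          if (memcmp(c1, c2, sizeof(*c1)) != 0)
              return (0);               /* check codes differ, no match */
          if (verified)
              return (memcmp(d1, d2, bytes) == 0);
          return (1);                   /* unverified: trust the code */
      }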
GENERAL DESIGN

HAMMER2 generally implements a copy-on-write block design for the filesystem,
which is very different from HAMMER1's B-Tree design.  Because the design
is copy-on-write it can be trivially snapshotted simply by referencing an
existing block, and because the media structures logically match a standard
filesystem directory/file hierarchy snapshots and other similar operations
can be trivially performed on an entire subdirectory tree at any level in
the filesystem.
The copy-on-write design implements a block table in a radix-tree format,
with a small 8x fan-out in the volume header and inode and a large 256x or
1024x fan-out for indirect blocks.  The table is built bottom-up.
Intermediate radii are only created when necessary so small files will use
much shallower radix block trees.  The inode itself can accommodate files
up to 512KB (65536x8).  Directories also use a radix block table and directory
inodes can accommodate up to 8 entries before pushing an indirect radix block.
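To make the fan-out concrete, here is a small worked example using the
figures above (8 embedded blockrefs, 64KB maximum data blocks, 1024-way
indirect blocks); illustrative arithmetic only:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint64_t span = 65536ULL * 8;   /* inode-embedded: 512KB */
        int level;

        printf("inode only:        %12llu KB\n",
               (unsigned long long)(span >> 10));
        for (level = 1; level <= 2; ++level) {
            span *= 1024;               /* each indirect level multiplies */
            printf("%d indirect levels: %12llu KB\n", level,
                   (unsigned long long)(span >> 10));
        }
        return (0);
    }

This prints 512KB for the inode alone, then 512MB and 512GB as each
1024-way indirect level is added.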
The copy-on-write nature of the filesystem implies that any modification
whatsoever will have to eventually synchronize new disk blocks all the way
to the super-root of the filesystem and the volume header itself.  This forms
the basis for crash recovery and also ensures that recovery occurs on a
completed high-level transaction boundary.  All disk writes are to new blocks
except for the volume header (which cycles through 4 copies), thus allowing
all writes to run asynchronously and concurrently prior to and during a flush,
and then just doing a final synchronization and volume header update at the
end.  Many of HAMMER2's features are enabled by this core design feature.
Clearly this method requires intermediate modifications to the chain to be
cached so multiple modifications can be aggregated prior to being
synchronized.  One advantage, however, is that the normal buffer cache can
be used and intermediate elements can be retired to disk by H2 or the OS
at any time.  This means that HAMMER2 has very low resource overhead from the
point of view of the operating system.  Unlike HAMMER1 which had to lock
dirty buffers in memory for long periods of time, HAMMER2 has no such
requirement.

Buffer cache overhead is very well bounded and can handle filesystem
operations of any complexity, even on boxes with very small amounts
of physical memory.  Buffer cache overhead is significantly lower with H2
than with H1 (and orders of magnitude lower than ZFS).
At some point I intend to implement a shortcut to make fsync()'s run fast,
and that is to allow deep updates to blockrefs to shortcut to auxiliary
space in the volume header to satisfy the fsync requirement.  The related
blockref is then recorded when the filesystem is mounted after a crash and
the update chain is reconstituted when a matching blockref is encountered
again during normal operation of the filesystem.
MIRROR_TID, MODIFY_TID, UPDATE_TID

In HAMMER2, the core block reference is a 128-byte structure called a blockref.
The blockref contains various bits of information including the 64-bit radix
key (typically a directory hash if a directory entry, inode number if a
hidden hardlink target, or file offset if a file block), 64-bit data offset
with the physical block size radix encoded in it (physical block size can be
different from logical block size due to compression), three 64-bit
transaction ids, type information, and up to 512 bits worth of check data
for the block being referenced, which can be anything from a simple CRC to
a strong cryptographic hash.
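As a sketch only (field names and packing here are illustrative; the
authoritative layout lives in hammer2_disk.h), the 128 bytes break down
roughly like this:

    #include <stdint.h>

    typedef struct blockref {
        uint8_t  type;          /* inode, indirect, data, freemap, ... */
        uint8_t  methods;       /* check method + compression method */
        uint8_t  copyid;        /* copies slot (see copies support) */
        uint8_t  keybits;       /* #of key bits spanned by this blockref */
        uint32_t reserved04;
        uint64_t key;           /* dirhash, inode number, or file offset */
        uint64_t data_off;      /* media offset, size radix in low bits */
        uint64_t mirror_tid;    /* media-centric flush tid */
        uint64_t modify_tid;    /* cluster-centric modification tid */
        uint64_t update_tid;    /* cluster synchronization tid */
        uint64_t reserved30[2]; /* pad to 128 bytes in this sketch */
        uint64_t check[8];      /* up to 512 bits of check data */
    } blockref_t;               /* 128 bytes total */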
mirror_tid - This is a media-centric (as in physical disk partition)
    transaction id which tracks media-level updates.  The mirror_tid
    can be different at the same point on different nodes in a
    cluster.

    Whenever any block in the media topology is modified, its
    mirror_tid is updated with the flush id and will propagate
    upward during the flush all the way to the volume header.

    mirror_tid is monotonic.  It is primarily used for on-mount
    recovery and volume root validation.  The name is historical
    from H1; it is not used for nominal mirroring.
modify_tid - This is a cluster-centric (as in across all the nodes used
    to build a cluster) transaction id which tracks filesystem-level
    updates.

    modify_tid is updated when the front-end of the filesystem makes
    a change to an inode or data block.  It does NOT propagate upward
    during a flush.
update_tid - This is a cluster synchronization transaction id.  Modifications
    made to the topology will clear this field to 0 as they propagate
    up to the root.  This gives the synchronizer an easy way to
    determine what needs revalidation.

    The synchronizer revalidates the cluster bottom-up by validating
    a sub-topology and propagating the highest modify_tid in the
    validated sub-topology up via the update_tid field.

    Update to this field may be optimized by the HAMMER2 VFS to
    avoid the double-transition.
The synchronization code updates an out-of-sync node bottom-up and will
dynamically set update_tid as it goes, but media flushes can occur at any
time and these flushes will use mirror_tid for flush and freemap management.
The mirror_tid for each flush propagates upward to the volume header on each
flush.  modify_tid is set for any chains modified by a cluster op but does
not propagate up, instead serving as a seed for update_tid.
* The synchronization code is able to determine that a sub-tree is
  synchronized simply by observing the update_tid at the root of the sub-tree,
  on an inode-by-inode basis and also on a data-block-by-data-block basis.
* The synchronization code is able to do an incremental update of an
  out-of-sync node simply by skipping elements with a matching update_tid
  (on an inode-by-inode basis).
* The synchronization code can be interrupted and restarted at any time,
  and is able to pick up where it left off with very little overhead.

* The synchronization code does not inhibit media flushes.  Media flushes
  can occur (and must occur) while synchronization is ongoing.
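A sketch of how update_tid drives the incremental scan (illustrative
structures and recursion only; sync_missing_elements() is a placeholder,
not the actual synchronizer):

    #include <stdint.h>

    typedef struct snode {
        uint64_t      modify_tid;   /* seeded by cluster modifications */
        uint64_t      update_tid;   /* set bottom-up by the synchronizer */
        struct snode **children;
        int           nchildren;
    } snode_t;

    static void sync_missing_elements(snode_t *node);   /* placeholder */

    /* Returns the highest modify_tid validated in this sub-topology. */
    static uint64_t
    sync_subtree(snode_t *node, uint64_t target_tid)
    {
        uint64_t highest = node->modify_tid;
        int i;

        if (node->update_tid >= target_tid)
            return (node->update_tid);  /* already synchronized, skip */
        for (i = 0; i < node->nchildren; ++i) {
            uint64_t v = sync_subtree(node->children[i], target_tid);
            if (v > highest)
                highest = v;
        }
        sync_missing_elements(node);    /* copy out-of-date elements */
        node->update_tid = highest;     /* propagate validation upward */
        return (highest);
    }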
There are several other stored transaction ids in HAMMER2.  There is a
separate freemap_tid in the volume header that is used to allow freemap
flushes to be deferred, and inodes have an attr_tid and a dirent_tid which
track attribute changes and (for directories) create/rename/delete changes.
The inode TIDs are used as an aid for the cache coherency subsystem.

Remember that since this is a copy-on-write filesystem, we can propagate
a considerable amount of information up the tree to the volume header
without adding to the I/O we already have to do.
DIRECTORIES AND INODES

Directories are hashed, and another major design element is that directory
entries ARE inodes.  They are one and the same, with a special placemarker
for hardlinks.  Inodes are 1KB.

Hardlinks are implemented with placemarkers as directory entries which simply
represent the inode number.  The actual file resides in a parent directory
that is common to all hardlinks to that file.  If the hardlinks are all within
a single directory, the actual hardlink inode is in that directory.  The
hardlink target, as we call it, is a hidden directory entry in a common parent
whose key is basically just the inode number itself, so lookups are fast.
Half of the inode structure (512 bytes) is used to hold top-level blockrefs
to the radix block tree representing the file contents.  Files which are
less than or equal to 512 bytes in size will simply store the file contents
in this area instead of a blockref array.  So files <= 512 bytes take only
1KB of space inclusive of the inode.
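Schematically (an illustrative layout only, not the hammer2_disk.h
definition):

    #include <stdint.h>

    /*
     * 1KB inode sketch: the first half holds the inode meta-data, the
     * second half is either inline file content (files <= 512 bytes)
     * or the file's top-level blockref array.
     */
    typedef struct inode_sketch {
        uint8_t meta[512];          /* inode number, size, uid/gid, ... */
        union {
            uint8_t data[512];      /* inline content, files <= 512B */
            uint8_t blockset[512];  /* embedded top-level blockrefs */
        } u;
    } inode_sketch_t;               /* 1KB total */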
Inode numbers are not spatially referenced, which complicates NFS servers
but doesn't complicate anything else.  The inode number is stored in the
inode itself, an absolute necessity required to properly support HAMMER2's
hugely flexible snapshots.  I would like to support NFS services but it
would require (probably) a lookaside index in the root for inode lookups
and might not happen quickly.
RECOVERY

H2 allows freemap flushes to lag behind topology flushes.  The freemap flush
tracks a separate transaction id (via mirror_tid) in the volume header.

On mount, HAMMER2 will first locate the highest-sequenced check-code-validated
volume header from the 4 copies available (if the filesystem is big enough,
e.g. > ~8GB, there will be 4 copies of the volume header).
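A sketch of that selection (volhdr_t and valid_check() are illustrative
placeholders, not the actual mount code):

    #include <stdint.h>
    #include <stddef.h>

    typedef struct volhdr {
        uint64_t mirror_tid;
        /* ... check code, configuration, freemap_tid, etc ... */
    } volhdr_t;

    static int valid_check(const volhdr_t *vh);     /* placeholder */

    static volhdr_t *
    pick_volume_header(volhdr_t *copies[4])
    {
        volhdr_t *best = NULL;
        int i;

        for (i = 0; i < 4; ++i) {
            if (copies[i] == NULL || !valid_check(copies[i]))
                continue;               /* damaged or missing copy */
            if (best == NULL || copies[i]->mirror_tid > best->mirror_tid)
                best = copies[i];       /* highest validated sequence */
        }
        return (best);                  /* NULL: no valid header */
    }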
HAMMER2 will then run an incremental scan of the topology for mirror_tid
transaction ids between the last freemap flush tid and the last topology
flush tid in order to synchronize the freemap.  Because this scan is
incremental the time it takes to run will be relatively short and well-bounded
at mount-time.  This is NOT fsck.  Freemap flushes can be avoided for any
number of normal topology flushes but should still occur frequently enough
to avoid long recovery times in case of a crash.

The filesystem is then ready for use.
DISK I/O OPTIMIZATIONS

The freemap implements a 1KB allocation resolution.  Each 2MB segment managed
by the freemap is zoned and has a tendency to collect inodes, small data,
indirect blocks, and larger data blocks into separate segments.  The idea is
to greatly improve I/O performance (particularly by laying inodes down next
to each other which has a huge effect on directory scans).

The current implementation of HAMMER2 implements a fixed block size of 64KB
in order to allow the mapping of hammer2_dio's in its IO subsystem to
consumers that might desire different sizes.  This way we don't have to
worry about matching the buffer cache / DIO cache to the variable block
size of underlying elements.
The biggest issue we are avoiding by having a fixed 64KB I/O size is not
nominal front-end access performance but rather the complexity that arises
when blocks are freed and reused for another purpose.  HAMMER1
had to have specialized code to check for and invalidate buffer cache buffers
in the free/reuse case.  HAMMER2 does not need such code.
That said, HAMMER2 places no major restrictions on mixing block sizes within
a 64KB block.  The only restriction is that a HAMMER2 block cannot cross
a 64KB boundary.  The soft restrictions the block allocator puts in place
exist primarily for performance reasons (i.e. try to collect 1K inodes
together).  The 2MB freemap zone granularity should work very well in this
regard.
HAMMER2 also allows OS support for ganging buffers together into even
larger blocks for I/O (OS buffer cache 'clustering'), OS-supported read-ahead,
OS-driven asynchronous retirement, and other performance features typically
provided by the OS at the block-level to ensure smooth system operation.

By avoiding wiring buffers/memory and allowing these features to run normally,
HAMMER2 winds up with very low OS overhead.
FREEMAP NOTES

The freemap is stored in the reserved blocks situated in the ~4MB reserved
area at the base of every ~1GB level-1 zone.  The current implementation
reserves 8 copies of every freemap block and cycles through them in order
to make the freemap operate in a copy-on-write fashion.

    - Freemap is copy-on-write.
    - Freemap operations are transactional, same as everything else.
    - All backup volume headers are consistent on-mount.
The Freemap is organized using the same radix blockmap algorithm used for
files and directories, but with fixed radix values.  For a maximally-sized
filesystem the Freemap will wind up being a 5-level-deep radix blockmap,
but the top-level is embedded in the volume header so insofar as performance
goes it is really just a 4-level blockmap.
The freemap radix allocation mechanism is also the same, meaning that it is
bottom-up and will not allocate unnecessary intermediate levels for smaller
filesystems.  The number of blockmap levels not including the volume header
for various filesystem sizes is as follows:

    up-to       #of freemap levels
    1GB         1-level
    256GB       2-level
    64TB        3-level
    16PB        4-level
    4EB         5-level
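A worked check of the table (assuming the 256x per-level coverage multiplier
implied above, starting from the ~1GB level-1 zone; illustrative only):

    #include <stdint.h>

    static int
    freemap_levels(uint64_t volume_bytes)
    {
        uint64_t span = 1ULL << 30;     /* level 1 covers 1GB */
        int level = 1;

        while (span < volume_bytes && level < 5) {
            span <<= 8;                 /* each level adds a 256x fan-out */
            ++level;
        }
        return (level);
    }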
The Freemap has bitmap granularity down to 16KB and a linear iterator that
can linearly allocate space down to 1KB.  Due to fragmentation it is possible
for the linear allocator to become marginalized, but it is relatively easy
to do a reallocation of small blocks every once in a while (like once a year
if you care at all).  Once the old data cycles out of the snapshots, or you
rewrite the snapshots (which you can do), the freemap should wind up
relatively optimal again.  Generally speaking I believe that algorithms can
be developed to make this a non-problem without requiring any media structure
changes.
In order to implement fast snapshots (and writable snapshots for that
matter), HAMMER2 does NOT ref-count allocations.  All the freemap does is
keep track of 100% free blocks plus some extra bits for staging the bulkfree
scan.  The lack of ref-counting makes it possible to:

    - Completely trivialize HAMMER2's snapshot operations.
    - Use any volume header backup trivially.
    - Destroy whole sub-trees without having to scan them.
    - Simplify normal crash recovery operations.
    - Simplify catastrophic recovery operations.
Normal crash recovery is simply a matter of doing an incremental scan
of the topology between the last flushed freemap TID and the last flushed
topology TID.  This usually takes only a few seconds and allows:

    - Freemap flushes to be deferred for any number of topology flush
      cycles.
    - The freemap to not be flushed for fsync, reducing fsync overhead.
Blocks are freed via a bulkfree scan, which is a two-stage meta-data scan.
Blocks are first marked as being possibly free and then finalized in the
second scan.  Live filesystem operations are allowed to run during these
scans and any freemap block that is allocated or adjusted after the first
scan will simply be re-marked as allocated and the second scan will not
transition it to being free.
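A sketch of the two-stage transition (illustrative state machine, not the
on-media bitmap code):

    /*
     * Stage 1 marks unreferenced blocks possibly-free; stage 2 frees
     * them only if they are still unreferenced.  Any allocation in
     * between re-marks the block allocated, so the second stage
     * leaves it alone.
     */
    enum bf_state { BF_FREE, BF_POSSIBLY_FREE, BF_ALLOCATED };

    static enum bf_state
    bulkfree_scan1(enum bf_state s, int referenced)
    {
        if (s == BF_ALLOCATED && !referenced)
            return (BF_POSSIBLY_FREE);
        return (s);
    }

    static enum bf_state
    bulkfree_scan2(enum bf_state s, int referenced)
    {
        if (s == BF_POSSIBLY_FREE && !referenced)
            return (BF_FREE);
        return (s);
    }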
The cost of not doing ref-count tracking is that HAMMER2 must perform two
bulkfree scans of the meta-data to determine which blocks can actually be
freed.  This can be complicated by the volume header backups and snapshots
which cause the same meta-data topology to be scanned over and over again,
but mitigated somewhat by keeping a cache of higher-level nodes to detect
when we would scan a sub-topology that we have already scanned.  Due to the
copy-on-write nature of the filesystem, such detection is easy to implement.

Part of the ongoing design work is finding ways to reduce the scope of this
meta-data scan so the entire filesystem's meta-data does not need to be
scanned (though in tests with HAMMER1, even full meta-data scans have
turned out to be fairly low cost).  In other words, it's an area where
improvements can be made without any media format changes.

Another advantage of operating the freemap like this is that some future
version of HAMMER2 might decide to completely change how the freemap works
and would be able to make the change with relatively low downtime.
CLUSTERING

Clustering, as always, is the most difficult bit but we have some advantages
with HAMMER2 that we did not have with HAMMER1.  First, HAMMER2's media
structures generally follow the kernel's filesystem hierarchy which allows
cluster operations to use topology cache and lock state.  Second,
HAMMER2's writable snapshots make it possible to implement several forms
of multi-master clustering.

The mount device path you specify serves to bootstrap your entry into
the cluster.  This is typically local media.  It can even be a ram-disk
that only contains placemarkers that help HAMMER2 connect to a fully
networked cluster.
With HAMMER2 you mount a directory entry under the super-root.  This entry
will contain a cluster identifier that helps HAMMER2 identify and integrate
with the nodes making up the cluster.  HAMMER2 will automatically integrate
*all* entries under the super-root when you mount one of them.  You have to
mount at least one for HAMMER2 to integrate the block device in the larger
cluster.
For cluster servers every HAMMER2-formatted partition has a "LOCAL" MASTER
which can be mounted in order to make the rest of the elements under the
super-root available to the network.  (In a prior specification I emplaced
the cluster connections in the volume header's configuration space but I no
longer do that).

Connecting to the wider networked cluster involves setting up the /etc/hammer2
directory with appropriate IP addresses and keys.  The user-mode hammer2
service daemon maintains the connections and performs graph operations.
Node types within the cluster:

    DUMMY       - Used as a local placeholder (typically in ramdisk).

    CACHE       - Used as a local placeholder and cache (typically on a SSD).

    SLAVE       - A SLAVE in the cluster, can source data on quorum agreement.

    MASTER      - A MASTER in the cluster, can source and sink data on quorum
                  agreement.

    SOFT_SLAVE  - A SLAVE in the cluster, can source data locally without
                  quorum agreement (must be directly mounted).

    SOFT_MASTER - A local MASTER but *not* a MASTER in the cluster.  Can
                  source and sink data locally without quorum agreement,
                  intended to be synchronized with the real MASTERs when
                  connectivity allows.  Operations are not coherent with
                  the real MASTERs even when they are available.

    NOTE: SNAPSHOT, AUTOSNAP, etc. represent sub-types, typically under a
          SLAVE.  A SNAPSHOT or AUTOSNAP is a SLAVE sub-type that is no
          longer synchronized against current masters.

    NOTE: Any SLAVE or other copy can be turned into its own writable MASTER
          by giving it a unique cluster id, taking it out of the cluster that
          originally spawned it.
There are four major protocols:

    Quorum protocol

        This protocol is used between MASTER nodes to vote on operations
        and resolve deadlocks.

        This protocol is used between SOFT_MASTER nodes in a sub-cluster
        to vote on operations, resolve deadlocks, determine what the latest
        transaction id for an element is, and to perform commits.
    Cache sub-protocol

        This is the MESI sub-protocol which runs under the Quorum
        protocol.  This protocol is used to maintain cache state for
        sub-trees to ensure that operations remain cache coherent.

        Depending on administrative rights this protocol may or may
        not allow a leaf node in the cluster to hold a cache element
        indefinitely.  The administrative controller may preemptively
        downgrade a leaf with insufficient administrative rights
        without giving it a chance to synchronize any modified state
        back to the cluster.
572 and SOFT_MASTER nodes. All other node types must use the
573 Proxy protocol to perform similar actions. This protocol
574 differs in that proxy requests are typically sent to just
575 one adjacent node and that node then maintains state and
576 forwards the request or performs the required operation.
577 When the link is lost to the proxy, the proxy automatically
578 forwards a deletion of the state to the other nodes based on
579 what it has recorded.
581 If a leaf has insufficient administrative rights it may not
582 be allowed to actually initiate a quorum operation and may only
583 be allowed to maintain partial MESI cache state or perhaps none
584 at all (since cache state can block other machines in the
585 cluster). Instead a leaf with insufficient rights will have to
586 make due with a preemptive loss of cache state and any allowed
587 modifying operations will have to be forwarded to the proxy which
588 continues forwarding it until a node with sufficient administrative
589 rights is encountered.
        To reduce issues and give the cluster more breadth, sub-clusters
        made up of SOFT_MASTERs can be formed in order to provide full
        cache coherency within a subset of machines and yet still tie them
        into a greater cluster that they normally would not have such
        access to.  This effectively makes it possible to create a two-
        or three-tier fan-out of groups of machines which are cache-coherent
        within the group, but perhaps not between groups, and use other
        means to synchronize between the groups.
    Media protocol

        This is basically the physical media protocol.
MASTER & SLAVE SYNCHRONIZATION

With HAMMER2 I really want to be hard-nosed about the consistency of the
filesystem, including the consistency of SLAVEs (snapshots, etc).  In order
to guarantee consistency we take advantage of the copy-on-write nature of
the filesystem by forking consistent nodes and using the forked copy as the
source for synchronization.

Similarly, the target for synchronization is not updated on the fly but
instead is also forked and the forked copy is updated.  When synchronization
is complete, forked sources can be thrown away and forked copies can replace
the original synchronization target.

This may seem complex, but 'forking a copy' is actually a virtually free
operation.  The top-level inode (under the super-root), on-media, is simply
copied to a new inode and poof, we have an unchanging snapshot to work with.
    - Making a snapshot is fast... almost instantaneous.

    - Snapshots are used for various purposes, including synchronization
      of out-of-date nodes.

    - A snapshot can be converted into a MASTER or some other PFS type.

    - A snapshot can be forked off from its parent cluster entirely and
      turned into its own writable filesystem, either as a single MASTER
      or this can be done across the cluster by forking a quorum+ of
      existing MASTERs and transferring them all to a new cluster id.
More complex is reintegrating the target once the synchronization is complete.
For SLAVEs we just delete the old SLAVE and rename the copy to the same name.
However, if the SLAVE is mounted and not optioned as a static mount (that is
the mounter wants to see updates as they are synchronized), a reconciliation
must occur on the live mount to clean up the vnode, inode, and chain caches
and shift any remaining vnodes over to the updated copy.

    - A mounted SLAVE can track updates made to the SLAVE but the
      actual mechanism is that the SLAVE PFS is replaced with an
      updated copy, typically every 30-60 seconds.
Reintegrating a MASTER which has fallen out of the quorum due to being out
of date is also somewhat more complex.  The same updating mechanic is used;
we actually have to throw the 'old' MASTER away once the new one has been
updated.  However if the cluster is undergoing heavy modifications the
updated MASTER will be out of date almost the instant its source is
snapshotted.  Reintegrating a MASTER thus requires a somewhat more complex
approach:

    - If a MASTER is really out of date we can run one or more
      synchronization passes concurrent with modifying operations.
      The quorum can remain live.

    - A final synchronization pass is required with quorum operations
      blocked to reintegrate the now up-to-date MASTER into the cluster.
QUORUM OPERATIONS

Quorum operations can be broken down into HARD BLOCK operations and NETWORK
operations.  If your MASTERs are all local mounts, then failures and
sequencing are easy to deal with.
Quorum operations on a networked cluster are more complex.  The problems:

    - Masters cannot rely on clients to moderate quorum transactions.
      Apart from the reliance being unsafe, the client could also
      lose contact with one or more masters during the transaction and
      leave one or more masters out-of-sync without the master(s) knowing
      they are out of sync.

    - When many clients are present, we do not want a flaky network
      link from one of them to cause one or more masters to go out of
      synchronization and potentially stall the whole works.

    - Normal hammer2 mounts allow a virtually unlimited number of modifying
      transactions between actual flushes.  The media flush rolls everything
      up into a single transaction id per flush.  Detection of 'missing'
      transactions in a concurrent multi-client setup when one or more
      clients temporarily lose connectivity is thus difficult.

    - Clients have a limited amount of time to reconnect to a cluster after
      a network disconnect before their MESI cache states are lost.

    - Clients may proceed with several transactions before knowing for sure
      that earlier transactions were completely successful.  Performance is
      important; we won't be waiting for a full quorum-verified synchronous
      flush to media before allowing a system call to return.

    - Masters can decide that a client's MESI cache states were lost (i.e.
      that the transaction was too slow) as well.
The solutions (for modifying transactions):

    - Masters handle quorum confirmation amongst themselves and do not rely
      on the client for that purpose.

    - A client can connect to one or more masters regardless of the size of
      the quorum and can submit modifying operations to a single master if
      desired.  The master will take care of the rest.

      A client must still validate the quorum (and obtain MESI cache states)
      when doing read-only operations in order to present the correct data
      to the user process for the VOP.
    - Masters will run a 2-phase commit amongst themselves, often concurrent
      with other non-conflicting transactions, and will serialize operations
      and/or enforce synchronization points for 2-phase completion on
      serialized transactions from the same client or when cache state
      ownership is shifted from one client to another.

    - Clients will usually allow operations to run asynchronously and return
      from system calls more or less ASAP once they own the necessary cache
      coherency locks.  The client can select the validation mode to wait
      for:

      (1) Fully async              (mount -o async)
      (2) Wait for phase-1 ack     (mount)
      (3) Wait for phase-2 ack     (mount -o sync) (fsync - wait p2ack)
      (4) Wait for flush           (mount -o sync) (fsync - wait flush)
      Modifying system calls cannot be told to wait for a full media
      flush, as full media flushes are prohibitively expensive.  You
      still have to fsync().

      The fsync wait mode for network links can be selected, either to
      return after the phase-2 ack or to return after the media flush.
      The default is to wait for the phase-2 ack, which at least guarantees
      that a network failure after that point will not disrupt operations
      issued before the fsync.
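      Schematically, the selectable wait points might look like this
      (hypothetical names; not actual mount options or kernel symbols):

          enum h2_wait_mode {
              H2_WAIT_NONE,     /* fully async (mount -o async) */
              H2_WAIT_P1ACK,    /* wait for phase-1 ack (default mount) */
              H2_WAIT_P2ACK,    /* wait for phase-2 ack (mount -o sync) */
              H2_WAIT_FLUSH     /* wait for full media flush (fsync) */
          };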
    - Clients must adjust the chain state for modifying operations prior to
      releasing chain locks / returning from the system call, even if the
      masters have not finished the transaction.  A late failure by the
      cluster will result in desynchronized state which requires erroring
      out the whole filesystem or resynchronizing somehow.

    - Clients can opt to keep a record of transactions through the phase-2
      ack or the actual media flush on the masters.
      However, replaying/revalidating the log cannot necessarily guarantee
      success.  If the masters lose synchronization due to network issues
      between masters (or if the client was mounted fully-async), or if
      enough masters crash simultaneously such that a quorum fails to flush
      even after the phase-2 ack, then it is possible that by the time a
      client is able to replay/revalidate, some other client has squeezed
      in and committed something that would conflict.

      If the client crashes it works similarly to a crash with a local
      storage mount... many dirty buffers might be lost.  And the same
      happens in the cluster case.
Keeping a short-term transaction log, much less being able to properly replay
it, is fraught with difficulty and I've made it a separate development task.