$DragonFly: src/sys/vfs/hammer/Attic/hammer.txt,v 1.3 2007/11/07 00:43:24 dillon Exp $

			Hammer Filesystem

(I) General Storage Abstraction
    HAMMER uses a basic 16K filesystem buffer for all I/O.  Buffers are
    collected into clusters, clusters are collected into super-clusters,
    and super-clusters are collected into volumes.  A single HAMMER
    filesystem may span multiple volumes.

    HAMMER maintains small hinted radix trees called A-Lists in several
    places for storage management in each layer.  A major feature of the
    A-List is the ability to stack with another A-List and pass hinting
    information between the two to create an integrated storage management
    entity.

    Volumes are typically specified as disk partitions, with one volume
    designated as the root volume containing the root cluster.  The root
    cluster does not need to be contained in volume 0 nor does it have to
    be located at any particular offset.

    Data can be migrated on a cluster-by-cluster or volume-by-volume basis
    and any given volume may be expanded or contracted while the filesystem
    is live.  Whole volumes can be added and (with appropriate data
    migration) removed.

    HAMMER's storage management limits it to 32768 volumes.  Each volume
    contains up to 16384 super-clusters and each super-cluster may contain
    up to 32768 clusters.  A cluster may contain up to 4096 16K filesystem
    buffers (64MB).  Volumes less than 2TB do away with the super-cluster
    layer.  HAMMER can manage individual volumes up to 32768TB each.  The
    total size of a HAMMER filesystem is, well, a lot: about 2^70 bytes of
    storage space.
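The limits above multiply out cleanly in powers of two; a small sketch
verifying the arithmetic (constants taken from the text, function names
illustrative):

```c
#include <stdint.h>

/* Capacity arithmetic from the text: 16K buffers, up to 4096 buffers
 * per cluster, 32768 clusters per super-cluster, 16384 super-clusters
 * per volume, and 32768 volumes per filesystem.  Names are
 * illustrative, not HAMMER's. */
uint64_t cluster_bytes(void)    /* 16K * 4096 = 64MB */
{
    return 16384ULL * 4096;
}

uint64_t volume_bytes(void)     /* 64MB * 32768 * 16384 = 2^55 = 32768TB */
{
    return cluster_bytes() * 32768 * 16384;
}

int fs_bytes_log2(void)         /* 2^55 per volume * 2^15 volumes = 2^70 */
{
    return 55 + 15;
}
```

2^70 bytes is one zebibyte, which is indeed "a lot".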

    HAMMER breaks all of its information down into objects and records.
    Records have a creation and deletion transaction id which allows HAMMER
    to maintain a historical store.  Information is only physically deleted
    based on the data retention policy.  Those portions of the data retention
    policy affecting near-term modifications may be acted upon by the live
    filesystem but all historical vacuuming is handled by a helper process.

    All information in a HAMMER filesystem is CRCd to detect corruption.

(II) Filesystem Object Topology

    The objects and records making up a HAMMER filesystem are organized into
    a single, unified B-Tree.  Each cluster maintains a B-Tree of the
    records contained in that cluster and a unified B-Tree is constructed by
    linking clusters together.  HAMMER issues PUSH and PULL operations
    internally to open up space for new records and to balance the global
    B-Tree.  These operations may have the side effect of allocating
    new clusters or freeing clusters which become unused.

    B-Tree operations tend to be limited to a single cluster.  That is,
    the B-Tree insertion and deletion algorithm is not extended to the
    whole unified tree.  If insufficient space exists in a cluster HAMMER
    will allocate a new cluster, PUSH a portion of the existing
    cluster's record store to the new cluster, and link the existing
    cluster's B-Tree to the new one.

    Because B-Tree operations tend to be restricted and because HAMMER tries
    to avoid balancing clusters in the critical path, HAMMER employs a
    background process to keep the topology as a whole in balance.  One
    side effect of this is that HAMMER is fairly loose when it comes to
    inserting new clusters into the topology.

    HAMMER objects revolve around the concept of an object identifier.
    The obj_id is a 64 bit quantity which uniquely identifies a filesystem
    object for the entire life of the filesystem.  This uniqueness allows
    backups and mirrors to retain varying amounts of filesystem history by
    removing any possibility of conflict through identifier reuse.  HAMMER
    typically iterates object identifiers sequentially and expects to never
    run out.  At a creation rate of 100,000 objects per second it would
    take HAMMER around 6 million years to run out of identifier space.
    The characteristics of the HAMMER obj_id also allow HAMMER to operate
    in a multi-master clustered environment.
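The 6-million-year figure can be checked directly (a sketch; the helper
name is illustrative):

```c
/* Years until a 64 bit obj_id space is exhausted at a given sequential
 * allocation rate.  Done in floating point since 2^64 of anything
 * overflows 64 bit integer arithmetic immediately. */
double years_to_exhaust_obj_ids(double ids_per_second)
{
    double id_space = 18446744073709551616.0;   /* 2^64 */
    double seconds_per_year = 365.25 * 24 * 3600;

    return id_space / ids_per_second / seconds_per_year;
}
```

At 100,000 objects per second this works out to roughly 5.8 million
years, consistent with the text.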

    A filesystem object is made up of records.  Each record carries a
    64 bit key and a creation and deletion transaction id, all of which
    are indexed together, and references a variable-length store of
    related data.

    HAMMER utilizes a 64 bit key to index all records.  Regular files use
    (base_data_offset + data_len) as the key in a data record.  This allows
    us to use a non-ranged B-Tree search to locate and iterate through
    data records and also allows us to use variable block sizes for
    data records without regard to the stored history.
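The effect of keying data records on (base_data_offset + data_len) can be
sketched as follows: a record with key K and length L covers file offsets
[K - L, K), so the record containing a given offset is simply the first
record whose key is greater than that offset.  A minimal model, with a
linear scan standing in for the B-Tree lookup and illustrative names:

```c
#include <stddef.h>
#include <stdint.h>

/* Model of a data record: key = base_data_offset + data_len, so the
 * record covers file offsets [key - data_len, key). */
struct data_rec {
    uint64_t key;
    uint64_t data_len;
};

/* Find the record covering `offset`, or NULL for a hole.  A linear
 * scan over key-sorted records stands in for the non-ranged B-Tree
 * search described in the text. */
const struct data_rec *
find_data_rec(const struct data_rec *recs, int nrecs, uint64_t offset)
{
    for (int i = 0; i < nrecs; ++i) {
        if (recs[i].key > offset) {
            uint64_t base = recs[i].key - recs[i].data_len;

            return offset >= base ? &recs[i] : NULL;
        }
    }
    return NULL;
}
```

Note that variable block sizes fall out naturally: the search never needs
to know record lengths in advance, only that keys are sorted.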

    Directories use a namekey hash as the key and store one directory
    entry per record.  For all intents and purposes a directory can
    store an unlimited number of files.

    HAMMER is also capable of associating any number of out-of-band
    attributes with a filesystem object using a separate key space.  This
    key space may be used for extended attributes, ACLs, and anything else
    the user desires.

(III) Access to historical information

    A HAMMER filesystem can be mounted with an as-of date to access a
    snapshot of the system.  Snapshots do not have to be explicitly taken
    but are instead based on the retention policy you specify for any
    given HAMMER filesystem.  It is also possible to access individual files
    or directories (and their contents) using an as-of extension on the
    file name.

    HAMMER uses the transaction ids stored in records to present a snapshot
    view of the filesystem as-of any time in the past, with a granularity
    based on the retention policy chosen by the system administrator.  This
    feature also effectively implements file versioning.
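The as-of check supported by the record transaction ids boils down to a
simple per-record predicate, something like the following sketch (the
zero-means-never-deleted convention here is an assumption, not stated in
the text):

```c
#include <stdint.h>

/* Is a record visible in an as-of view?  A record is part of the
 * snapshot if it was created at or before the as-of transaction id and
 * had not yet been deleted.  delete_tid == 0 marks a live record here
 * (an illustrative convention). */
int
record_visible(uint64_t create_tid, uint64_t delete_tid, uint64_t asof)
{
    return create_tid <= asof && (delete_tid == 0 || asof < delete_tid);
}
```

Iterating a file's records with this predicate at successive as-of ids
is precisely the file-versioning behavior described above.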

(IV) Mirrors and Backups

    HAMMER is organized in a way that allows an information stream to be
    generated for mirroring and backup purposes.  This stream includes all
    historical information available in the source.  No queueing is required
    so there is no limit to the number of mirrors or backups you can have
    and no limit to how long any given mirror or backup can be taken offline.
    Resynchronization of the stream is not considered to be an expensive
    operation.

    Mirrors and backups are maintained logically, not physically, and may
    have their own, independent retention policies.  For example, your live
    filesystem could have a fairly rough retention policy, even none at all,
    then be streamed to an on-site backup and from there to an off-site
    backup, each with different retention policies.

(V) Transactions and Recovery

    HAMMER implements an instant-mount capability and will recover
    information on a cluster-by-cluster basis as it is being accessed.

    HAMMER numbers each record it lays down and stores a synchronization
    point in the cluster header.  Clusters are synchronously marked 'open'
    when undergoing modification.  If HAMMER encounters a cluster which is
    unexpectedly marked open it will perform a recovery operation on the
    cluster and throw away any records beyond the synchronization point.
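The recovery step amounts to truncating the record stream at the
synchronization point; a toy sketch of that pass (names are illustrative,
not HAMMER's on-disk layout):

```c
#include <stdint.h>

/* Cluster recovery sketch: keep only records numbered at or below the
 * synchronization point recorded in the cluster header; anything past
 * it was in flight when the crash hit and is discarded. */
int
recover_cluster(uint64_t *rec_nos, int nrecs, uint64_t sync_point)
{
    int keep = 0;

    for (int i = 0; i < nrecs; ++i) {
        if (rec_nos[i] <= sync_point)
            rec_nos[keep++] = rec_nos[i];
    }
    return keep;    /* number of records surviving recovery */
}
```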

    HAMMER supports a userland transactional facility.  Userland can query
    the current (filesystem wide) transaction id, issue numerous operations
    and on recovery can tell HAMMER to revert all records with a greater
    transaction id for any particular set of files.  Multiple userland
    applications can use this feature simultaneously as long as the files
    they are accessing do not overlap.  It is also possible for userland
    to set up an ordering dependency and maintain completely asynchronous
    operation while still being able to guarantee recovery to a fairly
    recent transaction id.

(VI) Database files

    HAMMER uses 64 bit keys internally and makes key-based files directly
    available to userland.  Key-based files are not regular files and do not
    operate using a normal data offset space.

    You cannot copy a database file using a regular file copier.  The
    file type will not be S_IFREG but instead will be S_IFDB.  The file
    must be opened with O_DATABASE.  Reads which normally seek the file
    forward will instead iterate through the records and lseek/qseek can
    be used to acquire or set the key prior to the read/write operation.
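The seek/read semantics described above can be modeled in a few lines:
seeking positions a cursor at a key, and each read returns the record
with the smallest key at or above the cursor, then advances past it.
This is only an in-memory sketch of the described behavior, not the
O_DATABASE interface itself:

```c
#include <stddef.h>
#include <stdint.h>

/* In-memory model of key-based file iteration.  Records are sorted by
 * key; the cursor holds the key the next read starts from. */
struct krec {
    uint64_t    key;
    const char *data;
};

struct kcursor {
    const struct krec *recs;    /* key-sorted record array */
    int                nrecs;
    uint64_t           next_key;
};

/* Model of lseek/qseek: position the cursor at a key. */
void
db_seek(struct kcursor *c, uint64_t key)
{
    c->next_key = key;
}

/* Model of read on a key-based file: return the record with the
 * smallest key >= the cursor, advancing past it; NULL at end. */
const struct krec *
db_read(struct kcursor *c)
{
    for (int i = 0; i < c->nrecs; ++i) {
        if (c->recs[i].key >= c->next_key) {
            c->next_key = c->recs[i].key + 1;
            return &c->recs[i];
        }
    }
    return NULL;
}
```

A copy made through plain byte offsets would miss the keys entirely,
which is why a regular file copier cannot duplicate an S_IFDB file.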