.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.Nd configure ZFS storage pools
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
for information on managing datasets.
For an overview of creating and managing ZFS storage pools see the
.Xr zpoolconcepts 8
manual page.
All subcommands that modify state are logged persistently to the pool in their
original form.
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
Displays a help message.
Displays the software version of the
.Nm
userland utility and the zfs kernel module.
Creates a new storage pool containing the virtual devices specified on the
command line.
.It Xr zpool-initialize 8
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
.It Xr zpool-destroy 8
Destroys the given pool, freeing up any devices for other use.
.It Xr zpool-labelclear 8
Removes ZFS label information from the specified
.Ar device .
Increases or decreases redundancy by
.Cm attach Ns -ing or
.Cm detach Ns -ing a device on an existing vdev (virtual device).
Adds the specified virtual devices to the given pool,
or removes the specified device from the pool.
.It Xr zpool-replace 8
Replaces an existing device (which may be faulted) with a new one.
Creates a new pool by splitting all mirrors in an existing pool (which decreases its redundancy).
Available pool properties listed in the
.Xr zpoolprops 8
manual page.
Lists the given pools along with a health status and space usage.
Retrieves the given list of properties
for the specified storage pool(s).
.It Xr zpool-status 8
Displays the detailed health status for the given pools.
.It Xr zpool-iostat 8
Displays logical I/O statistics for the given pools/vdevs.
Physical I/Os may be observed via
.Xr iostat 1 .
.It Xr zpool-events 8
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare.
For more information about the subclasses and event payloads
that can be generated see the
.Xr zfs-events 5
man page.
.It Xr zpool-history 8
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
Begins a scrub or resumes a paused scrub.
.It Xr zpool-checkpoint 8
Checkpoints the current state of
.Ar pool ,
which can be later restored by
.Nm zpool Cm import --rewind-to-checkpoint .
Initiates an immediate on-demand TRIM operation for all of the free space in
a pool.
This operation informs the underlying storage devices of all blocks
in the pool which are no longer allocated and allows thinly provisioned
devices to reclaim the space.
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
.It Xr zpool-upgrade 8
Manages the on-disk format version of storage pools.
Waits until all background activity of the given types has ceased in the given
pool.
Takes the specified physical device offline or brings it online.
.It Xr zpool-resilver 8
Starts a resilver.
If an existing resilver is already running it will be
restarted from the beginning.
.It Xr zpool-reopen 8
Reopens all the vdevs associated with the pool.
Clears device errors in a pool.
.It Xr zpool-import 8
Makes disks containing ZFS storage pools available for use on the system.
.It Xr zpool-export 8
Exports the given pools from the system.
.It Xr zpool-reguid 8
Generates a new unique identifier for the pool.
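As a concrete illustration of the checkpoint workflow described above, the sequence below checkpoints a pool and later rewinds to it. This is a hedged sketch: the pool name tank is illustrative, and the commands require a live ZFS pool and root privileges.

```shell
# Take a checkpoint of the pool's current state (pool name is illustrative).
zpool checkpoint tank

# ... make risky changes, e.g. removing devices or changing properties ...

# To discard those changes, export the pool and re-import it
# rewound to the checkpointed state.
zpool export tank
zpool import --rewind-to-checkpoint tank
```

Rewinding discards all changes made after the checkpoint was taken, so it is typically paired with a maintenance window.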
The following exit values are returned:
Successful completion.
Invalid command line options were specified.
.It Sy Example 1 No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks.
# zpool create tank raidz sda sdb sdc sdd sde sdf
.It Sy Example 2 No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks.
# zpool create tank mirror sda sdb mirror sdc sdd
.It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
The following command creates an unmirrored pool using two disk partitions.
# zpool create tank sda1 sdb2
.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
# zpool create tank /path/to/file/a /path/to/file/b
.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Ar tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
# zpool add tank mirror sda sdb
.It Sy Example 6 No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Ar zion
is faulted due to a missing device.
The results from this command are similar to the following:
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.It Sy Example 7 No Destroying a ZFS Storage Pool
The following command destroys the pool
.Ar tank
and any datasets contained within.
# zpool destroy -f tank
.It Sy Example 8 No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Ar tank
so that they can be relocated or later imported.
.It Sy Example 9 No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Ar tank
for use on the system.
The results from this command are similar to the following:
     id: 15451357997522795478
 action: The pool can be imported using its name or numeric identifier.
.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
the software:
# zpool upgrade -a
This system is currently running ZFS version 2.
.It Sy Example 11 No Managing Hot Spares
The following command creates a new pool with an available hot spare:
# zpool create tank mirror sda sdb spare sdc
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
# zpool replace tank sda sdd
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
# zpool remove tank sdc
.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two, two-way
mirrors and mirrored log devices:
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
    sde sdf
.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
# zpool add pool cache sdc sdd
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
# zpool iostat -v pool 5
.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
Given this configuration:
 scrub: none requested
        NAME        STATE     READ WRITE CKSUM
          mirror-0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
The command to remove the mirrored log
.Sy mirror-2
is:
# zpool remove tank mirror-2
The command to remove the mirrored data
.Sy mirror-1
is:
# zpool remove tank mirror-1
.It Sy Example 15 No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Ar data .
This pool is composed of a single raidz vdev where one of its devices
increased its capacity by 10GB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
.It Sy Example 16 No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
# zpool status -c vendor,model,size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
# zpool iostat -vc size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZFS_ABORT"
to dump core on exit for the purposes of running
.Bl -tag -width "ZFS_COLOR"
.Bl -tag -width "ZPOOL_IMPORT_PATH"
.It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm
looks for device nodes and files.
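For example, device scanning during import can be restricted to stable by-id names. This is a hedged sketch; the directory list and pool name are illustrative.

```shell
# Search only these directories for device nodes when importing
# (the directories are illustrative; adjust for your system).
export ZPOOL_IMPORT_PATH=/dev/disk/by-id:/dev

# A subsequent import would scan only the directories listed above, e.g.:
#   zpool import tank
echo "$ZPOOL_IMPORT_PATH"
```

Using by-id names first keeps device naming stable across reboots and controller re-enumeration.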
.Bl -tag -width "ZPOOL_IMPORT_UDEV_TIMEOUT_MS"
.It Ev ZPOOL_IMPORT_UDEV_TIMEOUT_MS
The maximum time in milliseconds that
.Nm zpool import
will wait for an expected device to be available.
.Bl -tag -width "ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE"
.It Ev ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
If set, suppress warning about non-native vdev ashift in
.Nm zpool Cm status .
The value is not used, only the presence or absence of the variable matters.
.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
.It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev guids by default.
This behavior is identical to the
.Nm zpool status Fl g
command line option.
.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool status Fl L
command line option.
.Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
.It Ev ZPOOL_VDEV_NAME_PATH
Cause
.Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool status Fl P
command line option.
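As an illustration, the naming variables above can be exported once so that every zpool invocation in a session uses the same vdev naming. This is a hedged sketch; treat the values as illustrative, since typically only the presence of these variables matters.

```shell
# Show full /dev/... vdev paths by default (equivalent to the -P option).
export ZPOOL_VDEV_NAME_PATH=1
# Resolve symlinks for vdev names by default (equivalent to the -L option).
export ZPOOL_VDEV_NAME_FOLLOW_LINKS=1

# zpool status and zpool iostat would now use these naming defaults.
echo "$ZPOOL_VDEV_NAME_PATH"
```

This avoids passing the corresponding flags on every invocation, which is convenient in scripts that parse zpool output.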
.Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
.It Ev ZFS_VDEV_DEVID_OPT_OUT
Older OpenZFS implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value is present in the pool's config.
For example, a pool that originated on the illumos platform would have a devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
A pool can be stripped of any
.Sy devid
values on import or prevented from adding
them on
.Nm zpool Cm create
or
.Nm zpool Cm add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
.It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run the
.Nm zpool status/iostat
with the
.Fl c
option.
Normally, only unprivileged users are allowed to run
.Fl c .
.Bl -tag -width "ZPOOL_SCRIPTS_PATH"
.It Ev ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool status/iostat
with the
.Fl c
option.
This is a colon-separated list of directories and overrides the default
.Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
.It Ev ZPOOL_SCRIPTS_ENABLED
Allow the user to run
.Nm zpool status/iostat
with the
.Fl c
option.
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool status/iostat -c .
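Putting the script-related variables together, a session could point the
.Fl c
machinery at a private script directory. This is a hedged sketch; the directory and script name are illustrative.

```shell
# Look for -c scripts in a private directory instead of the defaults
# (the path is illustrative).
export ZPOOL_SCRIPTS_PATH="$HOME/zpool-scripts"
# Permit running -c scripts even as a privileged user.
export ZPOOL_SCRIPTS_AS_ROOT=1

# zpool status -c myscript   # would search $ZPOOL_SCRIPTS_PATH for "myscript"
echo "$ZPOOL_SCRIPTS_PATH"
```

Keeping site-local scripts in a dedicated directory separates them from the distribution-provided zpool.d scripts.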
.Sh INTERFACE STABILITY
.Xr zfs-module-parameters 5 ,
.Xr zpool-features 5 ,
.Xr zpool-checkpoint 8 ,
.Xr zpool-destroy 8 ,
.Xr zpool-history 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-labelclear 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8 ,
.Xr zpool-upgrade 8 ,
.Xr zpoolconcepts 8 ,