From: Matthew Dillon
Date: Sat, 20 Feb 2010 20:10:27 +0000 (-0800)
Subject: kernel - Update swapcache manual page
X-Git-Tag: v2.7.0~164
X-Git-Url: https://gitweb.dragonflybsd.org/dragonfly.git/commitdiff_plain/a865840aae836a8db9fca2f7ef7ffced18a34f43

kernel - Update swapcache manual page

* The manual page is still a work in progress but I'm pushing everything
  I learn about SSDs into it as I learn it. At least insofar as the
  Intel X25-V 40G SSD goes, the vendor-specified 40TB write endurance
  limit appears to assume high write amplification and significant
  inefficiencies in write patterns. The theoretical write endurance
  limit for this SSD with static wear leveling is 400TB. My expectation
  is a practical endurance somewhere between 150-250TB when configuring
  32G of swap on the 40G X25-V. The manual page will be updated as I
  get better numbers from testing.

* Specify that disklabel64 should be used when labeling an SSD, so the
  partitions are properly aligned. Kernels as of id 4921cba1f6 (late
  2.5.x) will align the partition base for virgin disklabel64 labels to
  a 1MB boundary. MLC flash uses 128K write blocks, SLC uses 64K.
  Swapcache will write in 64K clusters but also tends to issue multiple
  linear writes, leading to fairly optimal SSD operation.
---

diff --git a/share/man/man8/swapcache.8 b/share/man/man8/swapcache.8
index 4002bea26b..35228ae2c5 100644
--- a/share/man/man8/swapcache.8
+++ b/share/man/man8/swapcache.8
@@ -22,87 +22,18 @@ data and meta-data.
 .Cd sysctl vm.swapcache.maxfilesize=0
 .Cd sysctl vm.swapcache.maxburst=2000000000
 .Cd sysctl vm.swapcache.curburst=4000000000
-.Cd sysctl vm.swapcache.minburst=10000000
-.Cd sysctl vm.swapcache.read_enable=0
-.Cd sysctl vm.swapcache.meta_enable=0
-.Cd sysctl vm.swapcache.data_enable=0
-.Cd sysctl vm.swapcache.use_chflags=1
-.Cd sysctl vm.swapcache.maxlaunder=256
-.Sh DESCRIPTION
-.Nm
-is a system capability which allows a solid state disk (SSD) in a swap
-space configuration to be used to cache clean filesystem data and meta-data
-in addition to its normal function of backing anonymous memory.
-.Pp
-Sysctls are used to manage operational parameters and can be adjusted at
-any time. Typically a large initial burst is desired after system boot,
-controlled by the initial
-.Cd vm.swapcache.curburst
-parameter.
-This parameter is reduced as data is written to swap by the swapcache
-and increased at a rate specified by
-.Cd vm.swapcache.accrate .
-Once this parameter reaches zero write activity ceases until it has
-recovered sufficiently for write activity to resume.
-.Pp
-.Cd vm.swapcache.meta_enable
-enables the writing of filesystem meta-data to the swapcache. Filesystem
-metadata is any data which the filesystem accesses via the disk device
-using buffercache. Meta-data is cached globally regardless of file
-or directory flags.
-.Pp
-.Cd vm.swapcache.data_enable
-enables the writing of filesystem file-data to the swapcache. Filesystem
-filedata is any data which the filesystem accesses via a regular file.
-In technical terms, when the buffer cache is used to access a regular
-file through its vnode. Please do not blindly turn on this option,
-see the PERFORMANCE TUNING section for more information.
-.Pp
-.Cd vm.swapcache.use_chflags
-enables the use of the
-.Cm cache
-and
-.Cm noscache
-.Xr chflags 1
-flags to control which files will be data-cached.
-If this sysctl is disabled and data_enable is enabled,
-the system will ignore file flags and attempt to swapcache all
-regular files.
-.Pp
-.Cd vm.swapcache.read_enable
-enables reading from the swapcache and should be set to 1 for normal
-operation.
-.Pp
-.Cd vm.swapcache.maxfilesize
-controls which files are to be cached based on their size.
-If set to non-zero only files smaller than the specified size
-will be cached. Larger files will not be cached.
-.Sh PERFORMANCE TUNING
-Best operation is achieved when the active data set fits within the
-swapcache.
-.Pp
-.Bl -tag -width 4n -compact
-.It Cd vm.swapcache.accrate
-This specifies the burst accumulation rate in bytes per second and
-ultimately controls the write bandwidth to swap averaged over a long
-period of time.
-This parameter must be carefully chosen to manage the write endurance of
-the SSD in order to avoid wearing it out too quickly.
-Even though SSDs have limited write endurance, there is massive
-cost/performance benefit to using one in a swapcache configuration.
-.Pp
-Let's use the Intel X25V 40G MLC SATA SSD as an example. This device
-has approximately a 40TB (40 terabyte) write endurance.
+40TB (40 terabyte) write endurance, but see the later
+notes on this; it is more of a minimum value.
 Limiting the long term average bandwidth to 100K/sec leads to no more
 than ~9G/day writing which calculates approximately to a 12 year
 endurance. Endurance scales linearly with size.
 The 80G version of this SSD will have a write endurance of approximately
 80TB.
 .Pp
-MLC SSDs have approximately a 1000x write endurance, while the
-lower density higher-cost SLC SSDs have an approximately 10000x
-write endurance. MLC SSDs can be used for the swapcache (and swap)
-as long as the system manager is cognizant of its limitations.
+MLC SSDs have a 1000-10000x write endurance, while the lower density
+higher-cost SLC SSDs have an approximately 10000-100000x write endurance.
+MLC SSDs can be used for the swapcache (and swap) as long as the system
+manager is cognizant of its limitations.
 .Pp
 .It Cd vm.swapcache.meta_enable
 Turning on just
@@ -193,6 +124,14 @@ This controls the maximum amount of swapspace
 may use, in percentage terms.
 .El
 .Pp
+It is important to note that you should always use
+.Xr disklabel64 8
+to label your SSD. Disklabel64 will properly align the base of the
+partition space relative to the physical drive regardless of how badly
+aligned the fdisk slice is.
+This will significantly reduce write amplification and write combining
+inefficiencies on the SSD.
+.Pp
 Finally, interleaved swap (multiple SSDs) may be used to increase
 performance even further. A single SATA SSD is typically capable of
 reading 120-220MB/sec. Configuring two SSDs for your swap will
@@ -263,7 +202,7 @@ The system operator should configure at least 4 times the SWAP
 space versus main memory and no less than 8G of swap space.
 If a 40G SSD is used the recommendation is to configure 16G to 32G
 of swap (note: 32-bit is limited to 32G of swap by default, for 64-bit
-it is 512G of swap).
+it is 512G of swap), and to leave the remainder unwritten and unused.
 .Pp
 The
 .Cd vm_swapcache.maxswappct
@@ -322,19 +261,32 @@ a -j 8 parallel build world in a little less than twice the time
 it would take if the system had 2G of ram, whereas it would take 5x
 to 10x as long with normal HD based swap.
 .Sh WARNINGS
-SSDs have limited durability and
+I am going to repeat and expand a bit on SSD wear.
+Wear on SSDs is a function of the write durability of the cells,
+whether the SSD implements static or dynamic wear leveling, and
+write amplification effects based on the type of write activity.
+Write amplification occurs due to wasted space when the SSD must
+erase and rewrite the underlying flash blocks. For example, MLC flash
+uses 128KB erase/write blocks.
+.Pp
 .Nm
 parameters should be carefully chosen to avoid early wearout.
-For example, the Intel X25V 40G SSD has a nominal 40TB (terabyte)
-write durability.
+For example, the Intel X25V 40G SSD has a minimum write durability
+of 40TB and an actual durability that can be quite a bit higher.
 Generally speaking, you want to select parameters that will give you
-at least 5 years of service life. 10 years is a good compromise.
-.Pp
-Durability typically scales with size and also depends on the
-wear-leveling algorithm used by the device. Durability can often
-be improved by configuring less space (in a manufacturer-fresh drive)
-than the drive's capacity. For example, by only using 32G of a 40G
-SSD. SSDs typically implement 10% more storage than advertised and
+at least 10 years of service life.
+The most important parameter to control this is
+.Cd vm.swapcache.accrate .
+.Nm
+uses a very conservative 100KB/sec default but even a small X25V
+can probably handle 300KB/sec of continuous writing and still last
+10 years.
+.Pp
+Depending on the wear leveling algorithm the drive uses, durability
+and performance can sometimes be improved by configuring less
+space (in a manufacturer-fresh drive) than the drive's probed capacity.
+For example, by only using 32G of a 40G SSD.
+SSDs typically implement 10% more storage than advertised and
 use this storage to improve wear leveling. As cells begin to fail
 this overallotment slowly becomes part of the primary storage until
 it has been exhausted. After that the SSD has basically failed.
@@ -351,10 +303,10 @@ for swap.
 (from pkgsrc's sysutils/smartmontools) may be used to retrieve the
 wear indicator from the drive. One usually runs something like
 'smartctl -d sat -a /dev/daXX'
-(for AHCI/SILI/SCSI), or 'smartctl -a /dev/adXX' for NATA. Many SSDs
-will brick the SATA port when smart operations are done while the drive
-is busy with normal activity, so the tool should only be run when the
-SSD is idle.
+(for AHCI/SILI/SCSI), or 'smartctl -a /dev/adXX' for NATA. Some SSDs
+(particularly the Intels) will brick the SATA port when smart operations
+are done while the drive is busy with normal activity, so the tool should
+only be run when the SSD is idle.
 .Pp
 ID 232 (0xe8) in the SMART data dump indicates available reserved
 space and ID 233 (0xe9) is the wear-out meter. Reserved space
@@ -362,26 +314,43 @@ typically starts at 100 and decrements to 10, after which the SSD
 is considered to operate in a degraded mode. The wear-out meter
 typically starts at 99 and decrements to 0, after which the SSD has
 failed.
-Wear on SSDs is a function only of the write durability which is
-essentially just the total aggregate sectors written.
+.Pp
+.Nm
+tends to use large 64K writes and tends to cluster multiple writes
+linearly. The SSD is able to take significant advantage of this
+and write amplification effects are greatly reduced. If we
+take a 40G Intel X25V as an example, the vendor specifies a write
+durability of approximately 40TB, but
 .Nm
-tends to use large 64K writes as well as operates in a bursty fashion
-which the SSD is able to take significant advantage of.
-Power-on hours, power cycles, and read operations do not really affect wear.
+should be able to squeeze out upwards of 200TB due to the fairly optimal
+write clustering it does.
+The theoretical limit for the Intel X25V is 400TB (10,000 erase cycles
+per MLC cell, 40G drive), but the firmware doesn't do perfect static
+wear leveling so the actual durability is less.
+.Pp
+In contrast, most filesystems directly stored on an SSD have
+fairly severe write amplification effects and will have durabilities
+ranging closer to the vendor-specified limit.
+Power-on hours, power cycles, and read operations do not really affect
+wear.
 .Pp
 SSD's with MLC-based flash technology are high-density, low-cost solutions
 with limited write durability. SLC-based flash technology is a low-density,
 higher-cost solution with 10x the write durability as MLC. The durability
-also scales with the amount of flash storage, with SLC based flash typically
+also scales with the amount of flash storage. SLC based flash is typically
 twice as expensive per gigabyte. From a cost perspective, SLC based flash
 is at least 5x more cost effective in situations where high write
-bandwidths are required (lasting 10x longer). MLC is at least 2x more
-cost effective in situations where high write bandwidth is not required.
-When wear calculations are in years, these differences become huge.
+bandwidths are required (because it lasts 10x longer). MLC is at least
+2x more cost effective in situations where high write bandwidth is not
+required.
+When wear calculations are in years, these differences become huge, but
+often the quantity of storage needed trumps the wear life so we expect most
+people will be using MLC.
 .Nm
 is usable with both technologies.
 .Sh SEE ALSO
 .Xr swapon 8 ,
+.Xr disklabel64 8 ,
 .Xr fstab 5
 .Sh HISTORY
 .Nm
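
For reference, the endurance figures quoted above reduce to simple
arithmetic: rated (or estimated) write endurance divided by the long
term average write rate configured via vm.swapcache.accrate. Below is a
minimal Python sketch of that calculation, assuming decimal units
(1K = 1000 bytes) and reusing only the numbers quoted in the text; the
service_life_years() helper is a hypothetical illustration, not
something provided by the manual page or the system.

# Rough service-life estimate for an SSD used as swapcache, built from
# the figures quoted in the text above. Decimal units (1K = 1000 bytes)
# are assumed; real wear also depends on write amplification and on the
# quality of the drive's wear leveling, so treat the output as a
# ballpark only.

def service_life_years(endurance_tb, accrate_kb_per_sec):
    """Years until endurance_tb of writes is consumed at a constant
    average write rate of accrate_kb_per_sec (vm.swapcache.accrate)."""
    bytes_per_day = accrate_kb_per_sec * 1000 * 86400
    return endurance_tb * 1000**4 / bytes_per_day / 365

# Vendor-rated 40TB at the default 100KB/sec accrate: just under 9G/day
# of writing, which is where the manual page's "12 year endurance"
# figure comes from.
print(service_life_years(40, 100))     # ~12.7 years

# The commit message's practical estimate of 150TB still clears a
# 10 year service life even at a more aggressive 300KB/sec accrate.
print(service_life_years(150, 300))    # ~15.9 years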