.\" Copyright (c) 2001 Matthew Dillon. Terms and conditions are those of
.\" the BSD Copyright as specified in the file "/usr/src/COPYRIGHT" in
.\" $FreeBSD: src/share/man/man7/tuning.7,v 1.1.2.30 2002/12/17 19:32:08 dillon Exp $
.\" $DragonFly: src/share/man/man7/tuning.7,v 1.6 2005/12/10 00:22:29 swildner Exp $
.Nd performance tuning under
.Sh SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP
to lay out your filesystems on a hard disk it is important to remember
that hard drives can transfer data much more quickly from outer tracks
than they can from inner tracks.
To take advantage of this you should
try to pack your smaller filesystems and swap closer to the outer tracks,
follow with the larger filesystems, and end with the largest filesystems.
It is also important to size system standard filesystems such that you
will not be forced to resize them later as you scale the machine up.
I usually create, in order, a 128M root, 1G swap, 128M
and use any remaining space for
You should typically size your swap space to approximately 2x main memory.
If you do not have a lot of RAM, though, you will generally want a lot
It is not recommended that you configure any less than
256M of swap on a system and you should keep in mind future memory
expansion when sizing the swap partition.
The kernel's VM paging algorithms are tuned to perform best when there is
at least 2x swap versus main memory.
Configuring too little swap can lead
to inefficiencies in the VM page scanning code as well as create issues
later on if you add more memory to your machine.
Finally, on larger systems
with multiple SCSI disks (or multiple IDE disks operating on different
controllers), we strongly recommend that you configure swap on each drive
The swap partitions on the drives should be approximately the same size.
The kernel can handle arbitrary sizes but
internal data structures scale to 4 times the largest swap partition.
the swap partitions near the same size will allow the kernel to optimally
stripe swap space across the N disks.
Do not worry about overdoing it a
little; swap space is the saving grace of
and even if you do not normally use much swap, it can give you more time to
recover from a runaway program before being forced to reboot.
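As an illustration of the layout described above, a hypothetical disklabel for a machine with 1G of RAM might look like the following sketch (device letters, sizes, and ordering are examples only, with swap sized per the 2x rule rather than taken from any particular system):

```
# partition  size    mount point   notes
  a:         128M    /             root; outermost tracks
  b:         2G      swap          approximately 2x main memory
  d:         128M    /var
  e:         128M    /var/tmp
  f:         3G      /usr
  g:         (rest)  /home         remainder of the disk
```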
partition depends heavily on what you intend to use the machine for.
partition is primarily used to hold mailboxes, the print spool, and log
its own partition (but except for extreme cases it is not worth the waste
If your machine is intended to act as a mail
or you are running a heavily visited web server, you should consider
creating a much larger partition \(en perhaps a gig or more.
to underestimate log file storage requirements.
depends on the kind of temporary file usage you think you will need.
the minimum we recommend.
Also note that sysinstall will create a
Dedicating a partition for temporary file storage is important for
two reasons: first, it reduces the possibility of filesystem corruption
in a crash, and second it reduces the chance of a runaway process that
.Oo Pa /var Oc Ns Pa /tmp
from blowing up more critical subsystems (mail,
.Oo Pa /var Oc Ns Pa /tmp
is a very common problem to have.
In the old days there were differences between
but the introduction of
led to massive confusion
by program writers so today programs haphazardly use one or the
other and thus no real distinction can be made between the two.
So it makes sense to have just one temporary directory and
softlink to it from the other tmp directory locations.
the one thing you do not want to do is leave it sitting
on the root partition where it might cause root to fill up or possibly
corrupt root in a crash/reboot situation.
partition holds the bulk of the files required to support the system and
a subdirectory within it called
holds the bulk of the files installed from the
If you do not use ports all that much and do not intend to keep
on the machine, you can get away with
However, if you install a lot of ports
(especially window managers and Linux-emulated binaries), we recommend
at least a 2 gigabyte
and if you also intend to keep system source
on the machine, we recommend a 3 gigabyte
Do not underestimate the
amount of space you will need in this partition; it can creep up and
partition is typically used to hold user-specific data.
I usually size it to the remainder of the disk.
Why partition at all?
Why not create one big
partition and be done with it?
Then I do not have to worry about undersizing things!
Well, there are several reasons this is not a good idea.
each partition has different operational characteristics and separating them
allows the filesystem to tune itself to those characteristics.
partitions are read-mostly, with very little writing, while
a lot of reading and writing could occur in
partitioning your system fragmentation introduced in the smaller more
heavily write-loaded partitions will not bleed over into the mostly-read
Additionally, keeping the write-loaded partitions closer to
the edge of the disk (i.e.\& before the really big partitions instead of after
in the partition table) will increase I/O performance in the partitions
where you need it the most.
Now it is true that you might also need I/O
performance in the larger partitions, but they are so large that shifting
them more towards the edge of the disk will not lead to a significant
performance improvement whereas moving
to the edge can have a huge impact.
Finally, there are safety concerns.
Having a small neat root partition that
is essentially read-only gives it a greater chance of surviving a bad crash
Properly partitioning your system also allows you to tune
requires more experience but can lead to significant improvements in
There are three parameters that are relatively safe to tune:
.Em blocksize , bytes/i-node ,
.Em cylinders/group .
performs best when using 8K or 16K filesystem block sizes.
The default filesystem block size is 16K,
which provides best performance for most applications,
with the exception of those that perform random access on large files
(such as database server software).
Such applications tend to perform better with a smaller block size,
although modern disk characteristics are such that the performance
gain from using a smaller block size may not be worth consideration.
Using a block size larger than 16K
can cause fragmentation of the buffer cache and
lead to lower performance.
The defaults may be unsuitable
for a filesystem that requires a very large number of i-nodes
or is intended to hold a large number of very small files.
Such a filesystem should be created with an 8K or 4K block size.
This also requires you to specify a smaller
We recommend always using a fragment size that is 1/8
the block size (less testing has been done on other fragment size factors).
options for this would be
.Dq Li "newfs -f 1024 -b 8192 ..." .
If a large partition is intended to be used to hold fewer, larger files, such
as database files, you can increase the
ratio which reduces the number of i-nodes (maximum number of files and
directories that can be created) for that partition.
Decreasing the number
of i-nodes in a filesystem can greatly reduce
recovery times after a crash.
Do not use this option
unless you are actually storing large files on the partition, because if you
overcompensate you can wind up with a filesystem that has lots of free
space remaining but cannot accommodate any more files.
Using 32768, 65536, or 262144 bytes/i-node is recommended.
You can go higher but
it will have only incremental effects on
.Dq Li "newfs -i 32768 ..." .
may be used to further tune a filesystem.
This command can be run in
single-user mode without having to reformat the filesystem.
However, this is possibly the most abused program in the system.
Many people attempt to
increase available filesystem space by setting the min-free percentage to 0.
This can lead to severe filesystem fragmentation and we do not recommend
option worthwhile here is turning on
.Dq Li "tunefs -n enable /filesystem" .
and later, softupdates can be turned on using the
will typically enable softupdates automatically for non-root filesystems).
Softupdates drastically improves meta-data performance, mainly file
creation and deletion.
We recommend enabling softupdates on most filesystems; however, there
are two limitations to softupdates that you should be aware of when
determining whether to use it on a filesystem.
First, softupdates guarantees filesystem consistency in the
case of a crash but could very easily be several seconds (even a minute!)
behind on pending writes to the physical disk.
If you crash you may lose more work
Secondly, softupdates delays the freeing of filesystem
If you have a filesystem (such as the root filesystem) which is
close to full, doing a major update of it, e.g.\&
.Dq Li "make installworld" ,
can run it out of space and cause the update to fail.
For this reason, softupdates will not be enabled on the root filesystem
during a typical install.
There is no loss of performance since the root
filesystem is rarely written to.
options exist that can help you tune the system.
The most obvious and most dangerous one is
Do not ever use it; it is far too dangerous.
A less dangerous and more
filesystems normally update the last-accessed time of a file or
directory whenever it is accessed.
This operation is handled in
with a delayed write and normally does not create a burden on the system.
However, if your system is accessing a huge number of files on a continuing
basis the buffer cache can wind up getting polluted with atime updates,
creating a burden on the system.
For example, if you are running a heavily
loaded web site, or a news server with lots of readers, you might want to
consider turning off atime updates on your larger partitions with this
However, you should not gratuitously turn off atime
filesystem customarily
holds mailboxes, and atime (in combination with mtime) is used to
determine whether a mailbox has new mail.
You might as well leave
atime turned on for mostly read-only partitions such as
This is especially useful for
since some system utilities
use the atime field for reporting.
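For example, a hypothetical fstab entry mounting a large, heavily accessed partition with atime updates disabled might look like the following sketch (the device name is illustrative only):

```
# Device       Mountpoint  FStype  Options      Dump  Pass#
/dev/ad0s1f    /usr        ufs     rw,noatime   2     2
```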
In larger systems you can stripe partitions from several drives together
to create a much larger overall partition.
Striping can also improve
the performance of a filesystem by splitting I/O operations across two
utilities may be used to create simple striped filesystems.
speaking, striping smaller partitions such as the root and
or essentially read-only partitions such as
is a complete waste of time.
You should only stripe partitions that require serious I/O performance,
or custom partitions used to hold databases and web pages.
Choosing the proper stripe size is also
Filesystems tend to store meta-data on power-of-2 boundaries
and you usually want to reduce seeking rather than increase seeking.
means you want to use a large off-center stripe size such as 1152 sectors
so sequential I/O does not seek both disks and so meta-data is distributed
across both disks rather than concentrated on a single disk.
you really need to get sophisticated, we recommend using a real hardware
RAID controller from the list of
supported controllers.
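As a sketch of the stripe-size advice above, a simple two-disk stripe using the suggested 1152-sector interleave might be created along these lines, assuming the ccd(4) concatenated-disk driver and two hypothetical disks (device names are examples; check ccdconfig(8) before use):

```
# interleave of 1152 sectors across two hypothetical disks
ccdconfig ccd0 1152 none /dev/ad0s1e /dev/ad1s1e
disklabel -r -w ccd0 auto
newfs /dev/ccd0c
```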
variables permit system behavior to be monitored and controlled at
Some sysctls simply report on the behavior of the system; others allow
the system behavior to be modified;
some may be set at boot time using
but most will be set via
There are several hundred sysctls in the system, including many that appear
to be candidates for tuning but actually are not.
In this document we will only cover the ones that have the greatest effect
.Va kern.ipc.shm_use_phys
sysctl defaults to 0 (off) and may be set to 0 (off) or 1 (on).
this parameter to 1 will cause all System V shared memory segments to be
mapped to unpageable physical RAM.
This feature only has an effect if you
are either (A) mapping small amounts of shared memory across many (hundreds)
of processes, or (B) mapping large amounts of shared memory across any
This feature allows the kernel to remove a great deal
of internal memory management page-tracking overhead at the cost of wiring
the shared memory into core, making it unswappable.
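If, for example, you run a large database that maps a big System V shared memory segment, the setting can be made persistent in /etc/sysctl.conf; this is only a sketch, and you should do it only if you have the RAM to spare, since the wired memory becomes unswappable:

```
# /etc/sysctl.conf -- wire SysV shared memory into physical RAM
kern.ipc.shm_use_phys=1
```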
.Va vfs.vmiodirenable
sysctl defaults to 1 (on).
This parameter controls how directories are cached
Most directories are small and use but a single fragment
(typically 1K) in the filesystem and even less (typically 512 bytes) in
However, when operating in the default mode the buffer
cache will only cache a fixed number of directories even if you have a huge
Turning on this sysctl allows the buffer cache to use
the VM Page Cache to cache the directories.
The advantage is that all of
your memory is now available for caching directories.
The disadvantage is that
the minimum in-core memory used to cache a directory is the physical page
size (typically 4K) rather than 512 bytes.
We recommend turning this option off in memory-constrained environments;
however, when on, it will substantially improve the performance of services
that manipulate a large number of files.
Such services can include web caches, large mail systems, and news systems.
Turning on this option will generally not reduce performance even with the
wasted memory but you should experiment to find out.
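On a file-heavy server the setting can be pinned in /etc/sysctl.conf; this sketch simply states the default, which you would flip to 0 only in a memory-constrained environment:

```
# /etc/sysctl.conf -- cache directories via the VM page cache (the default)
vfs.vmiodirenable=1
```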
sysctl defaults to 1 (on). This tells the filesystem to issue media
writes as full clusters are collected, which typically occurs when writing
large sequential files. The idea is to avoid saturating the buffer
cache with dirty buffers when it would not benefit I/O performance. However,
this may stall processes and under certain circumstances you may wish to turn
.Va vfs.hirunningspace
sysctl determines how much outstanding write I/O may be queued to
disk controllers system wide at any given instance. The default is
usually sufficient but on machines with lots of disks you may want to bump
it up to four or five megabytes. Note that setting too high a value
(exceeding the buffer cache's write threshold) can lead to extremely
bad clustering performance. Do not set this value arbitrarily high! Also,
higher write queueing values may add latency to reads occurring at the same
There are various other buffer-cache and VM page cache related sysctls.
We do not recommend modifying these values.
the VM system does an extremely good job tuning itself.
.Va net.inet.tcp.sendspace
.Va net.inet.tcp.recvspace
sysctls are of particular interest if you are running network intensive
They control the amount of send and receive buffer space
allowed for any given TCP connection.
The default sending buffer is 32K; the default receiving buffer
improve bandwidth utilization by increasing the default at the cost of
eating up more kernel memory for each connection.
increasing the defaults if you are serving hundreds or thousands of
simultaneous connections because it is possible to quickly run the system
out of memory due to stalled connections building up.
high bandwidth over a smaller number of connections, especially if you have
gigabit Ethernet, increasing these defaults can make a huge difference.
You can adjust the buffer size for incoming and outgoing data separately.
For example, if your machine is primarily doing web serving you may want
to decrease the recvspace in order to be able to increase the
sendspace without eating too much kernel memory.
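The web-serving trade-off described above might, for instance, be expressed in /etc/sysctl.conf as follows; the exact values are illustrative, not recommendations for any particular workload:

```
# /etc/sysctl.conf -- favor outgoing data on a hypothetical web server
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=16384
```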
Note that the routing table (see
can be used to introduce route-specific send and receive buffer size
As an additional management tool you can use pipes in your
to limit the bandwidth going to or from particular IP blocks or ports.
For example, if you have a T1 you might want to limit your web traffic
to 70% of the T1's bandwidth in order to leave the remainder available
for mail and interactive use.
Normally a heavily loaded web server
will not introduce significant latencies into other services even if
the network link is maxed out, but enforcing a limit can smooth things
out and lead to longer term stability.
Many people also enforce artificial
bandwidth limitations in order to ensure that they are not charged for
using too much bandwidth.
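As a sketch of the T1 example above, a dummynet pipe limiting outgoing web traffic to roughly 70% of a T1 (1.544Mbit/s x 0.7, about 1080Kbit/s) might be configured like this, assuming a kernel with ipfw and dummynet support:

```
# create a pipe limited to ~70% of a T1
ipfw pipe 1 config bw 1080Kbit/s
# send outgoing HTTP traffic through the pipe
ipfw add pipe 1 tcp from any 80 to any out
```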
Setting the send or receive TCP buffer to values larger than 65535 will result
in only a marginal performance improvement unless both hosts support the window
scaling extension of the TCP protocol, which is controlled by the
.Va net.inet.tcp.rfc1323
These extensions should be enabled and the TCP buffer size should be set
to a value larger than 65536 in order to obtain good performance from
certain types of network links; specifically, gigabit WAN links and
high-latency satellite links.
RFC1323 support is enabled by default.
.Va net.inet.tcp.always_keepalive
sysctl determines whether or not the TCP implementation should attempt
to detect dead TCP connections by intermittently delivering
By default, this is enabled for all applications; by setting this
sysctl to 0, only applications that specifically request keepalives
In most environments, TCP keepalives will improve the management of
system state by expiring dead TCP connections, particularly for
systems serving dialup users who may not always terminate individual
TCP connections before disconnecting from the network.
However, in some environments, temporary network outages may be
incorrectly identified as dead sessions, resulting in unexpectedly
terminated TCP connections.
In such environments, setting the sysctl to 0 may reduce the occurrence of
TCP session disconnections.
.Va net.inet.tcp.delayed_ack
TCP feature is largely misunderstood. Historically speaking, this feature
was designed to allow the acknowledgement of transmitted data to be returned
along with the response. For example, when you type over a remote shell
the acknowledgement of the character you send can be returned along with the
data representing the echo of the character. With delayed acks turned off
the acknowledgement may be sent in its own packet before the remote service
has a chance to echo the data it just received. This same concept also
applies to any interactive protocol (e.g. SMTP, WWW, POP3) and can cut the
number of tiny packets flowing across the network in half. The
delayed-ack implementation also follows the TCP protocol rule that
at least every other packet be acknowledged even if the standard 100ms
timeout has not yet passed. Normally the worst a delayed ack can do is
slightly delay the teardown of a connection, or slightly delay the ramp-up
of a slow-start TCP connection. While we are not sure, we believe that
the several FAQs related to packages such as SAMBA and SQUID which advise
turning off delayed acks may be referring to the slow-start issue. In
it would be more beneficial to increase the slow-start flightsize via
.Va net.inet.tcp.slowstart_flightsize
sysctl rather than disable delayed acks.
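If the slow-start ramp-up really is the problem, the alternative suggested above can be sketched in /etc/sysctl.conf; the value here is purely illustrative:

```
# /etc/sysctl.conf -- larger initial flight size instead of
# disabling delayed acks
net.inet.tcp.slowstart_flightsize=4
```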
.Va net.inet.tcp.inflight_enable
sysctl turns on bandwidth delay product limiting for all TCP connections.
The system will attempt to calculate the bandwidth delay product for each
connection and limit the amount of data queued to the network to just the
amount required to maintain optimum throughput. This feature is useful
if you are serving data over modems, GigE, or high speed WAN links (or
any other link with a high bandwidth*delay product), especially if you are
also using window scaling or have configured a large send window. If
you enable this option you should also be sure to set
.Va net.inet.tcp.inflight_debug
to 0 (disable debugging), and for production use setting
.Va net.inet.tcp.inflight_min
to at least 6144 may be beneficial. Note, however, that setting high
minimums may effectively disable bandwidth limiting depending on the link.
The limiting feature reduces the amount of data built up in intermediate
router and switch packet queues as well as reduces the amount of data built
up in the local host's interface queue. With fewer packets queued up,
interactive connections, especially over slow modems, will also be able
to operate with lower round trip times. However, note that this feature
only affects data transmission (uploading / server-side). It does not
affect data reception (downloading).
.Va net.inet.tcp.inflight_stab
This parameter defaults to 20, representing 2 maximal packets added
to the bandwidth delay product window calculation. The additional
window is required to stabilize the algorithm and improve responsiveness
to changing conditions, but it can also result in higher ping times
over slow links (though still much lower than you would get without
the inflight algorithm). In such cases you may
wish to try reducing this parameter to 15, 10, or 5, and you may also
.Va net.inet.tcp.inflight_min
(for example, to 3500) to get the desired effect. Reducing these parameters
should be done as a last resort only.
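Putting the inflight recommendations together, a production configuration might look like this sketch in /etc/sysctl.conf:

```
# /etc/sysctl.conf -- bandwidth delay product limiting for production use
net.inet.tcp.inflight_enable=1
net.inet.tcp.inflight_debug=0
net.inet.tcp.inflight_min=6144
```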
.Va net.inet.ip.portrange.*
sysctls control the port number ranges automatically bound to TCP and UDP
sockets. There are three ranges: a low range, a default range, and a
high range, selectable via an IP_PORTRANGE setsockopt() call. Most
network programs use the default range which is controlled by
.Va net.inet.ip.portrange.first
.Va net.inet.ip.portrange.last ,
which default to 1024 and 5000, respectively. Bound port ranges are
used for outgoing connections and it is possible to run the system out
of ports under certain circumstances. This most commonly occurs when you are
running a heavily loaded web proxy. The port range is not an issue
when running servers which handle mainly incoming connections, such as a
normal web server, or which have a limited number of outgoing connections, such
as a mail relay. For situations where you may run yourself out of
ports we recommend increasing
.Va net.inet.ip.portrange.last
modestly. A value of 10000 or 20000 or 30000 may be reasonable. You should
also consider firewall effects when changing the port range. Some firewalls
may block large ranges of ports (usually low-numbered ports) and expect systems
to use higher ranges of ports for outgoing connections. For this reason
we do not recommend that
.Va net.inet.ip.portrange.first
.Va kern.ipc.somaxconn
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new
connections in a heavily loaded web server environment.
For such environments,
we recommend increasing this value to 1024 or higher.
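For a busy web server, the recommendation above might be applied in /etc/sysctl.conf like so:

```
# /etc/sysctl.conf -- deeper listen queue for heavily loaded servers
kern.ipc.somaxconn=1024
```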
may itself limit the listen queue size (e.g.\&
often have a directive in its configuration file to adjust the queue size up.
Larger listen queues also do a better job of fending off denial of service
sysctl determines how many open files the system supports.
typically a few thousand but you may need to bump this up to ten or twenty
thousand if you are running databases or large descriptor-heavy daemons.
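A descriptor-heavy machine might raise the limit along these lines; the value is an example, not a recommendation for any particular workload:

```
# /etc/sysctl.conf -- allow more open files system-wide
kern.maxfiles=16384
```

The current usage can then be compared against the limit at run-time with sysctl, as described below.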
sysctl may be interrogated to determine the current number of open files
.Va vm.swap_idle_enabled
sysctl is useful in large multi-user systems where you have lots of users
entering and leaving the system and lots of idle processes.
tend to generate a great deal of continuous pressure on free memory reserves.
Turning this feature on and adjusting the swapout hysteresis (in idle
.Va vm.swap_idle_threshold1
.Va vm.swap_idle_threshold2
allows you to depress the priority of pages associated with idle processes
more quickly than the normal pageout algorithm.
This gives a helping hand
to the pageout daemon.
Do not turn this option on unless you need it,
because the tradeoff you are making is to essentially pre-page memory sooner
rather than later, eating more swap and disk bandwidth.
this option will have a detrimental effect but in a large system that is
already doing moderate paging this option allows the VM system to stage
whole processes into and out of memory more easily.
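On a large multi-user system that is already paging moderately, the feature can be enabled persistently; a sketch:

```
# /etc/sysctl.conf -- swap out idle processes more aggressively
vm.swap_idle_enabled=1
```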
Some aspects of the system behavior may not be tunable at runtime because
memory allocations they perform must occur early in the boot process.
To change loader tunables, you must set their values in
and reboot the system.
controls the scaling of a number of static system tables, including defaults
for the maximum number of open files, sizing of network memory resources, etc.
is automatically sized at boot based on the amount of memory available in
the system, and may be determined at run-time by inspecting the value of the
Some sites will require larger or smaller values of
and may set it as a loader tunable; values of 64, 128, and 256 are not
We do not recommend going above 256 unless you need a huge number
of file descriptors; many of the tunable values set to their defaults by
may be individually overridden at boot-time or run-time as described
elsewhere in this document.
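As a sketch, a site needing more scaling than the automatic boot-time sizing provides might set the tunable in the boot loader's configuration file; the value 256 is the upper bound suggested above:

```
# /boot/loader.conf
kern.maxusers="256"
```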
must set this value via the kernel
.Va kern.ipc.nmbclusters
may be adjusted to increase the number of network mbufs the system is
Each cluster represents approximately 2K of memory,
so a value of 1024 represents 2M of kernel memory reserved for network
You can do a simple calculation to figure out how many you need.
If you have a web server which maxes out at 1000 simultaneous connections,
and each connection eats a 16K receive and 16K send buffer, you need
approximately 32MB worth of network buffers to deal with it.
thumb is to multiply by 2, so 32MB x 2 = 64MB, and 64MB / 2K = 32768 clusters.
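The rule of thumb above can be checked with a little shell arithmetic; this sketch assumes the hypothetical 1000-connection web server with 16K buffers in each direction:

```shell
conns=1000                        # simultaneous connections
buf=$((16384 + 16384))            # 16K receive + 16K send per connection
total=$((conns * buf))            # ~32MB of network buffers
clusters=$((total * 2 / 2048))    # x2 rule of thumb, 2K per mbuf cluster
echo $clusters                    # prints 32000, close to the 32768 the
                                  # text reaches by rounding up to 64MB
```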
you would want to set
.Va kern.ipc.nmbclusters
We recommend values between
1024 and 4096 for machines with moderate amounts of memory, and between 4096
and 32768 for machines with greater amounts of memory.
Under no circumstances
should you specify an arbitrarily high value for this parameter; it could
lead to a boot-time crash.
may be used to observe network cluster use.
do not have this tunable and require that the
More and more programs are using the
system call to transmit files over the network.
sysctl controls the number of filesystem buffers
is allowed to use to perform its work.
This parameter nominally scales
so you should not need to modify this parameter except under extreme
.Sh KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in
a large-scale system.
In order to change these options you need to be
able to compile a new kernel from source.
manual page and the handbook are good starting points for learning how to
Generally the first thing you do when creating your own custom
kernel is to strip out all the drivers and services you do not use.
and drivers you do not have will reduce the size of your kernel, sometimes
by a megabyte or more, leaving more memory available for applications.
may be used to reduce system boot times.
The defaults are fairly high and
can be responsible for 15+ seconds of delay in the boot process.
to 5 seconds usually works (especially with modern drives).
also works but you have to be a little more careful.
There are a number of
options that can be commented out.
If you only want the kernel to run
on a Pentium class CPU, you can easily remove
if you are sure your CPU is being recognized as a Pentium II or better.
Some clones may be recognized as a Pentium or even a 486 and not be able
to boot without those options.
will be able to make better use of higher-end CPU features for MMU,
task switching, timebase, and even device operations.
Additionally, higher-end CPUs support
4MB MMU pages, which the kernel uses to map the kernel itself into memory,
increasing its efficiency under heavy syscall loads.
.Sh IDE WRITE CACHING
flirted with turning off IDE write caching.
This reduced write bandwidth
to IDE disks but was considered necessary due to serious data consistency
issues introduced by hard drive vendors.
Basically the problem is that
IDE drives lie about when a write completes.
With IDE write caching turned
on, IDE hard drives will not only write data to disk out of order, they
will sometimes delay some of the blocks indefinitely under heavy disk
A crash or power failure can result in serious filesystem
So our default was changed to be safe.
result was such a huge loss in performance that we caved in and changed the
default back to on after the release.
You should check the default on
your system by observing the
If IDE write caching is turned off, you can turn it back
More information on tuning the ATA driver system may be found in the
There is a new experimental feature for IDE hard drives called
(you also set this in the boot loader) which allows write caching to be safely
This brings SCSI tagging features to IDE drives.
writing only IBM DPTA and DTLA drives support the feature.
drives apparently have quality control problems and I do not recommend
purchasing them at this time.
If you need performance, go with SCSI.
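If you find write caching disabled on a system using the ata(4) driver and decide the performance is worth the risk, the boot loader tunable can be set persistently; a sketch (1 enables the write cache, 0 disables it):

```
# /boot/loader.conf
hw.ata.wc="1"
```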
.Sh CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to
bottleneck as load increases.
If your system runs out of CPU (idle times
are perpetually 0%) then you need to consider upgrading the CPU or moving to
an SMP motherboard (multiple CPUs), or perhaps you need to revisit the
programs that are causing the load and try to optimize them.
is paging to swap a lot you need to consider adding more memory.
system is saturating the disk you typically see high CPU idle times and
total disk saturation.
can be used to monitor this.
There are many solutions to saturated disks:
increasing memory for caching, mirroring disks, distributing operations across
several machines, and so forth.
If disk performance is an issue and you
are using IDE drives, switching to SCSI can help a great deal.
IDE drives compare with SCSI in raw sequential bandwidth, the moment you
start seeking around the disk SCSI drives usually win.
Finally, you might run out of network bandwidth.
The first line of defense for
improving network performance is to make sure you are using switches instead
of hubs, especially these days where switches are almost as cheap.
have severe problems under heavy loads due to collision backoff and one bad
host can severely degrade the entire LAN.
Second, optimize the network path
we describe a firewall protecting internal hosts with a topology where
the externally visible hosts are not routed through it.
than 10BaseT, or use 1000BaseT rather than 100BaseT, depending on your needs.
Most bottlenecks occur at the WAN link (e.g.\&
modem, T1, DSL, whatever).
If expanding the link is not an option it may be possible to use the
feature to implement peak shaving or other forms of traffic shaping to
prevent the overloaded service (such as web services) from affecting other
services (such as email), or vice versa.
In home installations this could
be used to give interactive traffic (your browser,
over services you export from your box (web services, email).
manual page was originally written by