8 :Author: Tejun Heo <tj@kernel.org>
10 This is the authoritative documentation on the design, interface and
11 conventions of cgroup v2. It describes all userland-visible aspects
12 of cgroup including core and specific controller behaviors. All
13 future changes must be reflected in this document. Documentation for
14 v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
23 2-2. Organizing Processes and Threads
26 2-3. [Un]populated Notification
27 2-4. Controlling Controllers
28 2-4-1. Enabling and Disabling
29 2-4-2. Top-down Constraint
30 2-4-3. No Internal Process Constraint
32 2-5-1. Model of Delegation
33 2-5-2. Delegation Containment
35 2-6-1. Organize Once and Control
36 2-6-2. Avoid Name Collisions
37 3. Resource Distribution Models
45 4-3. Core Interface Files
48 5-1-1. CPU Interface Files
50 5-2-1. Memory Interface Files
51 5-2-2. Usage Guidelines
52 5-2-3. Memory Ownership
54 5-3-1. IO Interface Files
57 5-3-3-1. How IO Latency Throttling Works
58 5-3-3-2. IO Latency Interface Files
61 5-4-1. PID Interface Files
5-5-1. Cpuset Interface Files
66 5-7-1. RDMA Interface Files
5-8-1. HugeTLB Interface Files
5-9-1. Miscellaneous cgroup Interface Files
5-9-2. Migration and Ownership
74 5-N. Non-normative information
75 5-N-1. CPU controller root cgroup process behaviour
76 5-N-2. IO controller root cgroup process behaviour
79 6-2. The Root and Views
80 6-3. Migration and setns(2)
81 6-4. Interaction with Other Namespaces
82 P. Information on Kernel Programming
83 P-1. Filesystem Support for Writeback
84 D. Deprecated v1 Core Features
85 R. Issues with v1 and Rationales for v2
86 R-1. Multiple Hierarchies
87 R-2. Thread Granularity
88 R-3. Competition Between Inner Nodes and Threads
89 R-4. Other Interface Issues
90 R-5. Controller Issues and Remedies
100 "cgroup" stands for "control group" and is never capitalized. The
101 singular form is used to designate the whole feature and also as a
102 qualifier as in "cgroup controllers". When explicitly referring to
103 multiple individual control groups, the plural form "cgroups" is used.
What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.
113 cgroup is largely composed of two parts - the core and controllers.
114 cgroup core is primarily responsible for hierarchically organizing
115 processes. A cgroup controller is usually responsible for
116 distributing a specific type of system resource along the hierarchy
117 although there are utility controllers which serve purposes other than
118 resource distribution.
120 cgroups form a tree structure and every process in the system belongs
121 to one and only one cgroup. All threads of a process belong to the
122 same cgroup. On creation, all processes are put in the cgroup that
123 the parent process belongs to at the time. A process can be migrated
124 to another cgroup. Migration of a process doesn't affect already
125 existing descendant processes.
127 Following certain structural constraints, controllers may be enabled or
128 disabled selectively on a cgroup. All controller behaviors are
129 hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
131 sub-hierarchy of the cgroup. When a controller is enabled on a nested
132 cgroup, it always restricts the resource distribution further. The
133 restrictions set closer to the root in the hierarchy can not be
134 overridden from further away.
Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
144 hierarchy can be mounted with the following mount command::
146 # mount -t cgroup2 none $MOUNT_POINT
148 cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
149 controllers which support v2 and are not bound to a v1 hierarchy are
150 automatically bound to the v2 hierarchy and show up at the root.
151 Controllers which are not in active use in the v2 hierarchy can be
152 bound to other hierarchies. This allows mixing v2 hierarchy with the
153 legacy v1 multiple hierarchies in a fully backward compatible way.
155 A controller can be moved across hierarchies only after the controller
156 is no longer referenced in its current hierarchy. Because per-cgroup
157 controller states are destroyed asynchronously and controllers may
158 have lingering references, a controller may not show up immediately on
159 the v2 hierarchy after the final umount of the previous hierarchy.
160 Similarly, a controller should be fully disabled to be moved out of
161 the unified hierarchy and it may take some time for the disabled
162 controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled.
166 While useful for development and manual configurations, moving
167 controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before the controllers
are put to use after system boot.
172 During transition to v2, system management software might still
173 automount the v1 cgroup filesystem and so hijack all controllers
174 during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
178 cgroup v2 currently supports the following mount options.
nsdelegate
Consider cgroup namespaces as delegation boundaries. This
182 option is system wide and can only be set on mount or modified
183 through remount from the init namespace. The mount option is
184 ignored on non-init namespace mounts. Please refer to the
185 Delegation section for details.
favordynmods
Reduce the latencies of dynamic cgroup modifications such as
189 task migrations and controller on/offs at the cost of making
190 hot path operations such as forks and exits more expensive.
191 The static usage pattern of creating a cgroup, enabling
192 controllers, and then seeding it with CLONE_INTO_CGROUP is
193 not affected by this option.
memory_localevents
Only populate memory.events with data for the current cgroup,
197 and not any subtrees. This is legacy behaviour, the default
198 behaviour without this option is to include subtree counts.
199 This option is system wide and can only be set on mount or
200 modified through remount from the init namespace. The mount
201 option is ignored on non-init namespace mounts.
memory_recursiveprot
Recursively apply memory.min and memory.low protection to
205 entire subtrees, without requiring explicit downward
206 propagation into leaf cgroups. This allows protecting entire
207 subtrees from one another, while retaining free competition
208 within those subtrees. This should have been the default
209 behavior but is a mount-option to avoid regressing setups
210 relying on the original semantics (e.g. specifying bogusly
211 high 'bypass' protection values at higher tree levels).
213 memory_hugetlb_accounting
214 Count HugeTLB memory usage towards the cgroup's overall
215 memory usage for the memory controller (for the purpose of
statistics reporting and memory protection). This is a new
217 behavior that could regress existing setups, so it must be
218 explicitly opted in with this mount option.
220 A few caveats to keep in mind:
222 * There is no HugeTLB pool management involved in the memory
223 controller. The pre-allocated pool does not belong to anyone.
224 Specifically, when a new HugeTLB folio is allocated to
225 the pool, it is not accounted for from the perspective of the
226 memory controller. It is only charged to a cgroup when it is
actually used (e.g. at page fault time). Host memory
228 overcommit management has to consider this when configuring
229 hard limits. In general, HugeTLB pool management should be
230 done via other mechanisms (such as the HugeTLB controller).
231 * Failure to charge a HugeTLB folio to the memory controller
232 results in SIGBUS. This could happen even if the HugeTLB pool
233 still has pages available (but the cgroup limit is hit and
234 reclaim attempt fails).
235 * Charging HugeTLB memory towards the memory controller affects
236 memory protection and reclaim dynamics. Any userspace tuning
(e.g. of the low and min limits) needs to take this into account.
238 * HugeTLB pages utilized while this option is not selected
239 will not be tracked by the memory controller (even if cgroup
240 v2 is remounted later on).
243 Organizing Processes and Threads
244 --------------------------------
Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME
254 A given cgroup may have multiple child cgroups forming a tree
255 structure. Each cgroup has a read-writable interface file
256 "cgroup.procs". When read, it lists the PIDs of all processes which
257 belong to the cgroup one-per-line. The PIDs are not ordered and the
258 same PID may show up more than once if the process got moved to
259 another cgroup and then back or the PID got recycled while reading.
261 A process can be migrated into a cgroup by writing its PID to the
262 target cgroup's "cgroup.procs" file. Only one process can be migrated
263 on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
267 When a process forks a child process, the new process is born into the
268 cgroup that the forking process belongs to at the time of the
269 operation. After exit, a process stays associated with the cgroup
270 that it belonged to at the time of exit until it's reaped; however, a
271 zombie process does not appear in "cgroup.procs" and thus can't be
272 moved to another cgroup.
274 A cgroup which doesn't have any children or live processes can be
275 destroyed by removing the directory. Note that a cgroup which doesn't
276 have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME
281 "/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
282 cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::
286 # cat /proc/842/cgroup
288 0::/test-cgroup/test-cgroup-nested
290 If the process becomes a zombie and the cgroup it was associated with
291 is removed subsequently, " (deleted)" is appended to the path::
293 # cat /proc/842/cgroup
295 0::/test-cgroup/test-cgroup-nested (deleted)
Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
302 support use cases requiring hierarchical resource distribution across
303 the threads of a group of processes. By default, all threads of a
304 process belong to the same cgroup, which also serves as the resource
305 domain to host resource consumptions which are not specific to a
306 process or thread. The thread mode allows threads to be spread across
307 a subtree while still maintaining the common resource domain for them.
309 Controllers which support thread mode are called threaded controllers.
310 The ones which don't are called domain controllers.
312 Marking a cgroup threaded makes it join the resource domain of its
313 parent as a threaded cgroup. The parent may be another threaded
314 cgroup whose resource domain is further up in the hierarchy. The root
315 of a threaded subtree, that is, the nearest ancestor which is not
316 threaded, is called threaded domain or thread root interchangeably and
317 serves as the resource domain for the entire subtree.
319 Inside a threaded subtree, threads of a process can be put in
320 different cgroups and are not subject to the no internal process
321 constraint - threaded controllers can be enabled on non-leaf cgroups
322 whether they have threads in them or not.
324 As the threaded domain cgroup hosts all the domain resource
325 consumptions of the subtree, it is considered to have internal
326 resource consumptions whether there are processes in it or not and
327 can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it can
329 serve both as a threaded domain and a parent to domain cgroups.
331 The current operation mode or type of the cgroup is shown in the
332 "cgroup.type" file which indicates whether the cgroup is a normal
333 domain, a domain which is serving as the domain of a threaded subtree,
334 or a threaded cgroup.
336 On creation, a cgroup is always a domain cgroup and can be made
337 threaded by writing "threaded" to the "cgroup.type" file. The
338 operation is single direction::
340 # echo threaded > cgroup.type
342 Once threaded, the cgroup can't be made a domain again. To enable the
343 thread mode, the following conditions must be met.
- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.
348 - When the parent is an unthreaded domain, it must not have any domain
349 controllers enabled or populated domain children. The root is
350 exempt from this requirement.
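As an illustration, here is a minimal sketch of setting up a threaded
subtree (hypothetical names; $PID and $TID stand for a multi-threaded
process and one of its threads, and the v2 hierarchy is assumed to be
mounted at /sys/fs/cgroup)::

  # cd /sys/fs/cgroup
  # mkdir domain domain/t1
  # echo $PID > domain/cgroup.procs
  # echo threaded > domain/t1/cgroup.type
  # echo "+cpu" > domain/cgroup.subtree_control
  # echo $TID > domain/t1/cgroup.threads

Making "domain/t1" threaded turns "domain" into a threaded domain,
after which threaded controllers such as "cpu" can be enabled and
individual threads moved into "domain/t1".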
352 Topology-wise, a cgroup can be in an invalid state. Please consider
353 the following topology::
355 A (threaded domain) - B (threaded) - C (domain, just created)
357 C is created as a domain but isn't connected to a parent which can
358 host child domains. C can't be used until it is turned into a
359 threaded cgroup. "cgroup.type" file will report "domain (invalid)" in
360 these cases. Operations which fail due to invalid topology use
361 EOPNOTSUPP as the errno.
363 A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
365 "cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.
369 When read, "cgroup.threads" contains the list of the thread IDs of all
370 threads in the cgroup. Except that the operations are per-thread
371 instead of per-process, "cgroup.threads" has the same format and
372 behaves the same way as "cgroup.procs". While "cgroup.threads" can be
373 written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.
377 The threaded domain cgroup serves as the resource domain for the whole
378 subtree, and, while the threads can be scattered across the subtree,
379 all the processes are considered to be in the threaded domain cgroup.
380 "cgroup.procs" in a threaded domain cgroup contains the PIDs of all
381 processes in the subtree and is not readable in the subtree proper.
382 However, "cgroup.procs" can be written to from anywhere in the subtree
383 to migrate all threads of the matching process to the cgroup.
385 Only threaded controllers can be enabled in a threaded subtree. When
386 a threaded controller is enabled inside a threaded subtree, it only
387 accounts for and controls resource consumptions associated with the
388 threads in the cgroup and its descendants. All consumptions which
389 aren't tied to a specific thread belong to the threaded domain cgroup.
Because a threaded subtree is exempt from the no internal process
392 constraint, a threaded controller must be able to handle competition
393 between threads in a non-leaf cgroup and its child cgroups. Each
394 threaded controller defines how such competitions are handled.
396 Currently, the following controllers are threaded and can be enabled
in a threaded cgroup::

- cpu
- cpuset
- perf_event
- pids
404 [Un]populated Notification
405 --------------------------
407 Each non-root cgroup has a "cgroup.events" file which contains
408 "populated" field indicating whether the cgroup's sub-hierarchy has
409 live processes in it. Its value is 0 if there is no live process in
410 the cgroup and its descendants; otherwise, 1. poll and [id]notify
411 events are triggered when the value changes. This can be used, for
412 example, to start a clean-up operation after all processes of a given
413 sub-hierarchy have exited. The populated state updates and
414 notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)
421 A, B and C's "populated" fields would be 1 while D's 0. After the one
422 process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.
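For example, a clean-up agent can watch for this notification. A rough
sketch using inotifywait from the inotify-tools package, with a
hypothetical "workload" cgroup (any poll(2)/inotify(7) consumer works
just as well)::

  # inotifywait -e modify /sys/fs/cgroup/workload/cgroup.events
  # grep populated /sys/fs/cgroup/workload/cgroup.events
  populated 0

Once "populated" reads 0, the empty sub-hierarchy can be removed with
rmdir.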
427 Controlling Controllers
428 -----------------------
430 Enabling and Disabling
431 ~~~~~~~~~~~~~~~~~~~~~~
433 Each cgroup has a "cgroup.controllers" file which lists all
434 controllers available for the cgroup to enable::
# cat cgroup.controllers
cpu io memory
439 No controller is enabled by default. Controllers can be enabled and
440 disabled by writing to the "cgroup.subtree_control" file::
442 # echo "+cpu +memory -io" > cgroup.subtree_control
444 Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same controller
447 are specified, the last one is effective.
449 Enabling a controller in a cgroup indicates that the distribution of
450 the target resource across its immediate children will be controlled.
451 Consider the following sub-hierarchy. The enabled controllers are
452 listed in parentheses::
A(cpu,memory) - B(memory) - C()
                          \ D()
457 As A has "cpu" and "memory" enabled, A will control the distribution
458 of CPU cycles and memory to its children, in this case, B. As B has
459 "memory" enabled but not "CPU", C and D will compete freely on CPU
460 cycles but their division of memory available to B will be controlled.
462 As a controller regulates the distribution of the target resource to
463 the cgroup's children, enabling it creates the controller's interface
464 files in the child cgroups. In the above example, enabling "cpu" on B
465 would create the "cpu." prefixed controller interface files in C and
466 D. Likewise, disabling "memory" from B would remove the "memory."
467 prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.
Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
476 a resource only if the resource has been distributed to it from the
477 parent. This means that all non-root "cgroup.subtree_control" files
478 can only contain controllers which are enabled in the parent's
479 "cgroup.subtree_control" file. A controller can be enabled only if
480 the parent has the controller enabled and a controller can't be
481 disabled if one or more children have it enabled.
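For example, with hypothetical cgroups "parent" and "parent/child"
under /sys/fs/cgroup, the second write below succeeds only because the
first one enabled "memory" in the parent::

  # echo "+memory" > /sys/fs/cgroup/parent/cgroup.subtree_control
  # echo "+memory" > /sys/fs/cgroup/parent/child/cgroup.subtree_control

In the reverse order, the child's write would fail because "memory"
would not yet be listed in the child's "cgroup.controllers".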
484 No Internal Process Constraint
485 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
487 Non-root cgroups can distribute domain resources to their children
488 only when they don't have any processes of their own. In other words,
489 only domain cgroups which don't contain any processes can have domain
490 controllers enabled in their "cgroup.subtree_control" files.
492 This guarantees that, when a domain controller is looking at the part
493 of the hierarchy which has it enabled, processes are always only on
494 the leaves. This rules out situations where child cgroups compete
495 against internal processes of the parent.
497 The root cgroup is exempt from this restriction. Root contains
498 processes and anonymous resource consumption which can't be associated
499 with any other cgroups and requires special treatment from most
500 controllers. How resource consumption in the root cgroup is governed
501 is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).
505 Note that the restriction doesn't get in the way if there is no
506 enabled controller in the cgroup's "cgroup.subtree_control". This is
507 important as otherwise it wouldn't be possible to create children of a
508 populated cgroup. To control resource distribution of a cgroup, the
509 cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
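A rough sketch of that sequence, run inside the cgroup's directory
(the "leaf" name is hypothetical; processes which fork during the move
may need another pass)::

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo "$pid" > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control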
Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access to the directory and its "cgroup.procs",
522 "cgroup.threads" and "cgroup.subtree_control" files to the user.
523 Second, if the "nsdelegate" mount option is set, automatically to a
524 cgroup namespace on namespace creation.
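For the first method, a minimal sketch (hypothetical path and user; a
service manager would normally perform these steps on the admin's
behalf)::

  # chown u0 /sys/fs/cgroup/delegated
  # chown u0 /sys/fs/cgroup/delegated/cgroup.procs
  # chown u0 /sys/fs/cgroup/delegated/cgroup.threads
  # chown u0 /sys/fs/cgroup/delegated/cgroup.subtree_control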
526 Because the resource control interface files in a given directory
527 control the distribution of the parent's resources, the delegatee
528 shouldn't be allowed to write to them. For the first method, this is
529 achieved by not granting access to these files. For the second, the
530 kernel rejects writes to all files other than "cgroup.procs" and
531 "cgroup.subtree_control" on a namespace root from inside the
534 The end results are equivalent for both delegation types. Once
535 delegated, the user can build sub-hierarchy under the directory,
536 organize processes inside it as it sees fit and further distribute the
537 resources it received from the parent. The limits and other settings
538 of all resource controllers are hierarchical and regardless of what
539 happens in the delegated sub-hierarchy, nothing can escape the
540 resource restrictions imposed by the parent.
542 Currently, cgroup doesn't impose any restrictions on the number of
543 cgroups in or nesting depth of a delegated sub-hierarchy; however,
544 this may be limited explicitly in the future.
547 Delegation Containment
548 ~~~~~~~~~~~~~~~~~~~~~~
550 A delegated sub-hierarchy is contained in the sense that processes
551 can't be moved into or out of the sub-hierarchy by the delegatee.
553 For delegations to a less privileged user, this is achieved by
554 requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.
558 - The writer must have write access to the "cgroup.procs" file.
560 - The writer must have write access to the "cgroup.procs" file of the
561 common ancestor of the source and destination cgroups.
563 The above two constraints ensure that while a delegatee may migrate
564 processes around freely in the delegated sub-hierarchy it can't pull
565 in from or push out to outside the sub-hierarchy.
567 For an example, let's assume cgroups C0 and C1 have been delegated to
568 user U0 who created C00, C01 under C0 and C10 under C1 as follows and
569 all processes under C0 and C1 belong to U0::
~~~~~~~~~~~~~ - C0 - C00
~ cgroup    ~      \ C01
~ hierarchy ~
~~~~~~~~~~~~~ - C1 - C10
576 Let's also say U0 wants to write the PID of a process which is
577 currently in C10 into "C00/cgroup.procs". U0 has write access to the
578 file; however, the common ancestor of the source cgroup C10 and the
579 destination cgroup C00 is above the points of delegation and U0 would
580 not have write access to its "cgroup.procs" files and thus the write
581 will be denied with -EACCES.
583 For delegations to namespaces, containment is achieved by requiring
584 that both the source and destination cgroups are reachable from the
585 namespace of the process which is attempting the migration. If either
586 is not reachable, the migration is rejected with -ENOENT.
Guidelines
----------

Organize Once and Control
593 ~~~~~~~~~~~~~~~~~~~~~~~~~
595 Migrating a process across cgroups is a relatively expensive operation
596 and stateful resources such as memory are not moved together with the
597 process. This is an explicit design decision as there often exist
598 inherent trade-offs between migration and various hot paths in terms
599 of synchronization cost.
601 As such, migrating processes across cgroups frequently as a means to
602 apply different resource restrictions is discouraged. A workload
603 should be assigned to a cgroup according to the system's logical and
604 resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.
609 Avoid Name Collisions
610 ~~~~~~~~~~~~~~~~~~~~~
612 Interface files for a cgroup and its children cgroups occupy the same
613 directory and it is possible to create children cgroups which collide
614 with interface files.
616 All cgroup core interface files are prefixed with "cgroup." and each
617 controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and
619 '_'s but never begins with an '_' so it can be used as the prefix
620 character for collision avoidance. Also, interface file names won't
621 start or end with terms which are often used in categorizing workloads
622 such as job, service, slice, unit or workload.
624 cgroup doesn't do anything to prevent name collisions and it's the
625 user's responsibility to avoid them.
628 Resource Distribution Models
629 ============================
631 cgroup controllers implement several resource distribution schemes
632 depending on the resource type and expected use cases. This section
633 describes major schemes in use along with their expected behaviors.
Weights
-------

A parent's resource is distributed by adding up the weights of all
640 active children and giving each the fraction matching the ratio of its
641 weight against the sum. As only children which can make use of the
642 resource at the moment participate in the distribution, this is
643 work-conserving. Due to the dynamic nature, this model is usually
644 used for stateless resources.
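For example, if three children are actively competing with weights
100, 100 and 200, they receive 25%, 25% and 50% of the resource
respectively (each weight divided by the sum 400); if the weight-200
child goes idle, the remaining two split the resource 50/50.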
646 All weights are in the range [1, 10000] with the default at 100. This
647 allows symmetric multiplicative biases in both directions at fine
648 enough granularity while staying in the intuitive range.
650 As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.
654 "cpu.weight" proportionally distributes CPU cycles to active children
655 and is an example of this type.
658 .. _cgroupv2-limits-distributor:
Limits
------

A child can only consume up to the configured amount of the resource.
664 Limits can be over-committed - the sum of the limits of children can
665 exceed the amount of resource available to the parent.
Limits are in the range [0, max] and default to "max", which is noop.
669 As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.
673 "io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
674 on an IO device and is an example of this type.
676 .. _cgroupv2-protections-distributor:
Protections
-----------

A cgroup is protected up to the configured amount of the resource
682 as long as the usages of all its ancestors are under their
683 protected levels. Protections can be hard guarantees or best effort
684 soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is
noop.
691 As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.
695 "memory.low" implements best-effort memory protection and is an
696 example of this type.
Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
703 resource. Allocations can't be over-committed - the sum of the
704 allocations of children can not exceed the amount of resource
705 available to the parent.
Allocations are in the range [0, max] and default to 0, which is no
resource.
710 As allocations can't be over-committed, some configuration
711 combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.
Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::
New-line separated values
(when only one value can be written at once)

  VAL0\n
  VAL1\n
  ...

Space separated values
(when read-only or multiple values can be written at once)

  VAL0 VAL1 ...\n

Flat keyed

  KEY0 VAL0\n
  KEY1 VAL1\n
  ...

Nested keyed

  KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
  KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
  ...
752 For a writable file, the format for writing should generally match
753 reading; however, controllers may allow omitting later fields or
754 implement restricted shortcuts for most common use cases.
756 For both flat and nested keyed files, only the values for a single key
757 can be written at a time. For nested keyed files, the sub key pairs
758 may be specified in any order and not all pairs have to be specified.
Conventions
-----------

- Settings for a single feature should be contained in a single file.
766 - The root cgroup should be exempt from resource control and thus
767 shouldn't have resource control interface files.
769 - The default time unit is microseconds. If a different unit is ever
770 used, an explicit unit suffix must be present.
772 - A parts-per quantity should use a percentage decimal with at least
773 two digit fractional part - e.g. 13.40.
775 - If a controller implements weight based resource distribution, its
776 interface file should be named "weight" and have the range [1,
777 10000] with 100 as the default. The values are chosen to allow
778 enough and symmetric bias in both directions while keeping it
779 intuitive (the default is 100%).
781 - If a controller implements an absolute resource guarantee and/or
782 limit, the interface files should be named "min" and "max"
783 respectively. If a controller implements best effort resource
784 guarantee and/or limit, the interface files should be named "low"
785 and "high" respectively.
787 In the above four control files, the special token "max" should be
788 used to represent upward infinity for both reading and writing.
790 - If a setting has a configurable default value and keyed specific
791 overrides, the default entry should be keyed with "default" and
792 appear as the first entry in the file.
The default value can be updated by writing either "default $VAL" or
"$VAL".
797 When writing to update a specific override, "default" can be used as
798 the value to indicate removal of the override. Override entries
799 with "default" as the value must not appear when read.
801 For example, a setting which is keyed by major:minor device numbers
802 with integer values may look like the following::
# cat cgroup-example-interface-file
default 150
8:0 300

The default value can be updated by::

  # echo 125 > cgroup-example-interface-file

or the following::

  # echo "default 125" > cgroup-example-interface-file

An override can be set by::

  # echo "8:16 170" > cgroup-example-interface-file

and cleared by::

  # echo "8:0 default" > cgroup-example-interface-file
  # cat cgroup-example-interface-file
  default 125
  8:16 170
827 - For events which are not very high frequency, an interface file
828 "events" should be created which lists event key value pairs.
829 Whenever a notifiable event happens, file modified event should be
830 generated on the file.
Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."
cgroup.type
A read-write single value file which exists on non-root
cgroups.
842 When read, it indicates the current type of the cgroup, which
843 can be one of the following values.
845 - "domain" : A normal valid domain cgroup.
847 - "domain threaded" : A threaded domain cgroup which is
848 serving as the root of a threaded subtree.
850 - "domain invalid" : A cgroup which is in an invalid state.
851 It can't be populated or have controllers enabled. It may
852 be allowed to become a threaded cgroup.
854 - "threaded" : A threaded cgroup which is a member of a
857 A cgroup can be turned into a threaded cgroup by writing
858 "threaded" to this file.
cgroup.procs
A read-write new-line separated values file which exists on
all cgroups.
864 When read, it lists the PIDs of all processes which belong to
865 the cgroup one-per-line. The PIDs are not ordered and the
866 same PID may show up more than once if the process got moved
to another cgroup and then back or the PID got recycled while
reading.
870 A PID can be written to migrate the process associated with
871 the PID to the cgroup. The writer should match all of the
872 following conditions.
874 - It must have write access to the "cgroup.procs" file.
876 - It must have write access to the "cgroup.procs" file of the
877 common ancestor of the source and destination cgroups.
879 When delegating a sub-hierarchy, write access to this file
880 should be granted along with the containing directory.
882 In a threaded cgroup, reading this file fails with EOPNOTSUPP
883 as all the processes belong to the thread root. Writing is
884 supported and moves every thread of the process to the cgroup.
cgroup.threads
A read-write new-line separated values file which exists on
all cgroups.
890 When read, it lists the TIDs of all threads which belong to
891 the cgroup one-per-line. The TIDs are not ordered and the
892 same TID may show up more than once if the thread got moved to
another cgroup and then back or the TID got recycled while
reading.
896 A TID can be written to migrate the thread associated with the
897 TID to the cgroup. The writer should match all of the
898 following conditions.
900 - It must have write access to the "cgroup.threads" file.
902 - The cgroup that the thread is currently in must be in the
903 same resource domain as the destination cgroup.
905 - It must have write access to the "cgroup.procs" file of the
906 common ancestor of the source and destination cgroups.
908 When delegating a sub-hierarchy, write access to this file
909 should be granted along with the containing directory.
cgroup.controllers
A read-only space separated values file which exists on all
cgroups.
915 It shows space separated list of all controllers available to
916 the cgroup. The controllers are not ordered.
918 cgroup.subtree_control
919 A read-write space separated values file which exists on all
920 cgroups. Starts out empty.
922 When read, it shows space separated list of the controllers
923 which are enabled to control resource distribution from the
924 cgroup to its children.
926 Space separated list of controllers prefixed with '+' or '-'
927 can be written to enable or disable controllers. A controller
928 name prefixed with '+' enables the controller and '-'
929 disables. If a controller appears more than once on the list,
930 the last one is effective. When multiple enable and disable
931 operations are specified, either all succeed or all fail.
cgroup.events
A read-only flat-keyed file which exists on non-root cgroups.
The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.

populated
1 if the cgroup or its descendants contain any live
processes; otherwise, 0.
frozen
1 if the cgroup is frozen; otherwise, 0.
945 cgroup.max.descendants
A read-write single value file. The default is "max".

Maximum allowed number of descendant cgroups.
949 If the actual number of descendants is equal or larger,
950 an attempt to create a new cgroup in the hierarchy will fail.
cgroup.max.depth
A read-write single value file. The default is "max".

Maximum allowed descent depth below the current cgroup.
956 If the actual descent depth is equal or larger,
957 an attempt to create a new child cgroup will fail.
cgroup.stat
A read-only flat-keyed file with the following entries:

nr_descendants
Total number of visible descendant cgroups.

nr_dying_descendants
Total number of dying descendant cgroups. A cgroup becomes
967 dying after being deleted by a user. The cgroup will remain
in dying state for some undefined time (which can depend
969 on system load) before being completely destroyed.
A process can't enter a dying cgroup under any circumstances,
and a dying cgroup can't revive.
A dying cgroup can consume system resources not exceeding
the limits that were active at the moment of its deletion.
cgroup.freeze
A read-write single value file which exists on non-root cgroups.
979 Allowed values are "0" and "1". The default is "0".
981 Writing "1" to the file causes freezing of the cgroup and all
descendant cgroups. This means that all belonging processes will
be stopped and will not run until the cgroup is explicitly
unfrozen. Freezing of the cgroup may take some time; when this action
is completed, the "frozen" value in the cgroup.events control file
will be updated to "1" and the corresponding notification will be
issued.
989 A cgroup can be frozen either by its own settings, or by settings
of any ancestor cgroups. If any of the ancestor cgroups is frozen, the
991 cgroup will remain frozen.
993 Processes in the frozen cgroup can be killed by a fatal signal.
994 They also can enter and leave a frozen cgroup: either by an explicit
995 move by a user, or if freezing of the cgroup races with fork().
996 If a process is moved to a frozen cgroup, it stops. If a process is
997 moved out of a frozen cgroup, it becomes running.
999 Frozen status of a cgroup doesn't affect any cgroup tree operations:
1000 it's possible to delete a frozen (and empty) cgroup, as well as
1001 create new sub-cgroups.
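A short usage sketch with a hypothetical "workload" cgroup: freeze the
sub-hierarchy, then wait for the "frozen" notification in
"cgroup.events"::

  # echo 1 > /sys/fs/cgroup/workload/cgroup.freeze
  # inotifywait -e modify /sys/fs/cgroup/workload/cgroup.events
  # grep frozen /sys/fs/cgroup/workload/cgroup.events
  frozen 1

inotifywait is from the inotify-tools package; any poll(2)/inotify(7)
consumer can be used instead.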
cgroup.kill
A write-only single value file which exists in non-root cgroups.
1005 The only allowed value is "1".
1007 Writing "1" to the file causes the cgroup and all descendant cgroups to
1008 be killed. This means that all processes located in the affected cgroup
1009 tree will be killed via SIGKILL.
1011 Killing a cgroup tree will deal with concurrent forks appropriately and
1012 is protected against migrations.
1014 In a threaded cgroup, writing this file fails with EOPNOTSUPP as
1015 killing cgroups is a process directed operation, i.e. it affects
1016 the whole thread-group.
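For example, to kill everything under a hypothetical "workload"
cgroup::

  # echo 1 > /sys/fs/cgroup/workload/cgroup.kill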
cgroup.pressure
A read-write single value file whose allowed values are "0" and "1".
The default is "1".
1022 Writing "0" to the file will disable the cgroup PSI accounting.
1023 Writing "1" to the file will re-enable the cgroup PSI accounting.
This control attribute is not hierarchical, so disabling or enabling PSI
accounting in a cgroup does not affect PSI accounting in descendants
and doesn't require passing enablement via ancestors from the root.
1029 The reason this control attribute exists is that PSI accounts stalls for
1030 each cgroup separately and aggregates it at each level of the hierarchy.
This may cause non-negligible overhead for some workloads deep in
the hierarchy, in which case this control attribute can
be used to disable PSI accounting in the non-leaf cgroups.
irq.pressure
A read-write nested-keyed file.
1038 Shows pressure stall information for IRQ/SOFTIRQ. See
1039 :ref:`Documentation/accounting/psi.rst <psi>` for details.
1049 The "cpu" controllers regulates distribution of CPU cycles. This
1050 controller implements weight and absolute bandwidth limit models for
1051 normal scheduling policy and absolute bandwidth allocation model for
1052 realtime scheduling policy.
In all the above models, cycles distribution is defined only on a temporal
basis and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
cpufreq governor about the minimum desired frequency which should always be
provided by a CPU, as well as the maximum desired frequency, which should not
be exceeded by a CPU.
1061 WARNING: cgroup2 doesn't yet support control of realtime processes and
1062 the cpu controller can only be enabled when all RT processes are in
1063 the root cgroup. Be aware that system management software may already
1064 have placed RT processes into nonroot cgroups during the system boot
1065 process, and these processes may need to be moved to the root cgroup
1066 before the cpu controller can be enabled.
CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.
cpu.stat
A read-only flat-keyed file.
This file exists whether the controller is enabled or not.

It always reports the following three stats:

- usage_usec
- user_usec
- system_usec

and the following five when the controller is enabled:

- nr_periods
- nr_throttled
- throttled_usec
- nr_bursts
- burst_usec
cpu.weight
A read-write single value file which exists on non-root
1094 cgroups. The default is "100".
1096 The weight in the range [1, 10000].
cpu.weight.nice
A read-write single value file which exists on non-root
1100 cgroups. The default is "0".
1102 The nice value is in the range [-20, 19].
1104 This interface file is an alternative interface for
1105 "cpu.weight" and allows reading and setting weight using the
1106 same values used by nice(2). Because the range is smaller and
1107 granularity is coarser for the nice values, the read value is
1108 the closest approximation of the current weight.
cpu.max
A read-write two value file which exists on non-root cgroups.
1112 The default is "max 100000".
The maximum bandwidth limit. It's in the following format::

  $MAX $PERIOD
1118 which indicates that the group may consume up to $MAX in each
1119 $PERIOD duration. "max" for $MAX indicates no limit. If only
1120 one number is written, $MAX is updated.
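For example, to allow at most 200ms of CPU time per 1s period, i.e.
roughly 20% of one CPU (a sketch, written from inside the cgroup's
directory)::

  # echo "200000 1000000" > cpu.max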
cpu.max.burst
A read-write single value file which exists on non-root
1124 cgroups. The default is "0".
1126 The burst in the range [0, $MAX].
cpu.pressure
A read-write nested-keyed file.
1131 Shows pressure stall information for CPU. See
1132 :ref:`Documentation/accounting/psi.rst <psi>` for details.
cpu.uclamp.min
A read-write single value file which exists on non-root cgroups.
1136 The default is "0", i.e. no utilization boosting.
1138 The requested minimum utilization (protection) as a percentage
1139 rational number, e.g. 12.34 for 12.34%.
1141 This interface allows reading and setting minimum utilization clamp
1142 values similar to the sched_setattr(2). This minimum utilization
1143 value is used to clamp the task specific minimum utilization clamp.
1145 The requested minimum utilization (protection) is always capped by
the current value for the maximum utilization (limit), i.e.
`cpu.uclamp.max`.
cpu.uclamp.max
A read-write single value file which exists on non-root cgroups.
The default is "max", i.e. no utilization capping.
1153 The requested maximum utilization (limit) as a percentage rational
1154 number, e.g. 98.76 for 98.76%.
1156 This interface allows reading and setting maximum utilization clamp
1157 values similar to the sched_setattr(2). This maximum utilization
1158 value is used to clamp the task specific maximum utilization clamp.
1165 The "memory" controller regulates distribution of memory. Memory is
1166 stateful and implements both limit and protection models. Due to the
1167 intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.
1171 While not completely water-tight, all major memory usages by a given
1172 cgroup are tracked so that the total memory consumption can be
1173 accounted and controlled to a reasonable extent. Currently, the
1174 following types of memory usages are tracked.
1176 - Userland memory - page cache and anonymous memory.
1178 - Kernel data structures such as dentries and inodes.
1180 - TCP socket buffers.
1182 The above list may expand in the future for better coverage.
1185 Memory Interface Files
1186 ~~~~~~~~~~~~~~~~~~~~~~
1188 All memory amounts are in bytes. If a value which is not aligned to
1189 PAGE_SIZE is written, the value may be rounded up to the closest
1190 PAGE_SIZE multiple when read back.
memory.current
A read-only single value file which exists on non-root
cgroups.
1196 The total amount of memory currently being used by the cgroup
1197 and its descendants.
memory.min
A read-write single value file which exists on non-root
1201 cgroups. The default is "0".
1203 Hard memory protection. If the memory usage of a cgroup
1204 is within its effective min boundary, the cgroup's memory
1205 won't be reclaimed under any conditions. If there is no
1206 unprotected reclaimable memory available, OOM killer
1207 is invoked. Above the effective min boundary (or
1208 effective low boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
1212 Effective min boundary is limited by memory.min values of
1213 all ancestor cgroups. If there is memory.min overcommitment
1214 (child cgroup or cgroups are requiring more protected memory
1215 than parent will allow), then each child cgroup will get
1216 the part of parent's protection proportional to its
1217 actual memory usage below memory.min.
1219 Putting more memory than generally available under this
1220 protection is discouraged and may lead to constant OOMs.
1222 If a memory cgroup is not populated with processes,
1223 its memory.min is ignored.
memory.low
A read-write single value file which exists on non-root
1227 cgroups. The default is "0".
1229 Best-effort memory protection. If the memory usage of a
1230 cgroup is within its effective low boundary, the cgroup's
1231 memory won't be reclaimed unless there is no reclaimable
1232 memory available in unprotected cgroups.
1233 Above the effective low boundary (or
1234 effective min boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
1238 Effective low boundary is limited by memory.low values of
1239 all ancestor cgroups. If there is memory.low overcommitment
1240 (child cgroup or cgroups are requiring more protected memory
1241 than parent will allow), then each child cgroup will get
1242 the part of parent's protection proportional to its
1243 actual memory usage below memory.low.
1245 Putting more memory than generally available under this
1246 protection is discouraged.
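As a worked example of the overcommit rule above: suppose a parent's
effective low protection is 512M and both of its children set
memory.low to 512M. If the children currently use 384M and 128M (each
below its own memory.low), their effective protections come out to
384M and 128M - the parent's 512M is split proportionally to usage.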
memory.high
A read-write single value file which exists on non-root
1250 cgroups. The default is "max".
1252 Memory usage throttle limit. If a cgroup's usage goes
1253 over the high boundary, the processes of the cgroup are
1254 throttled and put under heavy reclaim pressure.
1256 Going over the high limit never invokes the OOM killer and
1257 under extreme conditions the limit may be breached. The high
1258 limit should be used in scenarios where an external process
monitors the limited cgroup to alleviate heavy reclaim
pressure.
memory.max
A read-write single value file which exists on non-root
1264 cgroups. The default is "max".
1266 Memory usage hard limit. This is the main mechanism to limit
1267 memory usage of a cgroup. If a cgroup's memory usage reaches
1268 this limit and can't be reduced, the OOM killer is invoked in
1269 the cgroup. Under certain circumstances, the usage may go
1270 over the limit temporarily.
In the default configuration, regular 0-order allocations always
succeed unless the OOM killer chooses the current task as a victim.

Some kinds of allocations don't invoke the OOM killer.
The caller could retry them differently, return -ENOMEM to
userspace, or silently ignore the failure in cases like disk
readahead.
memory.reclaim
A write-only nested-keyed file which exists for all cgroups.

This is a simple interface to trigger memory reclaim in the
target cgroup.
1285 This file accepts a single key, the number of bytes to reclaim.
1286 No nested keys are currently supported.
1290 echo "1G" > memory.reclaim
1292 The interface can be later extended with nested keys to
1293 configure the reclaim behavior. For example, specify the
1294 type of memory to reclaim from (anon, file, ..).
1296 Please note that the kernel can over or under reclaim from
the target cgroup. If fewer bytes are reclaimed than the
1298 specified amount, -EAGAIN is returned.
1300 Please note that the proactive reclaim (triggered by this
1301 interface) is not meant to indicate memory pressure on the
1302 memory cgroup. Therefore socket memory balancing triggered by
1303 the memory reclaim normally is not exercised in this case.
1304 This means that the networking layer will not adapt based on
1305 reclaim induced by memory.reclaim.
memory.peak
A read-only single value file which exists on non-root
cgroups.
1311 The max memory usage recorded for the cgroup and its
1312 descendants since the creation of the cgroup.
memory.oom.group
A read-write single value file which exists on non-root
1316 cgroups. The default value is "0".
1318 Determines whether the cgroup should be treated as
1319 an indivisible workload by the OOM killer. If set,
1320 all tasks belonging to the cgroup or to its descendants
1321 (if the memory cgroup is not a leaf cgroup) are killed
1322 together or not at all. This can be used to avoid
1323 partial kills to guarantee workload integrity.
1325 Tasks with the OOM protection (oom_score_adj set to -1000)
1326 are treated as an exception and are never killed.
If the OOM killer is invoked in a cgroup, it's not going
to kill any tasks outside of this cgroup, regardless of the
memory.oom.group values of ancestor cgroups.
memory.events
A read-only flat-keyed file which exists on non-root cgroups.
The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.
1338 Note that all fields in this file are hierarchical and the
1339 file modified event can be generated due to an event down the
1340 hierarchy. For the local events at the cgroup level see
1341 memory.events.local.
low
The number of times the cgroup is reclaimed due to
1345 high memory pressure even though its usage is under
1346 the low boundary. This usually indicates that the low
1347 boundary is over-committed.
high
The number of times processes of the cgroup are
1351 throttled and routed to perform direct memory reclaim
1352 because the high memory boundary was exceeded. For a
1353 cgroup whose memory usage is capped by the high limit
1354 rather than global memory pressure, this event's
1355 occurrences are expected.
max
The number of times the cgroup's memory usage was
1359 about to go over the max boundary. If direct reclaim
1360 fails to bring it down, the cgroup goes to OOM state.
oom
The number of times the cgroup's memory usage
reached the limit and allocation was about to fail.
1366 This event is not raised if the OOM killer is not
1367 considered as an option, e.g. for failed high-order
allocations or if the caller asked to not retry attempts.
oom_kill
The number of processes belonging to this cgroup
1372 killed by any kind of OOM killer.
oom_group_kill
The number of times a group OOM has occurred.
memory.events.local
Similar to memory.events but the fields in the file are local
1379 to the cgroup i.e. not hierarchical. The file modified event
1380 generated on this file reflects only the local events.
memory.stat
A read-only flat-keyed file which exists on non-root cgroups.
1385 This breaks down the cgroup's memory footprint into different
1386 types of memory, type-specific details, and other information
1387 on the state and past events of the memory management system.
1389 All memory amounts are in bytes.
1391 The entries are ordered to be human readable, and new entries
1392 can show up in the middle. Don't rely on items remaining in a
1393 fixed position; use the keys to look up specific values!
If an entry has no per-node counter (and thus does not show in
memory.numa_stat), it is tagged with 'npn' (non-per-node) to
indicate that it will not show in memory.numa_stat.
anon
Amount of memory used in anonymous mappings such as
brk(), sbrk(), and mmap(MAP_ANONYMOUS)

file
Amount of memory used to cache filesystem data,
including tmpfs and shared memory.

kernel (npn)
Amount of total kernel memory, including
(kernel_stack, pagetables, percpu, vmalloc, slab) in
addition to other kernel memory use cases.

kernel_stack
Amount of memory allocated to kernel stacks.

pagetables
Amount of memory allocated for page tables.

sec_pagetables
Amount of memory allocated for secondary page tables,
this currently includes KVM mmu allocations on x86
and arm64.

percpu (npn)
Amount of memory used for storing per-cpu kernel
data structures.

sock (npn)
Amount of memory used in network transmission buffers

vmalloc (npn)
Amount of memory used for vmap backed memory.

shmem
Amount of cached filesystem data that is swap-backed,
such as tmpfs, shm segments, shared anonymous mmap()s

zswap
Amount of memory consumed by the zswap compression backend.

zswapped
Amount of application memory swapped out to zswap.

file_mapped
Amount of cached filesystem data mapped with mmap()

file_dirty
Amount of cached filesystem data that was modified but
not yet written back to disk

file_writeback
Amount of cached filesystem data that was modified and
is currently being written back to disk

swapcached
Amount of swap cached in memory. The swapcache is accounted
against both memory and swap usage.

anon_thp
Amount of memory used in anonymous mappings backed by
transparent hugepages

file_thp
Amount of cached filesystem data backed by transparent
hugepages

shmem_thp
Amount of shm, tmpfs, shared anonymous mmap()s backed by
transparent hugepages
1470 inactive_anon, active_anon, inactive_file, active_file, unevictable
1471 Amount of memory, swap-backed and filesystem-backed,
1472 on the internal memory management lists used by the
1473 page reclaim algorithm.
As these represent internal list state (e.g. shmem pages are on anon
memory management lists), inactive_foo + active_foo may not be equal to
the value for the foo counter, since the foo counter is type-based, not
list-based.

slab_reclaimable
Part of "slab" that might be reclaimed, such as
dentries and inodes.

slab_unreclaimable
Part of "slab" that cannot be reclaimed on memory
pressure.

slab (npn)
Amount of memory used for storing in-kernel data
structures.
1492 workingset_refault_anon
1493 Number of refaults of previously evicted anonymous pages.
1495 workingset_refault_file
1496 Number of refaults of previously evicted file pages.
1498 workingset_activate_anon
Number of refaulted anonymous pages that were immediately
activated.
1502 workingset_activate_file
1503 Number of refaulted file pages that were immediately activated.
1505 workingset_restore_anon
1506 Number of restored anonymous pages which have been detected as
1507 an active workingset before they got reclaimed.
1509 workingset_restore_file
1510 Number of restored file pages which have been detected as an
1511 active workingset before they got reclaimed.
1513 workingset_nodereclaim
1514 Number of times a shadow node has been reclaimed
pgscan (npn)
Amount of scanned pages (in an inactive LRU list)

pgsteal (npn)
Amount of reclaimed pages

pgscan_kswapd (npn)
Amount of scanned pages by kswapd (in an inactive LRU list)

pgscan_direct (npn)
Amount of scanned pages directly (in an inactive LRU list)
1528 pgscan_khugepaged (npn)
1529 Amount of scanned pages by khugepaged (in an inactive LRU list)
1531 pgsteal_kswapd (npn)
1532 Amount of reclaimed pages by kswapd
1534 pgsteal_direct (npn)
1535 Amount of reclaimed pages directly
1537 pgsteal_khugepaged (npn)
1538 Amount of reclaimed pages by khugepaged
pgfault (npn)
Total number of page faults incurred

pgmajfault (npn)
Number of major page faults incurred

pgrefill (npn)
Amount of scanned pages (in an active LRU list)

pgactivate (npn)
Amount of pages moved to the active LRU list

pgdeactivate (npn)
Amount of pages moved to the inactive LRU list

pglazyfree (npn)
Amount of pages postponed to be freed under memory pressure

pglazyfreed (npn)
Amount of reclaimed lazyfree pages
1561 thp_fault_alloc (npn)
1562 Number of transparent hugepages which were allocated to satisfy
a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
is not set.
1566 thp_collapse_alloc (npn)
1567 Number of transparent hugepages which were allocated to allow
1568 collapsing an existing range of pages. This counter is not
1569 present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
thp_swpout (npn)
Number of transparent hugepages which were swapped out in one piece
without splitting.
1575 thp_swpout_fallback (npn)
Number of transparent hugepages which were split before swapout,
usually because of failure to allocate contiguous swap space
for the huge page.
memory.numa_stat
A read-only nested-keyed file which exists on non-root cgroups.
1583 This breaks down the cgroup's memory footprint into different
1584 types of memory, type-specific details, and other information
1585 per node on the state of the memory management system.
1587 This is useful for providing visibility into the NUMA locality
information within a memcg since the pages are allowed to be
allocated from any physical node. One use case is evaluating
1590 application performance by combining this information with the
1591 application's CPU allocation.
1593 All memory amounts are in bytes.
1595 The output format of memory.numa_stat is::
1597 type N0=<bytes in node 0> N1=<bytes in node 1> ...
1599 The entries are ordered to be human readable, and new entries
1600 can show up in the middle. Don't rely on items remaining in a
1601 fixed position; use the keys to look up specific values!
For the meaning of each entry, refer to memory.stat.
memory.swap.current
A read-only single value file which exists on non-root
cgroups.
1609 The total amount of swap currently being used by the cgroup
1610 and its descendants.
memory.swap.high
A read-write single value file which exists on non-root
1614 cgroups. The default is "max".
1616 Swap usage throttle limit. If a cgroup's swap usage exceeds
1617 this limit, all its further allocations will be throttled to
1618 allow userspace to implement custom out-of-memory procedures.
1620 This limit marks a point of no return for the cgroup. It is NOT
1621 designed to manage the amount of swapping a workload does
1622 during regular operation. Compare to memory.swap.max, which
1623 prohibits swapping past a set amount, but lets the cgroup
1624 continue unimpeded as long as other memory can be reclaimed.
1626 Healthy workloads are not expected to reach this limit.
memory.swap.peak
A read-only single value file which exists on non-root cgroups.
1632 The max swap usage recorded for the cgroup and its
1633 descendants since the creation of the cgroup.
memory.swap.max
A read-write single value file which exists on non-root
1637 cgroups. The default is "max".
1639 Swap usage hard limit. If a cgroup's swap usage reaches this
1640 limit, anonymous memory of the cgroup will not be swapped out.
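For example, to forbid swapping for a cgroup entirely, the hard limit
can be set to zero (the cgroup path below is hypothetical)::

  # echo 0 > /sys/fs/cgroup/app/memory.swap.max
  # cat /sys/fs/cgroup/app/memory.swap.max
  0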
memory.swap.events
A read-only flat-keyed file which exists on non-root cgroups.
The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.
high
The number of times the cgroup's swap usage was over
the high threshold.
max
The number of times the cgroup's swap usage was about
to go over the max boundary and swap allocation failed.
fail
The number of times swap allocation failed either
because of running out of swap system-wide or max limit.
1662 When reduced under the current usage, the existing swap
1663 entries are reclaimed gradually and the swap usage may stay
1664 higher than the limit for an extended period of time. This
1665 reduces the impact on the workload and memory management.
1667 memory.zswap.current
A read-only single value file which exists on non-root cgroups.

The total amount of memory consumed by the zswap compression backend.
memory.zswap.max
A read-write single value file which exists on non-root
1676 cgroups. The default is "max".
1678 Zswap usage hard limit. If a cgroup's zswap pool reaches this
1679 limit, it will refuse to take any more stores before existing
1680 entries fault back in or are written out to disk.
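As a sketch, assuming a cgroup named "app", the zswap pool could be
capped at an arbitrary 512M as follows::

  # echo 512M > /sys/fs/cgroup/app/memory.zswap.max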
memory.pressure
A read-only nested-keyed file.
1685 Shows pressure stall information for memory. See
1686 :ref:`Documentation/accounting/psi.rst <psi>` for details.
1692 "memory.high" is the main mechanism to control memory usage.
1693 Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
1695 usage is a viable strategy.
1697 Because breach of the high limit doesn't trigger the OOM killer but
1698 throttles the offending cgroup, a management agent has ample
1699 opportunities to monitor and take appropriate actions such as granting
1700 more memory or terminating the workload.
1702 Determining whether a cgroup has enough memory is not trivial as
1703 memory usage doesn't indicate whether the workload can benefit from
1704 more memory. For example, a workload which writes data received from
1705 network to a file can use all available memory but can also operate as
1706 performant with a small amount of memory. A measure of memory
1707 pressure - how much the workload is being impacted due to lack of
1708 memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
implemented yet.
Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
1717 charged to the cgroup until the area is released. Migrating a process
1718 to a different cgroup doesn't move the memory usages that it
1719 instantiated while in the previous cgroup to the new cgroup.
1721 A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is indeterminate; however,
1723 over time, the memory area is likely to end up in a cgroup which has
1724 enough memory allowance to avoid high reclaim pressure.
1726 If a cgroup sweeps a considerable amount of memory which is expected
1727 to be accessed repeatedly by other cgroups, it may make sense to use
1728 POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1729 belonging to the affected files to ensure correct memory ownership.
1735 The "io" controller regulates the distribution of IO resources. This
1736 controller implements both weight based and absolute bandwidth or IOPS
1737 limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.
IO Interface Files
~~~~~~~~~~~~~~~~~~

io.stat
A read-only nested-keyed file.
1748 Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1749 The following nested keys are defined.
====== =====================
rbytes Bytes read
wbytes Bytes written
rios   Number of read IOs
wios   Number of write IOs
dbytes Bytes discarded
dios   Number of discard IOs
====== =====================
1760 An example read output follows::
1762 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1763 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
io.cost.qos
A read-write nested-keyed file which exists only on the root
cgroup.
1769 This file configures the Quality of Service of the IO cost
1770 model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1771 currently implements "io.weight" proportional control. Lines
1772 are keyed by $MAJ:$MIN device numbers and not ordered. The
1773 line for a given device is populated on the first write for
1774 the device on "io.cost.qos" or "io.cost.model". The following
1775 nested keys are defined.
1777 ====== =====================================
1778 enable Weight-based control enable
1779 ctrl "auto" or "user"
1780 rpct Read latency percentile [0, 100]
1781 rlat Read latency threshold
1782 wpct Write latency percentile [0, 100]
1783 wlat Write latency threshold
1784 min Minimum scaling percentage [1, 10000]
1785 max Maximum scaling percentage [1, 10000]
1786 ====== =====================================
1788 The controller is disabled by default and can be enabled by
1789 setting "enable" to 1. "rpct" and "wpct" parameters default
1790 to zero and the controller uses internal device saturation
1791 state to adjust the overall IO rate between "min" and "max".
1793 When a better control quality is needed, latency QoS
1794 parameters can be configured. For example::
8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00
1798 shows that on sdb, the controller is enabled, will consider
1799 the device saturated if the 95th percentile of read completion
1800 latencies is above 75ms or write 150ms, and adjust the overall
1801 IO issue rate between 50% and 150% accordingly.
1803 The lower the saturation point, the better the latency QoS at
1804 the cost of aggregate bandwidth. The narrower the allowed
1805 adjustment range between "min" and "max", the more conformant
1806 to the cost model the IO behavior. Note that the IO issue
1807 base rate may be far off from 100% and setting "min" and "max"
1808 blindly can lead to a significant loss of device capacity or
1809 control quality. "min" and "max" are useful for regulating
1810 devices which show wide temporary behavior changes - e.g. a
1811 ssd which accepts writes at the line speed for a while and
1812 then completely stalls for multiple seconds.
1814 When "ctrl" is "auto", the parameters are controlled by the
1815 kernel and may change automatically. Setting "ctrl" to "user"
1816 or setting any of the percentile and latency parameters puts
1817 it into "user" mode and disables the automatic changes. The
1818 automatic mode can be restored by setting "ctrl" to "auto".
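For example, a minimal configuration which just turns on weight-based
control for one device and leaves the QoS parameters in automatic
mode might look like this (device numbers hypothetical)::

  # echo "8:16 enable=1" > io.cost.qos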
io.cost.model
A read-write nested-keyed file which exists only on the root
cgroup.
1824 This file configures the cost model of the IO cost model based
1825 controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1826 implements "io.weight" proportional control. Lines are keyed
1827 by $MAJ:$MIN device numbers and not ordered. The line for a
1828 given device is populated on the first write for the device on
1829 "io.cost.qos" or "io.cost.model". The following nested keys
1832 ===== ================================
1833 ctrl "auto" or "user"
1834 model The cost model in use - "linear"
1835 ===== ================================
1837 When "ctrl" is "auto", the kernel may change all parameters
1838 dynamically. When "ctrl" is set to "user" or any other
parameters are written to, "ctrl" becomes "user" and the
1840 automatic changes are disabled.
1842 When "model" is "linear", the following model parameters are
1845 ============= ========================================
1846 [r|w]bps The maximum sequential IO throughput
1847 [r|w]seqiops The maximum 4k sequential IOs per second
1848 [r|w]randiops The maximum 4k random IOs per second
1849 ============= ========================================
1851 From the above, the builtin linear model determines the base
1852 costs of a sequential and random IO and the cost coefficient
1853 for the IO size. While simple, this model can cover most
1854 common device classes acceptably.
The IO cost model isn't expected to be accurate in an absolute
sense and is scaled to the device behavior dynamically.
1859 If needed, tools/cgroup/iocost_coef_gen.py can be used to
1860 generate device-specific coefficients.
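As an illustration, the generated coefficients are installed by
writing them back to the file; the numbers below are made up and not
recommendations::

  # echo "8:16 rbps=174019176 rseqiops=41708 rrandiops=370 wbps=178075866 wseqiops=42705 wrandiops=378" > io.cost.model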
io.weight
A read-write flat-keyed file which exists on non-root cgroups.
1864 The default is "default 100".
1866 The first line is the default weight applied to devices
1867 without specific override. The rest are overrides keyed by
1868 $MAJ:$MIN device numbers and not ordered. The weights are in
the range [1, 10000] and specify the relative amount of IO time
the cgroup can use in relation to its siblings.
1872 The default weight can be updated by writing either "default
1873 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
1874 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
1876 An example read output follows::
  default 100
  8:16 200
  8:0 50
io.max
A read-write nested-keyed file which exists on non-root
cgroups.
1886 BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
device numbers and not ordered. The following nested keys are
defined.
1890 ===== ==================================
1891 rbps Max read bytes per second
1892 wbps Max write bytes per second
1893 riops Max read IO operations per second
1894 wiops Max write IO operations per second
1895 ===== ==================================
1897 When writing, any number of nested key-value pairs can be
1898 specified in any order. "max" can be specified as the value
1899 to remove a specific limit. If the same key is specified
1900 multiple times, the outcome is undefined.
1902 BPS and IOPS are measured in each IO direction and IOs are
1903 delayed if limit is reached. Temporary bursts are allowed.
1905 Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
1907 echo "8:16 rbps=2097152 wiops=120" > io.max
1909 Reading returns the following::
1911 8:16 rbps=2097152 wbps=max riops=max wiops=120
1913 Write IOPS limit can be removed by writing the following::
1915 echo "8:16 wiops=max" > io.max
1917 Reading now returns the following::
1919 8:16 rbps=2097152 wbps=max riops=max wiops=max
io.pressure
A read-only nested-keyed file.
1924 Shows pressure stall information for IO. See
1925 :ref:`Documentation/accounting/psi.rst <psi>` for details.
Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
1932 written asynchronously to the backing filesystem by the writeback
1933 mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.
1937 The io controller, in conjunction with the memory controller,
1938 implements control of page cache writeback IOs. The memory controller
1939 defines the memory domain that dirty memory ratio is calculated and
1940 maintained for and the io controller defines the io domain which
1941 writes out dirty pages for the memory domain. Both system-wide and
1942 per-cgroup dirty memory states are examined and the more restrictive
1943 of the two is enforced.
1945 cgroup writeback requires explicit support from the underlying
1946 filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
1947 btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
1948 attributed to the root cgroup.
1950 There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page while writeback is tracked per inode. For the purpose of writeback, an
1953 inode is assigned to a cgroup and all IO requests to write dirty pages
1954 from the inode are attributed to that cgroup.
1956 As cgroup ownership for memory is tracked per page, there can be pages
1957 which are associated with different cgroups than the one the inode is
1958 associated with. These are called foreign pages. The writeback
1959 constantly keeps track of foreign pages and, if a particular foreign
1960 cgroup becomes the majority over a certain period of time, switches
1961 the ownership of the inode to that cgroup.
1963 While this model is enough for most use cases where a given inode is
1964 mostly dirtied by a single cgroup even when the main writing cgroup
1965 changes over time, use cases where multiple cgroups write to a single
1966 inode simultaneously are not supported well. In such circumstances, a
1967 significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
1969 doesn't update it until the page is released, even if writeback
1970 strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.
1974 The sysctl knobs which affect writeback behavior are applied to cgroup
1975 writeback as follows.
1977 vm.dirty_background_ratio, vm.dirty_ratio
1978 These ratios apply the same to cgroup writeback with the
1979 amount of available memory capped by limits imposed by the
1980 memory controller and system-wide clean memory.
1982 vm.dirty_background_bytes, vm.dirty_bytes
1983 For cgroup writeback, this is calculated into ratio against
1984 total available memory and applied the same way as
1985 vm.dirty[_background]_ratio.
IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You provide a group
1992 with a latency target, and if the average latency exceeds that target the
controller will throttle any peers that have a lower latency target than the
victimized group.
1996 The limits are only applied at the peer level in the hierarchy. This means that
1997 in the diagram below, only groups A, B, and C will influence each other, and
1998 groups D and F will influence each other. Group G will influence nobody::
                          [root]
                  /          |           \
                [A]        [B]          [C]
               /  \         |
             [D]  [F]      [G]
2007 So the ideal way to configure this is to set io.latency in groups A, B, and C.
2008 Generally you do not want to set a value lower than the latency your device
2009 supports. Experiment to find the value that works best for your workload.
2010 Start at higher than the expected latency for your device and watch the
2011 avg_lat value in io.stat for your workload group to get an idea of the
2012 latency you see during normal operation. Use the avg_lat value as a basis for
2013 your real setting, setting at 10-15% higher than the value in io.stat.
2015 How IO Latency Throttling Works
2016 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2018 io.latency is work conserving; so as long as everybody is meeting their latency
2019 target the controller doesn't do anything. Once a group starts missing its
2020 target it begins throttling any peer group that has a higher target than itself.
This throttling takes two forms:
2023 - Queue depth throttling. This is the number of outstanding IO's a group is
2024 allowed to have. We will clamp down relatively quickly, starting at no limit
2025 and going all the way down to 1 IO at a time.
2027 - Artificial delay induction. There are certain types of IO that cannot be
2028 throttled without possibly adversely affecting higher priority groups. This
2029 includes swapping and metadata IO. These types of IO are allowed to occur
2030 normally, however they are "charged" to the originating group. If the
2031 originating group is being throttled you will see the use_delay and delay
2032 fields in io.stat increase. The delay value is how many microseconds that are
2033 being added to any process that runs in this group. Because this number can
2034 grow quite large if there is a lot of swapping or metadata IO occurring we
2035 limit the individual delay events to 1 second at a time.
2037 Once the victimized group starts meeting its latency target again it will start
2038 unthrottling any peer groups that were throttled previously. If the victimized
2039 group simply stops doing IO the global counter will unthrottle appropriately.
2041 IO Latency Interface Files
2042 ~~~~~~~~~~~~~~~~~~~~~~~~~~
io.latency
This takes a similar format as the other controllers.
2047 "MAJOR:MINOR target=<target time in microseconds>"
io.stat
If the controller is enabled you will see extra stats in io.stat in
2051 addition to the normal ones.
depth
This is the current queue depth for the group.
avg_lat
This is an exponential moving average with a decay rate of 1/exp
2058 bound by the sampling interval. The decay rate interval can be
2059 calculated by multiplying the win value in io.stat by the
2060 corresponding number of samples based on the win value.
win
The sampling window size in milliseconds. This is the minimum
2064 duration of time between evaluation events. Windows only elapse
2065 with IO activity. Idle periods extend the most recent window.
IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup policy,
namely the io.prio.class attribute. The following values are accepted for
this attribute:
no-change
Do not modify the I/O priority class.
promote-to-rt
For requests that have a non-RT I/O priority class, change it into RT.
2079 Also change the priority level of these requests to 4. Do not modify
2080 the I/O priority of requests that have priority class RT.
restrict-to-be
For requests that do not have an I/O priority class or that have I/O
2084 priority class RT, change it into BE. Also change the priority level
2085 of these requests to 0. Do not modify the I/O priority class of
2086 requests that have priority class IDLE.
idle
Change the I/O priority class of all requests into IDLE, the lowest
I/O priority class.
none-to-rt
Deprecated. Just an alias for promote-to-rt.
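As a sketch, restricting a cgroup's requests to best-effort and
reading the setting back::

  # echo restrict-to-be > io.prio.class
  # cat io.prio.class
  restrict-to-be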
2095 The following numerical values are associated with the I/O priority policies:
+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+
2107 The numerical value that corresponds to each I/O priority class is as follows:
2109 +-------------------------------+---+
2110 | IOPRIO_CLASS_NONE | 0 |
2111 +-------------------------------+---+
2112 | IOPRIO_CLASS_RT (real-time) | 1 |
2113 +-------------------------------+---+
2114 | IOPRIO_CLASS_BE (best effort) | 2 |
2115 +-------------------------------+---+
2116 | IOPRIO_CLASS_IDLE | 3 |
2117 +-------------------------------+---+
2119 The algorithm to set the I/O priority class for a request is as follows:
2121 - If I/O priority class policy is promote-to-rt, change the request I/O
priority class to IOPRIO_CLASS_RT and change the request I/O priority
level to 4.
2124 - If I/O priority class policy is not promote-to-rt, translate the I/O priority
2125 class policy into a number, then change the request I/O priority class
into the maximum of the I/O priority class policy number and the numerical
I/O priority class.
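As a worked example, under the restrict-to-be policy (numerical value
2) a request with IOPRIO_CLASS_RT (1) ends up with max(2, 1) = 2,
i.e. IOPRIO_CLASS_BE, while a request that already has
IOPRIO_CLASS_IDLE (3) keeps max(2, 3) = 3 and stays IDLE, matching
the per-policy descriptions above.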
PID
---

The process number controller is used to allow a cgroup to stop any
2133 new tasks from being fork()'d or clone()'d after a specified limit is
2136 The number of tasks in a cgroup can be exhausted in ways which other
2137 controllers cannot prevent, thus warranting its own controller. For
2138 example, a fork bomb is likely to exhaust the number of tasks before
2139 hitting memory restrictions.
Note that PIDs used in this controller refer to TIDs, process IDs as
used in Linux.

PID Interface Files
~~~~~~~~~~~~~~~~~~~
pids.max
A read-write single value file which exists on non-root
2150 cgroups. The default is "max".
2152 Hard limit of number of processes.
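As an illustration, capping a cgroup at 50 processes is a single
write (the cgroup path is hypothetical)::

  # echo 50 > /sys/fs/cgroup/app/pids.max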
pids.current
A read-only single value file which exists on all cgroups.
The number of processes currently in the cgroup and its
descendants.
2160 Organisational operations are not blocked by cgroup policies, so it is
2161 possible to have pids.current > pids.max. This can be done by either
2162 setting the limit to be smaller than pids.current, or attaching enough
2163 processes to the cgroup such that pids.current is larger than
2164 pids.max. However, it is not possible to violate a cgroup PID policy
2165 through fork() or clone(). These will return -EAGAIN if the creation
2166 of a new process would cause a cgroup policy to be violated.
2172 The "cpuset" controller provides a mechanism for constraining
2173 the CPU and memory node placement of tasks to only the resources
2174 specified in the cpuset interface files in a task's current cgroup.
2175 This is especially valuable on large NUMA systems where placing jobs
2176 on properly sized subsets of the systems with careful processor and
2177 memory placement to reduce cross-node memory access and contention
2178 can improve overall system performance.
2180 The "cpuset" controller is hierarchical. That means the controller
2181 cannot use CPUs or memory nodes not allowed in its parent.
2184 Cpuset Interface Files
2185 ~~~~~~~~~~~~~~~~~~~~~~
cpuset.cpus
A read-write multiple values file which exists on non-root
2189 cpuset-enabled cgroups.
2191 It lists the requested CPUs to be used by tasks within this
2192 cgroup. The actual list of CPUs to be granted, however, is
2193 subjected to constraints imposed by its parent and can differ
2194 from the requested CPUs.
The CPU numbers are comma-separated numbers or ranges. For example::

  # cat cpuset.cpus
  0-4,6,8-10
2202 An empty value indicates that the cgroup is using the same
2203 setting as the nearest cgroup ancestor with a non-empty
2204 "cpuset.cpus" or all the available CPUs if none is found.
2206 The value of "cpuset.cpus" stays constant until the next update
2207 and won't be affected by any CPU hotplug events.
2209 cpuset.cpus.effective
2210 A read-only multiple values file which exists on all
2211 cpuset-enabled cgroups.
2213 It lists the onlined CPUs that are actually granted to this
2214 cgroup by its parent. These CPUs are allowed to be used by
2215 tasks within the current cgroup.
2217 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2218 all the CPUs from the parent cgroup that can be available to
2219 be used by this cgroup. Otherwise, it should be a subset of
2220 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2221 can be granted. In this case, it will be treated just like an
2222 empty "cpuset.cpus".
2224 Its value will be affected by CPU hotplug events.
cpuset.mems
A read-write multiple values file which exists on non-root
2228 cpuset-enabled cgroups.
2230 It lists the requested memory nodes to be used by tasks within
2231 this cgroup. The actual list of memory nodes granted, however,
2232 is subjected to constraints imposed by its parent and can differ
2233 from the requested memory nodes.
The memory node numbers are comma-separated numbers or ranges. For example::

  # cat cpuset.mems
  0-1,3
2241 An empty value indicates that the cgroup is using the same
2242 setting as the nearest cgroup ancestor with a non-empty
2243 "cpuset.mems" or all the available memory nodes if none
2246 The value of "cpuset.mems" stays constant until the next update
2247 and won't be affected by any memory nodes hotplug events.
2249 Setting a non-empty value to "cpuset.mems" causes memory of
2250 tasks within the cgroup to be migrated to the designated nodes if
2251 they are currently using memory outside of the designated nodes.
2253 There is a cost for this memory migration. The migration
2254 may not be complete and some memory pages may be left behind.
2255 So it is recommended that "cpuset.mems" should be set properly
2256 before spawning new tasks into the cpuset. Even if there is
a need to change "cpuset.mems" with active tasks, it shouldn't
be done frequently.
2260 cpuset.mems.effective
2261 A read-only multiple values file which exists on all
2262 cpuset-enabled cgroups.
2264 It lists the onlined memory nodes that are actually granted to
2265 this cgroup by its parent. These memory nodes are allowed to
2266 be used by tasks within the current cgroup.
2268 If "cpuset.mems" is empty, it shows all the memory nodes from the
2269 parent cgroup that will be available to be used by this cgroup.
2270 Otherwise, it should be a subset of "cpuset.mems" unless none of
2271 the memory nodes listed in "cpuset.mems" can be granted. In this
2272 case, it will be treated just like an empty "cpuset.mems".
2274 Its value will be affected by memory nodes hotplug events.
2276 cpuset.cpus.exclusive
2277 A read-write multiple values file which exists on non-root
2278 cpuset-enabled cgroups.
2280 It lists all the exclusive CPUs that are allowed to be used
2281 to create a new cpuset partition. Its value is not used
2282 unless the cgroup becomes a valid partition root. See the
2283 "cpuset.cpus.partition" section below for a description of what
2284 a cpuset partition is.
2286 When the cgroup becomes a partition root, the actual exclusive
2287 CPUs that are allocated to that partition are listed in
2288 "cpuset.cpus.exclusive.effective" which may be different
2289 from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive"
2290 has previously been set, "cpuset.cpus.exclusive.effective"
2291 is always a subset of it.
2293 Users can manually set it to a value that is different from
2294 "cpuset.cpus". The only constraint in setting it is that the
2295 list of CPUs must be exclusive with respect to its sibling.
2297 For a parent cgroup, any one of its exclusive CPUs can only
2298 be distributed to at most one of its child cgroups. Having an
2299 exclusive CPU appearing in two or more of its child cgroups is
2300 not allowed (the exclusivity rule). A value that violates the
2301 exclusivity rule will be rejected with a write error.
2303 The root cgroup is a partition root and all its available CPUs
2304 are in its exclusive CPU set.
2306 cpuset.cpus.exclusive.effective
2307 A read-only multiple values file which exists on all non-root
2308 cpuset-enabled cgroups.
2310 This file shows the effective set of exclusive CPUs that
2311 can be used to create a partition root. The content of this
2312 file will always be a subset of "cpuset.cpus" and its parent's
2313 "cpuset.cpus.exclusive.effective" if its parent is not the root
2314 cgroup. It will also be a subset of "cpuset.cpus.exclusive"
2315 if it is set. If "cpuset.cpus.exclusive" is not set, it is
2316 treated to have an implicit value of "cpuset.cpus" in the
2317 formation of local partition.
2319 cpuset.cpus.partition
2320 A read-write single value file which exists on non-root
2321 cpuset-enabled cgroups. This flag is owned by the parent cgroup
2322 and is not delegatable.
2324 It accepts only the following input values when written to.
2326 ========== =====================================
2327 "member" Non-root member of a partition
2328 "root" Partition root
2329 "isolated" Partition root without load balancing
2330 ========== =====================================
2332 A cpuset partition is a collection of cpuset-enabled cgroups with
2333 a partition root at the top of the hierarchy and its descendants
2334 except those that are separate partition roots themselves and
2335 their descendants. A partition has exclusive access to the
2336 set of exclusive CPUs allocated to it. Other cgroups outside
2337 of that partition cannot use any CPUs in that set.
2339 There are two types of partitions - local and remote. A local
2340 partition is one whose parent cgroup is also a valid partition
2341 root. A remote partition is one whose parent cgroup is not a
2342 valid partition root itself. Writing to "cpuset.cpus.exclusive"
2343 is optional for the creation of a local partition as its
2344 "cpuset.cpus.exclusive" file will assume an implicit value that
2345 is the same as "cpuset.cpus" if it is not set. Writing the
2346 proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2347 before the target partition root is mandatory for the creation
2348 of a remote partition.
2350 Currently, a remote partition cannot be created under a local
2351 partition. All the ancestors of a remote partition root except
2352 the root cgroup cannot be a partition root.
2354 The root cgroup is always a partition root and its state cannot
2355 be changed. All other non-root cgroups start out as "member".
2357 When set to "root", the current cgroup is the root of a new
2358 partition or scheduling domain. The set of exclusive CPUs is
2359 determined by the value of its "cpuset.cpus.exclusive.effective".
2361 When set to "isolated", the CPUs in that partition will
2362 be in an isolated state without any load balancing from the
2363 scheduler. Tasks placed in such a partition with multiple
2364 CPUs should be carefully distributed and bound to each of the
2365 individual CPUs for optimal performance.
2367 A partition root ("root" or "isolated") can be in one of the
2368 two possible states - valid or invalid. An invalid partition
2369 root is in a degraded state where some state information may
2370 be retained, but behaves more like a "member".
2372 All possible state transitions among "member", "root" and
2373 "isolated" are allowed.
On read, the "cpuset.cpus.partition" file can show the following
values.
2378 ============================= =====================================
2379 "member" Non-root member of a partition
2380 "root" Partition root
2381 "isolated" Partition root without load balancing
2382 "root invalid (<reason>)" Invalid partition root
2383 "isolated invalid (<reason>)" Invalid isolated partition root
2384 ============================= =====================================
2386 In the case of an invalid partition root, a descriptive string on
2387 why the partition is invalid is included within parentheses.
For a local partition root to be valid, the following conditions
must be met:
2392 1) The parent cgroup is a valid partition root.
2393 2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
2394 though it may contain offline CPUs.
2395 3) The "cpuset.cpus.effective" cannot be empty unless there is
2396 no task associated with this partition.
2398 For a remote partition root to be valid, all the above conditions
2399 except the first one must be met.
2401 External events like hotplug or changes to "cpuset.cpus" or
2402 "cpuset.cpus.exclusive" can cause a valid partition root to
2403 become invalid and vice versa. Note that a task cannot be
2404 moved to a cgroup with empty "cpuset.cpus.effective".
2406 A valid non-root parent partition may distribute out all its CPUs
to its child local partitions when there is no task associated
with it.
Care must be taken when changing a valid partition root to "member",
2411 as all its child local partitions, if present, will become
2412 invalid causing disruption to tasks running in those child
2413 partitions. These inactivated partitions could be recovered if
2414 their parent is switched back to a partition root with a proper
2415 value in "cpuset.cpus" or "cpuset.cpus.exclusive".
2417 Poll and inotify events are triggered whenever the state of
2418 "cpuset.cpus.partition" changes. That includes changes caused
by a write to "cpuset.cpus.partition", CPU hotplug or other
2420 changes that modify the validity status of the partition.
2421 This will allow user space agents to monitor unexpected changes
2422 to "cpuset.cpus.partition" without the need to do continuous
2425 A user can pre-configure certain CPUs to an isolated state
2426 with load balancing disabled at boot time with the "isolcpus"
2427 kernel boot command line option. If those CPUs are to be put
2428 into a partition, they have to be used in an isolated partition.
Device controller
-----------------

The device controller manages access to device files. It includes both
2435 creation of new device files (using mknod), and access to the
2436 existing device files.
The cgroup v2 device controller has no interface files and is implemented
2439 on top of cgroup BPF. To control access to device files, a user may
2440 create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2441 them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2442 device file, corresponding BPF programs will be executed, and depending
2443 on the return value the attempt will succeed or fail with -EPERM.
2445 A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2446 bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2447 access type (mknod/read/write) and device (type, major and minor numbers).
2448 If the program returns 0, the attempt fails with -EPERM, otherwise it
2451 An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2452 tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
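As a sketch, a compiled program of this type can be loaded and
attached with bpftool; the object and pin paths below are
hypothetical::

  # bpftool prog load dev_cgroup.bpf.o /sys/fs/bpf/dev_filter
  # bpftool cgroup attach /sys/fs/cgroup/app device pinned /sys/fs/bpf/dev_filter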
2458 The "rdma" controller regulates the distribution and accounting of
2461 RDMA Interface Files
2462 ~~~~~~~~~~~~~~~~~~~~
rdma.max
A read-write nested-keyed file that exists for all the cgroups
2466 except root that describes current configured resource limit
2467 for a RDMA/IB device.
2469 Lines are keyed by device name and are not ordered.
2470 Each line contains space separated resource name and its configured
2471 limit that can be distributed.
2473 The following nested keys are defined.
2475 ========== =============================
2476 hca_handle Maximum number of HCA Handles
2477 hca_object Maximum number of HCA Objects
2478 ========== =============================
2480 An example for mlx4 and ocrdma device follows::
2482 mlx4_0 hca_handle=2 hca_object=2000
2483 ocrdma1 hca_handle=3 hca_object=max
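Limits are configured by writing the device name followed by nested
key-value pairs, for example::

  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max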
rdma.current
A read-only file that describes current resource usage.
It exists for all the cgroups except root.
2489 An example for mlx4 and ocrdma device follows::
2491 mlx4_0 hca_handle=1 hca_object=20
2492 ocrdma1 hca_handle=1 hca_object=23
HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control group
and enforces the controller limit during page fault.
2500 HugeTLB Interface Files
2501 ~~~~~~~~~~~~~~~~~~~~~~~
2503 hugetlb.<hugepagesize>.current
Show current usage for "hugepagesize" hugetlb. It exists for all
the cgroups except root.
2507 hugetlb.<hugepagesize>.max
2508 Set/show the hard limit of "hugepagesize" hugetlb usage.
The default value is "max". It exists for all the cgroups except root.
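For example, assuming a 2MB hugepage size, a cgroup could be limited
to 1G of hugetlb usage with (the cgroup path is hypothetical)::

  # echo 1G > /sys/fs/cgroup/app/hugetlb.2MB.max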
2511 hugetlb.<hugepagesize>.events
2512 A read-only flat-keyed file which exists on non-root cgroups.
max
The number of allocation failures due to the HugeTLB limit.
2517 hugetlb.<hugepagesize>.events.local
2518 Similar to hugetlb.<hugepagesize>.events but the fields in the file
2519 are local to the cgroup i.e. not hierarchical. The file modified event
2520 generated on this file reflects only the local events.
2522 hugetlb.<hugepagesize>.numa_stat
2523 Similar to memory.numa_stat, it shows the numa information of the
2524 hugetlb pages of <hugepagesize> in this cgroup. Only active in
2525 use hugetlb pages are included. The per-node values are in bytes.
Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
2531 mechanism for the scalar resources which cannot be abstracted like the other
cgroup resources. The controller is enabled by the CONFIG_CGROUP_MISC config
option.
2535 A resource can be added to the controller via enum misc_res_type{} in the
2536 include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
2537 in the kernel/cgroup/misc.c file. Provider of the resource must set its
2538 capacity prior to using the resource by calling misc_cg_set_capacity().
2540 Once a capacity is set then the resource usage can be updated using charge and
2541 uncharge APIs. All of the APIs to interact with misc controller are in
2542 include/linux/misc_cgroup.h.
2544 Misc Interface Files
2545 ~~~~~~~~~~~~~~~~~~~~
The miscellaneous controller provides 3 interface files. If two misc resources (res_a and res_b) are registered, then:
misc.capacity
A read-only flat-keyed file shown only in the root cgroup. It shows
miscellaneous scalar resources available on the platform along with
their quantities::

  $ cat misc.capacity
  res_a 50
  res_b 10
misc.current
A read-only flat-keyed file shown in all cgroups. It shows
the current usage of the resources in the cgroup and its children::

  $ cat misc.current
  res_a 3
  res_b 0
misc.max
A read-write flat-keyed file shown in the non-root cgroups. Allowed
maximum usage of the resources in the cgroup and its children::

  $ cat misc.max
  res_a max
  res_b 4
2574 Limit can be set by::
2576 # echo res_a 1 > misc.max
2578 Limit can be set to max by::
2580 # echo res_a max > misc.max
Limits can be set higher than the capacity value in the misc.capacity
file.
misc.events
A read-only flat-keyed file which exists on non-root cgroups. The
2587 following entries are defined. Unless specified otherwise, a value
2588 change in this file generates a file modified event. All fields in
2589 this file are hierarchical.
max
The number of times the cgroup's resource usage was
2593 about to go over the max boundary.
2595 Migration and Ownership
2596 ~~~~~~~~~~~~~~~~~~~~~~~
2598 A miscellaneous scalar resource is charged to the cgroup in which it is used
2599 first, and stays charged to that cgroup until that resource is freed. Migrating
2600 a process to a different cgroup does not move the charge to the destination
2601 cgroup where the process has moved.
perf_event
----------

perf_event controller, if not mounted on a legacy hierarchy, is
2610 automatically enabled on the v2 hierarchy so that perf events can
2611 always be filtered by cgroup v2 path. The controller can still be
2612 moved to a legacy hierarchy after v2 hierarchy is populated.
2615 Non-normative information
2616 -------------------------
2618 This section contains information that isn't considered to be a part of
2619 the stable kernel API and so is subject to change.
2622 CPU controller root cgroup process behaviour
2623 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2625 When distributing CPU cycles in the root cgroup each thread in this
2626 cgroup is treated as if it was hosted in a separate child cgroup of the
root cgroup. This child cgroup's weight is dependent on its thread's
nice level.
2630 For details of this mapping see sched_prio_to_weight array in
2631 kernel/sched/core.c file (values from this array should be scaled
2632 appropriately so the neutral - nice 0 - value is 100 instead of 1024).
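As a worked example with the current array values: nice 0 maps to
1024, which scales to the default weight of 100; nice 10 maps to 110,
i.e. a weight of roughly 11; and nice -20 maps to 88761, i.e. a
weight of roughly 8668.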
2635 IO controller root cgroup process behaviour
2636 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2638 Root cgroup processes are hosted in an implicit leaf child node.
2639 When distributing IO resources this implicit child node is taken into
2640 account as if it was a normal child cgroup of the root cgroup with a
2641 weight value of 200.
Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
2651 "/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
2652 flag can be used with clone(2) and unshare(2) to create a new cgroup
2653 namespace. The process running inside the cgroup namespace will have
2654 its "/proc/$PID/cgroup" output restricted to cgroupns root. The
2655 cgroupns root is the cgroup of the process at the time of creation of
2656 the cgroup namespace.
2658 Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
2659 complete path of the cgroup of a process. In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
2661 "/proc/$PID/cgroup" file may leak potential system level information
2662 to the isolated processes. For example::
2664 # cat /proc/self/cgroup
2665 0::/batchjobs/container_id1
2667 The path '/batchjobs/container_id1' can be considered as system-data
2668 and undesirable to expose to the isolated processes. cgroup namespace
2669 can be used to restrict visibility of this path. For example, before
2670 creating a cgroup namespace, one would see::
2672 # ls -l /proc/self/ns/cgroup
2673 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2674 # cat /proc/self/cgroup
2675 0::/batchjobs/container_id1
2677 After unsharing a new namespace, the view changes::
2679 # ls -l /proc/self/ns/cgroup
2680 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/
2684 When some thread from a multi-threaded process unshares its cgroup
2685 namespace, the new cgroupns gets applied to the entire process (all
2686 the threads). This is natural for the v2 hierarchy; however, for the
2687 legacy hierarchies, this may be unexpected.
2689 A cgroup namespace is alive as long as there are processes inside or
2690 mounts pinning it. When the last usage goes away, the cgroup
2691 namespace is destroyed. The cgroupns root and the actual cgroups
The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2699 process calling unshare(2) is running. For example, if a process in
2700 /batchjobs/container_id1 cgroup calls unshare, cgroup
2701 /batchjobs/container_id1 becomes the cgroupns root. For the
2702 init_cgroup_ns, this is the real root ('/') cgroup.
2704 The cgroupns root cgroup does not change even if the namespace creator
2705 process later moves to a different cgroup::
  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1
Each process gets its namespace-specific view of "/proc/$PID/cgroup".
2717 Processes running inside the cgroup namespace will be able to see
2718 cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
From the initial cgroup namespace, the real cgroup path will be
visible::
2730 $ cat /proc/7353/cgroup
2731 0::/batchjobs/container_id1/sub_cgrp_1
2733 From a sibling cgroup namespace (that is, a namespace rooted at a
2734 different cgroup), the cgroup path relative to its own cgroup
2735 namespace root will be shown. For instance, if PID 7353's cgroup
2736 namespace root is at '/batchjobs/container_id2', then it will see::
2738 # cat /proc/7353/cgroup
2739 0::/../container_id2/sub_cgrp_1
2741 Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.
2745 Migration and setns(2)
2746 ----------------------
2748 Processes inside a cgroup namespace can move into and out of the
2749 namespace root if they have proper access to external cgroups. For
2750 example, from inside a namespace with cgroupns root at
2751 /batchjobs/container_id1, and assuming that the global hierarchy is
2752 still accessible inside cgroupns::
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
2756 # echo 7353 > batchjobs/container_id2/cgroup.procs
2757 # cat /proc/7353/cgroup
2758 0::/../container_id2
2760 Note that this kind of setup is not encouraged. A task inside cgroup
2761 namespace should only be exposed to its own cgroupns hierarchy.
2763 setns(2) to another cgroup namespace is allowed when:
2765 (a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns
2769 No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching
2771 process under the target cgroup namespace root.
2774 Interaction with Other Namespaces
2775 ---------------------------------
2777 Namespace specific cgroup hierarchy can be mounted by a process
2778 running inside a non-init cgroup namespace::
2780 # mount -t cgroup2 none $MOUNT_POINT
2782 This will mount the unified cgroup hierarchy with cgroupns root as the
filesystem root. The process needs CAP_SYS_ADMIN against its user and
mount namespaces.
2786 The virtualization of /proc/self/cgroup file combined with restricting
2787 the view of cgroup hierarchy by namespace-private cgroupfs mount
2788 provides a properly isolated cgroup view inside the container.
2791 Information on Kernel Programming
2792 =================================
2794 This section contains kernel programming information in the areas
2795 where interacting with cgroup is necessary. cgroup core and
2796 controllers are not covered.
2799 Filesystem Support for Writeback
2800 --------------------------------
2802 A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
2804 following two functions.
2806 wbc_init_bio(@wbc, @bio)
2807 Should be called for each bio carrying writeback data and
2808 associates the bio with the inode's owner cgroup and the
2809 corresponding request queue. This must be called after
a queue (device) has been associated with the bio and before submission.
2813 wbc_account_cgroup_owner(@wbc, @page, @bytes)
2814 Should be called for each data segment being written out.
2815 While this function doesn't care exactly when it's called
2816 during the writeback session, it's the easiest and most
2817 natural to call it as data segments are added to a bio.
2819 With writeback bio's annotated, cgroup support can be enabled per
2820 super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
2821 selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.
2826 the configuration, the bio may be executed at a lower priority and if
2827 the writeback session is holding shared resources, e.g. a journal
2828 entry, may lead to priority inversion. There is no one easy solution
2829 for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.
2834 Deprecated v1 Core Features
2835 ===========================
2837 - Multiple hierarchies including named ones are not supported.
- None of the v1 mount options is supported.
2841 - The "tasks" file is removed and "cgroup.procs" is not sorted.
2843 - "cgroup.clone_children" is removed.
2845 - /proc/cgroups is meaningless for v2. Use "cgroup.controllers" file
2846 at the root instead.
2849 Issues with v1 and Rationales for v2
2850 ====================================
2852 Multiple Hierarchies
2853 --------------------
2855 cgroup v1 allowed an arbitrary number of hierarchies and each
2856 hierarchy could host any number of controllers. While this seemed to
2857 provide a high level of flexibility, it wasn't useful in practice.
2859 For example, as there is only one instance of each controller, utility
2860 type controllers such as freezer which can be useful in all
2861 hierarchies could only be used in one. The issue is exacerbated by
2862 the fact that controllers couldn't be moved to another hierarchy once
2863 hierarchies were populated. Another issue was that all controllers
2864 bound to a hierarchy were forced to have exactly the same view of the
2865 hierarchy. It wasn't possible to vary the granularity depending on
2866 the specific controller.
2868 In practice, these issues heavily limited which controllers could be
2869 put on the same hierarchy and most configurations resorted to putting
2870 each controller on its own hierarchy. Only closely related ones, such
2871 as the cpu and cpuacct controllers, made sense to be put on the same
2872 hierarchy. This often meant that userland ended up managing multiple
2873 similar hierarchies repeating the same steps on each hierarchy
2874 whenever a hierarchy management operation was necessary.
2876 Furthermore, support for multiple hierarchies came at a steep cost.
2877 It greatly complicated cgroup core implementation but more importantly
2878 the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.
2881 There was no limit on how many hierarchies there might be, which meant
2882 that a thread's cgroup membership couldn't be described in finite
2883 length. The key might contain any number of entries and was unlimited
2884 in length, which made it highly awkward to manipulate and led to
2885 addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of the proliferating
number of hierarchies.
2889 Also, as a controller couldn't have any expectation regarding the
2890 topologies of hierarchies other controllers might be on, each
2891 controller had to assume that all other controllers were attached to
2892 completely orthogonal hierarchies. This made it impossible, or at
2893 least very cumbersome, for controllers to cooperate with each other.
2895 In most use cases, putting controllers on hierarchies which are
2896 completely orthogonal to each other isn't necessary. What usually is
2897 called for is the ability to have differing levels of granularity
2898 depending on the specific controller. In other words, hierarchy may
2899 be collapsed from leaf towards root when viewed from specific
2900 controllers. For example, a given configuration might not care about
2901 how memory is distributed beyond a certain level while still wanting
2902 to control how CPU cycles are distributed.
Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
2909 This didn't make sense for some controllers and those controllers
2910 ended up implementing different ways to ignore such situations but
2911 much more importantly it blurred the line between API exposed to
2912 individual applications and system management interface.
2914 Generally, in-process knowledge is available only to the process
2915 itself; thus, unlike service-level organization of processes,
2916 categorizing threads of a process requires active participation from
2917 the application which owns the target process.
2919 cgroup v1 had an ambiguously defined delegation model which got abused
2920 in combination with thread granularity. cgroups were delegated to
2921 individual applications so that they can create and manage their own
2922 sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API
exposed to lay programs.
2926 First of all, cgroup has a fundamentally inadequate interface to be
2927 exposed this way. For a process to access its own knobs, it has to
2928 extract the path on the target hierarchy from /proc/self/cgroup,
2929 construct the path by appending the name of the knob to the path, open
2930 and then read and/or write to it. This is not only extremely clunky
and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps and nothing can guarantee
2933 that the process would actually be operating on its own sub-hierarchy.
2935 cgroup controllers implemented a number of knobs which would never be
2936 accepted as public APIs because they were just adding control knobs to
2937 system-management pseudo filesystem. cgroup ended up with interface
2938 knobs which were not properly abstracted or refined and directly
2939 revealed kernel internal details. These knobs got exposed to
2940 individual applications through the ill-defined delegation mechanism
2941 effectively abusing cgroup as a shortcut to implementing public APIs
2942 without going through the required scrutiny.
2944 This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces, and the kernel inadvertently
exposed and got locked into such constructs.
2949 Competition Between Inner Nodes and Threads
2950 -------------------------------------------
2952 cgroup v1 allowed threads to be in any cgroups which created an
2953 interesting problem where threads belonging to a parent cgroup and its
2954 children cgroups competed for resources. This was nasty as two
2955 different types of entities competed and there was no obvious way to
2956 settle it. Different controllers did different things.
2958 The cpu controller considered threads and cgroups as equivalents and
2959 mapped nice levels to cgroup weights. This worked for some cases but
2960 fell flat when children wanted to be allocated specific ratios of CPU
2961 cycles and the number of internal threads fluctuated - the ratios
2962 constantly changed as the number of competing entities fluctuated.
2963 There also were other issues. The mapping from nice level to weight
2964 wasn't obvious or universal, and there were various other knobs which
2965 simply weren't available for threads.
2967 The io controller implicitly created a hidden leaf node for each
2968 cgroup to host the threads. The hidden leaf had its own copies of all
2969 the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
2971 always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.
2975 The memory controller didn't have a way to control what happened
2976 between internal tasks and child cgroups and the behavior was not
2977 clearly defined. There were attempts to add ad-hoc behaviors and
2978 knobs to tailor the behavior to specific workloads which would have
2979 led to problems extremely difficult to resolve in the long term.
2981 Multiple controllers struggled with internal tasks and came up with
2982 different ways to deal with it; unfortunately, all the approaches were
2983 severely flawed and, furthermore, the widely different behaviors
2984 made cgroup as a whole highly inconsistent.
This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.
2990 Other Interface Issues
2991 ----------------------
2993 cgroup v1 grew without oversight and developed a large number of
2994 idiosyncrasies and inconsistencies. One issue on the cgroup core side
2995 was how an empty cgroup was notified - a userland helper binary was
2996 forked and executed for each event. The event delivery wasn't
2997 recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further complicating
the interface.
3001 Controller interfaces were problematic too. An extreme example is
3002 controllers completely ignoring hierarchical organization and treating
3003 all cgroups as if they were all located directly under the root
3004 cgroup. Some controllers exposed a large amount of inconsistent
3005 implementation details to userland.
3007 There also was no consistency across controllers. When a new cgroup
3008 was created, some controllers defaulted to not imposing extra
3009 restrictions while others disallowed any resource usage until
3010 explicitly configured. Configuration knobs for the same type of
3011 control used widely differing naming schemes and formats. Statistics
3012 and information knobs were named arbitrarily and used different
3013 formats and units even in the same controller.
3015 cgroup v2 establishes common conventions where appropriate and updates
3016 controllers so that they expose minimal and consistent interfaces.
3019 Controller Issues and Remedies
3020 ------------------------------
Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
3026 that is per default unset. As a result, the set of cgroups that
3027 global reclaim prefers is opt-in, rather than opt-out. The costs for
3028 optimizing these mostly negative lookups are so high that the
3029 implementation, despite its enormous size, does not even provide the
3030 basic desirable behavior. First off, the soft limit has no
3031 hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are located
3033 in the hierarchy. This makes subtree delegation impossible. Second,
3034 the soft limit reclaim pass is so aggressive that it not just
3035 introduces high allocation latencies into the system, but also impacts
3036 system performance due to overreclaim, to the point where the feature
3037 becomes self-defeating.
3039 The memory.low boundary on the other hand is a top-down allocated
3040 reserve. A cgroup enjoys reclaim protection when it's within its
3041 effective low, which makes delegation of subtrees possible. It also
3042 enjoys having reclaim pressure proportional to its overage when
3043 above its effective low.
3045 The original high boundary, the hard limit, is defined as a strict
3046 limit that can not budge, even if the OOM killer has to be called.
3047 But this generally goes against the goal of making the most out of the
3048 available memory. The memory consumption of workloads varies during
3049 runtime, and that requires users to overcommit. But doing that with a
3050 strict upper limit requires either a fairly accurate prediction of the
3051 working set size or adding slack to the limit. Since working set size
3052 estimation is hard and error prone, and getting it wrong results in
3053 OOM kills, most users tend to err on the side of a looser limit and
3054 end up wasting precious resources.
3056 The memory.high boundary on the other hand can be set much more
3057 conservatively. When hit, it throttles allocations by forcing them
3058 into direct reclaim to work off the excess, but it never invokes the
3059 OOM killer. As a result, a high boundary that is chosen too
3060 aggressively will not terminate the processes, but instead it will
3061 lead to gradual performance degradation. The user can monitor this
3062 and make corrections until the minimal memory footprint that still
3063 gives acceptable performance is found.
3065 In extreme cases, with many concurrent allocations and a complete
3066 breakdown of reclaim progress within the group, the high boundary can
3067 be exceeded. But even then it's mostly better to satisfy the
3068 allocation from the slack available in other groups or the rest of the
3069 system than killing the group. Otherwise, memory.max is there to
3070 limit this type of spillover and ultimately contain buggy or even
3071 malicious applications.
3073 Setting the original memory.limit_in_bytes below the current usage was
3074 subject to a race condition, where concurrent charges could cause the
3075 limit setting to fail. memory.max on the other hand will first set the
3076 limit to prevent new charges, and then reclaim and OOM kill until the
3077 new limit is met - or the task writing to memory.max is killed.
3079 The combined memory+swap accounting and limiting is replaced by real
3080 control over swap space.
3082 The main argument for a combined memory+swap facility in the original
3083 cgroup design was that global or parental pressure would always be
3084 able to swap all anonymous memory of a child group, regardless of the
3085 child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing their
3087 anonymous memory in a tight loop - and an admin can not assume full
3088 swappability when overcommitting untrusted jobs.
3090 For trusted jobs, on the other hand, a combined counter is not an
3091 intuitive userspace interface, and it flies in the face of the idea
3092 that cgroup controllers should account and limit specific physical
3093 resources. Swap space is a resource like all others in the system,
3094 and that's why unified hierarchy allows distributing it separately.