=======================================================
Memory Resource Controller (Memcg) Implementation Memo
=======================================================

Last Updated: 2010/2

Base Kernel Version: based on 2.6.33-rc7-mm(candidate for 34).

Because the VM is getting complex (one of the reasons being memcg itself),
memcg's behavior is complex too. This is a document about memcg's internal
behavior. Please note that implementation details are subject to change.

(*) Topics on the API should be in Documentation/admin-guide/cgroup-v1/memory.rst.

0. How to record usage?
========================

   Two objects are used.

   page_cgroup ... an object per page.

      Allocated at boot or memory hotplug. Freed at memory hot removal.

   swap_cgroup ... an entry per swp_entry.

      Allocated at swapon(). Freed at swapoff().

   page_cgroup has a USED bit, so double counting against a page_cgroup
   never occurs. swap_cgroup is used only when a charged page is
   swapped out.
1. Charge
=========

   A page/swp_entry may be charged (usage += PAGE_SIZE) at

      mem_cgroup_try_charge()

2. Uncharge
===========

   A page/swp_entry may be uncharged (usage -= PAGE_SIZE) by

      mem_cgroup_uncharge()
         Called when a page's refcount goes down to 0.

      mem_cgroup_uncharge_swap()
         Called when a swp_entry's refcnt goes down to 0. A charge against
         swap disappears.

3. charge-commit-cancel
=======================

   Memcg pages are charged in two steps:

      - mem_cgroup_try_charge()
      - mem_cgroup_commit_charge() or mem_cgroup_cancel_charge()

   At try_charge(), there is no flag yet saying "this page is charged";
   at this point, usage += PAGE_SIZE.

   At commit(), the page is associated with the memcg.

   At cancel(), simply usage -= PAGE_SIZE.
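   The net effect of charge and uncharge is visible from userspace as
   movement in memory.usage_in_bytes. A minimal sketch, assuming a cgroup
   v1 hierarchy with the memory controller mounted at /cgroup (the same
   layout as the tests in section 9), and reusing the shell-variable trick
   from section 9.10 to allocate memory::

        # mount -t cgroup none /cgroup -o memory
        # mkdir /cgroup/test
        # echo $$ > /cgroup/test/tasks
        # cat /cgroup/test/memory.usage_in_bytes    # baseline
        # a="$(dd if=/dev/zero bs=1M count=4)"      # charge roughly 4M
        # cat /cgroup/test/memory.usage_in_bytes    # usage has grown
        # a=                                        # free; uncharge shrinks usage

   Exact numbers will differ (the shell itself is charged too); only the
   direction of the change matters here.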
In the explanation below, we assume CONFIG_SWAP=y.

4. Anonymous
============

   An anonymous page is newly allocated at

      - page fault into a MAP_ANONYMOUS mapping.
      - Copy-On-Write.

   4.1 Swap-in.
   At swap-in, the page is taken from the swap cache. There are 2 cases.

   (a) If the SwapCache is newly allocated and read, it has no charge.
   (b) If the SwapCache has been mapped by processes, it has been charged
       already.

   4.2 Swap-out.
   At swap-out, the typical state transition is as below.

   (a) add to swap cache. (marked as SwapCache)
       swp_entry's refcnt += 1.
   (b) fully unmapped.
       swp_entry's refcnt += # of ptes.
   (c) write back to swap.
   (d) delete from swap cache. (removed from SwapCache)
       swp_entry's refcnt -= 1.

   Finally, at task exit,

   (e) zap_pte() is called and swp_entry's refcnt -= 1 -> 0.
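   The 4.2 transitions can be driven from userspace by charging more
   anonymous memory than the limit allows. A sketch, assuming swap is
   enabled, swap accounting is active (swapaccount=1), and /cgroup is
   mounted as above::

        # mkdir /cgroup/anon-test
        # echo 40M > /cgroup/anon-test/memory.limit_in_bytes
        # echo $$ > /cgroup/anon-test/tasks
        # run a malloc(60M) program under this (~20M must swap out)
        # grep swap /cgroup/anon-test/memory.stat   # charge now held by swap_cgroup

   When the program exits, zap_pte() drops the remaining swp_entry
   references, as in step (e), and the swap charges disappear.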
5. Page Cache
=============

   Page Cache is charged at

   - filemap_add_folio().

   The logic is very clear. (About migration, see below.)

   Note:
      __filemap_remove_folio() is called by filemap_remove_folio()
      and __remove_mapping().

6. Shmem(tmpfs) Page Cache
==========================

   The best way to understand shmem's page state transitions is to read
   mm/shmem.c.

   Still, a brief explanation of memcg's behavior around shmem helps in
   understanding the logic.

   A shmem page (just a leaf page, not a direct/indirect block) can be on

      - the radix-tree of shmem's inode.
      - the SwapCache.
      - both the radix-tree and the SwapCache. This happens at swap-in
        and swap-out.

   It's charged when...

   - A new page is added to shmem's radix-tree.
   - A swap page is read in. (This moves the charge from swap_cgroup to
     page_cgroup.)
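   Both the page-cache charge of section 5 and the shmem radix-tree charge
   above can be observed with a private tmpfs mount. A sketch; the mount
   point is arbitrary::

        # mkdir /cgroup/shmem-test
        # echo $$ > /cgroup/shmem-test/tasks
        # mount -t tmpfs none /mnt/test
        # dd if=/dev/zero of=/mnt/test/file bs=1M count=8   # pages enter shmem's radix-tree
        # grep cache /cgroup/shmem-test/memory.stat         # charged as page cache

   If the file is later pushed out to swap and read back, the charge moves
   to swap_cgroup and then back to page_cgroup, as described above.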
7. Page Migration
=================

   mem_cgroup_migrate()

8. LRU
======

   Each memcg has its own vector of LRUs (inactive anon, active anon,
   inactive file, active file, unevictable) of pages from each node,
   each LRU handled under a single lru_lock for that memcg and node.
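   In cgroup v1, the per-node split of a memcg's pages can be inspected
   through memory.numa_stat, which is a convenient sanity check when
   exercising the per-node LRUs. A sketch, reusing /cgroup/test from the
   earlier examples::

        # echo $$ > /cgroup/test/tasks
        # a="$(dd if=/dev/zero bs=1M count=16)"
        # cat /cgroup/test/memory.numa_stat

   The output has total=, file=, anon= and unevictable= lines, each with
   per-node N<id>= counts, which should track where the workload's pages
   actually sit.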
9. Typical Tests.
=================

   Tests for racy cases.

9.1 Small limit to memcg.
-------------------------

   When testing a racy case, it is a good idea to set memcg's limit very
   low rather than in gigabytes; many races were only found in tests under
   xKB or xxMB limits.

   (Memory behavior under a GB-sized limit and under an MB-sized limit can
   be very different.)

9.2 Shmem
---------

   Historically, memcg's shmem handling was poor and we saw a fair amount
   of trouble here. This is because shmem is page cache but can also be
   SwapCache. Testing with shmem/tmpfs is always a good test.
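   A sketch combining both points above: a deliberately tiny limit plus a
   parallel shmem workload, so that the charge, reclaim, and SwapCache
   paths race constantly (sizes and counts are arbitrary)::

        mkdir /cgroup/racy
        echo 1M > /cgroup/racy/memory.limit_in_bytes    # deliberately tiny
        echo $$ > /cgroup/racy/tasks
        mount -t tmpfs none /mnt/test
        for i in `seq 1 100`
        do
                dd if=/dev/zero of=/mnt/test/f$i bs=64k count=8 2>/dev/null &
        done
        wait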
9.3 Migration
-------------

   For NUMA, migration is another special case. cpuset is useful for an
   easy test. The following is a sample script to do migration::

        mount -t cgroup -o cpuset none /opt/cpuset

        mkdir /opt/cpuset/01
        echo 1 > /opt/cpuset/01/cpuset.cpus
        echo 0 > /opt/cpuset/01/cpuset.mems
        echo 1 > /opt/cpuset/01/cpuset.memory_migrate
        mkdir /opt/cpuset/02
        echo 1 > /opt/cpuset/02/cpuset.cpus
        echo 1 > /opt/cpuset/02/cpuset.mems
        echo 1 > /opt/cpuset/02/cpuset.memory_migrate

   With the above setup, when you move a task from 01 to 02, page
   migration from node 0 to node 1 will occur. The following is a script
   to migrate all tasks from one cpuset to another (G1 is the source
   directory, G2 the destination)::

        move_task()
        {
                for pid in $1
                do
                        /bin/echo $pid >$2/tasks 2>/dev/null
                        echo -n $pid
                        echo -n " "
                done
                echo END
        }

        G1_TASK=`cat ${G1}/tasks`
        G2_TASK=`cat ${G2}/tasks`
        move_task "${G1_TASK}" ${G2} &

9.4 Memory hotplug
------------------

   The memory hotplug test is another good test.

   To offline memory, do the following::

        # echo offline > /sys/devices/system/memory/memoryXXX/state

   (XXX is the number of the memory block.)

   This is an easy way to test page migration, too.
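   To make this racier, a loop can offline and online one block repeatedly
   while jobs run in the memcg. A sketch; memory32 stands in for any
   removable block::

        while true
        do
                echo offline > /sys/devices/system/memory/memory32/state
                sleep 1
                echo online > /sys/devices/system/memory/memory32/state
                sleep 1
        done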
9.5 nested cgroups
------------------

   Use tests like the following for testing nested cgroups::

        mkdir /opt/cgroup/01/child_a
        mkdir /opt/cgroup/01/child_b

        set limit to 01.
        add limit to 01/child_b
        run jobs under child_a and child_b

   Create and delete groups like the following at random while the jobs
   are running::

        /opt/cgroup/01/child_a/child_aa
        /opt/cgroup/01/child_b/child_bb
        /opt/cgroup/01/child_c

   Running new jobs in the new groups is also good.

9.6 Mount with other subsystems
-------------------------------

   Mounting with other subsystems is a good test because there are race
   and lock dependencies with the other cgroup subsystems.

   Example::

        # mount -t cgroup none /cgroup -o cpuset,memory,cpu,devices

   Then do task moves, mkdir, rmdir, etc. under this hierarchy.
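   For example, a loop like the following (a sketch; $PID is any task
   belonging to the test job) keeps the co-mounted hierarchy busy while a
   memory workload runs::

        while true
        do
                mkdir /cgroup/tmp 2>/dev/null
                echo $PID > /cgroup/tmp/tasks   # move the task in...
                echo $PID > /cgroup/tasks       # ...and back to the root group
                rmdir /cgroup/tmp
        done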
9.7 swapoff
-----------

   Besides the fact that swap management is one of the most complicated
   parts of memcg, the swap-in call path at swapoff differs from the usual
   swap-in path, so it is worth testing explicitly.

   For example, a test like the following is good:

   (Shell-A)::

        # mount -t cgroup none /cgroup -o memory
        # mkdir /cgroup/test
        # echo 40M > /cgroup/test/memory.limit_in_bytes
        # echo 0 > /cgroup/test/tasks

   Run a malloc(100M) program under this. You'll see 60M of swap.

   (Shell-B)::

        # move all tasks in /cgroup/test to /cgroup
        # /sbin/swapoff -a
        # rmdir /cgroup/test
        # kill malloc task.

   Of course, the tmpfs-vs-swapoff case should be tested, too.
9.8 OOM-Killer
--------------

   An out-of-memory condition caused by a memcg's limit will kill tasks
   in that memcg. When a hierarchy is used, a task somewhere in the
   hierarchy will be killed by the kernel.

   In this case, panic_on_oom must not be invoked, and tasks in other
   groups must not be killed.

   It is not difficult to cause OOM under a memcg, as follows.

   Case A) when you can swapoff::

        #swapoff -a
        #echo 50M > /memory.limit_in_bytes

   run 51M of malloc

   Case B) when you use mem+swap limitation::

        #echo 50M > memory.limit_in_bytes
        #echo 50M > memory.memsw.limit_in_bytes

   run 51M of malloc
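   That the group itself hit its limit can be cross-checked through cgroup
   v1's memory.oom_control file (recent kernels also print an oom_kill
   counter there)::

        # cat memory.oom_control
        oom_kill_disable 0
        under_oom 0

   The important checks are that the killed task was inside this memcg and
   that neither panic_on_oom nor kills in other groups were triggered.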
9.9 Move charges at task migration
----------------------------------

   Charges associated with a task can be moved along with task migration.

   (Shell-A)::

        #mkdir /cgroup/A
        #echo $$ >/cgroup/A/tasks

   Run some programs that use some amount of memory in /cgroup/A.

   (Shell-B)::

        #mkdir /cgroup/B
        #echo 1 >/cgroup/B/memory.move_charge_at_immigrate
        #echo "pid of the program running in group A" >/cgroup/B/tasks

   You can see that the charges have moved by reading ``*.usage_in_bytes``
   or memory.stat of both A and B.

   See section 8.2 of Documentation/admin-guide/cgroup-v1/memory.rst for
   the value that should be written to move_charge_at_immigrate.
9.10 Memory thresholds
----------------------

   The memory controller implements memory thresholds using the cgroups
   notification API. You can use tools/cgroup/cgroup_event_listener.c to
   test it.

   (Shell-A) Create a cgroup and run the event listener::

        # mkdir /cgroup/A
        # ./cgroup_event_listener /cgroup/A/memory.usage_in_bytes 5M

   (Shell-B) Add a task to the cgroup, then allocate and free memory::

        # echo $$ >/cgroup/A/tasks
        # a="$(dd if=/dev/zero bs=1M count=10)"
        # a=

   You will see a message from cgroup_event_listener every time you cross
   a threshold.

   Use /cgroup/A/memory.memsw.usage_in_bytes to test memsw thresholds.

   It's a good idea to test the root cgroup as well.