ʝsphinx.addnodesdocument)}( rawsourcechildren]( translations LanguagesNode)}(hhh](h pending_xref)}(hhh]docutils.nodesTextChinese (Simplified)}parenthsba attributes}(ids]classes]names]dupnames]backrefs] refdomainstdreftypedoc reftarget /translations/zh_CN/bpf/map_hashmodnameN classnameN refexplicitutagnamehhh ubh)}(hhh]hChinese (Traditional)}hh2sbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget /translations/zh_TW/bpf/map_hashmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hItalian}hhFsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget /translations/it_IT/bpf/map_hashmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hJapanese}hhZsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget /translations/ja_JP/bpf/map_hashmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hKorean}hhnsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget /translations/ko_KR/bpf/map_hashmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hPortuguese (Brazilian)}hhsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget /translations/pt_BR/bpf/map_hashmodnameN classnameN refexplicituh1hhh ubh)}(hhh]hSpanish}hhsbah}(h]h ]h"]h$]h&] refdomainh)reftypeh+ reftarget /translations/sp_SP/bpf/map_hashmodnameN classnameN refexplicituh1hhh ubeh}(h]h ]h"]h$]h&]current_languageEnglishuh1h hh _documenthsourceNlineNubhcomment)}(h%SPDX-License-Identifier: GPL-2.0-onlyh]h%SPDX-License-Identifier: GPL-2.0-only}hhsbah}(h]h ]h"]h$]h&] xml:spacepreserveuh1hhhhhh:/var/lib/git/docbuild/linux/Documentation/bpf/map_hash.rsthKubh)}(h Copyright (C) 2022 Red Hat, Inc.h]h Copyright (C) 2022 Red Hat, Inc.}hhsbah}(h]h ]h"]h$]h&]hhuh1hhhhhhhhKubh)}(h'Copyright (C) 2022-2023 Isovalent, Inc.h]h'Copyright (C) 2022-2023 Isovalent, Inc.}hhsbah}(h]h ]h"]h$]h&]hhuh1hhhhhhhhKubhsection)}(hhh](htitle)}(h/BPF_MAP_TYPE_HASH, with PERCPU and LRU Variantsh]h/BPF_MAP_TYPE_HASH, with PERCPU and LRU Variants}(hhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhhhhhKubhnote)}(h- ``BPF_MAP_TYPE_HASH`` was introduced in kernel version 3.19 - ``BPF_MAP_TYPE_PERCPU_HASH`` was 
introduced in version 4.6 - Both ``BPF_MAP_TYPE_LRU_HASH`` and ``BPF_MAP_TYPE_LRU_PERCPU_HASH`` were introduced in version 4.10h]h bullet_list)}(hhh](h list_item)}(h;``BPF_MAP_TYPE_HASH`` was introduced in kernel version 3.19h]h paragraph)}(hjh](hliteral)}(h``BPF_MAP_TYPE_HASH``h]hBPF_MAP_TYPE_HASH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh& was introduced in kernel version 3.19}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhK hjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(h:``BPF_MAP_TYPE_PERCPU_HASH`` was introduced in version 4.6h]j )}(hj1h](j)}(h``BPF_MAP_TYPE_PERCPU_HASH``h]hBPF_MAP_TYPE_PERCPU_HASH}(hj6hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3ubh was introduced in version 4.6}(hj3hhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhK hj/ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hcBoth ``BPF_MAP_TYPE_LRU_HASH`` and ``BPF_MAP_TYPE_LRU_PERCPU_HASH`` were introduced in version 4.10h]j )}(hcBoth ``BPF_MAP_TYPE_LRU_HASH`` and ``BPF_MAP_TYPE_LRU_PERCPU_HASH`` were introduced in version 4.10h](hBoth }(hjXhhhNhNubj)}(h``BPF_MAP_TYPE_LRU_HASH``h]hBPF_MAP_TYPE_LRU_HASH}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubh and }(hjXhhhNhNubj)}(h ``BPF_MAP_TYPE_LRU_PERCPU_HASH``h]hBPF_MAP_TYPE_LRU_PERCPU_HASH}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubh were introduced in version 4.10}(hjXhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhK hjTubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]bullet-uh1hhhhK hhubah}(h]h ]h"]h$]h&]uh1hhhhhhNhNubj )}(h``BPF_MAP_TYPE_HASH`` and ``BPF_MAP_TYPE_PERCPU_HASH`` provide general purpose hash map storage. Both the key and the value can be structs, allowing for composite keys and values.h](j)}(h``BPF_MAP_TYPE_HASH``h]hBPF_MAP_TYPE_HASH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and }(hjhhhNhNubj)}(h``BPF_MAP_TYPE_PERCPU_HASH``h]hBPF_MAP_TYPE_PERCPU_HASH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh} provide general purpose hash map storage. 
Both the key and the value can be structs, allowing for composite keys and values.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKhhhhubj )}(hXThe kernel is responsible for allocating and freeing key/value pairs, up to the max_entries limit that you specify. Hash maps use pre-allocation of hash table elements by default. The ``BPF_F_NO_PREALLOC`` flag can be used to disable pre-allocation when it is too memory expensive.h](hThe kernel is responsible for allocating and freeing key/value pairs, up to the max_entries limit that you specify. Hash maps use pre-allocation of hash table elements by default. The }(hjhhhNhNubj)}(h``BPF_F_NO_PREALLOC``h]hBPF_F_NO_PREALLOC}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhL flag can be used to disable pre-allocation when it is too memory expensive.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKhhhhubj )}(hz``BPF_MAP_TYPE_PERCPU_HASH`` provides a separate value slot per CPU. The per-cpu values are stored internally in an array.h](j)}(h``BPF_MAP_TYPE_PERCPU_HASH``h]hBPF_MAP_TYPE_PERCPU_HASH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh^ provides a separate value slot per CPU. The per-cpu values are stored internally in an array.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKhhhhubj )}(hXbThe ``BPF_MAP_TYPE_LRU_HASH`` and ``BPF_MAP_TYPE_LRU_PERCPU_HASH`` variants add LRU semantics to their respective hash tables. An LRU hash will automatically evict the least recently used entries when the hash table reaches capacity. An LRU hash maintains an internal LRU list that is used to select elements for eviction. This internal LRU list is shared across CPUs but it is possible to request a per CPU LRU list with the ``BPF_F_NO_COMMON_LRU`` flag when calling ``bpf_map_create``. 
The following table outlines the properties of LRU maps depending on the map type and the flags used to create the map.h](hThe }(hjhhhNhNubj)}(h``BPF_MAP_TYPE_LRU_HASH``h]hBPF_MAP_TYPE_LRU_HASH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and }(hjhhhNhNubj)}(h ``BPF_MAP_TYPE_LRU_PERCPU_HASH``h]hBPF_MAP_TYPE_LRU_PERCPU_HASH}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhXh variants add LRU semantics to their respective hash tables. An LRU hash will automatically evict the least recently used entries when the hash table reaches capacity. An LRU hash maintains an internal LRU list that is used to select elements for eviction. This internal LRU list is shared across CPUs but it is possible to request a per CPU LRU list with the }(hjhhhNhNubj)}(h``BPF_F_NO_COMMON_LRU``h]hBPF_F_NO_COMMON_LRU}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh flag when calling }(hjhhhNhNubj)}(h``bpf_map_create``h]hbpf_map_create}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh|. The following table outlines the properties of LRU maps depending on the map type and the flags used to create the map.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKhhhhubhtable)}(hhh]htgroup)}(hhh](hcolspec)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jhhjeubji)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jhhjeubji)}(hhh]h}(h]h ]h"]h$]h&]colwidthK uh1jhhjeubhthead)}(hhh]hrow)}(hhh](hentry)}(hhh]j )}(hFlagh]hFlag}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhK&hjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]j )}(h``BPF_MAP_TYPE_LRU_HASH``h]j)}(hjh]hBPF_MAP_TYPE_LRU_HASH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j hhhK&hjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]j )}(h ``BPF_MAP_TYPE_LRU_PERCPU_HASH``h]j)}(hjh]hBPF_MAP_TYPE_LRU_PERCPU_HASH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j hhhK&hjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jhjeubhtbody)}(hhh](j)}(hhh](j)}(hhh]j )}(h**BPF_F_NO_COMMON_LRU**h]hstrong)}(hjh]hBPF_F_NO_COMMON_LRU}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1j hhhK(hjubah}(h]h 
]h"]h$]h&]uh1jhjubj)}(hhh]j )}(hPer-CPU LRU, global maph]hPer-CPU LRU, global map}(hj$hhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhK(hj!ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]j )}(hPer-CPU LRU, per-cpu maph]hPer-CPU LRU, per-cpu map}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhK(hj8ubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh](j)}(hhh]j )}(h**!BPF_F_NO_COMMON_LRU**h]j)}(hj]h]h!BPF_F_NO_COMMON_LRU}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj[ubah}(h]h ]h"]h$]h&]uh1j hhhK)hjXubah}(h]h ]h"]h$]h&]uh1jhjUubj)}(hhh]j )}(hGlobal LRU, global maph]hGlobal LRU, global map}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhK)hjxubah}(h]h ]h"]h$]h&]uh1jhjUubj)}(hhh]j )}(hGlobal LRU, per-cpu maph]hGlobal LRU, per-cpu map}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhK)hjubah}(h]h ]h"]h$]h&]uh1jhjUubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjeubeh}(h]h ]h"]h$]h&]colsKuh1jchj`ubah}(h]h ]h"]h$]h&]uh1j^hhhhhhhNubh)}(hhh](h)}(hUsageh]hUsage}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhK-ubh)}(hhh](h)}(h Kernel BPFh]h Kernel BPF}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhK0ubh)}(hhh](h)}(hbpf_map_update_elem()h]hbpf_map_update_elem()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhK3ubh literal_block)}(h\long bpf_map_update_elem(struct bpf_map *map, const void *key, const void *value, u64 flags)h]h\long bpf_map_update_elem(struct bpf_map *map, const void *key, const void *value, u64 flags)}hjsbah}(h]h ]h"]h$]h&]hhƌforcelanguagechighlight_args}uh1jhhhK5hjhhubj )}(hHash entries can be added or updated using the ``bpf_map_update_elem()`` helper. This helper replaces existing elements atomically. The ``flags`` parameter can be used to control the update behaviour:h](h/Hash entries can be added or updated using the }(hjhhhNhNubj)}(h``bpf_map_update_elem()``h]hbpf_map_update_elem()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh@ helper. This helper replaces existing elements atomically. 
The }(hjhhhNhNubj)}(h ``flags``h]hflags}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh7 parameter can be used to control the update behaviour:}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhK9hjhhubj)}(hhh](j)}(hC``BPF_ANY`` will create a new element or update an existing elementh]j )}(hj>h](j)}(h ``BPF_ANY``h]hBPF_ANY}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubh8 will create a new element or update an existing element}(hj@hhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhK=hj<ubah}(h]h ]h"]h$]h&]uh1jhj9hhhhhNubj)}(hK``BPF_NOEXIST`` will create a new element only if one did not already existh]j )}(hK``BPF_NOEXIST`` will create a new element only if one did not already existh](j)}(h``BPF_NOEXIST``h]h BPF_NOEXIST}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1jhjeubh< will create a new element only if one did not already exist}(hjehhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhK>hjaubah}(h]h ]h"]h$]h&]uh1jhj9hhhhhNubj)}(h.``BPF_EXIST`` will update an existing element h]j )}(h-``BPF_EXIST`` will update an existing elementh](j)}(h ``BPF_EXIST``h]h BPF_EXIST}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh will update an existing element}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhK@hjubah}(h]h ]h"]h$]h&]uh1jhj9hhhhhNubeh}(h]h ]h"]h$]h&]jjuh1hhhhK=hjhhubj )}(hU``bpf_map_update_elem()`` returns 0 on success, or negative error in case of failure.h](j)}(h``bpf_map_update_elem()``h]hbpf_map_update_elem()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh< returns 0 on success, or negative error in case of failure.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKBhjhhubeh}(h]bpf-map-update-elemah ]h"]bpf_map_update_elem()ah$]h&]uh1hhjhhhhhK3ubh)}(hhh](h)}(hbpf_map_lookup_elem()h]hbpf_map_lookup_elem()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKFubj)}(h?void *bpf_map_lookup_elem(struct bpf_map *map, const void *key)h]h?void *bpf_map_lookup_elem(struct bpf_map *map, const void *key)}hjsbah}(h]h ]h"]h$]h&]hhjjjj}uh1jhhhKHhjhhubj )}(hHash entries can be retrieved using the ``bpf_map_lookup_elem()`` helper. 
This helper returns a pointer to the value associated with ``key``, or ``NULL`` if no entry was found.h](h(Hash entries can be retrieved using the }(hjhhhNhNubj)}(h``bpf_map_lookup_elem()``h]hbpf_map_lookup_elem()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhD helper. This helper returns a pointer to the value associated with }(hjhhhNhNubj)}(h``key``h]hkey}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, or }(hjhhhNhNubj)}(h``NULL``h]hNULL}(hj#hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if no entry was found.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKLhjhhubeh}(h]bpf-map-lookup-elemah ]h"]bpf_map_lookup_elem()ah$]h&]uh1hhjhhhhhKFubh)}(hhh](h)}(hbpf_map_delete_elem()h]hbpf_map_delete_elem()}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjChhhhhKQubj)}(h>long bpf_map_delete_elem(struct bpf_map *map, const void *key)h]h>long bpf_map_delete_elem(struct bpf_map *map, const void *key)}hjTsbah}(h]h ]h"]h$]h&]hhjjjj}uh1jhhhKShjChhubj )}(hHash entries can be deleted using the ``bpf_map_delete_elem()`` helper. This helper will return 0 on success, or negative error in case of failure.h](h&Hash entries can be deleted using the }(hjchhhNhNubj)}(h``bpf_map_delete_elem()``h]hbpf_map_delete_elem()}(hjkhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubhT helper. 
This helper will return 0 on success, or negative error in case of failure.}(hjchhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKWhjChhubeh}(h]bpf-map-delete-elemah ]h"]bpf_map_delete_elem()ah$]h&]uh1hhjhhhhhKQubeh}(h] kernel-bpfah ]h"] kernel bpfah$]h&]uh1hhjhhhhhK0ubh)}(hhh](h)}(hPer CPU Hashesh]hPer CPU Hashes}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhK\ubj )}(hFor ``BPF_MAP_TYPE_PERCPU_HASH`` and ``BPF_MAP_TYPE_LRU_PERCPU_HASH`` the ``bpf_map_update_elem()`` and ``bpf_map_lookup_elem()`` helpers automatically access the hash slot for the current CPU.h](hFor }(hjhhhNhNubj)}(h``BPF_MAP_TYPE_PERCPU_HASH``h]hBPF_MAP_TYPE_PERCPU_HASH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and }(hjhhhNhNubj)}(h ``BPF_MAP_TYPE_LRU_PERCPU_HASH``h]hBPF_MAP_TYPE_LRU_PERCPU_HASH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh the }(hjhhhNhNubj)}(h``bpf_map_update_elem()``h]hbpf_map_update_elem()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and }hjsbj)}(h``bpf_map_lookup_elem()``h]hbpf_map_lookup_elem()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh@ helpers automatically access the hash slot for the current CPU.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhK^hjhhubh)}(hhh](h)}(hbpf_map_lookup_percpu_elem()h]hbpf_map_lookup_percpu_elem()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKcubj)}(hOvoid *bpf_map_lookup_percpu_elem(struct bpf_map *map, const void *key, u32 cpu)h]hOvoid *bpf_map_lookup_percpu_elem(struct bpf_map *map, const void *key, u32 cpu)}hj sbah}(h]h ]h"]h$]h&]hhjjjj}uh1jhhhKehjhhubj )}(hThe ``bpf_map_lookup_percpu_elem()`` helper can be used to look up the value in the hash slot for a specific CPU. Returns the value associated with ``key`` on ``cpu``, or ``NULL`` if no entry was found or ``cpu`` is invalid.h](hThe }(hjhhhNhNubj)}(h ``bpf_map_lookup_percpu_elem()``h]hbpf_map_lookup_percpu_elem()}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubhk helper can be used to look up the value in the hash slot for a specific CPU. 
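Building on the signature above, a BPF program can aggregate one key's value across all CPUs. This is a sketch only: ``MAX_CPUS``, ``packet_stats``, ``struct key`` and ``struct value`` are illustrative names, and the loop bound must be a constant the verifier can reason about:

```c
/* Sketch: sum a per-CPU counter across CPUs from BPF program context.
 * MAX_CPUS, packet_stats, struct key and struct value are illustrative. */
#define MAX_CPUS 128

static __u64 total_packets(struct key *key)
{
    __u64 total = 0;

    for (__u32 cpu = 0; cpu < MAX_CPUS; cpu++) {
        struct value *v;

        v = bpf_map_lookup_percpu_elem(&packet_stats, key, cpu);
        if (!v)
            break; /* key absent, or cpu beyond the last possible CPU */
        total += v->packets;
    }
    return total;
}
```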
Returns the value associated with }(hjhhhNhNubj)}(h``key``h]hkey}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh on }(hjhhhNhNubj)}(h``cpu``h]hcpu}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh, or }(hjhhhNhNubj)}(h``NULL``h]hNULL}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh if no entry was found or }(hjhhhNhNubj)}(h``cpu``h]hcpu}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh is invalid.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKihjhhubeh}(h]bpf-map-lookup-percpu-elemah ]h"]bpf_map_lookup_percpu_elem()ah$]h&]uh1hhjhhhhhKcubeh}(h]per-cpu-hashesah ]h"]per cpu hashesah$]h&]uh1hhjhhhhhK\ubh)}(hhh](h)}(h Concurrencyh]h Concurrency}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKoubj )}(hXValues stored in ``BPF_MAP_TYPE_HASH`` can be accessed concurrently by programs running on different CPUs. Since kernel version 5.1, the BPF infrastructure provides ``struct bpf_spin_lock`` to synchronise access. See ``tools/testing/selftests/bpf/progs/test_spin_lock.c``.h](hValues stored in }(hjhhhNhNubj)}(h``BPF_MAP_TYPE_HASH``h]hBPF_MAP_TYPE_HASH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh can be accessed concurrently by programs running on different CPUs. Since kernel version 5.1, the BPF infrastructure provides }(hjhhhNhNubj)}(h``struct bpf_spin_lock``h]hstruct bpf_spin_lock}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh to synchronise access. 
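The usual pattern embeds the lock in the map value itself and wraps the read-modify-write section with the ``bpf_spin_lock()``/``bpf_spin_unlock()`` helpers. The sketch below uses illustrative struct and function names:

```c
/* Sketch: a map value protected by a bpf_spin_lock; counter_val and
 * bump are illustrative names. The lock must live inside the value. */
struct counter_val {
    struct bpf_spin_lock lock;
    __u64 count;
};

/* `val` would be obtained from bpf_map_lookup_elem() in a BPF program. */
static void bump(struct counter_val *val)
{
    bpf_spin_lock(&val->lock);
    val->count++;
    bpf_spin_unlock(&val->lock);
}
```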
See }(hjhhhNhNubj)}(h6``tools/testing/selftests/bpf/progs/test_spin_lock.c``h]h2tools/testing/selftests/bpf/progs/test_spin_lock.c}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKqhjhhubeh}(h] concurrencyah ]h"] concurrencyah$]h&]uh1hhjhhhhhKoubh)}(hhh](h)}(h Userspaceh]h Userspace}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKwubh)}(hhh](h)}(hbpf_map_get_next_key()h]hbpf_map_get_next_key()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKzubj)}(hEint bpf_map_get_next_key(int fd, const void *cur_key, void *next_key)h]hEint bpf_map_get_next_key(int fd, const void *cur_key, void *next_key)}hjsbah}(h]h ]h"]h$]h&]hhjjjj}uh1jhhhK|hjhhubj )}(hXIn userspace, it is possible to iterate through the keys of a hash using libbpf's ``bpf_map_get_next_key()`` function. The first key can be fetched by calling ``bpf_map_get_next_key()`` with ``cur_key`` set to ``NULL``. Subsequent calls will fetch the next key that follows the current key. ``bpf_map_get_next_key()`` returns 0 on success, -ENOENT if cur_key is the last key in the hash, or negative error in case of failure.h](hTIn userspace, it is possible to iterate through the keys of a hash using libbpf’s }(hj hhhNhNubj)}(h``bpf_map_get_next_key()``h]hbpf_map_get_next_key()}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh3 function. The first key can be fetched by calling }(hj hhhNhNubj)}(h``bpf_map_get_next_key()``h]hbpf_map_get_next_key()}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh with }(hj hhhNhNubj)}(h ``cur_key``h]hcur_key}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh set to }(hj hhhNhNubj)}(h``NULL``h]hNULL}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhI. Subsequent calls will fetch the next key that follows the current key. 
}(hj hhhNhNubj)}(h``bpf_map_get_next_key()``h]hbpf_map_get_next_key()}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhl returns 0 on success, -ENOENT if cur_key is the last key in the hash, or negative error in case of failure.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKhjhhubj )}(hX Note that if ``cur_key`` gets deleted then ``bpf_map_get_next_key()`` will instead return the *first* key in the hash table which is undesirable. It is recommended to use batched lookup if there is going to be key deletion intermixed with ``bpf_map_get_next_key()``.h](h Note that if }(hjhhhNhNubj)}(h ``cur_key``h]hcur_key}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh gets deleted then }(hjhhhNhNubj)}(h``bpf_map_get_next_key()``h]hbpf_map_get_next_key()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh will instead return the }(hjhhhNhNubhemphasis)}(h*first*h]hfirst}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh key in the hash table which is undesirable. It is recommended to use batched lookup if there is going to be key deletion intermixed with }(hjhhhNhNubj)}(h``bpf_map_get_next_key()``h]hbpf_map_get_next_key()}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKhjhhubeh}(h]bpf-map-get-next-keyah ]h"]bpf_map_get_next_key()ah$]h&]uh1hhjhhhhhKzubeh}(h] userspaceah ]h"] userspaceah$]h&]uh1hhjhhhhhKwubeh}(h]usageah ]h"]usageah$]h&]uh1hhhhhhhhK-ubh)}(hhh](h)}(hExamplesh]hExamples}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKubj )}(hPlease see the ``tools/testing/selftests/bpf`` directory for functional examples. The code snippets below demonstrates API usage.h](hPlease see the }(hj hhhNhNubj)}(h``tools/testing/selftests/bpf``h]htools/testing/selftests/bpf}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhT directory for functional examples. 
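As noted above, batched lookup avoids the iteration restart problem when keys are deleted concurrently. A hedged userspace sketch using libbpf's ``bpf_map_lookup_batch()`` follows — it assumes libbpf >= 1.0 (negative errno return style) and the ``struct key``/``struct value`` pair from the declaration example, with error handling trimmed:

```c
/* Sketch: batched lookup with libbpf. BATCH size is illustrative. */
#include <errno.h>
#include <bpf/bpf.h>

#define BATCH 16

static void walk_batched(int map_fd)
{
    struct key keys[BATCH];
    struct value values[BATCH];
    __u32 out_batch, count;
    void *in_batch = NULL; /* NULL starts from the beginning of the map */
    int err;

    do {
        count = BATCH;
        err = bpf_map_lookup_batch(map_fd, in_batch, &out_batch,
                                   keys, values, &count, NULL);
        if (err && err != -ENOENT)
            break;
        /* process `count` key/value pairs here */
        in_batch = &out_batch;
    } while (!err); /* -ENOENT marks the end of the map */
}
```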
The code snippets below demonstrate API usage.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKhjhhubj )}(hSThis example shows how to declare an LRU Hash with a struct key and a struct value.h]hSThis example shows how to declare an LRU Hash with a struct key and a struct value.}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhKhjhhubj)}(hXG#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct key {
    __u32 srcip;
};

struct value {
    __u64 packets;
    __u64 bytes;
};

struct {
    __uint(type, BPF_MAP_TYPE_LRU_HASH);
    __uint(max_entries, 32);
    __type(key, struct key);
    __type(value, struct value);
} packet_stats SEC(".maps");h]hXG#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct key {
    __u32 srcip;
};

struct value {
    __u64 packets;
    __u64 bytes;
};

struct {
    __uint(type, BPF_MAP_TYPE_LRU_HASH);
    __uint(max_entries, 32);
    __type(key, struct key);
    __type(value, struct value);
} packet_stats SEC(".maps");}hj7sbah}(h]h ]h"]h$]h&]hhjjjj}uh1jhhhKhjhhubj )}(hQThis example shows how to create or update hash values using atomic instructions:h]hQThis example shows how to create or update hash values using atomic instructions:}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhKhjhhubj)}(hXstatic void update_stats(__u32 srcip, int bytes)
{
    struct key key = {
        .srcip = srcip,
    };
    struct value *value = bpf_map_lookup_elem(&packet_stats, &key);

    if (value) {
        __sync_fetch_and_add(&value->packets, 1);
        __sync_fetch_and_add(&value->bytes, bytes);
    } else {
        struct value newval = { 1, bytes };

        bpf_map_update_elem(&packet_stats, &key, &newval, BPF_NOEXIST);
    }
}h]hXstatic void update_stats(__u32 srcip, int bytes)
{
    struct key key = {
        .srcip = srcip,
    };
    struct value *value = bpf_map_lookup_elem(&packet_stats, &key);

    if (value) {
        __sync_fetch_and_add(&value->packets, 1);
        __sync_fetch_and_add(&value->bytes, bytes);
    } else {
        struct value newval = { 1, bytes };

        bpf_map_update_elem(&packet_stats, &key, &newval, BPF_NOEXIST);
    }
}}hjTsbah}(h]h ]h"]h$]h&]hhjjjj}uh1jhhhKhjhhubj )}(h?Userspace walking the map elements from the map declared above:h]h?Userspace walking the map elements from the map declared 
above:}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhKhjhhubj)}(hX#include <bpf/libbpf.h>
#include <bpf/bpf.h>

static void walk_hash_elements(int map_fd)
{
    struct key *cur_key = NULL;
    struct key next_key;
    struct value value;
    int err;

    for (;;) {
        err = bpf_map_get_next_key(map_fd, cur_key, &next_key);
        if (err)
            break;

        bpf_map_lookup_elem(map_fd, &next_key, &value);

        // Use key and value here

        cur_key = &next_key;
    }
}h]hX#include <bpf/libbpf.h>
#include <bpf/bpf.h>

static void walk_hash_elements(int map_fd)
{
    struct key *cur_key = NULL;
    struct key next_key;
    struct value value;
    int err;

    for (;;) {
        err = bpf_map_get_next_key(map_fd, cur_key, &next_key);
        if (err)
            break;

        bpf_map_lookup_elem(map_fd, &next_key, &value);

        // Use key and value here

        cur_key = &next_key;
    }
}}hjqsbah}(h]h ]h"]h$]h&]hhjjjj}uh1jhhhKhjhhubeh}(h]examplesah ]h"]examplesah$]h&]uh1hhhhhhhhKubh)}(hhh](h)}(h Internalsh]h Internals}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhKubj )}(hThis section of the document is targeted at Linux developers and describes aspects of the map implementations that are not considered stable ABI. The following details are subject to change in future versions of the kernel.h]hThis section of the document is targeted at Linux developers and describes aspects of the map implementations that are not considered stable ABI. The following details are subject to change in future versions of the kernel.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhKhjhhubh)}(hhh](h)}(h&``BPF_MAP_TYPE_LRU_HASH`` and variantsh](j)}(h``BPF_MAP_TYPE_LRU_HASH``h]hBPF_MAP_TYPE_LRU_HASH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh and variants}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhjhhhhhKubj )}(hXUpdating elements in LRU maps may trigger eviction behaviour when the capacity of the map is reached. There are various steps that the update algorithm attempts in order to enforce the LRU property which have increasing impacts on other CPUs involved in the following operation attempts:h]hXUpdating elements in LRU maps may trigger eviction behaviour when the capacity of the map is reached. 
There are various steps that the update algorithm attempts in order to enforce the LRU property which have increasing impacts on other CPUs involved in the following operation attempts:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhKhjhhubj)}(hhh](j)}(h2Attempt to use CPU-local state to batch operationsh]j )}(hjh]h2Attempt to use CPU-local state to batch operations}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhKhjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(h=Attempt to fetch ``target_free`` free nodes from global listsh]j )}(hjh](hAttempt to fetch }(hjhhhNhNubj)}(h``target_free``h]h target_free}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubh free nodes from global lists}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKhjubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hJAttempt to pull any node from a global list and remove it from the hashmaph]j )}(hj h]hJAttempt to pull any node from a global list and remove it from the hashmap}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhKhj ubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubj)}(hLAttempt to pull any node from any CPU's list and remove it from the hashmap h]j )}(hKAttempt to pull any node from any CPU's list and remove it from the hashmaph]hMAttempt to pull any node from any CPU’s list and remove it from the hashmap}(hj2 hhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhKhj. ubah}(h]h ]h"]h$]h&]uh1jhjhhhhhNubeh}(h]h ]h"]h$]h&]jjuh1hhhhKhjhhubj )}(hXThe number of nodes to borrow from the global list in a batch, ``target_free``, depends on the size of the map. Larger batch size reduces lock contention, but may also exhaust the global structure. The value is computed at map init to avoid exhaustion, by limiting aggregate reservation by all CPUs to half the map size. With a minimum of a single element and maximum budget of 128 at a time.h](h?The number of nodes to borrow from the global list in a batch, }(hjL hhhNhNubj)}(h``target_free``h]h target_free}(hjT hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjL ubhX:, depends on the size of the map. 
Larger batch size reduces lock contention, but may also exhaust the global structure. The value is computed at map init to avoid exhaustion, by limiting aggregate reservation by all CPUs to half the map size, with a minimum of a single element and a maximum budget of 128 at a time.}(hjL hhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKhjhhubj )}(hThis algorithm is described visually in the following diagram. See the description in commit 3a08c2fd7634 ("bpf: LRU List") for a full explanation of the corresponding operations:h]hThis algorithm is described visually in the following diagram. See the description in commit 3a08c2fd7634 (“bpf: LRU List”) for a full explanation of the corresponding operations:}(hjl hhhNhNubah}(h]h ]h"]h$]h&]uh1j hhhKhjhhubkfigure kernel_figure)}(hhh]hfigure)}(hhh](himage)}(hX.. kernel-figure:: map_lru_hash_update.dot :alt: Diagram outlining the LRU eviction steps taken during map update. LRU hash eviction during map update for ``BPF_MAP_TYPE_LRU_HASH`` and variants. See the dot file source for kernel function name code references. h]h}(h]h ]h"]h$]h&]altADiagram outlining the LRU eviction steps taken during map update.uribpf/map_lru_hash_update.dot candidates}*j suh1j hj hhhKubhcaption)}(hLRU hash eviction during map update for ``BPF_MAP_TYPE_LRU_HASH`` and variants. See the dot file source for kernel function name code references.h](h(LRU hash eviction during map update for }(hj hhhNhNubj)}(h``BPF_MAP_TYPE_LRU_HASH``h]hBPF_MAP_TYPE_LRU_HASH}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhP and variants. See the dot file source for kernel function name code references.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhKhj ubeh}(h]id1ah ]h"]h$]h&]uh1j hj} ubah}(h]h ]h"]h$]h&]uh1j{ hjhhhhhNubj )}(hXMap updates start from the oval in the top right "begin ``bpf_map_update()``" and progress through the graph towards the bottom where the result may be either a successful update or a failure with various error codes. 
The key in the top right provides indicators for which locks may be involved in specific operations. This is intended as a visual hint for reasoning about how map contention may impact update operations, though the map type and flags may impact the actual contention on those locks, based on the logic described in the table above. For instance, if the map is created with type ``BPF_MAP_TYPE_LRU_PERCPU_HASH`` and flags ``BPF_F_NO_COMMON_LRU`` then all map properties would be per-cpu.h](h:Map updates start from the oval in the top right “begin }(hj hhhNhNubj)}(h``bpf_map_update()``h]hbpf_map_update()}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubhX ” and progress through the graph towards the bottom where the result may be either a successful update or a failure with various error codes. The key in the top right provides indicators for which locks may be involved in specific operations. This is intended as a visual hint for reasoning about how map contention may impact update operations, though the map type and flags may impact the actual contention on those locks, based on the logic described in the table above. 
For instance, if the map is created with type }(hj hhhNhNubj)}(h ``BPF_MAP_TYPE_LRU_PERCPU_HASH``h]hBPF_MAP_TYPE_LRU_PERCPU_HASH}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh and flags }(hj hhhNhNubj)}(h``BPF_F_NO_COMMON_LRU``h]hBPF_F_NO_COMMON_LRU}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubh* then all map properties would be per-cpu.}(hj hhhNhNubeh}(h]h ]h"]h$]h&]uh1j hhhMhjhhubeh}(h]"bpf-map-type-lru-hash-and-variantsah ]h"]"bpf_map_type_lru_hash and variantsah$]h&]uh1hhjhhhhhKubeh}(h] internalsah ]h"] internalsah$]h&]uh1hhhhhhhhKubeh}(h].bpf-map-type-hash-with-percpu-and-lru-variantsah ]h"]/bpf_map_type_hash, with percpu and lru variantsah$]h&]uh1hhhhhhhhKubeh}(h]h ]h"]h$]h&]sourcehuh1hcurrent_sourceN current_lineNsettingsdocutils.frontendValues)}(hN generatorN datestampN source_linkN source_urlN toc_backlinksjfootnote_backlinksK sectnum_xformKstrip_commentsNstrip_elements_with_classesN strip_classesN report_levelK halt_levelKexit_status_levelKdebugNwarning_streamN tracebackinput_encoding utf-8-siginput_encoding_error_handlerstrictoutput_encodingutf-8output_encoding_error_handlerjE error_encodingutf-8error_encoding_error_handlerbackslashreplace language_codeenrecord_dependenciesNconfigN id_prefixhauto_id_prefixid dump_settingsNdump_internalsNdump_transformsNdump_pseudo_xmlNexpose_internalsNstrict_visitorN_disable_configN_sourcehnj _destinationN _config_files]7/var/lib/git/docbuild/linux/Documentation/docutils.confafile_insertion_enabled raw_enabledKline_length_limitM'pep_referencesN pep_base_urlhttps://peps.python.org/pep_file_url_templatepep-%04drfc_referencesN rfc_base_url&https://datatracker.ietf.org/doc/html/ tab_widthKtrim_footnote_reference_spacesyntax_highlightlong smart_quotessmartquotes_locales]character_level_inline_markupdoctitle_xform docinfo_xformKsectsubtitle_xform image_loadinglinkembed_stylesheetcloak_email_addressessection_self_linkenvNubreporterNindirect_targets]substitution_defs}substitution_names}refnames}refids}nameids}(j j jjjjjjj@j=jjjjjjjjjjjjjjj 
j j j u nametypes}(j jjjj@jjjjjjjj j uh}(j hjjjjjjj=jjjCjjjjjjjjjjjjj jj jj j u footnote_refs} citation_refs} autofootnotes]autofootnote_refs]symbol_footnotes]symbol_footnote_refs] footnotes] citations]autofootnote_startKsymbol_footnote_startK id_counter collectionsCounter}jS KsRparse_messages]transform_messages] transformerN include_log] decorationNhhub.