Kernel-provided User Helpers
============================

These are segments of kernel-provided user code reachable from user space
at a fixed address in kernel memory.  They are used to provide user space
with some operations which require kernel help because of unimplemented
native features and/or instructions in many ARM CPUs.  The idea is for this
code to be executed directly in user mode for best efficiency, but it is
too intimate with the kernel counterpart to be left to user libraries.  In
fact this code might even differ from one CPU to another depending on the
available instruction set, or whether the system is SMP.  In other words,
the kernel reserves the right to change this code as needed without
warning.  Only the entry points and their results as documented here are
guaranteed to be stable.

This is different from (but doesn't preclude) a full-blown VDSO
implementation; however, a VDSO would prevent the assembly tricks with
constants that allow for efficient branching to those code segments.  And
since those code segments only use a few cycles before returning to user
code, the overhead of a VDSO indirect far call would add a measurable
overhead to such minimalistic operations.

User space is expected to bypass those helpers and implement these
operations inline (either in code emitted directly by the compiler, or as
part of the implementation of a library call) when optimizing for a recent
enough processor that has the necessary native support, but only if the
resulting binaries are already going to be incompatible with earlier ARM
processors due to the use of similar native instructions for other things.
In other words, don't make binaries unable to run on earlier processors
just for the sake of avoiding these kernel helpers if your compiled code is
not going to use new instructions for other purposes.
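As a hedged illustration of such compile-time selection (this sketch assumes
a GCC or Clang toolchain, which predefines __ARM_ARCH on ARM targets and
provides the __sync builtins; the fallback path uses the kuser_cmpxchg entry
documented below)::

  typedef int (__kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
  #define __kuser_cmpxchg (*(__kuser_cmpxchg_t *)0xffff0fc0)

  /* Returns 0 on success, non-zero if no exchange happened, matching
     the kernel helper's convention.  Illustrative sketch only. */
  static int cmpxchg32(volatile int *ptr, int oldval, int newval)
  {
  #if defined(__ARM_ARCH) && __ARM_ARCH >= 6
          /* ARMv6+ has exclusive load/store; let the compiler emit it. */
          return !__sync_bool_compare_and_swap(ptr, oldval, newval);
  #else
          /* Older CPUs: fall back to the kernel-provided helper. */
          return __kuser_cmpxchg(oldval, newval, ptr);
  #endif
  }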
New helpers may be added over time, so an older kernel may be missing some
helpers present in a newer kernel.  For this reason, programs must check
the value of __kuser_helper_version (see below) before assuming that it is
safe to call any particular helper.  This check should ideally be performed
only once at process startup time, and execution aborted early if the
required helpers are not provided by the kernel version that process is
running on.

kuser_helper_version
--------------------

Location: 0xffff0ffc

Reference declaration::

  extern int32_t __kuser_helper_version;

Definition:

  This field contains the number of helpers being implemented by the
  running kernel.  User space may read this to determine the availability
  of a particular helper.

Usage example::

  #define __kuser_helper_version (*(int32_t *)0xffff0ffc)

  void check_kuser_version(void)
  {
          if (__kuser_helper_version < 2) {
                  fprintf(stderr, "can't do atomic operations, kernel too old\n");
                  abort();
          }
  }

Notes:

  User space may assume that the value of this field never changes
  during the lifetime of any single process.  This means that this
  field can be read once during the initialisation of a library or
  startup phase of a program.
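A minimal sketch of reading the version once at program startup, as the note
above suggests (the constructor attribute and the cached variable are
illustrative assumptions, not part of the kernel interface)::

  #define __kuser_helper_version (*(int32_t *)0xffff0ffc)

  static int32_t kuser_version;   /* cached once at startup */

  __attribute__((constructor))
  static void init_kuser_version(void)
  {
          kuser_version = __kuser_helper_version;
          if (kuser_version < 2) {
                  fprintf(stderr, "can't do atomic operations, kernel too old\n");
                  abort();
          }
  }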
kuser_get_tls
-------------

Location: 0xffff0fe0

Reference prototype::

  void * __kuser_get_tls(void);

Input:

  lr = return address

Output:

  r0 = TLS value

Clobbered registers:

  none

Definition:

  Get the TLS value as previously set via the __ARM_NR_set_tls syscall.

Usage example::

  typedef void * (__kuser_get_tls_t)(void);
  #define __kuser_get_tls (*(__kuser_get_tls_t *)0xffff0fe0)

  void foo()
  {
          void *tls = __kuser_get_tls();
          printf("TLS = %p\n", tls);
  }

Notes:

  - Valid only if __kuser_helper_version >= 1 (from kernel version 2.6.12).
kuser_cmpxchg
-------------

Location: 0xffff0fc0

Reference prototype::

  int __kuser_cmpxchg(int32_t oldval, int32_t newval, volatile int32_t *ptr);

Input:

  r0 = oldval
  r1 = newval
  r2 = ptr
  lr = return address

Output:

  r0 = success code (zero or non-zero)
  C flag = set if r0 == 0, clear if r0 != 0

Clobbered registers:

  r3, ip, flags

Definition:

  Atomically store newval in `*ptr` only if `*ptr` is equal to oldval.
  Return zero if `*ptr` was changed or non-zero if no exchange happened.
  The C flag is also set if `*ptr` was changed to allow for assembly
  optimization in the calling code.

Usage example::

  typedef int (__kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
  #define __kuser_cmpxchg (*(__kuser_cmpxchg_t *)0xffff0fc0)

  int atomic_add(volatile int *ptr, int val)
  {
          int old, new;

          do {
                  old = *ptr;
                  new = old + val;
          } while(__kuser_cmpxchg(old, new, ptr));

          return new;
  }

Notes:

  - This routine already includes memory barriers as needed.

  - Valid only if __kuser_helper_version >= 2 (from kernel version 2.6.12).
kuser_memory_barrier
--------------------

Location: 0xffff0fa0

Reference prototype::

  void __kuser_memory_barrier(void);

Input:

  lr = return address

Output:

  none

Clobbered registers:

  none

Definition:

  Apply any needed memory barrier to preserve consistency with data
  modified manually and __kuser_cmpxchg usage.

Usage example::

  typedef void (__kuser_dmb_t)(void);
  #define __kuser_dmb (*(__kuser_dmb_t *)0xffff0fa0)

Notes:

  - Valid only if __kuser_helper_version >= 3 (from kernel version 2.6.15).
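As a hedged illustration, here is a minimal spinlock sketch that acquires
with __kuser_cmpxchg (which already includes the needed barriers) and uses
__kuser_dmb to order the critical section before the releasing store; the
lock representation and function names are illustrative, not part of the
kernel interface::

  typedef int (__kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
  #define __kuser_cmpxchg (*(__kuser_cmpxchg_t *)0xffff0fc0)
  typedef void (__kuser_dmb_t)(void);
  #define __kuser_dmb (*(__kuser_dmb_t *)0xffff0fa0)

  void spin_lock(volatile int *lock)
  {
          /* 0 -> 1 transition; __kuser_cmpxchg returns 0 on success */
          while (__kuser_cmpxchg(0, 1, lock))
                  ; /* busy-wait; a real lock would back off or sleep */
  }

  void spin_unlock(volatile int *lock)
  {
          __kuser_dmb();  /* order critical-section writes before release */
          *lock = 0;
  }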
kuser_cmpxchg64
---------------

Location: 0xffff0f60

Reference prototype::

  int __kuser_cmpxchg64(const int64_t *oldval,
                        const int64_t *newval,
                        volatile int64_t *ptr);

Input:

  r0 = pointer to oldval
  r1 = pointer to newval
  r2 = pointer to target value
  lr = return address

Output:

  r0 = success code (zero or non-zero)
  C flag = set if r0 == 0, clear if r0 != 0

Clobbered registers:

  r3, lr, flags

Definition:

  Atomically store the 64-bit value pointed by `*newval` in `*ptr` only if
  `*ptr` is equal to the 64-bit value pointed by `*oldval`.  Return zero if
  `*ptr` was changed or non-zero if no exchange happened.

  The C flag is also set if `*ptr` was changed to allow for assembly
  optimization in the calling code.
Usage example::

  typedef int (__kuser_cmpxchg64_t)(const int64_t *oldval,
                                    const int64_t *newval,
                                    volatile int64_t *ptr);
  #define __kuser_cmpxchg64 (*(__kuser_cmpxchg64_t *)0xffff0f60)

  int64_t atomic_add64(volatile int64_t *ptr, int64_t val)
  {
          int64_t old, new;

          do {
                  old = *ptr;
                  new = old + val;
          } while(__kuser_cmpxchg64(&old, &new, ptr));

          return new;
  }

Notes:

  - This routine already includes memory barriers as needed.

  - Due to the length of this sequence, this spans 2 conventional kuser
    "slots", therefore 0xffff0f80 is not used as a valid entry point.
  - Valid only if __kuser_helper_version >= 5 (from kernel version 3.1).