.. _kernel_hacking_lock:

===========================
Unreliable Guide To Locking
===========================

:Author: Rusty Russell

Introduction
============

Welcome to Rusty's Remarkably Unreliable Guide to Kernel Locking
issues. This document describes the locking systems in the Linux Kernel
in 2.6.

With the wide availability of HyperThreading, and preemption in the
Linux Kernel, everyone hacking on the kernel needs to know the
fundamentals of concurrency and locking for SMP.

The Problem With Concurrency
============================

(Skip this if you know what a Race Condition is).

In a normal program, you can increment a counter like so::

          very_important_count++;

This is what they would expect to happen:

.. table:: Expected Results

  +------------------------------------+------------------------------------+
  | Instance 1                         | Instance 2                         |
  +====================================+====================================+
  | read very_important_count (5)      |                                    |
  +------------------------------------+------------------------------------+
  | add 1 (6)                          |                                    |
  +------------------------------------+------------------------------------+
  | write very_important_count (6)     |                                    |
  +------------------------------------+------------------------------------+
  |                                    | read very_important_count (6)      |
  +------------------------------------+------------------------------------+
  |                                    | add 1 (7)                          |
  +------------------------------------+------------------------------------+
  |                                    | write very_important_count (7)     |
  +------------------------------------+------------------------------------+

This is what might happen:

.. table:: Possible Results

  +------------------------------------+------------------------------------+
  | Instance 1                         | Instance 2                         |
  +====================================+====================================+
  | read very_important_count (5)      |                                    |
  +------------------------------------+------------------------------------+
  |                                    | read very_important_count (5)      |
  +------------------------------------+------------------------------------+
  | add 1 (6)                          |                                    |
  +------------------------------------+------------------------------------+
  |                                    | add 1 (6)                          |
  +------------------------------------+------------------------------------+
  | write very_important_count (6)     |                                    |
  +------------------------------------+------------------------------------+
  |                                    | write very_important_count (6)     |
  +------------------------------------+------------------------------------+

Race Conditions and Critical Regions
------------------------------------

This overlap, where the result depends on the relative timing of
multiple tasks, is called a race condition. The piece of code containing
the concurrency issue is called a critical region. And especially since
Linux started running on SMP machines, they became one of the major
issues in kernel design and implementation.

Preemption can have the same effect, even if there is only one CPU: by
preempting one task during the critical region, we have exactly the same
race condition. In this case the thread which preempts might run the
critical region itself.

The solution is to recognize when these simultaneous accesses occur, and
use locks to make sure that only one instance can enter the critical
region at any time. There are many friendly primitives in the Linux
kernel to help you do this. And then there are the unfriendly
primitives, but I'll pretend they don't exist.
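The lost-update interleaving shown in the tables above is easy to
reproduce with ordinary code. Below is a minimal userspace sketch
(POSIX threads standing in for kernel contexts; the names ``worker``
and ``run_counter_test`` are invented for illustration) where a mutex
turns the read-modify-write into a proper critical region. Delete the
lock/unlock pair and the final count may come up short::

```c
#include <pthread.h>

#define NTHREADS 4
#define NITERS   100000

static long very_important_count;
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;

/* Without the lock, the increment below is exactly the
 * read-modify-write critical region from the tables above,
 * and concurrent updates can be lost. */
static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < NITERS; i++) {
		pthread_mutex_lock(&count_lock);
		very_important_count++;	/* critical region */
		pthread_mutex_unlock(&count_lock);
	}
	return NULL;
}

/* Run NTHREADS increment loops concurrently; return the final count. */
long run_counter_test(void)
{
	pthread_t tids[NTHREADS];

	very_important_count = 0;
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);
	return very_important_count;
}
```

With the mutex held around the increment, the result is always exactly
NTHREADS * NITERS; without it, anything up to that value is possible.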
Locking in the Linux Kernel
===========================

If I could give you one piece of advice on locking: **keep it simple**.

Be reluctant to introduce new locks.

Two Main Types of Kernel Locks: Spinlocks and Mutexes
-----------------------------------------------------

There are two main types of kernel locks. The fundamental type is the
spinlock (``include/asm/spinlock.h``), which is a very simple
single-holder lock: if you can't get the spinlock, you keep trying
(spinning) until you can. Spinlocks are very small and fast, and can be
used anywhere.

The second type is a mutex (``include/linux/mutex.h``): it is like a
spinlock, but you may block holding a mutex. If you can't lock a mutex,
your task will suspend itself, and be woken up when the mutex is
released. This means the CPU can do something else while you are
waiting. There are many cases when you simply can't sleep (see `What
Functions Are Safe To Call From Interrupts?`_), and so have to use a
spinlock instead.

Neither type of lock is recursive: see
`Deadlock: Simple and Advanced`_.

Locks and Uniprocessor Kernels
------------------------------

For kernels compiled without ``CONFIG_SMP``, and without
``CONFIG_PREEMPT`` spinlocks do not exist at all. This is an excellent
design decision: when no-one else can run at the same time, there is no
reason to have a lock.

If the kernel is compiled without ``CONFIG_SMP``, but ``CONFIG_PREEMPT``
is set, then spinlocks simply disable preemption, which is sufficient to
prevent any races. For most purposes, we can think of preemption as
equivalent to SMP, and not worry about it separately.

You should always test your locking code with ``CONFIG_SMP`` and
``CONFIG_PREEMPT`` enabled, even if you don't have an SMP test box,
because it will still catch some kinds of locking bugs.

Mutexes still exist, because they are required for synchronization
between user contexts, as we will see below.
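The "keep trying (spinning) until you can" behaviour of a spinlock can
be sketched in userspace with a C11 ``atomic_flag``. This is only an
analogy with invented names (``toy_spin_lock`` and friends) and assumes
nothing about the real kernel implementation, which also handles
preemption and per-architecture details::

```c
#include <pthread.h>
#include <stdatomic.h>

#define NTHREADS 4
#define NITERS   50000

/* A toy single-holder spinlock: if you can't get it, keep trying
 * (spinning) until you can. */
static atomic_flag toy_lock = ATOMIC_FLAG_INIT;
static long counter;

static void toy_spin_lock(void)
{
	while (atomic_flag_test_and_set_explicit(&toy_lock,
						 memory_order_acquire))
		;	/* spin: burn CPU until the holder releases */
}

static void toy_spin_unlock(void)
{
	atomic_flag_clear_explicit(&toy_lock, memory_order_release);
}

static void *spin_worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < NITERS; i++) {
		toy_spin_lock();
		counter++;	/* keep the critical region tiny */
		toy_spin_unlock();
	}
	return NULL;
}

/* Run NTHREADS spinning incrementers; return the final count. */
long run_spinlock_test(void)
{
	pthread_t tids[NTHREADS];

	counter = 0;
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, spin_worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);
	return counter;
}
```

Because a waiter burns CPU the whole time, spinning only pays off when
the critical region is very short, which is exactly the advice for real
spinlocks.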
Locking Only In User Context
----------------------------

If you have a data structure which is only ever accessed from user
context, then you can use a simple mutex (``include/linux/mutex.h``) to
protect it. This is the most trivial case: you initialize the mutex.
Then you can call mutex_lock_interruptible() to grab the mutex, and
mutex_unlock() to release it. There is also a mutex_lock(), which should
be avoided, because it will not return if a signal is received.

Example: ``net/netfilter/nf_sockopt.c`` allows registration of new
setsockopt() and getsockopt() calls, with nf_register_sockopt().
Registration and de-registration are only done on module load and unload
(and boot time, where there is no concurrency), and the list of
registrations is only consulted for an unknown setsockopt() or
getsockopt() system call. The ``nf_sockopt_mutex`` is perfect to protect
this, especially since the setsockopt and getsockopt calls may well
sleep.
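A userspace sketch of the same pattern, with hypothetical names
(``register_sockopt``, ``sockopt_is_registered``) loosely modelled on
nf_register_sockopt(): one mutex guards both registration and lookup,
and sleeping while holding it would be harmless because every caller is
in process context::

```c
#include <pthread.h>
#include <string.h>

#define MAX_OPS 8

struct sockopt_ops {
	const char *name;
};

static struct sockopt_ops *registered[MAX_OPS];
static int nregistered;
static pthread_mutex_t sockopt_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Register: take the mutex, update the list, release.  It would be
 * fine to sleep under the mutex here, since we are in user context. */
int register_sockopt(struct sockopt_ops *ops)
{
	int ret = -1;

	pthread_mutex_lock(&sockopt_mutex);
	if (nregistered < MAX_OPS) {
		registered[nregistered++] = ops;
		ret = 0;
	}
	pthread_mutex_unlock(&sockopt_mutex);
	return ret;
}

/* Lookups walk the list under the same mutex. */
int sockopt_is_registered(const char *name)
{
	int found = 0;

	pthread_mutex_lock(&sockopt_mutex);
	for (int i = 0; i < nregistered; i++)
		if (strcmp(registered[i]->name, name) == 0)
			found = 1;
	pthread_mutex_unlock(&sockopt_mutex);
	return found;
}
```

One mutex covering every access path is the whole trick: there is
nothing clever to get wrong.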
Locking Between User Context and Softirqs
-----------------------------------------

If a softirq shares data with user context, you have two problems.
Firstly, the current user context can be interrupted by a softirq, and
secondly, the critical region could be entered from another CPU. This is
where spin_lock_bh() (``include/linux/spinlock.h``) is used. It disables
softirqs on that CPU, then grabs the lock. spin_unlock_bh() does the
reverse. (The '_bh' suffix is a historical reference to "Bottom Halves",
the old name for software interrupts. It should really be called
'spin_lock_softirq()' in a perfect world).

Note that you can also use spin_lock_irq() or spin_lock_irqsave() here,
which stop hardware interrupts as well: see `Hard IRQ Context`_.

This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes local_bh_disable() (``include/linux/interrupt.h``),
which protects you from the softirq being run.

Locking Between User Context and Tasklets
-----------------------------------------

This is exactly the same as above, because tasklets are actually run
from a softirq.

Locking Between User Context and Timers
---------------------------------------

This, too, is exactly the same as above, because timers are actually run
from a softirq. From a locking point of view, tasklets and timers are
identical.
Locking Between Tasklets/Timers
-------------------------------

Sometimes a tasklet or timer might want to share data with another
tasklet or timer.

The Same Tasklet/Timer
~~~~~~~~~~~~~~~~~~~~~~

Since a tasklet is never run on two CPUs at once, you don't need to
worry about your tasklet being reentrant (running twice at once), even
on SMP.

Different Tasklets/Timers
~~~~~~~~~~~~~~~~~~~~~~~~~

If another tasklet/timer wants to share data with your tasklet or
timer, you will both need to use spin_lock() and spin_unlock() calls.
spin_lock_bh() is unnecessary here, as you are already in a tasklet, and
none will be run on the same CPU.

Locking Between Softirqs
------------------------

Often a softirq might want to share data with itself or a tasklet/timer.

The Same Softirq
~~~~~~~~~~~~~~~~

The same softirq can run on the other CPUs: you can use a per-CPU array
(see `Per-CPU Data`_) for better performance. If you're going so far as
to use a softirq, you probably care about scalable performance enough to
justify the extra complexity.

You'll need to use spin_lock() and spin_unlock() for shared data.

Different Softirqs
~~~~~~~~~~~~~~~~~~

You'll need to use spin_lock() and spin_unlock() for shared data,
whether it be a timer, tasklet, different softirq or the same or another
softirq: any of them could be running on a different CPU.
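The per-CPU array idea mentioned above can be sketched in userspace
with one counter per thread (all names invented): the hot path touches
only private data, so no lock is needed until the values are summed::

```c
#include <pthread.h>

#define NCTX   4
#define NITERS 100000

/* One counter per context, padded so counters land on separate
 * cache lines and don't bounce between CPUs. */
struct ctx_counter {
	long count;
	char pad[64 - sizeof(long)];
};

static struct ctx_counter counters[NCTX];

static void *ctx_worker(void *arg)
{
	struct ctx_counter *c = arg;

	for (int i = 0; i < NITERS; i++)
		c->count++;	/* private data: no lock needed */
	return NULL;
}

/* Run NCTX contexts, each on its own counter, then sum the results. */
long run_percpu_test(void)
{
	pthread_t tids[NCTX];
	long total = 0;

	for (int i = 0; i < NCTX; i++) {
		counters[i].count = 0;
		pthread_create(&tids[i], NULL, ctx_worker, &counters[i]);
	}
	for (int i = 0; i < NCTX; i++)
		pthread_join(tids[i], NULL);
	for (int i = 0; i < NCTX; i++)
		total += counters[i].count;
	return total;
}
```

The trade-off is that a reader only ever sees an approximate total
while the workers run; the exact sum exists only once they have
finished, which is often acceptable for statistics.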
Hard IRQ Context
================

Hardware interrupts usually communicate with a tasklet or softirq.
Frequently this involves putting work in a queue, which the softirq will
take out.

Locking Between Hard IRQ and Softirqs/Tasklets
----------------------------------------------

If a hardware irq handler shares data with a softirq, you have two
concerns. Firstly, the softirq processing can be interrupted by a
hardware interrupt, and secondly, the critical region could be entered
by a hardware interrupt on another CPU. This is where spin_lock_irq() is
used. It is defined to disable interrupts on that cpu, then grab the
lock. spin_unlock_irq() does the reverse.

The irq handler does not need to use spin_lock_irq(), because the
softirq cannot run while the irq handler is running: it can use
spin_lock(), which is slightly faster. The only exception would be if a
different hardware irq handler uses the same lock: spin_lock_irq() will
stop that from interrupting us.

This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes local_irq_disable() (``include/asm/smp.h``), which
protects you from the softirq/tasklet/BH being run.

spin_lock_irqsave() (``include/linux/spinlock.h``) is a variant which
saves whether interrupts were on or off in a flags word, which is passed
to spin_unlock_irqrestore(). This means that the same code can be used
inside a hard irq handler (where interrupts are already off) and in
softirqs (where the irq disabling is required).

Note that softirqs (and hence tasklets and timers) are run on return
from hardware interrupts, so spin_lock_irq() also stops these. In that
sense, spin_lock_irqsave() is the most general and powerful locking
function.

Locking Between Two Hard IRQ Handlers
-------------------------------------

It is rare to have to share data between two IRQ handlers, but if you
do, spin_lock_irqsave() should be used: it is architecture-specific
whether all interrupts are disabled inside irq handlers themselves.
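The save/restore contract is the important part of spin_lock_irqsave():
you put back the *previous* state, not unconditionally "on", so the
same code nests safely whether interrupts were already disabled. A toy
sketch, with a hypothetical ``irqs_enabled`` flag standing in for the
real CPU interrupt state::

```c
/* Sketch of the save/restore discipline behind spin_lock_irqsave().
 * The irqs_enabled flag is a stand-in for the CPU's real
 * interrupt-enable state; nothing here is kernel API. */

static int irqs_enabled = 1;

static unsigned long local_irq_save_sketch(void)
{
	unsigned long flags = irqs_enabled;

	irqs_enabled = 0;	/* "disable interrupts" */
	return flags;
}

static void local_irq_restore_sketch(unsigned long flags)
{
	irqs_enabled = flags;	/* restore previous state, not just "on" */
}

/* Because the old state is restored, nested sections work: the inner
 * restore leaves interrupts off, and only the outer restore re-enables. */
int nested_sections_behave(void)
{
	unsigned long outer = local_irq_save_sketch();
	unsigned long inner = local_irq_save_sketch(); /* already off: saves 0 */
	int still_off;

	local_irq_restore_sketch(inner);	/* must remain off here */
	still_off = !irqs_enabled;
	local_irq_restore_sketch(outer);	/* back on */
	return still_off && irqs_enabled;
}
```

An unconditional "re-enable" in the inner section would turn interrupts
back on while the outer critical region was still in progress, which is
exactly the bug the flags word prevents.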
Cheat Sheet For Locking
=======================

Pete Zaitcev gives the following summary:

-  If you are in a process context (any syscall) and want to lock other
   processes out, use a mutex. You can take a mutex and sleep
   (``copy_from_user()`` or ``kmalloc(x,GFP_KERNEL)``).

-  Otherwise (== data can be touched in an interrupt), use
   spin_lock_irqsave() and spin_unlock_irqrestore().

-  Avoid holding a spinlock for more than 5 lines of code and across any
   function call (except accessors like readb()).

Table of Minimum Requirements
-----------------------------

The following table lists the **minimum** locking requirements between
various contexts. In some cases, the same context can only be running on
one CPU at a time, so no locking is required for that context (eg. a
particular thread can only run on one CPU at a time, but if it needs to
share data with another thread, locking is required).

Remember the advice above: you can always use spin_lock_irqsave(), which
is a superset of all other spinlock primitives.
ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(h Softirq Bh]h Softirq B}(hj; hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMAhj8 ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(h Tasklet Ah]h Tasklet A}(hjR hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMAhjO ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(h Tasklet Bh]h Tasklet B}(hji hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMAhjf ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hTimer Ah]hTimer A}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMAhj} ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hTimer Bh]hTimer B}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMAhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hUser Context Ah]hUser Context A}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMAhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hUser Context Bh]hUser Context B}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMAhj ubah}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1jhje ubj)}(hhh](j)}(hhh](j)}(hhh]h)}(h IRQ Handler Ah]h IRQ Handler A}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMChj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hNoneh]hNone}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMChj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh](j)}(hhh]h)}(h IRQ Handler Bh]h IRQ Handler B}(hjv hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMDhjs ubah}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h)}(hSLISh]hSLIS}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMDhj ubah}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h)}(hNoneh]hNone}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMDhj ubah}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjp ubj)}(hhh]h}(h]h 
]h"]h$]h&]uh1jhjp ubeh}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh](j)}(hhh]h)}(h Softirq Ah]h Softirq A}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMEhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hSLIh]hSLI}(hj# hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMEhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hSLIh]hSLI}(hj: hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMEhj7 ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hSLh]hSL}(hjQ hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMEhjN ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh](j)}(hhh]h)}(h Softirq Bh]h Softirq B}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMFhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hSLIh]hSLI}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMFhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hSLIh]hSLI}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMFhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hSLh]hSL}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMFhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h)}(hSLh]hSL}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMFhj ubah}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh](j)}(hhh]h)}(h Tasklet Ah]h Tasklet A}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMGhj_ubah}(h]h ]h"]h$]h&]uh1jhj\ubj)}(hhh]h)}(hSLIh]hSLI}(hjyhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMGhjvubah}(h]h ]h"]h$]h&]uh1jhj\ubj)}(hhh]h)}(hSLIh]hSLI}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMGhjubah}(h]h ]h"]h$]h&]uh1jhj\ubj)}(hhh]h)}(hSLh]hSL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMGhjubah}(h]h ]h"]h$]h&]uh1jhj\ubj)}(hhh]h)}(hSLh]hSL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMGhjubah}(h]h ]h"]h$]h&]uh1jhj\ubj)}(hhh]h)}(hNoneh]hNone}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMGhjubah}(h]h 
]h"]h$]h&]uh1jhj\ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj\ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj\ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj\ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj\ubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhj\ubeh}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh](j)}(hhh]h)}(h Tasklet Bh]h Tasklet B}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMHhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLIh]hSLI}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMHhj6ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLIh]hSLI}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMHhjMubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hjghhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMHhjdubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMHhj{ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMHhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hNoneh]hNone}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMHhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh](j)}(hhh]h)}(hTimer Ah]hTimer A}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMIhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLIh]hSLI}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMIhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLIh]hSLI}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMIhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMIhj2ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMIhjIubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMIhj`ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMIhjwubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hNoneh]hNone}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMIhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh](j)}(hhh]h)}(hTimer Bh]hTimer B}(hjhhhNhNubah}(h]h 
]h"]h$]h&]uh1hhhhMJhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLIh]hSLI}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMJhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLIh]hSLI}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMJhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMJhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMJhj%ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMJhj<ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMJhjSubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLh]hSL}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMJhjjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hNoneh]hNone}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMJhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh](j)}(hhh]h)}(hUser Context Ah]hUser Context A}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLIh]hSLI}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLIh]hSLI}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMKhj&ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMKhj=ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMKhjTubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMKhjkubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hNoneh]hNone}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMKhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhj ubj)}(hhh](j)}(hhh]h)}(hUser Context Bh]hUser Context B}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhjubah}(h]h 
]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLIh]hSLI}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLIh]hSLI}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hj!hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhj5ubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhjLubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hSLBHh]hSLBH}(hjfhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhjcubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hMLIh]hMLI}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhjzubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hNoneh]hNone}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMLhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhj ubeh}(h]h ]h"]h$]h&]uh1jhje ubeh}(h]h ]h"]h$]h&]colsK uh1jhjb ubah}(h]h ]h"]h$]h&]uh1jhj# hhhhhNubh)}(h$Table: Table of Locking Requirementsh]h$Table: Table of Locking Requirements}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMOhj# hhubj)}(hhh]j)}(hhh](j)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jhjubj)}(hhh](j)}(hhh](j)}(hhh]h)}(hSLISh]hSLIS}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMRhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hspin_lock_irqsaveh]hspin_lock_irqsave}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMRhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh](j)}(hhh]h)}(hSLIh]hSLI}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMThj&ubah}(h]h ]h"]h$]h&]uh1jhj#ubj)}(hhh]h)}(h spin_lock_irqh]h spin_lock_irq}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMThj=ubah}(h]h ]h"]h$]h&]uh1jhj#ubeh}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh](j)}(hhh]h)}(hSLh]hSL}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMVhj]ubah}(h]h ]h"]h$]h&]uh1jhjZubj)}(hhh]h)}(h spin_lockh]h spin_lock}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMVhjtubah}(h]h 
]h"]h$]h&]uh1jhjZubeh}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh](j)}(hhh]h)}(hSLBHh]hSLBH}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMXhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(h spin_lock_bhh]h spin_lock_bh}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMXhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh](j)}(hhh]h)}(hMLIh]hMLI}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMZhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hmutex_lock_interruptibleh]hmutex_lock_interruptible}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMZhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]colsKuh1jhjubah}(h]h ]h"]h$]h&]uh1jhj# hhhhhNubh)}(h,Table: Legend for Locking Requirements Tableh]h,Table: Legend for Locking Requirements Table}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM]hj# hhubeh}(h]table-of-minimum-requirementsah ]h"]table of minimum requirementsah$]h&]uh1hhj hhhhhM4ubeh}(h]cheat-sheet-for-lockingah ]h"]cheat sheet for lockingah$]h&]uh1hhhhhhhhM$ubh)}(hhh](h)}(hThe trylock Functionsh]hThe trylock Functions}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj0hhhhhM`ubh)}(hXcThere are functions that try to acquire a lock only once and immediately return a value telling about success or failure to acquire the lock. They can be used if you need no access to the data protected with the lock when some other thread is holding the lock. You should acquire the lock later if you then need access to the data protected with the lock.h]hXcThere are functions that try to acquire a lock only once and immediately return a value telling about success or failure to acquire the lock. They can be used if you need no access to the data protected with the lock when some other thread is holding the lock. You should acquire the lock later if you then need access to the data protected with the lock.}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMbhj0hhubh)}(hspin_trylock() does not spin but returns non-zero if it acquires the spinlock on the first try or 0 if not. 
This function can be used in all contexts like spin_lock(): you must have disabled the contexts that might interrupt you and acquire the spin lock.h]hspin_trylock() does not spin but returns non-zero if it acquires the spinlock on the first try or 0 if not. This function can be used in all contexts like spin_lock(): you must have disabled the contexts that might interrupt you and acquire the spin lock.}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhhj0hhubh)}(hmutex_trylock() does not suspend your task but returns non-zero if it could lock the mutex on the first try or 0 if not. This function cannot be safely used in hardware or software interrupt contexts despite not sleeping.h]hmutex_trylock() does not suspend your task but returns non-zero if it could lock the mutex on the first try or 0 if not. This function cannot be safely used in hardware or software interrupt contexts despite not sleeping.}(hj]hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMnhj0hhubeh}(h]the-trylock-functionsah ]h"]the trylock functionsah$]h&]uh1hhhhhhhhM`ubh)}(hhh](h)}(hCommon Examplesh]hCommon Examples}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjshhhhhMtubh)}(hLet's step through a simple example: a cache of number to name mappings. The cache keeps a count of how often each of the objects is used, and when it gets full, throws out the least used one.h]hLet’s step through a simple example: a cache of number to name mappings. The cache keeps a count of how often each of the objects is used, and when it gets full, throws out the least used one.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMvhjshhubh)}(hhh](h)}(hAll In User Contexth]hAll In User Context}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhM{ubh)}(hFor our first example, we assume that all operations are in user context (ie. from system calls), so we can sleep. This means we can use a mutex to protect the cache and all the objects within it. Here's the code::h]hFor our first example, we assume that all operations are in user context (ie. from system calls), so we can sleep. 
This means we can use a mutex to protect the cache and all the objects within it. Here’s the code:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM}hjhhubjj)}(hX#include #include #include #include #include struct object { struct list_head list; int id; char name[32]; int popularity; }; /* Protects the cache, cache_num, and the objects within it */ static DEFINE_MUTEX(cache_lock); static LIST_HEAD(cache); static unsigned int cache_num = 0; #define MAX_CACHE_SIZE 10 /* Must be holding cache_lock */ static struct object *__cache_find(int id) { struct object *i; list_for_each_entry(i, &cache, list) if (i->id == id) { i->popularity++; return i; } return NULL; } /* Must be holding cache_lock */ static void __cache_delete(struct object *obj) { BUG_ON(!obj); list_del(&obj->list); kfree(obj); cache_num--; } /* Must be holding cache_lock */ static void __cache_add(struct object *obj) { list_add(&obj->list, &cache); if (++cache_num > MAX_CACHE_SIZE) { struct object *i, *outcast = NULL; list_for_each_entry(i, &cache, list) { if (!outcast || i->popularity < outcast->popularity) outcast = i; } __cache_delete(outcast); } } int cache_add(int id, const char *name) { struct object *obj; if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL) return -ENOMEM; strscpy(obj->name, name, sizeof(obj->name)); obj->id = id; obj->popularity = 0; mutex_lock(&cache_lock); __cache_add(obj); mutex_unlock(&cache_lock); return 0; } void cache_delete(int id) { mutex_lock(&cache_lock); __cache_delete(__cache_find(id)); mutex_unlock(&cache_lock); } int cache_find(int id, char *name) { struct object *obj; int ret = -ENOENT; mutex_lock(&cache_lock); obj = __cache_find(id); if (obj) { ret = 0; strcpy(name, obj->name); } mutex_unlock(&cache_lock); return ret; } h]hX#include #include #include #include #include struct object { struct list_head list; int id; char name[32]; int popularity; }; /* Protects the cache, cache_num, and the objects within it */ static DEFINE_MUTEX(cache_lock); static LIST_HEAD(cache); static 
unsigned int cache_num = 0; #define MAX_CACHE_SIZE 10 /* Must be holding cache_lock */ static struct object *__cache_find(int id) { struct object *i; list_for_each_entry(i, &cache, list) if (i->id == id) { i->popularity++; return i; } return NULL; } /* Must be holding cache_lock */ static void __cache_delete(struct object *obj) { BUG_ON(!obj); list_del(&obj->list); kfree(obj); cache_num--; } /* Must be holding cache_lock */ static void __cache_add(struct object *obj) { list_add(&obj->list, &cache); if (++cache_num > MAX_CACHE_SIZE) { struct object *i, *outcast = NULL; list_for_each_entry(i, &cache, list) { if (!outcast || i->popularity < outcast->popularity) outcast = i; } __cache_delete(outcast); } } int cache_add(int id, const char *name) { struct object *obj; if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL) return -ENOMEM; strscpy(obj->name, name, sizeof(obj->name)); obj->id = id; obj->popularity = 0; mutex_lock(&cache_lock); __cache_add(obj); mutex_unlock(&cache_lock); return 0; } void cache_delete(int id) { mutex_lock(&cache_lock); __cache_delete(__cache_find(id)); mutex_unlock(&cache_lock); } int cache_find(int id, char *name) { struct object *obj; int ret = -ENOENT; mutex_lock(&cache_lock); obj = __cache_find(id); if (obj) { ret = 0; strcpy(name, obj->name); } mutex_unlock(&cache_lock); return ret; }}hjsbah}(h]h ]h"]h$]h&]jyjzuh1jihhhMhjhhubh)}(hX,Note that we always make sure we have the cache_lock when we add, delete, or look up the cache: both the cache infrastructure itself and the contents of the objects are protected by the lock. In this case it's easy, since we copy the data for the user, and never let them access the objects directly.h]hX.Note that we always make sure we have the cache_lock when we add, delete, or look up the cache: both the cache infrastructure itself and the contents of the objects are protected by the lock. 
In this case it’s easy, since we copy the data for the user, and never let them access the objects directly.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hThere is a slight (and common) optimization here: in cache_add() we set up the fields of the object before grabbing the lock. This is safe, as no-one else can access it until we put it in cache.h]hThere is a slight (and common) optimization here: in cache_add() we set up the fields of the object before grabbing the lock. This is safe, as no-one else can access it until we put it in cache.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubeh}(h]all-in-user-contextah ]h"]all in user contextah$]h&]uh1hhjshhhhhM{ubh)}(hhh](h)}(h Accessing From Interrupt Contexth]h Accessing From Interrupt Context}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hNow consider the case where cache_find() can be called from interrupt context: either a hardware interrupt or a softirq. An example would be a timer which deletes object from the cache.h]hNow consider the case where cache_find() can be called from interrupt context: either a hardware interrupt or a softirq. 
An example would be a timer which deletes object from the cache.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hThe change is shown below, in standard patch format: the ``-`` are lines which are taken away, and the ``+`` are lines which are added.h](h9The change is shown below, in standard patch format: the }(hjhhhNhNubj5)}(h``-``h]h-}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubh) are lines which are taken away, and the }(hjhhhNhNubj5)}(h``+``h]h+}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubh are lines which are added.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjhhubjj)}(hX'--- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100 +++ cache.c.interrupt 2003-12-09 14:07:49.000000000 +1100 @@ -12,7 +12,7 @@ int popularity; }; -static DEFINE_MUTEX(cache_lock); +static DEFINE_SPINLOCK(cache_lock); static LIST_HEAD(cache); static unsigned int cache_num = 0; #define MAX_CACHE_SIZE 10 @@ -55,6 +55,7 @@ int cache_add(int id, const char *name) { struct object *obj; + unsigned long flags; if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL) return -ENOMEM; @@ -63,30 +64,33 @@ obj->id = id; obj->popularity = 0; - mutex_lock(&cache_lock); + spin_lock_irqsave(&cache_lock, flags); __cache_add(obj); - mutex_unlock(&cache_lock); + spin_unlock_irqrestore(&cache_lock, flags); return 0; } void cache_delete(int id) { - mutex_lock(&cache_lock); + unsigned long flags; + + spin_lock_irqsave(&cache_lock, flags); __cache_delete(__cache_find(id)); - mutex_unlock(&cache_lock); + spin_unlock_irqrestore(&cache_lock, flags); } int cache_find(int id, char *name) { struct object *obj; int ret = -ENOENT; + unsigned long flags; - mutex_lock(&cache_lock); + spin_lock_irqsave(&cache_lock, flags); obj = __cache_find(id); if (obj) { ret = 0; strcpy(name, obj->name); } - mutex_unlock(&cache_lock); + spin_unlock_irqrestore(&cache_lock, flags); return ret; }h]hX'--- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100 +++ cache.c.interrupt 2003-12-09 14:07:49.000000000 +1100 @@ -12,7 +12,7 @@ int popularity; }; 
-static DEFINE_MUTEX(cache_lock); +static DEFINE_SPINLOCK(cache_lock); static LIST_HEAD(cache); static unsigned int cache_num = 0; #define MAX_CACHE_SIZE 10 @@ -55,6 +55,7 @@ int cache_add(int id, const char *name) { struct object *obj; + unsigned long flags; if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL) return -ENOMEM; @@ -63,30 +64,33 @@ obj->id = id; obj->popularity = 0; - mutex_lock(&cache_lock); + spin_lock_irqsave(&cache_lock, flags); __cache_add(obj); - mutex_unlock(&cache_lock); + spin_unlock_irqrestore(&cache_lock, flags); return 0; } void cache_delete(int id) { - mutex_lock(&cache_lock); + unsigned long flags; + + spin_lock_irqsave(&cache_lock, flags); __cache_delete(__cache_find(id)); - mutex_unlock(&cache_lock); + spin_unlock_irqrestore(&cache_lock, flags); } int cache_find(int id, char *name) { struct object *obj; int ret = -ENOENT; + unsigned long flags; - mutex_lock(&cache_lock); + spin_lock_irqsave(&cache_lock, flags); obj = __cache_find(id); if (obj) { ret = 0; strcpy(name, obj->name); } - mutex_unlock(&cache_lock); + spin_unlock_irqrestore(&cache_lock, flags); return ret; }}hj4sbah}(h]h ]h"]h$]h&]jyjzuh1jihhhMhjhhubh)}(hNote that the spin_lock_irqsave() will turn off interrupts if they are on, otherwise does nothing (if we are already in an interrupt handler), hence these functions are safe to call from any context.h]hNote that the spin_lock_irqsave() will turn off interrupts if they are on, otherwise does nothing (if we are already in an interrupt handler), hence these functions are safe to call from any context.}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM3hjhhubh)}(hUnfortunately, cache_add() calls kmalloc() with the ``GFP_KERNEL`` flag, which is only legal in user context. 
I have assumed that cache_add() is still only called in user context, otherwise this should become a parameter to cache_add().h](h4Unfortunately, cache_add() calls kmalloc() with the }(hjPhhhNhNubj5)}(h``GFP_KERNEL``h]h GFP_KERNEL}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjPubh flag, which is only legal in user context. I have assumed that cache_add() is still only called in user context, otherwise this should become a parameter to cache_add().}(hjPhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM8hjhhubeh}(h] accessing-from-interrupt-contextah ]h"] accessing from interrupt contextah$]h&]uh1hhjshhhhhMubh)}(hhh](h)}(h"Exposing Objects Outside This Fileh]h"Exposing Objects Outside This File}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjxhhhhhM?ubh)}(hXIf our objects contained more information, it might not be sufficient to copy the information in and out: other parts of the code might want to keep pointers to these objects, for example, rather than looking up the id every time. This produces two problems.h]hXIf our objects contained more information, it might not be sufficient to copy the information in and out: other parts of the code might want to keep pointers to these objects, for example, rather than looking up the id every time. This produces two problems.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMAhjxhhubh)}(hThe first problem is that we use the ``cache_lock`` to protect objects: we'd need to make this non-static so the rest of the code can use it. This makes locking trickier, as it is no longer all in one place.h](h%The first problem is that we use the }(hjhhhNhNubj5)}(h``cache_lock``h]h cache_lock}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubh to protect objects: we’d need to make this non-static so the rest of the code can use it. This makes locking trickier, as it is no longer all in one place.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMFhjxhhubh)}(hX=The second problem is the lifetime problem: if another structure keeps a pointer to an object, it presumably expects that pointer to remain valid. 
Unfortunately, this is only guaranteed while you hold the lock, otherwise someone might call cache_delete() and even worse, add another object, re-using the same address.h]hX=The second problem is the lifetime problem: if another structure keeps a pointer to an object, it presumably expects that pointer to remain valid. Unfortunately, this is only guaranteed while you hold the lock, otherwise someone might call cache_delete() and even worse, add another object, re-using the same address.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMJhjxhhubh)}(hZAs there is only one lock, you can't hold it forever: no-one else would get any work done.h]h\As there is only one lock, you can’t hold it forever: no-one else would get any work done.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMPhjxhhubh)}(hXThe solution to this problem is to use a reference count: everyone who has a pointer to the object increases it when they first get the object, and drops the reference count when they're finished with it. Whoever drops it to zero knows it is unused, and can actually delete it.h]hXThe solution to this problem is to use a reference count: everyone who has a pointer to the object increases it when they first get the object, and drops the reference count when they’re finished with it. 
Whoever drops it to zero knows it is unused, and can actually delete it.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMShjxhhubh)}(hHere is the code::h]hHere is the code:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMXhjxhhubjj)}(hX--- cache.c.interrupt 2003-12-09 14:25:43.000000000 +1100 +++ cache.c.refcnt 2003-12-09 14:33:05.000000000 +1100 @@ -7,6 +7,7 @@ struct object { struct list_head list; + unsigned int refcnt; int id; char name[32]; int popularity; @@ -17,6 +18,35 @@ static unsigned int cache_num = 0; #define MAX_CACHE_SIZE 10 +static void __object_put(struct object *obj) +{ + if (--obj->refcnt == 0) + kfree(obj); +} + +static void __object_get(struct object *obj) +{ + obj->refcnt++; +} + +void object_put(struct object *obj) +{ + unsigned long flags; + + spin_lock_irqsave(&cache_lock, flags); + __object_put(obj); + spin_unlock_irqrestore(&cache_lock, flags); +} + +void object_get(struct object *obj) +{ + unsigned long flags; + + spin_lock_irqsave(&cache_lock, flags); + __object_get(obj); + spin_unlock_irqrestore(&cache_lock, flags); +} + /* Must be holding cache_lock */ static struct object *__cache_find(int id) { @@ -35,6 +65,7 @@ { BUG_ON(!obj); list_del(&obj->list); + __object_put(obj); cache_num--; } @@ -63,6 +94,7 @@ strscpy(obj->name, name, sizeof(obj->name)); obj->id = id; obj->popularity = 0; + obj->refcnt = 1; /* The cache holds a reference */ spin_lock_irqsave(&cache_lock, flags); __cache_add(obj); @@ -79,18 +111,15 @@ spin_unlock_irqrestore(&cache_lock, flags); } -int cache_find(int id, char *name) +struct object *cache_find(int id) { struct object *obj; - int ret = -ENOENT; unsigned long flags; spin_lock_irqsave(&cache_lock, flags); obj = __cache_find(id); - if (obj) { - ret = 0; - strcpy(name, obj->name); - } + if (obj) + __object_get(obj); spin_unlock_irqrestore(&cache_lock, flags); - return ret; + return obj; }h]hX--- cache.c.interrupt 2003-12-09 14:25:43.000000000 +1100 +++ cache.c.refcnt 2003-12-09 14:33:05.000000000 +1100 @@ -7,6 +7,7 @@ struct object { 
struct list_head list; + unsigned int refcnt; int id; char name[32]; int popularity; @@ -17,6 +18,35 @@ static unsigned int cache_num = 0; #define MAX_CACHE_SIZE 10 +static void __object_put(struct object *obj) +{ + if (--obj->refcnt == 0) + kfree(obj); +} + +static void __object_get(struct object *obj) +{ + obj->refcnt++; +} + +void object_put(struct object *obj) +{ + unsigned long flags; + + spin_lock_irqsave(&cache_lock, flags); + __object_put(obj); + spin_unlock_irqrestore(&cache_lock, flags); +} + +void object_get(struct object *obj) +{ + unsigned long flags; + + spin_lock_irqsave(&cache_lock, flags); + __object_get(obj); + spin_unlock_irqrestore(&cache_lock, flags); +} + /* Must be holding cache_lock */ static struct object *__cache_find(int id) { @@ -35,6 +65,7 @@ { BUG_ON(!obj); list_del(&obj->list); + __object_put(obj); cache_num--; } @@ -63,6 +94,7 @@ strscpy(obj->name, name, sizeof(obj->name)); obj->id = id; obj->popularity = 0; + obj->refcnt = 1; /* The cache holds a reference */ spin_lock_irqsave(&cache_lock, flags); __cache_add(obj); @@ -79,18 +111,15 @@ spin_unlock_irqrestore(&cache_lock, flags); } -int cache_find(int id, char *name) +struct object *cache_find(int id) { struct object *obj; - int ret = -ENOENT; unsigned long flags; spin_lock_irqsave(&cache_lock, flags); obj = __cache_find(id); - if (obj) { - ret = 0; - strcpy(name, obj->name); - } + if (obj) + __object_get(obj); spin_unlock_irqrestore(&cache_lock, flags); - return ret; + return obj; }}hjsbah}(h]h ]h"]h$]h&]jyjzuh1jihhhMZhjxhhubh)}(hWe encapsulate the reference counting in the standard 'get' and 'put' functions. Now we can return the object itself from cache_find() which has the advantage that the user can now sleep holding the object (eg. to copy_to_user() to name to userspace).h]hXWe encapsulate the reference counting in the standard ‘get’ and ‘put’ functions. 
Now we can return the object itself from cache_find() which has the advantage that the user can now sleep holding the object (eg. to copy_to_user() to name to userspace).}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjxhhubh)}(hXThe other point to note is that I said a reference should be held for every pointer to the object: thus the reference count is 1 when first inserted into the cache. In some versions the framework does not hold a reference count, but they are more complicated.h]hXThe other point to note is that I said a reference should be held for every pointer to the object: thus the reference count is 1 when first inserted into the cache. In some versions the framework does not hold a reference count, but they are more complicated.}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjxhhubh)}(hhh](h)}(h/Using Atomic Operations For The Reference Counth]h/Using Atomic Operations For The Reference Count}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hXIn practice, :c:type:`atomic_t` would usually be used for refcnt. There are a number of atomic operations defined in ``include/asm/atomic.h``: these are guaranteed to be seen atomically from all CPUs in the system, so no lock is required. In this case, it is simpler than using spinlocks, although for anything non-trivial using spinlocks is clearer. The atomic_inc() and atomic_dec_and_test() are used instead of the standard increment and decrement operators, and the lock is no longer used to protect the reference count itself.h](h In practice, }(hj*hhhNhNubh)}(h:c:type:`atomic_t`h]j5)}(hj4h]hatomic_t}(hj6hhhNhNubah}(h]h ](xrefcc-typeeh"]h$]h&]uh1j4hj2ubah}(h]h ]h"]h$]h&]refdockernel-hacking/locking refdomainjAreftypetype refexplicitrefwarn reftargetatomic_tuh1hhhhMhj*ubhV would usually be used for refcnt. 
There are a number of atomic operations defined in }(hj*hhhNhNubj5)}(h``include/asm/atomic.h``h]hinclude/asm/atomic.h}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj*ubhX: these are guaranteed to be seen atomically from all CPUs in the system, so no lock is required. In this case, it is simpler than using spinlocks, although for anything non-trivial using spinlocks is clearer. The atomic_inc() and atomic_dec_and_test() are used instead of the standard increment and decrement operators, and the lock is no longer used to protect the reference count itself.}(hj*hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjhhubjj)}(hXg--- cache.c.refcnt 2003-12-09 15:00:35.000000000 +1100 +++ cache.c.refcnt-atomic 2003-12-11 15:49:42.000000000 +1100 @@ -7,7 +7,7 @@ struct object { struct list_head list; - unsigned int refcnt; + atomic_t refcnt; int id; char name[32]; int popularity; @@ -18,33 +18,15 @@ static unsigned int cache_num = 0; #define MAX_CACHE_SIZE 10 -static void __object_put(struct object *obj) -{ - if (--obj->refcnt == 0) - kfree(obj); -} - -static void __object_get(struct object *obj) -{ - obj->refcnt++; -} - void object_put(struct object *obj) { - unsigned long flags; - - spin_lock_irqsave(&cache_lock, flags); - __object_put(obj); - spin_unlock_irqrestore(&cache_lock, flags); + if (atomic_dec_and_test(&obj->refcnt)) + kfree(obj); } void object_get(struct object *obj) { - unsigned long flags; - - spin_lock_irqsave(&cache_lock, flags); - __object_get(obj); - spin_unlock_irqrestore(&cache_lock, flags); + atomic_inc(&obj->refcnt); } /* Must be holding cache_lock */ @@ -65,7 +47,7 @@ { BUG_ON(!obj); list_del(&obj->list); - __object_put(obj); + object_put(obj); cache_num--; } @@ -94,7 +76,7 @@ strscpy(obj->name, name, sizeof(obj->name)); obj->id = id; obj->popularity = 0; - obj->refcnt = 1; /* The cache holds a reference */ + atomic_set(&obj->refcnt, 1); /* The cache holds a reference */ spin_lock_irqsave(&cache_lock, flags); __cache_add(obj); @@ -119,7 +101,7 @@ spin_lock_irqsave(&cache_lock, 
flags); obj = __cache_find(id); if (obj) - __object_get(obj); + object_get(obj); spin_unlock_irqrestore(&cache_lock, flags); return obj; }h]hXg--- cache.c.refcnt 2003-12-09 15:00:35.000000000 +1100 +++ cache.c.refcnt-atomic 2003-12-11 15:49:42.000000000 +1100 @@ -7,7 +7,7 @@ struct object { struct list_head list; - unsigned int refcnt; + atomic_t refcnt; int id; char name[32]; int popularity; @@ -18,33 +18,15 @@ static unsigned int cache_num = 0; #define MAX_CACHE_SIZE 10 -static void __object_put(struct object *obj) -{ - if (--obj->refcnt == 0) - kfree(obj); -} - -static void __object_get(struct object *obj) -{ - obj->refcnt++; -} - void object_put(struct object *obj) { - unsigned long flags; - - spin_lock_irqsave(&cache_lock, flags); - __object_put(obj); - spin_unlock_irqrestore(&cache_lock, flags); + if (atomic_dec_and_test(&obj->refcnt)) + kfree(obj); } void object_get(struct object *obj) { - unsigned long flags; - - spin_lock_irqsave(&cache_lock, flags); - __object_get(obj); - spin_unlock_irqrestore(&cache_lock, flags); + atomic_inc(&obj->refcnt); } /* Must be holding cache_lock */ @@ -65,7 +47,7 @@ { BUG_ON(!obj); list_del(&obj->list); - __object_put(obj); + object_put(obj); cache_num--; } @@ -94,7 +76,7 @@ strscpy(obj->name, name, sizeof(obj->name)); obj->id = id; obj->popularity = 0; - obj->refcnt = 1; /* The cache holds a reference */ + atomic_set(&obj->refcnt, 1); /* The cache holds a reference */ spin_lock_irqsave(&cache_lock, flags); __cache_add(obj); @@ -119,7 +101,7 @@ spin_lock_irqsave(&cache_lock, flags); obj = __cache_find(id); if (obj) - __object_get(obj); + object_get(obj); spin_unlock_irqrestore(&cache_lock, flags); return obj; }}hjqsbah}(h]h ]h"]h$]h&]jyjzuh1jihhhMhjhhubeh}(h]/using-atomic-operations-for-the-reference-countah ]h"]/using atomic operations for the reference countah$]h&]uh1hhjxhhhhhMubeh}(h]"exposing-objects-outside-this-fileah ]h"]"exposing objects outside this fileah$]h&]uh1hhjshhhhhM?ubh)}(hhh](h)}(h!Protecting The Objects 
Themselvesh]h!Protecting The Objects Themselves}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hIn these examples, we assumed that the objects (except the reference counts) never changed once they are created. If we wanted to allow the name to change, there are three possibilities:h]hIn these examples, we assumed that the objects (except the reference counts) never changed once they are created. If we wanted to allow the name to change, there are three possibilities:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubj )}(hhh](j )}(hrYou can make ``cache_lock`` non-static, and tell people to grab that lock before changing the name in any object. h]h)}(hqYou can make ``cache_lock`` non-static, and tell people to grab that lock before changing the name in any object.h](h You can make }(hjhhhNhNubj5)}(h``cache_lock``h]h cache_lock}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubhV non-static, and tell people to grab that lock before changing the name in any object.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j hjhhhhhNubj )}(hYou can provide a cache_obj_rename() which grabs this lock and changes the name for the caller, and tell everyone to use that function. h]h)}(hYou can provide a cache_obj_rename() which grabs this lock and changes the name for the caller, and tell everyone to use that function.h]hYou can provide a cache_obj_rename() which grabs this lock and changes the name for the caller, and tell everyone to use that function.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j hjhhhhhNubj )}(hiYou can make the ``cache_lock`` protect only the cache itself, and use another lock to protect the name. 
h]h)}(hhYou can make the ``cache_lock`` protect only the cache itself, and use another lock to protect the name.h](hYou can make the }(hjhhhNhNubj5)}(h``cache_lock``h]h cache_lock}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubhI protect only the cache itself, and use another lock to protect the name.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM#hjubah}(h]h ]h"]h$]h&]uh1j hjhhhhhNubeh}(h]h ]h"]h$]h&]j! j" uh1j hhhMhjhhubh)}(hTheoretically, you can make the locks as fine-grained as one lock for every field, for every object. In practice, the most common variants are:h]hTheoretically, you can make the locks as fine-grained as one lock for every field, for every object. In practice, the most common variants are:}(hj#hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM&hjhhubj )}(hhh](j )}(hOne lock which protects the infrastructure (the ``cache`` list in this example) and all the objects. This is what we have done so far. h]h)}(hOne lock which protects the infrastructure (the ``cache`` list in this example) and all the objects. This is what we have done so far.h](h0One lock which protects the infrastructure (the }(hj8hhhNhNubj5)}(h ``cache``h]hcache}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj8ubhM list in this example) and all the objects. This is what we have done so far.}(hj8hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM*hj4ubah}(h]h ]h"]h$]h&]uh1j hj1hhhhhNubj )}(hOne lock which protects the infrastructure (including the list pointers inside the objects), and one lock inside the object which protects the rest of that object. h]h)}(hOne lock which protects the infrastructure (including the list pointers inside the objects), and one lock inside the object which protects the rest of that object.h]hOne lock which protects the infrastructure (including the list pointers inside the objects), and one lock inside the object which protects the rest of that object.}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM-hj^ubah}(h]h ]h"]h$]h&]uh1j hj1hhhhhNubj )}(hvMultiple locks to protect the infrastructure (eg. 
one lock per hash chain), possibly with a separate per-object lock. h]h)}(huMultiple locks to protect the infrastructure (eg. one lock per hash chain), possibly with a separate per-object lock.h]huMultiple locks to protect the infrastructure (eg. one lock per hash chain), possibly with a separate per-object lock.}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM1hjvubah}(h]h ]h"]h$]h&]uh1j hj1hhhhhNubeh}(h]h ]h"]h$]h&]j! j" uh1j hhhM*hjhhubh)}(h-Here is the "lock-per-object" implementation:h]h1Here is the “lock-per-object” implementation:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM4hjhhubjj)}(hX--- cache.c.refcnt-atomic 2003-12-11 15:50:54.000000000 +1100 +++ cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100 @@ -6,11 +6,17 @@ struct object { + /* These two protected by cache_lock. */ struct list_head list; + int popularity; + atomic_t refcnt; + + /* Doesn't change once created. */ int id; + + spinlock_t lock; /* Protects the name */ char name[32]; - int popularity; }; static DEFINE_SPINLOCK(cache_lock); @@ -77,6 +84,7 @@ obj->id = id; obj->popularity = 0; atomic_set(&obj->refcnt, 1); /* The cache holds a reference */ + spin_lock_init(&obj->lock); spin_lock_irqsave(&cache_lock, flags); __cache_add(obj);h]hX--- cache.c.refcnt-atomic 2003-12-11 15:50:54.000000000 +1100 +++ cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100 @@ -6,11 +6,17 @@ struct object { + /* These two protected by cache_lock. */ struct list_head list; + int popularity; + atomic_t refcnt; + + /* Doesn't change once created. 
*/ int id; + + spinlock_t lock; /* Protects the name */ char name[32]; - int popularity; }; static DEFINE_SPINLOCK(cache_lock); @@ -77,6 +84,7 @@ obj->id = id; obj->popularity = 0; atomic_set(&obj->refcnt, 1); /* The cache holds a reference */ + spin_lock_init(&obj->lock); spin_lock_irqsave(&cache_lock, flags); __cache_add(obj);}hjsbah}(h]h ]h"]h$]h&]jyjzuh1jihhhM8hjhhubh)}(hX`Note that I decide that the popularity count should be protected by the ``cache_lock`` rather than the per-object lock: this is because it (like the :c:type:`struct list_head ` inside the object) is logically part of the infrastructure. This way, I don't need to grab the lock of every object in __cache_add() when seeking the least popular.h](hHNote that I decide that the popularity count should be protected by the }(hjhhhNhNubj5)}(h``cache_lock``h]h cache_lock}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubh? rather than the per-object lock: this is because it (like the }(hjhhhNhNubh)}(h&:c:type:`struct list_head `h]j5)}(hjh]hstruct list_head}(hjhhhNhNubah}(h]h ](j@jAc-typeeh"]h$]h&]uh1j4hjubah}(h]h ]h"]h$]h&]refdocjM refdomainjAreftypetype refexplicitrefwarnjS list_headuh1hhhhMVhjubh inside the object) is logically part of the infrastructure. This way, I don’t need to grab the lock of every object in __cache_add() when seeking the least popular.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMVhjhhubh)}(hI also decided that the id member is unchangeable, so I don't need to grab each object lock in __cache_find() to examine the id: the object lock is only used by a caller who wants to read or write the name field.h]hI also decided that the id member is unchangeable, so I don’t need to grab each object lock in __cache_find() to examine the id: the object lock is only used by a caller who wants to read or write the name field.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM]hjhhubh)}(hNote also that I added a comment describing what data was protected by which locks. 
This is extremely important, as it describes the runtime behavior of the code, and can be hard to gain from just reading. And as Alan Cox says, “Lock data, not code”.h]hNote also that I added a comment describing what data was protected by which locks. This is extremely important, as it describes the runtime behavior of the code, and can be hard to gain from just reading. And as Alan Cox says, “Lock data, not code”.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMbhjhhubeh}(h]!protecting-the-objects-themselvesah ]h"]!protecting the objects themselvesah$]h&]uh1hhjshhhhhMubeh}(h]common-examplesah ]h"]common examplesah$]h&]uh1hhhhhhhhMtubh)}(hhh](h)}(hCommon Problemsh]hCommon Problems}(hj"hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMhubh)}(hhh](h)}(hDeadlock: Simple and Advancedh]hDeadlock: Simple and Advanced}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj0hhhhhMkubh)}(hX&There is a coding bug where a piece of code tries to grab a spinlock twice: it will spin forever, waiting for the lock to be released (spinlocks, rwlocks and mutexes are not recursive in Linux). This is trivial to diagnose: not a stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.h]hX&There is a coding bug where a piece of code tries to grab a spinlock twice: it will spin forever, waiting for the lock to be released (spinlocks, rwlocks and mutexes are not recursive in Linux). This is trivial to diagnose: not a stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.}(hjAhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMmhj0hhubh)}(hX3For a slightly more complex case, imagine you have a region shared by a softirq and user context. If you use a spin_lock() call to protect it, it is possible that the user context will be interrupted by the softirq while it holds the lock, and the softirq will then spin forever trying to get the same lock.h]hX3For a slightly more complex case, imagine you have a region shared by a softirq and user context. 
If you use a spin_lock() call to protect it, it is possible that the user context will be interrupted by the softirq while it holds the lock, and the softirq will then spin forever trying to get the same lock.}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMshj0hhubh)}(hBoth of these are called deadlock, and as shown above, it can occur even with a single CPU (although not on UP compiles, since spinlocks vanish on kernel compiles with ``CONFIG_SMP``\ =n. You'll still get data corruption in the second example).h](hBoth of these are called deadlock, and as shown above, it can occur even with a single CPU (although not on UP compiles, since spinlocks vanish on kernel compiles with }(hj]hhhNhNubj5)}(h``CONFIG_SMP``h]h CONFIG_SMP}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj]ubh@ =n. You’ll still get data corruption in the second example).}(hj]hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMyhj0hhubh)}(hThis complete lockup is easy to diagnose: on SMP boxes the watchdog timer or compiling with ``DEBUG_SPINLOCK`` set (``include/linux/spinlock.h``) will show this up immediately when it happens.h](h\This complete lockup is easy to diagnose: on SMP boxes the watchdog timer or compiling with }(hj}hhhNhNubj5)}(h``DEBUG_SPINLOCK``h]hDEBUG_SPINLOCK}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj}ubh set (}(hj}hhhNhNubj5)}(h``include/linux/spinlock.h``h]hinclude/linux/spinlock.h}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj}ubh0) will show this up immediately when it happens.}(hj}hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM~hj0hhubh)}(hXA more complex problem is the so-called 'deadly embrace', involving two or more locks. Say you have a hash table: each entry in the table is a spinlock, and a chain of hashed objects. 
Inside a softirq handler, you sometimes want to alter an object from one place in the hash to another: you grab the spinlock of the old hash chain and the spinlock of the new hash chain, and delete the object from the old one, and insert it in the new one.h]hXA more complex problem is the so-called ‘deadly embrace’, involving two or more locks. Say you have a hash table: each entry in the table is a spinlock, and a chain of hashed objects. Inside a softirq handler, you sometimes want to alter an object from one place in the hash to another: you grab the spinlock of the old hash chain and the spinlock of the new hash chain, and delete the object from the old one, and insert it in the new one.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0hhubh)}(hX There are two problems here. First, if your code ever tries to move the object to the same chain, it will deadlock with itself as it tries to lock it twice. Secondly, if the same softirq on another CPU is trying to move another object in the reverse direction, the following could happen:h]hX There are two problems here. First, if your code ever tries to move the object to the same chain, it will deadlock with itself as it tries to lock it twice. 
Secondly, if the same softirq on another CPU is trying to move another object in the reverse direction, the following could happen:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0hhubj)}(hhh]j)}(hhh](j)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jhjubj)}(hhh]h}(h]h ]h"]h$]h&]colwidthKuh1jhjubj)}(hhh]j)}(hhh](j)}(hhh]h)}(hCPU 1h]hCPU 1}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh]h)}(hCPU 2h]hCPU 2}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1jhjubj)}(hhh](j)}(hhh](j)}(hhh]h)}(hGrab lock A -> OKh]hGrab lock A -> OK}(hj.hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj+ubah}(h]h ]h"]h$]h&]uh1jhj(ubj)}(hhh]h)}(hGrab lock B -> OKh]hGrab lock B -> OK}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjBubah}(h]h ]h"]h$]h&]uh1jhj(ubeh}(h]h ]h"]h$]h&]uh1jhj%ubj)}(hhh](j)}(hhh]h)}(hGrab lock B -> spinh]hGrab lock B -> spin}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjbubah}(h]h ]h"]h$]h&]uh1jhj_ubj)}(hhh]h)}(hGrab lock A -> spinh]hGrab lock A -> spin}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjyubah}(h]h ]h"]h$]h&]uh1jhj_ubeh}(h]h ]h"]h$]h&]uh1jhj%ubeh}(h]h ]h"]h$]h&]uh1jhjubeh}(h]h ]h"]h$]h&]colsKuh1jhjubah}(h]h ]h"]h$]h&]uh1jhj0hhhhhNubh)}(hTable: Consequencesh]hTable: Consequences}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0hhubh)}(hxThe two CPUs will spin forever, waiting for the other to give up their lock. It will look, smell, and feel like a crash.h]hxThe two CPUs will spin forever, waiting for the other to give up their lock. It will look, smell, and feel like a crash.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj0hhubeh}(h]jah ]h"]deadlock: simple and advancedah$]h&]uh1hhjhhhhhMkj Kubh)}(hhh](h)}(hPreventing Deadlockh]hPreventing Deadlock}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hX#Textbooks will tell you that if you always lock in the same order, you will never get this kind of deadlock. 
Practice will tell you that this approach doesn't scale: when I create a new lock, I don't understand enough of the kernel to figure out where in the 5000 lock hierarchy it will fit.h]hX'Textbooks will tell you that if you always lock in the same order, you will never get this kind of deadlock. Practice will tell you that this approach doesn’t scale: when I create a new lock, I don’t understand enough of the kernel to figure out where in the 5000 lock hierarchy it will fit.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hX_The best locks are encapsulated: they never get exposed in headers, and are never held around calls to non-trivial functions outside the same file. You can read through this code and see that it will never deadlock, because it never tries to grab another lock while it has that one. People using your code don't even need to know you are using a lock.h]hXaThe best locks are encapsulated: they never get exposed in headers, and are never held around calls to non-trivial functions outside the same file. You can read through this code and see that it will never deadlock, because it never tries to grab another lock while it has that one. People using your code don’t even need to know you are using a lock.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hA classic problem here is when you provide callbacks or hooks: if you call these with the lock held, you risk simple deadlock, or a deadly embrace (who knows what the callback will do?).h]hA classic problem here is when you provide callbacks or hooks: if you call these with the lock held, you risk simple deadlock, or a deadly embrace (who knows what the callback will do?).}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hhh](h)}(h#Overzealous Prevention Of Deadlocksh]h#Overzealous Prevention Of Deadlocks}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hDeadlocks are problematic, but not as bad as data corruption. 
Code which grabs a read lock, searches a list, fails to find what it wants, drops the read lock, grabs a write lock and inserts the object has a race condition.h]hDeadlocks are problematic, but not as bad as data corruption. Code which grabs a read lock, searches a list, fails to find what it wants, drops the read lock, grabs a write lock and inserts the object has a race condition.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubeh}(h]#overzealous-prevention-of-deadlocksah ]h"]#overzealous prevention of deadlocksah$]h&]uh1hhjhhhhhMubeh}(h]preventing-deadlockah ]h"]preventing deadlockah$]h&]uh1hhjhhhhhMubh)}(hhh](h)}(hRacing Timers: A Kernel Pastimeh]hRacing Timers: A Kernel Pastime}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj6hhhhhMubh)}(hTimers can produce their own special problems with races. Consider a collection of objects (list, hash, etc) where each object has a timer which is due to destroy it.h]hTimers can produce their own special problems with races. Consider a collection of objects (list, hash, etc) where each object has a timer which is due to destroy it.}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj6hhubh)}(hbIf you want to destroy the entire collection (say on module removal), you might do the following::h]haIf you want to destroy the entire collection (say on module removal), you might do the following:}(hjUhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj6hhubjj)}(hX/* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE HUNGARIAN NOTATION */ spin_lock_bh(&list_lock); while (list) { struct foo *next = list->next; timer_delete(&list->timer); kfree(list); list = next; } spin_unlock_bh(&list_lock);h]hX/* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE HUNGARIAN NOTATION */ spin_lock_bh(&list_lock); while (list) { struct foo *next = list->next; timer_delete(&list->timer); kfree(list); list = next; } spin_unlock_bh(&list_lock);}hjcsbah}(h]h ]h"]h$]h&]jyjzuh1jihhhMhj6hhubh)}(hSooner or later, this will crash on SMP, because a timer can have just gone off before the 
spin_lock_bh(), and it will only get the lock after we spin_unlock_bh(), and then try to free the element (which has already been freed!).h]hSooner or later, this will crash on SMP, because a timer can have just gone off before the spin_lock_bh(), and it will only get the lock after we spin_unlock_bh(), and then try to free the element (which has already been freed!).}(hjqhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj6hhubh)}(hThis can be avoided by checking the result of timer_delete(): if it returns 1, the timer has been deleted. If 0, it means (in this case) that it is currently running, so we can do::h]hThis can be avoided by checking the result of timer_delete(): if it returns 1, the timer has been deleted. If 0, it means (in this case) that it is currently running, so we can do:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj6hhubjj)}(hXretry: spin_lock_bh(&list_lock); while (list) { struct foo *next = list->next; if (!timer_delete(&list->timer)) { /* Give timer a chance to delete this */ spin_unlock_bh(&list_lock); goto retry; } kfree(list); list = next; } spin_unlock_bh(&list_lock);h]hXretry: spin_lock_bh(&list_lock); while (list) { struct foo *next = list->next; if (!timer_delete(&list->timer)) { /* Give timer a chance to delete this */ spin_unlock_bh(&list_lock); goto retry; } kfree(list); list = next; } spin_unlock_bh(&list_lock);}hjsbah}(h]h ]h"]h$]h&]jyjzuh1jihhhMhj6hhubh)}(hXAnother common problem is deleting timers which restart themselves (by calling add_timer() at the end of their timer function). Because this is a fairly common case which is prone to races, you should use timer_delete_sync() (``include/linux/timer.h``) to handle this case.h](hAnother common problem is deleting timers which restart themselves (by calling add_timer() at the end of their timer function). 
Because this is a fairly common case which is prone to races, you should use timer_delete_sync() (}(hjhhhNhNubj5)}(h``include/linux/timer.h``h]hinclude/linux/timer.h}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubh) to handle this case.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhj6hhubh)}(hBefore freeing a timer, timer_shutdown() or timer_shutdown_sync() should be called which will keep it from being rearmed. Any subsequent attempt to rearm the timer will be silently ignored by the core code.h]hBefore freeing a timer, timer_shutdown() or timer_shutdown_sync() should be called which will keep it from being rearmed. Any subsequent attempt to rearm the timer will be silently ignored by the core code.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj6hhubeh}(h]racing-timers-a-kernel-pastimeah ]h"]racing timers: a kernel pastimeah$]h&]uh1hhjhhhhhMubeh}(h]common-problemsah ]h"]common problemsah$]h&]uh1hhhhhhhhMhubh)}(hhh](h)}(h Locking Speedh]h Locking Speed}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhMubh)}(hXThere are three main things to worry about when considering speed of some code which does locking. First is concurrency: how many things are going to be waiting while someone else is holding a lock. Second is the time taken to actually acquire and release an uncontended lock. Third is using fewer, or smarter locks. I'm assuming that the lock is used fairly often: otherwise, you wouldn't be concerned about efficiency.h]hXThere are three main things to worry about when considering speed of some code which does locking. First is concurrency: how many things are going to be waiting while someone else is holding a lock. Second is the time taken to actually acquire and release an uncontended lock. Third is using fewer, or smarter locks. 
I’m assuming that the lock is used fairly often: otherwise, you wouldn’t be concerned about efficiency.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hX Concurrency depends on how long the lock is usually held: you should hold the lock for as long as needed, but no longer. In the cache example, we always create the object without the lock held, and then grab the lock only when we are ready to insert it in the list.h]hX Concurrency depends on how long the lock is usually held: you should hold the lock for as long as needed, but no longer. In the cache example, we always create the object without the lock held, and then grab the lock only when we are ready to insert it in the list.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hXoAcquisition times depend on how much damage the lock operations do to the pipeline (pipeline stalls) and how likely it is that this CPU was the last one to grab the lock (ie. is the lock cache-hot for this CPU): on a machine with more CPUs, this likelihood drops fast. Consider a 700MHz Intel Pentium III: an instruction takes about 0.7ns, an atomic increment takes about 58ns, a lock which is cache-hot on this CPU takes 160ns, and a cacheline transfer from another CPU takes an additional 170 to 360ns. (These figures from Paul McKenney's `Linux Journal RCU article `__).h](hXAcquisition times depend on how much damage the lock operations do to the pipeline (pipeline stalls) and how likely it is that this CPU was the last one to grab the lock (ie. is the lock cache-hot for this CPU): on a machine with more CPUs, this likelihood drops fast. Consider a 700MHz Intel Pentium III: an instruction takes about 0.7ns, an atomic increment takes about 58ns, a lock which is cache-hot on this CPU takes 160ns, and a cacheline transfer from another CPU takes an additional 170 to 360ns. 
(These figures from Paul McKenney’s }(hjhhhNhNubji)}(hP`Linux Journal RCU article `__h]hLinux Journal RCU article}(hjhhhNhNubah}(h]h ]h"]h$]h&]nameLinux Journal RCU articlerefuri0http://www.linuxjournal.com/article.php?sid=6993uh1jhhjubh).}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hX;These two aims conflict: holding a lock for a short time might be done by splitting locks into parts (such as in our final per-object-lock example), but this increases the number of lock acquisitions, and the results are often slower than having a single lock. This is another reason to advocate locking simplicity.h]hX;These two aims conflict: holding a lock for a short time might be done by splitting locks into parts (such as in our final per-object-lock example), but this increases the number of lock acquisitions, and the results are often slower than having a single lock. This is another reason to advocate locking simplicity.}(hj*hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(htThe third concern is addressed below: there are some methods to reduce the amount of locking which needs to be done.h]htThe third concern is addressed below: there are some methods to reduce the amount of locking which needs to be done.}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjhhubh)}(hhh](h)}(hRead/Write Lock Variantsh]hRead/Write Lock Variants}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjFhhhhhMubh)}(hXfBoth spinlocks and mutexes have read/write variants: ``rwlock_t`` and :c:type:`struct rw_semaphore `. These divide users into two classes: the readers and the writers. If you are only reading the data, you can get a read lock, but to write to the data you need the write lock. 
Many people can hold a read lock, but a writer must be sole holder.h](h5Both spinlocks and mutexes have read/write variants: }(hjWhhhNhNubj5)}(h ``rwlock_t``h]hrwlock_t}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjWubh and }(hjWhhhNhNubh)}(h,:c:type:`struct rw_semaphore `h]j5)}(hjsh]hstruct rw_semaphore}(hjuhhhNhNubah}(h]h ](j@jAc-typeeh"]h$]h&]uh1j4hjqubah}(h]h ]h"]h$]h&]refdocjM refdomainjAreftypetype refexplicitrefwarnjS rw_semaphoreuh1hhhhMhjWubh. These divide users into two classes: the readers and the writers. If you are only reading the data, you can get a read lock, but to write to the data you need the write lock. Many people can hold a read lock, but a writer must be sole holder.}(hjWhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjFhhubh)}(hXIf your code divides neatly along reader/writer lines (as our cache code does), and the lock is held by readers for significant lengths of time, using these locks can help. They are slightly slower than the normal locks though, so in practice ``rwlock_t`` is not usually worthwhile.h](hIf your code divides neatly along reader/writer lines (as our cache code does), and the lock is held by readers for significant lengths of time, using these locks can help. They are slightly slower than the normal locks though, so in practice }(hjhhhNhNubj5)}(h ``rwlock_t``h]hrwlock_t}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubh is not usually worthwhile.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM#hjFhhubeh}(h]read-write-lock-variantsah ]h"]read/write lock variantsah$]h&]uh1hhjhhhhhMubh)}(hhh](h)}(h Avoiding Locks: Read Copy Updateh]h Avoiding Locks: Read Copy Update}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhM)ubh)}(hXThere is a special method of read/write locking called Read Copy Update. Using RCU, the readers can avoid taking a lock altogether: as we expect our cache to be read more often than updated (otherwise the cache is a waste of time), it is a candidate for this optimization.h]hXThere is a special method of read/write locking called Read Copy Update. 
Using RCU, the readers can avoid taking a lock altogether: as we expect our cache to be read more often than updated (otherwise the cache is a waste of time), it is a candidate for this optimization.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM+hjhhubh)}(hXLHow do we get rid of read locks? Getting rid of read locks means that writers may be changing the list underneath the readers. That is actually quite simple: we can read a linked list while an element is being added if the writer adds the element very carefully. For example, adding ``new`` to a single linked list called ``list``::h](hXHow do we get rid of read locks? Getting rid of read locks means that writers may be changing the list underneath the readers. That is actually quite simple: we can read a linked list while an element is being added if the writer adds the element very carefully. For example, adding }(hjhhhNhNubj5)}(h``new``h]hnew}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubh to a single linked list called }(hjhhhNhNubj5)}(h``list``h]hlist}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubh:}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM0hjhhubjj)}(h0new->next = list->next; wmb(); list->next = new;h]h0new->next = list->next; wmb(); list->next = new;}hjsbah}(h]h ]h"]h$]h&]jyjzuh1jihhhM6hjhhubh)}(hXThe wmb() is a write memory barrier. It ensures that the first operation (setting the new element's ``next`` pointer) is complete and will be seen by all CPUs, before the second operation is (putting the new element into the list). This is important, since modern compilers and modern CPUs can both reorder instructions unless told otherwise: we want a reader to either not see the new element at all, or see the new element with the ``next`` pointer correctly pointing at the rest of the list.h](hfThe wmb() is a write memory barrier. 
It ensures that the first operation (setting the new element’s }(hj!hhhNhNubj5)}(h``next``h]hnext}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj!ubhXF pointer) is complete and will be seen by all CPUs, before the second operation is (putting the new element into the list). This is important, since modern compilers and modern CPUs can both reorder instructions unless told otherwise: we want a reader to either not see the new element at all, or see the new element with the }(hj!hhhNhNubj5)}(h``next``h]hnext}(hj;hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj!ubh4 pointer correctly pointing at the rest of the list.}(hj!hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM;hjhhubh)}(hFortunately, there is a function to do this for standard :c:type:`struct list_head ` lists: list_add_rcu() (``include/linux/list.h``).h](h9Fortunately, there is a function to do this for standard }(hjShhhNhNubh)}(h&:c:type:`struct list_head `h]j5)}(hj]h]hstruct list_head}(hj_hhhNhNubah}(h]h ](j@jAc-typeeh"]h$]h&]uh1j4hj[ubah}(h]h ]h"]h$]h&]refdocjM refdomainjAreftypetype refexplicitrefwarnjS list_headuh1hhhhMDhjSubh lists: list_add_rcu() (}(hjShhhNhNubj5)}(h``include/linux/list.h``h]hinclude/linux/list.h}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjSubh).}(hjShhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMDhjhhubh)}(hRemoving an element from the list is even simpler: we replace the pointer to the old element with a pointer to its successor, and readers will either see it, or skip over it.h]hRemoving an element from the list is even simpler: we replace the pointer to the old element with a pointer to its successor, and readers will either see it, or skip over it.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMHhjhhubjj)}(hlist->next = old->next;h]hlist->next = old->next;}hjsbah}(h]h ]h"]h$]h&]jyjzuh1jihhhMNhjhhubh)}(hThere is list_del_rcu() (``include/linux/list.h``) which does this (the normal version poisons the old object, which we don't want).h](hThere is list_del_rcu() (}(hjhhhNhNubj5)}(h``include/linux/list.h``h]hinclude/linux/list.h}(hjhhhNhNubah}(h]h 
The reader must also be careful: some CPUs can look through the ``next``
pointer to start reading the contents of the next element early, but
don't realize that the pre-fetched contents are wrong when the ``next``
pointer changes underneath them. Once again, there is a
list_for_each_entry_rcu() (``include/linux/list.h``) to help you. Of
course, writers can just use list_for_each_entry(), since there cannot
be two simultaneous writers.

Our final dilemma is this: when can we actually destroy the removed
element? Remember, a reader might be stepping through this element in
the list right now: if we free this element and the ``next`` pointer
changes, the reader will jump off into garbage and crash. We need to
wait until we know that all the readers who were traversing the list
when we deleted the element are finished. We use call_rcu() to register
a callback which will actually destroy the object once all pre-existing
readers are finished. Alternatively, synchronize_rcu() may be used to
block until all pre-existing readers are finished.

But how does Read Copy Update know when the readers are finished? The
method is this: firstly, the readers always traverse the list inside
rcu_read_lock()/rcu_read_unlock() pairs: these simply disable preemption
so the reader won't go to sleep while reading the list.

RCU then waits until every other CPU has slept at least once: since
readers cannot sleep, we know that any readers which were traversing the
list during the deletion are finished, and the callback is triggered.
The real Read Copy Update code is a little more optimized than this, but
this is the fundamental idea.
::

    --- cache.c.perobjectlock   2003-12-11 17:15:03.000000000 +1100
    +++ cache.c.rcupdate        2003-12-11 17:55:14.000000000 +1100
    @@ -1,15 +1,18 @@
     #include <linux/list.h>
     #include <linux/slab.h>
     #include <linux/string.h>
    +#include <linux/rcupdate.h>
     #include <linux/mutex.h>
     #include <asm/errno.h>

     struct object
     {
    -        /* These two protected by cache_lock. */
    +        /* This is protected by RCU */
             struct list_head list;
             int popularity;

    +        struct rcu_head rcu;
    +
             atomic_t refcnt;

             /* Doesn't change once created. */
    @@ -40,7 +43,7 @@
     {
             struct object *i;

    -        list_for_each_entry(i, &cache, list) {
    +        list_for_each_entry_rcu(i, &cache, list) {
                     if (i->id == id) {
                             i->popularity++;
                             return i;
    @@ -49,19 +52,25 @@
             return NULL;
     }

    +/* Final discard done once we know no readers are looking. */
    +static void cache_delete_rcu(void *arg)
    +{
    +        object_put(arg);
    +}
    +
     /* Must be holding cache_lock */
     static void __cache_delete(struct object *obj)
     {
             BUG_ON(!obj);
    -        list_del(&obj->list);
    -        object_put(obj);
    +        list_del_rcu(&obj->list);
             cache_num--;
    +        call_rcu(&obj->rcu, cache_delete_rcu);
     }

     /* Must be holding cache_lock */
     static void __cache_add(struct object *obj)
     {
    -        list_add(&obj->list, &cache);
    +        list_add_rcu(&obj->list, &cache);
             if (++cache_num > MAX_CACHE_SIZE) {
                     struct object *i, *outcast = NULL;
                     list_for_each_entry(i, &cache, list) {
    @@ -104,12 +114,11 @@
     struct object *cache_find(int id)
     {
             struct object *obj;
    -        unsigned long flags;

    -        spin_lock_irqsave(&cache_lock, flags);
    +        rcu_read_lock();
             obj = __cache_find(id);
             if (obj)
                     object_get(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        rcu_read_unlock();

             return obj;
     }

Note that the reader will alter the popularity member in
__cache_find(), and now it doesn't hold a lock. One solution would be to
make it an ``atomic_t``, but for this usage, we don't really care about
races: an approximate result is good enough, so I didn't change it.

The result is that cache_find() requires no synchronization with any
other functions, so is almost as fast on SMP as it would be on UP.

There is a further optimization possible here: remember our original
cache code, where there were no reference counts and the caller simply
held the lock whenever using the object? This is still possible: if you
hold the lock, no one can delete the object, so you don't need to get
and put the reference count.
Now, because the 'read lock' in RCU is simply disabling preemption, a
caller which always has preemption disabled between calling cache_find()
and object_put() does not need to actually get and put the reference
count: we could expose __cache_find() by making it non-static, and such
callers could simply call that.

The benefit here is that the reference count is not written to: the
object is not altered in any way, which is much faster on SMP machines
due to caching.

Per-CPU Data
~~~~~~~~~~~~

Another technique for avoiding locking which is used fairly widely is to
duplicate information for each CPU. For example, if you wanted to keep a
count of a common condition, you could use a spin lock and a single
counter. Nice and simple.

If that was too slow (it's usually not, but if you've got a really big
machine to test on and can show that it is), you could instead use a
counter for each CPU, then none of them need an exclusive lock. See
DEFINE_PER_CPU(), get_cpu_var() and put_cpu_var()
(``include/linux/percpu.h``).

Of particular use for simple per-cpu counters is the ``local_t`` type,
and the cpu_local_inc() and related functions, which are more efficient
than simple code on some architectures (``include/asm/local.h``).

Note that there is no simple, reliable way of getting an exact value of
such a counter, without introducing more locks. This is not a problem
for some uses.
Data Which Is Mostly Used By An IRQ Handler
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If data is always accessed from within the same IRQ handler, you don't
need a lock at all: the kernel already guarantees that the irq handler
will not run simultaneously on multiple CPUs.

Manfred Spraul points out that you can still do this, even if the data
is very occasionally accessed in user context or softirqs/tasklets. The
irq handler doesn't use a lock, and all other accesses are done like
so::

    mutex_lock(&lock);
    disable_irq(irq);
    ...
    enable_irq(irq);
    mutex_unlock(&lock);

The disable_irq() prevents the irq handler from running (and waits for
it to finish if it's currently running on other CPUs). The mutex
prevents any other accesses happening at the same time. Naturally, this
is slower than just a spin_lock_irq() call, so it only makes sense if
this type of access happens extremely rarely.

What Functions Are Safe To Call From Interrupts?
------------------------------------------------

Many functions in the kernel sleep (i.e. call schedule()) directly or
indirectly: you can never call them while holding a spinlock, or with
preemption disabled. This also means you need to be in user context:
calling them from an interrupt is illegal.

Some Functions Which Sleep
~~~~~~~~~~~~~~~~~~~~~~~~~~

The most common ones are listed below, but you usually have to read the
code to find out if other calls are safe. If everyone else who calls it
can sleep, you probably need to be able to sleep, too. In particular,
registration and deregistration functions usually expect to be called
from user context, and can sleep.
- Accesses to userspace:

  - copy_from_user()
  - copy_to_user()
  - get_user()
  - put_user()

- kmalloc(GFP_KERNEL)

- mutex_lock_interruptible() and mutex_lock()

  There is a mutex_trylock() which does not sleep. Still, it must not
  be used inside interrupt context since its implementation is not safe
  for that. mutex_unlock() will also never sleep. It cannot be used in
  interrupt context either since a mutex must be released by the same
  task that acquired it.

Some Functions Which Don't Sleep
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some functions are safe to call from any context, or holding almost any
lock.

- printk()
- kfree()
- add_timer() and timer_delete()
Mutex API reference
-------------------

``mutex_init(mutex)``
    initialize the mutex

    ``mutex``: the mutex to be initialized

    Initialize the mutex to unlocked state. It is not allowed to
    initialize an already locked mutex.

``mutex_init_with_key(mutex, key)``
    initialize a mutex with a given lockdep key

    ``mutex``: the mutex to be initialized

    ``key``: the lockdep key to be associated with the mutex

    Initialize the mutex to the unlocked state. It is not allowed to
    initialize an already locked mutex.

``bool mutex_is_locked(struct mutex *lock)``
    is the mutex locked

    ``lock``: the mutex to be queried

    Returns true if the mutex is locked, false if unlocked.

``void mutex_lock(struct mutex *lock)``
    acquire the mutex

    ``lock``: the mutex to be acquired

    Lock the mutex exclusively for this task. If the mutex is not
    available right now, it will sleep until it can get it.

    The mutex must later on be released by the same task that acquired
    it. Recursive locking is not allowed. The task may not exit without
    first unlocking the mutex. Also, kernel memory where the mutex
    resides must not be freed with the mutex still locked. The mutex
    must first be initialized (or statically defined) before it can be
    locked. memset()-ing the mutex to 0 is not allowed.

    (The CONFIG_DEBUG_MUTEXES .config option turns on debugging checks
    that will enforce the restrictions and will also do deadlock
    debugging.)

    This function is similar to (but not equivalent to) down().

``void mutex_unlock(struct mutex *lock)``
    release the mutex

    ``lock``: the mutex to be released

    Unlock a mutex that has been locked by this task previously.

    This function must not be used in interrupt context. Unlocking of a
    not locked mutex is not allowed.

    The caller must ensure that the mutex stays alive until this
    function has returned - mutex_unlock() can NOT directly be used to
    release an object such that another concurrent task can free it.
    Mutexes are different from spinlocks & refcounts in this aspect.

    This function is similar to (but not equivalent to) up().

``void ww_mutex_unlock(struct ww_mutex *lock)``
classnameNjp$js$)}jv$]jy$)}jl$j)sbc.ww_mutex_unlockasbuh1hhj)ubj$)}(h h]h }(hj_)hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj)ubj$)}(hj$h]h*}(hjm)hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj)ubjf!)}(hlockh]hlock}(hjz)hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj)ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj)ubah}(h]h ]h"]h$]h&]jyjzuh1j($hj(hhhj(hM*ubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj(hhhj(hM*ubah}(h]j(ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hj(hM*hj(hhubj!)}(hhh]h)}(hrelease the w/w mutexh]hrelease the w/w mutex}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM hj)hhubah}(h]h ]h"]h$]h&]uh1j!hj(hhhj(hM*ubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j)j!j)j!j!j!uh1jN!hhhj,!hNhNubj!)}(hX**Parameters** ``struct ww_mutex *lock`` the mutex to be released **Description** Unlock a mutex that has been locked by this task previously with any of the ww_mutex_lock* functions (with or without an acquire context). It is forbidden to release the locks after releasing the acquire context. This function must not be used in interrupt context. 
Unlocking of a unlocked mutex is not allowed.h](h)}(h**Parameters**h]j)}(hj)h]h Parameters}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj)ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM$hj)ubj!)}(hhh]j")}(h3``struct ww_mutex *lock`` the mutex to be released h](j")}(h``struct ww_mutex *lock``h]j5)}(hj)h]hstruct ww_mutex *lock}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj)ubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM!hj)ubj!")}(hhh]h)}(hthe mutex to be releasedh]hthe mutex to be released}(hj)hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj)hM!hj)ubah}(h]h ]h"]h$]h&]uh1j "hj)ubeh}(h]h ]h"]h$]h&]uh1j"hj)hM!hj)ubah}(h]h ]h"]h$]h&]uh1j!hj)ubh)}(h**Description**h]j)}(hj *h]h Description}(hj"*hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj*ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM#hj)ubh)}(hUnlock a mutex that has been locked by this task previously with any of the ww_mutex_lock* functions (with or without an acquire context). It is forbidden to release the locks after releasing the acquire context.h]hUnlock a mutex that has been locked by this task previously with any of the ww_mutex_lock* functions (with or without an acquire context). It is forbidden to release the locks after releasing the acquire context.}(hj6*hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM#hj)ubh)}(hbThis function must not be used in interrupt context. Unlocking of a unlocked mutex is not allowed.h]hbThis function must not be used in interrupt context. 
Unlocking of a unlocked mutex is not allowed.}(hjE*hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM'hj)ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj,!hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!ww_mutex_trylock (C function)c.ww_mutex_trylockhNtauh1j=!hj,!hhhNhNubjO!)}(hhh](jT!)}(hIint ww_mutex_trylock (struct ww_mutex *ww, struct ww_acquire_ctx *ww_ctx)h]jZ!)}(hHint ww_mutex_trylock(struct ww_mutex *ww, struct ww_acquire_ctx *ww_ctx)h](j#)}(hinth]hint}(hjt*hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjp*hhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMubj$)}(h h]h }(hj*hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjp*hhhj*hMubj`!)}(hww_mutex_trylockh]jf!)}(hww_mutex_trylockh]hww_mutex_trylock}(hj*hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj*ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjp*hhhj*hMubj)$)}(h4(struct ww_mutex *ww, struct ww_acquire_ctx *ww_ctx)h](j/$)}(hstruct ww_mutex *wwh](j5$)}(hj8$h]hstruct}(hj*hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj*ubj$)}(h h]h }(hj*hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj*ubh)}(hhh]jf!)}(hww_mutexh]hww_mutex}(hj*hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj*ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj*modnameN classnameNjp$js$)}jv$]jy$)}jl$j*sbc.ww_mutex_trylockasbuh1hhj*ubj$)}(h h]h }(hj*hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj*ubj$)}(hj$h]h*}(hj*hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj*ubjf!)}(hwwh]hww}(hj +hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj*ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj*ubj/$)}(hstruct ww_acquire_ctx *ww_ctxh](j5$)}(hj8$h]hstruct}(hj#+hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj+ubj$)}(h h]h }(hj0+hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj+ubh)}(hhh]jf!)}(hww_acquire_ctxh]hww_acquire_ctx}(hjA+hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj>+ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjC+modnameN classnameNjp$js$)}jv$]j*c.ww_mutex_trylockasbuh1hhj+ubj$)}(h h]h }(hj_+hhhNhNubah}(h]h ]j 
$ah"]h$]h&]uh1j#hj+ubj$)}(hj$h]h*}(hjm+hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj+ubjf!)}(hww_ctxh]hww_ctx}(hjz+hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj+ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj*ubeh}(h]h ]h"]h$]h&]jyjzuh1j($hjp*hhhj*hMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjl*hhhj*hMubah}(h]jg*ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hj*hMhji*hhubj!)}(hhh]h)}(h!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!%mutex_lock_interruptible (C function)c.mutex_lock_interruptiblehNtauh1j=!hj,!hhhNhNubjO!)}(hhh](jT!)}(h1int mutex_lock_interruptible (struct mutex *lock)h]jZ!)}(h0int mutex_lock_interruptible(struct mutex *lock)h](j#)}(hinth]hint}(hj,hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj,hhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMubj$)}(h h]h }(hj,hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj,hhhj,hMubj`!)}(hmutex_lock_interruptibleh]jf!)}(hmutex_lock_interruptibleh]hmutex_lock_interruptible}(hj,hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj,ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj,hhhj,hMubj)$)}(h(struct mutex *lock)h]j/$)}(hstruct mutex *lockh](j5$)}(hj8$h]hstruct}(hj -hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj-ubj$)}(h h]h }(hj-hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj-ubh)}(hhh]jf!)}(hmutexh]hmutex}(hj)-hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj&-ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj+-modnameN classnameNjp$js$)}jv$]jy$)}jl$j,sbc.mutex_lock_interruptibleasbuh1hhj-ubj$)}(h h]h }(hjI-hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj-ubj$)}(hj$h]h*}(hjW-hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj-ubjf!)}(hlockh]hlock}(hjd-hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj-ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj-ubah}(h]h ]h"]h$]h&]jyjzuh1j($hj,hhhj,hMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj,hhhj,hMubah}(h]j,ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hj,hMhj,hhubj!)}(hhh]h)}(h,Acquire the mutex, interruptible by signals.h]h,Acquire the mutex, interruptible by signals.}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: 
./kernel/locking/mutex.chMhj-hhubah}(h]h ]h"]h$]h&]uh1j!hj,hhhj,hMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j-j!j-j!j!j!uh1jN!hhhj,!hNhNubj!)}(hX]**Parameters** ``struct mutex *lock`` The mutex to be acquired. **Description** Lock the mutex like mutex_lock(). If a signal is delivered while the process is sleeping, this function will return without acquiring the mutex. **Context** Process context. **Return** 0 if the lock was successfully acquired or ``-EINTR`` if a signal arrived.h](h)}(h**Parameters**h]j)}(hj-h]h Parameters}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj-ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj-ubj!)}(hhh]j")}(h1``struct mutex *lock`` The mutex to be acquired. h](j")}(h``struct mutex *lock``h]j5)}(hj-h]hstruct mutex *lock}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj-ubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj-ubj!")}(hhh]h)}(hThe mutex to be acquired.h]hThe mutex to be acquired.}(hj-hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj-hMhj-ubah}(h]h ]h"]h$]h&]uh1j "hj-ubeh}(h]h ]h"]h$]h&]uh1j"hj-hMhj-ubah}(h]h ]h"]h$]h&]uh1j!hj-ubh)}(h**Description**h]j)}(hj .h]h Description}(hj .hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj.ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj-ubh)}(hLock the mutex like mutex_lock(). If a signal is delivered while the process is sleeping, this function will return without acquiring the mutex.h]hLock the mutex like mutex_lock(). 
If a signal is delivered while the process is sleeping, this function will return without acquiring the mutex.}(hj .hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj-ubh)}(h **Context**h]j)}(hj1.h]hContext}(hj3.hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/.ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj-ubh)}(hProcess context.h]hProcess context.}(hjG.hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj-ubh)}(h **Return**h]j)}(hjX.h]hReturn}(hjZ.hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjV.ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj-ubh)}(hJ0 if the lock was successfully acquired or ``-EINTR`` if a signal arrived.h](h+0 if the lock was successfully acquired or }(hjn.hhhNhNubj5)}(h ``-EINTR``h]h-EINTR}(hjv.hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjn.ubh if a signal arrived.}(hjn.hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj-ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj,!hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ! 
mutex_lock_killable (C function)c.mutex_lock_killablehNtauh1j=!hj,!hhhNhNubjO!)}(hhh](jT!)}(h,int mutex_lock_killable (struct mutex *lock)h]jZ!)}(h+int mutex_lock_killable(struct mutex *lock)h](j#)}(hinth]hint}(hj.hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj.hhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMubj$)}(h h]h }(hj.hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj.hhhj.hMubj`!)}(hmutex_lock_killableh]jf!)}(hmutex_lock_killableh]hmutex_lock_killable}(hj.hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj.ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj.hhhj.hMubj)$)}(h(struct mutex *lock)h]j/$)}(hstruct mutex *lockh](j5$)}(hj8$h]hstruct}(hj.hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj.ubj$)}(h h]h }(hj.hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj.ubh)}(hhh]jf!)}(hmutexh]hmutex}(hj /hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj/ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj /modnameN classnameNjp$js$)}jv$]jy$)}jl$j.sbc.mutex_lock_killableasbuh1hhj.ubj$)}(h h]h }(hj*/hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj.ubj$)}(hj$h]h*}(hj8/hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj.ubjf!)}(hlockh]hlock}(hjE/hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj.ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj.ubah}(h]h ]h"]h$]h&]jyjzuh1j($hj.hhhj.hMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj.hhhj.hMubah}(h]j.ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hj.hMhj.hhubj!)}(hhh]h)}(h2Acquire the mutex, interruptible by fatal signals.h]h2Acquire the mutex, interruptible by fatal signals.}(hjo/hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhjl/hhubah}(h]h ]h"]h$]h&]uh1j!hj.hhhj.hMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j/j!j/j!j!j!uh1jN!hhhj,!hNhNubj!)}(hX**Parameters** ``struct mutex *lock`` The mutex to be acquired. **Description** Lock the mutex like mutex_lock(). If a signal which will be fatal to the current process is delivered while the process is sleeping, this function will return without acquiring the mutex. **Context** Process context. 
**Return** 0 if the lock was successfully acquired or ``-EINTR`` if a fatal signal arrived.h](h)}(h**Parameters**h]j)}(hj/h]h Parameters}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj/ubj!)}(hhh]j")}(h1``struct mutex *lock`` The mutex to be acquired. h](j")}(h``struct mutex *lock``h]j5)}(hj/h]hstruct mutex *lock}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj/ubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj/ubj!")}(hhh]h)}(hThe mutex to be acquired.h]hThe mutex to be acquired.}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj/hMhj/ubah}(h]h ]h"]h$]h&]uh1j "hj/ubeh}(h]h ]h"]h$]h&]uh1j"hj/hMhj/ubah}(h]h ]h"]h$]h&]uh1j!hj/ubh)}(h**Description**h]j)}(hj/h]h Description}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj/ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj/ubh)}(hLock the mutex like mutex_lock(). If a signal which will be fatal to the current process is delivered while the process is sleeping, this function will return without acquiring the mutex.h]hLock the mutex like mutex_lock(). 
If a signal which will be fatal to the current process is delivered while the process is sleeping, this function will return without acquiring the mutex.}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj/ubh)}(h **Context**h]j)}(hj0h]hContext}(hj0hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj0ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj/ubh)}(hProcess context.h]hProcess context.}(hj(0hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj/ubh)}(h **Return**h]j)}(hj90h]hReturn}(hj;0hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj70ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj/ubh)}(hP0 if the lock was successfully acquired or ``-EINTR`` if a fatal signal arrived.h](h+0 if the lock was successfully acquired or }(hjO0hhhNhNubj5)}(h ``-EINTR``h]h-EINTR}(hjW0hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjO0ubh if a fatal signal arrived.}(hjO0hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj/ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj,!hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!mutex_lock_io (C function)c.mutex_lock_iohNtauh1j=!hj,!hhhNhNubjO!)}(hhh](jT!)}(h'void mutex_lock_io (struct mutex *lock)h]jZ!)}(h&void mutex_lock_io(struct mutex *lock)h](j#)}(hvoidh]hvoid}(hj0hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj0hhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMubj$)}(h h]h }(hj0hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj0hhhj0hMubj`!)}(h mutex_lock_ioh]jf!)}(h mutex_lock_ioh]h mutex_lock_io}(hj0hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj0ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj0hhhj0hMubj)$)}(h(struct mutex *lock)h]j/$)}(hstruct mutex *lockh](j5$)}(hj8$h]hstruct}(hj0hhhNhNubah}(h]h 
]jA$ah"]h$]h&]uh1j4$hj0ubj$)}(h h]h }(hj0hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj0ubh)}(hhh]jf!)}(hmutexh]hmutex}(hj0hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj0ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj0modnameN classnameNjp$js$)}jv$]jy$)}jl$j0sbc.mutex_lock_ioasbuh1hhj0ubj$)}(h h]h }(hj 1hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj0ubj$)}(hj$h]h*}(hj1hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj0ubjf!)}(hlockh]hlock}(hj&1hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj0ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj0ubah}(h]h ]h"]h$]h&]jyjzuh1j($hj0hhhj0hMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj0hhhj0hMubah}(h]j0ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hj0hMhj0hhubj!)}(hhh]h)}(h9Acquire the mutex and mark the process as waiting for I/Oh]h9Acquire the mutex and mark the process as waiting for I/O}(hjP1hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhjM1hhubah}(h]h ]h"]h$]h&]uh1j!hj0hhhj0hMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jh1j!jh1j!j!j!uh1jN!hhhj,!hNhNubj!)}(hX**Parameters** ``struct mutex *lock`` The mutex to be acquired. **Description** Lock the mutex like mutex_lock(). While the task is waiting for this mutex, it will be accounted as being in the IO wait state by the scheduler. **Context** Process context.h](h)}(h**Parameters**h]j)}(hjr1h]h Parameters}(hjt1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjp1ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhjl1ubj!)}(hhh]j")}(h1``struct mutex *lock`` The mutex to be acquired. 
h](j")}(h``struct mutex *lock``h]j5)}(hj1h]hstruct mutex *lock}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj1ubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhj1ubj!")}(hhh]h)}(hThe mutex to be acquired.h]hThe mutex to be acquired.}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj1hMhj1ubah}(h]h ]h"]h$]h&]uh1j "hj1ubeh}(h]h ]h"]h$]h&]uh1j"hj1hMhj1ubah}(h]h ]h"]h$]h&]uh1j!hjl1ubh)}(h**Description**h]j)}(hj1h]h Description}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhjl1ubh)}(hLock the mutex like mutex_lock(). While the task is waiting for this mutex, it will be accounted as being in the IO wait state by the scheduler.h]hLock the mutex like mutex_lock(). While the task is waiting for this mutex, it will be accounted as being in the IO wait state by the scheduler.}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhjl1ubh)}(h **Context**h]j)}(hj1h]hContext}(hj1hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj1ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhjl1ubh)}(hProcess context.h]hProcess context.}(hj 2hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMhjl1ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj,!hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!mutex_trylock (C function)c.mutex_trylockhNtauh1j=!hj,!hhhNhNubjO!)}(hhh](jT!)}(h&int mutex_trylock (struct mutex *lock)h]jZ!)}(h%int mutex_trylock(struct mutex *lock)h](j#)}(hinth]hint}(hj82hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj42hhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM7ubj$)}(h h]h }(hjG2hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj42hhhjF2hM7ubj`!)}(h mutex_trylockh]jf!)}(h 
mutex_trylockh]h mutex_trylock}(hjY2hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjU2ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj42hhhjF2hM7ubj)$)}(h(struct mutex *lock)h]j/$)}(hstruct mutex *lockh](j5$)}(hj8$h]hstruct}(hju2hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjq2ubj$)}(h h]h }(hj2hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjq2ubh)}(hhh]jf!)}(hmutexh]hmutex}(hj2hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj2ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj2modnameN classnameNjp$js$)}jv$]jy$)}jl$j[2sbc.mutex_trylockasbuh1hhjq2ubj$)}(h h]h }(hj2hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjq2ubj$)}(hj$h]h*}(hj2hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjq2ubjf!)}(hlockh]hlock}(hj2hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjq2ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjm2ubah}(h]h ]h"]h$]h&]jyjzuh1j($hj42hhhjF2hM7ubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj02hhhjF2hM7ubah}(h]j+2ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjF2hM7hj-2hhubj!)}(hhh]h)}(h)try to acquire the mutex, without waitingh]h)try to acquire the mutex, without waiting}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM*hj2hhubah}(h]h ]h"]h$]h&]uh1j!hj-2hhhjF2hM7ubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j3j!j3j!j!j!uh1jN!hhhj,!hNhNubj!)}(hX**Parameters** ``struct mutex *lock`` the mutex to be acquired **Description** Try to acquire the mutex atomically. Returns 1 if the mutex has been acquired successfully, and 0 on contention. This function must not be used in interrupt context. The mutex must be released by the same task that acquired it. **NOTE** this function follows the spin_trylock() convention, so it is negated from the down_trylock() return values! 
Be careful about this when converting semaphore users to mutexes.h](h)}(h**Parameters**h]j)}(hj3h]h Parameters}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM.hj3ubj!)}(hhh]j")}(h0``struct mutex *lock`` the mutex to be acquired h](j")}(h``struct mutex *lock``h]j5)}(hj93h]hstruct mutex *lock}(hj;3hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj73ubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM+hj33ubj!")}(hhh]h)}(hthe mutex to be acquiredh]hthe mutex to be acquired}(hjR3hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjN3hM+hjO3ubah}(h]h ]h"]h$]h&]uh1j "hj33ubeh}(h]h ]h"]h$]h&]uh1j"hjN3hM+hj03ubah}(h]h ]h"]h$]h&]uh1j!hj3ubh)}(h**Description**h]j)}(hjt3h]h Description}(hjv3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjr3ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM-hj3ubh)}(hpTry to acquire the mutex atomically. Returns 1 if the mutex has been acquired successfully, and 0 on contention.h]hpTry to acquire the mutex atomically. Returns 1 if the mutex has been acquired successfully, and 0 on contention.}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM-hj3ubh)}(hrThis function must not be used in interrupt context. The mutex must be released by the same task that acquired it.h]hrThis function must not be used in interrupt context. 
The mutex must be released by the same task that acquired it.}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM0hj3ubh)}(h**NOTE**h]j)}(hj3h]hNOTE}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM3hj3ubh)}(hthis function follows the spin_trylock() convention, so it is negated from the down_trylock() return values! Be careful about this when converting semaphore users to mutexes.h]hthis function follows the spin_trylock() convention, so it is negated from the down_trylock() return values! Be careful about this when converting semaphore users to mutexes.}(hj3hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chM0hj3ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj,!hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!&atomic_dec_and_mutex_lock (C function)c.atomic_dec_and_mutex_lockhNtauh1j=!hj,!hhhNhNubjO!)}(hhh](jT!)}(hAint atomic_dec_and_mutex_lock (atomic_t *cnt, struct mutex *lock)h]jZ!)}(h@int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock)h](j#)}(hinth]hint}(hj3hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj3hhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMqubj$)}(h h]h }(hj3hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj3hhhj3hMqubj`!)}(hatomic_dec_and_mutex_lockh]jf!)}(hatomic_dec_and_mutex_lockh]hatomic_dec_and_mutex_lock}(hj4hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj 4ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj3hhhj3hMqubj)$)}(h#(atomic_t *cnt, struct mutex *lock)h](j/$)}(h atomic_t *cnth](h)}(hhh]jf!)}(hatomic_th]hatomic_t}(hj/4hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj,4ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj14modnameN classnameNjp$js$)}jv$]jy$)}jl$j4sbc.atomic_dec_and_mutex_lockasbuh1hhj(4ubj$)}(h h]h }(hjO4hhhNhNubah}(h]h ]j 
$ah"]h$]h&]uh1j#hj(4ubj$)}(hj$h]h*}(hj]4hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj(4ubjf!)}(hcnth]hcnt}(hjj4hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj(4ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj$4ubj/$)}(hstruct mutex *lockh](j5$)}(hj8$h]hstruct}(hj4hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj4ubj$)}(h h]h }(hj4hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj4ubh)}(hhh]jf!)}(hmutexh]hmutex}(hj4hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj4ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj4modnameN classnameNjp$js$)}jv$]jK4c.atomic_dec_and_mutex_lockasbuh1hhj4ubj$)}(h h]h }(hj4hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj4ubj$)}(hj$h]h*}(hj4hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj4ubjf!)}(hlockh]hlock}(hj4hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj4ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj$4ubeh}(h]h ]h"]h$]h&]jyjzuh1j($hj3hhhj3hMqubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj3hhhj3hMqubah}(h]j3ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hj3hMqhj3hhubj!)}(hhh]h)}(h#return holding mutex if we dec to 0h]h#return holding mutex if we dec to 0}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMkhj5hhubah}(h]h ]h"]h$]h&]uh1j!hj3hhhj3hMqubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j5j!j5j!j!j!uh1jN!hhhj,!hNhNubj!)}(h**Parameters** ``atomic_t *cnt`` the atomic which we are to dec ``struct mutex *lock`` the mutex to return holding if we dec to 0 **Description** return true and hold lock if we dec to 0, return false otherwiseh](h)}(h**Parameters**h]j)}(hj&5h]h Parameters}(hj(5hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj$5ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMohj 5ubj!)}(hhh](j")}(h1``atomic_t *cnt`` the atomic which we are to dec h](j")}(h``atomic_t *cnt``h]j5)}(hjE5h]h atomic_t *cnt}(hjG5hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjC5ubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMlhj?5ubj!")}(hhh]h)}(hthe atomic which we are to dech]hthe 
atomic which we are to dec}(hj^5hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjZ5hMlhj[5ubah}(h]h ]h"]h$]h&]uh1j "hj?5ubeh}(h]h ]h"]h$]h&]uh1j"hjZ5hMlhj<5ubj")}(hB``struct mutex *lock`` the mutex to return holding if we dec to 0 h](j")}(h``struct mutex *lock``h]j5)}(hj~5h]hstruct mutex *lock}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj|5ubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMmhjx5ubj!")}(hhh]h)}(h*the mutex to return holding if we dec to 0h]h*the mutex to return holding if we dec to 0}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj5hMmhj5ubah}(h]h ]h"]h$]h&]uh1j "hjx5ubeh}(h]h ]h"]h$]h&]uh1j"hj5hMmhj<5ubeh}(h]h ]h"]h$]h&]uh1j!hj 5ubh)}(h**Description**h]j)}(hj5h]h Description}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj5ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMohj 5ubh)}(h@return true and hold lock if we dec to 0, return false otherwiseh]h@return true and hold lock if we dec to 0, return false otherwise}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1349: ./kernel/locking/mutex.chMohj 5ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj,!hhhNhNubeh}(h]mutex-api-referenceah ]h"]mutex api referenceah$]h&]uh1hhhhhhhhM@ubh)}(hhh](h)}(hFutex API referenceh]hFutex API reference}(hj5hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj5hhhhhMIubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!futex_hash (C function) c.futex_hashhNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(h!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!futex_setup_timer (C function)c.futex_setup_timerhNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(htstruct hrtimer_sleeper * futex_setup_timer (ktime_t *time, struct hrtimer_sleeper *timeout, int flags, u64 range_ns)h]jZ!)}(hrstruct hrtimer_sleeper *futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout, int flags, u64 range_ns)h](j5$)}(hj8$h]hstruct}(hj7hhhNhNubah}(h]h 
]jA$ah"]h$]h&]uh1j4$hj7hhh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKubj$)}(h h]h }(hj7hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj7hhhj7hKubh)}(hhh]jf!)}(hhrtimer_sleeperh]hhrtimer_sleeper}(hj7hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj7ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj7modnameN classnameNjp$js$)}jv$]jy$)}jl$futex_setup_timersbc.futex_setup_timerasbuh1hhj7hhhj7hKubj$)}(h h]h }(hj8hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj7hhhj7hKubj$)}(hj$h]h*}(hj"8hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj7hhhj7hKubj`!)}(hfutex_setup_timerh]jf!)}(hj8h]hfutex_setup_timer}(hj38hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj/8ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj7hhhj7hKubj)$)}(hI(ktime_t *time, struct hrtimer_sleeper *timeout, int flags, u64 range_ns)h](j/$)}(h ktime_t *timeh](h)}(hhh]jf!)}(hktime_th]hktime_t}(hjQ8hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjN8ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjS8modnameN classnameNjp$js$)}jv$]j8c.futex_setup_timerasbuh1hhjJ8ubj$)}(h h]h }(hjo8hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjJ8ubj$)}(hj$h]h*}(hj}8hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjJ8ubjf!)}(htimeh]htime}(hj8hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjJ8ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjF8ubj/$)}(hstruct hrtimer_sleeper *timeouth](j5$)}(hj8$h]hstruct}(hj8hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj8ubj$)}(h h]h }(hj8hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj8ubh)}(hhh]jf!)}(hhrtimer_sleeperh]hhrtimer_sleeper}(hj8hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj8ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj8modnameN classnameNjp$js$)}jv$]j8c.futex_setup_timerasbuh1hhj8ubj$)}(h h]h }(hj8hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj8ubj$)}(hj$h]h*}(hj8hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj8ubjf!)}(htimeouth]htimeout}(hj8hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj8ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjF8ubj/$)}(h int flagsh](j#)}(hinth]hint}(hj9hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj9ubj$)}(h h]h }(hj!9hhhNhNubah}(h]h ]j 
$ah"]h$]h&]uh1j#hj9ubjf!)}(hflagsh]hflags}(hj/9hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj9ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjF8ubj/$)}(h u64 range_nsh](h)}(hhh]jf!)}(hu64h]hu64}(hjK9hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjH9ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjM9modnameN classnameNjp$js$)}jv$]j8c.futex_setup_timerasbuh1hhjD9ubj$)}(h h]h }(hji9hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjD9ubjf!)}(hrange_nsh]hrange_ns}(hjw9hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjD9ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjF8ubeh}(h]h ]h"]h$]h&]jyjzuh1j($hj7hhhj7hKubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj7hhhj7hKubah}(h]j7ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hj7hKhj7hhubj!)}(hhh]h)}(hset up the sleeping hrtimer.h]hset up the sleeping hrtimer.}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj9hhubah}(h]h ]h"]h$]h&]uh1j!hj7hhhj7hKubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j9j!j9j!j!j!uh1jN!hhhj5hNhNubj!)}(hX7**Parameters** ``ktime_t *time`` ptr to the given timeout value ``struct hrtimer_sleeper *timeout`` the hrtimer_sleeper structure to be set up ``int flags`` futex flags ``u64 range_ns`` optional range in ns **Return** Initialized hrtimer_sleeper structure or NULL if no timeout value givenh](h)}(h**Parameters**h]j)}(hj9h]h Parameters}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj9ubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj9ubj!)}(hhh](j")}(h1``ktime_t *time`` ptr to the given timeout value h](j")}(h``ktime_t *time``h]j5)}(hj9h]h ktime_t *time}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj9ubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj9ubj!")}(hhh]h)}(hptr to the given timeout valueh]hptr to the given timeout value}(hj9hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj9hKhj9ubah}(h]h ]h"]h$]h&]uh1j "hj9ubeh}(h]h ]h"]h$]h&]uh1j"hj9hKhj9ubj")}(hO``struct hrtimer_sleeper *timeout`` the hrtimer_sleeper 
structure to be set up h](j")}(h#``struct hrtimer_sleeper *timeout``h]j5)}(hj:h]hstruct hrtimer_sleeper *timeout}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj:ubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj:ubj!")}(hhh]h)}(h*the hrtimer_sleeper structure to be set uph]h*the hrtimer_sleeper structure to be set up}(hj4:hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj0:hKhj1:ubah}(h]h ]h"]h$]h&]uh1j "hj:ubeh}(h]h ]h"]h$]h&]uh1j"hj0:hKhj9ubj")}(h``int flags`` futex flags h](j")}(h ``int flags``h]j5)}(hjT:h]h int flags}(hjV:hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjR:ubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhjN:ubj!")}(hhh]h)}(h futex flagsh]h futex flags}(hjm:hhhNhNubah}(h]h ]h"]h$]h&]uh1hhji:hKhjj:ubah}(h]h ]h"]h$]h&]uh1j "hjN:ubeh}(h]h ]h"]h$]h&]uh1j"hji:hKhj9ubj")}(h&``u64 range_ns`` optional range in ns h](j")}(h``u64 range_ns``h]j5)}(hj:h]h u64 range_ns}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj:ubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj:ubj!")}(hhh]h)}(hoptional range in nsh]hoptional range in ns}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj:hKhj:ubah}(h]h ]h"]h$]h&]uh1j "hj:ubeh}(h]h ]h"]h$]h&]uh1j"hj:hKhj9ubeh}(h]h ]h"]h$]h&]uh1j!hj9ubh)}(h **Return**h]j)}(hj:h]hReturn}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:ubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj9ubj!)}(hhh]j")}(hGInitialized hrtimer_sleeper structure or NULL if no timeout value givenh](j")}(h;Initialized hrtimer_sleeper structure or NULL if no timeouth]h;Initialized hrtimer_sleeper structure or NULL if no timeout}(hj:hhhNhNubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj:ubj!")}(hhh]h)}(h value givenh]h value given}(hj:hhhNhNubah}(h]h 
]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj:ubah}(h]h ]h"]h$]h&]uh1j "hj:ubeh}(h]h ]h"]h$]h&]uh1j"hj:hKhj:ubah}(h]h ]h"]h$]h&]uh1j!hj9ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!get_futex_key (C function)c.get_futex_keyhNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(heint get_futex_key (u32 __user *uaddr, unsigned int flags, union futex_key *key, enum futex_access rw)h]jZ!)}(hdint get_futex_key(u32 __user *uaddr, unsigned int flags, union futex_key *key, enum futex_access rw)h](j#)}(hinth]hint}(hj8;hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj4;hhh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKubj$)}(h h]h }(hjG;hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj4;hhhjF;hKubj`!)}(h get_futex_keyh]jf!)}(h get_futex_keyh]h get_futex_key}(hjY;hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjU;ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj4;hhhjF;hKubj)$)}(hS(u32 __user *uaddr, unsigned int flags, union futex_key *key, enum futex_access rw)h](j/$)}(hu32 __user *uaddrh](h)}(hhh]jf!)}(hu32h]hu32}(hjx;hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hju;ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjz;modnameN classnameNjp$js$)}jv$]jy$)}jl$j[;sbc.get_futex_keyasbuh1hhjq;ubj$)}(h h]h }(hj;hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjq;ubh__user}(hjq;hhhNhNubj$)}(h h]h }(hj;hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjq;ubj$)}(hj$h]h*}(hj;hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjq;ubjf!)}(huaddrh]huaddr}(hj;hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjq;ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjm;ubj/$)}(hunsigned int flagsh](j#)}(hunsignedh]hunsigned}(hj;hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj;ubj$)}(h h]h }(hj;hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj;ubj#)}(hinth]hint}(hj;hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj;ubj$)}(h h]h }(hj<hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj;ubjf!)}(hflagsh]hflags}(hj<hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj;ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjm;ubj/$)}(hunion futex_key 
*keyh](j5$)}(hj6h]hunion}(hj/<hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj+<ubj$)}(h h]h }(hj<<hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj+<ubh)}(hhh]jf!)}(h futex_keyh]h futex_key}(hjM<hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjJ<ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjO<modnameN classnameNjp$js$)}jv$]j;c.get_futex_keyasbuh1hhj+<ubj$)}(h h]h }(hjk<hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj+<ubj$)}(hj$h]h*}(hjy<hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj+<ubjf!)}(hkeyh]hkey}(hj<hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj+<ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjm;ubj/$)}(henum futex_access rwh](j5$)}(henumh]henum}(hj<hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj<ubj$)}(h h]h }(hj<hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj<ubh)}(hhh]jf!)}(h futex_accessh]h futex_access}(hj<hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj<ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj<modnameN classnameNjp$js$)}jv$]j;c.get_futex_keyasbuh1hhj<ubj$)}(h h]h }(hj<hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj<ubjf!)}(hrwh]hrw}(hj<hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj<ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjm;ubeh}(h]h ]h"]h$]h&]jyjzuh1j($hj4;hhhjF;hKubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj0;hhhjF;hKubah}(h]j+;ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjF;hKhj-;hhubj!)}(hhh]h)}(h-Get parameters which are the keys for a futexh]h-Get parameters which are the keys for a futex}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj=hhubah}(h]h ]h"]h$]h&]uh1j!hj-;hhhjF;hKubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j,=j!j,=j!j!j!uh1jN!hhhj5hNhNubj!)}(hX-**Parameters** ``u32 __user *uaddr`` virtual address of the futex ``unsigned int flags`` FLAGS_* ``union futex_key *key`` address where result is stored. ``enum futex_access rw`` mapping needs to be read/write (values: FUTEX_READ, FUTEX_WRITE) **Return** a negative error code or 0 **Description** The key words are stored in **key** on success. 
For shared mappings (when **fshared**), the key is: ( inode->i_sequence, page->index, offset_within_page ) [ also see get_inode_sequence_number() ] For private mappings (or when **!fshared**), the key is: ( current->mm, address, 0 ) This allows (cross process, where applicable) identification of the futex without keeping the page pinned for the duration of the FUTEX_WAIT. lock_page() might sleep, the caller should not hold a spinlock.h](h)}(h**Parameters**h]j)}(hj6=h]h Parameters}(hj8=hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4=ubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj0=ubj!)}(hhh](j")}(h3``u32 __user *uaddr`` virtual address of the futex h](j")}(h``u32 __user *uaddr``h]j5)}(hjU=h]hu32 __user *uaddr}(hjW=hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjS=ubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhjO=ubj!")}(hhh]h)}(hvirtual address of the futexh]hvirtual address of the futex}(hjn=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjj=hKhjk=ubah}(h]h ]h"]h$]h&]uh1j "hjO=ubeh}(h]h ]h"]h$]h&]uh1j"hjj=hKhjL=ubj")}(h``unsigned int flags`` FLAGS_* h](j")}(h``unsigned int flags``h]j5)}(hj=h]hunsigned int flags}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj=ubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj=ubj!")}(hhh]h)}(hFLAGS_*h]hFLAGS_*}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj=hKhj=ubah}(h]h ]h"]h$]h&]uh1j "hj=ubeh}(h]h ]h"]h$]h&]uh1j"hj=hKhjL=ubj")}(h9``union futex_key *key`` address where result is stored. 
h](j")}(h``union futex_key *key``h]j5)}(hj=h]hunion futex_key *key}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj=ubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj=ubj!")}(hhh]h)}(haddress where result is stored.h]haddress where result is stored.}(hj=hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj=hKhj=ubah}(h]h ]h"]h$]h&]uh1j "hj=ubeh}(h]h ]h"]h$]h&]uh1j"hj=hKhjL=ubj")}(hZ``enum futex_access rw`` mapping needs to be read/write (values: FUTEX_READ, FUTEX_WRITE) h](j")}(h``enum futex_access rw``h]j5)}(hj>h]henum futex_access rw}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj=ubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj=ubj!")}(hhh]h)}(h@mapping needs to be read/write (values: FUTEX_READ, FUTEX_WRITE)h]h@mapping needs to be read/write (values: FUTEX_READ, FUTEX_WRITE)}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj>ubah}(h]h ]h"]h$]h&]uh1j "hj=ubeh}(h]h ]h"]h$]h&]uh1j"hj>hKhjL=ubeh}(h]h ]h"]h$]h&]uh1j!hj0=ubh)}(h **Return**h]j)}(hj<>h]hReturn}(hj>>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:>ubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj0=ubh)}(ha negative error code or 0h]ha negative error code or 0}(hjR>hhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj0=ubh)}(h**Description**h]j)}(hjc>h]h Description}(hje>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhja>ubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj0=ubh)}(h/The key words are stored in **key** on success.h](hThe key words are stored in }(hjy>hhhNhNubj)}(h**key**h]hkey}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjy>ubh on success.}(hjy>hhhNhNubeh}(h]h 
]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj0=ubh)}(h3For shared mappings (when **fshared**), the key is:h](hFor shared mappings (when }(hj>hhhNhNubj)}(h **fshared**h]hfshared}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubh), the key is:}(hj>hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj0=ubj!)}(h7( inode->i_sequence, page->index, offset_within_page ) h]h)}(h6( inode->i_sequence, page->index, offset_within_page )h]h6( inode->i_sequence, page->index, offset_within_page )}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj>ubah}(h]h ]h"]h$]h&]uh1j!hj>hKhj0=ubh)}(h([ also see get_inode_sequence_number() ]h]h([ also see get_inode_sequence_number() ]}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj0=ubh)}(h8For private mappings (or when **!fshared**), the key is:h](hFor private mappings (or when }(hj>hhhNhNubj)}(h **!fshared**h]h!fshared}(hj>hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj>ubh), the key is:}(hj>hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj0=ubj!)}(h( current->mm, address, 0 ) h]h)}(h( current->mm, address, 0 )h]h( current->mm, address, 0 )}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj?ubah}(h]h ]h"]h$]h&]uh1j!hj?hKhj0=ubh)}(hThis allows (cross process, where applicable) identification of the futex without keeping the page pinned for the duration of the FUTEX_WAIT.h]hThis allows (cross process, where applicable) identification of the futex without keeping the page pinned for the duration of the FUTEX_WAIT.}(hj?hhhNhNubah}(h]h 
]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj0=ubh)}(h?lock_page() might sleep, the caller should not hold a spinlock.h]h?lock_page() might sleep, the caller should not hold a spinlock.}(hj,?hhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chKhj0=ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!$fault_in_user_writeable (C function)c.fault_in_user_writeablehNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(h/int fault_in_user_writeable (u32 __user *uaddr)h]jZ!)}(h.int fault_in_user_writeable(u32 __user *uaddr)h](j#)}(hinth]hint}(hj[?hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjW?hhh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMubj$)}(h h]h }(hjj?hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjW?hhhji?hMubj`!)}(hfault_in_user_writeableh]jf!)}(hfault_in_user_writeableh]hfault_in_user_writeable}(hj|?hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjx?ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjW?hhhji?hMubj)$)}(h(u32 __user *uaddr)h]j/$)}(hu32 __user *uaddrh](h)}(hhh]jf!)}(hu32h]hu32}(hj?hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj?ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj?modnameN classnameNjp$js$)}jv$]jy$)}jl$j~?sbc.fault_in_user_writeableasbuh1hhj?ubj$)}(h h]h }(hj?hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj?ubh__user}(hj?hhhNhNubj$)}(h h]h }(hj?hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj?ubj$)}(hj$h]h*}(hj?hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj?ubjf!)}(huaddrh]huaddr}(hj?hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj?ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj?ubah}(h]h ]h"]h$]h&]jyjzuh1j($hjW?hhhji?hMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjS?hhhji?hMubah}(h]jN?ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hji?hMhjP?hhubj!)}(hhh]h)}(h*Fault in user address and verify RW accessh]h*Fault in user address and verify RW access}(hj@hhhNhNubah}(h]h 
]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj@hhubah}(h]h ]h"]h$]h&]uh1j!hjP?hhhji?hMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j*@j!j*@j!j!j!uh1jN!hhhj5hNhNubj!)}(hX**Parameters** ``u32 __user *uaddr`` pointer to faulting user space address **Description** Slow path to fixup the fault we just took in the atomic write access to **uaddr**. We have no generic implementation of a non-destructive write to the user address. We know that we faulted in the atomic pagefault disabled section so we can as well avoid the #PF overhead by calling get_user_pages() right away.h](h)}(h**Parameters**h]j)}(hj4@h]h Parameters}(hj6@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj2@ubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj.@ubj!)}(hhh]j")}(h=``u32 __user *uaddr`` pointer to faulting user space address h](j")}(h``u32 __user *uaddr``h]j5)}(hjS@h]hu32 __user *uaddr}(hjU@hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjQ@ubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjM@ubj!")}(hhh]h)}(h&pointer to faulting user space addressh]h&pointer to faulting user space address}(hjl@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjh@hMhji@ubah}(h]h ]h"]h$]h&]uh1j "hjM@ubeh}(h]h ]h"]h$]h&]uh1j"hjh@hMhjJ@ubah}(h]h ]h"]h$]h&]uh1j!hj.@ubh)}(h**Description**h]j)}(hj@h]h Description}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj.@ubh)}(hRSlow path to fixup the fault we just took in the atomic write access to **uaddr**.h](hHSlow path to fixup the fault we just took in the atomic write access to }(hj@hhhNhNubj)}(h **uaddr**h]huaddr}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubh.}(hj@hhhNhNubeh}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj.@ubh)}(hWe have no generic 
implementation of a non-destructive write to the user address. We know that we faulted in the atomic pagefault disabled section so we can as well avoid the #PF overhead by calling get_user_pages() right away.h]hWe have no generic implementation of a non-destructive write to the user address. We know that we faulted in the atomic pagefault disabled section so we can as well avoid the #PF overhead by calling get_user_pages() right away.}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj.@ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!futex_top_waiter (C function)c.futex_top_waiterhNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(hVstruct futex_q * futex_top_waiter (struct futex_hash_bucket *hb, union futex_key *key)h]jZ!)}(hTstruct futex_q *futex_top_waiter(struct futex_hash_bucket *hb, union futex_key *key)h](j5$)}(hj8$h]hstruct}(hj@hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj@hhh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMubj$)}(h h]h }(hjAhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj@hhhjAhMubh)}(hhh]jf!)}(hfutex_qh]hfutex_q}(hjAhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjAubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjAmodnameN classnameNjp$js$)}jv$]jy$)}jl$futex_top_waitersbc.futex_top_waiterasbuh1hhj@hhhjAhMubj$)}(h h]h }(hj4AhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj@hhhjAhMubj$)}(hj$h]h*}(hjBAhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj@hhhjAhMubj`!)}(hfutex_top_waiterh]jf!)}(hj1Ah]hfutex_top_waiter}(hjSAhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjOAubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj@hhhjAhMubj)$)}(h4(struct futex_hash_bucket *hb, union futex_key *key)h](j/$)}(hstruct futex_hash_bucket *hbh](j5$)}(hj8$h]hstruct}(hjnAhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjjAubj$)}(h h]h }(hj{AhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjjAubh)}(hhh]jf!)}(hfutex_hash_bucketh]hfutex_hash_bucket}(hjAhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjAubah}(h]h 
]h"]h$]h&] refdomainjAreftypejl$ reftargetjAmodnameN classnameNjp$js$)}jv$]j/Ac.futex_top_waiterasbuh1hhjjAubj$)}(h h]h }(hjAhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjjAubj$)}(hj$h]h*}(hjAhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjjAubjf!)}(hhbh]hhb}(hjAhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjjAubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjfAubj/$)}(hunion futex_key *keyh](j5$)}(hj6h]hunion}(hjAhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjAubj$)}(h h]h }(hjAhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjAubh)}(hhh]jf!)}(h futex_keyh]h futex_key}(hjAhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjAubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjAmodnameN classnameNjp$js$)}jv$]j/Ac.futex_top_waiterasbuh1hhjAubj$)}(h h]h }(hjBhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjAubj$)}(hj$h]h*}(hj(BhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjAubjf!)}(hkeyh]hkey}(hj5BhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjAubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjfAubeh}(h]h ]h"]h$]h&]jyjzuh1j($hj@hhhjAhMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj@hhhjAhMubah}(h]j@ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjAhMhj@hhubj!)}(hhh]h)}(h-Return the highest priority waiter on a futexh]h-Return the highest priority waiter on a futex}(hj_BhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj\Bhhubah}(h]h ]h"]h$]h&]uh1j!hj@hhhjAhMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jwBj!jwBj!j!j!uh1jN!hhhj5hNhNubj!)}(h**Parameters** ``struct futex_hash_bucket *hb`` the hash bucket the futex_q's reside in ``union futex_key *key`` the futex key (to distinguish it from other futex futex_q's) **Description** Must be called with the hb lock held.h](h)}(h**Parameters**h]j)}(hjBh]h Parameters}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj{Bubj!)}(hhh](j")}(hI``struct futex_hash_bucket *hb`` the hash bucket the futex_q's reside in h](j")}(h ``struct futex_hash_bucket *hb``h]j5)}(hjBh]hstruct futex_hash_bucket 
*hb}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjBubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjBubj!")}(hhh]h)}(h'the hash bucket the futex_q's reside inh]h)the hash bucket the futex_q’s reside in}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjBhMhjBubah}(h]h ]h"]h$]h&]uh1j "hjBubeh}(h]h ]h"]h$]h&]uh1j"hjBhMhjBubj")}(hV``union futex_key *key`` the futex key (to distinguish it from other futex futex_q's) h](j")}(h``union futex_key *key``h]j5)}(hjBh]hunion futex_key *key}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjBubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjBubj!")}(hhh]h)}(hthe futex key (to distinguish it from other futex futex_q’s)}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjBhMhjBubah}(h]h ]h"]h$]h&]uh1j "hjBubeh}(h]h ]h"]h$]h&]uh1j"hjBhMhjBubeh}(h]h ]h"]h$]h&]uh1j!hj{Bubh)}(h**Description**h]j)}(hjCh]h Description}(hjChhhNhNubah}(h]h ]h"]h$]h&]uh1jhjCubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj{Bubh)}(h%Must be called with the hb lock held.h]h%Must be called with the hb lock held.}(hj*ChhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj{Bubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!#wait_for_owner_exiting (C function)c.wait_for_owner_exitinghNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(hBvoid wait_for_owner_exiting (int ret, struct task_struct *exiting)h]jZ!)}(hAvoid wait_for_owner_exiting(int ret, struct task_struct *exiting)h](j#)}(hvoidh]hvoid}(hjYChhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjUChhh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMubj$)}(h h]h }(hjhChhhNhNubah}(h]h ]j 
$ah"]h$]h&]uh1j#hjUChhhjgChMubj`!)}(hwait_for_owner_exitingh]jf!)}(hwait_for_owner_exitingh]hwait_for_owner_exiting}(hjzChhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjvCubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjUChhhjgChMubj)$)}(h&(int ret, struct task_struct *exiting)h](j/$)}(hint reth](j#)}(hinth]hint}(hjChhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjCubj$)}(h h]h }(hjChhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjCubjf!)}(hreth]hret}(hjChhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjCubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjCubj/$)}(hstruct task_struct *exitingh](j5$)}(hj8$h]hstruct}(hjChhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjCubj$)}(h h]h }(hjChhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjCubh)}(hhh]jf!)}(h task_structh]h task_struct}(hjChhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjCubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjCmodnameN classnameNjp$js$)}jv$]jy$)}jl$j|Csbc.wait_for_owner_exitingasbuh1hhjCubj$)}(h h]h }(hj DhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjCubj$)}(hj$h]h*}(hjDhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjCubjf!)}(hexitingh]hexiting}(hj$DhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjCubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjCubeh}(h]h ]h"]h$]h&]jyjzuh1j($hjUChhhjgChMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjQChhhjgChMubah}(h]jLCah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjgChMhjNChhubj!)}(hhh]h)}(h Block until the owner has exitedh]h Block until the owner has exited}(hjNDhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjKDhhubah}(h]h ]h"]h$]h&]uh1j!hjNChhhjgChMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jfDj!jfDj!j!j!uh1jN!hhhj5hNhNubj!)}(h**Parameters** ``int ret`` owner's current futex lock status ``struct task_struct *exiting`` Pointer to the exiting task **Description** Caller must hold a refcount on **exiting**.h](h)}(h**Parameters**h]j)}(hjpDh]h Parameters}(hjrDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnDubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjjDubj!)}(hhh](j")}(h.``int ret`` 
owner's current futex lock status h](j")}(h ``int ret``h]j5)}(hjDh]hint ret}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjDubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjDubj!")}(hhh]h)}(h!owner's current futex lock statush]h#owner’s current futex lock status}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjDhMhjDubah}(h]h ]h"]h$]h&]uh1j "hjDubeh}(h]h ]h"]h$]h&]uh1j"hjDhMhjDubj")}(h<``struct task_struct *exiting`` Pointer to the exiting task h](j")}(h``struct task_struct *exiting``h]j5)}(hjDh]hstruct task_struct *exiting}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjDubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjDubj!")}(hhh]h)}(hPointer to the exiting taskh]hPointer to the exiting task}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjDhMhjDubah}(h]h ]h"]h$]h&]uh1j "hjDubeh}(h]h ]h"]h$]h&]uh1j"hjDhMhjDubeh}(h]h ]h"]h$]h&]uh1j!hjjDubh)}(h**Description**h]j)}(hjEh]h Description}(hjEhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjEubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjjDubh)}(h+Caller must hold a refcount on **exiting**.h](hCaller must hold a refcount on }(hjEhhhNhNubj)}(h **exiting**h]hexiting}(hj!EhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjEubh.}(hjEhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjjDubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!__futex_unqueue (C function)c.__futex_unqueuehNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(h(void __futex_unqueue (struct futex_q *q)h]jZ!)}(h'void __futex_unqueue(struct futex_q *q)h](j#)}(hvoidh]hvoid}(hjZEhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjVEhhh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMubj$)}(h h]h }(hjiEhhhNhNubah}(h]h ]j 
$ah"]h$]h&]uh1j#hjVEhhhjhEhMubj`!)}(h__futex_unqueueh]jf!)}(h__futex_unqueueh]h__futex_unqueue}(hj{EhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjwEubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjVEhhhjhEhMubj)$)}(h(struct futex_q *q)h]j/$)}(hstruct futex_q *qh](j5$)}(hj8$h]hstruct}(hjEhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjEubj$)}(h h]h }(hjEhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjEubh)}(hhh]jf!)}(hfutex_qh]hfutex_q}(hjEhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjEubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjEmodnameN classnameNjp$js$)}jv$]jy$)}jl$j}Esbc.__futex_unqueueasbuh1hhjEubj$)}(h h]h }(hjEhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjEubj$)}(hj$h]h*}(hjEhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjEubjf!)}(hqh]hq}(hjEhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjEubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjEubah}(h]h ]h"]h$]h&]jyjzuh1j($hjVEhhhjhEhMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjREhhhjhEhMubah}(h]jMEah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjhEhMhjOEhhubj!)}(hhh]h)}(h-Remove the futex_q from its futex_hash_bucketh]h-Remove the futex_q from its futex_hash_bucket}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjFhhubah}(h]h ]h"]h$]h&]uh1j!hjOEhhhjhEhMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j2Fj!j2Fj!j!j!uh1jN!hhhj5hNhNubj!)}(h**Parameters** ``struct futex_q *q`` The futex_q to unqueue **Description** The q->lock_ptr must not be NULL and must be held by the caller.h](h)}(h**Parameters**h]j)}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj:Fubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj6Fubj!)}(hhh]j")}(h-``struct futex_q *q`` The futex_q to unqueue h](j")}(h``struct futex_q *q``h]j5)}(hj[Fh]hstruct futex_q *q}(hj]FhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjYFubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjUFubj!")}(hhh]h)}(hThe futex_q to unqueueh]hThe futex_q to unqueue}(hjtFhhhNhNubah}(h]h 
]h"]h$]h&]uh1hhjpFhMhjqFubah}(h]h ]h"]h$]h&]uh1j "hjUFubeh}(h]h ]h"]h$]h&]uh1j"hjpFhMhjRFubah}(h]h ]h"]h$]h&]uh1j!hj6Fubh)}(h**Description**h]j)}(hjFh]h Description}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjFubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj6Fubh)}(h@The q->lock_ptr must not be NULL and must be held by the caller.h]h@The q->lock_ptr must not be NULL and must be held by the caller.}(hjFhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhj6Fubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!futex_unqueue (C function)c.futex_unqueuehNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(h%int futex_unqueue (struct futex_q *q)h]jZ!)}(h$int futex_unqueue(struct futex_q *q)h](j#)}(hinth]hint}(hjFhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjFhhh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chM6ubj$)}(h h]h }(hjFhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjFhhhjFhM6ubj`!)}(h futex_unqueueh]jf!)}(h futex_unqueueh]h futex_unqueue}(hjFhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjFubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjFhhhjFhM6ubj)$)}(h(struct futex_q *q)h]j/$)}(hstruct futex_q *qh](j5$)}(hj8$h]hstruct}(hjGhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjGubj$)}(h h]h }(hj%GhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjGubh)}(hhh]jf!)}(hfutex_qh]hfutex_q}(hj6GhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj3Gubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj8GmodnameN classnameNjp$js$)}jv$]jy$)}jl$jFsbc.futex_unqueueasbuh1hhjGubj$)}(h h]h }(hjVGhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjGubj$)}(hj$h]h*}(hjdGhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjGubjf!)}(hjEh]hq}(hjqGhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjGubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjGubah}(h]h ]h"]h$]h&]jyjzuh1j($hjFhhhjFhM6ubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjFhhhjFhM6ubah}(h]jFah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjFhM6hjFhhubj!)}(hhh]h)}(h-Remove the 
futex_q from its futex_hash_bucketh]h-Remove the futex_q from its futex_hash_bucket}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chM,hjGhhubah}(h]h ]h"]h$]h&]uh1j!hjFhhhjFhM6ubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jGj!jGj!j!j!uh1jN!hhhj5hNhNubj!)}(hXj**Parameters** ``struct futex_q *q`` The futex_q to unqueue **Description** The q->lock_ptr must not be held by the caller. A call to futex_unqueue() must be paired with exactly one earlier call to futex_queue(). **Return** - 1 - if the futex_q was still queued (and we removed unqueued it); - 0 - if the futex_q was already removed by the waking threadh](h)}(h**Parameters**h]j)}(hjGh]h Parameters}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjGubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chM0hjGubj!)}(hhh]j")}(h-``struct futex_q *q`` The futex_q to unqueue h](j")}(h``struct futex_q *q``h]j5)}(hjGh]hstruct futex_q *q}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjGubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chM-hjGubj!")}(hhh]h)}(hThe futex_q to unqueueh]hThe futex_q to unqueue}(hjGhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjGhM-hjGubah}(h]h ]h"]h$]h&]uh1j "hjGubeh}(h]h ]h"]h$]h&]uh1j"hjGhM-hjGubah}(h]h ]h"]h$]h&]uh1j!hjGubh)}(h**Description**h]j)}(hjHh]h Description}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chM/hjGubh)}(hThe q->lock_ptr must not be held by the caller. A call to futex_unqueue() must be paired with exactly one earlier call to futex_queue().h]hThe q->lock_ptr must not be held by the caller. 
A call to futex_unqueue() must be paired with exactly one earlier call to futex_queue().}(hj,HhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chM/hjGubh)}(h **Return**h]j)}(hj=Hh]hReturn}(hj?HhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj;Hubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chM2hjGubj!)}(h- 1 - if the futex_q was still queued (and we removed unqueued it); - 0 - if the futex_q was already removed by the waking threadh]j )}(hhh](j )}(hA1 - if the futex_q was still queued (and we removed unqueued it);h]h)}(hj\Hh]hA1 - if the futex_q was still queued (and we removed unqueued it);}(hj^HhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chM2hjZHubah}(h]h ]h"]h$]h&]uh1j hjWHubj )}(h;0 - if the futex_q was already removed by the waking threadh]h)}(hjtHh]h;0 - if the futex_q was already removed by the waking thread}(hjvHhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chM3hjrHubah}(h]h ]h"]h$]h&]uh1j hjWHubeh}(h]h ]h"]h$]h&]j! 
j" uh1j hjkHhM2hjSHubah}(h]h ]h"]h$]h&]uh1j!hjkHhM2hjGubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!!futex_exit_recursive (C function)c.futex_exit_recursivehNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(h3void futex_exit_recursive (struct task_struct *tsk)h]jZ!)}(h2void futex_exit_recursive(struct task_struct *tsk)h](j#)}(hvoidh]hvoid}(hjHhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjHhhh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMubj$)}(h h]h }(hjHhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjHhhhjHhMubj`!)}(hfutex_exit_recursiveh]jf!)}(hfutex_exit_recursiveh]hfutex_exit_recursive}(hjHhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjHubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjHhhhjHhMubj)$)}(h(struct task_struct *tsk)h]j/$)}(hstruct task_struct *tskh](j5$)}(hj8$h]hstruct}(hjHhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjHubj$)}(h h]h }(hjIhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjHubh)}(hhh]jf!)}(h task_structh]h task_struct}(hjIhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjIubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjImodnameN classnameNjp$js$)}jv$]jy$)}jl$jHsbc.futex_exit_recursiveasbuh1hhjHubj$)}(h h]h }(hj1IhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjHubj$)}(hj$h]h*}(hj?IhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjHubjf!)}(htskh]htsk}(hjLIhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjHubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjHubah}(h]h ]h"]h$]h&]jyjzuh1j($hjHhhhjHhMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjHhhhjHhMubah}(h]jHah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjHhMhjHhhubj!)}(hhh]h)}(h-Set the tasks futex state to FUTEX_STATE_DEADh]h-Set the tasks futex state to FUTEX_STATE_DEAD}(hjvIhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjsIhhubah}(h]h ]h"]h$]h&]uh1j!hjHhhhjHhMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jIj!jIj!j!j!uh1jN!hhhj5hNhNubj!)}(hX**Parameters** ``struct task_struct *tsk`` task to set the state on **Description** Set the futex exit state of the task lockless. 
The futex waiter code observes that state when a task is exiting and loops until the task has actually finished the futex cleanup. The worst case for this is that the waiter runs through the wait loop until the state becomes visible. This is called from the recursive fault handling path in make_task_dead(). This is best effort. Either the futex exit code has run already or not. If the OWNER_DIED bit has been set on the futex then the waiter can take it over. If not, the problem is pushed back to user space. If the futex exit code did not run yet, then an already queued waiter might block forever, but there is nothing which can be done about that.h](h)}(h**Parameters**h]j)}(hjIh]h Parameters}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjIubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjIubj!)}(hhh]j")}(h5``struct task_struct *tsk`` task to set the state on h](j")}(h``struct task_struct *tsk``h]j5)}(hjIh]hstruct task_struct *tsk}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjIubah}(h]h ]h"]h$]h&]uh1j"h\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjIubj!")}(hhh]h)}(htask to set the state onh]htask to set the state on}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjIhMhjIubah}(h]h ]h"]h$]h&]uh1j "hjIubeh}(h]h ]h"]h$]h&]uh1j"hjIhMhjIubah}(h]h ]h"]h$]h&]uh1j!hjIubh)}(h**Description**h]j)}(hjIh]h Description}(hjIhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjIubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjIubh)}(hXSet the futex exit state of the task lockless. The futex waiter code observes that state when a task is exiting and loops until the task has actually finished the futex cleanup. The worst case for this is that the waiter runs through the wait loop until the state becomes visible.h]hXSet the futex exit state of the task lockless. 
The futex waiter code observes that state when a task is exiting and loops until the task has actually finished the futex cleanup. The worst case for this is that the waiter runs through the wait loop until the state becomes visible.}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjIubh)}(hJThis is called from the recursive fault handling path in make_task_dead().h]hJThis is called from the recursive fault handling path in make_task_dead().}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjIubh)}(hXYThis is best effort. Either the futex exit code has run already or not. If the OWNER_DIED bit has been set on the futex then the waiter can take it over. If not, the problem is pushed back to user space. If the futex exit code did not run yet, then an already queued waiter might block forever, but there is nothing which can be done about that.h]hXYThis is best effort. Either the futex exit code has run already or not. If the OWNER_DIED bit has been set on the futex then the waiter can take it over. If not, the problem is pushed back to user space. 
If the futex exit code did not run yet, then an already queued waiter might block forever, but there is nothing which can be done about that.}(hj&JhhhNhNubah}(h]h ]h"]h$]h&]uh1hh\/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1355: ./kernel/futex/core.chMhjIubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!futex_q (C struct) c.futex_qhNtauh1j=!hj5hhh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhNubjO!)}(hhh](jT!)}(hfutex_qh]jZ!)}(hstruct futex_qh](j5$)}(hj8$h]hstruct}(hjVJhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjRJhhh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhKubj$)}(h h]h }(hjdJhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjRJhhhjcJhKubj`!)}(hfutex_qh]jf!)}(hjPJh]hfutex_q}(hjvJhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjrJubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjRJhhhjcJhKubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjNJhhhjcJhKubah}(h]jHJah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjcJhKhjKJhhubj!)}(hhh]h)}(h2The hashed futex queue entry, one per waiting taskh]h2The hashed futex queue entry, one per waiting task}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhKhjJhhubah}(h]h ]h"]h$]h&]uh1j!hjKJhhhjcJhKubeh}(h]h ](jAstructeh"]h$]h&]j!jAj!jJj!jJj!j!j!uh1jN!hhhj5hjJJhNubj!)}(hX;**Definition**:: struct futex_q { struct plist_node list; struct task_struct *task; spinlock_t *lock_ptr; futex_wake_fn *wake; void *wake_data; union futex_key key; struct futex_pi_state *pi_state; struct rt_mutex_waiter *rt_waiter; union futex_key *requeue_pi_key; u32 bitset; atomic_t requeue_state; #ifdef CONFIG_PREEMPT_RT; struct rcuwait requeue_wait; #endif; }; **Members** ``list`` priority-sorted list of tasks waiting on this futex ``task`` the task waiting on the futex ``lock_ptr`` the hash bucket lock ``wake`` the wake handler for this queue ``wake_data`` data associated with the wake 
handler ``key`` the key the futex is hashed on ``pi_state`` optional priority inheritance state ``rt_waiter`` rt_waiter storage for use with requeue_pi ``requeue_pi_key`` the requeue_pi target futex key ``bitset`` bitset for the optional bitmasked wakeup ``requeue_state`` State field for futex_requeue_pi() ``requeue_wait`` RCU wait for futex_requeue_pi() (RT only)h](h)}(h**Definition**::h](j)}(h**Definition**h]h Definition}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJubh:}(hjJhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhKhjJubjj)}(hXstruct futex_q { struct plist_node list; struct task_struct *task; spinlock_t *lock_ptr; futex_wake_fn *wake; void *wake_data; union futex_key key; struct futex_pi_state *pi_state; struct rt_mutex_waiter *rt_waiter; union futex_key *requeue_pi_key; u32 bitset; atomic_t requeue_state; #ifdef CONFIG_PREEMPT_RT; struct rcuwait requeue_wait; #endif; };h]hXstruct futex_q { struct plist_node list; struct task_struct *task; spinlock_t *lock_ptr; futex_wake_fn *wake; void *wake_data; union futex_key key; struct futex_pi_state *pi_state; struct rt_mutex_waiter *rt_waiter; union futex_key *requeue_pi_key; u32 bitset; atomic_t requeue_state; #ifdef CONFIG_PREEMPT_RT; struct rcuwait requeue_wait; #endif; };}hjJsbah}(h]h ]h"]h$]h&]jyjzuh1jih]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhKhjJubh)}(h **Members**h]j)}(hjJh]hMembers}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjJubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhKhjJubj!)}(hhh](j")}(h=``list`` priority-sorted list of tasks waiting on this futex h](j")}(h``list``h]j5)}(hjKh]hlist}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjKubah}(h]h ]h"]h$]h&]uh1j"h]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhKhjJubj!")}(hhh]h)}(h3priority-sorted list of tasks waiting on this 
futexh]h3priority-sorted list of tasks waiting on this futex}(hjKhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjKhKhjKubah}(h]h ]h"]h$]h&]uh1j "hjJubeh}(h]h ]h"]h$]h&]uh1j"hjKhKhjJubj")}(h'``task`` the task waiting on the futex h](j")}(h``task``h]j5)}(hj>Kh]htask}(hj@KhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjlist `) || q->lock_ptr == 0. The order of wakeup is always to make the first condition true, then the second.h](hmA futex_q has a woken state, just like tasks have TASK_RUNNING. It is considered woken when plist_node_empty(}(hjMhhhNhNubh)}(h:c:type:`q->list `h]j5)}(hjMh]hq->list}(hjMhhhNhNubah}(h]h ](j@jAc-typeeh"]h$]h&]uh1j4hjMubah}(h]h ]h"]h$]h&]refdocjM refdomainjAreftypetype refexplicitrefwarnjp$js$)}jv$]sbjSjEuh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhKhjMubhg) || q->lock_ptr == 0. The order of wakeup is always to make the first condition true, then the second.}(hjMhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhj NhKhj5hhubh)}(hxPI futexes are typically woken before they are removed from the hash list via the rt_mutex code. See futex_unqueue_pi().h]hxPI futexes are typically woken before they are removed from the hash list via the rt_mutex code. 
See futex_unqueue_pi().}(hjNhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhKhj5hhubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!futex_match (C function) c.futex_matchhNtauh1j=!hj5hhhjJJhNubjO!)}(hhh](jT!)}(h>int futex_match (union futex_key *key1, union futex_key *key2)h]jZ!)}(h=int futex_match(union futex_key *key1, union futex_key *key2)h](j#)}(hinth]hint}(hj!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!futex_queue (C function) c.futex_queuehNtauh1j=!hj5hhhjJJhNubjO!)}(hhh](jT!)}(h\void futex_queue (struct futex_q *q, struct futex_hash_bucket *hb, struct task_struct *task)h]jZ!)}(h[void futex_queue(struct futex_q *q, struct futex_hash_bucket *hb, struct task_struct *task)h](j#)}(hvoidh]hvoid}(hjfPhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjbPhhh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhM3ubj$)}(h h]h }(hjuPhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjbPhhhjtPhM3ubj`!)}(h futex_queueh]jf!)}(h futex_queueh]h futex_queue}(hjPhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjPubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjbPhhhjtPhM3ubj)$)}(hK(struct futex_q *q, struct futex_hash_bucket *hb, struct task_struct *task)h](j/$)}(hstruct futex_q *qh](j5$)}(hj8$h]hstruct}(hjPhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjPubj$)}(h h]h }(hjPhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjPubh)}(hhh]jf!)}(hfutex_qh]hfutex_q}(hjPhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjPubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjPmodnameN classnameNjp$js$)}jv$]jy$)}jl$jPsb c.futex_queueasbuh1hhjPubj$)}(h h]h }(hjPhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjPubj$)}(hj$h]h*}(hjPhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjPubjf!)}(hjEh]hq}(hjPhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjPubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjPubj/$)}(hstruct futex_hash_bucket *hbh](j5$)}(hj8$h]hstruct}(hjQhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjQubj$)}(h h]h }(hj!QhhhNhNubah}(h]h ]j 
$ah"]h$]h&]uh1j#hjQubh)}(hhh]jf!)}(hfutex_hash_bucketh]hfutex_hash_bucket}(hj2QhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj/Qubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj4QmodnameN classnameNjp$js$)}jv$]jP c.futex_queueasbuh1hhjQubj$)}(h h]h }(hjPQhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjQubj$)}(hj$h]h*}(hj^QhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjQubjf!)}(hhbh]hhb}(hjkQhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjQubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjPubj/$)}(hstruct task_struct *taskh](j5$)}(hj8$h]hstruct}(hjQhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjQubj$)}(h h]h }(hjQhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjQubh)}(hhh]jf!)}(h task_structh]h task_struct}(hjQhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjQubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjQmodnameN classnameNjp$js$)}jv$]jP c.futex_queueasbuh1hhjQubj$)}(h h]h }(hjQhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjQubj$)}(hj$h]h*}(hjQhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjQubjf!)}(htaskh]htask}(hjQhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjQubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjPubeh}(h]h ]h"]h$]h&]jyjzuh1j($hjbPhhhjtPhM3ubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj^PhhhjtPhM3ubah}(h]jYPah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjtPhM3hj[Phhubj!)}(hhh]h)}(h,Enqueue the futex_q on the futex_hash_bucketh]h,Enqueue the futex_q on the futex_hash_bucket}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhM%hjRhhubah}(h]h ]h"]h$]h&]uh1j!hj[PhhhjtPhM3ubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jRj!jRj!j!j!uh1jN!hhhj5hjJJhNubj!)}(hX**Parameters** ``struct futex_q *q`` The futex_q to enqueue ``struct futex_hash_bucket *hb`` The destination hash bucket ``struct task_struct *task`` Task queueing this futex **Description** The hb->lock must be held by the caller, and is released here. A call to futex_queue() is typically paired with exactly one call to futex_unqueue(). 
The exceptions involve the PI related operations, which may use futex_unqueue_pi() or nothing if the unqueue is done as part of the wake process and the unqueue state is implicit in the state of woken task (see futex_wait_requeue_pi() for an example). Note that **task** may be NULL, for async usage of futexes.h](h)}(h**Parameters**h]j)}(hj'Rh]h Parameters}(hj)RhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj%Rubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhM)hj!Rubj!)}(hhh](j")}(h-``struct futex_q *q`` The futex_q to enqueue h](j")}(h``struct futex_q *q``h]j5)}(hjFRh]hstruct futex_q *q}(hjHRhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjDRubah}(h]h ]h"]h$]h&]uh1j"h]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhM&hj@Rubj!")}(hhh]h)}(hThe futex_q to enqueueh]hThe futex_q to enqueue}(hj_RhhhNhNubah}(h]h ]h"]h$]h&]uh1hhj[RhM&hj\Rubah}(h]h ]h"]h$]h&]uh1j "hj@Rubeh}(h]h ]h"]h$]h&]uh1j"hj[RhM&hj=Rubj")}(h=``struct futex_hash_bucket *hb`` The destination hash bucket h](j")}(h ``struct futex_hash_bucket *hb``h]j5)}(hjRh]hstruct futex_hash_bucket *hb}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj}Rubah}(h]h ]h"]h$]h&]uh1j"h]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhM'hjyRubj!")}(hhh]h)}(hThe destination hash bucketh]hThe destination hash bucket}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjRhM'hjRubah}(h]h ]h"]h$]h&]uh1j "hjyRubeh}(h]h ]h"]h$]h&]uh1j"hjRhM'hj=Rubj")}(h6``struct task_struct *task`` Task queueing this futex h](j")}(h``struct task_struct *task``h]j5)}(hjRh]hstruct task_struct *task}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjRubah}(h]h ]h"]h$]h&]uh1j"h]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhM(hjRubj!")}(hhh]h)}(hTask queueing this futexh]hTask queueing this futex}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjRhM(hjRubah}(h]h ]h"]h$]h&]uh1j "hjRubeh}(h]h ]h"]h$]h&]uh1j"hjRhM(hj=Rubeh}(h]h 
]h"]h$]h&]uh1j!hj!Rubh)}(h**Description**h]j)}(hjRh]h Description}(hjRhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjRubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhM*hj!Rubh)}(hXThe hb->lock must be held by the caller, and is released here. A call to futex_queue() is typically paired with exactly one call to futex_unqueue(). The exceptions involve the PI related operations, which may use futex_unqueue_pi() or nothing if the unqueue is done as part of the wake process and the unqueue state is implicit in the state of woken task (see futex_wait_requeue_pi() for an example).h]hXThe hb->lock must be held by the caller, and is released here. A call to futex_queue() is typically paired with exactly one call to futex_unqueue(). The exceptions involve the PI related operations, which may use futex_unqueue_pi() or nothing if the unqueue is done as part of the wake process and the unqueue state is implicit in the state of woken task (see futex_wait_requeue_pi() for an example).}(hj ShhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhM*hj!Rubh)}(h;Note that **task** may be NULL, for async usage of futexes.h](h Note that }(hjShhhNhNubj)}(h**task**h]htask}(hj ShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh) may be NULL, for async usage of futexes.}(hjShhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhM1hj!Rubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhjJJhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!futex_vector (C struct)c.futex_vectorhNtauh1j=!hj5hhhjJJhNubjO!)}(hhh](jT!)}(h futex_vectorh]jZ!)}(hstruct futex_vectorh](j5$)}(hj8$h]hstruct}(hjYShhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjUShhh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhM6ubj$)}(h h]h }(hjgShhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjUShhhjfShM6ubj`!)}(h futex_vectorh]jf!)}(hjSSh]h 
futex_vector}(hjyShhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjuSubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjUShhhjfShM6ubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjQShhhjfShM6ubah}(h]jLSah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjfShM6hjNShhubj!)}(hhh]h)}(h"Auxiliary struct for futex_waitv()h]h"Auxiliary struct for futex_waitv()}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhMhjShhubah}(h]h ]h"]h$]h&]uh1j!hjNShhhjfShM6ubeh}(h]h ](jAstructeh"]h$]h&]j!jAj!jSj!jSj!j!j!uh1jN!hhhj5hjJJhNubj!)}(h**Definition**:: struct futex_vector { struct futex_waitv w; struct futex_q q; }; **Members** ``w`` Userspace provided data ``q`` Kernel side datah](h)}(h**Definition**::h](j)}(h**Definition**h]h Definition}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubh:}(hjShhhNhNubeh}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhMhjSubjj)}(hHstruct futex_vector { struct futex_waitv w; struct futex_q q; };h]hHstruct futex_vector { struct futex_waitv w; struct futex_q q; };}hjSsbah}(h]h ]h"]h$]h&]jyjzuh1jih]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhMhjSubh)}(h **Members**h]j)}(hjSh]hMembers}(hjShhhNhNubah}(h]h ]h"]h$]h&]uh1jhjSubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhMhjSubj!)}(hhh](j")}(h``w`` Userspace provided data h](j")}(h``w``h]j5)}(hjTh]hw}(hj ThhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjTubah}(h]h ]h"]h$]h&]uh1j"h]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhMhjTubj!")}(hhh]h)}(hUserspace provided datah]hUserspace provided data}(hj!ThhhNhNubah}(h]h ]h"]h$]h&]uh1hhjThMhjTubah}(h]h ]h"]h$]h&]uh1j "hjTubeh}(h]h ]h"]h$]h&]uh1j"hjThMhjSubj")}(h``q`` Kernel side datah](j")}(h``q``h]j5)}(hjATh]hq}(hjCThhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj?Tubah}(h]h 
]h"]h$]h&]uh1j"h]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhMhj;Tubj!")}(hhh]h)}(hKernel side datah]hKernel side data}(hjZThhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhMhjWTubah}(h]h ]h"]h$]h&]uh1j "hj;Tubeh}(h]h ]h"]h$]h&]uh1j"hjVThMhjSubeh}(h]h ]h"]h$]h&]uh1j!hjSubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhjJJhNubh)}(h**Description**h]j)}(hjTh]h Description}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1jhjTubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhMhj5hhubh)}(hBStruct used to build an array with all data need for futex_waitv()h]hBStruct used to build an array with all data need for futex_waitv()}(hjThhhNhNubah}(h]h ]h"]h$]h&]uh1hh]/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1358: ./kernel/futex/futex.hhMhj5hhubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!!futex_lock_pi_atomic (C function)c.futex_lock_pi_atomichNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(hint futex_lock_pi_atomic (u32 __user *uaddr, struct futex_hash_bucket *hb, union futex_key *key, struct futex_pi_state **ps, struct task_struct *task, struct task_struct **exiting, int set_waiters)h]jZ!)}(hint futex_lock_pi_atomic(u32 __user *uaddr, struct futex_hash_bucket *hb, union futex_key *key, struct futex_pi_state **ps, struct task_struct *task, struct task_struct **exiting, int set_waiters)h](j#)}(hinth]hint}(hjThhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjThhhZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chMubj$)}(h h]h }(hjThhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjThhhjThMubj`!)}(hfutex_lock_pi_atomich]jf!)}(hfutex_lock_pi_atomich]hfutex_lock_pi_atomic}(hjThhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjTubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjThhhjThMubj)$)}(h(u32 __user *uaddr, struct futex_hash_bucket *hb, union futex_key *key, struct futex_pi_state **ps, struct task_struct 
*task, struct task_struct **exiting, int set_waiters)h](j/$)}(hu32 __user *uaddrh](h)}(hhh]jf!)}(hu32h]hu32}(hjUhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjTubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjUmodnameN classnameNjp$js$)}jv$]jy$)}jl$jTsbc.futex_lock_pi_atomicasbuh1hhjTubj$)}(h h]h }(hj"UhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjTubh__user}(hjThhhNhNubj$)}(h h]h }(hj4UhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjTubj$)}(hj$h]h*}(hjBUhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjTubjf!)}(huaddrh]huaddr}(hjOUhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjTubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjTubj/$)}(hstruct futex_hash_bucket *hbh](j5$)}(hj8$h]hstruct}(hjhUhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjdUubj$)}(h h]h }(hjuUhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjdUubh)}(hhh]jf!)}(hfutex_hash_bucketh]hfutex_hash_bucket}(hjUhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjUubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjUmodnameN classnameNjp$js$)}jv$]jUc.futex_lock_pi_atomicasbuh1hhjdUubj$)}(h h]h }(hjUhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjdUubj$)}(hj$h]h*}(hjUhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjdUubjf!)}(hhbh]hhb}(hjUhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjdUubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjTubj/$)}(hunion futex_key *keyh](j5$)}(hj6h]hunion}(hjUhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjUubj$)}(h h]h }(hjUhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjUubh)}(hhh]jf!)}(h futex_keyh]h futex_key}(hjUhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjUubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjUmodnameN classnameNjp$js$)}jv$]jUc.futex_lock_pi_atomicasbuh1hhjUubj$)}(h h]h }(hjVhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjUubj$)}(hj$h]h*}(hj"VhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjUubjf!)}(hkeyh]hkey}(hj/VhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjUubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjTubj/$)}(hstruct futex_pi_state **psh](j5$)}(hj8$h]hstruct}(hjHVhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjDVubj$)}(h h]h }(hjUVhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjDVubh)}(hhh]jf!)}(hfutex_pi_stateh]hfutex_pi_state}(hjfVhhhNhNubah}(h]h 
]jq!ah"]h$]h&]uh1je!hjcVubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjhVmodnameN classnameNjp$js$)}jv$]jUc.futex_lock_pi_atomicasbuh1hhjDVubj$)}(h h]h }(hjVhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjDVubj$)}(hj$h]h*}(hjVhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjDVubj$)}(hj$h]h*}(hjVhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjDVubjf!)}(hpsh]hps}(hjVhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjDVubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjTubj/$)}(hstruct task_struct *taskh](j5$)}(hj8$h]hstruct}(hjVhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjVubj$)}(h h]h }(hjVhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjVubh)}(hhh]jf!)}(h task_structh]h task_struct}(hjVhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjVubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjVmodnameN classnameNjp$js$)}jv$]jUc.futex_lock_pi_atomicasbuh1hhjVubj$)}(h h]h }(hjWhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjVubj$)}(hj$h]h*}(hjWhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjVubjf!)}(htaskh]htask}(hjWhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjVubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjTubj/$)}(hstruct task_struct **exitingh](j5$)}(hj8$h]hstruct}(hj5WhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj1Wubj$)}(h h]h }(hjBWhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj1Wubh)}(hhh]jf!)}(h task_structh]h task_struct}(hjSWhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjPWubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjUWmodnameN classnameNjp$js$)}jv$]jUc.futex_lock_pi_atomicasbuh1hhj1Wubj$)}(h h]h }(hjqWhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj1Wubj$)}(hj$h]h*}(hjWhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj1Wubj$)}(hj$h]h*}(hjWhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj1Wubjf!)}(hexitingh]hexiting}(hjWhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj1Wubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjTubj/$)}(hint set_waitersh](j#)}(hinth]hint}(hjWhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjWubj$)}(h h]h }(hjWhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjWubjf!)}(h set_waitersh]h set_waiters}(hjWhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjWubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjTubeh}(h]h ]h"]h$]h&]jyjzuh1j($hjThhhjThMubeh}(h]h 
]h"]h$]h&]jyjzj!uh1jY!j!j!hjThhhjThMubah}(h]jTah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjThMhjThhubj!)}(hhh]h)}(h0Atomic work required to acquire a pi aware futexh]h0Atomic work required to acquire a pi aware futex}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hhZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chMhjWhhubah}(h]h ]h"]h$]h&]uh1j!hjThhhjThMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jXj!jXj!j!j!uh1jN!hhhj5hNhNubj!)}(hX**Parameters** ``u32 __user *uaddr`` the pi futex user address ``struct futex_hash_bucket *hb`` the pi futex hash bucket ``union futex_key *key`` the futex key associated with uaddr and hb ``struct futex_pi_state **ps`` the pi_state pointer where we store the result of the lookup ``struct task_struct *task`` the task to perform the atomic lock work for. This will be "current" except in the case of requeue pi. ``struct task_struct **exiting`` Pointer to store the task pointer of the owner task which is in the middle of exiting ``int set_waiters`` force setting the FUTEX_WAITERS bit (1) or not (0) **Return** - 0 - ready to wait; - 1 - acquired the lock; - <0 - error **Description** The hb->lock must be held by the caller. **exiting** is only set when the return value is -EBUSY. 
If so, this holds a refcount on the exiting task on return and the caller needs to drop it after waiting for the exit to complete.h](h)}(h**Parameters**h]j)}(hjXh]h Parameters}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjXubah}(h]h ]h"]h$]h&]uh1hhZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chMhjXubj!)}(hhh](j")}(h0``u32 __user *uaddr`` the pi futex user address h](j")}(h``u32 __user *uaddr``h]j5)}(hj9Xh]hu32 __user *uaddr}(hj;XhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj7Xubah}(h]h ]h"]h$]h&]uh1j"hZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chMhj3Xubj!")}(hhh]h)}(hthe pi futex user addressh]hthe pi futex user address}(hjRXhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjNXhMhjOXubah}(h]h ]h"]h$]h&]uh1j "hj3Xubeh}(h]h ]h"]h$]h&]uh1j"hjNXhMhj0Xubj")}(h:``struct futex_hash_bucket *hb`` the pi futex hash bucket h](j")}(h ``struct futex_hash_bucket *hb``h]j5)}(hjrXh]hstruct futex_hash_bucket *hb}(hjtXhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjpXubah}(h]h ]h"]h$]h&]uh1j"hZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chMhjlXubj!")}(hhh]h)}(hthe pi futex hash bucketh]hthe pi futex hash bucket}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjXhMhjXubah}(h]h ]h"]h$]h&]uh1j "hjlXubeh}(h]h ]h"]h$]h&]uh1j"hjXhMhj0Xubj")}(hD``union futex_key *key`` the futex key associated with uaddr and hb h](j")}(h``union futex_key *key``h]j5)}(hjXh]hunion futex_key *key}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjXubah}(h]h ]h"]h$]h&]uh1j"hZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chMhjXubj!")}(hhh]h)}(h*the futex key associated with uaddr and hbh]h*the futex key associated with uaddr and hb}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjXhMhjXubah}(h]h ]h"]h$]h&]uh1j "hjXubeh}(h]h ]h"]h$]h&]uh1j"hjXhMhj0Xubj")}(h\``struct futex_pi_state **ps`` the pi_state pointer where we store the result of the lookup h](j")}(h``struct futex_pi_state **ps``h]j5)}(hjXh]hstruct futex_pi_state 
**ps}(hjXhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjXubah}(h]h ]h"]h$]h&]uh1j"hZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chMhjXubj!")}(hhh]h)}(hlock must be held by the caller.h]h(The hb->lock must be held by the caller.}(hjWZhhhNhNubah}(h]h ]h"]h$]h&]uh1hhZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chMhjXubh)}(h**exiting** is only set when the return value is -EBUSY. If so, this holds a refcount on the exiting task on return and the caller needs to drop it after waiting for the exit to complete.h](j)}(h **exiting**h]hexiting}(hjjZhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfZubh is only set when the return value is -EBUSY. If so, this holds a refcount on the exiting task on return and the caller needs to drop it after waiting for the exit to complete.}(hjfZhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chMhjXubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!fixup_pi_owner (C function)c.fixup_pi_ownerhNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(hEint fixup_pi_owner (u32 __user *uaddr, struct futex_q *q, int locked)h]jZ!)}(hDint fixup_pi_owner(u32 __user *uaddr, struct futex_q *q, int locked)h](j#)}(hinth]hint}(hjZhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjZhhhZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chMhubj$)}(h h]h }(hjZhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjZhhhjZhMhubj`!)}(hfixup_pi_ownerh]jf!)}(hfixup_pi_ownerh]hfixup_pi_owner}(hjZhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjZubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjZhhhjZhMhubj)$)}(h2(u32 __user *uaddr, struct futex_q *q, int locked)h](j/$)}(hu32 __user *uaddrh](h)}(hhh]jf!)}(hu32h]hu32}(hjZhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjZubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjZmodnameN classnameNjp$js$)}jv$]jy$)}jl$jZsbc.fixup_pi_ownerasbuh1hhjZubj$)}(h h]h }(hj[hhhNhNubah}(h]h ]j 
$ah"]h$]h&]uh1j#hjZubh__user}(hjZhhhNhNubj$)}(h h]h }(hj[hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjZubj$)}(hj$h]h*}(hj#[hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjZubjf!)}(huaddrh]huaddr}(hj0[hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjZubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjZubj/$)}(hstruct futex_q *qh](j5$)}(hj8$h]hstruct}(hjI[hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjE[ubj$)}(h h]h }(hjV[hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjE[ubh)}(hhh]jf!)}(hfutex_qh]hfutex_q}(hjg[hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjd[ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetji[modnameN classnameNjp$js$)}jv$]jZc.fixup_pi_ownerasbuh1hhjE[ubj$)}(h h]h }(hj[hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjE[ubj$)}(hj$h]h*}(hj[hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjE[ubjf!)}(hjEh]hq}(hj[hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjE[ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjZubj/$)}(h int lockedh](j#)}(hinth]hint}(hj[hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj[ubj$)}(h h]h }(hj[hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj[ubjf!)}(hlockedh]hlocked}(hj[hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj[ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjZubeh}(h]h ]h"]h$]h&]jyjzuh1j($hjZhhhjZhMhubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjZhhhjZhMhubah}(h]jZah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjZhMhhjZhhubj!)}(hhh]h)}(h-Post lock pi_state and corner case managementh]h-Post lock pi_state and corner case management}(hj[hhhNhNubah}(h]h ]h"]h$]h&]uh1hhZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chMZhj[hhubah}(h]h ]h"]h$]h&]uh1j!hjZhhhjZhMhubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j\j!j\j!j!j!uh1jN!hhhj5hNhNubj!)}(hX#**Parameters** ``u32 __user *uaddr`` user address of the futex ``struct futex_q *q`` futex_q (contains pi_state and access to the rt_mutex) ``int locked`` if the attempt to take the rt_mutex succeeded (1) or not (0) **Description** After attempting to lock an rt_mutex, this function is called to cleanup the pi_state owner as well as handle race conditions that may allow us to acquire the lock. Must be called with the hb lock held. 
**Return** - 1 - success, lock taken; - 0 - success, lock not taken; - <0 - on error (-EFAULT)h](h)}(h**Parameters**h]j)}(hj \h]h Parameters}(hj"\hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj\ubah}(h]h ]h"]h$]h&]uh1hhZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chM^hj\ubj!)}(hhh](j")}(h0``u32 __user *uaddr`` user address of the futex h](j")}(h``u32 __user *uaddr``h]j5)}(hj?\h]hu32 __user *uaddr}(hjA\hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj=\ubah}(h]h ]h"]h$]h&]uh1j"hZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chM[hj9\ubj!")}(hhh]h)}(huser address of the futexh]huser address of the futex}(hjX\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjT\hM[hjU\ubah}(h]h ]h"]h$]h&]uh1j "hj9\ubeh}(h]h ]h"]h$]h&]uh1j"hjT\hM[hj6\ubj")}(hM``struct futex_q *q`` futex_q (contains pi_state and access to the rt_mutex) h](j")}(h``struct futex_q *q``h]j5)}(hjx\h]hstruct futex_q *q}(hjz\hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjv\ubah}(h]h ]h"]h$]h&]uh1j"hZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chM\hjr\ubj!")}(hhh]h)}(h6futex_q (contains pi_state and access to the rt_mutex)h]h6futex_q (contains pi_state and access to the rt_mutex)}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj\hM\hj\ubah}(h]h ]h"]h$]h&]uh1j "hjr\ubeh}(h]h ]h"]h$]h&]uh1j"hj\hM\hj6\ubj")}(hL``int locked`` if the attempt to take the rt_mutex succeeded (1) or not (0) h](j")}(h``int locked``h]j5)}(hj\h]h int locked}(hj\hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj\ubah}(h]h ]h"]h$]h&]uh1j"hZ/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1361: ./kernel/futex/pi.chM]hj\ubj!")}(hhh]h)}(h!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!requeue_futex (C function)c.requeue_futexhNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(h{void requeue_futex (struct futex_q *q, struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2, union futex_key *key2)h]jZ!)}(hzvoid requeue_futex(struct futex_q *q, struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2, union 
futex_key *key2)h](j#)}(hvoidh]hvoid}(hj]hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj]hhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKKubj$)}(h h]h }(hj]hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj]hhhj]hKKubj`!)}(h requeue_futexh]jf!)}(h requeue_futexh]h requeue_futex}(hj]hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj]ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj]hhhj]hKKubj)$)}(hh(struct futex_q *q, struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2, union futex_key *key2)h](j/$)}(hstruct futex_q *qh](j5$)}(hj8$h]hstruct}(hj]hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj]ubj$)}(h h]h }(hj]hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj]ubh)}(hhh]jf!)}(hfutex_qh]hfutex_q}(hj]hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj]ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj^modnameN classnameNjp$js$)}jv$]jy$)}jl$j]sbc.requeue_futexasbuh1hhj]ubj$)}(h h]h }(hj^hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj]ubj$)}(hj$h]h*}(hj-^hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj]ubjf!)}(hjEh]hq}(hj:^hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj]ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj]ubj/$)}(hstruct futex_hash_bucket *hb1h](j5$)}(hj8$h]hstruct}(hjR^hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjN^ubj$)}(h h]h }(hj_^hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjN^ubh)}(hhh]jf!)}(hfutex_hash_bucketh]hfutex_hash_bucket}(hjp^hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjm^ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjr^modnameN classnameNjp$js$)}jv$]j^c.requeue_futexasbuh1hhjN^ubj$)}(h h]h }(hj^hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjN^ubj$)}(hj$h]h*}(hj^hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjN^ubjf!)}(hhb1h]hhb1}(hj^hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjN^ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj]ubj/$)}(hstruct futex_hash_bucket *hb2h](j5$)}(hj8$h]hstruct}(hj^hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj^ubj$)}(h h]h }(hj^hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj^ubh)}(hhh]jf!)}(hfutex_hash_bucketh]hfutex_hash_bucket}(hj^hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj^ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj^modnameN 
classnameNjp$js$)}jv$]j^c.requeue_futexasbuh1hhj^ubj$)}(h h]h }(hj^hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj^ubj$)}(hj$h]h*}(hj _hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj^ubjf!)}(hhb2h]hhb2}(hj_hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj^ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj]ubj/$)}(hunion futex_key *key2h](j5$)}(hj6h]hunion}(hj2_hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj._ubj$)}(h h]h }(hj?_hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj._ubh)}(hhh]jf!)}(h futex_keyh]h futex_key}(hjP_hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjM_ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjR_modnameN classnameNjp$js$)}jv$]j^c.requeue_futexasbuh1hhj._ubj$)}(h h]h }(hjn_hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj._ubj$)}(hj$h]h*}(hj|_hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj._ubjf!)}(hkey2h]hkey2}(hj_hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj._ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj]ubeh}(h]h ]h"]h$]h&]jyjzuh1j($hj]hhhj]hKKubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj]hhhj]hKKubah}(h]j]ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hj]hKKhj]hhubj!)}(hhh]h)}(h(Requeue a futex_q from one hb to anotherh]h(Requeue a futex_q from one hb to another}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKEhj_hhubah}(h]h ]h"]h$]h&]uh1j!hj]hhhj]hKKubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j_j!j_j!j!j!uh1jN!hhhj5hNhNubj!)}(h**Parameters** ``struct futex_q *q`` the futex_q to requeue ``struct futex_hash_bucket *hb1`` the source hash_bucket ``struct futex_hash_bucket *hb2`` the target hash_bucket ``union futex_key *key2`` the new key for the requeued futex_qh](h)}(h**Parameters**h]j)}(hj_h]h Parameters}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj_ubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKIhj_ubj!)}(hhh](j")}(h-``struct futex_q *q`` the futex_q to requeue h](j")}(h``struct futex_q *q``h]j5)}(hj_h]hstruct futex_q *q}(hj_hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj_ubah}(h]h 
]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKFhj_ubj!")}(hhh]h)}(hthe futex_q to requeueh]hthe futex_q to requeue}(hj `hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj `hKFhj `ubah}(h]h ]h"]h$]h&]uh1j "hj_ubeh}(h]h ]h"]h$]h&]uh1j"hj `hKFhj_ubj")}(h9``struct futex_hash_bucket *hb1`` the source hash_bucket h](j")}(h!``struct futex_hash_bucket *hb1``h]j5)}(hj-`h]hstruct futex_hash_bucket *hb1}(hj/`hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj+`ubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKGhj'`ubj!")}(hhh]h)}(hthe source hash_bucketh]hthe source hash_bucket}(hjF`hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjB`hKGhjC`ubah}(h]h ]h"]h$]h&]uh1j "hj'`ubeh}(h]h ]h"]h$]h&]uh1j"hjB`hKGhj_ubj")}(h9``struct futex_hash_bucket *hb2`` the target hash_bucket h](j")}(h!``struct futex_hash_bucket *hb2``h]j5)}(hjf`h]hstruct futex_hash_bucket *hb2}(hjh`hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjd`ubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKHhj``ubj!")}(hhh]h)}(hthe target hash_bucketh]hthe target hash_bucket}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj{`hKHhj|`ubah}(h]h ]h"]h$]h&]uh1j "hj``ubeh}(h]h ]h"]h$]h&]uh1j"hj{`hKHhj_ubj")}(h>``union futex_key *key2`` the new key for the requeued futex_qh](j")}(h``union futex_key *key2``h]j5)}(hj`h]hunion futex_key *key2}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj`ubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKJhj`ubj!")}(hhh]h)}(h$the new key for the requeued futex_qh]h$the new key for the requeued futex_q}(hj`hhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKIhj`ubah}(h]h ]h"]h$]h&]uh1j "hj`ubeh}(h]h ]h"]h$]h&]uh1j"hj`hKJhj_ubeh}(h]h ]h"]h$]h&]uh1j!hj_ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h 
]h"]h$]h&]entries](jJ!"requeue_pi_wake_futex (C function)c.requeue_pi_wake_futexhNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(hbvoid requeue_pi_wake_futex (struct futex_q *q, union futex_key *key, struct futex_hash_bucket *hb)h]jZ!)}(havoid requeue_pi_wake_futex(struct futex_q *q, union futex_key *key, struct futex_hash_bucket *hb)h](j#)}(hvoidh]hvoid}(hj`hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj`hhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKubj$)}(h h]h }(hjahhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj`hhhjahKubj`!)}(hrequeue_pi_wake_futexh]jf!)}(hrequeue_pi_wake_futexh]hrequeue_pi_wake_futex}(hjahhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjaubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj`hhhjahKubj)$)}(hG(struct futex_q *q, union futex_key *key, struct futex_hash_bucket *hb)h](j/$)}(hstruct futex_q *qh](j5$)}(hj8$h]hstruct}(hj6ahhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj2aubj$)}(h h]h }(hjCahhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj2aubh)}(hhh]jf!)}(hfutex_qh]hfutex_q}(hjTahhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjQaubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjVamodnameN classnameNjp$js$)}jv$]jy$)}jl$jasbc.requeue_pi_wake_futexasbuh1hhj2aubj$)}(h h]h }(hjtahhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj2aubj$)}(hj$h]h*}(hjahhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj2aubjf!)}(hjEh]hq}(hjahhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj2aubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj.aubj/$)}(hunion futex_key *keyh](j5$)}(hj6h]hunion}(hjahhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjaubj$)}(h h]h }(hjahhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjaubh)}(hhh]jf!)}(h futex_keyh]h futex_key}(hjahhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjaubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjamodnameN classnameNjp$js$)}jv$]jpac.requeue_pi_wake_futexasbuh1hhjaubj$)}(h h]h }(hjahhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjaubj$)}(hj$h]h*}(hjahhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjaubjf!)}(hkeyh]hkey}(hjahhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjaubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj.aubj/$)}(hstruct futex_hash_bucket 
*hbh](j5$)}(hj8$h]hstruct}(hjbhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjbubj$)}(h h]h }(hj$bhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjbubh)}(hhh]jf!)}(hfutex_hash_bucketh]hfutex_hash_bucket}(hj5bhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj2bubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj7bmodnameN classnameNjp$js$)}jv$]jpac.requeue_pi_wake_futexasbuh1hhjbubj$)}(h h]h }(hjSbhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjbubj$)}(hj$h]h*}(hjabhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjbubjf!)}(hhbh]hhb}(hjnbhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjbubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj.aubeh}(h]h ]h"]h$]h&]jyjzuh1j($hj`hhhjahKubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj`hhhjahKubah}(h]j`ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjahKhj`hhubj!)}(hhh]h)}(h1Wake a task that acquired the lock during requeueh]h1Wake a task that acquired the lock during requeue}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjbhhubah}(h]h ]h"]h$]h&]uh1j!hj`hhhjahKubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jbj!jbj!j!j!uh1jN!hhhj5hNhNubj!)}(hX**Parameters** ``struct futex_q *q`` the futex_q ``union futex_key *key`` the key of the requeue target futex ``struct futex_hash_bucket *hb`` the hash_bucket of the requeue target futex **Description** During futex_requeue, with requeue_pi=1, it is possible to acquire the target futex if it is uncontended or via a lock steal. 1) Set **q**::key to the requeue target futex key so the waiter can detect the wakeup on the right futex. 2) Dequeue **q** from the hash bucket. 3) Set **q**::rt_waiter to NULL so the woken up task can detect atomic lock acquisition. 4) Set the q->lock_ptr to the requeue target hb->lock for the case that the waiter has to fixup the pi state. 5) Complete the requeue state so the waiter can make progress. After this point the waiter task can return from the syscall immediately in case that the pi state does not have to be fixed up. 6) Wake the waiter task. 
Must be called with both q->lock_ptr and hb->lock held.h](h)}(h**Parameters**h]j)}(hjbh]h Parameters}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjbubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjbubj!)}(hhh](j")}(h"``struct futex_q *q`` the futex_q h](j")}(h``struct futex_q *q``h]j5)}(hjbh]hstruct futex_q *q}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjbubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjbubj!")}(hhh]h)}(h the futex_qh]h the futex_q}(hjbhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjbhKhjbubah}(h]h ]h"]h$]h&]uh1j "hjbubeh}(h]h ]h"]h$]h&]uh1j"hjbhKhjbubj")}(h=``union futex_key *key`` the key of the requeue target futex h](j")}(h``union futex_key *key``h]j5)}(hjch]hunion futex_key *key}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjcubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhj cubj!")}(hhh]h)}(h#the key of the requeue target futexh]h#the key of the requeue target futex}(hj+chhhNhNubah}(h]h ]h"]h$]h&]uh1hhj'chKhj(cubah}(h]h ]h"]h$]h&]uh1j "hj cubeh}(h]h ]h"]h$]h&]uh1j"hj'chKhjbubj")}(hM``struct futex_hash_bucket *hb`` the hash_bucket of the requeue target futex h](j")}(h ``struct futex_hash_bucket *hb``h]j5)}(hjKch]hstruct futex_hash_bucket *hb}(hjMchhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjIcubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjEcubj!")}(hhh]h)}(h+the hash_bucket of the requeue target futexh]h+the hash_bucket of the requeue target futex}(hjdchhhNhNubah}(h]h ]h"]h$]h&]uh1hhj`chKhjacubah}(h]h ]h"]h$]h&]uh1j "hjEcubeh}(h]h ]h"]h$]h&]uh1j"hj`chKhjbubeh}(h]h ]h"]h$]h&]uh1j!hjbubh)}(h**Description**h]j)}(hjch]h Description}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: 
./kernel/futex/requeue.chKhjbubh)}(h}During futex_requeue, with requeue_pi=1, it is possible to acquire the target futex if it is uncontended or via a lock steal.h]h}During futex_requeue, with requeue_pi=1, it is possible to acquire the target futex if it is uncontended or via a lock steal.}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjbubhenumerated_list)}(hhh](j )}(hgSet **q**::key to the requeue target futex key so the waiter can detect the wakeup on the right futex. h]h)}(hfSet **q**::key to the requeue target futex key so the waiter can detect the wakeup on the right futex.h](hSet }(hjchhhNhNubj)}(h**q**h]hq}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubh]::key to the requeue target futex key so the waiter can detect the wakeup on the right futex.}(hjchhhNhNubeh}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjcubah}(h]h ]h"]h$]h&]uh1j hjcubj )}(h$Dequeue **q** from the hash bucket. h]h)}(h#Dequeue **q** from the hash bucket.h](hDequeue }(hjchhhNhNubj)}(h**q**h]hq}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1jhjcubh from the hash bucket.}(hjchhhNhNubeh}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjcubah}(h]h ]h"]h$]h&]uh1j hjcubj )}(hVSet **q**::rt_waiter to NULL so the woken up task can detect atomic lock acquisition. h]h)}(hUSet **q**::rt_waiter to NULL so the woken up task can detect atomic lock acquisition.h](hSet }(hj dhhhNhNubj)}(h**q**h]hq}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj dubhL::rt_waiter to NULL so the woken up task can detect atomic lock acquisition.}(hj dhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjdubah}(h]h ]h"]h$]h&]uh1j hjcubj )}(hkSet the q->lock_ptr to the requeue target hb->lock for the case that the waiter has to fixup the pi state. 
h]h)}(hjSet the q->lock_ptr to the requeue target hb->lock for the case that the waiter has to fixup the pi state.h]hjSet the q->lock_ptr to the requeue target hb->lock for the case that the waiter has to fixup the pi state.}(hj5dhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhj1dubah}(h]h ]h"]h$]h&]uh1j hjcubj )}(hComplete the requeue state so the waiter can make progress. After this point the waiter task can return from the syscall immediately in case that the pi state does not have to be fixed up. h]h)}(hComplete the requeue state so the waiter can make progress. After this point the waiter task can return from the syscall immediately in case that the pi state does not have to be fixed up.h]hComplete the requeue state so the waiter can make progress. After this point the waiter task can return from the syscall immediately in case that the pi state does not have to be fixed up.}(hjNdhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjJdubah}(h]h ]h"]h$]h&]uh1j hjcubj )}(hWake the waiter task. 
h]h)}(hWake the waiter task.h]hWake the waiter task.}(hjgdhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjcdubah}(h]h ]h"]h$]h&]uh1j hjcubeh}(h]h ]h"]h$]h&]enumtypearabicprefixhsuffix)uh1jchjbubh)}(h7Must be called with both q->lock_ptr and hb->lock held.h]h7Must be called with both q->lock_ptr and hb->lock held.}(hjdhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjbubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!'futex_proxy_trylock_atomic (C function)c.futex_proxy_trylock_atomichNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(hint futex_proxy_trylock_atomic (u32 __user *pifutex, struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2, union futex_key *key1, union futex_key *key2, struct futex_pi_state **ps, struct task_struct **exiting, int set_waiters)h]jZ!)}(hint futex_proxy_trylock_atomic(u32 __user *pifutex, struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2, union futex_key *key1, union futex_key *key2, struct futex_pi_state **ps, struct task_struct **exiting, int set_waiters)h](j#)}(hinth]hint}(hjdhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjdhhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chM ubj$)}(h h]h }(hjdhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjdhhhjdhM ubj`!)}(hfutex_proxy_trylock_atomich]jf!)}(hfutex_proxy_trylock_atomich]hfutex_proxy_trylock_atomic}(hjdhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjdubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjdhhhjdhM ubj)$)}(h(u32 __user *pifutex, struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2, union futex_key *key1, union futex_key *key2, struct futex_pi_state **ps, struct task_struct **exiting, int set_waiters)h](j/$)}(hu32 __user *pifutexh](h)}(hhh]jf!)}(hu32h]hu32}(hjdhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjdubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ 
reftargetjdmodnameN classnameNjp$js$)}jv$]jy$)}jl$jdsbc.futex_proxy_trylock_atomicasbuh1hhjdubj$)}(h h]h }(hjehhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjdubh__user}(hjdhhhNhNubj$)}(h h]h }(hj(ehhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjdubj$)}(hj$h]h*}(hj6ehhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjdubjf!)}(hpifutexh]hpifutex}(hjCehhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjdubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjdubj/$)}(hstruct futex_hash_bucket *hb1h](j5$)}(hj8$h]hstruct}(hj\ehhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjXeubj$)}(h h]h }(hjiehhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjXeubh)}(hhh]jf!)}(hfutex_hash_bucketh]hfutex_hash_bucket}(hjzehhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjweubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj|emodnameN classnameNjp$js$)}jv$]jec.futex_proxy_trylock_atomicasbuh1hhjXeubj$)}(h h]h }(hjehhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjXeubj$)}(hj$h]h*}(hjehhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjXeubjf!)}(hhb1h]hhb1}(hjehhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjXeubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjdubj/$)}(hstruct futex_hash_bucket *hb2h](j5$)}(hj8$h]hstruct}(hjehhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjeubj$)}(h h]h }(hjehhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjeubh)}(hhh]jf!)}(hfutex_hash_bucketh]hfutex_hash_bucket}(hjehhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjeubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjemodnameN classnameNjp$js$)}jv$]jec.futex_proxy_trylock_atomicasbuh1hhjeubj$)}(h h]h }(hjfhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjeubj$)}(hj$h]h*}(hjfhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjeubjf!)}(hhb2h]hhb2}(hj#fhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjeubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjdubj/$)}(hunion futex_key *key1h](j5$)}(hj6h]hunion}(hj0 - acquired the lock, return value is vpid of the top_waiter - <0 - errorh](h)}(h**Parameters**h]j)}(hj~hh]h Parameters}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|hubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjxhubj!)}(hhh](j")}(h9``u32 __user *pifutex`` the user address of 
the to futex h](j")}(h``u32 __user *pifutex``h]j5)}(hjhh]hu32 __user *pifutex}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjhubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjhubj!")}(hhh]h)}(h the user address of the to futexh]h the user address of the to futex}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhKhjhubah}(h]h ]h"]h$]h&]uh1j "hjhubeh}(h]h ]h"]h$]h&]uh1j"hjhhKhjhubj")}(h[``struct futex_hash_bucket *hb1`` the from futex hash bucket, must be locked by the caller h](j")}(h!``struct futex_hash_bucket *hb1``h]j5)}(hjhh]hstruct futex_hash_bucket *hb1}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjhubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjhubj!")}(hhh]h)}(h8the from futex hash bucket, must be locked by the callerh]h8the from futex hash bucket, must be locked by the caller}(hjhhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhKhjhubah}(h]h ]h"]h$]h&]uh1j "hjhubeh}(h]h ]h"]h$]h&]uh1j"hjhhKhjhubj")}(hY``struct futex_hash_bucket *hb2`` the to futex hash bucket, must be locked by the caller h](j")}(h!``struct futex_hash_bucket *hb2``h]j5)}(hjih]hstruct futex_hash_bucket *hb2}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj iubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhj iubj!")}(hhh]h)}(h6the to futex hash bucket, must be locked by the callerh]h6the to futex hash bucket, must be locked by the caller}(hj(ihhhNhNubah}(h]h ]h"]h$]h&]uh1hhj$ihKhj%iubah}(h]h ]h"]h$]h&]uh1j "hj iubeh}(h]h ]h"]h$]h&]uh1j"hj$ihKhjhubj")}(h-``union futex_key *key1`` the from futex key h](j")}(h``union futex_key *key1``h]j5)}(hjHih]hunion futex_key *key1}(hjJihhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjFiubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjBiubj!")}(hhh]h)}(hthe from futex keyh]hthe from futex key}(hjaihhhNhNubah}(h]h 
]h"]h$]h&]uh1hhj]ihKhj^iubah}(h]h ]h"]h$]h&]uh1j "hjBiubeh}(h]h ]h"]h$]h&]uh1j"hj]ihKhjhubj")}(h+``union futex_key *key2`` the to futex key h](j")}(h``union futex_key *key2``h]j5)}(hjih]hunion futex_key *key2}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjiubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhj{iubj!")}(hhh]h)}(hthe to futex keyh]hthe to futex key}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1hhjihKhjiubah}(h]h ]h"]h$]h&]uh1j "hj{iubeh}(h]h ]h"]h$]h&]uh1j"hjihKhjhubj")}(hE``struct futex_pi_state **ps`` address to store the pi_state pointer h](j")}(h``struct futex_pi_state **ps``h]j5)}(hjih]hstruct futex_pi_state **ps}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjiubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjiubj!")}(hhh]h)}(h%address to store the pi_state pointerh]h%address to store the pi_state pointer}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1hhjihKhjiubah}(h]h ]h"]h$]h&]uh1j "hjiubeh}(h]h ]h"]h$]h&]uh1j"hjihKhjhubj")}(hw``struct task_struct **exiting`` Pointer to store the task pointer of the owner task which is in the middle of exiting h](j")}(h ``struct task_struct **exiting``h]j5)}(hjih]hstruct task_struct **exiting}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjiubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjiubj!")}(hhh]h)}(hUPointer to store the task pointer of the owner task which is in the middle of exitingh]hUPointer to store the task pointer of the owner task which is in the middle of exiting}(hj jhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhj jubah}(h]h ]h"]h$]h&]uh1j "hjiubeh}(h]h ]h"]h$]h&]uh1j"hjjhKhjhubj")}(hG``int set_waiters`` force setting the FUTEX_WAITERS bit (1) or not (0) h](j")}(h``int set_waiters``h]j5)}(hj-jh]hint set_waiters}(hj/jhhhNhNubah}(h]h 
]h"]h$]h&]uh1j4hj+jubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhj'jubj!")}(hhh]h)}(h2force setting the FUTEX_WAITERS bit (1) or not (0)h]h2force setting the FUTEX_WAITERS bit (1) or not (0)}(hjFjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjBjhKhjCjubah}(h]h ]h"]h$]h&]uh1j "hj'jubeh}(h]h ]h"]h$]h&]uh1j"hjBjhKhjhubeh}(h]h ]h"]h$]h&]uh1j!hjxhubh)}(h**Description**h]j)}(hjhjh]h Description}(hjjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjfjubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjxhubh)}(hXTry and get the lock on behalf of the top waiter if we can do it atomically. Wake the top waiter if we succeed. If the caller specified set_waiters, then direct futex_lock_pi_atomic() to force setting the FUTEX_WAITERS bit. hb1 and hb2 must be held by the caller.h]hXTry and get the lock on behalf of the top waiter if we can do it atomically. Wake the top waiter if we succeed. If the caller specified set_waiters, then direct futex_lock_pi_atomic() to force setting the FUTEX_WAITERS bit. hb1 and hb2 must be held by the caller.}(hj~jhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chKhjxhubh)}(h**exiting** is only set when the return value is -EBUSY. If so, this holds a refcount on the exiting task on return and the caller needs to drop it after waiting for the exit to complete.h](j)}(h **exiting**h]hexiting}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubh is only set when the return value is -EBUSY. 
If so, this holds a refcount on the exiting task on return and the caller needs to drop it after waiting for the exit to complete.}(hjjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjxhubh)}(h **Return**h]j)}(hjjh]hReturn}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjjubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjxhubj!)}(h{- 0 - failed to acquire the lock atomically; - >0 - acquired the lock, return value is vpid of the top_waiter - <0 - errorh]j )}(hhh](j )}(h*0 - failed to acquire the lock atomically;h]h)}(hjjh]h*0 - failed to acquire the lock atomically;}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjjubah}(h]h ]h"]h$]h&]uh1j hjjubj )}(h>>0 - acquired the lock, return value is vpid of the top_waiterh]h)}(hjjh]h>>0 - acquired the lock, return value is vpid of the top_waiter}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjjubah}(h]h ]h"]h$]h&]uh1j hjjubj )}(h <0 - errorh]h)}(hjjh]h <0 - error}(hjjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chM hjjubah}(h]h ]h"]h$]h&]uh1j hjjubeh}(h]h ]h"]h$]h&]j! 
j" uh1j hjjhMhjjubah}(h]h ]h"]h$]h&]uh1j!hjjhMhjxhubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!futex_requeue (C function)c.futex_requeuehNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(hint futex_requeue (u32 __user *uaddr1, unsigned int flags1, u32 __user *uaddr2, unsigned int flags2, int nr_wake, int nr_requeue, u32 *cmpval, int requeue_pi)h]jZ!)}(hint futex_requeue(u32 __user *uaddr1, unsigned int flags1, u32 __user *uaddr2, unsigned int flags2, int nr_wake, int nr_requeue, u32 *cmpval, int requeue_pi)h](j#)}(hinth]hint}(hj=khhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj9khhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMoubj$)}(h h]h }(hjLkhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj9khhhjKkhMoubj`!)}(h futex_requeueh]jf!)}(h futex_requeueh]h futex_requeue}(hj^khhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjZkubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj9khhhjKkhMoubj)$)}(h(u32 __user *uaddr1, unsigned int flags1, u32 __user *uaddr2, unsigned int flags2, int nr_wake, int nr_requeue, u32 *cmpval, int requeue_pi)h](j/$)}(hu32 __user *uaddr1h](h)}(hhh]jf!)}(hu32h]hu32}(hj}khhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjzkubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjkmodnameN classnameNjp$js$)}jv$]jy$)}jl$j`ksbc.futex_requeueasbuh1hhjvkubj$)}(h h]h }(hjkhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjvkubh__user}(hjvkhhhNhNubj$)}(h h]h }(hjkhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjvkubj$)}(hj$h]h*}(hjkhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjvkubjf!)}(huaddr1h]huaddr1}(hjkhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjvkubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjrkubj/$)}(hunsigned int flags1h](j#)}(hunsignedh]hunsigned}(hjkhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjkubj$)}(h h]h }(hjkhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjkubj#)}(hinth]hint}(hjkhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjkubj$)}(h h]h }(hj lhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjkubjf!)}(hflags1h]hflags1}(hjlhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjkubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjrkubj/$)}(hu32 
__user *uaddr2h](h)}(hhh]jf!)}(hu32h]hu32}(hj7lhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj4lubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj9lmodnameN classnameNjp$js$)}jv$]jkc.futex_requeueasbuh1hhj0lubj$)}(h h]h }(hjUlhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj0lubh__user}(hj0lhhhNhNubj$)}(h h]h }(hjglhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj0lubj$)}(hj$h]h*}(hjulhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj0lubjf!)}(huaddr2h]huaddr2}(hjlhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj0lubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjrkubj/$)}(hunsigned int flags2h](j#)}(hunsignedh]hunsigned}(hjlhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjlubj$)}(h h]h }(hjlhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjlubj#)}(hinth]hint}(hjlhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjlubj$)}(h h]h }(hjlhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjlubjf!)}(hflags2h]hflags2}(hjlhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjlubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjrkubj/$)}(h int nr_wakeh](j#)}(hinth]hint}(hjlhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjlubj$)}(h h]h }(hjlhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjlubjf!)}(hnr_wakeh]hnr_wake}(hjmhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjlubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjrkubj/$)}(hint nr_requeueh](j#)}(hinth]hint}(hj!mhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjmubj$)}(h h]h }(hj/mhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjmubjf!)}(h nr_requeueh]h nr_requeue}(hj=mhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjmubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjrkubj/$)}(h u32 *cmpvalh](h)}(hhh]jf!)}(hu32h]hu32}(hjYmhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjVmubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj[mmodnameN classnameNjp$js$)}jv$]jkc.futex_requeueasbuh1hhjRmubj$)}(h h]h }(hjwmhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjRmubj$)}(hj$h]h*}(hjmhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjRmubjf!)}(hcmpvalh]hcmpval}(hjmhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjRmubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjrkubj/$)}(hint requeue_pih](j#)}(hinth]hint}(hjmhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjmubj$)}(h h]h }(hjmhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjmubjf!)}(h requeue_pih]h 
requeue_pi}(hjmhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjmubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjrkubeh}(h]h ]h"]h$]h&]jyjzuh1j($hj9khhhjKkhMoubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj5khhhjKkhMoubah}(h]j0kah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjKkhMohj2khhubj!)}(hhh]h)}(h%Requeue waiters from uaddr1 to uaddr2h]h%Requeue waiters from uaddr1 to uaddr2}(hjmhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chM]hjmhhubah}(h]h ]h"]h$]h&]uh1j!hj2khhhjKkhMoubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j nj!j nj!j!j!uh1jN!hhhj5hNhNubj!)}(hX**Parameters** ``u32 __user *uaddr1`` source futex user address ``unsigned int flags1`` futex flags (FLAGS_SHARED, etc.) ``u32 __user *uaddr2`` target futex user address ``unsigned int flags2`` futex flags (FLAGS_SHARED, etc.) ``int nr_wake`` number of waiters to wake (must be 1 for requeue_pi) ``int nr_requeue`` number of waiters to requeue (0-INT_MAX) ``u32 *cmpval`` **uaddr1** expected value (or ``NULL``) ``int requeue_pi`` if we are attempting to requeue from a non-pi futex to a pi futex (pi to pi requeue is not supported) **Description** Requeue waiters on uaddr1 to uaddr2. In the requeue_pi case, try to acquire uaddr2 atomically on behalf of the top waiter. 
**Return** - >=0 - on success, the number of tasks requeued or woken; - <0 - on errorh](h)}(h**Parameters**h]j)}(hjnh]h Parameters}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjnubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMahj nubj!)}(hhh](j")}(h1``u32 __user *uaddr1`` source futex user address h](j")}(h``u32 __user *uaddr1``h]j5)}(hj2nh]hu32 __user *uaddr1}(hj4nhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj0nubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chM^hj,nubj!")}(hhh]h)}(hsource futex user addressh]hsource futex user address}(hjKnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjGnhM^hjHnubah}(h]h ]h"]h$]h&]uh1j "hj,nubeh}(h]h ]h"]h$]h&]uh1j"hjGnhM^hj)nubj")}(h9``unsigned int flags1`` futex flags (FLAGS_SHARED, etc.) h](j")}(h``unsigned int flags1``h]j5)}(hjknh]hunsigned int flags1}(hjmnhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjinubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chM_hjenubj!")}(hhh]h)}(h futex flags (FLAGS_SHARED, etc.)h]h futex flags (FLAGS_SHARED, etc.)}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjnhM_hjnubah}(h]h ]h"]h$]h&]uh1j "hjenubeh}(h]h ]h"]h$]h&]uh1j"hjnhM_hj)nubj")}(h1``u32 __user *uaddr2`` target futex user address h](j")}(h``u32 __user *uaddr2``h]j5)}(hjnh]hu32 __user *uaddr2}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjnubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chM`hjnubj!")}(hhh]h)}(htarget futex user addressh]htarget futex user address}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjnhM`hjnubah}(h]h ]h"]h$]h&]uh1j "hjnubeh}(h]h ]h"]h$]h&]uh1j"hjnhM`hj)nubj")}(h9``unsigned int flags2`` futex flags (FLAGS_SHARED, etc.) 
h](j")}(h``unsigned int flags2``h]j5)}(hjnh]hunsigned int flags2}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjnubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMahjnubj!")}(hhh]h)}(h futex flags (FLAGS_SHARED, etc.)h]h futex flags (FLAGS_SHARED, etc.)}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjnhMahjnubah}(h]h ]h"]h$]h&]uh1j "hjnubeh}(h]h ]h"]h$]h&]uh1j"hjnhMahj)nubj")}(hE``int nr_wake`` number of waiters to wake (must be 1 for requeue_pi) h](j")}(h``int nr_wake``h]j5)}(hjoh]h int nr_wake}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjoubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMbhjoubj!")}(hhh]h)}(h4number of waiters to wake (must be 1 for requeue_pi)h]h4number of waiters to wake (must be 1 for requeue_pi)}(hj/ohhhNhNubah}(h]h ]h"]h$]h&]uh1hhj+ohMbhj,oubah}(h]h ]h"]h$]h&]uh1j "hjoubeh}(h]h ]h"]h$]h&]uh1j"hj+ohMbhj)nubj")}(h<``int nr_requeue`` number of waiters to requeue (0-INT_MAX) h](j")}(h``int nr_requeue``h]j5)}(hjOoh]hint nr_requeue}(hjQohhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjMoubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMchjIoubj!")}(hhh]h)}(h(number of waiters to requeue (0-INT_MAX)h]h(number of waiters to requeue (0-INT_MAX)}(hjhohhhNhNubah}(h]h ]h"]h$]h&]uh1hhjdohMchjeoubah}(h]h ]h"]h$]h&]uh1j "hjIoubeh}(h]h ]h"]h$]h&]uh1j"hjdohMchj)nubj")}(h8``u32 *cmpval`` **uaddr1** expected value (or ``NULL``) h](j")}(h``u32 *cmpval``h]j5)}(hjoh]h u32 *cmpval}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjoubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMdhjoubj!")}(hhh]h)}(h'**uaddr1** expected value (or ``NULL``)h](j)}(h **uaddr1**h]huaddr1}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1jhjoubh expected value (or }(hjohhhNhNubj5)}(h``NULL``h]hNULL}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjoubh)}(hjohhhNhNubeh}(h]h 
]h"]h$]h&]uh1hhjohMdhjoubah}(h]h ]h"]h$]h&]uh1j "hjoubeh}(h]h ]h"]h$]h&]uh1j"hjohMdhj)nubj")}(hy``int requeue_pi`` if we are attempting to requeue from a non-pi futex to a pi futex (pi to pi requeue is not supported) h](j")}(h``int requeue_pi``h]j5)}(hjoh]hint requeue_pi}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjoubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMfhjoubj!")}(hhh]h)}(heif we are attempting to requeue from a non-pi futex to a pi futex (pi to pi requeue is not supported)h]heif we are attempting to requeue from a non-pi futex to a pi futex (pi to pi requeue is not supported)}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMehjoubah}(h]h ]h"]h$]h&]uh1j "hjoubeh}(h]h ]h"]h$]h&]uh1j"hjohMfhj)nubeh}(h]h ]h"]h$]h&]uh1j!hj nubh)}(h**Description**h]j)}(hjph]h Description}(hjphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjpubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhhj nubh)}(hzRequeue waiters on uaddr1 to uaddr2. In the requeue_pi case, try to acquire uaddr2 atomically on behalf of the top waiter.h]hzRequeue waiters on uaddr1 to uaddr2. 
In the requeue_pi case, try to acquire uaddr2 atomically on behalf of the top waiter.}(hj3phhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhhj nubh)}(h **Return**h]j)}(hjDph]hReturn}(hjFphhhNhNubah}(h]h ]h"]h$]h&]uh1jhjBpubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMkhj nubj!)}(hK- >=0 - on success, the number of tasks requeued or woken; - <0 - on errorh]j )}(hhh](j )}(h8>=0 - on success, the number of tasks requeued or woken;h]h)}(hjcph]h8>=0 - on success, the number of tasks requeued or woken;}(hjephhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMkhjapubah}(h]h ]h"]h$]h&]uh1j hj^pubj )}(h <0 - on errorh]h)}(hj{ph]h <0 - on error}(hj}phhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMlhjypubah}(h]h ]h"]h$]h&]uh1j hj^pubeh}(h]h ]h"]h$]h&]j! 
j" uh1j hjrphMkhjZpubah}(h]h ]h"]h$]h&]uh1j!hjrphMkhj nubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!+handle_early_requeue_pi_wakeup (C function) c.handle_early_requeue_pi_wakeuphNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(huint handle_early_requeue_pi_wakeup (struct futex_hash_bucket *hb, struct futex_q *q, struct hrtimer_sleeper *timeout)h]jZ!)}(htint handle_early_requeue_pi_wakeup(struct futex_hash_bucket *hb, struct futex_q *q, struct hrtimer_sleeper *timeout)h](j#)}(hinth]hint}(hjphhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjphhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMubj$)}(h h]h }(hjphhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjphhhjphMubj`!)}(hhandle_early_requeue_pi_wakeuph]jf!)}(hhandle_early_requeue_pi_wakeuph]hhandle_early_requeue_pi_wakeup}(hjphhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjpubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjphhhjphMubj)$)}(hR(struct futex_hash_bucket *hb, struct futex_q *q, struct hrtimer_sleeper *timeout)h](j/$)}(hstruct futex_hash_bucket *hbh](j5$)}(hj8$h]hstruct}(hjphhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjpubj$)}(h h]h }(hjqhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjpubh)}(hhh]jf!)}(hfutex_hash_bucketh]hfutex_hash_bucket}(hjqhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjqubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjqmodnameN classnameNjp$js$)}jv$]jy$)}jl$jpsb c.handle_early_requeue_pi_wakeupasbuh1hhjpubj$)}(h h]h }(hj8qhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjpubj$)}(hj$h]h*}(hjFqhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjpubjf!)}(hhbh]hhb}(hjSqhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjpubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjpubj/$)}(hstruct futex_q *qh](j5$)}(hj8$h]hstruct}(hjlqhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjhqubj$)}(h h]h }(hjyqhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjhqubh)}(hhh]jf!)}(hfutex_qh]hfutex_q}(hjqhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjqubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjqmodnameN classnameNjp$js$)}jv$]j4q 
c.handle_early_requeue_pi_wakeupasbuh1hhjhqubj$)}(h h]h }(hjqhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjhqubj$)}(hj$h]h*}(hjqhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjhqubjf!)}(hjEh]hq}(hjqhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjhqubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjpubj/$)}(hstruct hrtimer_sleeper *timeouth](j5$)}(hj8$h]hstruct}(hjqhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjqubj$)}(h h]h }(hjqhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjqubh)}(hhh]jf!)}(hhrtimer_sleeperh]hhrtimer_sleeper}(hjqhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjqubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjqmodnameN classnameNjp$js$)}jv$]j4q c.handle_early_requeue_pi_wakeupasbuh1hhjqubj$)}(h h]h }(hjrhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjqubj$)}(hj$h]h*}(hj%rhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjqubjf!)}(htimeouth]htimeout}(hj2rhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjqubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjpubeh}(h]h ]h"]h$]h&]jyjzuh1j($hjphhhjphMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjphhhjphMubah}(h]jpah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjphMhjphhubj!)}(hhh]h)}(h(Handle early wakeup on the initial futexh]h(Handle early wakeup on the initial futex}(hj\rhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjYrhhubah}(h]h ]h"]h$]h&]uh1j!hjphhhjphMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jtrj!jtrj!j!j!uh1jN!hhhj5hNhNubj!)}(hX~**Parameters** ``struct futex_hash_bucket *hb`` the hash_bucket futex_q was original enqueued on ``struct futex_q *q`` the futex_q woken while waiting to be requeued ``struct hrtimer_sleeper *timeout`` the timeout associated with the wait (NULL if none) **Description** Determine the cause for the early wakeup. 
**Return** -EWOULDBLOCK or -ETIMEDOUT or -ERESTARTNOINTRh](h)}(h**Parameters**h]j)}(hj~rh]h Parameters}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|rubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjxrubj!)}(hhh](j")}(hR``struct futex_hash_bucket *hb`` the hash_bucket futex_q was original enqueued on h](j")}(h ``struct futex_hash_bucket *hb``h]j5)}(hjrh]hstruct futex_hash_bucket *hb}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjrubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjrubj!")}(hhh]h)}(h0the hash_bucket futex_q was original enqueued onh]h0the hash_bucket futex_q was original enqueued on}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjrhMhjrubah}(h]h ]h"]h$]h&]uh1j "hjrubeh}(h]h ]h"]h$]h&]uh1j"hjrhMhjrubj")}(hE``struct futex_q *q`` the futex_q woken while waiting to be requeued h](j")}(h``struct futex_q *q``h]j5)}(hjrh]hstruct futex_q *q}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjrubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjrubj!")}(hhh]h)}(h.the futex_q woken while waiting to be requeuedh]h.the futex_q woken while waiting to be requeued}(hjrhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjrhMhjrubah}(h]h ]h"]h$]h&]uh1j "hjrubeh}(h]h ]h"]h$]h&]uh1j"hjrhMhjrubj")}(hX``struct hrtimer_sleeper *timeout`` the timeout associated with the wait (NULL if none) h](j")}(h#``struct hrtimer_sleeper *timeout``h]j5)}(hjsh]hstruct hrtimer_sleeper *timeout}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj subah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj subj!")}(hhh]h)}(h3the timeout associated with the wait (NULL if none)h]h3the timeout associated with the wait (NULL if none)}(hj(shhhNhNubah}(h]h ]h"]h$]h&]uh1hhj$shMhj%subah}(h]h ]h"]h$]h&]uh1j "hj subeh}(h]h ]h"]h$]h&]uh1j"hj$shMhjrubeh}(h]h 
]h"]h$]h&]uh1j!hjxrubh)}(h**Description**h]j)}(hjJsh]h Description}(hjLshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjHsubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjxrubh)}(h)Determine the cause for the early wakeup.h]h)Determine the cause for the early wakeup.}(hj`shhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjxrubh)}(h **Return**h]j)}(hjqsh]hReturn}(hjsshhhNhNubah}(h]h ]h"]h$]h&]uh1jhjosubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjxrubj!)}(h--EWOULDBLOCK or -ETIMEDOUT or -ERESTARTNOINTRh]h)}(hjsh]h--EWOULDBLOCK or -ETIMEDOUT or -ERESTARTNOINTR}(hjshhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjsubah}(h]h ]h"]h$]h&]uh1j!hjshMhjxrubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!"futex_wait_requeue_pi (C function)c.futex_wait_requeue_pihNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(h}int futex_wait_requeue_pi (u32 __user *uaddr, unsigned int flags, u32 val, ktime_t *abs_time, u32 bitset, u32 __user *uaddr2)h]jZ!)}(h|int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, u32 val, ktime_t *abs_time, u32 bitset, u32 __user *uaddr2)h](j#)}(hinth]hint}(hjshhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjshhh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMubj$)}(h h]h }(hjshhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjshhhjshMubj`!)}(hfutex_wait_requeue_pih]jf!)}(hfutex_wait_requeue_pih]hfutex_wait_requeue_pi}(hjshhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjsubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjshhhjshMubj)$)}(hc(u32 __user *uaddr, unsigned int flags, u32 val, ktime_t *abs_time, u32 bitset, u32 __user *uaddr2)h](j/$)}(hu32 __user *uaddrh](h)}(hhh]jf!)}(hu32h]hu32}(hjshhhNhNubah}(h]h 
]jq!ah"]h$]h&]uh1je!hjsubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjtmodnameN classnameNjp$js$)}jv$]jy$)}jl$jssbc.futex_wait_requeue_piasbuh1hhjsubj$)}(h h]h }(hjthhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjsubh__user}(hjshhhNhNubj$)}(h h]h }(hj1thhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjsubj$)}(hj$h]h*}(hj?thhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjsubjf!)}(huaddrh]huaddr}(hjLthhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjsubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjsubj/$)}(hunsigned int flagsh](j#)}(hunsignedh]hunsigned}(hjethhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjatubj$)}(h h]h }(hjsthhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjatubj#)}(hinth]hint}(hjthhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjatubj$)}(h h]h }(hjthhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjatubjf!)}(hflagsh]hflags}(hjthhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjatubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjsubj/$)}(hu32 valh](h)}(hhh]jf!)}(hu32h]hu32}(hjthhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjtubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjtmodnameN classnameNjp$js$)}jv$]jtc.futex_wait_requeue_piasbuh1hhjtubj$)}(h h]h }(hjthhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjtubjf!)}(hvalh]hval}(hjthhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjtubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjsubj/$)}(hktime_t *abs_timeh](h)}(hhh]jf!)}(hktime_th]hktime_t}(hjuhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjtubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjumodnameN classnameNjp$js$)}jv$]jtc.futex_wait_requeue_piasbuh1hhjtubj$)}(h h]h }(hjuhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjtubj$)}(hj$h]h*}(hj-uhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjtubjf!)}(habs_timeh]habs_time}(hj:uhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjtubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjsubj/$)}(h u32 bitseth](h)}(hhh]jf!)}(hu32h]hu32}(hjVuhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjSuubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjXumodnameN classnameNjp$js$)}jv$]jtc.futex_wait_requeue_piasbuh1hhjOuubj$)}(h h]h }(hjtuhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjOuubjf!)}(hbitseth]hbitset}(hjuhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjOuubeh}(h]h 
]h"]h$]h&]noemphjyjzuh1j.$hjsubj/$)}(hu32 __user *uaddr2h](h)}(hhh]jf!)}(hu32h]hu32}(hjuhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjuubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjumodnameN classnameNjp$js$)}jv$]jtc.futex_wait_requeue_piasbuh1hhjuubj$)}(h h]h }(hjuhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjuubh__user}(hjuhhhNhNubj$)}(h h]h }(hjuhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjuubj$)}(hj$h]h*}(hjuhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjuubjf!)}(huaddr2h]huaddr2}(hjuhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjuubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjsubeh}(h]h ]h"]h$]h&]jyjzuh1j($hjshhhjshMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjshhhjshMubah}(h]jsah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjshMhjshhubj!)}(hhh]h)}(hWait on uaddr and take uaddr2h]hWait on uaddr and take uaddr2}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjvhhubah}(h]h ]h"]h$]h&]uh1j!hjshhhjshMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j+vj!j+vj!j!j!uh1jN!hhhj5hNhNubj!)}(hX**Parameters** ``u32 __user *uaddr`` the futex we initially wait on (non-pi) ``unsigned int flags`` futex flags (FLAGS_SHARED, FLAGS_CLOCKRT, etc.), they must be the same type, no requeueing from private to shared, etc. ``u32 val`` the expected value of uaddr ``ktime_t *abs_time`` absolute timeout ``u32 bitset`` 32 bit wakeup bitset set by userspace, defaults to all ``u32 __user *uaddr2`` the pi futex we will take prior to returning to user-space **Description** The caller will wait on uaddr and will be requeued by futex_requeue() to uaddr2 which must be PI aware and unique from uaddr. Normal wakeup will wake on uaddr2 and complete the acquisition of the rt_mutex prior to returning to userspace. This ensures the rt_mutex maintains an owner when it has waiters; without one, the pi logic would not know which task to boost/deboost, if there was a need to. 
We call schedule in futex_wait_queue() when we enqueue and return there via the following-- 1) wakeup on uaddr2 after an atomic lock acquisition by futex_requeue() 2) wakeup on uaddr2 after a requeue 3) signal 4) timeout If 3, cleanup and return -ERESTARTNOINTR. If 2, we may then block on trying to take the rt_mutex and return via: 5) successful lock 6) signal 7) timeout 8) other lock acquisition failure If 6, return -EWOULDBLOCK (restarting the syscall would do the same). If 4 or 7, we cleanup and return with -ETIMEDOUT. **Return** - 0 - On success; - <0 - On errorh](h)}(h**Parameters**h]j)}(hj5vh]h Parameters}(hj7vhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj3vubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj/vubj!)}(hhh](j")}(h>``u32 __user *uaddr`` the futex we initially wait on (non-pi) h](j")}(h``u32 __user *uaddr``h]j5)}(hjTvh]hu32 __user *uaddr}(hjVvhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjRvubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjNvubj!")}(hhh]h)}(h'the futex we initially wait on (non-pi)h]h'the futex we initially wait on (non-pi)}(hjmvhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjivhMhjjvubah}(h]h ]h"]h$]h&]uh1j "hjNvubeh}(h]h ]h"]h$]h&]uh1j"hjivhMhjKvubj")}(h``unsigned int flags`` futex flags (FLAGS_SHARED, FLAGS_CLOCKRT, etc.), they must be the same type, no requeueing from private to shared, etc. 
h](j")}(h``unsigned int flags``h]j5)}(hjvh]hunsigned int flags}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjvubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjvubj!")}(hhh]h)}(hwfutex flags (FLAGS_SHARED, FLAGS_CLOCKRT, etc.), they must be the same type, no requeueing from private to shared, etc.h]hwfutex flags (FLAGS_SHARED, FLAGS_CLOCKRT, etc.), they must be the same type, no requeueing from private to shared, etc.}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjvubah}(h]h ]h"]h$]h&]uh1j "hjvubeh}(h]h ]h"]h$]h&]uh1j"hjvhMhjKvubj")}(h(``u32 val`` the expected value of uaddr h](j")}(h ``u32 val``h]j5)}(hjvh]hu32 val}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjvubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjvubj!")}(hhh]h)}(hthe expected value of uaddrh]hthe expected value of uaddr}(hjvhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjvhMhjvubah}(h]h ]h"]h$]h&]uh1j "hjvubeh}(h]h ]h"]h$]h&]uh1j"hjvhMhjKvubj")}(h'``ktime_t *abs_time`` absolute timeout h](j")}(h``ktime_t *abs_time``h]j5)}(hjwh]hktime_t *abs_time}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjvubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjvubj!")}(hhh]h)}(habsolute timeouth]habsolute timeout}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjwhMhjwubah}(h]h ]h"]h$]h&]uh1j "hjvubeh}(h]h ]h"]h$]h&]uh1j"hjwhMhjKvubj")}(hF``u32 bitset`` 32 bit wakeup bitset set by userspace, defaults to all h](j")}(h``u32 bitset``h]j5)}(hj9wh]h u32 bitset}(hj;whhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj7wubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj3wubj!")}(hhh]h)}(h632 bit wakeup bitset set by userspace, defaults to allh]h632 bit wakeup bitset set by userspace, defaults to 
all}(hjRwhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjNwhMhjOwubah}(h]h ]h"]h$]h&]uh1j "hj3wubeh}(h]h ]h"]h$]h&]uh1j"hjNwhMhjKvubj")}(hR``u32 __user *uaddr2`` the pi futex we will take prior to returning to user-space h](j")}(h``u32 __user *uaddr2``h]j5)}(hjrwh]hu32 __user *uaddr2}(hjtwhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjpwubah}(h]h ]h"]h$]h&]uh1j"h_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhjlwubj!")}(hhh]h)}(h:the pi futex we will take prior to returning to user-spaceh]h:the pi futex we will take prior to returning to user-space}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjwhMhjwubah}(h]h ]h"]h$]h&]uh1j "hjlwubeh}(h]h ]h"]h$]h&]uh1j"hjwhMhjKvubeh}(h]h ]h"]h$]h&]uh1j!hj/vubh)}(h**Description**h]j)}(hjwh]h Description}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjwubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj/vubh)}(hXThe caller will wait on uaddr and will be requeued by futex_requeue() to uaddr2 which must be PI aware and unique from uaddr. Normal wakeup will wake on uaddr2 and complete the acquisition of the rt_mutex prior to returning to userspace. This ensures the rt_mutex maintains an owner when it has waiters; without one, the pi logic would not know which task to boost/deboost, if there was a need to.h]hXThe caller will wait on uaddr and will be requeued by futex_requeue() to uaddr2 which must be PI aware and unique from uaddr. Normal wakeup will wake on uaddr2 and complete the acquisition of the rt_mutex prior to returning to userspace. 
This ensures the rt_mutex maintains an owner when it has waiters; without one, the pi logic would not know which task to boost/deboost, if there was a need to.}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj/vubh)}(hWe call schedule in futex_wait_queue() when we enqueue and return there via the following-- 1) wakeup on uaddr2 after an atomic lock acquisition by futex_requeue() 2) wakeup on uaddr2 after a requeue 3) signal 4) timeouth]hWe call schedule in futex_wait_queue() when we enqueue and return there via the following-- 1) wakeup on uaddr2 after an atomic lock acquisition by futex_requeue() 2) wakeup on uaddr2 after a requeue 3) signal 4) timeout}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj/vubh)}(h)If 3, cleanup and return -ERESTARTNOINTR.h]h)If 3, cleanup and return -ERESTARTNOINTR.}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj/vubh)}(hIf 2, we may then block on trying to take the rt_mutex and return via: 5) successful lock 6) signal 7) timeout 8) other lock acquisition failureh]hIf 2, we may then block on trying to take the rt_mutex and return via: 5) successful lock 6) signal 7) timeout 8) other lock acquisition failure}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj/vubh)}(hEIf 6, return -EWOULDBLOCK (restarting the syscall would do the same).h]hEIf 6, return -EWOULDBLOCK (restarting the syscall would do the same).}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj/vubh)}(h1If 4 or 7, we cleanup and return with -ETIMEDOUT.h]h1If 4 or 7, we cleanup and return with -ETIMEDOUT.}(hjxhhhNhNubah}(h]h 
]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj/vubh)}(h **Return**h]j)}(hjxh]hReturn}(hj!xhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjxubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj/vubj!)}(h"- 0 - On success; - <0 - On errorh]j )}(hhh](j )}(h0 - On success;h]h)}(hj>xh]h0 - On success;}(hj@xhhhNhNubah}(h]h ]h"]h$]h&]uh1hh_/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1364: ./kernel/futex/requeue.chMhj!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!futex_wait_queue (C function)c.futex_wait_queuehNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(hhvoid futex_wait_queue (struct futex_hash_bucket *hb, struct futex_q *q, struct hrtimer_sleeper *timeout)h]jZ!)}(hgvoid futex_wait_queue(struct futex_hash_bucket *hb, struct futex_q *q, struct hrtimer_sleeper *timeout)h](j#)}(hvoidh]hvoid}(hjxhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjxhhh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMVubj$)}(h h]h }(hjxhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjxhhhjxhMVubj`!)}(hfutex_wait_queueh]jf!)}(hfutex_wait_queueh]hfutex_wait_queue}(hjxhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjxubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjxhhhjxhMVubj)$)}(hR(struct futex_hash_bucket *hb, struct futex_q *q, struct hrtimer_sleeper *timeout)h](j/$)}(hstruct futex_hash_bucket *hbh](j5$)}(hj8$h]hstruct}(hjxhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjxubj$)}(h h]h }(hjxhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjxubh)}(hhh]jf!)}(hfutex_hash_bucketh]hfutex_hash_bucket}(hjxhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjxubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjxmodnameN classnameNjp$js$)}jv$]jy$)}jl$jxsbc.futex_wait_queueasbuh1hhjxubj$)}(h h]h }(hjyhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjxubj$)}(hj$h]h*}(hj!yhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjxubjf!)}(hhbh]hhb}(hj.yhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjxubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjxubj/$)}(hstruct 
futex_q *qh](j5$)}(hj8$h]hstruct}(hjGyhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjCyubj$)}(h h]h }(hjTyhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjCyubh)}(hhh]jf!)}(hfutex_qh]hfutex_q}(hjeyhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjbyubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjgymodnameN classnameNjp$js$)}jv$]jyc.futex_wait_queueasbuh1hhjCyubj$)}(h h]h }(hjyhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjCyubj$)}(hj$h]h*}(hjyhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjCyubjf!)}(hjEh]hq}(hjyhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjCyubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjxubj/$)}(hstruct hrtimer_sleeper *timeouth](j5$)}(hj8$h]hstruct}(hjyhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjyubj$)}(h h]h }(hjyhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjyubh)}(hhh]jf!)}(hhrtimer_sleeperh]hhrtimer_sleeper}(hjyhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjyubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjymodnameN classnameNjp$js$)}jv$]jyc.futex_wait_queueasbuh1hhjyubj$)}(h h]h }(hjyhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjyubj$)}(hj$h]h*}(hjzhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjyubjf!)}(htimeouth]htimeout}(hj zhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjyubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjxubeh}(h]h ]h"]h$]h&]jyjzuh1j($hjxhhhjxhMVubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjxhhhjxhMVubah}(h]jxah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjxhMVhjxhhubj!)}(hhh]h)}(h5futex_queue() and wait for wakeup, timeout, or signalh]h5futex_queue() and wait for wakeup, timeout, or signal}(hj7zhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMQhj4zhhubah}(h]h ]h"]h$]h&]uh1j!hjxhhhjxhMVubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jOzj!jOzj!j!j!uh1jN!hhhj5hNhNubj!)}(h**Parameters** ``struct futex_hash_bucket *hb`` the futex hash bucket, must be locked by the caller ``struct futex_q *q`` the futex_q to queue up on ``struct hrtimer_sleeper *timeout`` the prepared hrtimer_sleeper, or null for no timeouth](h)}(h**Parameters**h]j)}(hjYzh]h Parameters}(hj[zhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjWzubah}(h]h 
]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMUhjSzubj!)}(hhh](j")}(hU``struct futex_hash_bucket *hb`` the futex hash bucket, must be locked by the caller h](j")}(h ``struct futex_hash_bucket *hb``h]j5)}(hjxzh]hstruct futex_hash_bucket *hb}(hjzzhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjvzubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMRhjrzubj!")}(hhh]h)}(h3the futex hash bucket, must be locked by the callerh]h3the futex hash bucket, must be locked by the caller}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjzhMRhjzubah}(h]h ]h"]h$]h&]uh1j "hjrzubeh}(h]h ]h"]h$]h&]uh1j"hjzhMRhjozubj")}(h1``struct futex_q *q`` the futex_q to queue up on h](j")}(h``struct futex_q *q``h]j5)}(hjzh]hstruct futex_q *q}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjzubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMShjzubj!")}(hhh]h)}(hthe futex_q to queue up onh]hthe futex_q to queue up on}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjzhMShjzubah}(h]h ]h"]h$]h&]uh1j "hjzubeh}(h]h ]h"]h$]h&]uh1j"hjzhMShjozubj")}(hX``struct hrtimer_sleeper *timeout`` the prepared hrtimer_sleeper, or null for no timeouth](j")}(h#``struct hrtimer_sleeper *timeout``h]j5)}(hjzh]hstruct hrtimer_sleeper *timeout}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjzubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMUhjzubj!")}(hhh]h)}(h4the prepared hrtimer_sleeper, or null for no timeouth]h4the prepared hrtimer_sleeper, or null for no timeout}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMThj{ubah}(h]h ]h"]h$]h&]uh1j "hjzubeh}(h]h ]h"]h$]h&]uh1j"hjzhMUhjozubeh}(h]h ]h"]h$]h&]uh1j!hjSzubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h 
]h"]h$]h&]entries](jJ!#futex_unqueue_multiple (C function)c.futex_unqueue_multiplehNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(h>int futex_unqueue_multiple (struct futex_vector *v, int count)h]jZ!)}(h=int futex_unqueue_multiple(struct futex_vector *v, int count)h](j#)}(hinth]hint}(hjD{hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj@{hhh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMubj$)}(h h]h }(hjS{hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj@{hhhjR{hMubj`!)}(hfutex_unqueue_multipleh]jf!)}(hfutex_unqueue_multipleh]hfutex_unqueue_multiple}(hje{hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hja{ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj@{hhhjR{hMubj)$)}(h#(struct futex_vector *v, int count)h](j/$)}(hstruct futex_vector *vh](j5$)}(hj8$h]hstruct}(hj{hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj}{ubj$)}(h h]h }(hj{hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj}{ubh)}(hhh]jf!)}(h futex_vectorh]h futex_vector}(hj{hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj{ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj{modnameN classnameNjp$js$)}jv$]jy$)}jl$jg{sbc.futex_unqueue_multipleasbuh1hhj}{ubj$)}(h h]h }(hj{hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj}{ubj$)}(hj$h]h*}(hj{hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj}{ubjf!)}(hvh]hv}(hj{hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj}{ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjy{ubj/$)}(h int counth](j#)}(hinth]hint}(hj{hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj{ubj$)}(h h]h }(hj|hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj{ubjf!)}(hcounth]hcount}(hj|hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj{ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjy{ubeh}(h]h ]h"]h$]h&]jyjzuh1j($hj@{hhhjR{hMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj<{hhhjR{hMubah}(h]j7{ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjR{hMhj9{hhubj!)}(hhh]h)}(h-Remove various futexes from their hash bucketh]h-Remove various futexes from their hash bucket}(hj9|hhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMwhj6|hhubah}(h]h ]h"]h$]h&]uh1j!hj9{hhhjR{hMubeh}(h]h 
](jAfunctioneh"]h$]h&]j!jAj!jQ|j!jQ|j!j!j!uh1jN!hhhj5hNhNubj!)}(hX**Parameters** ``struct futex_vector *v`` The list of futexes to unqueue ``int count`` Number of futexes in the list **Description** Helper to unqueue a list of futexes. This can't fail. **Return** - >=0 - Index of the last futex that was awoken; - -1 - No futex was awokenh](h)}(h**Parameters**h]j)}(hj[|h]h Parameters}(hj]|hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjY|ubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chM{hjU|ubj!)}(hhh](j")}(h:``struct futex_vector *v`` The list of futexes to unqueue h](j")}(h``struct futex_vector *v``h]j5)}(hjz|h]hstruct futex_vector *v}(hj||hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjx|ubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMxhjt|ubj!")}(hhh]h)}(hThe list of futexes to unqueueh]hThe list of futexes to unqueue}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj|hMxhj|ubah}(h]h ]h"]h$]h&]uh1j "hjt|ubeh}(h]h ]h"]h$]h&]uh1j"hj|hMxhjq|ubj")}(h,``int count`` Number of futexes in the list h](j")}(h ``int count``h]j5)}(hj|h]h int count}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj|ubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMyhj|ubj!")}(hhh]h)}(hNumber of futexes in the listh]hNumber of futexes in the list}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj|hMyhj|ubah}(h]h ]h"]h$]h&]uh1j "hj|ubeh}(h]h ]h"]h$]h&]uh1j"hj|hMyhjq|ubeh}(h]h ]h"]h$]h&]uh1j!hjU|ubh)}(h**Description**h]j)}(hj|h]h Description}(hj|hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj|ubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chM{hjU|ubh)}(h5Helper to unqueue a list of futexes. This can't fail.h]h7Helper to unqueue a list of futexes. 
This can’t fail.}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chM{hjU|ubh)}(h **Return**h]j)}(hj}h]hReturn}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj}ubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chM}hjU|ubj!)}(hL- >=0 - Index of the last futex that was awoken; - -1 - No futex was awokenh]j )}(hhh](j )}(h.>=0 - Index of the last futex that was awoken;h]h)}(hj4}h]h.>=0 - Index of the last futex that was awoken;}(hj6}hhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chM}hj2}ubah}(h]h ]h"]h$]h&]uh1j hj/}ubj )}(h-1 - No futex was awokenh]h option_list)}(hhh]hoption_list_item)}(hhh](h option_group)}(hhh]hoption)}(h-1h]h option_string)}(hja}h]h-1}hje}sbah}(h]h ]h"]h$]h&]uh1jc}hj_}ubah}(h]h ]h"]h$]h&]uh1j]}hjZ}ubah}(h]h ]h"]h$]h&]uh1jX}hjU}ubh description)}(h- No futex was awokenh]j )}(hhh]j )}(hNo futex was awokenh]h)}(hj}h]hNo futex was awoken}(hj}hhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chM~hj}ubah}(h]h ]h"]h$]h&]uh1j hj}ubah}(h]h ]h"]h$]h&]j! j" uh1j hj}hM~hj}ubah}(h]h ]h"]h$]h&]uh1j~}hjU}ubeh}(h]h ]h"]h$]h&]uh1jS}hjP}ubah}(h]h ]h"]h$]h&]uh1jN}hj}hM~hjJ}ubah}(h]h ]h"]h$]h&]uh1j hj/}ubeh}(h]h ]h"]h$]h&]j! 
j" uh1j hjC}hM}hj+}ubah}(h]h ]h"]h$]h&]uh1j!hjC}hM}hjU|ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!&futex_wait_multiple_setup (C function)c.futex_wait_multiple_setuphNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(hNint futex_wait_multiple_setup (struct futex_vector *vs, int count, int *woken)h]jZ!)}(hMint futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)h](j#)}(hinth]hint}(hj}hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj}hhh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMubj$)}(h h]h }(hj}hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj}hhhj}hMubj`!)}(hfutex_wait_multiple_setuph]jf!)}(hfutex_wait_multiple_setuph]hfutex_wait_multiple_setup}(hj ~hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj~ubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hj}hhhj}hMubj)$)}(h0(struct futex_vector *vs, int count, int *woken)h](j/$)}(hstruct futex_vector *vsh](j5$)}(hj8$h]hstruct}(hj&~hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj"~ubj$)}(h h]h }(hj3~hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj"~ubh)}(hhh]jf!)}(h futex_vectorh]h futex_vector}(hjD~hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjA~ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjF~modnameN classnameNjp$js$)}jv$]jy$)}jl$j ~sbc.futex_wait_multiple_setupasbuh1hhj"~ubj$)}(h h]h }(hjd~hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj"~ubj$)}(hj$h]h*}(hjr~hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj"~ubjf!)}(hvsh]hvs}(hj~hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj"~ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj~ubj/$)}(h int counth](j#)}(hinth]hint}(hj~hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj~ubj$)}(h h]h }(hj~hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj~ubjf!)}(hcounth]hcount}(hj~hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj~ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hj~ubj/$)}(h int *wokenh](j#)}(hinth]hint}(hj~hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj~ubj$)}(h h]h }(hj~hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj~ubj$)}(hj$h]h*}(hj~hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj~ubjf!)}(hwokenh]hwoken}(hj~hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj~ubeh}(h]h 
]h"]h$]h&]noemphjyjzuh1j.$hj~ubeh}(h]h ]h"]h$]h&]jyjzuh1j($hj}hhhj}hMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hj}hhhj}hMubah}(h]j}ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hj}hMhj}hhubj!)}(hhh]h)}(h,Prepare to wait and enqueue multiple futexesh]h,Prepare to wait and enqueue multiple futexes}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjhhubah}(h]h ]h"]h$]h&]uh1j!hj}hhhj}hMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j8j!j8j!j!j!uh1jN!hhhj5hNhNubj!)}(hXO**Parameters** ``struct futex_vector *vs`` The futex list to wait on ``int count`` The size of the list ``int *woken`` Index of the last woken futex, if any. Used to notify the caller that it can return this index to userspace (return parameter) **Description** Prepare multiple futexes in a single step and enqueue them. This may fail if the futex list is invalid or if any futex was already awoken. On success the task is ready to interruptible sleep. **Return** - 1 - One of the futexes was woken by another thread - 0 - Success - <0 - -EFAULT, -EWOULDBLOCK or -EINVALh](h)}(h**Parameters**h]j)}(hjBh]h Parameters}(hjDhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj@ubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhj<ubj!)}(hhh](j")}(h6``struct futex_vector *vs`` The futex list to wait on h](j")}(h``struct futex_vector *vs``h]j5)}(hjah]hstruct futex_vector *vs}(hjchhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj_ubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhj[ubj!")}(hhh]h)}(hThe futex list to wait onh]hThe futex list to wait on}(hjzhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjvhMhjwubah}(h]h ]h"]h$]h&]uh1j "hj[ubeh}(h]h ]h"]h$]h&]uh1j"hjvhMhjXubj")}(h#``int count`` The size of the list h](j")}(h ``int count``h]j5)}(hjh]h int count}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubah}(h]h 
]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjubj!")}(hhh]h)}(hThe size of the listh]hThe size of the list}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hjhMhjXubj")}(h``int *woken`` Index of the last woken futex, if any. Used to notify the caller that it can return this index to userspace (return parameter) h](j")}(h``int *woken``h]j5)}(hjh]h int *woken}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjubj!")}(hhh]h)}(h~Index of the last woken futex, if any. Used to notify the caller that it can return this index to userspace (return parameter)h]h~Index of the last woken futex, if any. Used to notify the caller that it can return this index to userspace (return parameter)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hjhMhjXubeh}(h]h ]h"]h$]h&]uh1j!hj<ubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhj ubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhj<ubh)}(hPrepare multiple futexes in a single step and enqueue them. This may fail if the futex list is invalid or if any futex was already awoken. On success the task is ready to interruptible sleep.h]hPrepare multiple futexes in a single step and enqueue them. This may fail if the futex list is invalid or if any futex was already awoken. 
On success the task is ready to interruptible sleep.}(hj%hhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhj<ubh)}(h **Return**h]j)}(hj6h]hReturn}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhj<ubj!)}(hl- 1 - One of the futexes was woken by another thread - 0 - Success - <0 - -EFAULT, -EWOULDBLOCK or -EINVALh]j )}(hhh](j )}(h21 - One of the futexes was woken by another threadh]h)}(hjUh]h21 - One of the futexes was woken by another thread}(hjWhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjSubah}(h]h ]h"]h$]h&]uh1j hjPubj )}(h 0 - Successh]h)}(hjmh]h 0 - Success}(hjohhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjkubah}(h]h ]h"]h$]h&]uh1j hjPubj )}(h%<0 - -EFAULT, -EWOULDBLOCK or -EINVALh]h)}(hjh]h%<0 - -EFAULT, -EWOULDBLOCK or -EINVAL}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjubah}(h]h ]h"]h$]h&]uh1j hjPubeh}(h]h ]h"]h$]h&]j! 
j" uh1j hjdhMhjLubah}(h]h ]h"]h$]h&]uh1j!hjdhMhj<ubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!!futex_sleep_multiple (C function)c.futex_sleep_multiplehNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(hcvoid futex_sleep_multiple (struct futex_vector *vs, unsigned int count, struct hrtimer_sleeper *to)h]jZ!)}(hbvoid futex_sleep_multiple(struct futex_vector *vs, unsigned int count, struct hrtimer_sleeper *to)h](j#)}(hvoidh]hvoid}(hjǀhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjÀhhh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMubj$)}(h h]h }(hjրhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjÀhhhjՀhMubj`!)}(hfutex_sleep_multipleh]jf!)}(hfutex_sleep_multipleh]hfutex_sleep_multiple}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjÀhhhjՀhMubj)$)}(hI(struct futex_vector *vs, unsigned int count, struct hrtimer_sleeper *to)h](j/$)}(hstruct futex_vector *vsh](j5$)}(hj8$h]hstruct}(hjhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjubh)}(hhh]jf!)}(h futex_vectorh]h futex_vector}(hj"hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj$modnameN classnameNjp$js$)}jv$]jy$)}jl$jsbc.futex_sleep_multipleasbuh1hhjubj$)}(h h]h }(hjBhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjubj$)}(hj$h]h*}(hjPhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjubjf!)}(hvsh]hvs}(hj]hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjubj/$)}(hunsigned int counth](j#)}(hunsignedh]hunsigned}(hjvhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjrubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjrubj#)}(hinth]hint}(hjhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjrubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjrubjf!)}(hcounth]hcount}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjrubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjubj/$)}(hstruct hrtimer_sleeper *toh](j5$)}(hj8$h]hstruct}(hjǁhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjÁubj$)}(h h]h }(hjԁhhhNhNubah}(h]h 
]j $ah"]h$]h&]uh1j#hjÁubh)}(hhh]jf!)}(hhrtimer_sleeperh]hhrtimer_sleeper}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjmodnameN classnameNjp$js$)}jv$]j>c.futex_sleep_multipleasbuh1hhjÁubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjÁubj$)}(hj$h]h*}(hjhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjÁubjf!)}(htoh]hto}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjÁubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjubeh}(h]h ]h"]h$]h&]jyjzuh1j($hjÀhhhjՀhMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjhhhjՀhMubah}(h]jah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjՀhMhjhhubj!)}(hhh]h)}(h#Check sleeping conditions and sleeph]h#Check sleeping conditions and sleep}(hjHhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjEhhubah}(h]h ]h"]h$]h&]uh1j!hjhhhjՀhMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!j`j!j`j!j!j!uh1jN!hhhj5hNhNubj!)}(hX**Parameters** ``struct futex_vector *vs`` List of futexes to wait for ``unsigned int count`` Length of vs ``struct hrtimer_sleeper *to`` Timeout **Description** Sleep if and only if the timeout hasn't expired and no futex on the list has been woken up.h](h)}(h**Parameters**h]j)}(hjjh]h Parameters}(hjlhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjhubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjdubj!)}(hhh](j")}(h8``struct futex_vector *vs`` List of futexes to wait for h](j")}(h``struct futex_vector *vs``h]j5)}(hjh]hstruct futex_vector *vs}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjubj!")}(hhh]h)}(hList of futexes to wait forh]hList of futexes to wait for}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hjhMhjubj")}(h$``unsigned int count`` Length of vs h](j")}(h``unsigned int count``h]j5)}(hj‚h]hunsigned int count}(hjĂhhhNhNubah}(h]h 
]h"]h$]h&]uh1j4hjubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjubj!")}(hhh]h)}(h Length of vsh]h Length of vs}(hjۂhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjׂhMhj؂ubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hjׂhMhjubj")}(h'``struct hrtimer_sleeper *to`` Timeout h](j")}(h``struct hrtimer_sleeper *to``h]j5)}(hjh]hstruct hrtimer_sleeper *to}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjubj!")}(hhh]h)}(hTimeouth]hTimeout}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hjhMhjubeh}(h]h ]h"]h$]h&]uh1j!hjdubh)}(h**Description**h]j)}(hj6h]h Description}(hj8hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj4ubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjdubh)}(h[Sleep if and only if the timeout hasn't expired and no futex on the list has been woken up.h]h]Sleep if and only if the timeout hasn’t expired and no futex on the list has been woken up.}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjdubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ! 
futex_wait_multiple (C function)c.futex_wait_multiplehNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(haint futex_wait_multiple (struct futex_vector *vs, unsigned int count, struct hrtimer_sleeper *to)h]jZ!)}(h`int futex_wait_multiple(struct futex_vector *vs, unsigned int count, struct hrtimer_sleeper *to)h](j#)}(hinth]hint}(hj{hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjwhhh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjwhhhjhMubj`!)}(hfutex_wait_multipleh]jf!)}(hfutex_wait_multipleh]hfutex_wait_multiple}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjwhhhjhMubj)$)}(hI(struct futex_vector *vs, unsigned int count, struct hrtimer_sleeper *to)h](j/$)}(hstruct futex_vector *vsh](j5$)}(hj8$h]hstruct}(hjhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjubj$)}(h h]h }(hjŃhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjubh)}(hhh]jf!)}(h futex_vectorh]h futex_vector}(hjփhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjӃubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj؃modnameN classnameNjp$js$)}jv$]jy$)}jl$jsbc.futex_wait_multipleasbuh1hhjubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjubj$)}(hj$h]h*}(hjhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjubjf!)}(hvsh]hvs}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjubj/$)}(hunsigned int counth](j#)}(hunsignedh]hunsigned}(hj*hhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj&ubj$)}(h h]h }(hj8hhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj&ubj#)}(hinth]hint}(hjFhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hj&ubj$)}(h h]h }(hjThhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj&ubjf!)}(hcounth]hcount}(hjbhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj&ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjubj/$)}(hstruct hrtimer_sleeper *toh](j5$)}(hj8$h]hstruct}(hj{hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjwubj$)}(h h]h }(hjhhAhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjwubh)}(hhh]jf!)}(hhrtimer_sleeperh]hhrtimer_sleeper}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjubah}(h]h ]h"]h$]h&] 
refdomainjAreftypejl$ reftargetjmodnameN classnameNjp$js$)}jv$]jc.futex_wait_multipleasbuh1hhjwubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjwubj$)}(hj$h]h*}(hjńhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjwubjf!)}(htoh]hto}(hj҄hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjwubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjubeh}(h]h ]h"]h$]h&]jyjzuh1j($hjwhhhjhMubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjshhhjhMubah}(h]jnah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjhMhjphhubj!)}(hhh]h)}(h.Prepare to wait on and enqueue several futexesh]h.Prepare to wait on and enqueue several futexes}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chM hjhhubah}(h]h ]h"]h$]h&]uh1j!hjphhhjhMubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jj!jj!j!j!uh1jN!hhhj5hNhNubj!)}(hX**Parameters** ``struct futex_vector *vs`` The list of futexes to wait on ``unsigned int count`` The number of objects ``struct hrtimer_sleeper *to`` Timeout before giving up and returning to userspace **Description** Entry point for the FUTEX_WAIT_MULTIPLE futex operation, this function sleeps on a group of futexes and returns on the first futex that is wake, or after the timeout has elapsed. 
**Return** - >=0 - Hint to the futex that was awoken - <0 - On errorh](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjubj!)}(hhh](j")}(h;``struct futex_vector *vs`` The list of futexes to wait on h](j")}(h``struct futex_vector *vs``h]j5)}(hj=h]hstruct futex_vector *vs}(hj?hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj;ubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhj7ubj!")}(hhh]h)}(hThe list of futexes to wait onh]hThe list of futexes to wait on}(hjVhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjRhMhjSubah}(h]h ]h"]h$]h&]uh1j "hj7ubeh}(h]h ]h"]h$]h&]uh1j"hjRhMhj4ubj")}(h-``unsigned int count`` The number of objects h](j")}(h``unsigned int count``h]j5)}(hjvh]hunsigned int count}(hjxhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjtubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjpubj!")}(hhh]h)}(hThe number of objectsh]hThe number of objects}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMhjubah}(h]h ]h"]h$]h&]uh1j "hjpubeh}(h]h ]h"]h$]h&]uh1j"hjhMhj4ubj")}(hS``struct hrtimer_sleeper *to`` Timeout before giving up and returning to userspace h](j")}(h``struct hrtimer_sleeper *to``h]j5)}(hjh]hstruct hrtimer_sleeper *to}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjubj!")}(hhh]h)}(h3Timeout before giving up and returning to userspaceh]h3Timeout before giving up and returning to userspace}(hjȅhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjąhMhjŅubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hjąhMhj4ubeh}(h]h ]h"]h$]h&]uh1j!hjubh)}(h**Description**h]j)}(hjh]h Description}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: 
./kernel/futex/waitwake.chMhjubh)}(hEntry point for the FUTEX_WAIT_MULTIPLE futex operation, this function sleeps on a group of futexes and returns on the first futex that is wake, or after the timeout has elapsed.h]hEntry point for the FUTEX_WAIT_MULTIPLE futex operation, this function sleeps on a group of futexes and returns on the first futex that is wake, or after the timeout has elapsed.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjubh)}(h **Return**h]j)}(hjh]hReturn}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjubj!)}(h:- >=0 - Hint to the futex that was awoken - <0 - On errorh]j )}(hhh](j )}(h'>=0 - Hint to the futex that was awokenh]h)}(hj0h]h'>=0 - Hint to the futex that was awoken}(hj2hhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhj.ubah}(h]h ]h"]h$]h&]uh1j hj+ubj )}(h<0 - On errorh]h)}(hjHh]h<0 - On error}(hjJhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMhjFubah}(h]h ]h"]h$]h&]uh1j hj+ubeh}(h]h ]h"]h$]h&]j! 
j" uh1j hj?hMhj'ubah}(h]h ]h"]h$]h&]uh1j!hj?hMhjubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubj>!)}(hhh]h}(h]h ]h"]h$]h&]entries](jJ!futex_wait_setup (C function)c.futex_wait_setuphNtauh1j=!hj5hhhNhNubjO!)}(hhh](jT!)}(hwint futex_wait_setup (u32 __user *uaddr, u32 val, unsigned int flags, struct futex_q *q, struct futex_hash_bucket **hb)h]jZ!)}(hvint futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags, struct futex_q *q, struct futex_hash_bucket **hb)h](j#)}(hinth]hint}(hjhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjhhh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMOubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjhhhjhMOubj`!)}(hfutex_wait_setuph]jf!)}(hfutex_wait_setuph]hfutex_wait_setup}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjubah}(h]h ](jx!jy!eh"]h$]h&]jyjzuh1j_!hjhhhjhMOubj)$)}(hb(u32 __user *uaddr, u32 val, unsigned int flags, struct futex_q *q, struct futex_hash_bucket **hb)h](j/$)}(hu32 __user *uaddrh](h)}(hhh]jf!)}(hu32h]hu32}(hjʆhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjdžubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj̆modnameN classnameNjp$js$)}jv$]jy$)}jl$jsbc.futex_wait_setupasbuh1hhjÆubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjÆubh__user}(hjÆhhhNhNubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjÆubj$)}(hj$h]h*}(hj hhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjÆubjf!)}(huaddrh]huaddr}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjÆubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjubj/$)}(hu32 valh](h)}(hhh]jf!)}(hu32h]hu32}(hj3hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj0ubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetj5modnameN classnameNjp$js$)}jv$]jc.futex_wait_setupasbuh1hhj,ubj$)}(h h]h }(hjQhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj,ubjf!)}(hvalh]hval}(hj_hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj,ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjubj/$)}(hunsigned int flagsh](j#)}(hunsignedh]hunsigned}(hjxhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjtubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j 
$ah"]h$]h&]uh1j#hjtubj#)}(hinth]hint}(hjhhhNhNubah}(h]h ]j#ah"]h$]h&]uh1j#hjtubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjtubjf!)}(hflagsh]hflags}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjtubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjubj/$)}(hstruct futex_q *qh](j5$)}(hj8$h]hstruct}(hjɇhhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hjŇubj$)}(h h]h }(hjևhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjŇubh)}(hhh]jf!)}(hfutex_qh]hfutex_q}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjmodnameN classnameNjp$js$)}jv$]jc.futex_wait_setupasbuh1hhjŇubj$)}(h h]h }(hjhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hjŇubj$)}(hj$h]h*}(hjhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hjŇubjf!)}(hjEh]hq}(hj hhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjŇubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjubj/$)}(hstruct futex_hash_bucket **hbh](j5$)}(hj8$h]hstruct}(hj8hhhNhNubah}(h]h ]jA$ah"]h$]h&]uh1j4$hj4ubj$)}(h h]h }(hjEhhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj4ubh)}(hhh]jf!)}(hfutex_hash_bucketh]hfutex_hash_bucket}(hjVhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hjSubah}(h]h ]h"]h$]h&] refdomainjAreftypejl$ reftargetjXmodnameN classnameNjp$js$)}jv$]jc.futex_wait_setupasbuh1hhj4ubj$)}(h h]h }(hjthhhNhNubah}(h]h ]j $ah"]h$]h&]uh1j#hj4ubj$)}(hj$h]h*}(hjhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj4ubj$)}(hj$h]h*}(hjhhhNhNubah}(h]h ]j$ah"]h$]h&]uh1j$hj4ubjf!)}(hhbh]hhb}(hjhhhNhNubah}(h]h ]jq!ah"]h$]h&]uh1je!hj4ubeh}(h]h ]h"]h$]h&]noemphjyjzuh1j.$hjubeh}(h]h ]h"]h$]h&]jyjzuh1j($hjhhhjhMOubeh}(h]h ]h"]h$]h&]jyjzj!uh1jY!j!j!hjhhhjhMOubah}(h]j}ah ](j!j!eh"]h$]h&]j!j!)j!huh1jS!hjhMOhjhhubj!)}(hhh]h)}(hPrepare to wait on a futexh]hPrepare to wait on a futex}(hjƈhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chM@hjÈhhubah}(h]h ]h"]h$]h&]uh1j!hjhhhjhMOubeh}(h]h ](jAfunctioneh"]h$]h&]j!jAj!jވj!jވj!j!j!uh1jN!hhhj5hNhNubj!)}(hX**Parameters** ``u32 __user *uaddr`` the futex userspace address ``u32 val`` the expected value ``unsigned int flags`` futex 
flags (FLAGS_SHARED, etc.) ``struct futex_q *q`` the associated futex_q ``struct futex_hash_bucket **hb`` storage for hash_bucket pointer to be returned to caller **Description** Setup the futex_q and locate the hash_bucket. Get the futex value and compare it with the expected value. Handle atomic faults internally. Return with the hb lock held on success, and unlocked on failure. **Return** - 0 - uaddr contains val and hb has been locked; - <1 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlockedh](h)}(h**Parameters**h]j)}(hjh]h Parameters}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMDhjubj!)}(hhh](j")}(h2``u32 __user *uaddr`` the futex userspace address h](j")}(h``u32 __user *uaddr``h]j5)}(hjh]hu32 __user *uaddr}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMAhjubj!")}(hhh]h)}(hthe futex userspace addressh]hthe futex userspace address}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMAhjubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hjhMAhjubj")}(h``u32 val`` the expected value h](j")}(h ``u32 val``h]j5)}(hj@h]hu32 val}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj>ubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMBhj:ubj!")}(hhh]h)}(hthe expected valueh]hthe expected value}(hjYhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjUhMBhjVubah}(h]h ]h"]h$]h&]uh1j "hj:ubeh}(h]h ]h"]h$]h&]uh1j"hjUhMBhjubj")}(h8``unsigned int flags`` futex flags (FLAGS_SHARED, etc.) 
h](j")}(h``unsigned int flags``h]j5)}(hjyh]hunsigned int flags}(hj{hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjwubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMChjsubj!")}(hhh]h)}(h futex flags (FLAGS_SHARED, etc.)h]h futex flags (FLAGS_SHARED, etc.)}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMChjubah}(h]h ]h"]h$]h&]uh1j "hjsubeh}(h]h ]h"]h$]h&]uh1j"hjhMChjubj")}(h-``struct futex_q *q`` the associated futex_q h](j")}(h``struct futex_q *q``h]j5)}(hjh]hstruct futex_q *q}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMDhjubj!")}(hhh]h)}(hthe associated futex_qh]hthe associated futex_q}(hjˉhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjljhMDhjȉubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hjljhMDhjubj")}(h[``struct futex_hash_bucket **hb`` storage for hash_bucket pointer to be returned to caller h](j")}(h!``struct futex_hash_bucket **hb``h]j5)}(hjh]hstruct futex_hash_bucket **hb}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubah}(h]h ]h"]h$]h&]uh1j"h`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMEhjubj!")}(hhh]h)}(h8storage for hash_bucket pointer to be returned to callerh]h8storage for hash_bucket pointer to be returned to caller}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhMEhjubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hjhMEhjubeh}(h]h ]h"]h$]h&]uh1j!hjubh)}(h**Description**h]j)}(hj&h]h Description}(hj(hhhNhNubah}(h]h ]h"]h$]h&]uh1jhj$ubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMGhjubh)}(hSetup the futex_q and locate the hash_bucket. Get the futex value and compare it with the expected value. Handle atomic faults internally. Return with the hb lock held on success, and unlocked on failure.h]hSetup the futex_q and locate the hash_bucket. 
Get the futex value and compare it with the expected value. Handle atomic faults internally. Return with the hb lock held on success, and unlocked on failure.}(hj<hhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMGhjubh)}(h **Return**h]j)}(hjMh]hReturn}(hjOhhhNhNubah}(h]h ]h"]h$]h&]uh1jhjKubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMKhjubj!)}(h- 0 - uaddr contains val and hb has been locked; - <1 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlockedh]j )}(hhh](j )}(h.0 - uaddr contains val and hb has been locked;h]h)}(hjlh]h.0 - uaddr contains val and hb has been locked;}(hjnhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMKhjjubah}(h]h ]h"]h$]h&]uh1j hjgubj )}(hL<1 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlockedh]h)}(hjh]hL<1 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlocked}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hh`/var/lib/git/docbuild/linux/Documentation/kernel-hacking/locking:1367: ./kernel/futex/waitwake.chMLhjubah}(h]h ]h"]h$]h&]uh1j hjgubeh}(h]h ]h"]h$]h&]j! j" uh1j hj{hMKhjcubah}(h]h ]h"]h$]h&]uh1j!hj{hMKhjubeh}(h]h ] kernelindentah"]h$]h&]uh1j!hj5hhhNhNubeh}(h]futex-api-referenceah ]h"]futex api referenceah$]h&]uh1hhhhhhhhMIubh)}(hhh](h)}(hFurther readingh]hFurther reading}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjhhhhhM[ubj )}(hhh](j )}(he``Documentation/locking/spinlocks.rst``: Linus Torvalds' spinlocking tutorial in the kernel sources. 
h]h)}(hd``Documentation/locking/spinlocks.rst``: Linus Torvalds' spinlocking tutorial in the kernel sources.h](j5)}(h'``Documentation/locking/spinlocks.rst``h]h#Documentation/locking/spinlocks.rst}(hjъhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj͊ubh?: Linus Torvalds’ spinlocking tutorial in the kernel sources.}(hj͊hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhM]hjɊubah}(h]h ]h"]h$]h&]uh1j hjƊhhhhhNubj )}(hX?Unix Systems for Modern Architectures: Symmetric Multiprocessing and Caching for Kernel Programmers: Curt Schimmel's very good introduction to kernel level locking (not written for Linux, but nearly everything applies). The book is expensive, but really worth every penny to understand SMP locking. [ISBN: 0201633388] h](h)}(hdUnix Systems for Modern Architectures: Symmetric Multiprocessing and Caching for Kernel Programmers:h]hdUnix Systems for Modern Architectures: Symmetric Multiprocessing and Caching for Kernel Programmers:}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhM`hjubh)}(hCurt Schimmel's very good introduction to kernel level locking (not written for Linux, but nearly everything applies). The book is expensive, but really worth every penny to understand SMP locking. [ISBN: 0201633388]h]hCurt Schimmel’s very good introduction to kernel level locking (not written for Linux, but nearly everything applies). The book is expensive, but really worth every penny to understand SMP locking. [ISBN: 0201633388]}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMchjubeh}(h]h ]h"]h$]h&]uh1j hjƊhhhhhNubeh}(h]h ]h"]h$]h&]j! 
j" uh1j hhhM]hjhhubeh}(h]further-readingah ]h"]further readingah$]h&]uh1hhhhhhhhM[ubh)}(hhh](h)}(hThanksh]hThanks}(hj&hhhNhNubah}(h]h ]h"]h$]h&]uh1hhj#hhhhhMiubh)}(hBThanks to Telsa Gwynne for DocBooking, neatening and adding style.h]hBThanks to Telsa Gwynne for DocBooking, neatening and adding style.}(hj4hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMkhj#hhubh)}(hThanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul Mackerras, Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim Waugh, Pete Zaitcev, James Morris, Robert Love, Paul McKenney, John Ashby for proofreading, correcting, flaming, commenting.h]hThanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul Mackerras, Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim Waugh, Pete Zaitcev, James Morris, Robert Love, Paul McKenney, John Ashby for proofreading, correcting, flaming, commenting.}(hjBhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMmhj#hhubh)}(h=Thanks to the cabal for having no influence on this document.h]h=Thanks to the cabal for having no influence on this document.}(hjPhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMrhj#hhubeh}(h]thanksah ]h"]thanksah$]h&]uh1hhhhhhhhMiubh)}(hhh](h)}(hGlossaryh]hGlossary}(hjihhhNhNubah}(h]h ]h"]h$]h&]uh1hhjfhhhhhMuubj!)}(hhh](j")}(hXpreemption Prior to 2.5, or when ``CONFIG_PREEMPT`` is unset, processes in user context inside the kernel would not preempt each other (ie. you had that CPU until you gave it up, except for interrupts). With the addition of ``CONFIG_PREEMPT`` in 2.5.4, this changed: when in user context, higher priority tasks can "cut in": spinlocks were changed to disable preemption, even on UP. h](j")}(h preemptionh]h preemption}(hj~hhhNhNubah}(h]h ]h"]h$]h&]uh1j"hhhM}hjzubj!")}(hhh]h)}(hXsPrior to 2.5, or when ``CONFIG_PREEMPT`` is unset, processes in user context inside the kernel would not preempt each other (ie. you had that CPU until you gave it up, except for interrupts). 
With the addition of ``CONFIG_PREEMPT`` in 2.5.4, this changed: when in user context, higher priority tasks can "cut in": spinlocks were changed to disable preemption, even on UP.h](hPrior to 2.5, or when }(hjhhhNhNubj5)}(h``CONFIG_PREEMPT``h]hCONFIG_PREEMPT}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubh is unset, processes in user context inside the kernel would not preempt each other (ie. you had that CPU until you gave it up, except for interrupts). With the addition of }(hjhhhNhNubj5)}(h``CONFIG_PREEMPT``h]hCONFIG_PREEMPT}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubh in 2.5.4, this changed: when in user context, higher priority tasks can “cut in”: spinlocks were changed to disable preemption, even on UP.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMxhjubah}(h]h ]h"]h$]h&]uh1j "hjzubeh}(h]h ]h"]h$]h&]uh1j"hhhM}hjwubj")}(hX3bh Bottom Half: for historical reasons, functions with '_bh' in them often now refer to any software interrupt, e.g. spin_lock_bh() blocks any software interrupt on the current CPU. Bottom halves are deprecated, and will eventually be replaced by tasklets. Only one bottom half will be running at any time. h](j")}(hbhh]hbh}(hjыhhhNhNubah}(h]h ]h"]h$]h&]uh1j"hhhMhj͋ubj!")}(hhh]h)}(hX/Bottom Half: for historical reasons, functions with '_bh' in them often now refer to any software interrupt, e.g. spin_lock_bh() blocks any software interrupt on the current CPU. Bottom halves are deprecated, and will eventually be replaced by tasklets. Only one bottom half will be running at any time.h]hX3Bottom Half: for historical reasons, functions with ‘_bh’ in them often now refer to any software interrupt, e.g. spin_lock_bh() blocks any software interrupt on the current CPU. Bottom halves are deprecated, and will eventually be replaced by tasklets. Only one bottom half will be running at any time.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjߋubah}(h]h ]h"]h$]h&]uh1j "hj͋ubeh}(h]h ]h"]h$]h&]uh1j"hhhMhjwhhubj")}(hyHardware Interrupt / Hardware IRQ Hardware interrupt request. 
in_hardirq() returns true in a hardware interrupt handler. h](j")}(h!Hardware Interrupt / Hardware IRQh]h!Hardware Interrupt / Hardware IRQ}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j"hhhMhjubj!")}(hhh]h)}(hVHardware interrupt request. in_hardirq() returns true in a hardware interrupt handler.h]hVHardware interrupt request. in_hardirq() returns true in a hardware interrupt handler.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hhhMhjwhhubj")}(hInterrupt Context Not user context: processing a hardware irq or software irq. Indicated by the in_interrupt() macro returning true. h](j")}(hInterrupt Contexth]hInterrupt Context}(hj/hhhNhNubah}(h]h ]h"]h$]h&]uh1j"hhhMhj+ubj!")}(hhh]h)}(hrNot user context: processing a hardware irq or software irq. Indicated by the in_interrupt() macro returning true.h]hrNot user context: processing a hardware irq or software irq. Indicated by the in_interrupt() macro returning true.}(hj@hhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj=ubah}(h]h ]h"]h$]h&]uh1j "hj+ubeh}(h]h ]h"]h$]h&]uh1j"hhhMhjwhhubj")}(h_SMP Symmetric Multi-Processor: kernels compiled for multiple-CPU machines. (``CONFIG_SMP=y``). h](j")}(hSMPh]hSMP}(hj^hhhNhNubah}(h]h ]h"]h$]h&]uh1j"hhhMhjZubj!")}(hhh]h)}(hZSymmetric Multi-Processor: kernels compiled for multiple-CPU machines. (``CONFIG_SMP=y``).h](hHSymmetric Multi-Processor: kernels compiled for multiple-CPU machines. (}(hjohhhNhNubj5)}(h``CONFIG_SMP=y``h]h CONFIG_SMP=y}(hjwhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjoubh).}(hjohhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjlubah}(h]h ]h"]h$]h&]uh1j "hjZubeh}(h]h ]h"]h$]h&]uh1j"hhhMhjwhhubj")}(hX~Software Interrupt / softirq Software interrupt handler. in_hardirq() returns false; in_softirq() returns true. Tasklets and softirqs both fall into the category of 'software interrupts'. Strictly speaking a softirq is one of up to 32 enumerated software interrupts which can run on multiple CPUs at once. Sometimes used to refer to tasklets as well (ie. 
all software interrupts). h](j")}(hSoftware Interrupt / softirqh]hSoftware Interrupt / softirq}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j"hhhMhjubj!")}(hhh](h)}(hSoftware interrupt handler. in_hardirq() returns false; in_softirq() returns true. Tasklets and softirqs both fall into the category of 'software interrupts'.h]hSoftware interrupt handler. in_hardirq() returns false; in_softirq() returns true. Tasklets and softirqs both fall into the category of ‘software interrupts’.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubh)}(hStrictly speaking a softirq is one of up to 32 enumerated software interrupts which can run on multiple CPUs at once. Sometimes used to refer to tasklets as well (ie. all software interrupts).h]hStrictly speaking a softirq is one of up to 32 enumerated software interrupts which can run on multiple CPUs at once. Sometimes used to refer to tasklets as well (ie. all software interrupts).}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubeh}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hhhMhjwhhubj")}(hltasklet A dynamically-registrable software interrupt, which is guaranteed to only run on one CPU at a time. h](j")}(htaskleth]htasklet}(hj܌hhhNhNubah}(h]h ]h"]h$]h&]uh1j"hhhMhj،ubj!")}(hhh]h)}(hcA dynamically-registrable software interrupt, which is guaranteed to only run on one CPU at a time.h]hcA dynamically-registrable software interrupt, which is guaranteed to only run on one CPU at a time.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j "hj،ubeh}(h]h ]h"]h$]h&]uh1j"hhhMhjwhhubj")}(htimer A dynamically-registrable software interrupt, which is run at (or close to) a given time. When running, it is just like a tasklet (in fact, they are called from the ``TIMER_SOFTIRQ``). h](j")}(htimerh]htimer}(hj hhhNhNubah}(h]h ]h"]h$]h&]uh1j"hhhMhjubj!")}(hhh]h)}(hA dynamically-registrable software interrupt, which is run at (or close to) a given time. 
When running, it is just like a tasklet (in fact, they are called from the ``TIMER_SOFTIRQ``).h](hA dynamically-registrable software interrupt, which is run at (or close to) a given time. When running, it is just like a tasklet (in fact, they are called from the }(hjhhhNhNubj5)}(h``TIMER_SOFTIRQ``h]h TIMER_SOFTIRQ}(hj$hhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubh).}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hhhMhjwhhubj")}(h/UP Uni-Processor: Non-SMP. (``CONFIG_SMP=n``). h](j")}(hUPh]hUP}(hjLhhhNhNubah}(h]h ]h"]h$]h&]uh1j"hhhMhjHubj!")}(hhh]h)}(h+Uni-Processor: Non-SMP. (``CONFIG_SMP=n``).h](hUni-Processor: Non-SMP. (}(hj]hhhNhNubj5)}(h``CONFIG_SMP=n``h]h CONFIG_SMP=n}(hjehhhNhNubah}(h]h ]h"]h$]h&]uh1j4hj]ubh).}(hj]hhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjZubah}(h]h ]h"]h$]h&]uh1j "hjHubeh}(h]h ]h"]h$]h&]uh1j"hhhMhjwhhubj")}(hXUser Context The kernel executing on behalf of a particular process (ie. a system call or trap) or kernel thread. You can tell which process with the ``current`` macro.) Not to be confused with userspace. Can be interrupted by software or hardware interrupts. h](j")}(h User Contexth]h User Context}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j"hhhMhjubj!")}(hhh]h)}(hThe kernel executing on behalf of a particular process (ie. a system call or trap) or kernel thread. You can tell which process with the ``current`` macro.) Not to be confused with userspace. Can be interrupted by software or hardware interrupts.h](hThe kernel executing on behalf of a particular process (ie. a system call or trap) or kernel thread. You can tell which process with the }(hjhhhNhNubj5)}(h ``current``h]hcurrent}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1j4hjubhb macro.) Not to be confused with userspace. 
Can be interrupted by software or hardware interrupts.}(hjhhhNhNubeh}(h]h ]h"]h$]h&]uh1hhhhMhjubah}(h]h ]h"]h$]h&]uh1j "hjubeh}(h]h ]h"]h$]h&]uh1j"hhhMhjwhhubj")}(h>Userspace A process executing its own code outside the kernel.h](j")}(h Userspaceh]h Userspace}(hj΍hhhNhNubah}(h]h ]h"]h$]h&]uh1j"hhhMhjʍubj!")}(hhh]h)}(h4A process executing its own code outside the kernel.h]h4A process executing its own code outside the kernel.}(hjߍhhhNhNubah}(h]h ]h"]h$]h&]uh1hhhhMhj܍ubah}(h]h ]h"]h$]h&]uh1j "hjʍubeh}(h]h ]h"]h$]h&]uh1j"hhhMhjwhhubeh}(h]h ]h"]h$]h&]uh1j!hjfhhhhhNubeh}(h]glossaryah ]h"]glossaryah$]h&]uh1hhhhhhhhMuubeh}(h](unreliable-guide-to-lockingheh ]h"](unreliable guide to lockingkernel_hacking_lockeh$]h&]uh1hhhhhhhhKexpect_referenced_by_name}j hsexpect_referenced_by_id}hhsubeh}(h]h ]h"]h$]h&]sourcehuh1hcurrent_sourceN current_lineNsettingsdocutils.frontendValues)}(hN generatorN datestampN source_linkN source_urlN toc_backlinksjfootnote_backlinksK sectnum_xformKstrip_commentsNstrip_elements_with_classesN strip_classesN report_levelK halt_levelKexit_status_levelKdebugNwarning_streamN tracebackinput_encoding utf-8-siginput_encoding_error_handlerstrictoutput_encodingutf-8output_encoding_error_handlerj6error_encodingutf-8error_encoding_error_handlerbackslashreplace language_codeenrecord_dependenciesNconfigN id_prefixhauto_id_prefixid dump_settingsNdump_internalsNdump_transformsNdump_pseudo_xmlNexpose_internalsNstrict_visitorN_disable_configN_sourceh _destinationN _config_files]7/var/lib/git/docbuild/linux/Documentation/docutils.confafile_insertion_enabled raw_enabledKline_length_limitM'pep_referencesN pep_base_urlhttps://peps.python.org/pep_file_url_templatepep-%04drfc_referencesN rfc_base_url&https://datatracker.ietf.org/doc/html/ tab_widthKtrim_footnote_reference_spacesyntax_highlightlong smart_quotessmartquotes_locales]character_level_inline_markupdoctitle_xform docinfo_xformKsectsubtitle_xform 
image_loadinglinkembed_stylesheetcloak_email_addressessection_self_linkenvNubreporterNindirect_targets]substitution_defs}substitution_names}refnames}(0what functions are safe to call from interrupts?]jjadeadlock: simple and advanced]jahard irq context]ja per-cpu data]jPaurefids}h]hasnameids}(j hj j j9j6jjjjjjjjjkjhjjjRjOjyjvjjjjjjj j jjj~j{jjj j"jW jT j~ j{ j-j*j%j"jpjmjjjjjujrjjjjjjjjjjj3j0j+j(jjjjjjjjjCj`jjj)!jzj j j"!j!j5j5jjj jjcj`jju nametypes}(j j j9jjjjjkjjRjyjjjj jj~jj jW j~ j-j%jpjjjujjjjjj3j+jjjjjCjj)!j j"!j5jj jcjuh}(hhj hj6jjj<jjjjjjjhjjjnjOjjvjUjj|jjjjj jjjj{j7jjj"jjT jj{ jZ j*j j"j# jmj0jjsjjjrjjjxjjjjjjjj0j0jj(jjj6jjjjFjjj`jjjFjzjj jj!j j5j,!jL!jU!j"j"j#j#j{%j%j)'j.'j(j(jg*jl*j,j,j.j.j0j0j+2j02j3j3jj5j 6j6j7j7j+;j0;jN?jS?j@j@jLCjQCjMEjREjFjFjHjHjHJjNJj/Nj4NjYPj^PjLSjQSjTjTjZjZj]j]j`j`jdjdj0kj5kjpjpjsjsjxjxj7{j<{j}j}jjjnjsj}jjjj`j#jjfjjjju footnote_refs} citation_refs} autofootnotes]autofootnote_refs]symbol_footnotes]symbol_footnote_refs] footnotes] citations]autofootnote_startKsymbol_footnote_startK id_counter collectionsCounter}jDKsRparse_messages]hsystem_message)}(hhh]h)}(heUnexpected possible title overline or transition. Treating it as ordinary text because it's so short.h]hgUnexpected possible title overline or transition. Treating it as ordinary text because it’s so short.}(hjhhhNhNubah}(h]h ]h"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]levelKtypeINFOlineMBsourcehuh1jhj ubatransform_messages]j)}(hhh]h)}(hhh]h9Hyperlink target "kernel-hacking-lock" is not referenced.}hjĎsbah}(h]h ]h"]h$]h&]uh1hhjubah}(h]h ]h"]h$]h&]levelKtypejsourcehlineKuh1juba transformerN include_log] decorationNhhub.