Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/Changes                                    31
-rw-r--r--  Documentation/CodingStyle                                41
-rw-r--r--  Documentation/RCU/rcuref.txt                             87
-rw-r--r--  Documentation/SubmittingDrivers                          24
-rw-r--r--  Documentation/SubmittingPatches                          63
-rw-r--r--  Documentation/applying-patches.txt                       29
-rw-r--r--  Documentation/block/stat.txt                             82
-rw-r--r--  Documentation/cpu-hotplug.txt                           357
-rw-r--r--  Documentation/cpusets.txt                               161
-rw-r--r--  Documentation/fb/cyblafb/bugs                             1
-rw-r--r--  Documentation/fb/cyblafb/fb.modes                        57
-rw-r--r--  Documentation/fb/cyblafb/performance                      1
-rw-r--r--  Documentation/fb/cyblafb/todo                             5
-rw-r--r--  Documentation/fb/cyblafb/usage                           33
-rw-r--r--  Documentation/fb/cyblafb/whatsnew                        29
-rw-r--r--  Documentation/filesystems/ext3.txt                        5
-rw-r--r--  Documentation/filesystems/proc.txt                       17
-rw-r--r--  Documentation/filesystems/ramfs-rootfs-initramfs.txt     72
-rw-r--r--  Documentation/filesystems/relayfs.txt                   126
-rw-r--r--  Documentation/keys-request-key.txt                       22
-rw-r--r--  Documentation/keys.txt                                   43
-rw-r--r--  Documentation/networking/bonding.txt                      2
-rw-r--r--  Documentation/sysctl/vm.txt                              20
23 files changed, 1085 insertions, 223 deletions
diff --git a/Documentation/Changes b/Documentation/Changes
index 86b86399d61d72..fe5ae0f550202f 100644
--- a/Documentation/Changes
+++ b/Documentation/Changes
@@ -31,8 +31,6 @@ al español de este documento en varios formatos.
Eine deutsche Version dieser Datei finden Sie unter
<http://www.stefan-winter.de/Changes-2.4.0.txt>.
-Last updated: October 29th, 2002
-
Chris Ricker (kaboom@gatech.edu or chris.ricker@genetics.utah.edu).
Current Minimal Requirements
@@ -48,7 +46,7 @@ necessary on all systems; obviously, if you don't have any ISDN
hardware, for example, you probably needn't concern yourself with
isdn4k-utils.
-o Gnu C 2.95.3 # gcc --version
+o Gnu C 3.2 # gcc --version
o Gnu make 3.79.1 # make --version
o binutils 2.12 # ld -v
o util-linux 2.10o # fdformat --version
@@ -74,26 +72,7 @@ GCC
---
The gcc version requirements may vary depending on the type of CPU in your
-computer. The next paragraph applies to users of x86 CPUs, but not
-necessarily to users of other CPUs. Users of other CPUs should obtain
-information about their gcc version requirements from another source.
-
-The recommended compiler for the kernel is gcc 2.95.x (x >= 3), and it
-should be used when you need absolute stability. You may use gcc 3.0.x
-instead if you wish, although it may cause problems. Later versions of gcc
-have not received much testing for Linux kernel compilation, and there are
-almost certainly bugs (mainly, but not exclusively, in the kernel) that
-will need to be fixed in order to use these compilers. In any case, using
-pgcc instead of plain gcc is just asking for trouble.
-
-The Red Hat gcc 2.96 compiler subtree can also be used to build this tree.
-You should ensure you use gcc-2.96-74 or later. gcc-2.96-54 will not build
-the kernel correctly.
-
-In addition, please pay attention to compiler optimization. Anything
-greater than -O2 may not be wise. Similarly, if you choose to use gcc-2.95.x
-or derivatives, be sure not to use -fstrict-aliasing (which, depending on
-your version of gcc 2.95.x, may necessitate using -fno-strict-aliasing).
+computer.
Make
----
@@ -322,9 +301,9 @@ Getting updated software
Kernel compilation
******************
-gcc 2.95.3
-----------
-o <ftp://ftp.gnu.org/gnu/gcc/gcc-2.95.3.tar.gz>
+gcc
+---
+o <ftp://ftp.gnu.org/gnu/gcc/>
Make
----
diff --git a/Documentation/CodingStyle b/Documentation/CodingStyle
index eb7db3c192273a..ce780ef648f1d5 100644
--- a/Documentation/CodingStyle
+++ b/Documentation/CodingStyle
@@ -344,7 +344,7 @@ Remember: if another thread can find your data structure, and you don't
have a reference count on it, you almost certainly have a bug.
- Chapter 11: Macros, Enums, Inline functions and RTL
+ Chapter 11: Macros, Enums and RTL
Names of macros defining constants and labels in enums are capitalized.
@@ -429,7 +429,35 @@ from void pointer to any other pointer type is guaranteed by the C programming
language.
- Chapter 14: References
+ Chapter 14: The inline disease
+
+There appears to be a common misperception that gcc has a magic "make me
+faster" speedup option called "inline". While the use of inlines can be
+appropriate (for example as a means of replacing macros, see Chapter 11), it
+very often is not. Abundant use of the inline keyword leads to a much bigger
+kernel, which in turn slows the system as a whole down, due to a bigger
+icache footprint for the CPU and simply because there is less memory
+available for the pagecache. Just think about it; a pagecache miss causes a
+disk seek, which easily takes 5 milliseconds. There are a LOT of cpu cycles
+that can go into these 5 milliseconds.
+
+A reasonable rule of thumb is to not put inline on functions that have more
+than 3 lines of code in them. An exception to this rule is the case where
+a parameter is known to be a compile-time constant, and as a result of this
+constantness you *know* the compiler will be able to optimize most of your
+function away at compile time. For a good example of this latter case, see
+the kmalloc() inline function.
+
+Often people argue that adding inline to functions that are static and used
+only once is always a win since there is no space tradeoff. While this is
+technically correct, gcc is capable of inlining these automatically without
+help, and the maintenance issue of removing the inline when a second user
+appears outweighs the potential value of the hint that tells gcc to do
+something it would have done anyway.
+
+
+
+ Chapter 15: References
The C Programming Language, Second Edition
by Brian W. Kernighan and Dennis M. Ritchie.
@@ -444,10 +472,13 @@ ISBN 0-201-61586-X.
URL: http://cm.bell-labs.com/cm/cs/tpop/
GNU manuals - where in compliance with K&R and this text - for cpp, gcc,
-gcc internals and indent, all available from http://www.gnu.org
+gcc internals and indent, all available from http://www.gnu.org/manual/
WG14 is the international standardization working group for the programming
-language C, URL: http://std.dkuug.dk/JTC1/SC22/WG14/
+language C, URL: http://www.open-std.org/JTC1/SC22/WG14/
+
+Kernel CodingStyle, by greg@kroah.com at OLS 2002:
+http://www.kroah.com/linux/talks/ols_2002_kernel_codingstyle_talk/html/
--
-Last updated on 16 February 2004 by a community effort on LKML.
+Last updated on 30 December 2005 by a community effort on LKML.
diff --git a/Documentation/RCU/rcuref.txt b/Documentation/RCU/rcuref.txt
index a23fee66064df4..3f60db41b2f0fd 100644
--- a/Documentation/RCU/rcuref.txt
+++ b/Documentation/RCU/rcuref.txt
@@ -1,74 +1,67 @@
-Refcounter framework for elements of lists/arrays protected by
-RCU.
+Refcounter design for elements of lists/arrays protected by RCU.
Refcounting on elements of lists which are protected by traditional
reader/writer spinlocks or semaphores are straight forward as in:
-1. 2.
-add() search_and_reference()
-{ {
- alloc_object read_lock(&list_lock);
- ... search_for_element
- atomic_set(&el->rc, 1); atomic_inc(&el->rc);
- write_lock(&list_lock); ...
- add_element read_unlock(&list_lock);
- ... ...
- write_unlock(&list_lock); }
+1. 2.
+add() search_and_reference()
+{ {
+ alloc_object read_lock(&list_lock);
+ ... search_for_element
+ atomic_set(&el->rc, 1); atomic_inc(&el->rc);
+ write_lock(&list_lock); ...
+ add_element read_unlock(&list_lock);
+ ... ...
+ write_unlock(&list_lock); }
}
3. 4.
release_referenced() delete()
{ {
- ... write_lock(&list_lock);
- atomic_dec(&el->rc, relfunc) ...
- ... delete_element
-} write_unlock(&list_lock);
- ...
- if (atomic_dec_and_test(&el->rc))
- kfree(el);
- ...
+ ... write_lock(&list_lock);
+ atomic_dec(&el->rc, relfunc) ...
+ ... delete_element
+} write_unlock(&list_lock);
+ ...
+ if (atomic_dec_and_test(&el->rc))
+ kfree(el);
+ ...
}
If this list/array is made lock free using rcu as in changing the
write_lock in add() and delete() to spin_lock and changing read_lock
-in search_and_reference to rcu_read_lock(), the rcuref_get in
+in search_and_reference to rcu_read_lock(), the atomic_get in
search_and_reference could potentially hold reference to an element which
-has already been deleted from the list/array. rcuref_lf_get_rcu takes
+has already been deleted from the list/array. atomic_inc_not_zero takes
care of this scenario. search_and_reference should look as;
1. 2.
add() search_and_reference()
{ {
- alloc_object rcu_read_lock();
- ... search_for_element
- atomic_set(&el->rc, 1); if (rcuref_inc_lf(&el->rc)) {
- write_lock(&list_lock); rcu_read_unlock();
- return FAIL;
- add_element }
- ... ...
- write_unlock(&list_lock); rcu_read_unlock();
+ alloc_object rcu_read_lock();
+ ... search_for_element
+ atomic_set(&el->rc, 1); if (atomic_inc_not_zero(&el->rc)) {
+ write_lock(&list_lock); rcu_read_unlock();
+ return FAIL;
+ add_element }
+ ... ...
+ write_unlock(&list_lock); rcu_read_unlock();
} }
3. 4.
release_referenced() delete()
{ {
- ... write_lock(&list_lock);
- rcuref_dec(&el->rc, relfunc) ...
- ... delete_element
-} write_unlock(&list_lock);
- ...
- if (rcuref_dec_and_test(&el->rc))
- call_rcu(&el->head, el_free);
- ...
+ ... write_lock(&list_lock);
+ atomic_dec(&el->rc, relfunc) ...
+ ... delete_element
+} write_unlock(&list_lock);
+ ...
+ if (atomic_dec_and_test(&el->rc))
+ call_rcu(&el->head, el_free);
+ ...
}
Sometimes, reference to the element need to be obtained in the
-update (write) stream. In such cases, rcuref_inc_lf might be an overkill
-since the spinlock serialising list updates are held. rcuref_inc
+update (write) stream. In such cases, atomic_inc_not_zero might be an
+overkill since the spinlock serialising list updates is held. atomic_inc
is to be used in such cases.
-For arches which do not have cmpxchg rcuref_inc_lf
-api uses a hashed spinlock implementation and the same hashed spinlock
-is acquired in all rcuref_xxx primitives to preserve atomicity.
-Note: Use rcuref_inc api only if you need to use rcuref_inc_lf on the
-refcounter atleast at one place. Mixing rcuref_inc and atomic_xxx api
-might lead to races. rcuref_inc_lf() must be used in lockfree
-RCU critical sections only.
+
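+A minimal sketch of the lockless lookup path described above (the element
+type and the find_element_rcu() helper are made up for illustration and are
+not kernel APIs):
+
+	struct element {
+		struct list_head list;
+		atomic_t rc;
+		struct rcu_head head;
+	};
+
+	struct element *search_and_reference(struct list_head *head, int key)
+	{
+		struct element *el;
+
+		rcu_read_lock();
+		el = find_element_rcu(head, key); /* hypothetical RCU-safe search */
+		if (el && !atomic_inc_not_zero(&el->rc))
+			el = NULL;	/* already deleted, do not touch it */
+		rcu_read_unlock();
+		return el;	/* caller later drops the reference as in 3. */
+	}
+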
diff --git a/Documentation/SubmittingDrivers b/Documentation/SubmittingDrivers
index c3cca924e94b4a..dd311cff1cc30b 100644
--- a/Documentation/SubmittingDrivers
+++ b/Documentation/SubmittingDrivers
@@ -27,18 +27,17 @@ Who To Submit Drivers To
------------------------
Linux 2.0:
- No new drivers are accepted for this kernel tree
+ No new drivers are accepted for this kernel tree.
Linux 2.2:
+ No new drivers are accepted for this kernel tree.
+
+Linux 2.4:
If the code area has a general maintainer then please submit it to
the maintainer listed in MAINTAINERS in the kernel file. If the
maintainer does not respond or you cannot find the appropriate
- maintainer then please contact the 2.2 kernel maintainer:
- Marc-Christian Petersen <m.c.p@wolk-project.de>.
-
-Linux 2.4:
- The same rules apply as 2.2. The final contact point for Linux 2.4
- submissions is Marcelo Tosatti <marcelo.tosatti@cyclades.com>.
+ maintainer then please contact Marcelo Tosatti
+ <marcelo.tosatti@cyclades.com>.
Linux 2.6:
The same rules apply as 2.4 except that you should follow linux-kernel
@@ -53,6 +52,7 @@ Licensing: The code must be released to us under the
of exclusive GPL licensing, and if you wish the driver
to be useful to other communities such as BSD you may well
wish to release under multiple licenses.
+ See accepted licenses at include/linux/module.h
Copyright: The copyright owner must agree to use of GPL.
It's best if the submitter and copyright owner
@@ -143,5 +143,13 @@ KernelNewbies:
http://kernelnewbies.org/
Linux USB project:
- http://sourceforge.net/projects/linux-usb/
+ http://linux-usb.sourceforge.net/
+
+How to NOT write a kernel driver, by arjanv@redhat.com
+ http://people.redhat.com/arjanv/olspaper.pdf
+
+Kernel Janitor:
+ http://janitor.kernelnewbies.org/
+--
+Last updated on 17 Nov 2005.
diff --git a/Documentation/SubmittingPatches b/Documentation/SubmittingPatches
index 1d47e6c09dc60c..6198e5ebcf65be 100644
--- a/Documentation/SubmittingPatches
+++ b/Documentation/SubmittingPatches
@@ -78,7 +78,9 @@ Randy Dunlap's patch scripts:
http://www.xenotime.net/linux/scripts/patching-scripts-002.tar.gz
Andrew Morton's patch scripts:
-http://www.zip.com.au/~akpm/linux/patches/patch-scripts-0.20
+http://www.zip.com.au/~akpm/linux/patches/
+Instead of these scripts, quilt is the recommended patch management
+tool (see above).
@@ -97,7 +99,7 @@ need to split up your patch. See #3, next.
3) Separate your changes.
-Separate each logical change into its own patch.
+Separate _logical changes_ into a single patch file.
For example, if your changes include both bug fixes and performance
enhancements for a single driver, separate those changes into two
@@ -112,6 +114,10 @@ If one patch depends on another patch in order for a change to be
complete, that is OK. Simply note "this patch depends on patch X"
in your patch description.
+If you cannot condense your patch set into a smaller set of patches,
+then only post, say, 15 or so at a time and wait for review and integration.
+
+
4) Select e-mail destination.
@@ -124,6 +130,10 @@ your patch to the primary Linux kernel developer's mailing list,
linux-kernel@vger.kernel.org. Most kernel developers monitor this
e-mail list, and can comment on your changes.
+
+Do not send more than 15 patches at once to the vger mailing lists!!!
+
+
Linus Torvalds is the final arbiter of all changes accepted into the
Linux kernel. His e-mail address is <torvalds@osdl.org>. He gets
a lot of e-mail, so typically you should do your best to -avoid- sending
@@ -149,6 +159,9 @@ USB, framebuffer devices, the VFS, the SCSI subsystem, etc. See the
MAINTAINERS file for a mailing list that relates specifically to
your change.
+Majordomo lists of VGER.KERNEL.ORG at:
+ <http://vger.kernel.org/vger-lists.html>
+
If changes affect userland-kernel interfaces, please send
the MAN-PAGES maintainer (as listed in the MAINTAINERS file)
a man-pages patch, or at least a notification of the change,
@@ -373,27 +386,14 @@ a diffstat, to show what files have changed, and the number of inserted
and deleted lines per file. A diffstat is especially useful on bigger
patches. Other comments relevant only to the moment or the maintainer,
not suitable for the permanent changelog, should also go here.
+Use diffstat options "-p 1 -w 70" so that filenames are listed from the
+top of the kernel source tree and don't use too much horizontal space
+(easily fit in 80 columns, maybe with some indentation).
See more details on the proper patch format in the following
references.
-13) More references for submitting patches
-
-Andrew Morton, "The perfect patch" (tpp).
- <http://www.zip.com.au/~akpm/linux/patches/stuff/tpp.txt>
-
-Jeff Garzik, "Linux kernel patch submission format."
- <http://linux.yyz.us/patch-format.html>
-
-Greg KH, "How to piss off a kernel subsystem maintainer"
- <http://www.kroah.com/log/2005/03/31/>
-
-Kernel Documentation/CodingStyle
- <http://sosdg.org/~coywolf/lxr/source/Documentation/CodingStyle>
-
-Linus Torvald's mail on the canonical patch format:
- <http://lkml.org/lkml/2005/4/7/183>
-----------------------------------
@@ -466,3 +466,30 @@ and 'extern __inline__'.
Don't try to anticipate nebulous future cases which may or may not
be useful: "Make it as simple as you can, and no simpler."
+
+
+----------------------
+SECTION 3 - REFERENCES
+----------------------
+
+Andrew Morton, "The perfect patch" (tpp).
+ <http://www.zip.com.au/~akpm/linux/patches/stuff/tpp.txt>
+
+Jeff Garzik, "Linux kernel patch submission format."
+ <http://linux.yyz.us/patch-format.html>
+
+Greg Kroah, "How to piss off a kernel subsystem maintainer".
+ <http://www.kroah.com/log/2005/03/31/>
+ <http://www.kroah.com/log/2005/07/08/>
+ <http://www.kroah.com/log/2005/10/19/>
+
+NO!!!! No more huge patch bombs to linux-kernel@vger.kernel.org people!
+ <http://marc.theaimsgroup.com/?l=linux-kernel&m=112112749912944&w=2>
+
+Kernel Documentation/CodingStyle
+ <http://sosdg.org/~coywolf/lxr/source/Documentation/CodingStyle>
+
+Linus Torvalds' mail on the canonical patch format:
+ <http://lkml.org/lkml/2005/4/7/183>
+--
+Last updated on 17 Nov 2005.
diff --git a/Documentation/applying-patches.txt b/Documentation/applying-patches.txt
index 681e426e248210..05a08c2c18897d 100644
--- a/Documentation/applying-patches.txt
+++ b/Documentation/applying-patches.txt
@@ -2,7 +2,8 @@
Applying Patches To The Linux Kernel
------------------------------------
- (Written by Jesper Juhl, August 2005)
+ Original by: Jesper Juhl, August 2005
+ Last update: 2005-12-02
@@ -118,7 +119,7 @@ wrong.
When patch encounters a change that it can't fix up with fuzz it rejects it
outright and leaves a file with a .rej extension (a reject file). You can
-read this file to see exactely what change couldn't be applied, so you can
+read this file to see exactly what change couldn't be applied, so you can
go fix it up by hand if you wish.
If you don't have any third party patches applied to your kernel source, but
@@ -127,7 +128,7 @@ and have made no modifications yourself to the source files, then you should
never see a fuzz or reject message from patch. If you do see such messages
anyway, then there's a high risk that either your local source tree or the
patch file is corrupted in some way. In that case you should probably try
-redownloading the patch and if things are still not OK then you'd be advised
+re-downloading the patch and if things are still not OK then you'd be advised
to start with a fresh tree downloaded in full from kernel.org.
Let's look a bit more at some of the messages patch can produce.
@@ -180,9 +181,11 @@ wish to apply.
Are there any alternatives to `patch'?
---
- Yes there are alternatives. You can use the `interdiff' program
-(http://cyberelk.net/tim/patchutils/) to generate a patch representing the
-differences between two patches and then apply the result.
+ Yes there are alternatives.
+
+ You can use the `interdiff' program (http://cyberelk.net/tim/patchutils/) to
+generate a patch representing the differences between two patches and then
+apply the result.
This will let you move from something like 2.6.12.2 to 2.6.12.3 in a single
step. The -z flag to interdiff will even let you feed it patches in gzip or
bzip2 compressed form directly without the use of zcat or bzcat or manual
@@ -197,7 +200,7 @@ do the additional steps since interdiff can get things wrong in some cases.
Another alternative is `ketchup', which is a python script for automatic
downloading and applying of patches (http://www.selenic.com/ketchup/).
-Other nice tools are diffstat which shows a summary of changes made by a
+ Other nice tools are diffstat which shows a summary of changes made by a
patch, lsdiff which displays a short listing of affected files in a patch
file, along with (optionally) the line numbers of the start of each patch
and grepdiff which displays a list of the files modified by a patch where
@@ -258,7 +261,7 @@ $ patch -p1 -R < ../patch-2.6.11.1 # revert the 2.6.11.1 patch
# source dir is now 2.6.11
$ patch -p1 < ../patch-2.6.12 # apply new 2.6.12 patch
$ cd ..
-$ mv linux-2.6.11.1 inux-2.6.12 # rename source dir
+$ mv linux-2.6.11.1 linux-2.6.12 # rename source dir
The 2.6.x.y kernels
@@ -433,7 +436,11 @@ $ cd ..
$ mv linux-2.6.12-mm1 linux-2.6.13-rc3-mm3 # rename the source dir
-This concludes this list of explanations of the various kernel trees and I
-hope you are now crystal clear on how to apply the various patches and help
-testing the kernel.
+This concludes this list of explanations of the various kernel trees.
+I hope you are now clear on how to apply the various patches and help testing
+the kernel.
+
+Thanks to Randy Dunlap, Rolf Eike Beer, Linus Torvalds, Bodo Eggert,
+Johannes Stezenbach, Grant Coady, Pavel Machek and others that I may have
+forgotten for their reviews and contributions to this document.
diff --git a/Documentation/block/stat.txt b/Documentation/block/stat.txt
new file mode 100644
index 00000000000000..0dbc946de2eab8
--- /dev/null
+++ b/Documentation/block/stat.txt
@@ -0,0 +1,82 @@
+Block layer statistics in /sys/block/<dev>/stat
+===============================================
+
+This file documents the contents of the /sys/block/<dev>/stat file.
+
+The stat file provides several statistics about the state of block
+device <dev>.
+
+Q. Why are there multiple statistics in a single file? Doesn't sysfs
+ normally contain a single value per file?
+A. By having a single file, the kernel can guarantee that the statistics
+ represent a consistent snapshot of the state of the device. If the
+ statistics were exported as multiple files containing one statistic
+ each, it would be impossible to guarantee that a set of readings
+ represent a single point in time.
+
+The stat file consists of a single line of text containing 11 decimal
+values separated by whitespace. The fields are summarized in the
+following table, and described in more detail below.
+
+Name units description
+---- ----- -----------
+read I/Os requests number of read I/Os processed
+read merges requests number of read I/Os merged with in-queue I/O
+read sectors sectors number of sectors read
+read ticks milliseconds total wait time for read requests
+write I/Os requests number of write I/Os processed
+write merges requests number of write I/Os merged with in-queue I/O
+write sectors sectors number of sectors written
+write ticks milliseconds total wait time for write requests
+in_flight requests number of I/Os currently in flight
+io_ticks milliseconds total time this block device has been active
+time_in_queue milliseconds total wait time for all requests
+
+read I/Os, write I/Os
+=====================
+
+These values increment when an I/O request completes.
+
+read merges, write merges
+=========================
+
+These values increment when an I/O request is merged with an
+already-queued I/O request.
+
+read sectors, write sectors
+===========================
+
+These values count the number of sectors read from or written to this
+block device. The "sectors" in question are the standard UNIX 512-byte
+sectors, not any device- or filesystem-specific block size. The
+counters are incremented when the I/O completes.
+
+read ticks, write ticks
+=======================
+
+These values count the number of milliseconds that I/O requests have
+waited on this block device. If there are multiple I/O requests waiting,
+these values will increase at a rate greater than 1000/second; for
+example, if 60 read requests wait for an average of 30 ms, the read_ticks
+field will increase by 60*30 = 1800.
+
+in_flight
+=========
+
+This value counts the number of I/O requests that have been issued to
+the device driver but have not yet completed. It does not include I/O
+requests that are in the queue but not yet issued to the device driver.
+
+io_ticks
+========
+
+This value counts the number of milliseconds during which the device has
+had I/O requests queued.
+
+time_in_queue
+=============
+
+This value counts the number of milliseconds that I/O requests have waited
+on this block device. If there are multiple I/O requests waiting, this
+value will increase as the product of the number of milliseconds times the
+number of requests waiting (see "read ticks" above for an example).
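+
+A small user-space sketch of reading these fields (the device name "sda"
+below is only an example):
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		unsigned long long f[11];
+		FILE *fp = fopen("/sys/block/sda/stat", "r");
+		int i;
+
+		if (!fp)
+			return 1;
+		for (i = 0; i < 11; i++)
+			if (fscanf(fp, "%llu", &f[i]) != 1)
+				break;
+		fclose(fp);
+		if (i == 11)	/* all fields read successfully */
+			printf("read I/Os %llu  write I/Os %llu  in_flight %llu\n",
+			       f[0], f[4], f[8]);
+		return 0;
+	}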
diff --git a/Documentation/cpu-hotplug.txt b/Documentation/cpu-hotplug.txt
new file mode 100644
index 00000000000000..08c5d04f308600
--- /dev/null
+++ b/Documentation/cpu-hotplug.txt
@@ -0,0 +1,357 @@
+ CPU hotplug Support in Linux(tm) Kernel
+
+ Maintainers:
+ CPU Hotplug Core:
+ Rusty Russell <rusty@rustycorp.com.au>
+ Srivatsa Vaddagiri <vatsa@in.ibm.com>
+ i386:
+ Zwane Mwaikambo <zwane@arm.linux.org.uk>
+ ppc64:
+ Nathan Lynch <nathanl@austin.ibm.com>
+ Joel Schopp <jschopp@austin.ibm.com>
+ ia64/x86_64:
+ Ashok Raj <ashok.raj@intel.com>
+
+Authors: Ashok Raj <ashok.raj@intel.com>
+Lots of feedback: Nathan Lynch <nathanl@austin.ibm.com>,
+ Joel Schopp <jschopp@austin.ibm.com>
+
+Introduction
+
+Modern advances in system architectures have introduced advanced error
+reporting and correction capabilities in processors. CPU architectures permit
+partitioning support, where compute resources of a single CPU could be made
+available to virtual machine environments. There are a couple of OEMs that
+support NUMA hardware which is hot pluggable as well, where physical
+node insertion and removal require support for CPU hotplug.
+
+Such advances require CPUs available to a kernel to be removed either for
+provisioning reasons, or for RAS purposes to keep an offending CPU off
+system execution path. Hence the need for CPU hotplug support in the
+Linux kernel.
+
+A more novel use of CPU-hotplug support is its use today in suspend/resume
+support for SMP. Dual-core and HT support means that even a laptop now runs
+SMP kernels, which previously did not support suspend/resume. SMP support
+for suspend/resume is a work in progress.
+
+General Stuff about CPU Hotplug
+--------------------------------
+
+Command Line Switches
+---------------------
+maxcpus=n Restrict boot time cpus to n. For example, if you have 4 cpus,
+ using maxcpus=2 will only boot 2. You can choose to bring the
+ other cpus online later; read the FAQ for more info.
+
+additional_cpus=n [x86_64 only] use this to limit hotpluggable cpus.
+ This option sets
+ cpu_possible_map = cpu_present_map + additional_cpus
+
+CPU maps and such
+-----------------
+[For more on cpumaps and the primitives to manipulate them, please check
+include/linux/cpumask.h, which has more descriptive text.]
+
+cpu_possible_map: Bitmap of possible CPUs that can ever be available in the
+system. This is used to allocate some boot time memory for per_cpu variables
+that aren't designed to grow/shrink as CPUs are made available or removed.
+Once set during the boot time discovery phase, the map is static, i.e. no
+bits are added or removed at any time. Trimming it accurately for your
+system needs upfront can save some boot time memory. See below for how we
+use heuristics in the x86_64 case to keep this under check.
+
+cpu_online_map: Bitmap of all CPUs currently online. It is set in __cpu_up()
+after a cpu is available for kernel scheduling and ready to receive
+interrupts from devices. It is cleared when a cpu is brought down using
+__cpu_disable(), before which all OS services including interrupts are
+migrated to another target CPU.
+
+cpu_present_map: Bitmap of CPUs currently present in the system. Not all
+of them may be online. When physical hotplug is processed by the relevant
+subsystem (e.g. ACPI), the map can change: a bit is either added to or
+removed from it, depending on whether the event is a hot-add or a
+hot-remove. There are currently no locking rules. Typical usage is to init
+topology during boot, at which time hotplug is disabled.
+
+You really don't need to manipulate any of the system cpu maps. They should
+be read-only for most use. When setting up per-cpu resources almost always use
+cpu_possible_map/for_each_cpu() to iterate.
+
+Never use anything other than cpumask_t to represent bitmap of CPUs.
+
+#include <linux/cpumask.h>
+
+for_each_cpu - Iterate over cpu_possible_map
+for_each_online_cpu - Iterate over cpu_online_map
+for_each_present_cpu - Iterate over cpu_present_map
+for_each_cpu_mask(x,mask) - Iterate over some random collection of cpu mask.
+
+#include <linux/cpu.h>
+lock_cpu_hotplug() and unlock_cpu_hotplug():
+
+The above calls are used to inhibit cpu hotplug operations. While holding the
+cpucontrol mutex, cpu_online_map will not change. If you merely need to avoid
+cpus going away, you could also use preempt_disable() and preempt_enable()
+for those sections. Just remember the critical section cannot call any
+function that can sleep or schedule this process away. The preempt_disable()
+will work as long as stop_machine_run() is used to take a cpu down.
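+
+A minimal sketch (not taken from the kernel sources) of walking the online
+CPUs while holding off hotplug operations:
+
+	#include <linux/cpu.h>
+	#include <linux/cpumask.h>
+	#include <linux/kernel.h>
+
+	static void report_online_cpus(void)
+	{
+		int cpu, count = 0;
+
+		lock_cpu_hotplug();	/* cpu_online_map cannot change here */
+		for_each_online_cpu(cpu)
+			count++;
+		unlock_cpu_hotplug();	/* hotplug operations may proceed again */
+		printk(KERN_INFO "%d cpus online\n", count);
+	}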
+
+CPU Hotplug - Frequently Asked Questions.
+
+Q: How do I enable my kernel to support CPU hotplug?
+A: When configuring the kernel, enable CPU hotplug support:
+
+ "Processor type and Features" -> Support for Hotpluggable CPUs
+
+Make sure that you have CONFIG_HOTPLUG and CONFIG_SMP turned on as well.
+
+You would need to enable CONFIG_HOTPLUG_CPU for SMP suspend/resume support
+as well.
+
+Q: What architectures support CPU hotplug?
+A: As of 2.6.14, the following architectures support CPU hotplug.
+
+i386 (Intel), ppc, ppc64, parisc, s390, ia64 and x86_64
+
+Q: How do I test if hotplug is supported on the newly built kernel?
+A: You should now notice an entry in sysfs.
+
+Check if sysfs is mounted, using the "mount" command. You should notice
+an entry as shown below in the output.
+
+....
+none on /sys type sysfs (rw)
+....
+
+If this is not mounted, do the following.
+
+#mkdir /sys
+#mount -t sysfs sys /sys
+
+Now you should see entries for all present CPUs; the following is an example
+from an 8-way system.
+
+#pwd
+#/sys/devices/system/cpu
+#ls -l
+total 0
+drwxr-xr-x 10 root root 0 Sep 19 07:44 .
+drwxr-xr-x 13 root root 0 Sep 19 07:45 ..
+drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu0
+drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu1
+drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu2
+drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu3
+drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu4
+drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu5
+drwxr-xr-x 3 root root 0 Sep 19 07:44 cpu6
+drwxr-xr-x 3 root root 0 Sep 19 07:48 cpu7
+
+Under each directory you would find an "online" file which is the control
+file to logically online/offline a processor.
+
+Q: Does hot-add/hot-remove refer to physical add/remove of cpus?
+A: The terms hot-add/remove are not used very consistently in the code.
+CONFIG_CPU_HOTPLUG enables logical online/offline capability in the kernel.
+To support physical addition/removal, one would need some BIOS hooks and
+the platform should have something like an attention button in PCI hotplug.
+CONFIG_ACPI_HOTPLUG_CPU enables ACPI support for physical add/remove of CPUs.
+
+Q: How do I logically offline a CPU?
+A: Do the following.
+
+#echo 0 > /sys/devices/system/cpu/cpuX/online
+
+Once the logical offline is successful, check
+
+#cat /proc/interrupts
+
+You should no longer see the CPU that you removed. Also, the online file
+will report the state as 0 when a cpu is offline and 1 when it is online.
+
+#To display the current cpu state.
+#cat /sys/devices/system/cpu/cpuX/online
+
+Q: Why can't I remove CPU0 on some systems?
+A: Some architectures may have some special dependency on a certain CPU.
+
+For example, on IA64 platforms we have the ability to send platform
+interrupts to the OS, a.k.a. Corrected Platform Error Interrupts (CPEI).
+Current ACPI specifications don't provide a way to change the target CPU.
+Hence if the current ACPI version doesn't support such re-direction, we
+disable that CPU by making it not-removable.
+
+In such cases you will also notice that the online file is missing under cpu0.
+
+Q: How do I find out if a particular CPU is not removable?
+A: Depending on the implementation, some architectures may show this by the
+absence of the "online" file. This is done if it can be determined ahead of
+time that this CPU cannot be removed.
+
+In some situations, this can be a run time check, i.e. if you try to remove
+the
+last CPU, this will not be permitted. You can find such failures by
+investigating the return value of the "echo" command.
+
+Q: What happens when a CPU is being logically offlined?
+A: The following happen, listed in no particular order :-)
+
+- A notification is sent to in-kernel registered modules by sending an event
+ CPU_DOWN_PREPARE
+- All processes are migrated away from this outgoing CPU to a new CPU
+- All interrupts targeted to this CPU are migrated to a new CPU
+- Timers/bottom halves/tasklets are also migrated to a new CPU
+- Once all services are migrated, the kernel calls an arch specific routine
+  __cpu_disable() to perform arch specific cleanup.
+- Once this is successful, the CPU_DEAD event is sent to signal successful
+  cleanup.
+
+ "It is expected that each service cleans up when the CPU_DOWN_PREPARE
+ notifier is called; when CPU_DEAD is called it is expected there is nothing
+ running on behalf of this CPU that was offlined"
+
+Q: If I have some kernel code that needs to be aware of CPU arrival and
+ departure, how do I arrange for proper notification?
+A: This is what you would need in your kernel code to receive notifications.
+
+ #include <linux/cpu.h>
+ static int __cpuinit foobar_cpu_callback(struct notifier_block *nfb,
+ unsigned long action, void *hcpu)
+ {
+ unsigned int cpu = (unsigned long)hcpu;
+
+ switch (action) {
+ case CPU_ONLINE:
+ foobar_online_action(cpu);
+ break;
+ case CPU_DEAD:
+ foobar_dead_action(cpu);
+ break;
+ }
+ return NOTIFY_OK;
+ }
+
+ static struct notifier_block foobar_cpu_notifier =
+ {
+ .notifier_call = foobar_cpu_callback,
+ };
+
+
+In your init function,
+
+ register_cpu_notifier(&foobar_cpu_notifier);
+
+You can fail PREPARE notifiers if something doesn't work to prepare resources.
+This will stop the activity and a corresponding CANCELED event will be sent
+back.
+
+CPU_DEAD should not be failed, it is just a goodness indication, but bad
+things will happen if a notifier in the path sent a BAD notify code.
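+
+For example, a PREPARE handler in the callback above might reject the
+transition like this (foobar_alloc_percpu_data() is a hypothetical helper,
+not a kernel function):
+
+	case CPU_UP_PREPARE:
+		if (!foobar_alloc_percpu_data(cpu))
+			return NOTIFY_BAD;	/* cancels the CPU bring-up */
+		break;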
+
+Q: I don't see my action being called for all CPUs already up and running?
+A: Yes, CPU notifiers are called only when new CPUs are on-lined or offlined.
+ If you need to perform some action for each cpu already in the system, then
+
+ for_each_online_cpu(i) {
+ foobar_cpu_callback(&foobar_cpu_notifier, CPU_UP_PREPARE, i);
+ foobar_cpu_callback(&foobar_cpu_notifier, CPU_ONLINE, i);
+ }
+
+Q: If I would like to develop cpu hotplug support for a new architecture,
+ what do I need at a minimum?
+A: The following is required for the CPU hotplug infrastructure to work
+ correctly.
+
+ - Make sure you have an entry in Kconfig to enable CONFIG_HOTPLUG_CPU
+ - __cpu_up() - Arch interface to bring up a CPU
+ - __cpu_disable()  - Arch interface to shutdown a CPU, no more interrupts
+                      can be handled by the kernel after the routine
+                      returns. This includes shutting down local APIC
+                      timers.
+ - __cpu_die()      - This is actually supposed to ensure death of the CPU.
+                      Actually look at some example code in other arches
+                      that implement CPU hotplug. The processor is taken
+                      down from the idle() loop for that specific
+                      architecture. __cpu_die() typically waits for some
+                      per_cpu state to be set, to make positively sure the
+                      processor dead routine has been called.
+
+Q: I need to ensure that a particular cpu is not removed when there is some
+ work specific to this cpu in progress.
+A: First switch the current thread context to the preferred cpu:
+
+ int my_func_on_cpu(int cpu)
+ {
+ cpumask_t saved_mask, new_mask = CPU_MASK_NONE;
+ int curr_cpu, err = 0;
+
+ saved_mask = current->cpus_allowed;
+ cpu_set(cpu, new_mask);
+ err = set_cpus_allowed(current, new_mask);
+
+ if (err)
+ return err;
+
+ /*
+ * If we got scheduled out just after the return from
+ * set_cpus_allowed() before running the work, this ensures
+ * we stay locked.
+ */
+ curr_cpu = get_cpu();
+
+ if (curr_cpu != cpu) {
+ err = -EAGAIN;
+ goto ret;
+ } else {
+ /*
+ * Do work : But can't sleep, since get_cpu() disables preempt
+ */
+ }
+ ret:
+ put_cpu();
+ set_cpus_allowed(current, saved_mask);
+ return err;
+ }
+
+
+Q: How do we determine how many CPUs are available for hotplug?
+A: There is no clear spec-defined way from ACPI that can give us that
+ information today. Based on some input from Natalie of Unisys, the ACPI
+ MADT (Multiple APIC Description Table) marks those possible CPUs in a
+ system with disabled status.
+
+ Andi implemented some simple heuristics that count the number of disabled
+ CPUs in the MADT as hotpluggable CPUs. In the case where there are no
+ disabled CPUs, we assume 1/2 the number of CPUs currently present can be
+ hotplugged.
+
+ Caveat: Today's ACPI MADT can only provide 256 entries since the apicid field
+ in MADT is only 8 bits.
+
+User Space Notification
+
+Hotplug support for devices is common in Linux today. It is used to
+support automatic configuration of network, usb and pci devices. A hotplug
+event can be used to invoke an agent script to perform the configuration task.
+
+You can add /etc/hotplug/cpu.agent to handle hotplug notification user space
+scripts.
+
+ #!/bin/bash
+ # $Id: cpu.agent
+ # Kernel hotplug params include:
+ #ACTION=%s [online or offline]
+ #DEVPATH=%s
+ #
+ cd /etc/hotplug
+ . ./hotplug.functions
+
+ case $ACTION in
+ online)
+ echo `date` ":cpu.agent" add cpu >> /tmp/hotplug.txt
+ ;;
+ offline)
+ echo `date` ":cpu.agent" remove cpu >>/tmp/hotplug.txt
+ ;;
+ *)
+ debug_mesg CPU $ACTION event not supported
+ exit 1
+ ;;
+ esac
diff --git a/Documentation/cpusets.txt b/Documentation/cpusets.txt
index a09a8eb80665ed..9e49b1c3572961 100644
--- a/Documentation/cpusets.txt
+++ b/Documentation/cpusets.txt
@@ -14,7 +14,10 @@ CONTENTS:
1.1 What are cpusets ?
1.2 Why are cpusets needed ?
1.3 How are cpusets implemented ?
- 1.4 How do I use cpusets ?
+ 1.4 What are exclusive cpusets ?
+ 1.5 What does notify_on_release do ?
+ 1.6 What is memory_pressure ?
+ 1.7 How do I use cpusets ?
2. Usage Examples and Syntax
2.1 Basic Usage
2.2 Adding/removing cpus
@@ -49,29 +52,6 @@ its cpus_allowed vector, and the kernel page allocator will not
allocate a page on a node that is not allowed in the requesting tasks
mems_allowed vector.
-If a cpuset is cpu or mem exclusive, no other cpuset, other than a direct
-ancestor or descendent, may share any of the same CPUs or Memory Nodes.
-A cpuset that is cpu exclusive has a sched domain associated with it.
-The sched domain consists of all cpus in the current cpuset that are not
-part of any exclusive child cpusets.
-This ensures that the scheduler load balacing code only balances
-against the cpus that are in the sched domain as defined above and not
-all of the cpus in the system. This removes any overhead due to
-load balancing code trying to pull tasks outside of the cpu exclusive
-cpuset only to be prevented by the tasks' cpus_allowed mask.
-
-A cpuset that is mem_exclusive restricts kernel allocations for
-page, buffer and other data commonly shared by the kernel across
-multiple users. All cpusets, whether mem_exclusive or not, restrict
-allocations of memory for user space. This enables configuring a
-system so that several independent jobs can share common kernel
-data, such as file system pages, while isolating each jobs user
-allocation in its own cpuset. To do this, construct a large
-mem_exclusive cpuset to hold all the jobs, and construct child,
-non-mem_exclusive cpusets for each individual job. Only a small
-amount of typical kernel memory, such as requests from interrupt
-handlers, is allowed to be taken outside even a mem_exclusive cpuset.
-
User level code may create and destroy cpusets by name in the cpuset
virtual file system, manage the attributes and permissions of these
cpusets and which CPUs and Memory Nodes are assigned to each cpuset,
@@ -192,9 +172,15 @@ containing the following files describing that cpuset:
- cpus: list of CPUs in that cpuset
- mems: list of Memory Nodes in that cpuset
+ - memory_migrate flag: if set, move pages to cpusets nodes
- cpu_exclusive flag: is cpu placement exclusive?
- mem_exclusive flag: is memory placement exclusive?
- tasks: list of tasks (by pid) attached to that cpuset
+ - notify_on_release flag: run /sbin/cpuset_release_agent on exit?
+ - memory_pressure: measure of how much paging pressure in cpuset
+
+In addition, the root cpuset only has the following file:
+ - memory_pressure_enabled flag: compute memory_pressure?
New cpusets are created using the mkdir system call or shell
command. The properties of a cpuset, such as its flags, allowed
@@ -228,7 +214,108 @@ exclusive cpuset. Also, the use of a Linux virtual file system (vfs)
to represent the cpuset hierarchy provides for a familiar permission
and name space for cpusets, with a minimum of additional kernel code.
-1.4 How do I use cpusets ?
+
+1.4 What are exclusive cpusets ?
+--------------------------------
+
+If a cpuset is cpu or mem exclusive, no other cpuset, other than
+a direct ancestor or descendent, may share any of the same CPUs or
+Memory Nodes.
+
+A cpuset that is cpu_exclusive has a scheduler (sched) domain
+associated with it. The sched domain consists of all CPUs in the
+current cpuset that are not part of any exclusive child cpusets.
+This ensures that the scheduler load balancing code only balances
+against the CPUs that are in the sched domain as defined above and
+not all of the CPUs in the system. This removes any overhead due to
+load balancing code trying to pull tasks outside of the cpu_exclusive
+cpuset only to be prevented by the tasks' cpus_allowed mask.
+
+A cpuset that is mem_exclusive restricts kernel allocations for
+page, buffer and other data commonly shared by the kernel across
+multiple users. All cpusets, whether mem_exclusive or not, restrict
+allocations of memory for user space. This enables configuring a
+system so that several independent jobs can share common kernel data,
+such as file system pages, while isolating each job's user allocation in
+its own cpuset. To do this, construct a large mem_exclusive cpuset to
+hold all the jobs, and construct child, non-mem_exclusive cpusets for
+each individual job. Only a small amount of typical kernel memory,
+such as requests from interrupt handlers, is allowed to be taken
+outside even a mem_exclusive cpuset.
+
+
+1.5 What does notify_on_release do ?
+------------------------------------
+
+If the notify_on_release flag is enabled (1) in a cpuset, then whenever
+the last task in the cpuset leaves (exits or attaches to some other
+cpuset) and the last child cpuset of that cpuset is removed, then
+the kernel runs the command /sbin/cpuset_release_agent, supplying the
+pathname (relative to the mount point of the cpuset file system) of the
+abandoned cpuset. This enables automatic removal of abandoned cpusets.
+The default value of notify_on_release in the root cpuset at system
+boot is disabled (0). The default value of other cpusets at creation
+is the current value of their parent's notify_on_release setting.
+
+
+1.6 What is memory_pressure ?
+-----------------------------
+The memory_pressure of a cpuset provides a simple per-cpuset metric
+of the rate that the tasks in a cpuset are attempting to free up in-use
+memory on the nodes of the cpuset to satisfy additional memory
+requests.
+
+This enables batch managers monitoring jobs running in dedicated
+cpusets to efficiently detect what level of memory pressure that job
+is causing.
+
+This is useful both on tightly managed systems running a wide mix of
+submitted jobs, which may choose to terminate or re-prioritize jobs that
+are trying to use more memory than allowed on the nodes assigned them,
+and with tightly coupled, long running, massively parallel scientific
+computing jobs that will dramatically fail to meet required performance
+goals if they start to use more memory than allowed to them.
+
+This mechanism provides a very economical way for the batch manager
+to monitor a cpuset for signs of memory pressure. It's up to the
+batch manager or other user code to decide what to do about it and
+take action.
+
+==> Unless this feature is enabled by writing "1" to the special file
+ /dev/cpuset/memory_pressure_enabled, the hook in the rebalance
+ code of __alloc_pages() for this metric reduces to simply noticing
+ that the cpuset_memory_pressure_enabled flag is zero. So only
+ systems that enable this feature will compute the metric.
+
+Why a per-cpuset, running average:
+
+ Because this meter is per-cpuset, rather than per-task or mm,
+ the system load imposed by a batch scheduler monitoring this
+ metric is sharply reduced on large systems, because a scan of
+ the tasklist can be avoided on each set of queries.
+
+ Because this meter is a running average, instead of an accumulating
+ counter, a batch scheduler can detect memory pressure with a
+ single read, instead of having to read and accumulate results
+ for a period of time.
+
+ Because this meter is per-cpuset rather than per-task or mm,
+ the batch scheduler can obtain the key information, memory
+ pressure in a cpuset, with a single read, rather than having to
+ query and accumulate results over all the (dynamically changing)
+ set of tasks in the cpuset.
+
+A per-cpuset simple digital filter (requires a spinlock and 3 words
+of data per-cpuset) is kept, and updated by any task attached to that
+cpuset, if it enters the synchronous (direct) page reclaim code.
+
+A per-cpuset file provides an integer number representing the recent
+(half-life of 10 seconds) rate of direct page reclaims caused by
+the tasks in the cpuset, in units of reclaims attempted per second,
+times 1000.
+
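+A small user-space sketch of how a batch manager might sample this value
+(the cpuset path "/dev/cpuset/batch1" is only an example):
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		FILE *fp = fopen("/dev/cpuset/batch1/memory_pressure", "r");
+		unsigned long val;
+
+		if (!fp)
+			return 1;
+		if (fscanf(fp, "%lu", &val) != 1) {
+			fclose(fp);
+			return 1;
+		}
+		fclose(fp);
+		/* the value is reclaims attempted per second, times 1000 */
+		printf("recent direct reclaim rate: %lu.%03lu per second\n",
+		       val / 1000, val % 1000);
+		return 0;
+	}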
+
+1.7 How do I use cpusets ?
--------------------------
In order to minimize the impact of cpusets on critical kernel
@@ -277,6 +364,30 @@ rewritten to the 'tasks' file of its cpuset. This is done to avoid
impacting the scheduler code in the kernel with a check for changes
in a tasks processor placement.
+Normally, once a page is allocated (given a physical page
+of main memory) then that page stays on whatever node it
+was allocated, so long as it remains allocated, even if the
+cpuset's memory placement policy 'mems' subsequently changes.
+If the cpuset flag file 'memory_migrate' is set true, then when
+tasks are attached to that cpuset, any pages that task had
+allocated to it on nodes in its previous cpuset are migrated
+to the task's new cpuset. Depending on the implementation,
+this migration may either be done by swapping the page out,
+so that the next time the page is referenced, it will be paged
+into the task's new cpuset, usually on the node where it was
+referenced, or this migration may be done by directly copying
+the pages from the task's previous cpuset to the new cpuset,
+where possible to the same node, relative to the new cpuset,
+as the node that held the page, relative to the old cpuset.
+Also if 'memory_migrate' is set true, then if that cpuset's
+'mems' file is modified, pages allocated to tasks in that
+cpuset, that were on nodes in the previous setting of 'mems',
+will be moved to nodes in the new setting of 'mems'. Again,
+depending on the implementation, this might be done by swapping,
+or by direct copying. In either case, pages that were not in
+the task's prior cpuset, or in the cpuset's prior 'mems' setting,
+will not be moved.
+
There is an exception to the above. If hotplug functionality is used
to remove all the CPUs that are currently assigned to a cpuset,
then the kernel will automatically update the cpus_allowed of all
diff --git a/Documentation/fb/cyblafb/bugs b/Documentation/fb/cyblafb/bugs
index f90cc66ea91972..9443a6d72cdd2a 100644
--- a/Documentation/fb/cyblafb/bugs
+++ b/Documentation/fb/cyblafb/bugs
@@ -11,4 +11,3 @@ Untested features
All LCD stuff is untested. If it worked in tridentfb, it should work in
cyblafb. Please test and report the results to Knut_Petersen@t-online.de.
-
diff --git a/Documentation/fb/cyblafb/fb.modes b/Documentation/fb/cyblafb/fb.modes
index cf4351fc32ff69..fe0e5223ba86cd 100644
--- a/Documentation/fb/cyblafb/fb.modes
+++ b/Documentation/fb/cyblafb/fb.modes
@@ -14,142 +14,141 @@
#
mode "640x480-50"
- geometry 640 480 640 3756 8
+ geometry 640 480 2048 4096 8
timings 47619 4294967256 24 17 0 216 3
endmode
mode "640x480-60"
- geometry 640 480 640 3756 8
+ geometry 640 480 2048 4096 8
timings 39682 4294967256 24 17 0 216 3
endmode
mode "640x480-70"
- geometry 640 480 640 3756 8
+ geometry 640 480 2048 4096 8
timings 34013 4294967256 24 17 0 216 3
endmode
mode "640x480-72"
- geometry 640 480 640 3756 8
+ geometry 640 480 2048 4096 8
timings 33068 4294967256 24 17 0 216 3
endmode
mode "640x480-75"
- geometry 640 480 640 3756 8
+ geometry 640 480 2048 4096 8
timings 31746 4294967256 24 17 0 216 3
endmode
mode "640x480-80"
- geometry 640 480 640 3756 8
+ geometry 640 480 2048 4096 8
timings 29761 4294967256 24 17 0 216 3
endmode
mode "640x480-85"
- geometry 640 480 640 3756 8
+ geometry 640 480 2048 4096 8
timings 28011 4294967256 24 17 0 216 3
endmode
mode "800x600-50"
- geometry 800 600 800 3221 8
+ geometry 800 600 2048 4096 8
timings 30303 96 24 14 0 136 11
endmode
mode "800x600-60"
- geometry 800 600 800 3221 8
+ geometry 800 600 2048 4096 8
timings 25252 96 24 14 0 136 11
endmode
mode "800x600-70"
- geometry 800 600 800 3221 8
+ geometry 800 600 2048 4096 8
timings 21645 96 24 14 0 136 11
endmode
mode "800x600-72"
- geometry 800 600 800 3221 8
+ geometry 800 600 2048 4096 8
timings 21043 96 24 14 0 136 11
endmode
mode "800x600-75"
- geometry 800 600 800 3221 8
+ geometry 800 600 2048 4096 8
timings 20202 96 24 14 0 136 11
endmode
mode "800x600-80"
- geometry 800 600 800 3221 8
+ geometry 800 600 2048 4096 8
timings 18939 96 24 14 0 136 11
endmode
mode "800x600-85"
- geometry 800 600 800 3221 8
+ geometry 800 600 2048 4096 8
timings 17825 96 24 14 0 136 11
endmode
mode "1024x768-50"
- geometry 1024 768 1024 2815 8
+ geometry 1024 768 2048 4096 8
timings 19054 144 24 29 0 120 3
endmode
mode "1024x768-60"
- geometry 1024 768 1024 2815 8
+ geometry 1024 768 2048 4096 8
timings 15880 144 24 29 0 120 3
endmode
mode "1024x768-70"
- geometry 1024 768 1024 2815 8
+ geometry 1024 768 2048 4096 8
timings 13610 144 24 29 0 120 3
endmode
mode "1024x768-72"
- geometry 1024 768 1024 2815 8
+ geometry 1024 768 2048 4096 8
timings 13232 144 24 29 0 120 3
endmode
mode "1024x768-75"
- geometry 1024 768 1024 2815 8
+ geometry 1024 768 2048 4096 8
timings 12703 144 24 29 0 120 3
endmode
mode "1024x768-80"
- geometry 1024 768 1024 2815 8
+ geometry 1024 768 2048 4096 8
timings 11910 144 24 29 0 120 3
endmode
mode "1024x768-85"
- geometry 1024 768 1024 2815 8
+ geometry 1024 768 2048 4096 8
timings 11209 144 24 29 0 120 3
endmode
mode "1280x1024-50"
- geometry 1280 1024 1280 2662 8
+ geometry 1280 1024 2048 4096 8
timings 11114 232 16 39 0 160 3
endmode
mode "1280x1024-60"
- geometry 1280 1024 1280 2662 8
+ geometry 1280 1024 2048 4096 8
timings 9262 232 16 39 0 160 3
endmode
mode "1280x1024-70"
- geometry 1280 1024 1280 2662 8
+ geometry 1280 1024 2048 4096 8
timings 7939 232 16 39 0 160 3
endmode
mode "1280x1024-72"
- geometry 1280 1024 1280 2662 8
+ geometry 1280 1024 2048 4096 8
timings 7719 232 16 39 0 160 3
endmode
mode "1280x1024-75"
- geometry 1280 1024 1280 2662 8
+ geometry 1280 1024 2048 4096 8
timings 7410 232 16 39 0 160 3
endmode
mode "1280x1024-80"
- geometry 1280 1024 1280 2662 8
+ geometry 1280 1024 2048 4096 8
timings 6946 232 16 39 0 160 3
endmode
mode "1280x1024-85"
- geometry 1280 1024 1280 2662 8
+ geometry 1280 1024 2048 4096 8
timings 6538 232 16 39 0 160 3
endmode
-
diff --git a/Documentation/fb/cyblafb/performance b/Documentation/fb/cyblafb/performance
index eb4e47a9cea62b..8d15d5dfc6b34a 100644
--- a/Documentation/fb/cyblafb/performance
+++ b/Documentation/fb/cyblafb/performance
@@ -77,4 +77,3 @@ patch that speeds up kernel bitblitting a lot ( > 20%).
| | | | |
| | | | |
+-----------+-----------------+-----------------+-----------------+
-
diff --git a/Documentation/fb/cyblafb/todo b/Documentation/fb/cyblafb/todo
index 80fb2f89b6c173..c5f6d0eae54531 100644
--- a/Documentation/fb/cyblafb/todo
+++ b/Documentation/fb/cyblafb/todo
@@ -22,11 +22,10 @@ accelerated color blitting Who needs it? The console driver does use color
everything else is done using color expanding
blitting of 1bpp character bitmaps.
-xpanning Who needs it?
-
ioctls Who needs it?
-TV-out Will be done later
+TV-out Will be done later. Use "vga= " at boot time
+ to set a suitable video mode.
??? Feel free to contact me if you have any
feature requests
diff --git a/Documentation/fb/cyblafb/usage b/Documentation/fb/cyblafb/usage
index e627c8f542115c..a39bb3d402a2b0 100644
--- a/Documentation/fb/cyblafb/usage
+++ b/Documentation/fb/cyblafb/usage
@@ -40,6 +40,16 @@ Selecting Modes
None of the modes possible to select as startup modes are affected by
the problems described at the end of the next subsection.
+ For all startup modes cyblafb chooses a virtual x resolution of 2048,
+ the only exception is mode 1280x1024 in combination with 32 bpp. This
+ allows ywrap scrolling for all those modes if rotation is 0 or 2, and
+ also fast scrolling if rotation is 1 or 3. The default virtual y reso-
+ lution is 4096 for bpp == 8, 2048 for bpp==16 and 1024 for bpp == 32,
+ again with the only exception of 1280x1024 at 32 bpp.
+
+ Please do set your video memory size to 8 MB in the BIOS setup. Other
+ values will work, but performance is decreased for a lot of modes.
+
Mode changes using fbset
========================
@@ -54,20 +64,26 @@ Selecting Modes
- if a flat panel is found, cyblafb does not allow you
to program a resolution higher than the physical
resolution of the flat panel monitor
- - cyblafb does not allow xres to differ from xres_virtual
- cyblafb does not allow vclk to exceed 230 MHz. As 32 bpp
and (currently) 24 bit modes use a doubled vclk internally,
the dotclock limit as seen by fbset is 115 MHz for those
modes and 230 MHz for 8 and 16 bpp modes.
+ - cyblafb will allow you to select very high resolutions as
+ long as the hardware can be programmed to these modes. The
+ documented limit 1600x1200 is not enforced, but don't expect
+ perfect signal quality.
- Any request that violates the rules given above will be ignored and
- fbset will return an error.
+ Any request that violates the rules given above will be either changed
+ to something the hardware supports or an error value will be returned.
If you program a virtual y resolution higher than the hardware limit,
cyblafb will silently decrease that value to the highest possible
- value.
+ value. The same is true for a virtual x resolution that is not
+ supported by the hardware. Cyblafb tries to adapt vyres first because
+ vxres decides if ywrap scrolling is possible or not.
- Attempts to disable acceleration are ignored.
+ Attempts to disable acceleration are ignored, I believe that this is
+ safe.
Some video modes that should work do not work as expected. If you use
the standard fb.modes, fbset 640x480-60 will program that mode, but
@@ -129,10 +145,6 @@ mode 640x480 or 800x600 or 1024x768 or 1280x1024
verbosity 0 is the default, increase to at least 2 for every
bug report!
-vesafb allows cyblafb to be loaded after vesafb has been
- loaded. See sections "Module unloading ...".
-
-
Development hints
=================
@@ -195,7 +207,7 @@ a graphics mode.
After booting, load cyblafb without any mode and bpp parameter and assign
cyblafb to individual ttys using con2fb, e.g.:
- modprobe cyblafb vesafb=1
+ modprobe cyblafb
con2fb /dev/fb1 /dev/tty1
Unloading cyblafb works without problems after you assign vesafb to all
@@ -203,4 +215,3 @@ ttys again, e.g.:
con2fb /dev/fb0 /dev/tty1
rmmod cyblafb
-
diff --git a/Documentation/fb/cyblafb/whatsnew b/Documentation/fb/cyblafb/whatsnew
new file mode 100644
index 00000000000000..76c07a26e04436
--- /dev/null
+++ b/Documentation/fb/cyblafb/whatsnew
@@ -0,0 +1,29 @@
+0.62
+====
+
+ - the vesafb parameter has been removed as I decided to allow the
+ feature without any special parameter.
+
+ - Cyblafb does not use the vga style of panning any longer, now the
+ "right view" register in the graphics engine IO space is used. Without
+ that change it was impossible to use all available memory, and without
+ access to all available memory it is impossible to ywrap.
+
+ - The imageblit function now uses hardware acceleration for all font
+ widths. Hardware blitting across pixel column 2048 is broken in the
+ cyberblade/i1 graphics core, but we work around that hardware bug.
+
+ - modes with vxres != xres are supported now.
+
+ - ywrap scrolling is supported now and the default. This is a big
+ performance gain.
+
+ - default video modes use vyres > yres and vxres > xres to allow
+ almost optimal scrolling speed for normal and rotated screens
+
+ - some features mainly useful for debugging the upper layers of the
+   framebuffer system have been added; have a look at the code
+
+ - fixed: Oops after unloading cyblafb when reading /proc/io*
+
+ - we work around some bugs of the higher framebuffer layers.
diff --git a/Documentation/filesystems/ext3.txt b/Documentation/filesystems/ext3.txt
index 9840d5b8d5b999..22e4040564d51d 100644
--- a/Documentation/filesystems/ext3.txt
+++ b/Documentation/filesystems/ext3.txt
@@ -22,6 +22,11 @@ journal=inum When a journal already exists, this option is
the inode which will represent the ext3 file
system's journal file.
+journal_dev=devnum When the external journal device's major/minor numbers
+ have changed, this option allows the user to specify the new
+ journal location. The journal device is identified
+ through its new major/minor numbers encoded in devnum.
+
noload Don't load the journal on mounting.
data=journal All data are committed into the journal prior
diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
index d4773565ea2f20..a4dcf42c2fd93f 100644
--- a/Documentation/filesystems/proc.txt
+++ b/Documentation/filesystems/proc.txt
@@ -1302,6 +1302,23 @@ VM has token based thrashing control mechanism and uses the token to prevent
unnecessary page faults in thrashing situation. The unit of the value is
second. The value would be useful to tune thrashing behavior.
+drop_caches
+-----------
+
+Writing to this will cause the kernel to drop clean caches, dentries and
+inodes from memory, causing that memory to become free.
+
+To free pagecache:
+ echo 1 > /proc/sys/vm/drop_caches
+To free dentries and inodes:
+ echo 2 > /proc/sys/vm/drop_caches
+To free pagecache, dentries and inodes:
+ echo 3 > /proc/sys/vm/drop_caches
+
+Because this is a non-destructive operation and dirty objects are not
+freeable, the user should run `sync' first to write dirty data out.
+
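+The same sequence can also be expressed in C. This is only a rough sketch
+(the function name is illustrative), equivalent to running `sync' and then
+echoing 3 as shown above:
+
+	#include <fcntl.h>
+	#include <unistd.h>
+
+	static int drop_caches(void)
+	{
+		int fd;
+
+		sync();		/* flush dirty data so it becomes freeable */
+
+		fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
+		if (fd < 0)
+			return -1;
+		if (write(fd, "3", 1) != 1) {	/* 3 = pagecache + dentries and inodes */
+			close(fd);
+			return -1;
+		}
+		close(fd);
+		return 0;
+	}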
+
2.5 /proc/sys/dev - Device specific parameters
----------------------------------------------
diff --git a/Documentation/filesystems/ramfs-rootfs-initramfs.txt b/Documentation/filesystems/ramfs-rootfs-initramfs.txt
index b3404a0325967d..60ab61e54e8ab9 100644
--- a/Documentation/filesystems/ramfs-rootfs-initramfs.txt
+++ b/Documentation/filesystems/ramfs-rootfs-initramfs.txt
@@ -143,12 +143,26 @@ as the following example:
dir /mnt 755 0 0
file /init initramfs/init.sh 755 0 0
+Run "usr/gen_init_cpio" (after the kernel build) to get a usage message
+documenting the above file format.
+
One advantage of the text file is that root access is not required to
set permissions or create device nodes in the new archive. (Note that those
two example "file" entries expect to find files named "init.sh" and "busybox" in
a directory called "initramfs", under the linux-2.6.* directory. See
Documentation/early-userspace/README for more details.)
+The kernel does not depend on external cpio tools; gen_init_cpio is created
+from usr/gen_init_cpio.c which is entirely self-contained, and the kernel's
+boot-time extractor is also (obviously) self-contained. However, if you _do_
+happen to have cpio installed, the following command line can extract the
+generated cpio image back into its component files:
+
+ cpio -i -d -H newc -F initramfs_data.cpio --no-absolute-filenames
+
+Contents of initramfs:
+----------------------
+
If you don't already understand what shared libraries, devices, and paths
you need to get a minimal root filesystem up and running, here are some
references:
@@ -161,13 +175,69 @@ designed to be a tiny C library to statically link early userspace
code against, along with some related utilities. It is BSD licensed.
I use uClibc (http://www.uclibc.org) and busybox (http://www.busybox.net)
-myself. These are LGPL and GPL, respectively.
+myself. These are LGPL and GPL, respectively. (A self-contained initramfs
+package is planned for the busybox 1.2 release.)
In theory you could use glibc, but that's not well suited for small embedded
uses like this. (A "hello world" program statically linked against glibc is
over 400k. With uClibc it's 7k. Also note that glibc dlopens libnss to do
name lookups, even when otherwise statically linked.)
+Why cpio rather than tar?
+-------------------------
+
+This decision was made back in December, 2001. The discussion started here:
+
+ http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1538.html
+
+And spawned a second thread (specifically on tar vs cpio), starting here:
+
+ http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1587.html
+
+The quick and dirty summary version (which is no substitute for reading
+the above threads) is:
+
+1) cpio is a standard. It's decades old (from the AT&T days), and already
+ widely used on Linux (inside RPM, Red Hat's device driver disks). Here's
+ a Linux Journal article about it from 1996:
+
+ http://www.linuxjournal.com/article/1213
+
+ It's not as popular as tar because the traditional cpio command line tools
+ require _truly_hideous_ command line arguments. But that says nothing
+ either way about the archive format, and there are alternative tools,
+ such as:
+
+ http://freshmeat.net/projects/afio/
+
+2) The cpio archive format chosen by the kernel is simpler and cleaner (and
+ thus easier to create and parse) than any of the (literally dozens of)
+ various tar archive formats. The complete initramfs archive format is
+ explained in buffer-format.txt, created in usr/gen_init_cpio.c, and
+ extracted in init/initramfs.c. All three together come to less than 26k
+ total of human-readable text.
+
+3) The GNU project standardizing on tar is approximately as relevant as
+ Windows standardizing on zip. Linux is not part of either, and is free
+ to make its own technical decisions.
+
+4) Since this is a kernel internal format, it could easily have been
+ something brand new. The kernel provides its own tools to create and
+ extract this format anyway. Using an existing standard was preferable,
+ but not essential.
+
+5) Al Viro made the decision (quote: "tar is ugly as hell and not going to be
+ supported on the kernel side"):
+
+ http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1540.html
+
+ explained his reasoning:
+
+ http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1550.html
+ http://www.uwsg.iu.edu/hypermail/linux/kernel/0112.2/1638.html
+
+ and, most importantly, designed and implemented the initramfs code.
+
Future directions:
------------------
diff --git a/Documentation/filesystems/relayfs.txt b/Documentation/filesystems/relayfs.txt
index d803abed29f01d..5832377b7340ed 100644
--- a/Documentation/filesystems/relayfs.txt
+++ b/Documentation/filesystems/relayfs.txt
@@ -44,30 +44,41 @@ relayfs can operate in a mode where it will overwrite data not yet
collected by userspace, and not wait for it to consume it.
relayfs itself does not provide for communication of such data between
-userspace and kernel, allowing the kernel side to remain simple and not
-impose a single interface on userspace. It does provide a separate
-helper though, described below.
+userspace and kernel, allowing the kernel side to remain simple and
+not impose a single interface on userspace. It does provide a set of
+examples and a separate helper though, described below.
+
+klog and relay-apps example code
+================================
+
+relayfs itself is ready to use, but to make things easier, a couple of
+simple utility functions and a set of examples are provided.
+
+The relay-apps example tarball, available on the relayfs sourceforge
+site, contains a set of self-contained examples, each consisting of a
+pair of .c files containing boilerplate code for each of the user and
+kernel sides of a relayfs application; combined these two sets of
+boilerplate code provide glue to easily stream data to disk, without
+having to bother with mundane housekeeping chores.
+
+The 'klog debugging functions' patch (klog.patch in the relay-apps
+tarball) provides a couple of high-level logging functions to the
+kernel which allow writing formatted text or raw data to a channel,
+regardless of whether a channel to write into exists or not, or
+whether relayfs is compiled into the kernel or is configured as a
+module. These functions allow you to put unconditional 'trace'
+statements anywhere in the kernel or kernel modules; only when there
+is a 'klog handler' registered will data actually be logged (see the
+klog and kleak examples for details).
+
+It is of course possible to use relayfs from scratch, i.e. without
+using any of the relay-apps example code or klog, but you'll have to
+implement communication between userspace and kernel, allowing both to
+convey the state of buffers (full, empty, amount of padding).
+
+klog and the relay-apps examples can be found in the relay-apps
+tarball on http://relayfs.sourceforge.net
-klog, relay-app & librelay
-==========================
-
-relayfs itself is ready to use, but to make things easier, two
-additional systems are provided. klog is a simple wrapper to make
-writing formatted text or raw data to a channel simpler, regardless of
-whether a channel to write into exists or not, or whether relayfs is
-compiled into the kernel or is configured as a module. relay-app is
-the kernel counterpart of userspace librelay.c, combined these two
-files provide glue to easily stream data to disk, without having to
-bother with housekeeping. klog and relay-app can be used together,
-with klog providing high-level logging functions to the kernel and
-relay-app taking care of kernel-user control and disk-logging chores.
-
-It is possible to use relayfs without relay-app & librelay, but you'll
-have to implement communication between userspace and kernel, allowing
-both to convey the state of buffers (full, empty, amount of padding).
-
-klog, relay-app and librelay can be found in the relay-apps tarball on
-http://relayfs.sourceforge.net
The relayfs user space API
==========================
@@ -125,6 +136,8 @@ Here's a summary of the API relayfs provides to in-kernel clients:
relay_reset(chan)
relayfs_create_dir(name, parent)
relayfs_remove_dir(dentry)
+ relayfs_create_file(name, parent, mode, fops, data)
+ relayfs_remove_file(dentry)
channel management typically called on instigation of userspace:
@@ -141,6 +154,8 @@ Here's a summary of the API relayfs provides to in-kernel clients:
subbuf_start(buf, subbuf, prev_subbuf, prev_padding)
buf_mapped(buf, filp)
buf_unmapped(buf, filp)
+ create_buf_file(filename, parent, mode, buf, is_global)
+ remove_buf_file(dentry)
helper functions:
@@ -320,6 +335,71 @@ forces a sub-buffer switch on all the channel buffers, and can be used
to finalize and process the last sub-buffers before the channel is
closed.
+Creating non-relay files
+------------------------
+
+relay_open() automatically creates files in the relayfs filesystem to
+represent the per-cpu kernel buffers; it's often useful for
+applications to be able to create their own files alongside the relay
+files in the relayfs filesystem as well, e.g. 'control' files much like
+those created in /proc or debugfs for similar purposes, used to
+communicate control information between the kernel and user sides of a
+relayfs application. For this purpose the relayfs_create_file() and
+relayfs_remove_file() API functions exist. For relayfs_create_file(),
+the caller passes in a set of user-defined file operations to be used
+for the file and an optional void * to a user-specified data item,
+which will be accessible via inode->u.generic_ip (see the relay-apps
+tarball for examples). The file_operations are a required parameter
+to relayfs_create_file() and thus the semantics of these files are
+completely defined by the caller.
+
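+A rough sketch of how a client might use these functions; the names
+app_dir, app_data and the control_* file operations are placeholders, and
+only the relayfs calls themselves come from the API described above:
+
+	static struct dentry *control_dentry;
+	static int app_data;		/* reachable via inode->u.generic_ip */
+
+	static struct file_operations control_fops = {
+		.owner	= THIS_MODULE,
+		.open	= control_open,		/* user-defined */
+		.read	= control_read,		/* user-defined */
+		.write	= control_write,	/* user-defined */
+	};
+
+	/* create a 'control' file alongside the relay files */
+	control_dentry = relayfs_create_file("control", app_dir,
+					     S_IRUGO | S_IWUSR,
+					     &control_fops, &app_data);
+
+	/* ... and remove it again when the application shuts down */
+	relayfs_remove_file(control_dentry);
+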
+See the relay-apps tarball at http://relayfs.sourceforge.net for
+examples of how these non-relay files are meant to be used.
+
+Creating relay files in other filesystems
+-----------------------------------------
+
+By default of course, relay_open() creates relay files in the relayfs
+filesystem. Because relay_file_operations is exported, however, it's
+also possible to create and use relay files in other pseudo-filesystems
+such as debugfs.
+
+For this purpose, two callback functions are provided,
+create_buf_file() and remove_buf_file(). create_buf_file() is called
+once for each per-cpu buffer from relay_open() to allow the client to
+create a file to be used to represent the corresponding buffer; if
+this callback is not defined, the default implementation will create
+and return a file in the relayfs filesystem to represent the buffer.
+The callback should return the dentry of the file created to represent
+the relay buffer. Note that the parent directory passed to
+relay_open() (and passed along to the callback), if specified, must
+exist in the same filesystem the new relay file is created in. If
+create_buf_file() is defined, remove_buf_file() must also be defined;
+it's responsible for deleting the file(s) created in create_buf_file()
+and is called during relay_close().
+
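+As an example, a client that wants its buffer files created in debugfs
+might supply callbacks along these lines (a sketch only; it assumes the
+standard debugfs helpers and relies on the exported relay_file_operations):
+
+	static struct dentry *create_buf_file_handler(const char *filename,
+						      struct dentry *parent,
+						      int mode,
+						      struct rchan_buf *buf,
+						      int *is_global)
+	{
+		/* represent this per-cpu relay buffer by a debugfs file */
+		return debugfs_create_file(filename, mode, parent, buf,
+					   &relay_file_operations);
+	}
+
+	static int remove_buf_file_handler(struct dentry *dentry)
+	{
+		debugfs_remove(dentry);
+		return 0;
+	}
+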
+The create_buf_file() implementation can also be defined in such a way
+as to allow the creation of a single 'global' buffer instead of the
+default per-cpu set. This can be useful for applications interested
+mainly in seeing the relative ordering of system-wide events without
+the need to bother with saving explicit timestamps for the purpose of
+merging/sorting per-cpu files in a postprocessing step.
+
+To have relay_open() create a global buffer, the create_buf_file()
+implementation should set the value of the is_global outparam to a
+non-zero value in addition to creating the file that will be used to
+represent the single buffer. In the case of a global buffer,
+create_buf_file() and remove_buf_file() will be called only once. The
+normal channel-writing functions e.g. relay_write() can still be used
+- writes from any cpu will transparently end up in the global buffer -
+but since it is a global buffer, callers should make sure they use the
+proper locking for such a buffer, either by wrapping writes in a
+spinlock, or by copying a write function from relayfs_fs.h and
+creating a local version that internally does the proper locking.
+
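+In the create_buf_file() sketch above, a single global buffer would be
+requested by also setting the outparam before returning, e.g.:
+
+	*is_global = 1;		/* one buffer for the whole system */
+	return debugfs_create_file(filename, mode, parent, buf,
+				   &relay_file_operations);
+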
+See the 'exported-relayfile' examples in the relay-apps tarball for
+examples of creating and using relay files in debugfs.
+
Misc
----
diff --git a/Documentation/keys-request-key.txt b/Documentation/keys-request-key.txt
index 5f2b9c5edbb517..22488d7911681e 100644
--- a/Documentation/keys-request-key.txt
+++ b/Documentation/keys-request-key.txt
@@ -56,10 +56,12 @@ A request proceeds in the following manner:
(4) request_key() then forks and executes /sbin/request-key with a new session
keyring that contains a link to auth key V.
- (5) /sbin/request-key execs an appropriate program to perform the actual
+ (5) /sbin/request-key assumes the authority associated with key U.
+
+ (6) /sbin/request-key execs an appropriate program to perform the actual
instantiation.
- (6) The program may want to access another key from A's context (say a
+ (7) The program may want to access another key from A's context (say a
Kerberos TGT key). It just requests the appropriate key, and the keyring
search notes that the session keyring has auth key V in its bottom level.
@@ -67,19 +69,19 @@ A request proceeds in the following manner:
UID, GID, groups and security info of process A as if it was process A,
and come up with key W.
- (7) The program then does what it must to get the data with which to
+ (8) The program then does what it must to get the data with which to
instantiate key U, using key W as a reference (perhaps it contacts a
Kerberos server using the TGT) and then instantiates key U.
- (8) Upon instantiating key U, auth key V is automatically revoked so that it
+ (9) Upon instantiating key U, auth key V is automatically revoked so that it
may not be used again.
- (9) The program then exits 0 and request_key() deletes key V and returns key
+(10) The program then exits 0 and request_key() deletes key V and returns key
U to the caller.
-This also extends further. If key W (step 5 above) didn't exist, key W would be
-created uninstantiated, another auth key (X) would be created [as per step 3]
-and another copy of /sbin/request-key spawned [as per step 4]; but the context
+This also extends further. If key W (step 7 above) didn't exist, key W would be
+created uninstantiated, another auth key (X) would be created (as per step 3)
+and another copy of /sbin/request-key spawned (as per step 4); but the context
specified by auth key X will still be process A, as it was in auth key V.
This is because process A's keyrings can't simply be attached to
@@ -138,8 +140,8 @@ until one succeeds:
(3) The process's session keyring is searched.
- (4) If the process has a request_key() authorisation key in its session
- keyring then:
+ (4) If the process has assumed the authority associated with a request_key()
+ authorisation key then:
(a) If extant, the calling process's thread keyring is searched.
diff --git a/Documentation/keys.txt b/Documentation/keys.txt
index 6304db59bfe456..aaa01b0e3ee942 100644
--- a/Documentation/keys.txt
+++ b/Documentation/keys.txt
@@ -308,6 +308,8 @@ process making the call:
KEY_SPEC_USER_KEYRING -4 UID-specific keyring
KEY_SPEC_USER_SESSION_KEYRING -5 UID-session keyring
KEY_SPEC_GROUP_KEYRING -6 GID-specific keyring
+ KEY_SPEC_REQKEY_AUTH_KEY -7 assumed request_key()
+ authorisation key
The main syscalls are:
@@ -498,7 +500,11 @@ The keyctl syscall functions are:
keyring is full, error ENFILE will result.
The link procedure checks the nesting of the keyrings, returning ELOOP if
- it appears to deep or EDEADLK if the link would introduce a cycle.
+ it appears too deep or EDEADLK if the link would introduce a cycle.
+
+ Any links within the keyring to keys that match the new key in terms of
+ type and description will be discarded from the keyring as the new one is
+ added.
(*) Unlink a key or keyring from another keyring:
@@ -628,6 +634,41 @@ The keyctl syscall functions are:
there is one, otherwise the user default session keyring.
+ (*) Set the timeout on a key.
+
+ long keyctl(KEYCTL_SET_TIMEOUT, key_serial_t key, unsigned timeout);
+
+ This sets or clears the timeout on a key. A timeout of 0 clears any
+ existing timeout; a non-zero value sets the key to expire that many
+ seconds into the future.
+
+ The process must have attribute modification access on a key to set its
+ timeout. Timeouts may not be set with this function on negative, revoked
+ or expired keys.
+
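+ As a usage sketch, assuming 'key' holds the serial number of a key whose
+ attributes the caller may modify:
+
+	keyctl(KEYCTL_SET_TIMEOUT, key, 3600);	/* expire one hour from now */
+	keyctl(KEYCTL_SET_TIMEOUT, key, 0);	/* clear the timeout again */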
+
+ (*) Assume the authority granted to instantiate a key
+
+ long keyctl(KEYCTL_ASSUME_AUTHORITY, key_serial_t key);
+
+ This assumes or divests the authority required to instantiate the
+ specified key. Authority can only be assumed if the thread has the
+ authorisation key associated with the specified key in its keyrings
+ somewhere.
+
+ Once authority is assumed, searches for keys will also search the
+ requester's keyrings using the requester's security label, UID, GID and
+ groups.
+
+ If the requested authority is unavailable, error EPERM will be returned;
+ likewise if the authority has been revoked because the target key is
+ already instantiated.
+
+ If the specified key is 0, then any assumed authority will be divested.
+
+ The assumed authoritative key is inherited across fork and exec.
+
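+ For example, an /sbin/request-key handler might do something like the
+ following (a sketch; how the target key's serial number reaches the
+ program is not shown):
+
+	if (keyctl(KEYCTL_ASSUME_AUTHORITY, key) < 0)
+		/* no matching authorisation key in our keyrings */
+		exit(1);
+
+	/* key searches now also run in the requester's context ... */
+
+	keyctl(KEYCTL_ASSUME_AUTHORITY, 0);	/* divest the authority again */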
+
===============
KERNEL SERVICES
===============
diff --git a/Documentation/networking/bonding.txt b/Documentation/networking/bonding.txt
index b0fe41da007bf8..8d8b4e5ea184a2 100644
--- a/Documentation/networking/bonding.txt
+++ b/Documentation/networking/bonding.txt
@@ -945,7 +945,6 @@ bond0 Link encap:Ethernet HWaddr 00:C0:F0:1F:37:B4
collisions:0 txqueuelen:0
eth0 Link encap:Ethernet HWaddr 00:C0:F0:1F:37:B4
- inet addr:XXX.XXX.XXX.YYY Bcast:XXX.XXX.XXX.255 Mask:255.255.252.0
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:3573025 errors:0 dropped:0 overruns:0 frame:0
TX packets:1643167 errors:1 dropped:0 overruns:1 carrier:0
@@ -953,7 +952,6 @@ eth0 Link encap:Ethernet HWaddr 00:C0:F0:1F:37:B4
Interrupt:10 Base address:0x1080
eth1 Link encap:Ethernet HWaddr 00:C0:F0:1F:37:B4
- inet addr:XXX.XXX.XXX.YYY Bcast:XXX.XXX.XXX.255 Mask:255.255.252.0
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:3651769 errors:0 dropped:0 overruns:0 frame:0
TX packets:1643480 errors:0 dropped:0 overruns:0 carrier:0
diff --git a/Documentation/sysctl/vm.txt b/Documentation/sysctl/vm.txt
index 2f1aae32a5d9df..6910c0136f8d7e 100644
--- a/Documentation/sysctl/vm.txt
+++ b/Documentation/sysctl/vm.txt
@@ -26,12 +26,13 @@ Currently, these files are in /proc/sys/vm:
- min_free_kbytes
- laptop_mode
- block_dump
+- drop_caches
==============================================================
dirty_ratio, dirty_background_ratio, dirty_expire_centisecs,
dirty_writeback_centisecs, vfs_cache_pressure, laptop_mode,
-block_dump, swap_token_timeout:
+block_dump, swap_token_timeout, drop_caches:
See Documentation/filesystems/proc.txt
@@ -102,3 +103,20 @@ This is used to force the Linux VM to keep a minimum number
of kilobytes free. The VM uses this number to compute a pages_min
value for each lowmem zone in the system. Each lowmem zone gets
a number of reserved free pages based proportionally on its size.
+
+==============================================================
+
+percpu_pagelist_fraction
+
+This is the divisor used to compute the maximum number of pages (the high
+mark, pcp->high) that each zone may allocate to each per-cpu page list.
+The minimum value is 8, meaning that no more than 1/8th of the pages in
+each zone may be allocated to any single per_cpu_pagelist. This entry only
+changes the value of hot per-cpu pagelists. The user can specify a number
+like 100 to allocate 1/100th of each zone to each per-cpu page list.
+
+The batch value of each per-cpu pagelist is also updated as a result. It is
+set to pcp->high/4. The upper limit of the batch value is (PAGE_SHIFT * 8).
+
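+For example, with this set to 100 on a zone of 100000 pages, each per-cpu
+page list is limited to 1000 pages, and its batch value is set to 250,
+subject to the upper limit described above.
+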
+The initial value is zero. The kernel does not use this value at boot time
+to set the high water marks for each per-cpu page list.