author     Thomas Renninger <trenn@suse.de>                2009-11-03 09:20:56 +0100
committer  Dominik Brodowski <linux@dominikbrodowski.net>  2009-11-07 11:20:41 +0100
commit     b7e6359765e530fa03a2c01013c8c95009e57eb2 (patch)
tree       fdf05e0dde90889558cad7016d2f7a52af3ed758
parent     cc4f7cb834d949f3b95f1cc18b23ec74164c3806 (diff)
download   cpufrequtils-b7e6359765e530fa03a2c01013c8c95009e57eb2.tar.gz
cpufreq-bench: Fix installation of bench README - enhance the README file
Currently the bench README gets overridden by the cpufrequtils README when
installing -> rename the bench README to README-BENCH when installing.
The rest are bench README enhancements.

CC: ckornacker@suse.de
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
-rw-r--r--  bench/Makefile        2
-rw-r--r--  bench/README          49
-rw-r--r--  bench/README-BENCH    124
3 files changed, 125 insertions, 50 deletions
diff --git a/bench/Makefile b/bench/Makefile
index 8ca609f..74732bb 100644
--- a/bench/Makefile
+++ b/bench/Makefile
@@ -19,7 +19,7 @@ install:
mkdir -p $(DESTDIR)/$(docdir)
mkdir -p $(DESTDIR)/$(confdir)
install -m 755 cpufreq-bench $(DESTDIR)/$(sbindir)/cpufreq-bench
- install -m 644 README $(DESTDIR)/$(docdir)/README
+ install -m 644 README-BENCH $(DESTDIR)/$(docdir)/README-BENCH
install -m 644 example.cfg $(DESTDIR)/$(confdir)/cpufreq-bench.conf
clean:
diff --git a/bench/README b/bench/README
deleted file mode 100644
index 4f6a225..0000000
--- a/bench/README
+++ /dev/null
@@ -1,49 +0,0 @@
-This is cpufreq-bench, a microbenchmark for the cpufreq framework.
-
-description:
-cpufreq-bench helps to test the condition of a given cpufreq governor.
-For that purpose, it compares the performance governor to a configured
-powersave module.
-
-The functional principle is quite easy: we generate a load for a specific
-time with the performance governor. The load is generated with some rounds
-of calculation. Now, we idle for some time to let the CPU change to a lower
-frequency. Then, We take that amount of rounds and do another test with
-the powersave governor. But now, we don’t generate load for a specific time
-but rather generate load with the amount of calculations we’ve got.
-The resulting time is compared to the time we spent for the initial
-performance calculation.
-The powersave cycle should take 1-40% longer than the performance cycle due
-to the time the CPU needs to change to a higher frequency.
-nr. of calculations
-
-^
-|__________________ _ _ _ _ _
-| performance | powersave|
-| | |
-| | |
-|-----------------------------------------------> time
-
-To get a more precise value, this sleep/load cycle is done several times.
-We use the average values for the comparison.
-After each round, a specific time is added to the load and sleep time to see
-how good the sleep/load switch behaves with different timeframes.
-
-
-usage:
--l, --load=<long int> initial load time in us
--s, --sleep=<long int> initial sleep time in us
--x, --load-step=<long int> time to be added to load time, in us
--y, --sleep-step=<long int> time to be added to sleep time, in us
--c, --cpu=<unsigned int> CPU Number to use, starting at 0
--p, --prio=<priority> scheduler priority, HIGH, LOW or DEFAULT
--g, --governor=<governor> cpufreq governor to test
--n, --cycles=<int> load/sleep cycles to get an avarage value to compare
--r, --rounds<int> load/sleep rounds
--f, --file=<configfile> config file to use
--o, --output=<dir> output dir, must exist
--v, --verbose verbose output on/off
-
-Due to the high priority, the application my not be responsible for some time.
-After the benchmark, the logfile is saved in OUTPUTDIR/benchmark_TIMESTAMP.log
-
diff --git a/bench/README-BENCH b/bench/README-BENCH
new file mode 100644
index 0000000..8093ec7
--- /dev/null
+++ b/bench/README-BENCH
@@ -0,0 +1,124 @@
+This is cpufreq-bench, a microbenchmark for the cpufreq framework.
+
+Purpose
+=======
+
+What is this benchmark for:
+ - Identify worst-case performance loss when doing dynamic frequency
+   scaling using Linux kernel governors
+ - Identify the average reaction time of a governor to CPU load changes
+ - (Stress) test whether a cpufreq low-level driver or governor works
+   as expected
+ - Identify cpufreq-related performance regressions between kernels
+ - Possibly real-time priority testing? -> what happens if there are
+   processes with a higher priority than the governor's kernel thread
+ - ...
+
+What this benchmark does *not* cover:
+ - Power-saving-related regressions (in fact, the better the performance
+   throughput is, the worse the power savings will be, but the former
+   should mostly count more...)
+ - Real-world workloads
+
+
+Description
+===========
+
+cpufreq-bench helps to test the condition of a given cpufreq governor.
+For that purpose, it compares the performance governor to a configured
+powersave module.
+
+
+How it works
+============
+You can specify load (100% CPU load) and sleep (0% CPU load) times in us,
+which will be run X times in a row (cycles):
+
+ sleep=25000
+ load=25000
+ cycles=20
+
+This part of the configuration file will create 25 ms load/sleep cycles,
+repeated 20 times.
+
+Adding this:
+ sleep_step=25000
+ load_step=25000
+ rounds=5
+will increase the load and sleep times by 25 ms, five times.
+Together you get the following test:
+25ms load/sleep time repeated 20 times (cycles).
+50ms load/sleep time repeated 20 times (cycles).
+..
+100ms load/sleep time repeated 20 times (cycles).
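As a rough sketch (not part of cpufreq-bench; the function name and the exact round-count semantics are assumptions), the per-round load/sleep times implied by such a config can be derived like this:

```python
# Illustrative sketch, assuming the step values are added once per round
# after the first round.
def round_schedule(load_us, sleep_us, load_step_us, sleep_step_us, rounds):
    """Return one (load, sleep) pair in microseconds per round."""
    return [(load_us + r * load_step_us, sleep_us + r * sleep_step_us)
            for r in range(rounds)]

# Values from the config fragment above:
schedule = round_schedule(25000, 25000, 25000, 25000, 5)
# schedule[0] is (25000, 25000), i.e. the 25 ms round; each later round
# adds another 25 ms to both the load and the sleep time.
```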
+
+First, the benchmark calibrates how long a specific CPU-intensive
+calculation takes on this machine when run in a loop using the
+performance governor.
+Then the above test runs are processed using both the performance
+governor and the governor under test. The time the calculation actually
+needed with the dynamic frequency scaling governor is compared with the
+time needed at full performance, giving the overall performance loss.
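The comparison described above boils down to a simple ratio; this is an illustration only (the function name and the sample timings are made up, not the tool's actual code):

```python
# Illustrative sketch: overall performance loss of the governor under
# test relative to the performance governor.
def performance_loss_pct(time_performance_us, time_governor_us):
    """How much longer (in %) the same calculation took under the
    dynamic frequency scaling governor than at full performance."""
    return (time_governor_us - time_performance_us) / time_performance_us * 100.0

# e.g. if the calculation takes 110 ms instead of 100 ms:
performance_loss_pct(100000, 110000)  # -> 10.0 (% performance loss)
```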
+
+
+Example of expected results with the ondemand governor:
+
+This shows the expected results of the first two test rounds from the
+above config; there you have:
+
+100% CPU load (load) | 0 % CPU load (sleep) | round
+ 25 ms | 25 ms | 1
+ 50 ms | 50 ms | 2
+
+For example, if the ondemand governor is configured with a 50 ms
+sampling rate, you get:
+
+In round 1, ondemand should see a rather static 50% load and probably
+won't ever switch up (as long as up_threshold is above 50%).
+
+In round 2, if the ondemand sampling times exactly match the load/sleep
+transitions of cpufreq-bench, you will see no performance loss (compare
+with the possible ondemand sample kick-ins (1) below).
+
+But if ondemand always kicks in in the middle of the load/sleep cycles, it
+will always see 50% load and you get the worst performance impact, never
+switching up (compare with the possible ondemand sample kick-ins (2) below):
+
+ 50 50 50 50ms ->time
+load -----| |-----| |-----| |-----|
+ | | | | | | |
+sleep |-----| |-----| |-----| |----
+ |-----|-----|-----|-----|-----|-----|-----|---- ondemand sampling (1)
+ 100 0 100 0 100 0 100 load seen by ondemand(%)
+ |-----|-----|-----|-----|-----|-----|-----|-- ondemand sampling (2)
+ 50 50 50 50 50 50 50 load seen by ondemand(%)
+
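The two sampling alignments in the diagram can be reproduced with a small simulation (an illustration with assumed names, not part of cpufreq-bench):

```python
# Illustrative sketch: average load a fixed-rate sampler sees per window
# when the workload alternates equal load and sleep halves of a period.
# Times are in arbitrary units (think ms).
def load_per_window(period, window, offset, n_windows):
    """Percent of each sampling window spent in the load half of the
    cycle; load occupies the first half of every period."""
    half = period // 2
    loads = []
    for w in range(n_windows):
        start = offset + w * window
        busy = sum(1 for t in range(start, start + window)
                   if t % period < half)
        loads.append(100 * busy // window)
    return loads

# 50 load / 50 sleep, 50-unit sampling windows:
aligned = load_per_window(100, 50, 0, 4)    # [100, 0, 100, 0]  -> case (1)
shifted = load_per_window(100, 50, 25, 4)   # [50, 50, 50, 50]  -> case (2)
```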
+You can easily test all kinds of load/sleep times and check whether your
+governor on average behaves as expected.
+
+
+ToDo
+====
+
+Provide a gnuplot utility script for easy generation of plots to present
+the outcome nicely.
+
+
+cpufreq-bench Command Usage
+===========================
+-l, --load=<long int> initial load time in us
+-s, --sleep=<long int> initial sleep time in us
+-x, --load-step=<long int> time to be added to load time, in us
+-y, --sleep-step=<long int> time to be added to sleep time, in us
+-c, --cpu=<unsigned int> CPU Number to use, starting at 0
+-p, --prio=<priority> scheduler priority, HIGH, LOW or DEFAULT
+-g, --governor=<governor> cpufreq governor to test
+-n, --cycles=<int>		load/sleep cycles to get an average value to compare
+-r, --rounds=<int>		load/sleep rounds
+-f, --file=<configfile> config file to use
+-o, --output=<dir> output dir, must exist
+-v, --verbose verbose output on/off
+
+Due to the high priority, the application may not be responsive for some time.
+After the benchmark, the logfile is saved in OUTPUTDIR/benchmark_TIMESTAMP.log
+