author	Coly Li <colyli@suse.de>	2021-07-26 15:17:13 +0800
committer	Coly Li <colyli@suse.de>	2021-07-26 15:17:13 +0800
commit	9800b7c8692617e70bf80fa47c9549a35ad6429d (patch)
tree	794a639724e6a8ea65d18093d752d678bd71f1be
parent	b4a947c4f3054998b78ff170407a88a0a57c8346 (diff)
download	bcache-patches-9800b7c8692617e70bf80fa47c9549a35ad6429d.tar.gz
for-next: clean merged patches
-rw-r--r--	for-next/0001-bcache-consider-the-fragmentation-when-update-the-wr.patch	266
-rw-r--r--	for-next/0002-bcache-Fix-register_device_aync-typo.patch	39
-rw-r--r--	for-next/0003-Revert-bcache-Kill-btree_io_wq.patch	118
-rw-r--r--	for-next/0004-bcache-Give-btree_io_wq-correct-semantics-again.patch	37
-rw-r--r--	for-next/0005-bcache-Move-journal-work-to-new-flush-wq.patch	101
-rw-r--r--	for-next/0006-bcache-Avoid-comma-separated-statements.patch	64
6 files changed, 0 insertions, 625 deletions
diff --git a/for-next/0001-bcache-consider-the-fragmentation-when-update-the-wr.patch b/for-next/0001-bcache-consider-the-fragmentation-when-update-the-wr.patch
deleted file mode 100644
index 122241f..0000000
--- a/for-next/0001-bcache-consider-the-fragmentation-when-update-the-wr.patch
+++ /dev/null
@@ -1,266 +0,0 @@
-From 5b756fccaa5a77ce84362be304d57eb29229b728 Mon Sep 17 00:00:00 2001
-From: dongdong tao <dongdong.tao@canonical.com>
-Date: Wed, 20 Jan 2021 20:01:52 +0800
-Subject: [PATCH 1/6] bcache: consider the fragmentation when update the
- writeback rate
-
-The current way to calculate the writeback rate considers only the
-dirty sectors. This usually works fine when fragmentation is not
-high, but it gives us an unreasonably small rate when very few
-dirty sectors occupy a lot of dirty buckets. In some cases the
-dirty buckets can reach CUTOFF_WRITEBACK_SYNC while the dirty
-data (sectors) has not even reached writeback_percent; the writeback
-rate will then still be the minimum value (4k), causing all writes
-to be stuck in a non-writeback mode because of the slow writeback.
-
-We accelerate the rate in 3 stages with different aggressiveness:
-the first stage starts when the dirty bucket percentage rises above
-BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW (50), the second at
-BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID (57), and the third at
-BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH (64). By default,
-the first stage tries to write back the amount of dirty data
-in one bucket (on average) in (1 / (dirty_buckets_percent - 50)) seconds,
-the second stage in (1 / (dirty_buckets_percent - 57)) * 100
-milliseconds, and the third stage in
-(1 / (dirty_buckets_percent - 64)) milliseconds.
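-
-As a purely illustrative calculation (the sample percentages are
-assumed, not measured): at dirty_buckets_percent = 52 we are in the
-first stage, so the target is to write back one bucket's worth of
-dirty data every 1 / (52 - 50) = 0.5 seconds; at 60 we are in the
-second stage, one bucket every (1 / (60 - 57)) * 100 ~= 33
-milliseconds; and at 70 we are in the third stage, one bucket every
-1 / (70 - 64) ~= 0.17 milliseconds.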
-
-The initial rate at each stage can be controlled by 3 configurable
-parameters, writeback_rate_fp_term_{low|mid|high}, which default to
-1, 10 and 1000 respectively. The I/O throughput these values try to
-achieve is described in the paragraphs above; the reason I chose
-them as defaults is based on testing and production data. Some
-details follow:
-
-A. When it comes to the low stage, we are still a fair distance from
-   the 70 threshold, so we only want to give it a little push by
-   setting the term to 1. This means the initial rate will be 170 if
-   the fragment is 6; it is calculated as bucket_size / fragment (see
-   the worked example after this list). This rate is very small, but
-   still much more reasonable than the minimum of 8.
-   For a production bcache with a light workload, if the cache device
-   is bigger than 1 TB, it may take hours to consume 1% of the
-   buckets, so it is very possible to reclaim enough dirty buckets in
-   this stage and thus avoid entering the next stage.
-
-B. If the dirty bucket ratio didn't turn around during the first
-   stage, we enter the mid stage. The mid stage needs to be more
-   aggressive than the low stage, so I chose an initial rate 10 times
-   that of the low stage, which means 1700 as the initial rate if the
-   fragment is 6. This is a normal rate we usually see for an ordinary
-   workload when writeback happens because of writeback_percent.
-
-C. If the dirty bucket ratio didn't turn around during the low and
-   mid stages, we enter the third stage, and it is the last chance to
-   turn around and avoid the horrible cutoff writeback sync issue.
-   Here we are 100 times more aggressive than the mid stage, which
-   means 170000 as the initial rate if the fragment is 6. This is
-   also inferred from a production bcache: I collected one week of
-   writeback rate data from a production bcache with quite heavy
-   workloads (again with writeback triggered by writeback_percent),
-   and the highest rates were around 100000 to 240000, so I believe
-   this kind of aggressiveness is reasonable for production. It
-   should also be mostly enough, because the hint tries to reclaim
-   1000 buckets per second, while that heavy production environment
-   consumed 50 buckets per second on average over the week.
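-
-As a worked example for the numbers above (assuming a 1024-sector
-bucket, which is what the 170 figure in A implies): fragment is
-(dirty_buckets * bucket_size) / dirty, so dirty / dirty_buckets =
-bucket_size / fragment = 1024 / 6 ~= 170. The initial rate is then
-(dirty / dirty_buckets) * fp_term, i.e. 170 * 1 = 170 in the low
-stage, 170 * 10 = 1700 in the mid stage, and 170 * 1000 = 170000
-in the high stage, matching the figures quoted above.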
-
-The option writeback_consider_fragment controls whether this feature
-is on or off; it is on by default.
-
-Lastly, below is the performance data for all of the testing
-results, including data from the production environment:
-https://docs.google.com/document/d/1AmbIEa_2MhB9bqhC3rfga9tp7n9YX9PLn0jSUxscVW0/edit?usp=sharing
-
-Signed-off-by: dongdong tao <dongdong.tao@canonical.com>
-Signed-off-by: Coly Li <colyli@suse.de>
----
- drivers/md/bcache/bcache.h | 4 ++++
- drivers/md/bcache/sysfs.c | 23 +++++++++++++++++++
- drivers/md/bcache/writeback.c | 42 +++++++++++++++++++++++++++++++++++
- drivers/md/bcache/writeback.h | 4 ++++
- 4 files changed, 73 insertions(+)
-
-diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
-index 1d57f48307e6..d7a84327b7f1 100644
---- a/drivers/md/bcache/bcache.h
-+++ b/drivers/md/bcache/bcache.h
-@@ -373,6 +373,7 @@ struct cached_dev {
- unsigned int partial_stripes_expensive:1;
- unsigned int writeback_metadata:1;
- unsigned int writeback_running:1;
-+ unsigned int writeback_consider_fragment:1;
- unsigned char writeback_percent;
- unsigned int writeback_delay;
-
-@@ -385,6 +386,9 @@ struct cached_dev {
- unsigned int writeback_rate_update_seconds;
- unsigned int writeback_rate_i_term_inverse;
- unsigned int writeback_rate_p_term_inverse;
-+ unsigned int writeback_rate_fp_term_low;
-+ unsigned int writeback_rate_fp_term_mid;
-+ unsigned int writeback_rate_fp_term_high;
- unsigned int writeback_rate_minimum;
-
- enum stop_on_failure stop_when_cache_set_failed;
-diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
-index 00a520c03f41..eef15f8022ba 100644
---- a/drivers/md/bcache/sysfs.c
-+++ b/drivers/md/bcache/sysfs.c
-@@ -117,10 +117,14 @@ rw_attribute(writeback_running);
- rw_attribute(writeback_percent);
- rw_attribute(writeback_delay);
- rw_attribute(writeback_rate);
-+rw_attribute(writeback_consider_fragment);
-
- rw_attribute(writeback_rate_update_seconds);
- rw_attribute(writeback_rate_i_term_inverse);
- rw_attribute(writeback_rate_p_term_inverse);
-+rw_attribute(writeback_rate_fp_term_low);
-+rw_attribute(writeback_rate_fp_term_mid);
-+rw_attribute(writeback_rate_fp_term_high);
- rw_attribute(writeback_rate_minimum);
- read_attribute(writeback_rate_debug);
-
-@@ -195,6 +199,7 @@ SHOW(__bch_cached_dev)
- var_printf(bypass_torture_test, "%i");
- var_printf(writeback_metadata, "%i");
- var_printf(writeback_running, "%i");
-+ var_printf(writeback_consider_fragment, "%i");
- var_print(writeback_delay);
- var_print(writeback_percent);
- sysfs_hprint(writeback_rate,
-@@ -205,6 +210,9 @@ SHOW(__bch_cached_dev)
- var_print(writeback_rate_update_seconds);
- var_print(writeback_rate_i_term_inverse);
- var_print(writeback_rate_p_term_inverse);
-+ var_print(writeback_rate_fp_term_low);
-+ var_print(writeback_rate_fp_term_mid);
-+ var_print(writeback_rate_fp_term_high);
- var_print(writeback_rate_minimum);
-
- if (attr == &sysfs_writeback_rate_debug) {
-@@ -303,6 +311,7 @@ STORE(__cached_dev)
- sysfs_strtoul_bool(bypass_torture_test, dc->bypass_torture_test);
- sysfs_strtoul_bool(writeback_metadata, dc->writeback_metadata);
- sysfs_strtoul_bool(writeback_running, dc->writeback_running);
-+ sysfs_strtoul_bool(writeback_consider_fragment, dc->writeback_consider_fragment);
- sysfs_strtoul_clamp(writeback_delay, dc->writeback_delay, 0, UINT_MAX);
-
- sysfs_strtoul_clamp(writeback_percent, dc->writeback_percent,
-@@ -331,6 +340,16 @@ STORE(__cached_dev)
- sysfs_strtoul_clamp(writeback_rate_p_term_inverse,
- dc->writeback_rate_p_term_inverse,
- 1, UINT_MAX);
-+ sysfs_strtoul_clamp(writeback_rate_fp_term_low,
-+ dc->writeback_rate_fp_term_low,
-+ 1, dc->writeback_rate_fp_term_mid - 1);
-+ sysfs_strtoul_clamp(writeback_rate_fp_term_mid,
-+ dc->writeback_rate_fp_term_mid,
-+ dc->writeback_rate_fp_term_low + 1,
-+ dc->writeback_rate_fp_term_high - 1);
-+ sysfs_strtoul_clamp(writeback_rate_fp_term_high,
-+ dc->writeback_rate_fp_term_high,
-+ dc->writeback_rate_fp_term_mid + 1, UINT_MAX);
- sysfs_strtoul_clamp(writeback_rate_minimum,
- dc->writeback_rate_minimum,
- 1, UINT_MAX);
-@@ -499,9 +518,13 @@ static struct attribute *bch_cached_dev_files[] = {
- &sysfs_writeback_delay,
- &sysfs_writeback_percent,
- &sysfs_writeback_rate,
-+ &sysfs_writeback_consider_fragment,
- &sysfs_writeback_rate_update_seconds,
- &sysfs_writeback_rate_i_term_inverse,
- &sysfs_writeback_rate_p_term_inverse,
-+ &sysfs_writeback_rate_fp_term_low,
-+ &sysfs_writeback_rate_fp_term_mid,
-+ &sysfs_writeback_rate_fp_term_high,
- &sysfs_writeback_rate_minimum,
- &sysfs_writeback_rate_debug,
- &sysfs_io_errors,
-diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
-index a129e4d2707c..82d4e0880a99 100644
---- a/drivers/md/bcache/writeback.c
-+++ b/drivers/md/bcache/writeback.c
-@@ -88,6 +88,44 @@ static void __update_writeback_rate(struct cached_dev *dc)
- int64_t integral_scaled;
- uint32_t new_rate;
-
-+	/*
-+	 * We need to consider the number of dirty buckets as well
-+	 * when calculating proportional_scaled; otherwise we might
-+	 * get an unreasonably small writeback rate in a highly
-+	 * fragmented situation where very few dirty sectors occupy
-+	 * a lot of dirty buckets. The worst case is when the dirty
-+	 * buckets reach cutoff_writeback_sync while the dirty data
-+	 * has not even reached writeback_percent; the rate then stays
-+	 * at the minimum, which leaves writes stuck in non-writeback mode.
-+	 */
-+ struct cache_set *c = dc->disk.c;
-+
-+ int64_t dirty_buckets = c->nbuckets - c->avail_nbuckets;
-+
-+ if (dc->writeback_consider_fragment &&
-+ c->gc_stats.in_use > BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW && dirty > 0) {
-+ int64_t fragment =
-+ div_s64((dirty_buckets * c->cache->sb.bucket_size), dirty);
-+ int64_t fp_term;
-+ int64_t fps;
-+
-+ if (c->gc_stats.in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID) {
-+ fp_term = dc->writeback_rate_fp_term_low *
-+ (c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW);
-+ } else if (c->gc_stats.in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH) {
-+ fp_term = dc->writeback_rate_fp_term_mid *
-+ (c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID);
-+ } else {
-+ fp_term = dc->writeback_rate_fp_term_high *
-+ (c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH);
-+ }
-+ fps = div_s64(dirty, dirty_buckets) * fp_term;
-+ if (fragment > 3 && fps > proportional_scaled) {
-+			/* Only overwrite proportional_scaled when fragment > 3 */
-+ proportional_scaled = fps;
-+ }
-+ }
-+
- if ((error < 0 && dc->writeback_rate_integral > 0) ||
- (error > 0 && time_before64(local_clock(),
- dc->writeback_rate.next + NSEC_PER_MSEC))) {
-@@ -977,6 +1015,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
-
- dc->writeback_metadata = true;
- dc->writeback_running = false;
-+ dc->writeback_consider_fragment = true;
- dc->writeback_percent = 10;
- dc->writeback_delay = 30;
- atomic_long_set(&dc->writeback_rate.rate, 1024);
-@@ -984,6 +1023,9 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
-
- dc->writeback_rate_update_seconds = WRITEBACK_RATE_UPDATE_SECS_DEFAULT;
- dc->writeback_rate_p_term_inverse = 40;
-+ dc->writeback_rate_fp_term_low = 1;
-+ dc->writeback_rate_fp_term_mid = 10;
-+ dc->writeback_rate_fp_term_high = 1000;
- dc->writeback_rate_i_term_inverse = 10000;
-
- WARN_ON(test_and_clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags));
-diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
-index 3f1230e22de0..02b2f9df73f6 100644
---- a/drivers/md/bcache/writeback.h
-+++ b/drivers/md/bcache/writeback.h
-@@ -16,6 +16,10 @@
-
- #define BCH_AUTO_GC_DIRTY_THRESHOLD 50
-
-+#define BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW 50
-+#define BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID 57
-+#define BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH 64
-+
- #define BCH_DIRTY_INIT_THRD_MAX 64
- /*
- * 14 (16384ths) is chosen here as something that each backing device
---
-2.26.2
-
diff --git a/for-next/0002-bcache-Fix-register_device_aync-typo.patch b/for-next/0002-bcache-Fix-register_device_aync-typo.patch
deleted file mode 100644
index 2b3434c..0000000
--- a/for-next/0002-bcache-Fix-register_device_aync-typo.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-From 40c0086acb7d8384e9998715d70bfff12b2de4d7 Mon Sep 17 00:00:00 2001
-From: Kai Krakow <kai@kaishome.de>
-Date: Thu, 28 Jan 2021 15:33:19 +0100
-Subject: [PATCH 2/6] bcache: Fix register_device_aync typo
-
-Should be `register_device_async`.
-
-Cc: Coly Li <colyli@suse.de>
-Signed-off-by: Kai Krakow <kai@kaishome.de>
-Signed-off-by: Coly Li <colyli@suse.de>
----
- drivers/md/bcache/super.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
-index 2047a9cccdb5..e7d1b52c5cc8 100644
---- a/drivers/md/bcache/super.c
-+++ b/drivers/md/bcache/super.c
-@@ -2517,7 +2517,7 @@ static void register_cache_worker(struct work_struct *work)
- module_put(THIS_MODULE);
- }
-
--static void register_device_aync(struct async_reg_args *args)
-+static void register_device_async(struct async_reg_args *args)
- {
- if (SB_IS_BDEV(args->sb))
- INIT_DELAYED_WORK(&args->reg_work, register_bdev_worker);
-@@ -2611,7 +2611,7 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr,
- args->sb = sb;
- args->sb_disk = sb_disk;
- args->bdev = bdev;
-- register_device_aync(args);
-+ register_device_async(args);
- /* No wait and returns to user space */
- goto async_done;
- }
---
-2.26.2
-
diff --git a/for-next/0003-Revert-bcache-Kill-btree_io_wq.patch b/for-next/0003-Revert-bcache-Kill-btree_io_wq.patch
deleted file mode 100644
index b63c78d..0000000
--- a/for-next/0003-Revert-bcache-Kill-btree_io_wq.patch
+++ /dev/null
@@ -1,118 +0,0 @@
-From 0e29284793e52fd086da2fed409b0af9bca03b53 Mon Sep 17 00:00:00 2001
-From: Kai Krakow <kai@kaishome.de>
-Date: Fri, 29 Jan 2021 17:40:05 +0100
-Subject: [PATCH 3/6] Revert "bcache: Kill btree_io_wq"
-
-This reverts commit 56b30770b27d54d68ad51eccc6d888282b568cee.
-
-With the btree using the `system_wq`, I seem to see a lot more desktop
-latency than I should.
-
-After some more investigation, it looks like the original assumption
-of 56b3077 is no longer true, and bcache has a very high potential
-for congesting the `system_wq`. In turn, this introduces laggy
-desktop performance, IO stalls (at least with btrfs), and delayed
-input events.
-
-So let's revert this. It's important to note that the previous
-semantics of using `system_wq` mean that `btree_io_wq` should be
-created before and destroyed after the other bcache wqs, to keep
-the same assumptions.
-
-Cc: Coly Li <colyli@suse.de>
-Cc: stable@vger.kernel.org # 5.4+
-Signed-off-by: Kai Krakow <kai@kaishome.de>
-Signed-off-by: Coly Li <colyli@suse.de>
----
- drivers/md/bcache/bcache.h | 2 ++
- drivers/md/bcache/btree.c | 21 +++++++++++++++++++--
- drivers/md/bcache/super.c | 4 ++++
- 3 files changed, 25 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
-index d7a84327b7f1..2b8c7dd2cfae 100644
---- a/drivers/md/bcache/bcache.h
-+++ b/drivers/md/bcache/bcache.h
-@@ -1046,5 +1046,7 @@ void bch_debug_exit(void);
- void bch_debug_init(void);
- void bch_request_exit(void);
- int bch_request_init(void);
-+void bch_btree_exit(void);
-+int bch_btree_init(void);
-
- #endif /* _BCACHE_H */
-diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
-index 910df242c83d..952f022db5a5 100644
---- a/drivers/md/bcache/btree.c
-+++ b/drivers/md/bcache/btree.c
-@@ -99,6 +99,8 @@
- #define PTR_HASH(c, k) \
- (((k)->ptr[0] >> c->bucket_bits) | PTR_GEN(k, 0))
-
-+static struct workqueue_struct *btree_io_wq;
-+
- #define insert_lock(s, b) ((b)->level <= (s)->lock)
-
-
-@@ -308,7 +310,7 @@ static void __btree_node_write_done(struct closure *cl)
- btree_complete_write(b, w);
-
- if (btree_node_dirty(b))
-- schedule_delayed_work(&b->work, 30 * HZ);
-+ queue_delayed_work(btree_io_wq, &b->work, 30 * HZ);
-
- closure_return_with_destructor(cl, btree_node_write_unlock);
- }
-@@ -481,7 +483,7 @@ static void bch_btree_leaf_dirty(struct btree *b, atomic_t *journal_ref)
- BUG_ON(!i->keys);
-
- if (!btree_node_dirty(b))
-- schedule_delayed_work(&b->work, 30 * HZ);
-+ queue_delayed_work(btree_io_wq, &b->work, 30 * HZ);
-
- set_btree_node_dirty(b);
-
-@@ -2764,3 +2766,18 @@ void bch_keybuf_init(struct keybuf *buf)
- spin_lock_init(&buf->lock);
- array_allocator_init(&buf->freelist);
- }
-+
-+void bch_btree_exit(void)
-+{
-+ if (btree_io_wq)
-+ destroy_workqueue(btree_io_wq);
-+}
-+
-+int __init bch_btree_init(void)
-+{
-+ btree_io_wq = create_singlethread_workqueue("bch_btree_io");
-+ if (!btree_io_wq)
-+ return -ENOMEM;
-+
-+ return 0;
-+}
-diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
-index e7d1b52c5cc8..85a44a0cffe0 100644
---- a/drivers/md/bcache/super.c
-+++ b/drivers/md/bcache/super.c
-@@ -2821,6 +2821,7 @@ static void bcache_exit(void)
- destroy_workqueue(bcache_wq);
- if (bch_journal_wq)
- destroy_workqueue(bch_journal_wq);
-+ bch_btree_exit();
-
- if (bcache_major)
- unregister_blkdev(bcache_major, "bcache");
-@@ -2876,6 +2877,9 @@ static int __init bcache_init(void)
- return bcache_major;
- }
-
-+ if (bch_btree_init())
-+ goto err;
-+
- bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM, 0);
- if (!bcache_wq)
- goto err;
---
-2.26.2
-
diff --git a/for-next/0004-bcache-Give-btree_io_wq-correct-semantics-again.patch b/for-next/0004-bcache-Give-btree_io_wq-correct-semantics-again.patch
deleted file mode 100644
index 94e0a53..0000000
--- a/for-next/0004-bcache-Give-btree_io_wq-correct-semantics-again.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-From 57c862900fae3b3a9158e28e71f8a6f1af305246 Mon Sep 17 00:00:00 2001
-From: Kai Krakow <kai@kaishome.de>
-Date: Fri, 29 Jan 2021 17:40:06 +0100
-Subject: [PATCH 4/6] bcache: Give btree_io_wq correct semantics again
-
-Before killing `btree_io_wq`, the queue was allocated using
-`create_singlethread_workqueue()` which has `WQ_MEM_RECLAIM`. After
-killing it, it no longer had this property but `system_wq` is not
-single threaded.
-
-Let's combine both worlds and make it multi threaded but able to
-reclaim memory.
-
-Cc: Coly Li <colyli@suse.de>
-Cc: stable@vger.kernel.org # 5.4+
-Signed-off-by: Kai Krakow <kai@kaishome.de>
-Signed-off-by: Coly Li <colyli@suse.de>
----
- drivers/md/bcache/btree.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
-index 952f022db5a5..fe6dce125aba 100644
---- a/drivers/md/bcache/btree.c
-+++ b/drivers/md/bcache/btree.c
-@@ -2775,7 +2775,7 @@ void bch_btree_exit(void)
-
- int __init bch_btree_init(void)
- {
-- btree_io_wq = create_singlethread_workqueue("bch_btree_io");
-+ btree_io_wq = alloc_workqueue("bch_btree_io", WQ_MEM_RECLAIM, 0);
- if (!btree_io_wq)
- return -ENOMEM;
-
---
-2.26.2
-
diff --git a/for-next/0005-bcache-Move-journal-work-to-new-flush-wq.patch b/for-next/0005-bcache-Move-journal-work-to-new-flush-wq.patch
deleted file mode 100644
index d7c0c22..0000000
--- a/for-next/0005-bcache-Move-journal-work-to-new-flush-wq.patch
+++ /dev/null
@@ -1,101 +0,0 @@
-From 06ccb26034779f39e0f3ed945c90fc8b2dbcc1f5 Mon Sep 17 00:00:00 2001
-From: Kai Krakow <kai@kaishome.de>
-Date: Fri, 29 Jan 2021 17:40:07 +0100
-Subject: [PATCH 5/6] bcache: Move journal work to new flush wq
-
-This is potentially long-running and not latency-sensitive; let's
-get it out of the way of other latency-sensitive events.
-
-As observed in the previous commit, the `system_wq` is easily
-congested by bcache, and this fixes a few more stalls I was
-observing every once in a while.
-
-Let's not make this `WQ_MEM_RECLAIM`, as it was shown to reduce the
-performance of boot and file system operations in my tests. Also,
-without `WQ_MEM_RECLAIM`, I no longer see desktop stalls. This
-matches the previous behavior, as `system_wq` also does no memory
-reclaim:
-
-> // workqueue.c:
-> system_wq = alloc_workqueue("events", 0, 0);
-
-Cc: Coly Li <colyli@suse.de>
-Cc: stable@vger.kernel.org # 5.4+
-Signed-off-by: Kai Krakow <kai@kaishome.de>
-Signed-off-by: Coly Li <colyli@suse.de>
----
- drivers/md/bcache/bcache.h | 1 +
- drivers/md/bcache/journal.c | 4 ++--
- drivers/md/bcache/super.c | 16 ++++++++++++++++
- 3 files changed, 19 insertions(+), 2 deletions(-)
-
-diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
-index 2b8c7dd2cfae..848dd4db1659 100644
---- a/drivers/md/bcache/bcache.h
-+++ b/drivers/md/bcache/bcache.h
-@@ -1005,6 +1005,7 @@ void bch_write_bdev_super(struct cached_dev *dc, struct closure *parent);
-
- extern struct workqueue_struct *bcache_wq;
- extern struct workqueue_struct *bch_journal_wq;
-+extern struct workqueue_struct *bch_flush_wq;
- extern struct mutex bch_register_lock;
- extern struct list_head bch_cache_sets;
-
-diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
-index aefbdb7e003b..c6613e817333 100644
---- a/drivers/md/bcache/journal.c
-+++ b/drivers/md/bcache/journal.c
-@@ -932,8 +932,8 @@ atomic_t *bch_journal(struct cache_set *c,
- journal_try_write(c);
- } else if (!w->dirty) {
- w->dirty = true;
-- schedule_delayed_work(&c->journal.work,
-- msecs_to_jiffies(c->journal_delay_ms));
-+ queue_delayed_work(bch_flush_wq, &c->journal.work,
-+ msecs_to_jiffies(c->journal_delay_ms));
- spin_unlock(&c->journal.lock);
- } else {
- spin_unlock(&c->journal.lock);
-diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
-index 85a44a0cffe0..0228ccb293fc 100644
---- a/drivers/md/bcache/super.c
-+++ b/drivers/md/bcache/super.c
-@@ -49,6 +49,7 @@ static int bcache_major;
- static DEFINE_IDA(bcache_device_idx);
- static wait_queue_head_t unregister_wait;
- struct workqueue_struct *bcache_wq;
-+struct workqueue_struct *bch_flush_wq;
- struct workqueue_struct *bch_journal_wq;
-
-
-@@ -2821,6 +2822,8 @@ static void bcache_exit(void)
- destroy_workqueue(bcache_wq);
- if (bch_journal_wq)
- destroy_workqueue(bch_journal_wq);
-+ if (bch_flush_wq)
-+ destroy_workqueue(bch_flush_wq);
- bch_btree_exit();
-
- if (bcache_major)
-@@ -2884,6 +2887,19 @@ static int __init bcache_init(void)
- if (!bcache_wq)
- goto err;
-
-+	/*
-+	 * Let's not make this `WQ_MEM_RECLAIM` for the following reasons:
-+	 *
-+	 * 1. It used `system_wq` before, which also does no memory reclaim.
-+	 * 2. With `WQ_MEM_RECLAIM`, desktop stalls, increased boot times,
-+	 *    and reduced throughput can be observed.
-+	 *
-+	 * We still want to use our own queue so as not to congest `system_wq`.
-+	 */
-+ bch_flush_wq = alloc_workqueue("bch_flush", 0, 0);
-+ if (!bch_flush_wq)
-+ goto err;
-+
- bch_journal_wq = alloc_workqueue("bch_journal", WQ_MEM_RECLAIM, 0);
- if (!bch_journal_wq)
- goto err;
---
-2.26.2
-
diff --git a/for-next/0006-bcache-Avoid-comma-separated-statements.patch b/for-next/0006-bcache-Avoid-comma-separated-statements.patch
deleted file mode 100644
index d6b4322..0000000
--- a/for-next/0006-bcache-Avoid-comma-separated-statements.patch
+++ /dev/null
@@ -1,64 +0,0 @@
-From 4ed5a5a2b21e41ebc478b2becb8e1b1798b6d88c Mon Sep 17 00:00:00 2001
-From: Joe Perches <joe@perches.com>
-Date: Mon, 24 Aug 2020 21:56:10 -0700
-Subject: [PATCH 6/6] bcache: Avoid comma separated statements
-
-Use semicolons and braces.
-
-Signed-off-by: Joe Perches <joe@perches.com>
-Signed-off-by: Coly Li <colyli@suse.de>
----
- drivers/md/bcache/bset.c | 12 ++++++++----
- drivers/md/bcache/sysfs.c | 6 ++++--
- 2 files changed, 12 insertions(+), 6 deletions(-)
-
-diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c
-index 67a2c47f4201..94d38e8a59b3 100644
---- a/drivers/md/bcache/bset.c
-+++ b/drivers/md/bcache/bset.c
-@@ -712,8 +712,10 @@ void bch_bset_build_written_tree(struct btree_keys *b)
- for (j = inorder_next(0, t->size);
- j;
- j = inorder_next(j, t->size)) {
-- while (bkey_to_cacheline(t, k) < cacheline)
-- prev = k, k = bkey_next(k);
-+ while (bkey_to_cacheline(t, k) < cacheline) {
-+ prev = k;
-+ k = bkey_next(k);
-+ }
-
- t->prev[j] = bkey_u64s(prev);
- t->tree[j].m = bkey_to_cacheline_offset(t, cacheline++, k);
-@@ -901,8 +903,10 @@ unsigned int bch_btree_insert_key(struct btree_keys *b, struct bkey *k,
- status = BTREE_INSERT_STATUS_INSERT;
-
- while (m != bset_bkey_last(i) &&
-- bkey_cmp(k, b->ops->is_extents ? &START_KEY(m) : m) > 0)
-- prev = m, m = bkey_next(m);
-+ bkey_cmp(k, b->ops->is_extents ? &START_KEY(m) : m) > 0) {
-+ prev = m;
-+ m = bkey_next(m);
-+ }
-
- /* prev is in the tree, if we merge we're done */
- status = BTREE_INSERT_STATUS_BACK_MERGE;
-diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
-index eef15f8022ba..cc89f3156d1a 100644
---- a/drivers/md/bcache/sysfs.c
-+++ b/drivers/md/bcache/sysfs.c
-@@ -1094,8 +1094,10 @@ SHOW(__bch_cache)
- --n;
-
- while (cached < p + n &&
-- *cached == BTREE_PRIO)
-- cached++, n--;
-+ *cached == BTREE_PRIO) {
-+ cached++;
-+ n--;
-+ }
-
- for (i = 0; i < n; i++)
- sum += INITIAL_PRIO - cached[i];
---
-2.26.2
-