author     Nick Piggin <nickpiggin@yahoo.com.au>     2004-10-27 18:23:26 -0700
committer  Linus Torvalds <torvalds@ppc970.osdl.org> 2004-10-27 18:23:26 -0700
commit     dc05d09df6bae6f001833b66733f7867f820d5d9 (patch)
tree       772b82077423a0535125fdfcaa0d8ccc2690a610 /mm
parent     e9a9fc6a21d72ebe5e0b06047d23128aa791899e (diff)
download   history-dc05d09df6bae6f001833b66733f7867f820d5d9.tar.gz
[PATCH] vmscan: pages_scanned fix
kswapd is still sometimes going into loops. The problem seemed to be happening on systems with zero inactive pages in ZONE_DMA, so pages_scanned could never be increased, all_unreclaimable would never be set, and kswapd would never break out of the loop.

So change pages_scanned to count the number of _active_ list pages scanned rather than inactive ones. This has been reported to solve the problem.

This is not subject to the reverse problem, where one might have zero active list pages, because inactive pages are either reclaimed or put onto the active list. I think it is reasonable to have all_unreclaimable trigger based on the amount of active list scanning rather than inactive.

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
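For illustration, here is a minimal user-space sketch of the failure mode the message describes. It is hypothetical code, not the kernel's: struct zone_sim, the give-up threshold, and the per-pass scan batch of 32 are made-up illustration values. It only shows why counting inactive-list scans leaves pages_scanned stuck at zero when a zone has no inactive pages, while counting active-list scans lets all_unreclaimable eventually trip.

/*
 * Hypothetical stand-alone sketch (not kernel code) of the failure mode
 * described above.  A zone with active pages but an empty inactive list
 * never accumulates pages_scanned under the old accounting, so
 * all_unreclaimable is never set; counting active-list scans fixes it.
 * The threshold and batch size are made-up illustration values.
 */
#include <stdio.h>

struct zone_sim {
	unsigned long nr_active;
	unsigned long nr_inactive;
	unsigned long pages_scanned;
	int all_unreclaimable;
};

static void check_unreclaimable(struct zone_sim *z)
{
	/* give up once we have scanned several times the zone's LRU size */
	if (z->pages_scanned > (z->nr_active + z->nr_inactive + 1) * 4)
		z->all_unreclaimable = 1;
}

/* old accounting: only inactive-list scanning bumps pages_scanned */
static void scan_old(struct zone_sim *z)
{
	z->pages_scanned += (z->nr_inactive < 32) ? z->nr_inactive : 32;
	check_unreclaimable(z);
}

/* new accounting: active-list scanning is what gets counted */
static void scan_new(struct zone_sim *z)
{
	z->pages_scanned += (z->nr_active < 32) ? z->nr_active : 32;
	check_unreclaimable(z);
}

int main(void)
{
	struct zone_sim a = { .nr_active = 16, .nr_inactive = 0 };
	struct zone_sim b = a;

	for (int i = 0; i < 100; i++) {
		scan_old(&a);	/* pages_scanned stays 0: would loop forever */
		scan_new(&b);	/* pages_scanned grows: all_unreclaimable trips */
	}
	printf("old: all_unreclaimable=%d, new: all_unreclaimable=%d\n",
	       a.all_unreclaimable, b.all_unreclaimable);
	return 0;
}

Compiled as an ordinary C program, the zone using the old accounting never trips the check, while the one counting active-list scans does after a few passes.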
Diffstat (limited to 'mm')
-rw-r--r--  mm/vmscan.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1f0d7fb0396ebf..44dadf2d3778bb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -574,7 +574,6 @@ static void shrink_cache(struct zone *zone, struct scan_control *sc)
 			nr_taken++;
 		}
 		zone->nr_inactive -= nr_taken;
-		zone->pages_scanned += nr_taken;
 		spin_unlock_irq(&zone->lru_lock);
 
 		if (nr_taken == 0)
@@ -675,6 +674,7 @@ refill_inactive_zone(struct zone *zone, struct scan_control *sc)
 		}
 		pgscanned++;
 	}
+	zone->pages_scanned += pgscanned;
 	zone->nr_active -= pgmoved;
 	spin_unlock_irq(&zone->lru_lock);