The current refill logic in refill_inactive_zone() takes an arbitrarily large
number of pages and chops it down to SWAP_CLUSTER_MAX*4, regardless of the
size of the zone.  This has the effect of reducing the amount of refilling of
large zones proportionately much more than of small zones.

We made this change in May 2003 and I'm damned if I remember why.  Let's put
it back the way it was, so we don't truncate the refill count, and see what
happens.

---

 mm/vmscan.c |    9 +--------
 1 files changed, 1 insertion(+), 8 deletions(-)

diff -puN mm/vmscan.c~vm-balance-refill-rate mm/vmscan.c
--- 25/mm/vmscan.c~vm-balance-refill-rate	2004-03-02 00:45:38.000000000 -0800
+++ 25-akpm/mm/vmscan.c	2004-03-02 01:07:19.000000000 -0800
@@ -758,17 +758,10 @@ shrink_zone(struct zone *zone, int max_s
 	 */
 	ratio = (unsigned long)SWAP_CLUSTER_MAX * zone->nr_active /
 				((zone->nr_inactive | 1) * 2);
+	atomic_add(ratio+1, &zone->nr_scan_active);
 	count = atomic_read(&zone->nr_scan_active);
 	if (count >= SWAP_CLUSTER_MAX) {
-		/*
-		 * Don't try to bring down too many pages in one attempt.
-		 * If this fails, the caller will increase `priority' and
-		 * we'll try again, with an increased chance of reclaiming
-		 * mapped memory.
-		 */
-		if (count > SWAP_CLUSTER_MAX * 4)
-			count = SWAP_CLUSTER_MAX * 4;
		atomic_set(&zone->nr_scan_active, 0);
 		refill_inactive_zone(zone, count, ps);
 	}
_
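
For illustration, here is a minimal userspace sketch of the arithmetic above.
It is not kernel code: SWAP_CLUSTER_MAX is assumed to be 32, scan_once() is a
hypothetical stand-in for the shrink_zone() fragment in the diff, and the zone
sizes are made up.  It shows that with the truncation in place, a large zone
and a small zone are both refilled by at most 128 pages per pass, which is
proportionately a far smaller fraction of the large zone's computed target.

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL

/*
 * Hypothetical stand-in for the shrink_zone() fragment above: one pass
 * over a zone, returning the number of pages that would be handed to
 * refill_inactive_zone().  `clamp' selects the old behaviour (truncate
 * the count to SWAP_CLUSTER_MAX * 4) or the new, untruncated one.
 */
static unsigned long scan_once(unsigned long nr_active,
			       unsigned long nr_inactive,
			       unsigned long *nr_scan_active, int clamp)
{
	unsigned long ratio, count;

	ratio = SWAP_CLUSTER_MAX * nr_active / ((nr_inactive | 1) * 2);
	*nr_scan_active += ratio + 1;
	count = *nr_scan_active;
	if (count < SWAP_CLUSTER_MAX)
		return 0;		/* not enough accumulated yet */
	if (clamp && count > SWAP_CLUSTER_MAX * 4)
		count = SWAP_CLUSTER_MAX * 4;	/* the removed truncation */
	*nr_scan_active = 0;
	return count;
}

int main(void)
{
	unsigned long scan;

	/* Both zones have a nearly empty (10-page) inactive list. */
	scan = 0;
	printf("small zone (1000 active):   clamped=%lu",
	       scan_once(1000, 10, &scan, 1));
	scan = 0;
	printf(" unclamped=%lu\n", scan_once(1000, 10, &scan, 0));
	scan = 0;
	printf("large zone (100000 active): clamped=%lu",
	       scan_once(100000, 10, &scan, 1));
	scan = 0;
	printf(" unclamped=%lu\n", scan_once(100000, 10, &scan, 0));
	return 0;
}

With the clamp, both zones get 128 pages per pass: about 9% of the small
zone's computed target (1455) but under 0.1% of the large zone's (145455).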