From: Nick Piggin

When refill_inactive_zone() is running with !reclaim_mapped, it clears a
mapped page's referenced bits and then puts the page back on the head of
the active list.  Referenced and non-referenced mapped pages are treated
the same, so the "referenced" information is lost.

This patch causes the referenced bits to not be cleared during
!reclaim_mapped.  It improves heavy-swapping performance significantly.

---

 mm/vmscan.c |   14 ++++++++++----
 1 files changed, 10 insertions(+), 4 deletions(-)

diff -puN mm/vmscan.c~vm-lru-info mm/vmscan.c
--- 25/mm/vmscan.c~vm-lru-info	2004-02-04 02:34:22.000000000 -0800
+++ 25-akpm/mm/vmscan.c	2004-02-04 02:34:22.000000000 -0800
@@ -709,6 +709,16 @@ refill_inactive_zone(struct zone *zone,
 		page = lru_to_page(&l_hold);
 		list_del(&page->lru);
 		if (page_mapped(page)) {
+
+			/*
+			 * Don't clear page referenced if we're not going
+			 * to use it.
+			 */
+			if (!reclaim_mapped) {
+				list_add(&page->lru, &l_ignore);
+				continue;
+			}
+
 			/*
 			 * probably it would be useful to transfer dirty bit
 			 * from pte to the @page here.
@@ -720,10 +730,6 @@ refill_inactive_zone(struct zone *zone,
 			continue;
 		}
 		pte_chain_unlock(page);
-		if (!reclaim_mapped) {
-			list_add(&page->lru, &l_ignore);
-			continue;
-		}
 	}
 	/*
 	 * FIXME: need to consider page_count(page) here if/when we _