From: Martin Schwidefsky

Fix an endless loop in get_user_pages() on s390.  It happens only on
s/390 because pte_dirty() there always returns 0; on all other
architectures this check is merely an optimization.

In the "write && !pte_dirty(pte)" case, follow_page() returns NULL.  On
all architectures except s390, handle_pte_fault() will then create a pte
with pte_dirty(pte) == 1 because write_access == 1, so the second call
to follow_page() succeeds.  With the physical dirty bit patch,
pte_dirty() is always 0 on s/390 because the dirty bit does not live in
the pte, so every retry of follow_page() fails and get_user_pages()
loops forever.

---

 mm/memory.c |   21 +++++++++++++--------
 1 files changed, 13 insertions(+), 8 deletions(-)

diff -puN mm/memory.c~s390-16-follow_page-lockup-fix mm/memory.c
--- 25/mm/memory.c~s390-16-follow_page-lockup-fix	2004-01-12 01:44:24.000000000 -0800
+++ 25-akpm/mm/memory.c	2004-01-12 01:44:24.000000000 -0800
@@ -657,14 +657,19 @@ follow_page(struct mm_struct *mm, unsign
 	pte = *ptep;
 	pte_unmap(ptep);
 	if (pte_present(pte)) {
-		if (!write || (pte_write(pte) && pte_dirty(pte))) {
-			pfn = pte_pfn(pte);
-			if (pfn_valid(pfn)) {
-				struct page *page = pfn_to_page(pfn);
-
-				mark_page_accessed(page);
-				return page;
-			}
+		if (write && !pte_write(pte))
+			goto out;
+		if (write && !pte_dirty(pte)) {
+			struct page *page = pte_page(pte);
+			if (!PageDirty(page))
+				set_page_dirty(page);
+		}
+		pfn = pte_pfn(pte);
+		if (pfn_valid(pfn)) {
+			struct page *page = pfn_to_page(pfn);
+
+			mark_page_accessed(page);
+			return page;
 		}
 	}
_