A while ago we merged a patch which tried to solve a problem wherein a concurrent read() and invalidate_inode_pages() would cause the read() to return -EIO, because invalidate cleared PageUptodate() at the wrong time.

That patch tests for (page_count(page) != 2) in invalidate_complete_page() and bails out if the count is anything other than 2. Problem is, the page may still be sitting in the per-cpu LRU front-ends over in lru_cache_add(), which elevates the refcount pending spillage of the page onto the LRU for real. That causes a false positive in invalidate_complete_page(), so the page does not get invalidated, which in turn screws up the logic in my new O_DIRECT-vs-buffered coherency fix.

So let's solve the invalidate-vs-read race in a different manner. Over on the read() side, add an explicit check to see whether the page was invalidated. If so, just drop it on the floor and redo the read from scratch.

Note that only do_generic_mapping_read() needs treatment: filemap_nopage(), filemap_getpage() and read_cache_page() are already doing the oh-it-was-invalidated-so-try-again thing.
Signed-off-by: Andrew Morton
---

 25-akpm/mm/filemap.c  |   12 +++++++++++-
 25-akpm/mm/truncate.c |    4 ----
 2 files changed, 11 insertions(+), 5 deletions(-)

diff -puN mm/filemap.c~readpage-vs-invalidate-fix mm/filemap.c
--- 25/mm/filemap.c~readpage-vs-invalidate-fix	2004-12-03 20:57:10.146096360 -0800
+++ 25-akpm/mm/filemap.c	2004-12-03 20:57:10.152095448 -0800
@@ -822,11 +822,21 @@ readpage:
 			goto readpage_error;
 
 		if (!PageUptodate(page)) {
-			wait_on_page_locked(page);
+			lock_page(page);
 			if (!PageUptodate(page)) {
+				if (page->mapping == NULL) {
+					/*
+					 * invalidate_inode_pages got it
+					 */
+					unlock_page(page);
+					page_cache_release(page);
+					goto find_page;
+				}
+				unlock_page(page);
 				error = -EIO;
 				goto readpage_error;
 			}
+			unlock_page(page);
 		}
 
 		/*
diff -puN mm/truncate.c~readpage-vs-invalidate-fix mm/truncate.c
--- 25/mm/truncate.c~readpage-vs-invalidate-fix	2004-12-03 20:57:10.147096208 -0800
+++ 25-akpm/mm/truncate.c	2004-12-03 20:57:10.153095296 -0800
@@ -89,10 +89,6 @@ invalidate_complete_page(struct address_
 	}
 
 	BUG_ON(PagePrivate(page));
-	if (page_count(page) != 2) {
-		write_unlock_irq(&mapping->tree_lock);
-		return 0;
-	}
 	__remove_from_page_cache(page);
 	write_unlock_irq(&mapping->tree_lock);
 	ClearPageUptodate(page);
_
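For illustration, the decision the filemap.c hunk makes can be condensed into a small userspace sketch. Everything here (struct upage, upage_check(), upage_invalidate(), the RETRY/EIO_ERR values) is a hypothetical stand-in, not a kernel API; the lock_page()/unlock_page() pair that serializes the real check against invalidate_complete_page() is omitted because this sketch is single-threaded:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Userspace sketch of the retry logic added to do_generic_mapping_read().
 * All names are made up for illustration.  In the kernel, upage_check()
 * runs under the page lock, which is what lets it observe a stable
 * ->mapping pointer.
 */
struct upage {
	void *mapping;		/* NULL once the page has been invalidated */
	int uptodate;		/* PageUptodate() stand-in */
};

#define RETRY	 1		/* page was invalidated: redo the lookup */
#define EIO_ERR	-5		/* genuine I/O error, like -EIO */

/*
 * Mirrors the patched !PageUptodate() path: a not-uptodate page is only
 * an error if it still belongs to a mapping.  A NULL ->mapping means
 * invalidate_inode_pages got there first, so the caller should just
 * drop the page and redo the read instead of failing.
 */
static int upage_check(const struct upage *p)
{
	if (p->uptodate)
		return 0;
	if (p->mapping == NULL)
		return RETRY;	/* drop the page, goto find_page */
	return EIO_ERR;
}

/*
 * Mirrors invalidate_complete_page() detaching the page: it clears
 * both the mapping pointer and the uptodate flag.
 */
static void upage_invalidate(struct upage *p)
{
	p->mapping = NULL;
	p->uptodate = 0;
}
```

The point of the sketch is the ordering of the two tests: only after seeing !uptodate *and* a non-NULL mapping do we report a real I/O error; an invalidated page is never an error, just a reason to retry.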