BACKPORT: FROMGIT: mm: skip CMA pages when they are not available

This patch avoids unproductive reclaim of CMA pages by skipping them when they are not available to the current allocation context. It arises from the OOM issue below, which was caused by a large proportion of MIGRATE_CMA pages among the free pages.

[   36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0
[   36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB
[   36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB
...
[   36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
[   36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0
[   36.234459] [03-19 10:05:52.234] node 0: slabs: 53,objs: 3392, free: 0

Bug: 286444744
Link: https://lkml.kernel.org/r/1685501461-19290-1-git-send-email-zhaoyang.huang@unisoc.com
[zhaoyang.huang: modifications for backporting]
(cherry picked from commit 132aeb51c5c5745776368bd065ba8749538c67fa
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm mm-unstable)
Change-Id: Iba53e7117fc429e894635ed0d33a1fd3aaf5f470
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Author: zhaoyang.huang
Date: 2023-06-05 14:45:22 +08:00
Committed-by: Treehugger Robot
Parent: 087877d515
Commit: 0a52bf2972

@@ -1943,6 +1943,25 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
 }
+#ifdef CONFIG_CMA
+/*
+ * It is a waste of effort to scan and reclaim CMA pages if they are not
+ * available for the current allocation context. Kswapd cannot be enrolled
+ * here, as it cannot distinguish this scenario: it reclaims with
+ * sc->gfp_mask set to GFP_KERNEL.
+ */
+static bool skip_cma(struct page *page, struct scan_control *sc)
+{
+	return !current_is_kswapd() &&
+		gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
+		get_pageblock_migratetype(page) == MIGRATE_CMA;
+}
+#else
+static bool skip_cma(struct page *page, struct scan_control *sc)
+{
+	return false;
+}
+#endif
+
 /*
  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
  *
@@ -1989,7 +2008,8 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		nr_pages = compound_nr(page);
 		total_scan += nr_pages;
-		if (page_zonenum(page) > sc->reclaim_idx) {
+		if (page_zonenum(page) > sc->reclaim_idx ||
+		    skip_cma(page, sc)) {
 			nr_skipped[page_zonenum(page)] += nr_pages;
 			move_to = &pages_skipped;
 			goto move;