Commit 584810d3 authored by Shakeel Butt, committed by Greg Kroah-Hartman

mm/vmscan.c: prevent useless kswapd loops

commit dffcac2cb88e4ec5906235d64a83d802580b119e upstream.

In production we have noticed hard lockups on large machines running
large jobs due to kswapd hoarding the LRU lock within isolate_lru_pages
when sc->reclaim_idx is 0, which is a small zone.  The LRU was a couple
hundred GiBs and the condition (page_zonenum(page) > sc->reclaim_idx) in
isolate_lru_pages() was basically skipping GiBs of pages while holding
the LRU spinlock with interrupts disabled.
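
For reference, the skip path in question looks roughly like the
following.  This is a simplified sketch of the isolate_lru_pages() scan
loop, not the exact code in this tree; locals, counters and statistics
are elided:

	/*
	 * Sketch: runs with the node's LRU spinlock held and IRQs
	 * disabled by the caller (shrink_inactive_list()).
	 */
	for (scan = 0; scan < nr_to_scan && !list_empty(src); ) {
		struct page *page = lru_to_page(src);

		/*
		 * Pages from zones above sc->reclaim_idx are not
		 * eligible; they are only moved aside and do not
		 * advance 'scan'.  With reclaim_idx == 0 on a huge
		 * node this branch is taken for almost every page, so
		 * the loop can walk 100s of GiBs of LRU pages without
		 * ever releasing the lock.
		 */
		if (page_zonenum(page) > sc->reclaim_idx) {
			list_move(&page->lru, &pages_skipped);
			continue;
		}

		/* ...the page is isolated here and 'scan' advances... */
		scan++;
	}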

On further inspection, it seems like there are two issues:

(1) If kswapd could not sleep on the return from balance_pgdat() (i.e.
    the node is still unbalanced), the classzone_idx is unintentionally
    set to 0 and the whole reclaim cycle of kswapd will try to reclaim
    only the lowest and smallest zone while traversing the whole memory.

(2) Fundamentally, isolate_lru_pages() is really bad when the
    allocation has woken kswapd for a smaller zone on a very large
    machine running very large jobs.  It can hoard the LRU spinlock
    while skipping over hundreds of GiBs of pages.

This patch only fixes (1).  (2) needs a more fundamental solution.  To
fix (1), in the kswapd context, if pgdat->kswapd_classzone_idx is
invalid, use the classzone_idx of the previous kswapd loop; otherwise,
use the one the waker has requested.
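
As a concrete illustration, consider this hypothetical sequence (zone
index 2 is just an example value):

	/* waker asks kswapd to reclaim up to zone index 2 */
	wakeup_kswapd(zone, order, 2);

	/* kswapd loop, iteration 1 */
	classzone_idx = kswapd_classzone_idx(pgdat, 0);	/* -> 2 */
	pgdat->kswapd_classzone_idx = MAX_NR_ZONES;	/* consumed */
	balance_pgdat(pgdat, order, 2);			/* still unbalanced */

	/* kswapd cannot sleep and loops again; nothing woke it, so
	 * pgdat->kswapd_classzone_idx is still MAX_NR_ZONES */
	classzone_idx = kswapd_classzone_idx(pgdat, 0);	/* old code: -> 0 */

The second iteration then reclaims only for zone index 0.  With this
patch, the previous loop's classzone_idx is passed instead of the
hardcoded 0, so the retry keeps targeting index 2.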

Fixes: e716f2eb ("mm, vmscan: prevent kswapd sleeping prematurely due to mismatched classzone_idx")
Signed-off-by: Shakeel Butt <>
Reviewed-by: Yang Shi <>
Acked-by: Mel Gorman <>
Cc: Johannes Weiner <>
Cc: Michal Hocko <>
Cc: Vlastimil Babka <>
Cc: Hillf Danton <>
Cc: Roman Gushchin <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
Signed-off-by: Greg Kroah-Hartman <>
parent 0c0b5477
@@ -3439,19 +3439,18 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 }
 
 /*
- * pgdat->kswapd_classzone_idx is the highest zone index that a recent
- * allocation request woke kswapd for. When kswapd has not woken recently,
- * the value is MAX_NR_ZONES which is not a valid index. This compares a
- * given classzone and returns it or the highest classzone index kswapd
- * was recently woke for.
+ * The pgdat->kswapd_classzone_idx is used to pass the highest zone index to be
+ * reclaimed by kswapd from the waker. If the value is MAX_NR_ZONES which is not
+ * a valid index then either kswapd runs for first time or kswapd couldn't sleep
+ * after previous reclaim attempt (node is still unbalanced). In that case
+ * return the zone index of the previous kswapd reclaim cycle.
  */
 static enum zone_type kswapd_classzone_idx(pg_data_t *pgdat,
-					   enum zone_type classzone_idx)
+					   enum zone_type prev_classzone_idx)
 {
 	if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
-		return classzone_idx;
-
-	return max(pgdat->kswapd_classzone_idx, classzone_idx);
+		return prev_classzone_idx;
+	return pgdat->kswapd_classzone_idx;
 }
 
 static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_order,
@@ -3592,7 +3591,7 @@ static int kswapd(void *p)
 
 		/* Read the new order and classzone_idx */
 		alloc_order = reclaim_order = pgdat->kswapd_order;
-		classzone_idx = kswapd_classzone_idx(pgdat, 0);
+		classzone_idx = kswapd_classzone_idx(pgdat, classzone_idx);
 		pgdat->kswapd_order = 0;
 		pgdat->kswapd_classzone_idx = MAX_NR_ZONES;
 
@@ -3643,8 +3642,12 @@ void wakeup_kswapd(struct zone *zone, int order, enum zone_type classzone_idx)
 	if (!cpuset_zone_allowed(zone, GFP_KERNEL | __GFP_HARDWALL))
 		return;
 	pgdat = zone->zone_pgdat;
-	pgdat->kswapd_classzone_idx = kswapd_classzone_idx(pgdat,
-							   classzone_idx);
+
+	if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
+		pgdat->kswapd_classzone_idx = classzone_idx;
+	else
+		pgdat->kswapd_classzone_idx = max(pgdat->kswapd_classzone_idx,
+						  classzone_idx);
 	pgdat->kswapd_order = max(pgdat->kswapd_order, order);
 	if (!waitqueue_active(&pgdat->kswapd_wait))
 		return;
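
Net effect of the diff: the max() merge of waker requests now happens in
wakeup_kswapd() itself, and classzone_idx in the kswapd() loop carries
over between iterations, so an iteration where kswapd could not sleep
retries with the zone index it was originally woken for rather than 0.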