Commit eb03aa00 authored by Gerald Schaefer, committed by Linus Torvalds

mm/hugetlb: improve locking in dissolve_free_huge_pages()

For every pfn aligned to minimum_order, dissolve_free_huge_pages() will
call dissolve_free_huge_page(), which takes the hugetlb spinlock, even if
the page is not a hugepage at all, or is a hugepage that is in use.

Improve this by doing the PageHuge() and page_count() checks in
dissolve_free_huge_pages() before calling dissolve_free_huge_page().  In
dissolve_free_huge_page(), those checks must then be revalidated while
holding the spinlock, since the page state may have changed in between.
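The two-step pattern described above (a cheap unlocked pre-check, then
revalidation of the same condition under the lock before acting on it) can
be sketched in plain C. This is an illustrative toy, not kernel code: the
struct page fields and the pthread mutex below merely stand in for
PageHuge(), page_count() and the hugetlb spinlock:

```c
#include <stdbool.h>
#include <pthread.h>

/* Toy stand-ins for kernel state; all names here are hypothetical. */
struct page {
	bool huge;	/* stands in for PageHuge() */
	int count;	/* stands in for page_count() */
};

static pthread_mutex_t hugetlb_lock = PTHREAD_MUTEX_INITIALIZER;

/* Dissolve one free hugepage.  The caller already checked the condition
 * without the lock; it must be re-tested here because the page may have
 * been allocated (or freed) between the unlocked check and lock_acquire. */
static void dissolve_free_huge_page(struct page *page)
{
	pthread_mutex_lock(&hugetlb_lock);
	if (page->huge && page->count == 0)
		page->huge = false;	/* "dissolve" back to normal pages */
	pthread_mutex_unlock(&hugetlb_lock);
}

static void dissolve_free_huge_pages(struct page *pages, int n)
{
	for (int i = 0; i < n; i++) {
		/* Cheap unlocked pre-check: pages that are not free
		 * hugepages never touch the lock at all. */
		if (pages[i].huge && pages[i].count == 0)
			dissolve_free_huge_page(&pages[i]);
	}
}
```

The pre-check is purely an optimization: it may race and see stale state,
which is harmless precisely because the locked revalidation is the only
check that is acted upon.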

Signed-off-by: Gerald Schaefer <>
Acked-by: Michal Hocko <>
Acked-by: Naoya Horiguchi <>
Cc: "Kirill A . Shutemov" <>
Cc: Vlastimil Babka <>
Cc: Mike Kravetz <>
Cc: "Aneesh Kumar K . V" <>
Cc: Martin Schwidefsky <>
Cc: Heiko Carstens <>
Cc: Rui Teng <>
Cc: Dave Hansen <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 082d5b6b
@@ -1476,14 +1476,20 @@ out:
 int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
 	unsigned long pfn;
+	struct page *page;
 	int rc = 0;
 
 	if (!hugepages_supported())
 		return rc;
 
-	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
-		if (rc = dissolve_free_huge_page(pfn_to_page(pfn)))
-			return rc;
+	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
+		page = pfn_to_page(pfn);
+		if (PageHuge(page) && !page_count(page)) {
+			rc = dissolve_free_huge_page(page);
+			if (rc)
+				return rc;
+		}
+	}
 
 	return rc;
 }