Commit 447effd3 authored by Mike Kravetz, committed by Greg Kroah-Hartman

hugetlbfs: check for pgoff value overflow

commit 63489f8e821144000e0bdca7e65a8d1cc23a7ee7 upstream.

A vma with vm_pgoff large enough to overflow a loff_t type when
converted to a byte offset can be passed via the remap_file_pages system
call.  The hugetlbfs mmap routine uses the byte offset to calculate
reservations and file size.

A sequence such as:

  mmap(0x20a00000, 0x600000, 0, 0x66033, -1, 0);
  remap_file_pages(0x20a00000, 0x600000, 0, 0x20000000000000, 0);

will result in the following when the task exits or the file is closed:

  kernel BUG at mm/hugetlb.c:749!
  Call Trace:

The overflowed pgoff value causes hugetlbfs to try to set up a mapping
with a negative range (end < start), which leaves invalid state behind
and triggers the BUG.

The previous overflow fix to this code was incomplete and did not take
the remap_file_pages system call into account.

[ v3]
[ include mmdebug.h]
[ fix -ve left shift count on sh]
Fixes: 045c7a3f ("hugetlbfs: fix offset overflow in hugetlbfs mmap")
Signed-off-by: Mike Kravetz <>
Reported-by: Nic Losby <>
Acked-by: Michal Hocko <>
Cc: "Kirill A . Shutemov" <>
Cc: Yisheng Xie <>
Cc: <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
Signed-off-by: Ben Hutchings <>
Signed-off-by: Greg Kroah-Hartman <>
parent 3d101f33
@@ -118,6 +118,16 @@ static void huge_pagevec_release(struct pagevec *pvec)
+/*
+ * Mask used when checking the page offset value passed in via system
+ * calls. This value will be converted to a loff_t which is signed.
+ * Therefore, we want to check the upper PAGE_SHIFT + 1 bits of the
+ * value. The extra bit (- 1 in the shift value) is to take the sign
+ * bit into account.
+ */
+#define PGOFF_LOFFT_MAX \
+	(((1UL << (PAGE_SHIFT + 1)) - 1) << (BITS_PER_LONG - (PAGE_SHIFT + 1)))
+
 static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct inode *inode = file_inode(file);
@@ -137,12 +147,13 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
 	vma->vm_ops = &hugetlb_vm_ops;
 
 	/*
-	 * Offset passed to mmap (before page shift) could have been
-	 * negative when represented as a (l)off_t.
+	 * page based offset in vm_pgoff could be sufficiently large to
+	 * overflow a (l)off_t when converted to byte offset.
 	 */
-	if (((loff_t)vma->vm_pgoff << PAGE_SHIFT) < 0)
+	if (vma->vm_pgoff & PGOFF_LOFFT_MAX)
 		return -EINVAL;
 
 	/* must be huge page aligned */
 	if (vma->vm_pgoff & (~huge_page_mask(h) >> PAGE_SHIFT))
 		return -EINVAL;
@@ -4170,6 +4170,12 @@ int hugetlb_reserve_pages(struct inode *inode,
 	struct resv_map *resv_map;
 	long gbl_reserve;
 
+	/* This should never happen */
+	if (from > to) {
+		VM_WARN(1, "%s called with a negative range\n", __func__);
+		return -EINVAL;
+	}
+
 	/*
 	 * Only apply hugepage reservation if asked.  At fault time, an
 	 * attempt will be made for VM_NORESERVE to allocate a page