1. 05 Dec, 2018 2 commits
    • xtensa: fix coprocessor context offset definitions · f4038875
      Max Filippov authored
      commit 03bc996af0cc71c7f30c384d8ce7260172423b34 upstream.
      
      Coprocessor context offsets are used by the assembly code that moves
      coprocessor context between the individual fields of the
      thread_info::xtregs_cp structure and coprocessor registers.
      This fixes clobbering of the coprocessor context when it is flushed or
      reloaded, during both normal user code execution and user process
      debugging, on cores configured with more than one coprocessor.
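
      Illustration (not from the patch): the sketch below derives one offset
      per coprocessor from a made-up structure with offsetof(), in the spirit
      of the kernel's asm-offsets mechanism. The field names and sizes are
      hypothetical, not the real xtensa thread_info layout.

        /* Hypothetical layout: one save area per coprocessor, with one offset
         * constant per field, as asm-offsets-style generation would produce. */
        #include <stdio.h>
        #include <stddef.h>

        struct xtregs_cp {                    /* made-up sizes */
                unsigned char cp0[16];
                unsigned char cp1[16];
        };

        struct thread_info {                  /* reduced stand-in */
                unsigned long flags;
                struct xtregs_cp xtregs_cp;
        };

        int main(void)
        {
                /* Each coprocessor needs the offset of its own field; reusing
                 * one offset for all of them would make the save areas overlap. */
                printf("THREAD_XTREGS_CP0 = %zu\n",
                       offsetof(struct thread_info, xtregs_cp.cp0));
                printf("THREAD_XTREGS_CP1 = %zu\n",
                       offsetof(struct thread_info, xtregs_cp.cp1));
                return 0;
        }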
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xtensa: enable coprocessors that are being flushed · c26e3c6c
      Max Filippov authored
      commit 2958b66694e018c552be0b60521fec27e8d12988 upstream.
      
      coprocessor_flush_all may be called from the context of a thread that is
      different from the thread being flushed. In that case the contents of
      the cpenable special register may not match ti->cpenable of the target
      thread, resulting in an unhandled coprocessor exception in the kernel
      context.
      
      Set the cpenable special register to the ti->cpenable of the target
      thread for the duration of the flush and restore it afterwards.
      This fixes the following crash caused by coprocessor register inspection
      in native gdb:
      
        (gdb) p/x $w0
        Illegal instruction in kernel: sig: 9 [#1] PREEMPT
        Call Trace:
          ___might_sleep+0x184/0x1a4
          __might_sleep+0x41/0xac
          exit_signals+0x14/0x218
          do_exit+0xc9/0x8b8
          die+0x99/0xa0
          do_illegal_instruction+0x18/0x6c
          common_exception+0x77/0x77
          coprocessor_flush+0x16/0x3c
          arch_ptrace+0x46c/0x674
          sys_ptrace+0x2ce/0x3b4
          system_call+0x54/0x80
          common_exception+0x77/0x77
        note: gdb[100] exited with preempt_count 1
        Killed
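
      Illustration (not from the patch): a compilable model of the
      save/override/restore pattern described above. read_cpenable() and
      write_cpenable() are hypothetical stand-ins for accessing the cpenable
      special register, and the flush itself is only simulated.

        /* Hypothetical model: temporarily make cpenable match the target
         * thread's ti->cpenable while its coprocessor state is flushed, then
         * restore the previous value. A plain variable stands in for the
         * special register so this compiles and runs anywhere. */
        #include <stdio.h>

        static unsigned int cpenable_reg;              /* models the special register */

        static unsigned int read_cpenable(void)    { return cpenable_reg; }
        static void write_cpenable(unsigned int v) { cpenable_reg = v; }

        struct thread_info { unsigned int cpenable; }; /* reduced stand-in */

        static void coprocessor_flush(int cp)
        {
                /* Would raise a coprocessor exception if cp were not enabled. */
                printf("flushing cp%d (cpenable=%#x)\n", cp, read_cpenable());
        }

        static void coprocessor_flush_all(struct thread_info *ti)
        {
                unsigned int old_cpenable = read_cpenable();

                write_cpenable(ti->cpenable);          /* enable the target's coprocessors */
                for (int cp = 0; cp < 8; cp++)
                        if (ti->cpenable & (1u << cp))
                                coprocessor_flush(cp);
                write_cpenable(old_cpenable);          /* restore the caller's view */
        }

        int main(void)
        {
                struct thread_info target = { .cpenable = 0x5 };

                coprocessor_flush_all(&target);
                return 0;
        }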
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 21 Nov, 2018 3 commits
    • xtensa: fix boot parameters address translation · 0dac0281
      Max Filippov authored
      commit 40dc948f234b73497c3278875eb08a01d5854d3f upstream.
      
      The bootloader may pass the physical address of the boot parameters
      structure to the MMUv3 kernel in register a2. Code in the _SetupMMU
      block in arch/xtensa/kernel/head.S is supposed to map that physical
      address to the corresponding virtual address in the configured virtual
      memory layout.
      
      This code wasn't updated when the additional 256+256 and 512+512 memory
      layouts were introduced, so it may produce wrong addresses when used
      with these layouts.
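
      Illustration (not from the patch): the translation _SetupMMU has to
      perform amounts to rebasing the physical pointer onto a layout-dependent
      virtual base. Both constants below are hypothetical placeholders, not
      the real xtensa memory-layout values.

        /* Hypothetical sketch of a physical-to-virtual translation whose
         * offset depends on the selected memory layout. PHYS_RAM_START and
         * KSEG_CACHED_VADDR are placeholders, not real xtensa values. */
        #include <stdio.h>
        #include <stdint.h>

        #define PHYS_RAM_START    0x00000000u  /* placeholder */
        #define KSEG_CACHED_VADDR 0xd0000000u  /* placeholder; differs per layout */

        static uint32_t bootparams_phys_to_virt(uint32_t phys)
        {
                return phys - PHYS_RAM_START + KSEG_CACHED_VADDR;
        }

        int main(void)
        {
                uint32_t a2 = 0x00001000u;     /* pretend bootloader-passed address */

                printf("phys %#x -> virt %#x\n", a2, bootparams_phys_to_virt(a2));
                return 0;
        }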
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xtensa: make sure bFLT stack is 16 byte aligned · a10aa331
      Max Filippov authored
      commit 0773495b1f5f1c5e23551843f87b5ff37e7af8f7 upstream.
      
      The Xtensa ABI requires stack alignment to be at least 16 bytes. In the
      noMMU configuration ARCH_SLAB_MINALIGN is used to align the stack, so
      make it at least 16.
      
      This fixes the following runtime error in the noMMU configuration,
      caused by the interaction between an insufficiently aligned stack and
      the alloca function, which corrupts an on-stack variable in the libc
      function glob:
      
       Caught unhandled exception in 'sh' (pid = 47, pc = 0x02d05d65)
        - should not happen
        EXCCAUSE is 15
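
      Illustration (not from the patch): rounding a stack address down to the
      16-byte boundary the ABI expects, which is what raising
      ARCH_SLAB_MINALIGN to 16 guarantees for the bFLT stack here. The helper
      name is made up.

        /* Illustrative helper: round a stack address down to a 16-byte boundary. */
        #include <stdio.h>
        #include <stdint.h>

        #define STACK_ALIGN 16UL

        static uintptr_t align_stack_down(uintptr_t sp)
        {
                return sp & ~(STACK_ALIGN - 1);          /* clear the low 4 bits */
        }

        int main(void)
        {
                uintptr_t sp = 0x3ffeabcdUL;             /* arbitrary unaligned address */

                printf("%#lx -> %#lx\n", (unsigned long)sp,
                       (unsigned long)align_stack_down(sp));
                return 0;
        }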
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xtensa: add NOTES section to the linker script · c9226141
      Max Filippov authored
      commit 4119ba211bc4f1bf638f41e50b7a0f329f58aa16 upstream.
      
      This section collects all source .note.* sections together in the
      vmlinux image. Without it the .note.Linux section may be placed at
      address 0 while the rest of the kernel is at its normal address,
      resulting in a huge vmlinux.bin image that may fail to link into the
      xtensa Image.elf.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  3. 09 Sep, 2018 2 commits
    • xtensa: increase ranges in ___invalidate_{i,d}cache_all · 7f2163b5
      Max Filippov authored
      commit fec3259c9f747c039f90e99570540114c8d81a14 upstream.
      
      The cache invalidation macros use the cache line size to iterate over
      the invalidated cache lines, assuming that all cache ways are
      invalidated by a single instruction, but the xtensa ISA recommends not
      assuming that, for future compatibility:
        In some implementations all ways at index Addry-1..z are invalidated
        regardless of the specified way, but for future compatibility this
        behavior should not be assumed.
      
      Iterate over all cache ways in ___invalidate_icache_all and
      ___invalidate_dcache_all.
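
      Illustration (not from the patch): a compilable model of iterating an
      index-based invalidate over every way instead of relying on one
      instruction per line index covering all ways. The cache geometry and
      the invalidate_by_index() stub are made up.

        /* Made-up geometry; invalidate_by_index() stands in for the cache
         * index-invalidate operations used by ___invalidate_{i,d}cache_all. */
        #include <stdio.h>

        #define CACHE_LINE_SIZE 32u
        #define CACHE_WAY_SIZE  256u          /* bytes covered by one way */
        #define CACHE_WAYS      2u

        static void invalidate_by_index(unsigned int index)
        {
                printf("invalidate index %#x\n", index);
        }

        int main(void)
        {
                /* Walk every line of every way instead of assuming that a single
                 * instruction per line index invalidates all ways at once. */
                for (unsigned int way = 0; way < CACHE_WAYS; way++)
                        for (unsigned int off = 0; off < CACHE_WAY_SIZE; off += CACHE_LINE_SIZE)
                                invalidate_by_index(way * CACHE_WAY_SIZE + off);
                return 0;
        }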
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xtensa: limit offsets in __loop_cache_{all,page} · e996a24d
      Max Filippov authored
      commit be75de25251f7cf3e399ca1f584716a95510d24a upstream.
      
      When building the kernel for xtensa cores with big cache lines (e.g. 128
      bytes or more), __loop_cache_all and __loop_cache_page may generate
      assembly instructions with immediate fields that are too big. This
      results in the following build errors:
      
        arch/xtensa/mm/misc.S: Assembler messages:
        arch/xtensa/mm/misc.S:464: Error: operand 2 of 'diwbi' has invalid value '256'
        arch/xtensa/mm/misc.S:464: Error: operand 2 of 'diwbi' has invalid value '384'
        arch/xtensa/kernel/head.S: Assembler messages:
        arch/xtensa/kernel/head.S:172: Error: operand 2 of 'diu' has invalid value '256'
        arch/xtensa/kernel/head.S:172: Error: operand 2 of 'diu' has invalid value '384'
        arch/xtensa/kernel/head.S:176: Error: operand 2 of 'iiu' has invalid value '256'
        arch/xtensa/kernel/head.S:176: Error: operand 2 of 'iiu' has invalid value '384'
        arch/xtensa/kernel/head.S:255: Error: operand 2 of 'diwb' has invalid value '256'
        arch/xtensa/kernel/head.S:255: Error: operand 2 of 'diwb' has invalid value '384'
      
      Add a max_immed parameter to these macros and use it to limit the values
      of immediate operands. Extract the common code of these macros into the
      new macro __loop_cache_unroll.
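
      Illustration (not from the patch): a C model of the resulting loop
      structure, where each iteration issues a short unrolled burst whose
      immediates never exceed max_immed and then advances the base address.
      The cache_op() stub and the numbers are made up.

        /* Model only: keep every cache-op immediate at or below max_immed by
         * unrolling a small burst per loop iteration, then stepping the base. */
        #include <stdio.h>

        static void cache_op(unsigned long base, unsigned int imm)
        {
                printf("op at base %#lx + imm %u\n", base, imm);
        }

        static void loop_cache_range(unsigned long start, unsigned long size,
                                     unsigned int line_size, unsigned int max_immed)
        {
                unsigned int burst = (max_immed / line_size + 1) * line_size;

                for (unsigned long base = start; base < start + size; base += burst)
                        for (unsigned int imm = 0;
                             imm <= max_immed && base + imm < start + size;
                             imm += line_size)
                                cache_op(base, imm);     /* immediate always <= max_immed */
        }

        int main(void)
        {
                /* 128-byte cache lines, immediates limited to 240 (made up). */
                loop_cache_range(0x1000, 1024, 128, 240);
                return 0;
        }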
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  4. 03 Jul, 2018 1 commit
  5. 19 May, 2018 1 commit
    • futex: Remove duplicated code and fix undefined behaviour · 81da9f87
      Jiri Slaby authored
      commit 30d6e0a4 upstream.
      
      There is code duplicated across all the architectures' headers for
      futex_atomic_op_inuser: namely op decoding, the access_ok check for
      uaddr, and the comparison of the result.
      
      Remove this duplication and leave to the arches only the needed
      assembly, which now lives in arch_futex_atomic_op_inuser.
      
      This effectively distributes Will Deacon's arm64 fix for the undefined
      behaviour reported by UBSAN to all architectures. That fix was done in
      commit 5f16a046 ("arm64: futex: Fix undefined behaviour with
      FUTEX_OP_OPARG_SHIFT usage"); look there for an example dump.
      
      As suggested by Thomas, also check for a negative oparg, because it was
      likewise reported to cause an undefined behaviour report.
      
      Note that s390 removed its access_ok check in d12a2970 ("s390/uaccess:
      remove pointless access_ok() checks") as access_ok there returns true.
      We reintroduce it in the helper for the sake of simplicity (it gets
      optimized away anyway).
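
      Illustration (not from the patch): a compilable sketch of the shared
      decode the generic helper now performs, including the check that oparg
      is neither negative nor oversized before computing 1 << oparg. The bit
      layout follows the classic FUTEX_OP() encoding; the access_ok() check
      and the arch-specific atomic are omitted, and returning -EINVAL here is
      illustrative rather than the exact upstream out-of-range policy.

        #include <stdio.h>
        #include <errno.h>
        #include <stdint.h>

        #define FUTEX_OP_OPARG_SHIFT 8   /* "use 1 << oparg as the operand" flag */

        static int32_t sign_extend12(uint32_t v)
        {
                return (int32_t)(v << 20) >> 20;         /* 12-bit field -> signed */
        }

        static int futex_decode(uint32_t encoded_op)
        {
                unsigned int op  = (encoded_op >> 28) & 0x7;
                unsigned int cmp = (encoded_op >> 24) & 0xf;
                int oparg  = sign_extend12((encoded_op >> 12) & 0xfff);
                int cmparg = sign_extend12(encoded_op & 0xfff);

                if (encoded_op & ((uint32_t)FUTEX_OP_OPARG_SHIFT << 28)) {
                        /* Shifting by a negative or >= 32 amount is undefined
                         * behaviour in C, so reject it before 1 << oparg. */
                        if (oparg < 0 || oparg > 31)
                                return -EINVAL;
                        oparg = 1 << oparg;
                }

                printf("op=%u cmp=%u oparg=%d cmparg=%d\n", op, cmp, oparg, cmparg);
                return 0;
        }

        int main(void)
        {
                printf("oparg 33 with shift: %d\n",
                       futex_decode((8u << 28) | (33u << 12)));  /* rejected */
                printf("plain oparg 1:       %d\n",
                       futex_decode((0u << 28) | (1u << 12)));   /* accepted */
                return 0;
        }
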
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> [s390]
      Acked-by: Chris Metcalf <cmetcalf@mellanox.com> [for tile]
      Reviewed-by: Darren Hart (VMware) <dvhart@infradead.org>
      Reviewed-by: Will Deacon <will.deacon@arm.com> [core/arm64]
      Cc: linux-mips@linux-mips.org
      Cc: Rich Felker <dalias@libc.org>
      Cc: linux-ia64@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Cc: peterz@infradead.org
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: sparclinux@vger.kernel.org
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: linux-s390@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: linux-hexagon@vger.kernel.org
      Cc: Helge Deller <deller@gmx.de>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: linux-snps-arc@lists.infradead.org
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: linux-xtensa@linux-xtensa.org
      Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
      Cc: openrisc@lists.librecores.org
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: linux-parisc@vger.kernel.org
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: linux-alpha@vger.kernel.org
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: "David S. Miller" <davem@davemloft.net>
      Link: http://lkml.kernel.org/r/20170824073105.3901-1-jslaby@suse.cz
      Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  6. 28 Feb, 2018 1 commit
    • xtensa: fix high memory/reserved memory collision · a5ecf56c
      Max Filippov authored
      commit 6ac5a11d upstream.
      
      The Xtensa memory initialization code frees high memory pages without
      checking whether they are in reserved memory regions or not. That
      results in an invalid totalram_pages value and in pages being used by
      both CMA and highmem, producing a bunch of BUGs at startup that look
      like this:
      
      BUG: Bad page state in process swapper  pfn:70800
      page:be60c000 count:0 mapcount:-127 mapping:  (null) index:0x1
      flags: 0x80000000()
      raw: 80000000 00000000 00000001 ffffff80 00000000 be60c014 be60c014 0000000a
      page dumped because: nonzero mapcount
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper Tainted: G    B            4.16.0-rc1-00015-g7928b2cb-dirty #23
      Stack:
       bd839d33 00000000 00000018 ba97b64c a106578c bd839d70 be60c000 00000000
       a1378054 bd86a000 00000003 ba97b64c a1066166 bd839da0 be60c000 ffe00000
       a1066b58 bd839dc0 be504000 00000000 000002f4 bd838000 00000000 0000001e
      Call Trace:
       [<a1065734>] bad_page+0xac/0xd0
       [<a106578c>] free_pages_check_bad+0x34/0x4c
       [<a1066166>] __free_pages_ok+0xae/0x14c
       [<a1066b58>] __free_pages+0x30/0x64
       [<a1365de5>] init_cma_reserved_pageblock+0x35/0x44
       [<a13682dc>] cma_init_reserved_areas+0xf4/0x148
       [<a10034b8>] do_one_initcall+0x80/0xf8
       [<a1361c16>] kernel_init_freeable+0xda/0x13c
       [<a125b59d>] kernel_init+0x9/0xd0
       [<a1004304>] ret_from_kernel_thread+0xc/0x18
      
      Only free high memory pages that are not reserved.
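
      Illustration (not from the patch): a compilable model of the rule "only
      free high memory pages that are not reserved". The reserved-range table
      and the free_highmem_page() stub are made up; the kernel tracks
      reservations in memblock.

        #include <stdio.h>
        #include <stdbool.h>

        struct range { unsigned long start_pfn, end_pfn; };   /* [start, end) */

        static const struct range reserved[] = {
                { 0x70000, 0x70008 },                          /* pretend CMA region */
        };

        static bool pfn_is_reserved(unsigned long pfn)
        {
                for (unsigned int i = 0; i < sizeof(reserved) / sizeof(reserved[0]); i++)
                        if (pfn >= reserved[i].start_pfn && pfn < reserved[i].end_pfn)
                                return true;
                return false;
        }

        static void free_highmem_page(unsigned long pfn)
        {
                printf("freeing pfn %#lx\n", pfn);
        }

        int main(void)
        {
                /* Pretend highmem spans pfns 0x70000..0x70010. */
                for (unsigned long pfn = 0x70000; pfn < 0x70010; pfn++)
                        if (!pfn_is_reserved(pfn))             /* the missing check */
                                free_highmem_page(pfn);
                return 0;
        }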
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  7. 17 Feb, 2018 1 commit
  8. 16 Aug, 2017 3 commits
  9. 24 Jun, 2017 1 commit
    • mm: larger stack guard gap, between vmas · cfc0eb40
      Hugh Dickins authored
      commit 1be7107f upstream.
      
      Stack guard page is a useful feature to reduce a risk of stack smashing
      into a different mapping. We have been using a single page gap which
      is sufficient to prevent having stack adjacent to a different mapping.
      But this seems to be insufficient in the light of the stack usage in
      userspace. E.g. glibc uses as large as 64kB alloca() in many commonly
      used functions. Others use constructs like gid_t buffer[NGROUPS_MAX],
      which is 256kB, or stack strings sized by MAX_ARG_STRLEN.
      
      This becomes especially dangerous for suid binaries with the default
      unlimited stack size limit, because those applications can be tricked
      into consuming a large portion of the stack, and a single glibc call
      could then jump over the guard page. These attacks are, unfortunately,
      not theoretical.
      
      Make those attacks less probable by increasing the stack guard gap
      to 1MB (on systems with 4k pages; but make it depend on the page size
      because systems with larger base pages might cap stack allocations in
      the PAGE_SIZE units) which should cover larger alloca() and VLA stack
      allocations. It is obviously not a full fix because the problem is
      somehow inherent, but it should reduce attack space a lot.
      
      One could argue that the gap size should be configurable from userspace,
      but that can be done later when somebody finds that the new 1MB is wrong
      for some special case applications.  For now, add a kernel command line
      option (stack_guard_gap) to specify the stack gap size (in page units).
      
      Implementation wise, first delete all the old code for stack guard page:
      because although we could get away with accounting one extra page in a
      stack vma, accounting a larger gap can break userspace - case in point,
      a program run with "ulimit -S -v 20000" failed when the 1MB gap was
      counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
      and strict non-overcommit mode.
      
      Instead of keeping the gap inside the stack vma, maintain the stack
      guard gap as a gap between vmas: use vm_start_gap() in place of vm_start
      (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
      places which need to respect the gap - mainly arch_get_unmapped_area()
      and the vma tree's subtree_gap support for that.
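
      Illustration (not from the patch): a reduced model of the vm_start_gap()
      idea, where the effective start of a downward-growing stack vma is its
      vm_start minus stack_guard_gap, clamped against underflow. The structure
      and constants are stand-ins, not the kernel's vm_area_struct.

        #include <stdio.h>

        #define VM_GROWSDOWN 0x1u

        /* 1MB gap with 4k pages; upstream this is tunable via stack_guard_gap=. */
        static unsigned long stack_guard_gap = 256UL << 12;

        struct vma { unsigned long vm_start, vm_end; unsigned int vm_flags; };

        static unsigned long vm_start_gap(const struct vma *vma)
        {
                unsigned long start = vma->vm_start;

                if (vma->vm_flags & VM_GROWSDOWN) {
                        start -= stack_guard_gap;
                        if (start > vma->vm_start)       /* clamp on underflow */
                                start = 0;
                }
                return start;
        }

        int main(void)
        {
                struct vma stack = { 0xbf000000UL, 0xc0000000UL, VM_GROWSDOWN };

                printf("vm_start       = %#lx\n", stack.vm_start);
                printf("vm_start_gap() = %#lx\n", vm_start_gap(&stack));
                return 0;
        }
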
      Original-patch-by: Oleg Nesterov <oleg@redhat.com>
      Original-patch-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Tested-by: Helge Deller <deller@gmx.de> # parisc
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      [wt: backport to 4.11: adjust context]
      [wt: backport to 4.9: adjust context ; kernel doc was not in admin-guide]
      Signed-off-by: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  10. 17 Jun, 2017 1 commit
    • xtensa: don't use linux IRQ #0 · a2f68276
      Max Filippov authored
      commit e5c86679 upstream.
      
      Linux IRQ #0 is reserved for error reporting and may not be used.
      Increase NR_IRQS by one additional slot and set the irq_domain_add_legacy
      first_irq parameter to 1, so that Linux IRQ #0 is not associated with
      hardware IRQ #0 in legacy IRQ domains. Introduce the macro
      XTENSA_PIC_LINUX_IRQ for static translation of xtensa PIC hardware IRQ
      numbers to Linux IRQ numbers, and use this macro in the XTFPGA platform
      data definitions.
      
      This fixes the inability to use hardware IRQ #0 in configurations that
      don't use a device tree, and allows for a non-identity mapping between
      Linux and hardware IRQ numbers.
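
      Illustration (not from the patch): with first_irq = 1 and hardware IRQ
      numbering starting at 0, the static translation is an offset of one. The
      macro body below is inferred from that description, not quoted from the
      patch.

        /* Inferred mapping that keeps Linux IRQ #0 free; the exact macro body
         * in the patch may differ. */
        #include <stdio.h>

        #define XTENSA_PIC_LINUX_IRQ(hwirq) ((hwirq) + 1)

        int main(void)
        {
                for (int hwirq = 0; hwirq < 4; hwirq++)
                        printf("hw IRQ %d -> linux IRQ %d\n",
                               hwirq, XTENSA_PIC_LINUX_IRQ(hwirq));
                return 0;
        }
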
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  11. 12 Apr, 2017 1 commit
  12. 15 Mar, 2017 1 commit
  13. 09 Feb, 2017 1 commit
  14. 14 Nov, 2016 1 commit
  15. 07 Nov, 2016 1 commit
  16. 08 Oct, 2016 1 commit
  17. 29 Sep, 2016 1 commit
    • xtensa: disable MMU initialization option on MMUv2 cores · a4c6be5a
      Max Filippov authored
      The MMU initialization option is currently ignored on MMUv2 cores, but
      it is used in Kconfig to select the kernel load and start addresses.
      This choice is not available for MMUv2 cores, as they have hardwired TLB
      entries. Disable the MMU initialization option for known MMUv2 cores so
      that they get correct kernel load/start addresses by default.
      
      This fixes the default allmodconfig build.
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
  18. 28 Sep, 2016 2 commits
  19. 21 Sep, 2016 8 commits
  20. 19 Sep, 2016 1 commit
  21. 12 Sep, 2016 3 commits
  22. 10 Sep, 2016 2 commits
    • xtensa: Added Cadence CSP kernel configuration for Xtensa · 23c2b932
      Scott Telford authored
      Added defconfig, device tree and Xtensa variant header files for the
      Cadence Configurable System Platform "xt_lnx" processor configuration.
      Signed-off-by: Scott Telford <stelford@cadence.com>
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
    • xtensa: fix default kernel load address · 73a3eed0
      Max Filippov authored
      Make the default kernel load address 0xd0003000 for MMUv2 cores and
      0x60003000 for noMMU cores. Don't initialize the MMU inside vmlinux for
      predefined MMUv2 cores (it's a noop anyway).
      
      This fixes the following defconfig build error:
        arch/xtensa/kernel/built-in.o: In function `fast_alloca':
        (.text+0x99a): dangerous relocation: j: cannot encode: _WindowUnderflow12
        arch/xtensa/kernel/built-in.o: In function `fast_alloca':
        (.text+0x99d): dangerous relocation: j: cannot encode: _WindowUnderflow8
        arch/xtensa/kernel/built-in.o: In function `fast_alloca':
        (.text+0x9a0): dangerous relocation: j: cannot encode: _WindowUnderflow4
        arch/xtensa/kernel/built-in.o: In function `window_overflow_restore_a0_fixup':
        (.text+0x23a3): dangerous relocation: j: cannot encode: (.DoubleExceptionVector.text+0x104)
        arch/xtensa/kernel/built-in.o: In function `window_overflow_restore_a0_fixup':
        (.text+0x23c1): dangerous relocation: j: cannot encode: (.DoubleExceptionVector.text+0x104)
        arch/xtensa/kernel/built-in.o: In function `window_overflow_restore_a0_fixup':
        (.text+0x23dd): dangerous relocation: j: cannot encode: (.DoubleExceptionVector.text+0x104)
      
      With this change all xtensa defconfigs build correctly.
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
  23. 09 Sep, 2016 1 commit
    • x86/pkeys: Allocation/free syscalls · e8c24d3a
      Dave Hansen authored
      This patch adds two new system calls:
      
      	int pkey_alloc(unsigned long flags, unsigned long init_access_rights)
      	int pkey_free(int pkey);
      
      These implement an "allocator" for the protection keys
      themselves, which can be thought of as analogous to the allocator
      that the kernel has for file descriptors.  The kernel tracks
      which numbers are in use, and only allows operations on keys that
      are valid.  A key which was not obtained by pkey_alloc() may not,
      for instance, be passed to pkey_mprotect().
      
      These system calls are also very important given the kernel's use
      of pkeys to implement execute-only support.  These help ensure
      that userspace can never assume that it has control of a key
      unless it first asks the kernel.  The kernel does not promise to
       preserve PKRU (rights register) contents except for allocated
      pkeys.
      
      The 'init_access_rights' argument to pkey_alloc() specifies the
      rights that will be established for the returned pkey.  For
      instance:
      
      	pkey = pkey_alloc(flags, PKEY_DENY_WRITE);
      
       will allocate 'pkey', but will also set the bits in PKRU[1] such that
       writing to 'pkey' is already denied.
      
      The kernel does not prevent pkey_free() from successfully freeing
      in-use pkeys (those still assigned to a memory range by
      pkey_mprotect()).  It would be expensive to implement the checks
      for this, so we instead say, "Just don't do it" since sane
      software will never do it anyway.
      
      Any piece of userspace calling pkey_alloc() needs to be prepared
      for it to fail.  Why?  pkey_alloc() returns the same error code
      (ENOSPC) when there are no pkeys and when pkeys are unsupported.
      They can be unsupported for a whole host of reasons, so apps must
      be prepared for this.  Also, libraries or LD_PRELOADs might steal
      keys before an application gets access to them.
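
      Illustration (not from the patch): a minimal userspace caller that
      treats pkey_alloc() failure as normal, assuming a glibc that exposes the
      pkey_alloc()/pkey_free() wrappers in <sys/mman.h>. It passes 0 for
      init_access_rights; the PKEY_DENY_* value from the description above
      would go in its place.

        #define _GNU_SOURCE
        #include <sys/mman.h>
        #include <stdio.h>

        int main(void)
        {
                int pkey = pkey_alloc(0 /* flags */, 0 /* init_access_rights */);

                if (pkey < 0) {
                        /* ENOSPC covers both "no free pkeys" and "pkeys
                         * unsupported"; the caller must cope without one. */
                        perror("pkey_alloc");
                        return 0;
                }

                printf("got pkey %d\n", pkey);
                /* pkey_mprotect(addr, len, prot, pkey) would associate it here. */
                pkey_free(pkey);
                return 0;
        }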
      
      This allocation mechanism could be implemented in userspace.
      Even if we did it in userspace, we would still need additional
      user/kernel interfaces to tell userspace which keys are being
      used by the kernel internally (such as for execute-only
      mappings).  Having the kernel provide this facility completely
      removes the need for these additional interfaces, or having an
      implementation of this in userspace at all.
      
      Note that we have to make changes to all of the architectures
      that do not use mman-common.h because we use the new
      PKEY_DENY_ACCESS/WRITE macros in arch-independent code.
      
      1. PKRU is the Protection Key Rights User register.  It is a
         usermode-accessible register that controls whether writes
         and/or access to each individual pkey is allowed or denied.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: linux-arch@vger.kernel.org
      Cc: Dave Hansen <dave@sr71.net>
      Cc: arnd@arndb.de
      Cc: linux-api@vger.kernel.org
      Cc: linux-mm@kvack.org
      Cc: luto@kernel.org
      Cc: akpm@linux-foundation.org
      Cc: torvalds@linux-foundation.org
      Link: http://lkml.kernel.org/r/20160729163015.444FE75F@viggo.jf.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>