1. 23 Nov, 2018 1 commit
  2. 13 Nov, 2018 1 commit
    • x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation · 66fe51cb
      Jiri Kosina authored
      commit 53c613fe6349994f023245519265999eed75957f upstream.
      
      STIBP is a feature provided by certain Intel ucodes / CPUs. This feature
      (once enabled) prevents cross-hyperthread control of decisions made by
      indirect branch predictors.
      
      Enable this feature if
      
      - the CPU is vulnerable to spectre v2
      - the CPU supports SMT and has SMT siblings online
      - spectre_v2 mitigation autoselection is enabled (default)
      
      After some previous discussion, this leaves STIBP on all the time, as wrmsr
      on crossing kernel boundary is a no-no. This could perhaps later be a bit
      more optimized (like disabling it in NOHZ, experiment with disabling it in
      idle, etc) if needed.
      
      Note that the synchronization of the mask manipulation via newly added
      spec_ctrl_mutex is currently not strictly needed, as the only updater is
      already being serialized by cpu_add_remove_lock, but let's make this a
      little bit more future-proof.
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/nycvar.YFH.7.76.1809251438240.15880@cbobk.fhfr.pm
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      66fe51cb
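
      As a rough illustration of the logic this commit message describes, here is a
      simplified C sketch in the style of arch/x86/kernel/cpu/bugs.c. It is not the
      verbatim upstream diff; the helper names (stibp_needed(), update_stibp_msr(),
      arch_smt_update()) and the exact flow are illustrative, while the SPEC_CTRL and
      SMT symbols are the kernel's existing ones.

      static DEFINE_MUTEX(spec_ctrl_mutex);   /* serializes updates of the SPEC_CTRL mask */

      static bool stibp_needed(void)
      {
              /* Only worthwhile if spectre_v2 mitigation is active and the ucode has STIBP. */
              if (spectre_v2_enabled == SPECTRE_V2_NONE)
                      return false;
              return boot_cpu_has(X86_FEATURE_STIBP);
      }

      static void update_stibp_msr(void *info)
      {
              wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
      }

      /* Invoked when the set of online SMT siblings changes. */
      void arch_smt_update(void)
      {
              u64 mask;

              if (!stibp_needed())
                      return;

              mutex_lock(&spec_ctrl_mutex);
              mask = x86_spec_ctrl_base;
              if (cpu_smt_control == CPU_SMT_ENABLED)
                      mask |= SPEC_CTRL_STIBP;        /* SMT siblings online: keep STIBP set */
              else
                      mask &= ~SPEC_CTRL_STIBP;

              if (mask != x86_spec_ctrl_base) {
                      x86_spec_ctrl_base = mask;
                      /* STIBP stays on all the time; write the MSR once per CPU here
                       * rather than toggling it on every kernel boundary crossing. */
                      on_each_cpu(update_stibp_msr, NULL, 1);
              }
              mutex_unlock(&spec_ctrl_mutex);
      }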
  3. 15 Aug, 2018 12 commits
  4. 13 Apr, 2018 1 commit
  5. 02 Jan, 2018 1 commit
  6. 14 Dec, 2017 1 commit
  7. 07 Aug, 2017 2 commits
    • smp/hotplug: Replace BUG_ON and react useful · 6b3d13fe
      Thomas Gleixner authored
      commit dea1d0f5 upstream.
      
      The move of the unpark functions to the control thread moved the BUG_ON()
      there as well. While it made some sense in the idle thread of the upcoming
      CPU, it's bogus to crash the control thread on the already online CPU,
      especially as the function has a return value and the callsite is prepared
      to handle an error return.
      
      Replace it with a WARN_ON_ONCE() and return a proper error code.
      
      Fixes: 9cd4f1a4 ("smp/hotplug: Move unparking of percpu threads to the control CPU")
      Rightfully-ranted-at-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      6b3d13fe
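
      A minimal sketch of the shape of this fix in bringup_wait_for_ap()
      (kernel/cpu.c style, simplified; the rest of the function is elided):

      static int bringup_wait_for_ap(unsigned int cpu)
      {
              struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);

              /* Wait for the AP to reach CPUHP_AP_ONLINE_IDLE. */
              wait_for_completion(&st->done);

              /* Previously: BUG_ON(!cpu_online(cpu)). Crashing the control thread
               * is pointless here, so warn once and report the failure instead. */
              if (WARN_ON_ONCE(!cpu_online(cpu)))
                      return -ECANCELED;

              /* ... unpark the AP's stopper/hotplug threads and finish the bringup ... */
              return st->result;
      }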
    • smp/hotplug: Move unparking of percpu threads to the control CPU · 7b4e4b18
      Thomas Gleixner authored
      commit 9cd4f1a4 upstream.
      
      Vikram reported the following backtrace:
      
         BUG: scheduling while atomic: swapper/7/0/0x00000002
         CPU: 7 PID: 0 Comm: swapper/7 Not tainted 4.9.32-perf+ #680
         schedule
         schedule_hrtimeout_range_clock
         schedule_hrtimeout
         wait_task_inactive
         __kthread_bind_mask
         __kthread_bind
         __kthread_unpark
         kthread_unpark
         cpuhp_online_idle
         cpu_startup_entry
         secondary_start_kernel
      
      He analyzed correctly that a parked cpu hotplug thread of an offlined CPU
      was still on the runqueue when the CPU came back online and tried to unpark
      it. This causes the thread which invoked kthread_unpark() to call
      wait_task_inactive() and subsequently schedule() with preemption disabled.
      His proposed workaround was to "make sure" that a parked thread has
      scheduled out when the CPU goes offline, so the situation cannot happen.
      
      But that's still wrong because the root cause is not the fact that the
      percpu thread is still on the runqueue and neither that preemption is
      disabled, which could be simply solved by enabling preemption before
      calling kthread_unpark().
      
      The real issue is that the calling thread is the idle task of the upcoming
      CPU, which is not supposed to call anything which might sleep.  The moron,
      who wrote that code, missed completely that kthread_unpark() might end up
      in schedule().
      
      The solution is simpler than expected. The thread which controls the
      hotplug operation is waiting for the CPU to call complete() on the hotplug
      state completion. So the idle task of the upcoming CPU can set its state to
      CPUHP_AP_ONLINE_IDLE and invoke complete(). This in turn wakes the control
      task on a different CPU, which then can safely do the unpark and kick the
      now unparked hotplug thread of the upcoming CPU to complete the bringup to
      the final target state.
      
      Control CPU                     AP
      
      bringup_cpu();
        __cpu_up()  ------------>
      				bringup_ap();
        bringup_wait_for_ap()
          wait_for_completion();
                                      cpuhp_online_idle();
                      <------------    complete();
          unpark(AP->stopper);
          unpark(AP->hotplugthread);
                                      while(1)
                                        do_idle();
          kick(AP->hotplugthread);
          wait_for_completion();	hotplug_thread()
      				  run_online_callbacks();
      				  complete();
      
      Fixes: 8df3e07e ("cpu/hotplug: Let upcoming cpu bring itself fully up")
      Reported-by: Vikram Mulukutla <markivx@codeaurora.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Sewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1707042218020.2131@nanos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      7b4e4b18
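
      The diagram above roughly corresponds to code of the following shape
      (simplified sketch of kernel/cpu.c after this change, not the verbatim diff;
      the BUG_ON shown here is the one the follow-up commit above replaces with
      WARN_ON_ONCE()):

      /* Control CPU: runs in the hotplug control thread, which may sleep. */
      static int bringup_wait_for_ap(unsigned int cpu)
      {
              struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);

              /* Wait for the AP to reach CPUHP_AP_ONLINE_IDLE and call complete(). */
              wait_for_completion(&st->done);
              BUG_ON(!cpu_online(cpu));

              /* Unpark the AP's stopper and hotplug threads from here instead of from
               * the AP's idle task, because kthread_unpark() may end up in schedule(). */
              stop_machine_unpark(cpu);
              kthread_unpark(st->thread);

              /* Kick the now-unparked hotplug thread to reach the final target state. */
              if (st->target > CPUHP_AP_ONLINE_IDLE) {
                      __cpuhp_kick_ap_work(st);
                      wait_for_completion(&st->done);
              }
              return st->result;
      }

      /* AP: called from the idle loop of the upcoming CPU. It only signals the
       * control thread now; it no longer unparks anything itself. */
      void cpuhp_online_idle(enum cpuhp_state state)
      {
              struct cpuhp_cpu_state *st = this_cpu_ptr(&cpuhp_state);

              if (state != CPUHP_AP_ONLINE_IDLE)
                      return;

              st->state = CPUHP_AP_ONLINE_IDLE;
              complete(&st->done);
      }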
  8. 14 Jun, 2017 1 commit
  9. 08 May, 2017 1 commit
  10. 06 Jan, 2017 1 commit
    • hotplug: Make register and unregister notifier API symmetric · 56eaecc8
      Michal Hocko authored
      commit 777c6e0d upstream.
      
      Yu Zhao has noticed that __unregister_cpu_notifier only unregisters its
      notifiers when HOTPLUG_CPU=y while the registration might succeed even
      when HOTPLUG_CPU=n if MODULE is enabled. This means that e.g. zswap
      might keep a stale notifier on the list on the manual clean up during
      the pool tear down and thus corrupt the list, resulting in the following:
      
      [  144.964346] BUG: unable to handle kernel paging request at ffff880658a2be78
      [  144.971337] IP: [<ffffffffa290b00b>] raw_notifier_chain_register+0x1b/0x40
      <snipped>
      [  145.122628] Call Trace:
      [  145.125086]  [<ffffffffa28e5cf8>] __register_cpu_notifier+0x18/0x20
      [  145.131350]  [<ffffffffa2a5dd73>] zswap_pool_create+0x273/0x400
      [  145.137268]  [<ffffffffa2a5e0fc>] __zswap_param_set+0x1fc/0x300
      [  145.143188]  [<ffffffffa2944c1d>] ? trace_hardirqs_on+0xd/0x10
      [  145.149018]  [<ffffffffa2908798>] ? kernel_param_lock+0x28/0x30
      [  145.154940]  [<ffffffffa2a3e8cf>] ? __might_fault+0x4f/0xa0
      [  145.160511]  [<ffffffffa2a5e237>] zswap_compressor_param_set+0x17/0x20
      [  145.167035]  [<ffffffffa2908d3c>] param_attr_store+0x5c/0xb0
      [  145.172694]  [<ffffffffa290848d>] module_attr_store+0x1d/0x30
      [  145.178443]  [<ffffffffa2b2b41f>] sysfs_kf_write+0x4f/0x70
      [  145.183925]  [<ffffffffa2b2a5b9>] kernfs_fop_write+0x149/0x180
      [  145.189761]  [<ffffffffa2a99248>] __vfs_write+0x18/0x40
      [  145.194982]  [<ffffffffa2a9a412>] vfs_write+0xb2/0x1a0
      [  145.200122]  [<ffffffffa2a9a732>] SyS_write+0x52/0xa0
      [  145.205177]  [<ffffffffa2ff4d97>] entry_SYSCALL_64_fastpath+0x12/0x17
      
      This can be even triggered manually by changing
      /sys/module/zswap/parameters/compressor multiple times.
      
      Fix this issue by making unregister APIs symmetric to the register so
      there are no surprises.
      
      Fixes: 47e627bc ("[PATCH] hotplug: Allow modules to use the cpu hotplug notifiers even if !CONFIG_HOTPLUG_CPU")
      Reported-and-tested-by: Yu Zhao <yuzhao@google.com>
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: linux-mm@kvack.org
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Link: http://lkml.kernel.org/r/20161207135438.4310-1-mhocko@kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      56eaecc8
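
      The idea of the fix, sketched as include/linux/cpu.h-style declarations
      (simplified and illustrative, not the verbatim upstream diff): the register
      and unregister entry points are guarded by the same preprocessor condition,
      so any notifier that could be registered can also be unregistered.

      #if defined(CONFIG_HOTPLUG_CPU) || !defined(MODULE)
      extern int  register_cpu_notifier(struct notifier_block *nb);
      extern int  __register_cpu_notifier(struct notifier_block *nb);
      extern void unregister_cpu_notifier(struct notifier_block *nb);
      extern void __unregister_cpu_notifier(struct notifier_block *nb);
      #else
      /* Both directions are stubbed out together, keeping the API symmetric. */
      static inline int  register_cpu_notifier(struct notifier_block *nb)     { return 0; }
      static inline int  __register_cpu_notifier(struct notifier_block *nb)   { return 0; }
      static inline void unregister_cpu_notifier(struct notifier_block *nb)   { }
      static inline void __unregister_cpu_notifier(struct notifier_block *nb) { }
      #endif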
  11. 16 Oct, 2016 1 commit
  12. 06 Sep, 2016 4 commits
  13. 05 Sep, 2016 1 commit
  14. 02 Sep, 2016 3 commits
  15. 26 Aug, 2016 1 commit
    • cpu/hotplug: Allow suspend/resume CPU to be specified · d391e552
      James Morse authored
      disable_nonboot_cpus() assumes that the lowest numbered online CPU is
      the boot CPU, and that this is the correct CPU to run any power
      management code on.
      
      On x86 this is always correct, as CPU0 cannot (easily) be taken offline.
      
      On arm64 CPU0 can be taken offline. For hibernate/resume this means we
      may hibernate on a CPU other than CPU0. If the system is rebooted with
      kexec, 'CPU0' will be assigned to a different physical CPU. This
      complicates hibernate/resume as now we can't trust the CPU numbers.
      Arch code can find the correct physical CPU, and ensure it is online
      before resume from hibernate begins, but also needs to influence
      disable_nonboot_cpus()'s choice of CPU.
      
      Rename disable_nonboot_cpus() as freeze_secondary_cpus() and add an
      argument indicating which CPU should be left standing. Follow the logic
      in migrate_to_reboot_cpu() to use the lowest numbered online CPU if the
      requested CPU is not online.
      Add disable_nonboot_cpus() as an inline function that has the existing
      behaviour.
      
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      d391e552
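
      A simplified sketch of the resulting interface (kernel/cpu.c and
      include/linux/cpu.h style; frozen_cpus bookkeeping and error reporting are
      elided, and exact signatures may differ from the upstream code):

      int freeze_secondary_cpus(int primary)
      {
              int cpu, error = 0;

              cpu_maps_update_begin();
              /* As in migrate_to_reboot_cpu(): fall back to the lowest numbered
               * online CPU if the requested CPU is not online. */
              if (!cpu_online(primary))
                      primary = cpumask_first(cpu_online_mask);

              for_each_online_cpu(cpu) {
                      if (cpu == primary)
                              continue;
                      error = _cpu_down(cpu, 1, CPUHP_OFFLINE);
                      if (error)
                              break;
              }
              cpu_maps_update_end();
              return error;
      }

      /* Existing callers keep the old behaviour: leave the lowest CPU (0) standing. */
      static inline int disable_nonboot_cpus(void)
      {
              return freeze_secondary_cpus(0);
      }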
  16. 24 Aug, 2016 1 commit
  17. 22 Aug, 2016 2 commits
    • cpu/hotplug: Get rid of CPU_STARTING reference · 0c6d4576
      Sebastian Andrzej Siewior authored
      CPU_STARTING is scheduled for removal. There is no use of it in drivers
      and core code uses it only for compatibility with old-style CPU-hotplug
      notifiers.  This patch therefore removes CPU_STARTING from an
      RCU-related comment.
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      0c6d4576
    • rcu: Provide exact CPU-online tracking for RCU · 7ec99de3
      Paul E. McKenney authored
      Up to now, RCU has assumed that the CPU-online process makes it from
      CPU_UP_PREPARE to set_cpu_online() within one jiffy.  Given the recent
      rise of virtualized environments, this assumption is very clearly
      obsolete.  Failing to meet this deadline can result in RCU paying
      attention to an incoming CPU for one jiffy, then ignoring it until the
      grace period following the one in which that CPU sets itself online.
      This situation might prove to be fatally disappointing to any RCU
      read-side critical sections that had the misfortune to execute during
      the time in which RCU was ignoring the slow-to-come-online CPU.
      
      This commit therefore updates RCU's internal CPU state-tracking
      information at notify_cpu_starting() time, thus providing RCU with
      an exact transition of the CPU's state from offline to online.
      
      Note that this means that incoming CPUs must not use RCU read-side
      critical sections (other than those of SRCU) until notify_cpu_starting()
      time.  Note also that the CPU_STARTING notifiers -are- allowed to use
      RCU read-side critical sections.  (Of course, CPU-hotplug notifiers are
      rapidly becoming obsolete, so you need to act fast!)
      
      If a given architecture or CPU family needs to use RCU read-side
      critical sections earlier, the call to rcu_cpu_starting() from
      notify_cpu_starting() will need to be architecture-specific, with
      architectures that need early use being required to hand-place
      the call to rcu_cpu_starting() at some point preceding the call to
      notify_cpu_starting().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      7ec99de3
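
      A rough sketch of where the new hook sits and what it does (simplified from
      the description above; field and helper names follow the RCU tree code of
      that era and may not match the upstream diff exactly):

      /* kernel/cpu.c: tell RCU about the incoming CPU before any CPU_STARTING-time
       * callbacks run, so they may legally use RCU read-side critical sections. */
      void notify_cpu_starting(unsigned int cpu)
      {
              rcu_cpu_starting(cpu);  /* RCU now tracks this CPU as online. */

              /* ... invoke the CPUHP_AP_ONLINE / CPU_STARTING callbacks as before ... */
      }

      /* kernel/rcu/tree.c: record the incoming CPU in RCU's state-tracking masks. */
      void rcu_cpu_starting(unsigned int cpu)
      {
              unsigned long flags, mask;
              struct rcu_data *rdp;
              struct rcu_node *rnp;
              struct rcu_state *rsp;

              for_each_rcu_flavor(rsp) {
                      rdp  = per_cpu_ptr(rsp->rda, cpu);
                      rnp  = rdp->mynode;
                      mask = rdp->grpmask;
                      raw_spin_lock_irqsave_rcu_node(rnp, flags);
                      rnp->qsmaskinitnext |= mask;    /* online for normal grace periods */
                      rnp->expmaskinitnext |= mask;   /* ... and for expedited grace periods */
                      raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
              }
      }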
  18. 10 Aug, 2016 1 commit
  19. 28 Jul, 2016 1 commit
  20. 15 Jul, 2016 3 commits