1. 09 Dec, 2017 1 commit
  2. 22 Jun, 2017 2 commits
  3. 13 Jun, 2017 1 commit
    • nohz: Fix spurious warning when hrtimer and clockevent get out of sync · d4af6d93
      Frederic Weisbecker authored
      The sanity check ensuring that the tick expiry cache (ts->next_tick)
      is actually in sync with the hardware clock (dev->next_event) makes the
      wrong assumption that the clock can't be programmed later than the
      hrtimer deadline.
      
      In fact the clock hardware can be programmed later on some conditions
      such as:
      
          * The hrtimer deadline is already in the past.
          * The hrtimer deadline is earlier than the minimum delay supported
            by the hardware.
      
      Such conditions can be met when we program the tick. For example, if
      the last jiffies update hasn't been seen by the current CPU yet, we
      may program the hrtimer to a deadline earlier than ktime_get(),
      because last_jiffies_update is the timestamp base used to compute the
      next tick.
      
      As a result, we can randomly observe such a warning:
      
      	WARNING: CPU: 5 PID: 0 at kernel/time/tick-sched.c:794 tick_nohz_stop_sched_tick kernel/time/tick-sched.c:791 [inline]
      	Call Trace:
      	 tick_nohz_irq_exit
      	 tick_irq_exit
      	 irq_exit
      	 exiting_irq
      	 smp_call_function_interrupt
      	 smp_call_function_single_interrupt
      	 call_function_single_interrupt
      
      Therefore, let's rather make sure that the tick expiry cache is sync'ed
      with the tick hrtimer deadline, against which it is not supposed to
      drift away. The clock hardware instead has its own will and can't be
      used as a reliable comparison point.
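
      The new invariant can be sketched in plain C (a user-space model, not
      the kernel code; clockevent_program() and the helper names below are
      illustrative assumptions):

      ```c
      #include <stdbool.h>

      typedef long long s64;

      /* Hypothetical model: the hardware can never be programmed earlier
       * than now + min_delay (nor in the past), so the device event may
       * legitimately lag behind the hrtimer deadline. */
      static s64 clockevent_program(s64 deadline, s64 now, s64 min_delay)
      {
          return deadline > now + min_delay ? deadline : now + min_delay;
      }

      /* Old (buggy) sanity check: cache vs. the hardware deadline. */
      static bool cache_ok_old(s64 next_tick, s64 dev_next_event)
      {
          return next_tick == dev_next_event;
      }

      /* Fixed check: cache vs. the tick hrtimer deadline it mirrors. */
      static bool cache_ok_new(s64 next_tick, s64 hrtimer_deadline)
      {
          return next_tick == hrtimer_deadline;
      }
      ```

      With a deadline already in the past, the old check trips spuriously
      while the fixed one still holds.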
      Reported-and-tested-by: Sasha Levin <alexander.levin@verizon.com>
      Reported-and-tested-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: James Hartsock <hartsjc@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Wright <tim@binbash.co.uk>
      Link: http://lkml.kernel.org/r/1497326654-14122-1-git-send-email-fweisbec@gmail.com
      [ Minor readability edit. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 05 Jun, 2017 1 commit
    • nohz: Fix buggy tick delay on IRQ storms · f99973e1
      Frederic Weisbecker authored
      When the tick is stopped and we reach the dynticks evaluation code on
      IRQ exit, we perform a soft tick restart if we observe an expired timer
      from there. It means we program the nearest possible tick but we stay in
      dynticks mode (ts->tick_stopped = 1) because we may need to stop the tick
      again after that expired timer is handled.
      
      Now this solution works most of the time but if we suffer an IRQ storm
      and those interrupts trigger faster than the hardware clockevents min
      delay, our tick won't fire until that IRQ storm is finished.
      
      Here is the problem: on IRQ exit we reprog the timer to at least
      NOW() + min_clockevents_delay. Another IRQ fires before the tick so we
      reschedule again to NOW() + min_clockevents_delay, etc... The tick
      is eternally rescheduled min_clockevents_delay ahead.
      
      A solution is to simply remove this soft tick restart. After all, the
      normal dynticks evaluation path can handle a 0 delay just fine, and
      by doing that we benefit from the optimization branch which avoids
      clock reprogramming if the clockevents deadline hasn't changed since
      the last reprogram. This fixes our issue because we no longer do
      repetitive clock reprogramming that always adds the hardware min delay.
      
      As a side effect it should even optimize the 0 delay path in general.
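
      The starvation and the fix can be modeled with a small simulation
      (user-space C; tick_fire_time() and its parameters are hypothetical
      names, not kernel symbols): with the soft restart, every IRQ exit
      pushes the deadline to now + min_delay, so a storm firing faster than
      min_delay starves the tick; without it, the original deadline is kept
      and expires during the storm.

      ```c
      typedef long long s64;

      /* n_irqs storm interrupts arrive every irq_period (< min_delay).
       * Returns the time at which the tick finally fires. */
      static s64 tick_fire_time(int soft_restart, s64 min_delay,
                                s64 irq_period, int n_irqs)
      {
          s64 deadline = min_delay;   /* expired timer: earliest possible tick */

          for (int i = 1; i <= n_irqs; i++) {
              s64 now = (s64)i * irq_period;
              if (deadline <= now)
                  return deadline;    /* tick fired during the storm */
              if (soft_restart)       /* IRQ exit reprograms the tick... */
                  deadline = now + min_delay; /* ...always min_delay ahead */
          }
          return deadline;            /* only fires once the storm ends */
      }
      ```

      With a 100-IRQ storm of period 3 against a min delay of 10, the soft
      restart path delays the tick until after the storm, while the fixed
      path fires at the original deadline.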
      Reported-and-tested-by: Octavian Purdila <octavian.purdila@nxp.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1496328429-13317-1-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. 30 May, 2017 1 commit
  6. 17 May, 2017 2 commits
    • nohz: Fix collision between tick and other hrtimers, again · 411fe24e
      Frederic Weisbecker authored
      This restores commit:
      
        24b91e36: ("nohz: Fix collision between tick and other hrtimers")
      
      ... which got reverted by commit:
      
        558e8e27: ('Revert "nohz: Fix collision between tick and other hrtimers"')
      
      ... due to a regression where CPUs spuriously stopped ticking.
      
      The bug happened when a tick fired too early, before its expected
      expiration: on IRQ exit the tick was scheduled again to the same
      deadline, but reprogramming was skipped because ts->next_tick still
      cached that deadline. This has now been fixed by resetting
      ts->next_tick from the tick itself. Extra care has also been taken to
      prevent obsolete values across CPU hotplug operations.
      
      When the tick is stopped and an interrupt occurs afterward, we check on
      that interrupt exit if the next tick needs to be rescheduled. If it
      doesn't need any update, we don't want to do anything.
      
      In order to check if the tick needs an update, we compare it against the
      clockevent device deadline. Now that's a problem because the clockevent
      device is at a lower level than the tick itself if it is implemented
      on top of hrtimer.
      
      Every hrtimer shares this clockevent device, so comparing the next
      tick deadline against the clockevent device deadline is wrong: the
      device may be programmed for another hrtimer whose deadline collides
      with the tick. As a result we may accidentally end up not
      reprogramming the tick.
      
      In a worst case scenario under full dynticks mode, the tick stops
      firing entirely, even though it is supposed to fire at least every
      second (1 Hz), leaving /proc/stat stalled:
      
            Task in a full dynticks CPU
            ----------------------------
      
            * hrtimer A is queued 2 seconds ahead
            * the tick is stopped, scheduled 1 second ahead
            * tick fires 1 second later
            * on tick exit, nohz schedules the tick 1 second ahead but sees
              that the clockevent device is already programmed to that
              deadline; fooled by hrtimer A, the tick isn't rescheduled.
            * hrtimer A is cancelled before its deadline
            * tick never fires again until an interrupt happens...
      
      In order to fix this, store the next tick deadline to the tick_sched
      local structure and reuse that value later to check whether we need to
      reprogram the clock after an interrupt.
      
      On the other hand, ts->sleep_length still wants to know about the next
      clock event and not just the tick, so we want to improve the related
      comment to avoid confusion.
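
      The cached-deadline logic restored here can be sketched as follows
      (an illustrative user-space model, not the kernel implementation):

      ```c
      typedef long long s64;

      struct tick_sched_model {
          s64 next_tick;    /* cached deadline of the last tick reprogram */
          int hw_writes;    /* counts (expensive) clockevent device writes */
      };

      /* IRQ exit: only touch the hardware if the tick deadline changed,
       * judged against our own cache rather than the shared device. */
      static void tick_nohz_reprogram(struct tick_sched_model *ts, s64 deadline)
      {
          if (ts->next_tick == deadline)
              return;       /* deadline unchanged since last reprogram: skip */
          ts->hw_writes++;
          ts->next_tick = deadline;
      }
      ```

      Repeated requests for the same deadline cost one device write; a new
      deadline always reaches the hardware, regardless of what other
      hrtimers programmed into the shared clockevent device.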
      Reported-and-tested-by: Tim Wright <tim@binbash.co.uk>
      Reported-and-tested-by: Pavel Machek <pavel@ucw.cz>
      Reported-by: James Hartsock <hartsjc@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/1492783255-5051-2-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • Frederic Weisbecker authored · ce6cf9a1
  7. 15 May, 2017 1 commit
  8. 23 Mar, 2017 1 commit
    • cpufreq: schedutil: Avoid reducing frequency of busy CPUs prematurely · b7eaf1aa
      Rafael J. Wysocki authored
      The way the schedutil governor uses the PELT metric causes it to
      underestimate the CPU utilization in some cases.
      
      That can be easily demonstrated by running kernel compilation on
      a Sandy Bridge Intel processor, running turbostat in parallel with
      it and looking at the values written to the MSR_IA32_PERF_CTL
      register.  Namely, the expected result would be that when all CPUs
      were 100% busy, all of them would be requested to run in the maximum
      P-state, but observation shows that this clearly isn't the case.
      The CPUs run in the maximum P-state for a while and then are
      requested to run slower and go back to the maximum P-state after
      a while again.  That causes the actual frequency of the processor to
      visibly oscillate below the sustainable maximum in a jittery fashion
      which clearly is not desirable.
      
      That has been attributed to CPU utilization metric updates on task
      migration that cause the total utilization value for the CPU to be
      reduced by the utilization of the migrated task.  If that happens,
      the schedutil governor may see a CPU utilization reduction and will
      attempt to reduce the CPU frequency accordingly right away.  That
      may be premature, though, for example if the system is generally
      busy and there are other runnable tasks waiting to be run on that
      CPU already.
      
      This is unlikely to be an issue on systems where cpufreq policies are
      shared between multiple CPUs, because in those cases the policy
      utilization is computed as the maximum of the CPU utilization values
      over the whole policy and if that turns out to be low, reducing the
      frequency for the policy most likely is a good idea anyway.  On
      systems with one CPU per policy, however, it may affect performance
      adversely and even lead to increased energy consumption in some cases.
      
      On those systems it may be addressed by taking another utilization
      metric into consideration, like whether or not the CPU whose
      frequency is about to be reduced has been idle recently, because if
      that's not the case, the CPU is likely to be busy in the near future
      and its frequency should not be reduced.
      
      To that end, use the counter of idle calls in the timekeeping code.
      Namely, make the schedutil governor look at that counter for the
      current CPU every time before its frequency is about to be reduced.
      If the counter has not changed since the previous iteration of the
      governor computations for that CPU, the CPU has been busy for all
      that time and its frequency should not be decreased, so if the new
      frequency would be lower than the one set previously, the governor
      will skip the frequency update.
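
      A minimal model of that skip logic (a sketch only; struct sg_cpu and
      sugov_next_freq() here are simplified stand-ins for the schedutil
      internals, not the exact kernel symbols):

      ```c
      #include <stdbool.h>

      struct sg_cpu {
          unsigned long long saved_idle_calls; /* idle-calls counter last seen */
          unsigned int cur_freq;               /* frequency currently set */
      };

      static unsigned int sugov_next_freq(struct sg_cpu *sg,
                                          unsigned long long idle_calls,
                                          unsigned int next_freq)
      {
          /* unchanged counter: the CPU never went idle since last time */
          bool busy = (idle_calls == sg->saved_idle_calls);

          sg->saved_idle_calls = idle_calls;
          if (busy && next_freq < sg->cur_freq)
              return sg->cur_freq;  /* busy CPU: skip the frequency reduction */
          sg->cur_freq = next_freq; /* increases (or idle CPUs) always apply */
          return next_freq;
      }
      ```

      A busy CPU keeps its frequency even when the utilization metric dips;
      once an idle call is observed, reductions go through again.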
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Reviewed-by: Joel Fernandes <joelaf@google.com>
  9. 02 Mar, 2017 5 commits
  10. 16 Feb, 2017 1 commit
    • Revert "nohz: Fix collision between tick and other hrtimers" · 558e8e27
      Linus Torvalds authored
      This reverts commit 24b91e36 and commit
      7bdb59f1 ("tick/nohz: Fix possible missing clock reprog after tick
      soft restart") that depends on it.
      
      Pavel reports that it causes occasional boot hangs for him that seem to
      depend on just how the machine was booted.  In particular, his machine
      hangs at around the PCI fixups of the EHCI USB host controller, but only
      hangs from cold boot, not from a warm boot.
      
      Thomas Gleixner suspects it's a CPU hotplug interaction, particularly
      since Pavel also saw suspend/resume issues that seem to be related.
      We're reverting for now while trying to figure out the root cause.
      Reported-bisected-and-tested-by: Pavel Machek <pavel@ucw.cz>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@kernel.org  # reverted commits were marked for stable
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 10 Feb, 2017 1 commit
    • tick/nohz: Fix possible missing clock reprog after tick soft restart · 7bdb59f1
      Frederic Weisbecker authored
      ts->next_tick keeps track of the next tick deadline in order to
      optimize clock programming on irq exit and avoid redundant clock
      device writes.
      
      Now if ts->next_tick misses an update, we may spuriously skip a clock
      reprogram later, as the nohz code is fooled by an obsolete next_tick
      value.
      
      This is what happens here on a specific path: when we observe an
      expired timer from the nohz update code on irq exit, we perform a soft
      tick restart which simply fires the closest possible tick without
      actually exiting the nohz mode and restoring a periodic state. But we
      forget to update ts->next_tick accordingly.
      
      As a result, after the next tick resulting from such soft tick restart,
      the nohz code sees a stale value on ts->next_tick which doesn't match
      the clock deadline that just expired. If that obsolete ts->next_tick
      value happens to collide with the actual next tick deadline to be
      scheduled, we may spuriously bypass the clock reprogramming. In the
      worst case, the tick may never fire again.
      
      Fix this with a ts->next_tick reset on soft tick restart.
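
      The stale-cache scenario and the reset can be modeled as follows (a
      user-space sketch with hypothetical names; the real fix resets
      ts->next_tick in the soft tick restart path):

      ```c
      typedef long long s64;

      struct ts_model {
          s64 next_tick;   /* cached deadline from the last reprogram */
      };

      /* IRQ exit: returns 1 if the clockevent device gets (re)written. */
      static int maybe_reprogram(struct ts_model *ts, s64 deadline)
      {
          if (ts->next_tick == deadline)
              return 0;    /* cache says the hardware is already set: skip */
          ts->next_tick = deadline;
          return 1;
      }

      /* A soft tick restart fired the closest tick; the fix invalidates
       * the cache so that re-requesting the same deadline still reaches
       * the hardware. */
      static void soft_tick_restart(struct ts_model *ts, int fixed)
      {
          if (fixed)
              ts->next_tick = 0;
      }
      ```

      Without the reset, a later request for the very deadline that just
      expired is wrongly skipped; with it, the clock is reprogrammed.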
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/1486485894-29173-1-git-send-email-fweisbec@gmail.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  12. 11 Jan, 2017 1 commit
    • nohz: Fix collision between tick and other hrtimers · 24b91e36
      Frederic Weisbecker authored
      When the tick is stopped and an interrupt occurs afterward, we check on
      that interrupt exit if the next tick needs to be rescheduled. If it
      doesn't need any update, we don't want to do anything.
      
      In order to check if the tick needs an update, we compare it against the
      clockevent device deadline. Now that's a problem because the clockevent
      device is at a lower level than the tick itself if it is implemented
      on top of hrtimer.
      
      Every hrtimer shares this clockevent device, so comparing the next
      tick deadline against the clockevent device deadline is wrong: the
      device may be programmed for another hrtimer whose deadline collides
      with the tick. As a result we may accidentally end up not
      reprogramming the tick.
      
      In a worst case scenario under full dynticks mode, the tick stops
      firing entirely, even though it is supposed to fire at least every
      second (1 Hz), leaving /proc/stat stalled:
      
            Task in a full dynticks CPU
            ----------------------------
      
            * hrtimer A is queued 2 seconds ahead
            * the tick is stopped, scheduled 1 second ahead
            * tick fires 1 second later
            * on tick exit, nohz schedules the tick 1 second ahead but sees
              that the clockevent device is already programmed to that
              deadline; fooled by hrtimer A, the tick isn't rescheduled.
            * hrtimer A is cancelled before its deadline
            * tick never fires again until an interrupt happens...
      
      In order to fix this, store the next tick deadline to the tick_sched
      local structure and reuse that value later to check whether we need to
      reprogram the clock after an interrupt.
      
      On the other hand, ts->sleep_length still wants to know about the next
      clock event and not just the tick, so we want to improve the related
      comment to avoid confusion.
      Reported-by: James Hartsock <hartsjc@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Acked-by: Rik van Riel <riel@redhat.com>
      Link: http://lkml.kernel.org/r/1483539124-5693-1-git-send-email-fweisbec@gmail.com
      Cc: stable@vger.kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  13. 25 Dec, 2016 1 commit
    • ktime: Get rid of the union · 2456e855
      Thomas Gleixner authored
      ktime is a union because the initial implementation stored the time
      in scalar nanoseconds on 64-bit machines and in an
      endianness-optimized timespec variant on 32-bit machines. The Y2038
      cleanup removed the timespec variant and switched everything to
      scalar nanoseconds. The union remained, but became completely
      pointless.
      
      Get rid of the union and just keep ktime_t as simple typedef of type s64.
      
      The conversion was done with coccinelle and some manual mopping up.
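
      The shape of the change, sketched outside the kernel (the helper is
      modeled on ktime_add_ns(); arithmetic becomes ordinary integer math
      once the union is gone):

      ```c
      /* Before this commit (the Y2038 cleanup had already removed the
       * timespec variant), ktime was a union wrapping one scalar member: */
      union ktime_old { long long tv64; };

      /* After the commit, the union is gone and ktime_t is a plain typedef: */
      typedef long long s64;
      typedef s64 ktime_t;                 /* scalar nanoseconds */

      /* Helpers no longer need to reach through .tv64: */
      static ktime_t ktime_add_ns(ktime_t kt, unsigned long long nsec)
      {
          return kt + (s64)nsec;           /* plain integer addition */
      }
      ```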
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
  14. 22 Nov, 2016 1 commit
  15. 13 Sep, 2016 1 commit
  16. 02 Sep, 2016 1 commit
    • tick/nohz: Fix softlockup on scheduler stalls in kvm guest · 08d07259
      Wanpeng Li authored
      tick_nohz_start_idle() has been prevented from being called when the
      idle tick can't be stopped since commit 1f3b0f82 ("tick/nohz:
      Optimize nohz idle enter"). As a result, after suspend/resume of the
      host machine, a full dynticks kvm guest will softlockup:
      
       NMI watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [swapper/0:0]
       Call Trace:
        default_idle+0x31/0x1a0
        arch_cpu_idle+0xf/0x20
        default_idle_call+0x2a/0x50
        cpu_startup_entry+0x39b/0x4d0
        rest_init+0x138/0x140
        ? rest_init+0x5/0x140
        start_kernel+0x4c1/0x4ce
        ? set_init_arg+0x55/0x55
        ? early_idt_handler_array+0x120/0x120
        x86_64_start_reservations+0x24/0x26
        x86_64_start_kernel+0x142/0x14f
      
      In addition, cat /proc/stat | grep cpu in guest or host:
      
      cpu  398 16 5049 15754 5490 0 1 46 0 0
      cpu0 206 5 450 0 0 0 1 14 0 0
      cpu1 81 0 3937 3149 1514 0 0 9 0 0
      cpu2 45 6 332 6052 2243 0 0 11 0 0
      cpu3 65 2 328 6552 1732 0 0 11 0 0
      
      The idle and iowait states are a suspicious 0 for cpu0 (housekeeping).
      
      The bug is present in both guest and host kernels, and both show the
      cpu0 idle and iowait accounting issue; however, the host kernel's
      suspend/resume path etc. touches the watchdog and thereby avoids the
      softlockup.
      
      - The watchdog will not be touched in the tick_nohz_stop_idle path
        (it needs to be touched, since the scheduler stall is expected) if
        the idle_active flag is not set.
      - The idle and iowait states will not be accounted on idle loop exit
        (resched or interrupt) if the idle start time and the idle_active
        flag are not set.
      
      This patch fixes it by reverting commit 1f3b0f82, since being unable
      to stop the idle tick doesn't mean the CPU can't be idle.
      
      Fixes: 1f3b0f82 ("tick/nohz: Optimize nohz idle enter")
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Cc: Sanjeev Yadav <sanjeev.yadav@spreadtrum.com>
      Cc: Gaurav Jindal <gaurav.jindal@spreadtrum.com>
      Cc: stable@vger.kernel.org
      Cc: kvm@vger.kernel.org
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Link: http://lkml.kernel.org/r/1472798303-4154-1-git-send-email-wanpeng.li@hotmail.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  17. 19 Jul, 2016 1 commit
  18. 07 Jul, 2016 2 commits
  19. 01 Jul, 2016 2 commits
  20. 05 May, 2016 1 commit
  21. 23 Apr, 2016 2 commits
    • sched/fair: Correctly handle nohz ticks CPU load accounting · 1f41906a
      Frederic Weisbecker authored
      Ticks can happen while the CPU is in dynticks-idle or dynticks-singletask
      mode. In fact "nohz" or "dynticks" only mean that we exit the periodic
      mode and we try to minimize the ticks as much as possible. The nohz
      subsystem uses a confusing terminology with the internal state
      "ts->tick_stopped" which is also available through its public interface
      with tick_nohz_tick_stopped(). This is a misnomer, as the tick is
      reduced on a best-effort basis rather than stopped. In the best case
      the tick can indeed be stopped entirely, but there is no guarantee of
      that. If a timer needs to fire one second later, a tick will fire
      while the CPU is in nohz mode, and this is a very common scenario.
      
      Now this confusion happens to be a problem with CPU load updates:
      cpu_load_update_active() doesn't handle nohz ticks correctly because it
      assumes that ticks are completely stopped in nohz mode and that
      cpu_load_update_active() can't be called in dynticks mode. When that
      happens, the whole previous tickless load is ignored and the function
      just records the load for the current tick, ignoring the potentially
      long idle period behind it.
      
      In order to solve this, we could account the current load for the
      previous nohz time but there is a risk that we account the load of a
      task that got freshly enqueued for the whole nohz period.
      
      So instead, let's record the dynticks load on nohz frame entry so we know
      what to record in case of nohz ticks, then use this record to account
      the tickless load on nohz ticks and nohz frame end.
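
      A toy numeric model of that idea (the halving decay below is an
      assumption standing in for the kernel's real per-index decay,
      decay_load_missed(); the function name is made up):

      ```c
      /* The load recorded on nohz entry (tickless_load) is charged for
       * the missed ticks, so a task enqueued just before the tick cannot
       * claim the whole idle stretch. */
      static unsigned long cpu_load_update_model(unsigned long old_load,
                                                 unsigned long tickless_load,
                                                 unsigned long cur_load,
                                                 int missed_ticks)
      {
          unsigned long load = old_load;

          for (int i = 0; i < missed_ticks; i++)
              load = (load + tickless_load) / 2; /* decay toward nohz-entry load */
          return (load + cur_load) / 2;          /* then account the current tick */
      }
      ```

      After a long idle stretch the remembered load decays toward the
      (typically low) nohz-entry load instead of jumping to whatever the
      current tick happens to observe.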
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1460555812-25375-3-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Gather CPU load functions under a more conventional namespace · cee1afce
      Frederic Weisbecker authored
      The CPU load update related functions have a weak naming convention
      currently, starting with update_cpu_load_*() which isn't ideal as
      "update" is a very generic concept.
      
      Since two of these functions are public already (and a third is to come)
      that's enough to introduce a more conventional naming scheme. So let's
      do the following rename instead:
      
      	update_cpu_load_*() -> cpu_load_update_*()
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1460555812-25375-2-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  22. 29 Mar, 2016 1 commit
  23. 22 Mar, 2016 1 commit
  24. 17 Mar, 2016 1 commit
  25. 02 Mar, 2016 6 commits
    • sched-clock: Migrate to use new tick dependency mask model · 4f49b90a
      Frederic Weisbecker authored
      Instead of checking sched_clock_stable from the nohz subsystem to verify
      its tick dependency, migrate it to the new mask in order to include
      it in the all-in-one check.
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • posix-cpu-timers: Migrate to use new tick dependency mask model · b7878300
      Frederic Weisbecker authored
      Instead of providing asynchronous checks for the nohz subsystem to verify
      posix cpu timers tick dependency, migrate the latter to the new mask.
      
      In order to keep track of the running timers and expose the tick
      dependency accordingly, we must probe the timers queuing and dequeuing
      on threads and process lists.
      
      Unfortunately it implies both task and signal level dependencies. We
      should be able to further optimize this and merge it all into the
      task level dependency, at the cost of a bit of complexity and maybe
      some overhead.
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • sched: Migrate sched to use new tick dependency mask model · 76d92ac3
      Frederic Weisbecker authored
      Instead of providing asynchronous checks for the nohz subsystem to verify
      sched tick dependency, migrate sched to the new mask.
      
      Every time a task is enqueued or dequeued, we evaluate the state of the
      tick dependency on top of the policy of the tasks in the runqueue, by
      order of priority:
      
      SCHED_DEADLINE: Need the tick in order to periodically check for runtime
      SCHED_FIFO    : Don't need the tick (no round-robin)
      SCHED_RR      : Need the tick if more than 1 task of the same priority
                      for round robin (simplified with checking if more than
                      one SCHED_RR task no matter what priority).
      SCHED_NORMAL  : Need the tick if more than 1 task for round-robin.
      
      We could optimize that further with one flag per sched policy on the tick
      dependency mask and perform only the checks relevant to the policy
      concerned by an enqueue/dequeue operation.
      
      Since the checks aren't based on the current task anymore, we could get
      rid of the task switch hook but it's still needed for posix cpu
      timers.
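
      The priority-ordered decision listed above can be sketched as follows
      (illustrative, not the kernel's actual sched_can_stop_tick() code):

      ```c
      #include <stdbool.h>

      /* Does the runqueue composition require a periodic tick? */
      static bool sched_tick_needed(int nr_deadline, int nr_rr, int nr_normal)
      {
          if (nr_deadline > 0)
              return true;    /* DEADLINE: periodic runtime enforcement */
          if (nr_rr > 1)
              return true;    /* RR: round-robin between peers (simplified) */
          if (nr_normal > 1)
              return true;    /* NORMAL: preemption between tasks */
          return false;       /* FIFO-only or a single task: tick can stop */
      }
      ```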
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • perf: Migrate perf to use new tick dependency mask model · 555e0c1e
      Frederic Weisbecker authored
      Instead of providing asynchronous checks for the nohz subsystem to verify
      perf event tick dependency, migrate perf to the new mask.
      
      Perf needs the tick for two situations:
      
      1) Freq events. We could set the tick dependency when those are
      installed on a CPU context. But setting a global dependency on top of
      the global freq events accounting is much easier. If people want that
      to be optimized, we can still refine it at the per-CPU tick
      dependency level. This patch doesn't change the current behaviour
      anyway.
      
      2) Throttled events: this is a per-cpu dependency.
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • nohz: Use enum code for tick stop failure tracing message · e6e6cc22
      Frederic Weisbecker authored
      It makes nohz tracing more lightweight, standard and easier to parse.
      
      Examples:
      
             user_loop-2904  [007] d..1   517.701126: tick_stop: success=1 dependency=NONE
             user_loop-2904  [007] dn.1   518.021181: tick_stop: success=0 dependency=SCHED
          posix_timers-6142  [007] d..1  1739.027400: tick_stop: success=0 dependency=POSIX_TIMER
             user_loop-5463  [007] dN.1  1185.931939: tick_stop: success=0 dependency=PERF_EVENTS
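
      The codes behind those messages come from the tick_dep_bits enum; a
      sketch of the symbolic mapping (the real tracepoint uses
      __print_symbolic(); tick_dep_name() below is a made-up helper):

      ```c
      #include <string.h>

      enum tick_dep_bits {
          TICK_DEP_BIT_POSIX_TIMER    = 0,
          TICK_DEP_BIT_PERF_EVENTS    = 1,
          TICK_DEP_BIT_SCHED          = 2,
          TICK_DEP_BIT_CLOCK_UNSTABLE = 3,
      };

      /* Map a dependency mask to the name shown in the trace output. */
      static const char *tick_dep_name(unsigned int mask)
      {
          if (mask == 0)
              return "NONE";
          if (mask & (1u << TICK_DEP_BIT_POSIX_TIMER))
              return "POSIX_TIMER";
          if (mask & (1u << TICK_DEP_BIT_PERF_EVENTS))
              return "PERF_EVENTS";
          if (mask & (1u << TICK_DEP_BIT_SCHED))
              return "SCHED";
          return "CLOCK_UNSTABLE";
      }
      ```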
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • nohz: New tick dependency mask · d027d45d
      Frederic Weisbecker authored
      The tick dependency is evaluated on every IRQ and context switch.
      This consists of a batch of checks which determine whether it is safe
      to stop the tick or not. These checks are often split into many
      details: posix cpu timers, scheduler, sched clock, perf events...
      each of which is made of smaller details: posix cpu timers involve
      checking process wide timers then thread wide timers, perf involves
      checking freq events then more per-CPU details.
      
      Checking this information asynchronously every time we update the
      full dynticks state brings avoidable overhead and a messy layout.
      
      Let's introduce instead tick dependency masks: one for system wide
      dependency (unstable sched clock, freq based perf events), one for CPU
      wide dependency (sched, throttling perf events), and task/signal level
      dependencies (posix cpu timers). The subsystems are responsible
      for setting and clearing their dependency through a set of APIs that will
      take care of concurrent dependency mask modifications and kick targets
      to restart the relevant CPU tick whenever needed.
      
      This new dependency engine stays beside the old one until all subsystems
      having a tick dependency are converted to it.
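
      The set/clear/check flow can be sketched with C11 atomics (a
      user-space model; the real kernel APIs are tick_dep_set(),
      tick_dep_clear() and their _cpu/_task variants, and this sketch just
      combines the three mask levels in one check):

      ```c
      #include <stdatomic.h>
      #include <stdbool.h>

      static atomic_uint tick_dep_system;   /* e.g. unstable sched clock */

      /* Subsystems set and clear their dependency bit concurrently. */
      static void tick_dep_set(atomic_uint *mask, unsigned int bit)
      {
          atomic_fetch_or(mask, 1u << bit);
      }

      static void tick_dep_clear(atomic_uint *mask, unsigned int bit)
      {
          atomic_fetch_and(mask, ~(1u << bit));
      }

      /* The tick can stop only if no level holds any dependency. */
      static bool can_stop_tick(atomic_uint *cpu_mask, atomic_uint *task_mask)
      {
          return !atomic_load(&tick_dep_system) &&
                 !atomic_load(cpu_mask) &&
                 !atomic_load(task_mask);
      }
      ```

      In the kernel, setting a bit additionally kicks the relevant CPU(s)
      so a running tickless CPU re-evaluates its tick; the model omits that.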
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  26. 13 Feb, 2016 1 commit
    • nohz: Implement wide kick on top of irq work · 8537bb95
      Frederic Weisbecker authored
      Implementing the nohz full wide kick on top of irq work simplifies it
      and allows the wide kick to be performed even when IRQs are disabled,
      without an asynchronous level in the middle.
      
      This comes at the cost of some more overhead on the perf and posix
      cpu timers slow paths, which is probably not very important for nohz
      full users.
      Requested-by: Peter Zijlstra <peterz@infradead.org>
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>