1. 01 Nov, 2018 2 commits
    • lockdep: ipipe: make the logic aware of interrupt pipelining · 393ce342
      Philippe Gerum authored
      The lockdep engine will check for the current interrupt state as part
      of the locking validation process, which must encompass:
      
      - the CPU interrupt state
      - the current pipeline domain
      - the virtual interrupt disable flag
      
      so that we can traverse the tracepoints from any context sanely and
      safely.
      
      In addition, trace_hardirqs_on_virt_caller() should be called by the
      arch-dependent code when tracking the interrupt state before returning
      to user-space after a kernel entry (exceptions, IRQ). This makes sure
      that the tracking logic only applies to the root domain, and considers
      the virtual disable flag exclusively.
      
      For instance, the kernel may be entered when interrupts are (only)
      virtually disabled for the root domain (i.e. stalled), and we should
      tell the IRQ tracing logic that IRQs are about to be enabled back only
      if the root domain is unstalled before leaving to user-space. In such
      a context, the state of the interrupt bit in the CPU would be
      irrelevant.
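
      A minimal sketch of the combined check, assuming the usual I-pipe
      helpers (hard_irqs_disabled(), ipipe_root_p and the root domain's
      virtual stall bit; exact names may differ across I-pipe versions,
      and the helper below is hypothetical, not the literal patch):

        static inline bool irqs_disabled_for_lockdep(void)
        {
                /* hard interrupt bit in the CPU */
                if (hard_irqs_disabled())
                        return true;
                /* outside the root domain, only the CPU bit matters */
                if (!ipipe_root_p)
                        return false;
                /* on the root domain, honor the virtual disable flag */
                return irqs_disabled(); /* tests the stall bit here */
        }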
    • ftrace: ipipe: enable tracing from the head domain · 1430346b
      Philippe Gerum authored
      Enabling ftrace for a co-kernel running in the head domain of a
      pipelined interrupt context means to:
      
      - make sure that ftrace's live kernel code patching still runs
        unpreempted by any head domain activity (so that the latter can't
        tread on invalid or half-baked changes in the .text section).
      
      - allow the co-kernel code running in the head domain to traverse
        ftrace's tracepoints safely.
      
      The changes introduced by this commit ensure this by fixing up some
      key critical sections so that interrupts are still disabled in the
      CPU, undoing the interrupt flag virtualization in those particular
      cases.
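
      The typical shape of such a fixup, assuming I-pipe's
      hard_local_irq_save()/hard_local_irq_restore() helpers (a sketch of
      the technique, not the literal diff):

        unsigned long flags;

        /*
         * Mask interrupts in the CPU itself, bypassing the virtual flag,
         * so that not even head domain IRQs can preempt the code patching.
         */
        flags = hard_local_irq_save();
        /* ... live-patch the kernel .text for ftrace ... */
        hard_local_irq_restore(flags);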
  2. 25 Dec, 2016 1 commit
  3. 09 Dec, 2016 1 commit
    • tracing/fgraph: Have wakeup and irqsoff tracers ignore graph functions too · 1a414428
      Steven Rostedt (Red Hat) authored
      Currently both the wakeup and irqsoff tracers do not handle set_graph_notrace
      well. The ftrace infrastructure will ignore the return paths of all
      matching functions, leaving them hanging without an end:
      
        # echo '*spin*' > set_graph_notrace
        # cat trace
        [...]
                _raw_spin_lock() {
                  preempt_count_add() {
                  do_raw_spin_lock() {
                update_rq_clock();
      
      Where the '*spin*' functions should have looked like this:
      
                _raw_spin_lock() {
                  preempt_count_add();
                  do_raw_spin_lock();
                }
                update_rq_clock();
      
      Instead, have the wakeup and irqsoff tracers ignore the functions that are
      set by set_graph_notrace, like the function_graph tracer does. Move
      the logic from the function_graph tracer into a header to allow the wakeup
      and irqsoff tracers to use it as well.
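
      The shared check then looks roughly like this in the graph entry
      callbacks (based on the ftrace_graph_ignore_func() helper this commit
      moves into the header; the surrounding body is elided):

        static int wakeup_graph_entry(struct ftrace_graph_ent *trace)
        {
                /*
                 * Skip functions listed in set_graph_notrace entirely, so
                 * neither their entry nor their return gets recorded.
                 */
                if (ftrace_graph_ignore_func(trace))
                        return 0;
                /* ... record the entry event ... */
                return 1;
        }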
      
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  4. 18 Mar, 2016 2 commits
    • tracing: Remove redundant reset per-CPU buff in irqsoff tracer · 741f3a69
      Dmitry Safonov authored
      There is no reason to do it twice: since commit b6f11df2
      ("trace: Call tracing_reset_online_cpus before tracer->init()"),
      the per-CPU buffers are already reset before the tracer->init() call.
      
      tracer->init() calls {irqs,preempt,preemptirqs}off_tracer_init(), which in
      turn calls __irqsoff_tracer_init(), resetting the per-CPU ring buffer a
      second time. It's a slow path, but the redundancy is worth removing anyway.
      
      Link: http://lkml.kernel.org/r/1445278226-16187-1-git-send-email-0x7f454c46@gmail.com
      Signed-off-by: Dmitry Safonov <0x7f454c46@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Have preempt(irqs)off trace preempt disabled functions · cb86e053
      Steven Rostedt (Red Hat) authored
      Joel Fernandes reported that the function tracing of preempt disabled
      sections was not being reported when running either the preemptirqsoff or
      preemptoff tracers. This was due to the fact that the function tracer
      callback for those tracers checked if irqs were disabled before tracing. But
      this fails when we want to trace preempt off locations as well.
      
      Joel explained that he wanted to see functions where interrupts are enabled
      but preemption was disabled. The output he expected:
      
         <...>-2265    1d.h1 3419us : preempt_count_sub <-irq_exit
         <...>-2265    1d..1 3419us : __do_softirq <-irq_exit
         <...>-2265    1d..1 3419us : msecs_to_jiffies <-__do_softirq
         <...>-2265    1d..1 3420us : irqtime_account_irq <-__do_softirq
         <...>-2265    1d..1 3420us : __local_bh_disable_ip <-__do_softirq
         <...>-2265    1..s1 3421us : run_timer_softirq <-__do_softirq
         <...>-2265    1..s1 3421us : hrtimer_run_pending <-run_timer_softirq
         <...>-2265    1..s1 3421us : _raw_spin_lock_irq <-run_timer_softirq
         <...>-2265    1d.s1 3422us : preempt_count_add <-_raw_spin_lock_irq
         <...>-2265    1d.s2 3422us : _raw_spin_unlock_irq <-run_timer_softirq
         <...>-2265    1..s2 3422us : preempt_count_sub <-_raw_spin_unlock_irq
         <...>-2265    1..s1 3423us : rcu_bh_qs <-__do_softirq
         <...>-2265    1d.s1 3423us : irqtime_account_irq <-__do_softirq
         <...>-2265    1d.s1 3423us : __local_bh_enable <-__do_softirq
      
      There's a comment saying that the irq disabled check is there because of a
      possible race where tracing_cpu may be set when the function is executed.
      But I don't remember that race. For now, I added a check that also skips
      recording when preemption is enabled, as there would be no race in that
      case. I need to re-investigate this, as I'm now thinking that tracing_cpu
      will always be correct. But there is no harm in keeping the check for
      now, except for the slight performance hit.
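
      The resulting test in the tracers' shared prologue is roughly the
      following (a sketch of func_prolog_dec() in kernel/trace/trace_irqsoff.c):

        local_save_flags(*flags);
        /*
         * Slight chance of a false positive on tracing_cpu (see above);
         * bail out only when both irqs and preemption are enabled.
         */
        if (!irqs_disabled_flags(*flags) && !preempt_count())
                return 0;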
      
      Link: http://lkml.kernel.org/r/1457770386-88717-1-git-send-email-agnel.joel@gmail.com
      
      Fixes: 5e6d2b9c "tracing: Use one prologue for the preempt irqs off tracer function tracers"
      Cc: stable@vger.kernel.org # 2.6.37+
      Reported-by: Joel Fernandes <agnel.joel@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  5. 02 Nov, 2015 1 commit
  6. 30 Sep, 2015 1 commit
    • tracing: Move trace_flags from global to a trace_array field · 983f938a
      Steven Rostedt (Red Hat) authored
      In preparation to make trace options per instance, the global trace_flags
      needs to be moved from being a global variable to a field within the trace
      instance trace_array structure.
      
      There's still more work to do, as there are some functions that use
      trace_flags without being passed a way to reach the current trace_array.
      For those, the global_trace is used directly (from trace.c). This includes
      setting and clearing the trace_flags. This means that when a new instance
      is created, it just inherits the trace_flags of the global_trace and will
      not be able to modify them. Depending on which functions have access to
      the trace_array, the flags of an instance may not affect the parts of its
      trace where the global_trace is used. These will be fixed in future
      changes.
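
      In practice the conversion mostly turns global flag tests into
      per-instance ones, roughly like this (do_something() is just a
      placeholder):

        /* before: one global bitmask for every trace */
        if (trace_flags & TRACE_ITER_PRINTK)
                do_something();

        /* after: each trace_array instance carries its own copy */
        if (tr->trace_flags & TRACE_ITER_PRINTK)
                do_something();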
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  7. 29 Sep, 2015 3 commits
  8. 22 Jan, 2015 1 commit
  9. 21 Apr, 2014 3 commits
  10. 20 Feb, 2014 2 commits
  11. 14 Feb, 2014 1 commit
  12. 02 Jul, 2013 1 commit
    • tracing: Use flag buffer_disabled for irqsoff tracer · 10246fa3
      Steven Rostedt (Red Hat) authored
      If the ring buffer is disabled and the irqsoff tracer records a trace it
      will clear out its buffer and lose the data it had previously recorded.
      
      Currently there's a callback when writing to the tracing_on file, but if
      tracing is disabled via the function tracer trigger, it will not inform
      the irqsoff tracer to stop recording.
      
      Using the "mirror" flag (buffer_disabled) in the trace_array, which keeps
      track of the status of the trace_array's buffer, gives the irqsoff
      tracer a fast way to know whether it should record a new trace.
      The flag may lag slightly behind the real state of the buffer, but it
      should not affect the trace much. It's more important for the irqsoff
      tracer to be fast.
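
      A sketch of the fast-path test, assuming the tracer_tracing_is_on()
      helper (which just reads the buffer_disabled mirror flag):

        /* cheap check: consult the mirror flag, not the ring buffer */
        if (unlikely(!tracer_tracing_is_on(tr)))
                return; /* buffer is off; don't record a new trace */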
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  13. 15 Mar, 2013 5 commits
    • tracing: Add function-trace option to disable function tracing of latency tracers · 328df475
      Steven Rostedt (Red Hat) authored
      Currently, the only way to stop the latency tracers from doing function
      tracing is to fully disable the function tracer from the proc file
      system:
      
        echo 0 > /proc/sys/kernel/ftrace_enabled
      
      This is a big hammer approach as it disables function tracing for
      all users. This includes kprobes, perf, stack tracer, etc.
      
      Instead, create a function-trace option that the latency tracers can
      check to determine whether they should enable function tracing.
      This option can be set or cleared even while the tracer is active,
      and the tracers will disable or enable function tracing depending
      on how the option was set.
      
      Instead of using the proc file, disable latency function tracing with
      
        echo 0 > /debug/tracing/options/function-trace
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Clark Williams <williams@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Consolidate max_tr into main trace_array structure · 12883efb
      Steven Rostedt (Red Hat) authored
      Currently, the way the latency tracers and snapshot feature works
      is to have a separate trace_array called "max_tr" that holds the
      snapshot buffer. For latency tracers, this snapshot buffer is used
      to swap the running buffer with this buffer to save the current max
      latency.
      
      The only items needed for the max_tr are really just a copy of the buffer
      itself, the per_cpu data pointers, the time_start timestamp that records
      when the max latency was triggered, and the cpu that the max latency
      was triggered on. All other fields in trace_array are unused by the
      max_tr, making the max_tr mostly bloat.
      
      This change removes the max_tr completely, and adds a new structure
      called trace_buffer, that holds the buffer pointer, the per_cpu data
      pointers, the time_start timestamp, and the cpu where the latency occurred.
      
      The trace_array now has two trace_buffers, one for the normal trace and
      one for the max trace or snapshot. By doing this, not only do we remove
      the bloat from the max_tr, but trace instances can now use their own
      snapshot feature, instead of only the top-level global_trace having the
      snapshot feature and latency tracers for itself.
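
      The new structure is roughly the following (see kernel/trace/trace.h
      of that era for the exact layout):

        struct trace_buffer {
                struct trace_array              *tr;
                struct ring_buffer              *buffer;
                struct trace_array_cpu __percpu *data;
                cycle_t         time_start;  /* when the max latency hit */
                int             cpu;         /* cpu it was triggered on */
        };

      The trace_array then embeds two of these: one for the normal trace
      and one serving as the max/snapshot buffer.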
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Replace the static global per_cpu arrays with allocated per_cpu · a7603ff4
      Steven Rostedt authored
      The global and max-tr currently use static per_cpu arrays for the CPU data
      descriptors. But in order to support newly allocated trace_arrays, these
      need to be dynamically allocated per_cpu arrays. Instead of using the
      static arrays, switch the global and max-tr to use allocated data.
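
      The gist of the conversion, roughly (simplified from trace.c):

        /* before: static DEFINE_PER_CPU(struct trace_array_cpu, ...); */
        global_trace.data = alloc_percpu(struct trace_array_cpu);
        if (!global_trace.data)
                return -ENOMEM; /* allocation can now fail; handle it */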
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Encapsulate global_trace and remove dependencies on global vars · 2b6080f2
      Steven Rostedt authored
      The global_trace variable in kernel/trace/trace.c has been kept 'static' and
      local to that file so that it would not be used too much outside of that
      file. This has paid off, even though there were lots of changes to make
      the trace_array structure more generic (not depending on global_trace).
      
      Removal of a lot of direct usages of global_trace is needed to be able to
      create more trace_arrays such that we can add multiple buffers.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Prevent buffer overwrite disabled for latency tracers · 613f04a0
      Steven Rostedt (Red Hat) authored
      The latency tracers require the buffers to be in overwrite mode,
      otherwise they get screwed up. Force the buffers to stay in overwrite
      mode when latency tracers are enabled.
      
      Added a flag_changed() method to the tracer structure to allow
      the tracers to see what flags are being changed, and also be able
      to prevent the change from happening.
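
      The hook added to the tracer structure is roughly:

        struct tracer {
                /* ... */
                /*
                 * Called before a trace option flips; lets latency tracers
                 * refuse changes (e.g. clearing overwrite) while active.
                 */
                int (*flag_changed)(struct tracer *tracer, u32 mask, int set);
                /* ... */
        };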
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  14. 06 Dec, 2012 1 commit
  15. 31 Oct, 2012 2 commits
  16. 31 Jul, 2012 1 commit
    • ftrace: Add default recursion protection for function tracing · 4740974a
      Steven Rostedt authored
      As more users of the function tracer utility are being added, they do
      not always add the necessary recursion protection. To protect from
      function recursion due to tracing, if the callback ftrace_ops does not
      explicitly specify that it protects against recursion (by setting
      the FTRACE_OPS_FL_RECURSION_SAFE flag), the list operation will be
      called by the mcount trampoline, which adds recursion protection.
      
      If the flag is set, then the function will be called directly with no
      extra protection.
      
      Note, the list operation is called if more than one function callback
      is registered, or if the arch does not support all of the function
      tracer features.
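
      A callback that handles its own recursion would therefore register
      roughly like this (my_tracer_func and my_ops are illustrative names;
      the callback signature is the two-argument form of that era):

        static void my_tracer_func(unsigned long ip, unsigned long parent_ip)
        {
                /* callback body; must guard against recursion itself */
        }

        static struct ftrace_ops my_ops = {
                .func  = my_tracer_func,
                /* we handle recursion ourselves; skip the list wrapper */
                .flags = FTRACE_OPS_FL_RECURSION_SAFE,
        };

        /* ... register_ftrace_function(&my_ops); ... */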
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  17. 19 Jul, 2012 2 commits
  18. 07 Nov, 2011 1 commit
    • tracing/latency: Fix header output for latency tracers · 7e9a49ef
      Jiri Olsa authored
      In case the graph tracer (CONFIG_FUNCTION_GRAPH_TRACER) or even the
      function tracer (CONFIG_FUNCTION_TRACER) is not set, the latency tracers
      do not display a proper latency header.
      
      The involved/fixed latency tracers are:
              wakeup_rt
              wakeup
              preemptirqsoff
              preemptoff
              irqsoff
      
      The patch adds proper handling of tracer configuration options for latency
      tracers, and displays the correct header info accordingly.
      
      * The current output (for wakeup tracer) with both graph and function
        tracers disabled is:
      
        # tracer: wakeup
        #
          <idle>-0       0d.h5    1us+:      0:120:R   + [000]     7:  0:R watchdog/0
          <idle>-0       0d.h5    3us+: ttwu_do_activate.clone.1 <-try_to_wake_up
          ...
      
      * The fixed output is:
      
        # tracer: wakeup
        #
        # wakeup latency trace v1.1.5 on 3.1.0-tip+
        # --------------------------------------------------------------------
        # latency: 55 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
        #    -----------------
        #    | task: migration/0-6 (uid:0 nice:0 policy:1 rt_prio:99)
        #    -----------------
        #
        #                  _------=> CPU#
        #                 / _-----=> irqs-off
        #                | / _----=> need-resched
        #                || / _---=> hardirq/softirq
        #                ||| / _--=> preempt-depth
        #                |||| /     delay
        #  cmd     pid   ||||| time  |   caller
        #     \   /      |||||  \    |   /
             cat-1129    0d..4    1us :   1129:120:R   + [000]     6:  0:R migration/0
             cat-1129    0d..4    2us+: ttwu_do_activate.clone.1 <-try_to_wake_up
      
      * The current output (for wakeup tracer) with only function
        tracer enabled is:
      
        # tracer: wakeup
        #
             cat-1140    0d..4    1us+:   1140:120:R   + [000]     6:  0:R migration/0
             cat-1140    0d..4    2us : ttwu_do_activate.clone.1 <-try_to_wake_up
      
      * The fixed output is:
        # tracer: wakeup
        #
        # wakeup latency trace v1.1.5 on 3.1.0-tip+
        # --------------------------------------------------------------------
        # latency: 207 us, #109/109, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
        #    -----------------
        #    | task: watchdog/1-12 (uid:0 nice:0 policy:1 rt_prio:99)
        #    -----------------
        #
        #                  _------=> CPU#
        #                 / _-----=> irqs-off
        #                | / _----=> need-resched
        #                || / _---=> hardirq/softirq
        #                ||| / _--=> preempt-depth
        #                |||| /     delay
        #  cmd     pid   ||||| time  |   caller
        #     \   /      |||||  \    |   /
          <idle>-0       1d.h5    1us+:      0:120:R   + [001]    12:  0:R watchdog/1
          <idle>-0       1d.h5    3us : ttwu_do_activate.clone.1 <-try_to_wake_up
      
      Link: http://lkml.kernel.org/r/20111107150849.GE1807@m.brq.redhat.com
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  19. 22 Sep, 2011 1 commit
  20. 13 Sep, 2011 1 commit
  21. 15 Jun, 2011 1 commit
    • tracing, function_graph: Remove dependency of abstime and duration fields on latency · 321e68b0
      Jiri Olsa authored
      The display of absolute time and duration fields is based on the
      latency field. This was added during the irqsoff/wakeup tracers
      graph support changes.
      
      This causes confusion about which fields will be displayed for the
      function_graph tracer itself. So I'm removing this dependency, and
      adding absolute time and duration fields to the preemptirqsoff,
      preemptoff, irqsoff and wakeup tracers.
      
      With following commands:
      	# echo function_graph > ./current_tracer
      	# cat trace
      
      This is what it looked like before:
      # tracer: function_graph
      #
      #     TIME        CPU  DURATION                  FUNCTION CALLS
      #      |          |     |   |                     |   |   |   |
       0)   0.068 us    |          } /* page_add_file_rmap */
       0)               |          _raw_spin_unlock() {
      ...
      
      This is what it looks like now:
      # tracer: function_graph
      #
      # CPU  DURATION                  FUNCTION CALLS
      # |     |   |                     |   |   |   |
       0)   0.068 us    |                } /* add_preempt_count */
       0)   0.993 us    |              } /* vfsmount_lock_local_lock */
      ...
      
      For the preemptirqsoff, preemptoff, irqsoff and wakeup tracers,
      this is what it looked like before:
      SNIP
      #                       _-----=> irqs-off
      #                      / _----=> need-resched
      #                     | / _---=> hardirq/softirq
      #                     || / _--=> preempt-depth
      #                     ||| / _-=> lock-depth
      #                     |||| /
      # CPU  TASK/PID       |||||  DURATION                  FUNCTION CALLS
      # |     |    |        |||||   |   |                     |   |   |   |
       1)    <idle>-0    |  d..1  0.000 us    |  acpi_idle_enter_simple();
      ...
      
      This is what it looks like now:
      SNIP
      #
      #                                       _-----=> irqs-off
      #                                      / _----=> need-resched
      #                                     | / _---=> hardirq/softirq
      #                                     || / _--=> preempt-depth
      #                                     ||| /
      #     TIME        CPU  TASK/PID       ||||  DURATION                  FUNCTION CALLS
      #      |          |     |    |        ||||   |   |                     |   |   |   |
         19.847735 |   1)    <idle>-0    |  d..1  0.000 us    |  acpi_idle_enter_simple();
      ...
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Link: http://lkml.kernel.org/r/1307113131-10045-2-git-send-email-jolsa@redhat.com
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  22. 18 May, 2011 1 commit
    • ftrace: Implement separate user function filtering · b848914c
      Steven Rostedt authored
      ftrace_ops that are registered to trace functions can now be
      agnostic to each other with respect to which functions they trace.
      Each ops has its own hash of the functions it wants to trace
      and a hash of those it does not want to trace. An empty hash for
      the functions to trace denotes that all functions should be
      traced that are not in the notrace hash.
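
      The per-ops decision then amounts to something like this (a sketch of
      the ftrace_ops_test() logic; ops_wants_ip() is a made-up name, and the
      real code also guards against NULL hashes):

        static bool ops_wants_ip(struct ftrace_ops *ops, unsigned long ip)
        {
                /* explicitly excluded? */
                if (ftrace_lookup_ip(ops->notrace_hash, ip))
                        return false;
                /* an empty filter hash means "trace every function" */
                if (!ops->filter_hash->count)
                        return true;
                return ftrace_lookup_ip(ops->filter_hash, ip) != NULL;
        }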
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  23. 31 Mar, 2011 1 commit
  24. 20 Jan, 2011 1 commit
    • lockdep: Move early boot local IRQ enable/disable status to init/main.c · 2ce802f6
      Tejun Heo authored
      During early boot, local IRQs are disabled until the IRQ subsystem is
      properly initialized.  During this time, no one should enable
      local IRQs, and some operations which usually are not allowed with
      IRQs disabled, e.g. operations which might sleep or require
      communication with other processors, are allowed.
      
      lockdep tracked this with early_boot_irqs_off/on() callbacks.
      As other subsystems need this information too, move it to
      init/main.c and make it generally available.  While at it,
      toggle the boolean to early_boot_irqs_disabled instead of
      enabled so that it can be initialized with %false and %true
      indicates the exceptional condition.
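
      After the move, the flag is just a global boolean flipped in
      start_kernel(), roughly:

        /* init/main.c */
        bool early_boot_irqs_disabled __read_mostly;

        asmlinkage void __init start_kernel(void)
        {
                /* ... */
                early_boot_irqs_disabled = true;
                /* ... early setup runs with IRQs hard-disabled ... */
                early_boot_irqs_disabled = false;
                local_irq_enable();
                /* ... */
        }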
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <20110120110635.GB6036@htj.dyndns.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  25. 18 Oct, 2010 2 commits
  26. 21 Jul, 2010 1 commit
    • tracing: Shrink max latency ringbuffer if unnecessary · ef710e10
      KOSAKI Motohiro authored
      Documentation/trace/ftrace.txt says
      
        buffer_size_kb:
      
              This sets or displays the number of kilobytes each CPU
              buffer can hold. The tracer buffers are the same size
              for each CPU. The displayed number is the size of the
              CPU buffer and not total size of all buffers. The
              trace buffers are allocated in pages (blocks of memory
              that the kernel uses for allocation, usually 4 KB in size).
              If the last page allocated has room for more bytes
              than requested, the rest of the page will be used,
              making the actual allocation bigger than requested.
              ( Note, the size may not be a multiple of the page size
                due to buffer management overhead. )
      
              This can only be updated when the current_tracer
              is set to "nop".
      
      But this is incorrect: the current total memory consumption is
      'buffer_size_kb x CPUs x 2'.
      
      Why is there a factor-of-two difference? Because ftrace implicitly
      allocates a buffer of the same size for max latency tracing too.
      
      That makes for a sad result when an admin wants to use a large buffer
      for full logging and detailed analysis. For example, on a machine with
      24 CPUs and buffer_size_kb set to 200MB, the system consumes ~10GB of
      memory (200MB x 24 x 2). Wasting ~5GB of memory on an unused max
      latency buffer is usually unacceptable.
      
      Fortunately, almost all users don't use the max latency feature, and
      the max latency buffer can be disabled easily.
      
      This patch shrinks the max latency buffer when it is unnecessary.
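
      A simplified sketch of the idea: keep the max latency buffer
      full-sized only while the current tracer actually uses it (the
      use_max_tr flag is what the patch adds to struct tracer):

        /* in tracing_set_tracer(), simplified */
        int ret;

        if (current_trace && current_trace->use_max_tr && !t->use_max_tr) {
                /* leaving a latency tracer: shrink max_tr to a minimum */
                ring_buffer_resize(max_tr.buffer, 1);
                max_tr.entries = 1;
        }
        if (t->use_max_tr) {
                /* entering a latency tracer: grow it back to match */
                ret = ring_buffer_resize(max_tr.buffer, global_trace.entries);
                if (ret < 0)
                        goto out;
                max_tr.entries = global_trace.entries;
        }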
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      LKML-Reference: <20100701104554.DA2D.A69D9226@jp.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>