Commit 1dc0fffc authored by Peter Zijlstra, committed by Ingo Molnar

sched/core: Robustify preemption leak checks

When we warn about a preempt_count leak, reset the preempt_count to
the known good value so that the problem does not ripple forward.

This is most important on x86, which has a per-CPU preempt_count that is
not saved/restored (after this series). So if you schedule with an
invalid (!2*PREEMPT_DISABLE_OFFSET) preempt_count, the next task is
messed up too.

Enforcing this invariant limits the borkage to just the one task.
Signed-off-by: Peter Zijlstra (Intel) <>
Reviewed-by: Frederic Weisbecker <>
Reviewed-by: Thomas Gleixner <>
Reviewed-by: Steven Rostedt <>
Cc: Linus Torvalds <>
Cc: Mike Galbraith <>
Cc: Peter Zijlstra <>
Signed-off-by: Ingo Molnar <>
parent 3d8f74dd
kernel/exit.c
@@ -706,10 +706,12 @@ void do_exit(long code)
-	if (unlikely(in_atomic()))
+	if (unlikely(in_atomic())) {
 		pr_info("note: %s[%d] exited with preempt_count %d\n",
 			current->comm, task_pid_nr(current),
 			preempt_count());
+		preempt_count_set(PREEMPT_ENABLED);
+	}
 
 	/* sync mm's RSS info before statistics gathering */
 	if (tsk->mm)

kernel/sched/core.c
@@ -2968,8 +2968,10 @@ static inline void schedule_debug(struct task_struct *prev)
-	if (unlikely(in_atomic_preempt_off()))
+	if (unlikely(in_atomic_preempt_off())) {
 		__schedule_bug(prev);
+		preempt_count_set(PREEMPT_DISABLED);
+	}
 
 	profile_hit(SCHED_PROFILING, __builtin_return_address(0));