1. 16 Jun, 2016 1 commit
    • locking/atomic: Implement atomic{,64,_long}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}() · 28aa2bda
      Peter Zijlstra authored
      Now that all the architectures have implemented support for these new
      atomic primitives, add on the generic infrastructure to expose and use it.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
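      The distinguishing property of these new primitives is that the _fetch_ variants return the value the variable held *before* the operation, unlike the older *_return() variants, which return the new value. A minimal userspace sketch of that semantic, using C11 stdatomic as a stand-in for the kernel API (function names here are illustrative, not the kernel's):

      ```c
      #include <stdatomic.h>

      /* Userspace analogue of the kernel's atomic_fetch_add():
       * returns the value *before* the addition, unlike
       * atomic_add_return(), which returns the new value. */
      static int fetch_add_demo(atomic_int *v, int i)
      {
              return atomic_fetch_add(v, i);  /* old value */
      }

      /* The kernel's fetch_andnot(i, v) clears the bits in i;
       * C11 has no andnot, so express it as fetch_and(~i). */
      static int fetch_andnot_demo(atomic_int *v, int i)
      {
              return atomic_fetch_and(v, ~i); /* old value */
      }
      ```

      The _relaxed/_acquire/_release suffixes in the commit title select memory-ordering variants of the same operations, mirroring C11's explicit memory orders.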
  2. 27 Jul, 2015 1 commit
  3. 14 Aug, 2014 1 commit
  4. 20 Dec, 2012 1 commit
    • lib: atomic64: Initialize locks statically to fix early users · fcc16882
      Stephen Boyd authored
      The atomic64 library uses a handful of static spin locks to implement
      atomic 64-bit operations on architectures without support for atomic
      64-bit instructions.
      Unfortunately, the spinlocks are initialized in a pure initcall and that
      is too late for the vfs namespace code which wants to use atomic64
      operations before the initcall is run.
      This became a problem as of commit 8823c079: "vfs: Add setns support
      for the mount namespace".
      This leads to BUG messages such as:
        BUG: spinlock bad magic on CPU#0, swapper/0/0
         lock: atomic64_lock+0x240/0x400, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
      coming out early on during boot when spinlock debugging is enabled.
      Fix this by initializing the spinlocks statically at compile time.
      Reported-and-tested-by: Vaibhav Bedia <vaibhav.bedia@ti.com>
      Tested-by: Tony Lindgren <tony@atomide.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
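      The shape of the fix can be sketched in userspace with pthread mutexes standing in for kernel spinlocks: rather than initializing each lock in an initcall that runs too late for early callers, give every array slot a compile-time initializer so the locks are valid from the first use. PTHREAD_MUTEX_INITIALIZER plays the role of the kernel's static spinlock initializer; the names and lock count here are illustrative:

      ```c
      #include <pthread.h>

      #define NR_LOCKS 16

      /* Statically initialized at compile time -- no initcall
       * needed, so even the earliest caller finds a valid lock.
       * The [a ... b] range initializer is a GCC extension the
       * kernel itself relies on. */
      static pthread_mutex_t atomic64_lock[NR_LOCKS] = {
              [0 ... NR_LOCKS - 1] = PTHREAD_MUTEX_INITIALIZER,
      };

      static long long locked_add(long long *v, long long a, unsigned int idx)
      {
              long long ret;

              pthread_mutex_lock(&atomic64_lock[idx % NR_LOCKS]);
              ret = (*v += a);
              pthread_mutex_unlock(&atomic64_lock[idx % NR_LOCKS]);
              return ret;
      }
      ```

      With spinlock debugging enabled, the kernel checks a magic field in each lock, which is why an uninitialized lock trips the "spinlock bad magic" BUG shown above.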
  5. 07 Mar, 2012 1 commit
  6. 14 Sep, 2011 1 commit
  7. 13 Sep, 2011 1 commit
    • locking, lib/atomic64: Annotate atomic64_lock::lock as raw · f59ca058
      Shan Hai authored
      The spinlock protected atomic64 operations must be irq safe as they
      are used in hard interrupt context and cannot be preempted on -rt:
       NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
        LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
       Call Trace:
        [eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
        [eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
        [eb459c40] [c03d2a14] atomic64_read+0x48/0x84
        [eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
        [eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
        [eb459d30] [c0014170] ret_from_except_full+0x0/0x4c
      So annotate it.
      In mainline this change documents the low level nature of
      the lock - otherwise there's no functional difference. Lockdep
      and Sparse checking will work as usual.
      Signed-off-by: Shan Hai <haishan.bai@gmail.com>
      Reviewed-by: Yong Zhang <yong.zhang0@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
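      The invariant being protected can be modeled in a toy userspace sketch (this is a conceptual model, not kernel code): a lock taken from hard-irq context must be acquired with interrupts disabled, otherwise an interrupt handler could spin forever on a lock its own CPU already holds. On -rt, ordinary spinlock_t becomes a sleeping lock, which is why the raw annotation matters there.

      ```c
      #include <stdbool.h>

      /* Toy model of raw_spin_lock_irqsave()/irqrestore():
       * the flags value saves the previous "interrupt" state so
       * nesting restores it correctly. All state is illustrative. */
      static bool irqs_enabled = true;
      static bool lock_held;

      static unsigned long raw_lock_irqsave(void)
      {
              unsigned long flags = irqs_enabled; /* save irq state */
              irqs_enabled = false;               /* local_irq_disable() */
              lock_held = true;                   /* arch_spin_lock() */
              return flags;
      }

      static void raw_unlock_irqrestore(unsigned long flags)
      {
              lock_held = false;                  /* arch_spin_unlock() */
              irqs_enabled = flags;               /* restore irq state */
      }
      ```

      In the actual commit the only source change is the lock's type annotation; as the message notes, mainline behavior is unchanged and only -rt treats raw locks differently.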
  8. 26 Jul, 2011 1 commit
  9. 01 Mar, 2010 1 commit
  10. 30 Jul, 2009 1 commit
  11. 15 Jun, 2009 1 commit
    • lib: Provide generic atomic64_t implementation · 09d4e0ed
      Paul Mackerras authored
      Many processor architectures have no 64-bit atomic instructions, but
      we need atomic64_t in order to support the perf_counter subsystem.
      This adds an implementation of 64-bit atomic operations using hashed
      spinlocks to provide atomicity.  For each atomic operation, the address
      of the atomic64_t variable is hashed to an index into an array of 16
      spinlocks.  That spinlock is taken (with interrupts disabled) around the
      operation, which can then be coded non-atomically within the lock.
      On UP, all the spinlock manipulation goes away and we simply disable
      interrupts around each operation.  In fact gcc eliminates the whole
      atomic64_lock variable as well.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
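      The hashed-spinlock scheme described above can be sketched in userspace, with pthread mutexes standing in for the kernel's irq-disabling spinlocks. The variable's address is hashed into a 16-entry lock array, the matching lock is taken, and the 64-bit operation is then performed non-atomically under it. The hash (shifting off alignment bits) and names are illustrative:

      ```c
      #include <pthread.h>
      #include <stdint.h>

      #define NR_LOCKS 16

      typedef struct { long long counter; } atomic64_t;

      /* One lock per hash bucket; contention on any single
       * atomic64_t only serializes variables sharing its bucket. */
      static pthread_mutex_t atomic64_lock[NR_LOCKS] = {
              [0 ... NR_LOCKS - 1] = PTHREAD_MUTEX_INITIALIZER,
      };

      static pthread_mutex_t *lock_for(const atomic64_t *v)
      {
              /* Hash the address; drop low bits that alignment
               * forces to zero so buckets are used evenly. */
              uintptr_t addr = (uintptr_t)v;

              return &atomic64_lock[(addr >> 3) % NR_LOCKS];
      }

      static long long atomic64_add_return(long long a, atomic64_t *v)
      {
              pthread_mutex_t *lock = lock_for(v);
              long long ret;

              pthread_mutex_lock(lock);
              ret = (v->counter += a);  /* non-atomic, safe under lock */
              pthread_mutex_unlock(lock);
              return ret;
      }
      ```

      On UP kernels, as the commit notes, the lock array compiles away entirely and each operation reduces to disabling interrupts around the plain 64-bit read-modify-write.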