Commit c38d185d authored by Bart Van Assche, committed by Jens Axboe

blk-mq: Fix a race between bt_clear_tag() and bt_get()

What we need is the following two guarantees:
* Any thread that observes the effect of the test_and_set_bit() by
  __bt_get_word() also observes the preceding addition of 'current'
  to the appropriate wait list. This is guaranteed by the semantics
  of the spin_unlock() operation performed by prepare_to_wait().
  Hence the conversion of test_and_set_bit_lock() into
  test_and_set_bit().
* The wait lists are examined by bt_clear() after the tag bit has
  been cleared. clear_bit_unlock() guarantees that any thread that
  observes that the bit has been cleared also observes the store
  operations preceding clear_bit_unlock(). However,
  clear_bit_unlock() does not prevent the wait lists from being examined
  before the tag bit is cleared. Hence the addition of a memory
  barrier between clear_bit() and the wait list examination.
Signed-off-by: Bart Van Assche <>
Cc: Christoph Hellwig <>
Cc: Robert Elliott <>
Cc: Ming Lei <>
Cc: Alexander Gordeev <>
Cc: <> # v3.13+
Signed-off-by: Jens Axboe <>
parent 9e98e9d7
@@ -158,7 +158,7 @@ static int __bt_get_word(struct blk_align_bitmap *bm, unsigned int last_tag)
 			return -1;
 		last_tag = tag + 1;
-	} while (test_and_set_bit_lock(tag, &bm->word));
+	} while (test_and_set_bit(tag, &bm->word));
 
 	return tag;
@@ -357,11 +357,10 @@ static void bt_clear_tag(struct blk_mq_bitmap_tags *bt, unsigned int tag)
 	struct bt_wait_state *bs;
 	int wait_cnt;
 
-	/*
-	 * The unlock memory barrier need to order access to req in free
-	 * path and clearing tag bit
-	 */
-	clear_bit_unlock(TAG_TO_BIT(bt, tag), &bt->map[index].word);
+	clear_bit(TAG_TO_BIT(bt, tag), &bt->map[index].word);
+
+	/* Ensure that the wait list checks occur after clear_bit(). */
+	smp_mb__after_atomic();
+
 	bs = bt_wake_ptr(bt);
 	if (!bs)