Commit 89e8a244 authored by David Rientjes, committed by Linus Torvalds

cpusets: avoid looping when storing to mems_allowed if one node remains set

{get,put}_mems_allowed() exist so that general kernel code may locklessly
access a task's set of allowable nodes without the chance that a
concurrent write will cause the nodemask to be empty on configurations
where MAX_NUMNODES > BITS_PER_LONG.

This could incur a significant delay, however, especially in low memory
conditions because the page allocator is blocking and reclaim requires
get_mems_allowed() itself.  It is not atypical to see writes to
cpuset.mems take over 2 seconds to complete, for example.  In low memory
conditions, this is problematic because it's one of the most important
times to change cpuset.mems in the first place!

The only way a task's set of allowable nodes may change is through
cpusets, by writing to cpuset.mems or by attaching the task to a
different cpuset.  This is done by setting all the new nodes, ensuring
that generic code is not reading the nodemask with get_mems_allowed() at
the same time, and then clearing all the old nodes.  This prevents the
possibility that a reader will see an empty nodemask at the same time
the writer is storing a new nodemask.

If at least one node remains unchanged, though, it's possible to simply
set all the new nodes and then clear all the old nodes without waiting
for readers, because the shared node keeps the mask non-empty throughout
the update.  Changing a task's nodemask is protected by cgroup_mutex, so
two threads can never be changing the same task's nodemask at the same
time; the nodemask is therefore guaranteed to be fully stored before
another thread changes it and tests whether a node remains set.
Signed-off-by: David Rientjes <>
Cc: Miao Xie <>
Cc: KOSAKI Motohiro <>
Cc: Nick Piggin <>
Cc: Paul Menage <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 61600f57
@@ -949,6 +949,8 @@ static void cpuset_migrate_mm(struct mm_struct *mm, const nodemask_t *from,
 static void cpuset_change_task_nodemask(struct task_struct *tsk,
 					nodemask_t *newmems)
 {
+	bool masks_disjoint = !nodes_intersects(*newmems, tsk->mems_allowed);
+
 repeat:
 	/*
 	 * Allow tasks that have access to memory reserves because they have
@@ -963,7 +965,6 @@ repeat:
 	nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);
 	mpol_rebind_task(tsk, newmems, MPOL_REBIND_STEP1);
 
-
 	/*
 	 * ensure checking ->mems_allowed_change_disable after setting all new
 	 * allowed nodes.
@@ -980,9 +981,11 @@ repeat:
 	/*
 	 * Allocation of memory is very fast, we needn't sleep when waiting
-	 * for the read-side.
+	 * for the read-side. No wait is necessary, however, if at least one
+	 * node remains unchanged.
 	 */
-	while (ACCESS_ONCE(tsk->mems_allowed_change_disable)) {
+	while (masks_disjoint &&
+	       ACCESS_ONCE(tsk->mems_allowed_change_disable)) {
 		if (!task_curr(tsk))