From eb63011f17f2cff2fbf786d7f3fcd6dae63ec4bc Mon Sep 17 00:00:00 2001
From: Sultan Alsawaf
Date: Tue, 2 Jun 2020 23:03:50 -0700
Subject: [PATCH] Revert "mutex: Add a delay into the SPIN_ON_OWNER wait loop."

This reverts commit 1e5a5b5e00e9706cd48e3c87de1607fcaa5214d2.

This doesn't make sense for a few reasons. Firstly, upstream uses this
mutex code and it works fine on all arches; why should arm be any
different?

Secondly, once the mutex owner starts to spin on wait_lock, preemption
is disabled and the owner will be in an actively-running state. The
optimistic mutex spinning occurs when the lock owner is actively running
on a CPU, and while the optimistic spinning takes place, no attempt to
acquire wait_lock is made by the new waiter. Therefore, it is guaranteed
that new mutex waiters which optimistically spin will not contend the
wait_lock spin lock that the owner needs to acquire in order to make
forward progress.

Another potential source of wait_lock contention can come from tasks
that call mutex_trylock(), but this isn't actually problematic (and if
it were, it would affect the MUTEX_SPIN_ON_OWNER=n use-case too). This
won't introduce significant contention on wait_lock because the trylock
code exits before attempting to lock wait_lock, specifically when the
atomic mutex counter indicates that the mutex is already locked. So in
reality, the amount of wait_lock contention that can come from
mutex_trylock() amounts to only one task. And once it finishes,
wait_lock will no longer be contended and the previous mutex owner can
proceed with cleanup.
Signed-off-by: Sultan Alsawaf
---
 kernel/locking/mutex.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 57e28af96c5b..858a07590e39 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -28,7 +28,6 @@
 #include <linux/interrupt.h>
 #include <linux/debug_locks.h>
 #include <linux/osq_lock.h>
-#include <linux/delay.h>
 
 #ifdef CONFIG_DEBUG_MUTEXES
 # include "mutex-debug.h"
@@ -555,17 +554,6 @@ mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
 		 * values at the cost of a few extra spins.
 		 */
 		cpu_relax();
-
-		/*
-		 * On arm systems, we must slow down the waiter's repeated
-		 * aquisition of spin_mlock and atomics on the lock count, or
-		 * we risk starving out a thread attempting to release the
-		 * mutex. The mutex slowpath release must take spin lock
-		 * wait_lock. This spin lock can share a monitor with the
-		 * other waiter atomics in the mutex data structure, so must
-		 * take care to rate limit the waiters.
-		 */
-		udelay(1);
 	}
 
 	if (!waiter)