FROMGIT: sched: Add __releases annotations to affine_move_task()

affine_move_task() assumes task_rq_lock() has been called and it does
an implicit task_rq_unlock() before returning. Add the appropriate
__releases annotations to make this clear.

A typo in a comment is also fixed.

Change-Id: Ib02de3bb81d21489dc6e16a89a8cc0b0c64ccce8
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20220922180041.1768141-2-longman@redhat.com
BUG: 254447891
(cherry picked from commit 9722bb2bbcb7ff70c9f7fdbe71a39b5f1f6aa428
 https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/core)
Signed-off-by: Ashay Jaiswal <quic_ashayj@quicinc.com>

@@ -2672,6 +2672,8 @@ void release_user_cpus_ptr(struct task_struct *p)
  */
 static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flags *rf,
 			    int dest_cpu, unsigned int flags)
+	__releases(rq->lock)
+	__releases(p->pi_lock)
 {
 	struct set_affinity_pending my_pending = { }, *pending = NULL;
 	bool stop_pending, complete = false;
@@ -2984,7 +2986,7 @@ err_unlock:
 /*
  * Restrict the CPU affinity of task @p so that it is a subset of
- * task_cpu_possible_mask() and point @p->user_cpu_ptr to a copy of the
+ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
  * old affinity mask. If the resulting mask is empty, we warn and walk
  * up the cpuset hierarchy until we find a suitable mask.
  */