Commit Graph

810295 Commits

Author SHA1 Message Date
Tashfin Shakeer Rhythm
70f98fe87a Revert "msm: kgsl: Avoid busy waiting for fenced GMU writes"
This regresses 3DMark scores by a small margin because the register write is
performed outside the spinlock, which opens up a potential race condition.
Hence, revert this.

This reverts commit 0cbd93ad24ea0eaf839ed151149a1180ecf23a57.

Reported-by: Kazuki H <kazukih0205@gmail.com>
Suggested-by: Sultan Alsawaf <sultan@kerneltoast.com>
Cc: EmanuelCN <emanuelghub@gmail.com>
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
2023-08-10 12:25:45 -05:00
Tashfin Shakeer Rhythm
bce599d91e Revert "thread_info: Order thread flag tests with respect to flag mutations"
This implies that there's an unseen ordering dependency between test_bit()
and set_bit() which doesn't actually exist, and it just adds memory barriers
for no reason. Therefore, revert this.

This reverts commit fec203b7346886e7ec96af8e697ef15b66b304f7.

Suggested-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
2023-08-10 12:25:45 -05:00
Keith Busch
72cef9deb7 dmapool: create/destroy cleanup
Set the 'empty' bool directly from the result of the function that
determines its value instead of adding additional logic.

Link: https://lkml.kernel.org/r/20230126215125.4069751-13-kbusch@meta.com
Fixes: 2d55c16c0c54 ("dmapool: create/destroy cleanup")
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-10 12:25:44 -05:00
Keith Busch
51873439d0 dmapool: link blocks across pages
The allocated dmapool pages are never freed for the lifetime of the pool. 
There is no need for the two level list+stack lookup for finding a free
block since nothing is ever removed from the list.  Just use a simple
stack, reducing time complexity to constant.

The implementation inserts the stack linking elements and the dma handle
of the block within itself when freed.  This means the smallest possible
dmapool block is increased to at most 16 bytes to accommodate these
fields, but there are no existing users requesting a dma pool smaller
than that anyway.

Removing the list has a significant change in performance. Using the
kernel's micro-benchmarking self test:

Before:

  # modprobe dmapool_test
  dmapool test: size:16   blocks:8192   time:57282
  dmapool test: size:64   blocks:8192   time:172562
  dmapool test: size:256  blocks:8192   time:789247
  dmapool test: size:1024 blocks:2048   time:371823
  dmapool test: size:4096 blocks:1024   time:362237

After:

  # modprobe dmapool_test
  dmapool test: size:16   blocks:8192   time:24997
  dmapool test: size:64   blocks:8192   time:26584
  dmapool test: size:256  blocks:8192   time:33542
  dmapool test: size:1024 blocks:2048   time:9022
  dmapool test: size:4096 blocks:1024   time:6045

The module test allocates quite a few blocks that may not accurately
represent how these pools are used in real life.  For a more macro-level
benchmark, running fio high-depth + high-batched on nvme, this patch shows
submission and completion latency reduced by ~100usec each, a 1% IOPS
improvement, and the time perf record attributes to dma_pool_alloc/free
cut in half.

[kbusch@kernel.org: push new blocks in ascending order]
  Link: https://lkml.kernel.org/r/20230221165400.1595247-1-kbusch@meta.com
Link: https://lkml.kernel.org/r/20230126215125.4069751-12-kbusch@meta.com
Fixes: 2d55c16c0c54 ("dmapool: create/destroy cleanup")
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-10 12:25:44 -05:00
Keith Busch
095d7f7892 dmapool: don't memset on free twice
If debug is enabled, dmapool will poison the range, so no need to clear it
to 0 immediately before writing over it.

Link: https://lkml.kernel.org/r/20230126215125.4069751-11-kbusch@meta.com
Fixes: 2d55c16c0c54 ("dmapool: create/destroy cleanup")
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-10 12:25:43 -05:00
Keith Busch
29b408b0e2 dmapool: simplify freeing
The actions for busy and not busy are mostly the same, so combine these
and remove the unnecessary function.  Also, the pool is about to be freed
so there's no need to poison the page data since we only check for poison
on alloc, which can't be done on a freed pool.

Link: https://lkml.kernel.org/r/20230126215125.4069751-10-kbusch@meta.com
Fixes: 2d55c16c0c54 ("dmapool: create/destroy cleanup")
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-10 12:25:43 -05:00
Keith Busch
6592138e85 dmapool: consolidate page initialization
Various fields of the dma pool are set in different places. Move it all
to one function.

Link: https://lkml.kernel.org/r/20230126215125.4069751-9-kbusch@meta.com
Fixes: 2d55c16c0c54 ("dmapool: create/destroy cleanup")
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-10 12:25:43 -05:00
Keith Busch
fc73fb4b15 dmapool: rearrange page alloc failure handling
Handle the error in a condition so the good path can be in the normal
flow.

Link: https://lkml.kernel.org/r/20230126215125.4069751-8-kbusch@meta.com
Fixes: 2d55c16c0c54 ("dmapool: create/destroy cleanup")
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-10 12:25:42 -05:00
Keith Busch
8dc470e1e3 dmapool: move debug code to own functions
Clean up the normal path by moving the debug code outside it.

Link: https://lkml.kernel.org/r/20230126215125.4069751-7-kbusch@meta.com
Fixes: 2d55c16c0c54 ("dmapool: create/destroy cleanup")
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-10 12:25:42 -05:00
Tony Battersby
13e320e5db dmapool: speedup DMAPOOL_DEBUG with init_on_alloc
Avoid double-memset of the same allocated memory in dma_pool_alloc() when
both DMAPOOL_DEBUG is enabled and init_on_alloc=1.

Link: https://lkml.kernel.org/r/20230126215125.4069751-6-kbusch@meta.com
Fixes: 2d55c16c0c54 ("dmapool: create/destroy cleanup")
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-10 12:25:42 -05:00
Tony Battersby
d32c1f28cd dmapool: cleanup integer types
To represent the size of a single allocation, dmapool currently uses
'unsigned int' in some places and 'size_t' in other places.  Standardize
on 'unsigned int' to reduce overhead, but use 'size_t' when counting all
the blocks in the entire pool.

Link: https://lkml.kernel.org/r/20230126215125.4069751-5-kbusch@meta.com
Fixes: 2d55c16c0c54 ("dmapool: create/destroy cleanup")
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-10 12:25:41 -05:00
Tony Battersby
d29289c60a dmapool: use sysfs_emit() instead of scnprintf()
Use sysfs_emit instead of scnprintf, snprintf or sprintf.

Link: https://lkml.kernel.org/r/20230126215125.4069751-4-kbusch@meta.com
Fixes: 2d55c16c0c54 ("dmapool: create/destroy cleanup")
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-10 12:25:41 -05:00
Tony Battersby
5460efe7fa dmapool: remove checks for dev == NULL
dmapool originally tried to support pools without a device because
dma_alloc_coherent() supports allocations without a device.  But nobody
ended up using dma pools without a device, and trying to do so will result
in an oops.  So remove the checks for pool->dev == NULL since they are
unneeded bloat.

[kbusch@kernel.org: add check for null dev on create]
Link: https://lkml.kernel.org/r/20230126215125.4069751-3-kbusch@meta.com
Fixes: 2d55c16c0c54 ("dmapool: create/destroy cleanup")
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-10 12:25:40 -05:00
Christian König
e94fd207e1 mm/dmapool.c: revert "make dma pool to use kmalloc_node"
This reverts commit 2618c60b8b ("dma: make dma pool to use
kmalloc_node").

While working myself into the dmapool code I've found this little odd
kmalloc_node().

What basically happens here is that we allocate the housekeeping
structure on the numa node where the device is attached to.  Since the
device is never doing DMA to or from that memory this doesn't seem to
make sense at all.

So while this doesn't seem to cause much harm it's probably cleaner to
revert the change for consistency.

Link: https://lkml.kernel.org/r/20211221110724.97664-1-christian.koenig@amd.com
Signed-off-by: Christian König <christian.koenig@amd.com>
Cc: Yinghai Lu <yinghai.lu@sun.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-08-10 12:25:40 -05:00
Zhiyuan Dai
3a03fc7190 mm/dmapool: switch from strlcpy to strscpy
strlcpy is marked as deprecated in Documentation/process/deprecated.rst,
and there is no functional difference when the caller expects truncation
(when not checking the return value). strscpy is relatively better as it
also avoids scanning the whole source string.

Link: https://lkml.kernel.org/r/1613962050-14188-1-git-send-email-daizhiyuan@phytium.com.cn
Signed-off-by: Zhiyuan Dai <daizhiyuan@phytium.com.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-08-10 12:25:40 -05:00
Andy Shevchenko
efecf99e1b mm/dmapool.c: replace hard coded function name with __func__
No need to hard code function name when __func__ can be used.

While here, replace specifiers for special types like dma_addr_t.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Link: http://lkml.kernel.org/r/20200814135055.24898-2-andriy.shevchenko@linux.intel.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-08-10 12:25:39 -05:00
Andy Shevchenko
c86864119f mm/dmapool.c: replace open-coded list_for_each_entry_safe()
There is a place in the code where open-coded version of
list_for_each_entry_safe() is used.  Replace that with the standard macro.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Link: http://lkml.kernel.org/r/20200814135055.24898-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-08-10 12:25:39 -05:00
Mateusz Nosek
e0c1a936d6 mm/dmapool.c: micro-optimisation remove unnecessary branch
Previously there was a check if 'size' is aligned to 'align' and if not
then it was aligned.  This check was expensive as both branch and division
are expensive instructions in most architectures.  'ALIGN' function on
already aligned value will not change it, and as it is cheaper than branch
+ division it can be executed all the time and branch can be removed.

Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20200320173317.26408-1-mateusznosek0@gmail.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-08-10 12:25:39 -05:00
Sultan Alsawaf
54d40c3eaa cpu: Silence log spam when a CPU is brought up
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: celtare21 <celtare21@gmail.com>
Signed-off-by: Carlos Jimenez (JavaShin-X) <javashin1986@gmail.com>
2023-08-10 12:25:38 -05:00
balgxmr
6bb04ddd4a arm64: dts: Change remaining user_space to step_wise thermal governor
- Fixup of my own derp in the following commit: 89018d5
2023-08-10 12:25:38 -05:00
Kazuki Hashimoto
9eceef628c disp: msm: dsi: Don't busy wait
That's a LONG wait. Don't busy wait there to save power.

Signed-off-by: Kazuki Hashimoto <kazukih@tuta.io>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2023-08-10 12:25:38 -05:00
balgxmr
8c4da81354 drm/msm: Kill doze logging
Signed-off-by: balgxmr <jose@pixelos.net>
2023-08-10 12:25:37 -05:00
balgxmr
259b7704cb drivers/power: Kill more logging on release
Signed-off-by: balgxmr <jose@pixelos.net>
2023-08-10 12:25:37 -05:00
Cyber Knight
b0bd76595a irqchip: Reduce verbosity of logging
Silences:
[41385.516876] GICv3: CPU1: found redistributor 100 region 0:0x0000000017a80000
[41385.519545] GICv3: CPU2: found redistributor 200 region 0:0x0000000017aa0000
[41385.522043] GICv3: CPU3: found redistributor 300 region 0:0x0000000017ac0000
[41385.525185] GICv3: CPU4: found redistributor 400 region 0:0x0000000017ae0000
[41385.527049] GICv3: CPU5: found redistributor 500 region 0:0x0000000017b00000
[41385.528764] GICv3: CPU6: found redistributor 600 region 0:0x0000000017b20000
[41385.530522] GICv3: CPU7: found redistributor 700 region 0:0x0000000017b40000

Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2023-08-10 12:25:36 -05:00
balgxmr
f6095d7861 drivers: msm: vidc: Silence opening/closed video instance
Signed-off-by: balgxmr <jose@pixelos.net>
2023-08-10 12:25:36 -05:00
balgxmr
5302959e3b drivers: fts: fts_521: Silence logspam
- This spams way too much

Signed-off-by: balgxmr <jose@pixelos.net>
2023-08-10 12:25:36 -05:00
Juhyung Park
143d88bf42 drm: msm: always assume the panel is OLED
Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
2023-08-10 12:25:35 -05:00
Rohan Sethi
01288ce0b8 msm: kgsl: Set vm_pgoff of vma to zero
kgsl gets the entry id or the gpu address through vm_pgoff. It is used
during mmap and never needed again. But vm_pgoff has a different meaning
in other parts of the kernel, so leaving it set opens the way for wrong
assumptions when a page is later unmapped from the vma. Set it to zero.

Change-Id: Ia81c64a77456caf168c6bd23bdf5755c3f3ee31c
Signed-off-by: Puranam V G Tejaswi <pvgtejas@codeaurora.org>
Signed-off-by: Rohan Sethi <rohsethi@codeaurora.org>
2023-08-10 12:25:35 -05:00
Rohan Sethi
780eac373e msm: kgsl: Skip VM page insert operations for IO-Coherent cached buffers
IO-coherent cached buffers can be reclaimed. For a reclaimed buffer, an
mmap() request can result in a NULL pointer dereference in
vm_insert_page(). So, skip the VM page insert operations for IO-coherent
cached buffers in mmap(). These buffers can instead be handled at CPU page
fault time in the kgsl vmfault handler.

Change-Id: I6cf29af2d37de736df27f745fc9bceb01cb097e6
Signed-off-by: Hareesh Gundu <quic_hareeshg@quicinc.com>
Signed-off-by: Rohan Sethi <quic_rohsethi@quicinc.com>
2023-08-10 12:25:34 -05:00
Akhil P Oommen
cf29640e7d msm: kgsl: Trigger timers during inline submission
Currently, we don't trigger dispatcher timer while doing an inline
submission. This breaks the long ib detection. So, trigger dispatcher
timer during an inline submission.

Change-Id: I36397cea3f6ea4393789cd4b54a2258e189f4b13
Signed-off-by: Akhil P Oommen <quic_akhilpo@quicinc.com>

[@RealJohnGalt] update idle timer usage for 4.14
2023-08-10 12:25:34 -05:00
Puranam V G Tejaswi
5efb6f9434 msm: kgsl: Use kthread instead of workqueue for event work
Currently a workqueue is being used to process the event work. In certain
scenarios like when most of CPU cores are busy, there can be a significant
delay between the actual timestamp retire event and when the work is
processed by the events workqueue as workqueues cannot have RT priority.
Hence use kthread instead of workqueue for event work.

Change-Id: Ib1ec7fa1ec3a133d03104c9a029dcc4c06180609
Signed-off-by: Puranam V G Tejaswi <quic_pvgtejas@quicinc.com>

[@RealJohnGalt] adapted to 4.14
2023-08-10 12:25:34 -05:00
kondors1995
0b8f856e2b Revert "Revert "msm: kgsl: Use event workqueue for event work instead of RT Kthread worker""
This reverts commit 97b35a555c.
2023-08-10 12:25:33 -05:00
kondors1995
02509d2eb9 Revert "BACKPORT: msm: kgsl: Fix possible NULL pointer dereference"
This reverts commit cda9ba307f.
2023-08-10 12:25:33 -05:00
Jordan Crouse
7435ab0924 BACKPORT: msm: kgsl: Make kgsl_mem_entry_get() return a pointer to the entry
Add some sugar to make kgsl_mem_entry_get() return the pointer it
just got which makes the code cleaner.

Change-Id: Ic0dedbadd3bb755a9ad1906eab04aeb02d5da53b
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
[ Tashar02: Backport to k4.19 ]
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
2023-08-10 12:25:33 -05:00
Lynus Vaz
2c2d74762d BACKPORT: msm: kgsl: Allocate memory for sync callbacks using GFP_KERNEL
The sync fence callbacks are allocated in kernel context. Use the
GFP_KERNEL flag instead of GFP_ATOMIC to permit the allocation to
sleep if required.

Change-Id: I2099229cb1fb734e87e4bff0ddc38a2ced2c03ea
Signed-off-by: Lynus Vaz <quic_lvaz@quicinc.com>
[ Tashar02: Backport to k4.19 ]
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
2023-08-10 12:25:32 -05:00
John Galt
4d152a92f9 schedutil: checkout to 934c3511b5b53 2023-08-10 12:25:32 -05:00
Quentin Perret
fe7b1e0f76 BACKPORT: FROMGIT: sched: Skip priority checks with SCHED_FLAG_KEEP_PARAMS
SCHED_FLAG_KEEP_PARAMS can be passed to sched_setattr to specify that
the call must not touch scheduling parameters (nice or priority). This
is particularly handy for uclamp when used in conjunction with
SCHED_FLAG_KEEP_POLICY, as that allows issuing a syscall that only
impacts uclamp values.

However, sched_setattr always checks whether the priorities and nice
values passed in sched_attr are valid first, even if those never get
used down the line. This is useless at best since userspace can
trivially bypass this check to set the uclamp values by specifying low
priorities. However, it is cumbersome to do so as there is no single
expression of this that skips both RT and CFS checks at once. As such,
userspace needs to query the task policy first with e.g. sched_getattr
and then set sched_attr.sched_priority accordingly. This is racy and
slower than a single call.

As the priority and nice checks are useless when SCHED_FLAG_KEEP_PARAMS
is specified, simply inherit them in this case to match the policy
inheritance of SCHED_FLAG_KEEP_POLICY.

Reported-by: Wei Wang <wvw@google.com>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Reviewed-by: Qais Yousef <qais.yousef@arm.com>
Link: https://lore.kernel.org/r/20210805102154.590709-3-qperret@google.com

Bug: 190237315
(cherry picked from commit f4dddf9
 git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core)
Signed-off-by: Quentin Perret <qperret@google.com>
Change-Id: Ifdbc9262b82c7f5c0d34952ece07770a53e3f6a5
[panchajanya1999: adapt for k4.14]
Signed-off-by: Panchajanya1999 <panchajanya@azure-dev.live>
Signed-off-by: mcdofrenchfreis <xyzevan@androidist.net>
2023-08-10 12:25:31 -05:00
Patrick Bellasi
8be6bdb00f cpufreq: schedutil: Fix iowait boost reset
A more energy efficient update of the IO wait boosting mechanism has
been introduced in:

   commit a5a0809 ("cpufreq: schedutil: Make iowait boost more energy
efficient")

where the boost value is expected to be:

 - doubled at each successive wakeup from IO,
   starting from the minimum frequency supported by a CPU

 - reset when a CPU has not been updated for more than one tick,
   by either disabling the IO wait boost or resetting its value to the
   minimum frequency if this new update requires an IO boost.

This approach is supposed to "ignore" boosting for sporadic wakeups from
IO, while still getting the frequency boosted to the maximum to benefit
long sequence of wakeup from IO operations.

However, these assumptions are not always satisfied.
For example, when an IO boosted CPU enters idle for more than one tick
and then wakes up after an IO wait, since in sugov_set_iowait_boost() we
first check the IOWAIT flag, we keep doubling the iowait boost instead
of restarting from the minimum frequency value.

This misbehavior could happen mainly on non-shared frequency domains,
thus defeating the energy efficiency optimization, but it can also
happen on shared frequency domain systems.

Let's fix this issue in sugov_set_iowait_boost() by:
 - first checking the IO wait boost reset conditions,
   to eventually reset the boost value
 - then applying the correct IO boost value,
   if required by the caller

Fixes: a5a0809 ("cpufreq: schedutil: Make iowait boost more energy efficient")
Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Pranav Vashi <neobuddy89@gmail.com>
2023-08-10 12:25:31 -05:00
Vincent Donnefort
e37a5886fe sched/pelt: Fix task util_est update filtering
[ Upstream commit b89997aa88f0b07d8a6414c908af75062103b8c9 ]

Being called for each dequeue, util_est reduces the number of its updates
by filtering out when the EWMA signal is different from the task util_avg
by less than 1%. It is a problem for a sudden util_avg ramp-up. Due to the
decay from a previous high util_avg, EWMA might now be close enough to
the new util_avg. No update would then happen while it would leave
ue.enqueued with an out-of-date value.

Taking both util_est members, EWMA and enqueued, into consideration for
the filtering ensures an up-to-date value for both.

This is for now an issue only for the trace probe that might return the
stale value. Functional-wise, it isn't a problem, as the value is always
accessed through max(enqueued, ewma).

This problem has been observed using LISA's UtilConvergence:test_means on
the sd845c board.

No regression observed with Hackbench on sd845c and Perf-bench sched pipe
on hikey/hikey960.

Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/20210225165820.1377125-1-vincent.donnefort@arm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-08-10 12:25:31 -05:00
Satya Durga Srinivasu Prabhala
182fc3909a sched/fair: honor uclamp restrictions in fbt()
While calculating the utilization of a CPU during task placement in fbt(),
the current code doesn't take uclamp into account, which would lead to
selection of an incorrect CPU for the task when uclamp restrictions
are in place for the task.

Change-Id: I8371affe3b37733d222e5c57953e53f91fc19a53
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
2023-08-10 12:25:30 -05:00
Shaleen Agrawal
c9510f56c8 sched: Enable latency sensitive feature
Make use of the existing need_idle feature to incorporate upstream latency
sensitive tasks.

Change-Id: Ie1513187d024b93c8b619d9e0a35d84195488696
Signed-off-by: Shaleen Agrawal <shalagra@codeaurora.org>
2023-08-10 12:25:30 -05:00
John Galt
f0c098c6b4 sched/fair: further improve migration margins 2023-08-10 12:25:30 -05:00
John Galt
a8e5d7d98f sched/fair: reset migration margins for balance
Reapply: Even properly calculated from table there are significant
efficiency regressions.
2023-08-10 12:25:29 -05:00
kondors1995
525bb7715c Revert "sched/fair: Revert Google's capacity margin hacks"
This reverts commit e7c82d8c4d.
2023-08-10 12:25:29 -05:00
Danny Lin
8c9a878419 ARM: dts: sm8150: Enable freq-energy-model
Subsequent to 3936d91 ("sched/energy: Checkout to branch android-4.14 of https://android.googlesource.com/kernel/common")

The freq-energy-model property needs to be set when a freq-power
energy model is in use.

Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2023-08-10 12:25:28 -05:00
Adam W. Willis
f792575a16 sched/energy: Checkout to branch android-4.14 of https://android.googlesource.com/kernel/common
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
2023-08-10 12:25:28 -05:00
balgxmr
df3944e170 touchscreen: fts_521: Remove input_report_key on fod status check
* This fixes an issue with scrolling getting stuck specifically in the fod area.

Test: Open Chrome, load a long website, scroll through the
      fod area and check that scrolling doesn't get stuck.
2023-08-10 12:25:28 -05:00
Cyber Knight
a0055705f0 arm64/kernel: Reduce verbosity of logging
Silences:
[41738.969700] CPU1: shutdown
[41738.971909] CPU2: shutdown
[41738.973703] CPU3: shutdown
[41738.974936] CPU4: shutdown
[41738.976110] CPU5: shutdown
[41738.977360] CPU6: shutdown
[41738.979050] CPU7: shutdown
[41738.982293] CPU1: Booted secondary processor [51df805e]
[41738.985375] CPU2: Booted secondary processor [51df805e]
[41738.988476] CPU3: Booted secondary processor [51df805e]
[41738.991854] CPU4: Booted secondary processor [51df804e]
[41738.993877] CPU5: Booted secondary processor [51df804e]
[41738.995698] CPU6: Booted secondary processor [51df804e]
[41738.997624] CPU7: Booted secondary processor [51df804e]

Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2023-08-10 12:25:18 -05:00
Pzqqt
2bc524a19b Revert "f2fs: avoid to check PG_error flag"
[Suggestion from Tashar02](375754065c (commitcomment-114679849))

This reverts commit 375754065cdb21304bec51240d2fcb03246d4c79.
2023-08-09 18:23:15 -05:00
Pzqqt
88463890ee f2fs: Backport from 6.4-rc1-5.10
https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-stable.git/tag/?h=6.4-rc1-5.10

This is an empty commit, just for flagging.
2023-08-09 18:23:15 -05:00