830400 Commits

Author SHA1 Message Date
wulan17
4f82e79ddb arch: arm64: configs: Enable ThinLTO
Signed-off-by: wulan17 <wulan17@nusantararom.org>
2025-12-17 09:33:40 +01:00
Kevin Park
07dbeefe8f GPUCORE-36665 Fix OOB issue on KBASE_IOCTL_CS_TILER_HEAP_INIT
The 'group_id' member of the KBASE_IOCTL_CS_TILER_HEAP_INIT ioctl struct
must be validated before the CSF tiler heap is initialized.
Otherwise an out-of-bounds access to the memory group pools array for the
CSF tiler heap could occur, potentially leading to a kernel panic.

TI2: 933204 (DDK Precommit)
TI2: 933199 (BASE_CSF_TEST)

Bug: 259061568
Test: verified fix using poc
Provenance: https://code.ipdelivery.arm.com/c/GPU/mali-ddk/+/4766
Change-Id: I209a3d5152a34c278c17383e4aa9080aa9735822
(cherry picked from commit 55b44117111bf6a7e324301cbbf4f89669fa04c3)
2025-12-14 18:28:47 +00:00
Akash Goel
379ddcf6d2 GPUCORE-36251: Make HeapContext GPU VA to be GPU cacheline aligned
A customer reported an issue where an unexpected GPU page fault happened
because the Tiler tried to access a chunk that had already been freed by
Userspace. The issue was root-caused to cacheline sharing between the
HeapContexts of 2 Tiler heaps of the same Kbase context.

The page fault occurred for an Application that made use of more than 1
GPU queue group, where one of the groups, along with its corresponding
Tiler heap instance, is created and destroyed multiple times over the
lifetime of the Application.

Kbase sub-allocates memory for a HeapContext from a 4KB page that is
mapped as cached on GPU side, and the memory for HeapContext is zeroed
on allocation through an uncached CPU mapping.
Since the size of HeapContext is 32 bytes, 2 HeapContexts (corresponding
to 2 Tiler heaps of the same context) can end up sharing the same GPU
cacheline (which is 64 bytes in size).

The GPU page fault occurred because the FW found a non-NULL, stale value
for the 'free_list_head' pointer in the HeapContext even though the Heap
was newly created, so the FW assumed a free chunk was available, passed
its address to the Tiler, and didn't raise an OoM event for the Host.
The stale value was seen because the zeroing of the new HeapContext's
memory on allocation was lost when the cacheline was evicted from the
L2 cache.
The cacheline became dirty when FW had updated the contents of older
HeapContext (sharing the cacheline with new HeapContext) on CSG suspend
operation.

This commit makes the GPU VA of the HeapContext GPU cacheline aligned to
avoid cacheline sharing. The alignment suffices, and no explicit cache
flush is needed when the HeapContext is freed, as the whole GPU cache is
flushed anyway on Odin & Turse GPUs when the initial chunks are freed
just before the HeapContext is freed.

Provenance: https://code.ipdelivery.arm.com/c/GPU/mali-ddk/+/4724/
Test: Boot to home
Bug: 259523790
Change-Id: Ie9e8bffcadbd2ca7705dcd44f9be76754e28138d
Signed-off-by: Jeremy Kemp <jeremykemp@google.com>
2025-12-14 18:26:14 +00:00
Akash Goel
d89fdf55b1 GPUCORE-35070: Order write to JOB_IRQ_CLEAR reg with read from iface mem
There was an issue on the Turse platform where Kbase sometimes misses
the CSG idle event notification from FW. This happens when FW sends
back-to-back notifications for the same CSG for events like SYNC_UPDATE
and IDLE but Kbase gets a single IRQ and observes only the first event.

The issue was root-caused to a missing barrier on the Kbase side between
the write to the JOB_IRQ_CLEAR register and the read from interface
memory, i.e. CSG_ACK. Without the barrier there is no ordering
guarantee: the write to JOB_IRQ_CLEAR can take effect after the read
from interface memory.
The ordering is needed because FW & Kbase write to the JOB_IRQ_RAWSTAT
& JOB_IRQ_CLEAR registers without any synchronization.

This commit adds a dmb(osh) barrier after the write to JOB_IRQ_CLEAR to
resolve the issue.

TI2: 896668 (PLAN-12467r490 TGT CS Nightly, few CSF scenarios)
Bug: 243913790
Test: SST ~4600 hours
Provenance: https://code.ipdelivery.arm.com/c/GPU/mali-ddk/+/3841
Change-Id: I094a3b55c8ae28e8126057cdaf81990f62cd388e
(cherry picked from commit 220d89fd264b11a5b68290c3ca5a8c232e1d45db)
2025-12-14 18:16:52 +00:00
Paul E. McKenney
1cea7dab2a rcu: Report error for bad rcu_nocbs= parameter values
This commit prints a console message when cpulist_parse() reports a
bad list of CPUs, and sets all CPUs' bits in that case.  The reason for
setting all CPUs' bits is that this is the safe(r) choice for real-time
workloads, which would normally be the ones using the rcu_nocbs= kernel
boot parameter.  Either way, later RCU console log messages list the
actual set of CPUs whose RCU callbacks will be offloaded.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: celtare21 <celtare21@gmail.com>
Signed-off-by: Fiqri Ardyansyah <fiqri15072019@gmail.com>
Signed-off-by: Edwiin Kusuma Jaya <kutemeikito0905@gmail.com>
2025-12-14 18:01:37 +00:00
Paul E. McKenney
cdb948743a rcu: Allow rcu_nocbs= to specify all CPUs
Currently, the rcu_nocbs= kernel boot parameter requires that a specific
list of CPUs be specified, and has no way to say "all of them".
As noted by user RavFX in a comment to Phoronix topic 1002538, this
is an inconvenient side effect of the removal of the RCU_NOCB_CPU_ALL
Kconfig option.  This commit therefore enables the rcu_nocbs= kernel boot
parameter to be given the string "all", as in "rcu_nocbs=all" to specify
that all CPUs on the system are to have their RCU callbacks offloaded.

Another approach would be to make cpulist_parse() check for "all", but
there are uses of cpulist_parse() that do other checking, which could
conflict with an "all".  This commit therefore focuses on the specific
use of cpulist_parse() in rcu_nocb_setup().

Just a note to other people who would like changes to Linux-kernel RCU:
If you send your requests to me directly, they might get fixed somewhat
faster.  RavFX's comment was posted on January 22, 2018 and I first saw
it on March 5, 2019.  And the only reason that I found it -at- -all- was
that I was looking for projects using RCU, and my search engine showed
me that Phoronix comment quite by accident.  Your choice, though!  ;-)

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: celtare21 <celtare21@gmail.com>
Signed-off-by: Fiqri Ardyansyah <fiqri15072019@gmail.com>
Signed-off-by: Edwiin Kusuma Jaya <kutemeikito0905@gmail.com>
2025-12-14 18:01:17 +00:00
Wilco Dijkstra
af15843f65 arm64: Use optimized memcmp
Patch written by Wilco Dijkstra submitted for review to newlib:
https://sourceware.org/ml/newlib/2017/msg00524.html

This is an optimized memcmp for AArch64.  This is a complete rewrite
using a different algorithm.  The previous version split into cases
where both inputs were aligned, where the inputs were mutually aligned,
and an unaligned case handled with a byte loop.  The new version
combines all these cases,
while small inputs of less than 8 bytes are handled separately.

This allows the main code to be sped up using unaligned loads since
there are now at least 8 bytes to be compared.  After the first 8 bytes,
align the first input.  This ensures each iteration does at most one
unaligned access and mutually aligned inputs behave as aligned.
After the main loop, process the last 8 bytes using unaligned accesses.

This improves performance of (mutually) aligned cases by 25% and
unaligned by >500% (yes >6 times faster) on large inputs.

2017-06-28  Wilco Dijkstra  <wdijkstr@arm.com>

        * bionic/libc/arch-arm64/generic/bionic/memcmp.S (memcmp):
                Rewrite of optimized memcmp.

GLIBC benchtests/bench-memcmp.c performance comparison for Cortex-A53:

Length    1, alignment  1/ 1:        153%
Length    1, alignment  1/ 1:        119%
Length    1, alignment  1/ 1:        154%
Length    2, alignment  2/ 2:        121%
Length    2, alignment  2/ 2:        140%
Length    2, alignment  2/ 2:        121%
Length    3, alignment  3/ 3:        105%
Length    3, alignment  3/ 3:        105%
Length    3, alignment  3/ 3:        105%
Length    4, alignment  4/ 4:        155%
Length    4, alignment  4/ 4:        154%
Length    4, alignment  4/ 4:        161%
Length    5, alignment  5/ 5:        173%
Length    5, alignment  5/ 5:        173%
Length    5, alignment  5/ 5:        173%
Length    6, alignment  6/ 6:        145%
Length    6, alignment  6/ 6:        145%
Length    6, alignment  6/ 6:        145%
Length    7, alignment  7/ 7:        125%
Length    7, alignment  7/ 7:        125%
Length    7, alignment  7/ 7:        125%
Length    8, alignment  8/ 8:        111%
Length    8, alignment  8/ 8:        130%
Length    8, alignment  8/ 8:        124%
Length    9, alignment  9/ 9:        160%
Length    9, alignment  9/ 9:        160%
Length    9, alignment  9/ 9:        150%
Length   10, alignment 10/10:        170%
Length   10, alignment 10/10:        137%
Length   10, alignment 10/10:        150%
Length   11, alignment 11/11:        160%
Length   11, alignment 11/11:        160%
Length   11, alignment 11/11:        160%
Length   12, alignment 12/12:        146%
Length   12, alignment 12/12:        168%
Length   12, alignment 12/12:        156%
Length   13, alignment 13/13:        167%
Length   13, alignment 13/13:        167%
Length   13, alignment 13/13:        173%
Length   14, alignment 14/14:        167%
Length   14, alignment 14/14:        168%
Length   14, alignment 14/14:        168%
Length   15, alignment 15/15:        168%
Length   15, alignment 15/15:        173%
Length   15, alignment 15/15:        173%
Length    1, alignment  0/ 0:        134%
Length    1, alignment  0/ 0:        127%
Length    1, alignment  0/ 0:        119%
Length    2, alignment  0/ 0:        94%
Length    2, alignment  0/ 0:        94%
Length    2, alignment  0/ 0:        106%
Length    3, alignment  0/ 0:        82%
Length    3, alignment  0/ 0:        87%
Length    3, alignment  0/ 0:        82%
Length    4, alignment  0/ 0:        115%
Length    4, alignment  0/ 0:        115%
Length    4, alignment  0/ 0:        122%
Length    5, alignment  0/ 0:        127%
Length    5, alignment  0/ 0:        119%
Length    5, alignment  0/ 0:        127%
Length    6, alignment  0/ 0:        103%
Length    6, alignment  0/ 0:        100%
Length    6, alignment  0/ 0:        100%
Length    7, alignment  0/ 0:        82%
Length    7, alignment  0/ 0:        91%
Length    7, alignment  0/ 0:        87%
Length    8, alignment  0/ 0:        111%
Length    8, alignment  0/ 0:        124%
Length    8, alignment  0/ 0:        124%
Length    9, alignment  0/ 0:        136%
Length    9, alignment  0/ 0:        136%
Length    9, alignment  0/ 0:        136%
Length   10, alignment  0/ 0:        136%
Length   10, alignment  0/ 0:        135%
Length   10, alignment  0/ 0:        136%
Length   11, alignment  0/ 0:        136%
Length   11, alignment  0/ 0:        136%
Length   11, alignment  0/ 0:        135%
Length   12, alignment  0/ 0:        136%
Length   12, alignment  0/ 0:        136%
Length   12, alignment  0/ 0:        136%
Length   13, alignment  0/ 0:        135%
Length   13, alignment  0/ 0:        136%
Length   13, alignment  0/ 0:        136%
Length   14, alignment  0/ 0:        136%
Length   14, alignment  0/ 0:        136%
Length   14, alignment  0/ 0:        136%
Length   15, alignment  0/ 0:        136%
Length   15, alignment  0/ 0:        136%
Length   15, alignment  0/ 0:        136%
Length    4, alignment  0/ 0:        115%
Length    4, alignment  0/ 0:        115%
Length    4, alignment  0/ 0:        115%
Length   32, alignment  0/ 0:        127%
Length   32, alignment  7/ 2:        395%
Length   32, alignment  0/ 0:        127%
Length   32, alignment  0/ 0:        127%
Length    8, alignment  0/ 0:        111%
Length    8, alignment  0/ 0:        124%
Length    8, alignment  0/ 0:        124%
Length   64, alignment  0/ 0:        128%
Length   64, alignment  6/ 4:        475%
Length   64, alignment  0/ 0:        131%
Length   64, alignment  0/ 0:        134%
Length   16, alignment  0/ 0:        128%
Length   16, alignment  0/ 0:        119%
Length   16, alignment  0/ 0:        128%
Length  128, alignment  0/ 0:        129%
Length  128, alignment  5/ 6:        475%
Length  128, alignment  0/ 0:        130%
Length  128, alignment  0/ 0:        129%
Length   32, alignment  0/ 0:        126%
Length   32, alignment  0/ 0:        126%
Length   32, alignment  0/ 0:        126%
Length  256, alignment  0/ 0:        127%
Length  256, alignment  4/ 8:        545%
Length  256, alignment  0/ 0:        126%
Length  256, alignment  0/ 0:        128%
Length   64, alignment  0/ 0:        171%
Length   64, alignment  0/ 0:        171%
Length   64, alignment  0/ 0:        174%
Length  512, alignment  0/ 0:        126%
Length  512, alignment  3/10:        585%
Length  512, alignment  0/ 0:        126%
Length  512, alignment  0/ 0:        127%
Length  128, alignment  0/ 0:        129%
Length  128, alignment  0/ 0:        128%
Length  128, alignment  0/ 0:        129%
Length 1024, alignment  0/ 0:        125%
Length 1024, alignment  2/12:        611%
Length 1024, alignment  0/ 0:        126%
Length 1024, alignment  0/ 0:        126%
Length  256, alignment  0/ 0:        128%
Length  256, alignment  0/ 0:        127%
Length  256, alignment  0/ 0:        128%
Length 2048, alignment  0/ 0:        125%
Length 2048, alignment  1/14:        625%
Length 2048, alignment  0/ 0:        125%
Length 2048, alignment  0/ 0:        125%
Length  512, alignment  0/ 0:        126%
Length  512, alignment  0/ 0:        127%
Length  512, alignment  0/ 0:        127%
Length 4096, alignment  0/ 0:        125%
Length 4096, alignment  0/16:        125%
Length 4096, alignment  0/ 0:        125%
Length 4096, alignment  0/ 0:        125%
Length 1024, alignment  0/ 0:        126%
Length 1024, alignment  0/ 0:        126%
Length 1024, alignment  0/ 0:        126%
Length 8192, alignment  0/ 0:        125%
Length 8192, alignment 63/18:        636%
Length 8192, alignment  0/ 0:        125%
Length 8192, alignment  0/ 0:        125%
Length   16, alignment  1/ 2:        317%
Length   16, alignment  1/ 2:        317%
Length   16, alignment  1/ 2:        317%
Length   32, alignment  2/ 4:        395%
Length   32, alignment  2/ 4:        395%
Length   32, alignment  2/ 4:        398%
Length   64, alignment  3/ 6:        475%
Length   64, alignment  3/ 6:        475%
Length   64, alignment  3/ 6:        477%
Length  128, alignment  4/ 8:        479%
Length  128, alignment  4/ 8:        479%
Length  128, alignment  4/ 8:        479%
Length  256, alignment  5/10:        543%
Length  256, alignment  5/10:        539%
Length  256, alignment  5/10:        543%
Length  512, alignment  6/12:        585%
Length  512, alignment  6/12:        585%
Length  512, alignment  6/12:        585%
Length 1024, alignment  7/14:        611%
Length 1024, alignment  7/14:        611%
Length 1024, alignment  7/14:        611%

Signed-off-by: Francisco Franco <franciscofranco.1990@gmail.com>
Signed-off-by: kdrag0n <dragon@khronodragon.com>
Signed-off-by: utsavbalar1231 <utsavbalar1231@gmail.com>
Signed-off-by: Fiqri Ardyansyah <fiqri15072019@gmail.com>
Signed-off-by: Edwiin Kusuma Jaya <kutemeikito0905@gmail.com>
2025-12-14 17:58:46 +00:00
Yuanyuan Zhong
0d53b9a549 arm64: strcmp: align to 64B cache line
Align strcmp to 64B. This ensures the performance-critical
loop is within one 64B cache line.

Change-Id: I9240fbb4407637b2290a44e02ad59098a377b356
Signed-off-by: Yuanyuan Zhong <zyy@motorola.com>
Reviewed-on: https://gerrit.mot.com/902536
SME-Granted: SME Approvals Granted
SLTApproved: Slta Waiver <sltawvr@motorola.com>
Tested-by: Jira Key <jirakey@motorola.com>
Reviewed-by: Yi-Wei Zhao <gbjc64@motorola.com>
Reviewed-by: Igor Kovalenko <igork@motorola.com>
Submit-Approved: Jira Key <jirakey@motorola.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Nauval Rizky <enuma.alrizky@gmail.com>
Signed-off-by: Fiqri Ardyansyah <fiqri15072019@gmail.com>
Signed-off-by: Edwiin Kusuma Jaya <kutemeikito0905@gmail.com>
2025-12-14 17:58:38 +00:00
Rafael Ortolan
244a84a926 driver/usb: Fix buffer overflow issue detected by KASAN
Fix a stack-out-of-bounds issue detected by KASAN, which could result
in random kernel memory corruption:

[685:tcpc_event_type]==================================================================
[685:tcpc_event_type]BUG: KASAN: stack-out-of-bounds in mt6360_transmit+0xec/0x260
[685:tcpc_event_type]Write of size 28 at addr ffffffe6ca09f963 by task tcpc_event_type/685
[685:tcpc_event_type]
[685:tcpc_event_type]CPU: 1 PID: 685 Comm: tcpc_event_type Tainted: G S      W  O    4.14.186+ #1
[685:tcpc_event_type]Hardware name: MT6853V/NZA (DT)
[685:tcpc_event_type]Call trace:
[685:tcpc_event_type] dump_backtrace+0x0/0x374
[685:tcpc_event_type] show_stack+0x20/0x2c
[685:tcpc_event_type] dump_stack+0x148/0x1b8
[685:tcpc_event_type] print_address_description+0x70/0x248
[685:tcpc_event_type] __kasan_report+0x150/0x180
[685:tcpc_event_type] kasan_report+0x10/0x18
[685:tcpc_event_type] check_memory_region+0x18c/0x198
[685:tcpc_event_type] memcpy+0x48/0x68
[685:tcpc_event_type] mt6360_transmit+0xec/0x260
[685:tcpc_event_type] tcpci_transmit+0xb8/0xe4
[685:tcpc_event_type] pd_send_message+0x238/0x388
[685:tcpc_event_type] pd_reply_svdm_request+0x1f0/0x2f8
[685:tcpc_event_type] pd_dpm_ufp_request_id_info+0xcc/0x188
[685:tcpc_event_type] pe_ufp_vdm_get_identity_entry+0x1c/0x28
[685:tcpc_event_type] pd_handle_event+0x3cc/0x74c
[685:tcpc_event_type] pd_policy_enGine_run+0x18c/0x748
[685:tcpc_event_type] tcpc_event_thread_fn+0x1b4/0x32c
[685:tcpc_event_type] kthread+0x2a8/0x2c0
[685:tcpc_event_type] ret_from_fork+0x10/0x18
[685:tcpc_event_type]==================================================================

Change-Id: I25ee1b2457592d470619f3bea1fb3fc1a2bc678c
Reviewed-on: https://gerrit.mot.com/2320832
SME-Granted: SME Approvals Granted
SLTApproved: Slta Waiver
Reviewed-by: Murilo Alves <alvesm@motorola.com>
Reviewed-by: Gilberto Gambugge Neto <gambugge@motorola.com>
Tested-by: Jira Key
Submit-Approved: Jira Key
Signed-off-by: Murilo Alves <alvesm@motorola.com>
Reviewed-on: https://gerrit.mot.com/2334041
Reviewed-by: Rafael Ortolan <rafones@motorola.com>
Reviewed-by: Zhihong Kang <kangzh@motorola.com>
2025-12-13 16:31:14 +00:00
Damien Le Moal
b7f275383a block: Expose queue nr_zones in sysfs
Expose through sysfs the nr_zones field of struct request_queue.
Exposing this value helps in debugging disk issues as well as
facilitating script-based use of the disk (e.g. blktests).

For zoned block devices, the nr_zones field indicates the total number
of zones of the device calculated using the known disk capacity and
zone size. This number of zones is always 0 for regular block devices.

Since nr_zones is defined conditionally with CONFIG_BLK_DEV_ZONED,
introduce the blk_queue_nr_zones() function to return the correct value
for any device, regardless of whether CONFIG_BLK_DEV_ZONED is set.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:09 +00:00
Damien Le Moal
2f17d2875c block: Improve zone reset execution
There is no need to synchronously execute all REQ_OP_ZONE_RESET BIOs
necessary to reset a range of zones. Similarly to what is done for
discard BIOs in blk-lib.c, all zone reset BIOs can be chained and
executed asynchronously and a synchronous call done only for the last
BIO of the chain.

Modify blkdev_reset_zones() to operate similarly to
blkdev_issue_discard() using the next_bio() helper for chaining BIOs. To
avoid code duplication of that function in blk_zoned.c, rename
next_bio() into blk_next_bio() and declare it as a block internal
function in blk.h.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:09 +00:00
Damien Le Moal
f243b64a11 block: Introduce BLKGETNRZONES ioctl
Get a zoned block device's total number of zones. The device can be a
partition of the whole device. The number of zones is always 0 for
regular block devices.

Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Damien Le Moal
6645652532 block: Introduce BLKGETZONESZ ioctl
Get a zoned block device's zone size in number of 512 B sectors.
The zone size is always 0 for regular block devices.

Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Damien Le Moal
6a2c25e507 block: Limit allocation of zone descriptors for report zones
There is no point in allocating more zone descriptors than the number of
zones a block device has for doing a zone report. Avoid doing that in
blkdev_report_zones_ioctl() by limiting the number of zone descriptors
allocated internally to process the user request.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Damien Le Moal
2fe6c878c8 block: Introduce blkdev_nr_zones() helper
Introduce the blkdev_nr_zones() helper function to get the total
number of zones of a zoned block device. This number is always 0 for a
regular block device (q->limits.zoned == BLK_ZONED_NONE case).

Replace hard-coded number of zones calculation in dmz_get_zoned_device()
with a call to this helper.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Omar Sandoval
5da7123f99 kyber: fix integer overflow of latency targets on 32-bit
NSEC_PER_SEC has type long, so 5 * NSEC_PER_SEC is calculated as a long.
However, 5 seconds is 5,000,000,000 nanoseconds, which overflows a
32-bit long. Make sure all of the targets are calculated as 64-bit
values.

Fixes: 6e25cb01ea20 ("kyber: implement improved heuristics")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Omar Sandoval
512db70a52 kyber: add tracepoints
When debugging Kyber, it's really useful to know what latencies we've
been having, how the domain depths have been adjusted, and if we've
actually been throttling. Add three tracepoints, kyber_latency,
kyber_adjust, and kyber_throttled, to record that.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Omar Sandoval
4b06e8872a kyber: implement improved heuristics
Kyber's current heuristics have a few flaws:

- It's based on the mean latency, but p99 latency tends to be more
  meaningful to anyone who cares about latency. The mean can also be
  skewed by rare outliers that the scheduler can't do anything about.
- The statistics calculations are purely time-based with a short window.
  This works for steady, high load, but is more sensitive to outliers
  with bursty workloads.
- It only considers the latency once an I/O has been submitted to the
  device, but the user cares about the time spent in the kernel, as
  well.

These are shortcomings of the generic blk-stat code which doesn't quite
fit the ideal use case for Kyber. So, this replaces the statistics with
a histogram used to calculate percentiles of total latency and I/O
latency, which we then use to adjust depths in a slightly more
intelligent manner:

- Sync and async writes are now the same domain.
- Discards are a separate domain.
- Domain queue depths are scaled by the ratio of the p99 total latency
  to the target latency (e.g., if the p99 latency is double the target
  latency, we will double the queue depth; if the p99 latency is half of
  the target latency, we can halve the queue depth).
- We use the I/O latency to determine whether we should scale queue
  depths down: we will only scale down if any domain's I/O latency
  exceeds the target latency, which is an indicator of congestion in the
  device.

These new heuristics are just as scalable as the heuristics they
replace.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Omar Sandoval
95c81bbedc kyber: don't make domain token sbitmap larger than necessary
The domain token sbitmaps are currently initialized to the device queue
depth or 256, whichever is larger, and immediately resized to the
maximum depth for that domain (256, 128, or 64 for read, write, and
other, respectively). The sbitmap is never resized larger than that, so
it's unnecessary to allocate a bitmap larger than the maximum depth.
Let's just allocate it to the maximum depth to begin with. This will use
marginally less memory, and more importantly, give us a more appropriate
number of bits per sbitmap word.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:07 +00:00
Omar Sandoval
0a9c2ef26a block: move call of scheduler's ->completed_request() hook
Commit 4bc6339a58 ("block: move blk_stat_add() to
__blk_mq_end_request()") consolidated some calls using ktime_get() so
we'd only need to call it once. Kyber's ->completed_request() hook also
calls ktime_get(), so let's move it to the same place, too.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:07 +00:00
claxten10
2d3c7708e0 arch: arm64: configs: Enable Kyber I/O sched
Signed-off-by: claxten10 <claxten10@gmail.com>
2025-12-08 00:52:07 +00:00
Roman Gushchin
e8f74fb113 mm: memcg/slab: generalize postponed non-root kmem_cache deactivation
Currently SLUB uses a work scheduled after an RCU grace period to
deactivate a non-root kmem_cache.  This mechanism can be reused for
kmem_cache release, but requires generalization for the SLAB case.

Introduce kmemcg_cache_deactivate() function, which calls
allocator-specific __kmem_cache_deactivate() and schedules execution of
__kmem_cache_deactivate_after_rcu() with all necessary locks in a worker
context after an rcu grace period.

Here is the new calling scheme:
  kmemcg_cache_deactivate()
    __kmemcg_cache_deactivate()                  SLAB/SLUB-specific
    kmemcg_rcufn()                               rcu
      kmemcg_workfn()                            work
        __kmemcg_cache_deactivate_after_rcu()    SLAB/SLUB-specific

instead of:
  __kmemcg_cache_deactivate()                    SLAB/SLUB-specific
    slab_deactivate_memcg_cache_rcu_sched()      SLUB-only
      kmemcg_rcufn()                             rcu
        kmemcg_workfn()                          work
          kmemcg_cache_deact_after_rcu()         SLUB-only

For consistency, all allocator-specific functions start with "__".

Link: http://lkml.kernel.org/r/20190611231813.3148843-4-guro@fb.com
Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Waiman Long <longman@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Andrei Vagin <avagin@gmail.com>
Cc: Qian Cai <cai@lca.pw>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-12-08 00:52:07 +00:00
Sultan Alsawaf
5ac450d018 arch: arm64: configs: Disable SLUB per-CPU partial caches
CONFIG_SLUB_CPU_PARTIAL is not set

This causes load spikes when the per-CPU partial caches are filled and
need to be drained, which is bad for maintaining low latency.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2025-12-08 00:52:07 +00:00
Manaf Meethalavalappu Pallikunhi
04450f04ff arch: arm64: configs: Enable powercap framework
CONFIG_POWERCAP=y

It enables the power capping sysfs interface for
different power zone devices.

Bug: 220884335
Change-Id: I11bc3efe06d2a02dcc602d223d3e6757088ca771
Signed-off-by: Manaf Meethalavalappu Pallikunhi <quic_manafm@quicinc.com>
2025-12-08 00:52:07 +00:00
Ocean Chen
6fe001fbc3 arch: arm64: configs: Enable zram-writeback
CONFIG_ZRAM_WRITEBACK=y

Bug: 142299185
Change-Id: Id9a928d436a3069c32e7569bfddc6da79beee3c2
Signed-off-by: Ocean Chen <oceanchen@google.com>
2025-12-08 00:52:07 +00:00
Paul Zhang
1537524516 arch: arm64: configs: Disable CONFIG_CFG80211_CRDA_SUPPORT
CONFIG_CFG80211_CRDA_SUPPORT is not set

Since CRDA is not supported, disable CONFIG_CFG80211_CRDA_SUPPORT
by default.

Change-Id: I01bde48aea21612b9d5c79b11931999e02d610b4
CRs-Fixed: 2946898
Signed-off-by: Paul Zhang <paulz@codeaurora.org>
2025-12-08 00:52:06 +00:00
Nathan Chancellor
6c5709097a kernel/profile: Use cpumask_available to check for NULL cpumask
When building with clang + -Wtautological-pointer-compare, these
instances pop up:

  kernel/profile.c:339:6: warning: comparison of array 'prof_cpu_mask' not equal to a null pointer is always true [-Wtautological-pointer-compare]
          if (prof_cpu_mask != NULL)
              ^~~~~~~~~~~~~    ~~~~
  kernel/profile.c:376:6: warning: comparison of array 'prof_cpu_mask' not equal to a null pointer is always true [-Wtautological-pointer-compare]
          if (prof_cpu_mask != NULL)
              ^~~~~~~~~~~~~    ~~~~
  kernel/profile.c:406:26: warning: comparison of array 'prof_cpu_mask' not equal to a null pointer is always true [-Wtautological-pointer-compare]
          if (!user_mode(regs) && prof_cpu_mask != NULL &&
                                ^~~~~~~~~~~~~    ~~~~
  3 warnings generated.

This can be addressed with the cpumask_available helper, introduced in
commit f7e30f0 ("cpumask: Add helper cpumask_available()") to fix
warnings like this while keeping the code the same.

Link: ClangBuiltLinux#747
Link: http://lkml.kernel.org/r/20191022191957.9554-1-natechancellor@gmail.com
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-12-08 00:52:06 +00:00
Davidlohr Bueso
47bb162e2c kernel/sched/core: Add branch prediction hint to wake_q_add() cmpxchg
The cmpxchg() will fail when the task is already in the process
of waking up, and as such is an extremely rare occurrence.
Micro-optimize the call and put an unlikely() around it.

Unsurprisingly, when using CONFIG_PROFILE_ANNOTATED_BRANCHES
under a number of workloads, the incorrect-prediction rate was a mere 1-2%.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yongji Xie <elohimes@gmail.com>
Cc: andrea.parri@amarulasolutions.com
Cc: lilin24@baidu.com
Cc: liuqi16@baidu.com
Cc: nixun@baidu.com
Cc: xieyongji@baidu.com
Cc: yuanlinsi01@baidu.com
Cc: zhangyu31@baidu.com
Link: https://lkml.kernel.org/r/20181203053130.gwkw6kg72azt2npb@linux-r8p5
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-12-08 00:52:06 +00:00
Li zeming
0a3492d5ec kernel/time/alarmtimer: Remove unnecessary initialization of variable 'ret'
ret is assigned before it is checked, so the variable does not need to
be initialized.

Signed-off-by: Li zeming <zeming@nfschina.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230609182856.4660-1-zeming@nfschina.com
2025-12-08 00:52:06 +00:00
Li zeming
3fd6a03917 kernel/time/alarmtimer: Remove unnecessary (void *) cast
Pointers of type void * do not require a cast when they are assigned
to a pointer of another object type.

Signed-off-by: Li zeming <zeming@nfschina.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230609182059.4509-1-zeming@nfschina.com
2025-12-08 00:52:06 +00:00
Tetsuo Handa
884c45b53e kernel: Initialize cpumask before parsing
KMSAN complains that new_value at cpumask_parse_user() from
write_irq_affinity() from irq_affinity_proc_write() is uninitialized.

  [  148.133411][ T5509] =====================================================
  [  148.135383][ T5509] BUG: KMSAN: uninit-value in find_next_bit+0x325/0x340
  [  148.137819][ T5509]
  [  148.138448][ T5509] Local variable ----new_value.i@irq_affinity_proc_write created at:
  [  148.140768][ T5509]  irq_affinity_proc_write+0xc3/0x3d0
  [  148.142298][ T5509]  irq_affinity_proc_write+0xc3/0x3d0
  [  148.143823][ T5509] =====================================================

Since bitmap_parse() from cpumask_parse_user() calls find_next_bit(),
any alloc_cpumask_var() + cpumask_parse_user() sequence has the
possibility that find_next_bit() accesses an uninitialized cpu mask
variable. Fix this
problem by replacing alloc_cpumask_var() with zalloc_cpumask_var().

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20210401055823.3929-1-penguin-kernel@I-love.SAKURA.ne.jp
2025-12-08 00:52:06 +00:00
Philippe Liard
9bc2ed82df fs/squashfs: Migrate from ll_rw_block usage to BIO
The ll_rw_block() function has been deprecated in favor of BIO, which
appears to come with large performance improvements.

This patch decreases boot time by close to 40% when using squashfs for
the root file-system.  This is observed at least in the context of
starting an Android VM on Chrome OS using crosvm.  The patch was tested
on 4.19 as well as master.

This patch is largely based on Adrien Schildknecht's patch that was
originally sent as https://lkml.org/lkml/2017/9/22/814 though with some
significant changes and simplifications while also taking Phillip
Lougher's feedback into account, around preserving support for
FILE_CACHE in particular.

[akpm@linux-foundation.org: fix build error reported by Randy]
  Link: http://lkml.kernel.org/r/319997c2-5fc8-f889-2ea3-d913308a7c1f@infradead.org
Signed-off-by: Philippe Liard <pliard@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Adrien Schildknecht <adrien+dev@schischi.me>
Cc: Phillip Lougher <phillip@squashfs.org.uk>
Cc: Guenter Roeck <groeck@chromium.org>
Cc: Daniel Rosenberg <drosen@google.com>
Link: https://chromium.googlesource.com/chromiumos/platform/crosvm
Link: http://lkml.kernel.org/r/20191106074238.186023-1-pliard@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-12-08 00:52:05 +00:00
Alexander Winkowski
bd9e610f6d mm/page_alloc: Disable pcp lists checks on !DEBUG_VM
Reference: https://lore.kernel.org/all/20230201162549.68384-1-halbuer@sra.uni-hannover.de/T/#m2d0dccbb7653a8761a657ee046766dcd56e35df9

Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-08 00:52:05 +00:00
Minchan Kim
01a6275c21 arch: arm64: configs: Disable CONFIG_MEMCG and MEMCG_SWAP
CONFIG_MEMCG is not set

Pixel doesn't use the memcg but it hurts 15% performance in minor
fault benchmark so disable it until we see strong reason.

Bug: 169443770
Signed-off-by: Minchan Kim <minchan@google.com>
Change-Id: Ifd9ddcd54559c590260d52f60a2e5e4b79c5480f
2025-12-08 00:52:05 +00:00
Frederic Weisbecker
e00a2dfe71 rcu: Assume rcu_init() is called before smp
The rcu_init() function is called way before SMP is initialized and
therefore only the boot CPU should be online at this stage.

Simplify the boot per-cpu initialization accordingly.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Uladzislau Rezki <uladzislau.rezki@sony.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2025-12-08 00:52:05 +00:00
Mashopy
a0425d99f8 arch: arm64: boot: dts: Remove initcall_debug=1 for MT6781
This is a production kernel, not a debug one.
2025-12-08 00:52:05 +00:00
Cyrill Gorcunov
c9e54c78d7 rcu: rcu_qs -- Use raise_softirq_irqoff to not save irqs twice
rcu_qs() disables IRQs by itself, so there is no need to do the same in
raise_softirq(); instead we can save some cycles by using
raise_softirq_irqoff() directly.

CC: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
2025-12-08 00:52:05 +00:00
Paul E. McKenney
04904ffe37 rcu/tiny: Convert to SPDX license identifier
Replace the license boilerplate with an SPDX license identifier.
While in the area, update an email address.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
2025-12-08 00:52:05 +00:00
Paul E. McKenney
3dd3836e93 rcu: Rename rcu_check_callbacks() to rcu_sched_clock_irq()
The name rcu_check_callbacks() arguably made sense back in the early
2000s when RCU was quite a bit simpler than it is today, but it has
become quite misleading, especially with the advent of dyntick-idle
and NO_HZ_FULL.  The rcu_check_callbacks() function is RCU's hook into
the scheduling-clock interrupt, and is now but one of many ways that
callbacks get promoted to invocable state.

This commit therefore changes the name to rcu_sched_clock_irq(),
which is the same number of characters and clearly indicates this
function's relation to the rest of the Linux kernel.  In addition, for
the sake of consistency, rcu_flavor_check_callbacks() is also renamed
to rcu_flavor_sched_clock_irq().

While in the area, the header comments for both functions are reworked.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
2025-12-08 00:52:04 +00:00
Paul E. McKenney
20743a0645 srcu: Make call_srcu() available during very early boot
Event tracing is moving to SRCU in order to take advantage of the fact
that SRCU may be safely used from idle and even offline CPUs.  However,
event tracing can invoke call_srcu() very early in the boot process,
even before workqueue_init_early() is invoked (let alone rcu_init()).
Therefore, call_srcu()'s attempts to queue work fail miserably.

This commit therefore detects this situation, and refrains from attempting
to queue work before rcu_init() time, but does everything else that it
would have done, and in addition, adds the srcu_struct to a global list.
The rcu_init() function now invokes a new srcu_init() function, which
is empty if CONFIG_SRCU=n.  Otherwise, srcu_init() queues work for
each srcu_struct on the list.  This all happens early enough in boot
that there is but a single CPU with interrupts disabled, which allows
synchronization to be dispensed with.

Of course, the queued work won't actually be invoked until after
workqueue_init() is invoked, which happens shortly after the scheduler
is up and running.  This means that although call_srcu() may be invoked
any time after per-CPU variables have been set up, there is still a very
narrow window when synchronize_srcu() won't work, and this window
extends from the time that the scheduler starts until the time that
workqueue_init() returns.  This can be fixed in a manner similar to
the fix for synchronize_rcu_expedited() and friends, but until someone
actually needs to use synchronize_srcu() during this window, this fix
is added churn for no benefit.

Finally, note that Tree SRCU's new srcu_init() function invokes
queue_work() rather than the queue_delayed_work() function that is
invoked post-boot.  The reason is that queue_delayed_work() will (as you
would expect) post a timer, and timers have not yet been initialized.
So use of queue_work() avoids the complaints about use of uninitialized
spinlocks that would otherwise result.  Besides, some delay is already
provided by the aforementioned fact that the queued work won't actually
be invoked until after the scheduler is up and running.

Requested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2025-12-08 00:52:04 +00:00
Paul E. McKenney
071728046f rcu: Motivate Tiny RCU forward progress
If a long-running CPU-bound in-kernel task invokes call_rcu(), the
callback won't be invoked until the next context switch.  If there are
no other runnable tasks (which is not an uncommon situation on deep
embedded systems), the callback might never be invoked.

This commit therefore causes rcu_check_callbacks() to ask the scheduler
for a context switch if there are callbacks posted that are still waiting
for a grace period.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2025-12-08 00:52:04 +00:00
Paul E. McKenney
1e9c40c21d rcu: Clean up flavor-related definitions and comments in tiny.c
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2025-12-08 00:52:04 +00:00
Paul E. McKenney
8cb39e4b1e rcu: Express Tiny RCU updates in terms of RCU rather than RCU-sched
This commit renames Tiny RCU functions so that the lowest level of
functionality is RCU (e.g., synchronize_rcu()) rather than RCU-sched
(e.g., synchronize_sched()).  This provides greater naming compatibility
with Tree RCU, which will in turn permit more LoC removal once
the RCU-sched and RCU-bh update-side API is removed.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
[ paulmck: Fix Tiny call_rcu()'s EXPORT_SYMBOL() in response to a bug
  report from kbuild test robot. ]
2025-12-08 00:52:04 +00:00
Paul E. McKenney
c10274c5a7 rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds
Now that RCU-preempt knows about preemption disabling, its implementation
of synchronize_rcu() works for synchronize_sched(), and likewise for the
other RCU-sched update-side API members.  This commit therefore confines
the RCU-sched update-side code to CONFIG_PREEMPT=n builds, and defines
RCU-sched's update-side API members in terms of those of RCU-preempt.

This means that any given build of the Linux kernel has only one
update-side flavor of RCU, namely RCU-preempt for CONFIG_PREEMPT=y builds
and RCU-sched for CONFIG_PREEMPT=n builds.  This in turn means that kernels
built with CONFIG_RCU_NOCB_CPU=y have only one rcuo kthread per CPU.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Andi Kleen <ak@linux.intel.com>
2025-12-08 00:52:04 +00:00
Paul E. McKenney
d3b896077a rcu: Define RCU-bh update API in terms of RCU
Now that the main RCU API knows about softirq disabling and softirq's
quiescent states, the RCU-bh update code can be dispensed with.
This commit therefore removes the RCU-bh update-side implementation and
defines RCU-bh's update-side API in terms of that of either RCU-preempt or
RCU-sched, depending on the setting of the CONFIG_PREEMPT Kconfig option.

In kernels built with CONFIG_RCU_NOCB_CPU=y this has the knock-on effect
of reducing by one the number of rcuo kthreads per CPU.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2025-12-08 00:52:04 +00:00
Aeron-Aeron
1b014dbe07 perf: make MediaTek perf observer suspend-aware and reduce wakeups
* Converted the observer polling to be suspend-aware so it stops pinging every
  32 ms. Added a pob_timer_active flag, suspend/resume notifier, and bumped the
  interval to 64 ms. The hrtimer callback now quietly backs off when disabled.

* Maybe now the CPU can finally enjoy its beauty sleep.

Signed-off-by: Aeron-Aeron <aeronrules2@gmail.com>
2025-12-08 00:52:03 +00:00
Woomymy
54404ec743 kernel: irq_work: Remove mediatek schedule monitor support
Change-Id: I4cf9879d9e8eb605f37e50fcde089b32ef6e7c9d
2025-12-08 00:52:03 +00:00
Mashopy
3db111c587 gen4m: Let scheduler handle cpu boosting
Change-Id: I06ea48ce6663563c8240a6196bc85a6b6fc0b43c
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-08 00:52:03 +00:00
Cyber Knight
4f7b123d6d connectivity/wlan-core-gen4m: Bump 2.4GHz hotspot bandwidth to 40MHz
- This should improve the reliability of 2.4GHz hotspot connections.

Change-Id: Iea450301518d22701c35040a2581cb37d2d39ccf
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-08 00:52:03 +00:00
kehaizhou
cd86d8762a [ALPS09502745] mgmt: Fix kernel panic due to hardware watchdog
[Description]
Modified debug logging to prevent kernel panic caused by hardware
watchdog when handling IRQ from wifi module.

[Test]
UT

MTK-Commit-Id: cc6d75fbfacaed326aeaa9fdea03d95fe558a6f3

Signed-off-by: kehaizhou <haizhou.ke@mediatek.com>
CR-Id: ALPS09502745
Feature: Others
Change-Id: I67a0b50a587d1e93c7136a685dcd1b8f0e1f7e89
Reviewed-on: https://gerrit.mediatek.inc/c/neptune/wlan_driver/gen4m/+/9912268
Commit-Check: srv_check_service <srv_check_service@mediatek.com>
AutoUT-Review-Label: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Reviewed-by: shuaishuai.kong <shuaishuai.kong@mediatek.com>
(cherry picked from commit ee220b7700144a6d16e0274fede3029aa2543ac3)
Reviewed-on: https://gerrit.mediatek.inc/c/neptune/wlan_driver/gen4m/+/9921411
Build: srv_preflight_a001 <srv_preflight_a001@mediatek.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-08 00:52:03 +00:00