90 Commits
bka ... 16.0

Author SHA1 Message Date
Samuel Pascua
ecab66f27c Merge branch 'android-4.14' of https://github.com/pascua28/android_kernel_samsung_sm7150 into 16.0
Change-Id: I0b8c22853de7baba34abdc8c4792d4b2bf07cfef
Signed-off-by: Samuel Pascua <pascua.samuel.14@gmail.com>
2025-12-27 09:35:40 +08:00
Sultan Alsawaf
3340d216db scsi: ufs: Add simple IRQ-affined PM QoS operations
Qualcomm's PM QoS solution suffers from a number of issues: applying
PM QoS to all CPUs, convoluted spaghetti code that wastes CPU cycles,
and keeping PM QoS applied for 10 ms after all requests finish
processing.

This implements a simple IRQ-affined PM QoS mechanism for each UFS
adapter which uses atomics to elide locking, and enqueues a worker to
apply PM QoS to the target CPU as soon as a command request is issued.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: alk3pInjection <webmaster@raspii.tech>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2025-12-27 09:35:00 +08:00
Jaegeuk Kim
577e047f9f scsi: ufs: disallow SECURITY_PROTOCOL_IN without _OUT
This merged the following fix:
6a317b49c98c ("scsi: ufs: revise commit ecd2676bd513 ("disallow SECURITY_PROTOCOL_IN without _OUT")")

If we allow this, Hynix devices will time out due to the spec violation.
The latest Hynix controller returns an error instead of timing out.

Bug: 113580864
Bug: 79898356
Bug: 109850759
Bug: 117682499
Bug: 112560467
Change-Id: Ie7820a9604e4c7bc4cc530acf41bb5bb72f33d5b
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Signed-off-by: Randall Huang <huangrandall@google.com>
(cherry picked from commit 003012f13632af193b7ec5656e5ed5a6747ee0dd)
Signed-off-by: alk3pInjection <webmaster@raspii.tech>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2025-12-27 09:35:00 +08:00
Sultan Alsawaf
07aeced86b scsi: ufs: Fix compilation when command logging is disabled
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2025-12-27 09:35:00 +08:00
Sultan Alsawaf
00290c4372 scsi: ufs: Scrap Qualcomm's PM QoS implementation
This implementation is completely over the top and wastes lots of CPU
cycles. It's too convoluted to fix, so just scrap it to make way for a
simpler solution. This purges every PM QoS reference in the UFS drivers.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2025-12-27 09:35:00 +08:00
Sultan Alsawaf
704df79435 scsi: ufs: Only apply pm_qos to the CPU servicing UFS interrupts
Applying pm_qos restrictions to CPUs that aren't used for UFS processing
wastes power. Instead, apply the pm_qos restrictions only to the CPU
that services the UFS interrupts.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Samuel Pascua <sgpascua@ngcp.ph>
2025-12-27 09:35:00 +08:00
Mimi Wu
2bf55e9119 scsi: ufs: disable clock scaling
Disable clock scaling to avoid costly workqueue overheads.

Power test results on Blueline:
[without this change]
  Suspend: 9.75mA
  Idle: 238.26mA
  Camera Preview: 1309.99mA
  Partial Wake Lock: 13.67mA
[with this change - disable clock scaling]
  Suspend: 9.73mA (-0.21%)
  Idle: 215.87mA (-9.4%)
  Camera Preview: 1181.71mA (-9.79%)
  Partial Wake Lock: 13.85mA (+1.32%)

Bug: 78601190
Signed-off-by: Mimi Wu <mimiwu@google.com>
Change-Id: I09f07619ab3e11b05149358c1d06b0d1039decf3
2025-12-27 09:35:00 +08:00
Srikar Dronamraju
758bd66cf9 sched/numa: Modify migrate_swap() to accept additional parameters
migrate_swap_stop() checks that the task/CPU combination matches
migrate_swap_arg before migrating.

However, at least one of the two tasks to be swapped by migrate_swap()
could have migrated to a completely different CPU before migrate_swap_arg
was updated. The new CPU where the task is currently running could also
be on a different node. If the task has migrated, the NUMA balancer might
end up placing the task on the wrong node. Instead of achieving node
consolidation, it may end up spreading the load across nodes.

To avoid that, pass the CPUs as additional parameters.

While here, place migrate_swap under CONFIG_NUMA_BALANCING.

Running SPECjbb2005 on a 4-node machine and comparing bops/JVM:
JVMS  LAST_PATCH  WITH_PATCH  %CHANGE
16    25377.3     25226.6     -0.59
1     72287       73326       1.437

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1529514181-9842-10-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 0ad4e3dfe6cf3f207e61cbd8e3e4a943f8c1ad20)
Change-Id: Ia520fdeb7233d96891af72f80a44b71658951981
[dereference23: Backport to msm-4.14]
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:59 +08:00
Rishabh Bhatnagar
05712a61e3 sched: walt: Increase nr_threshold to 40 percent
Increase the nr_threshold percentage to 40 from 15.

Change-Id: I32ce7246fc4cd32d4c8110bef63971c9a2dceb55
Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:59 +08:00
Pavankumar Kondeti
aeb2647ddb sched: walt: Fix stale walt CPU reservation flag
When a CPU tries to move a task to another CPU, via active load balance
or by other means, the helping CPU is marked as reserved so that other
scheduler decisions avoid it. Once the task has moved successfully, the
reservation is cleared, making the CPU available for other scheduler
decisions again. The reserved flag is protected analogously to the busy
CPU's rq->active_balance, which is protected by runqueue locks: whenever
rq->active_balance is set for the busy CPU, the reserved flag is set for
the helping CPU.

Sometimes a CPU is observed to be marked as reserved while no CPU has
rq->active_balance set. Two unlikely corner cases can cause this:
 - On the active load balance path, the CPU stop machine returns the
   queued status of the active_balance work on the cpu_stopper, which is
   not checked on the active balance path. So when the stop machine
   fails to queue the work (unlikely), the reserved flag is never
   cleared.

   So catch the return value and, on failure, clear the CPU's reserved
   flag.

 - clear_walt_request() is called on a CPU to clear any pending WALT
   work. It is possible that push_task has changed or been cleared,
   leaving the reserved CPU uncleared.

   So clear push_cpu independent of push_task.

Change-Id: I75d032bf399cb3da8e807186b1bc903114168a4e
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:59 +08:00
Abhijeet Dharmapurikar
e5afb625b3 sched/walt: Improve the scheduler
This change is for general scheduler improvement.

Change-Id: Iffd4ae221581aaa4aeb244a0cddd40a8b6aac74d
Signed-off-by: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
[dereference23: Backport to msm-4.14]
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:59 +08:00
Lingutla Chandrasekhar
7978413de4 sched: Improve the scheduler
This change is for general scheduler improvements.

Change-Id: I37d6cb75ca8b08d9ca155b86b7d71ff369f46e14
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:59 +08:00
Lingutla Chandrasekhar
7a2e034ebd sched: walt: Improve the scheduler
This change is for general scheduler improvements.

Change-Id: Ia2854ae8701151761fe0780b6451133ab09a050b
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:59 +08:00
Abhijeet Dharmapurikar
e14809d0a3 sched: Improve the Scheduler
This change is for general scheduler improvement.

Change-Id: I7cb85ea7133a94923fae97d99f5b0027750ce189
Signed-off-by: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:59 +08:00
Pavankumar Kondeti
8c9d3503f4 sched/fair: Optimize the tick path active migration
When a task is upmigrated via the tick path, the lower-capacity CPU
that is running the task wakes the migration task to carry out the
migration to the higher-capacity CPU. The migration task dequeues the
task from the lower-capacity CPU and enqueues it on the higher-capacity
CPU, and a reschedule IPI is then sent to the higher-capacity CPU. If
the higher-capacity CPU was in a deep sleep state, this adds waiting
time before the task is upmigrated. Optimize this by waking up the
higher-capacity CPU at the same time as waking the migration task on
the lower-capacity CPU. Since we reserve the higher-capacity CPU, the
is_reserved() API can be used to prevent the CPU from entering idle
again.

Change-Id: I7bda9a905a66a9326c1dc74e50fa94eb58e6b705
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
[clingutla@codeaurora.org: Resolved minor merge conflicts]
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:59 +08:00
Alexander Winkowski
bdf23ea276 sched: Introduce rotation_ctl
This is WALT rotation logic extracted from core_ctl to avoid
CPU isolation overhead while retaining the performance gain.

Change-Id: I912d2dabf7e32eaf9da2f30b38898d1b29ff0a53
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:59 +08:00
Alexander Winkowski
77f43184da sched: Remove unused core_ctl.h
To avoid confusion with include/linux/sched/core_ctl.h

Change-Id: I037b1cc0fa09c06737a369b4e7dfdd89cd7ad9af
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:59 +08:00
Sultan Alsawaf
a67055f665 sched/fair: Set asym priority equally for all CPUs in a performance domain
All CPUs in a performance domain share the same capacity, so there is
nothing to distinguish between them when deciding which one is better
for asymmetric packing.

Instead of unfairly prioritizing lower-numbered CPUs within the same
performance domain, treat all CPUs in a performance domain equally for
asymmetric packing.

Change-Id: Ibc18d45fabc2983650ebebec910578e26bd26809
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2025-12-27 09:34:58 +08:00
Wei Wang
8cc2ad424a Revert "sched/core: fix userspace affining threads incorrectly"
This reverts commit d43b69c4ad.

Bug:133481659
Test: build
Change-Id: I615023c611c4de1eb334e4374af7306991f4216b
Signed-off-by: Wei Wang <wvw@google.com>
2025-12-27 09:34:58 +08:00
Wei Wang
62d72466e8 Revert "sched/core: Fix use after free issue in is_sched_lib_based_app()"
This reverts commit 0e6ca1640c.

Bug:133481659
Test: build
Change-Id: Ie6a0b5e46386c98882614be19dedc61ffd3870e5
Signed-off-by: Wei Wang <wvw@google.com>
2025-12-27 09:34:58 +08:00
Wei Wang
77f1ecf303 Revert "sched: Improve the scheduler"
This reverts commit a3dd94a1bb.

Bug:133481659
Test: build
Change-Id: Ib23609315f3446223521612621fe54469537c172
Signed-off-by: Wei Wang <wvw@google.com>
2025-12-27 09:34:58 +08:00
Alexander Winkowski
c1e31c8a1e Revert "sched: Improve the scheduler"
This reverts commit 92daaf50af.

Change-Id: I52d562da3c755f114d459ad09813188697ca81d8
2025-12-27 09:34:58 +08:00
Sultan Alsawaf
51d1d0cf50 cpufreq: schedutil: Use the frequency below the target if they're close
Schedutil targets a frequency tipping point of 80% to vote for a higher
frequency when utilization crosses that threshold.

Since the tipping point calculation is done without regard to the size of
the gap between each frequency step, this often results in a large
frequency jump when it isn't strictly necessary, which hurts energy
efficiency.

For example, if a CPU has 2000 MHz and 3000 MHz frequency steps, and
schedutil targets a frequency of 2005 MHz, then the 3000 MHz frequency step
will be used even though the target frequency of 2005 MHz is very close to
2000 MHz. In this hypothetical scenario, using 2000 MHz would clearly
satisfy the system's performance needs while consuming less energy than the
3000 MHz step.

To counter-balance the frequency tipping point, add a frequency tipping
point in the opposite direction to prefer the frequency step below the
calculated target frequency when the target frequency is less than 20%
higher than that lower step. A threshold of 20% was empirically determined
to provide significant energy savings without really impacting performance.

This improves schedutil's energy efficiency on CPUs which have large gaps
between their frequency steps, as is often the case on ARM.

Change-Id: Ie75b79e5eb9f52c966848a9fb1c8016d7ae22098
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2025-12-27 09:34:58 +08:00
Connor O'Brien
6cddf1769c cpufreq: schedutil: fix check for stale utilization values
Part of the fix from commit d86ab9cff8 ("cpufreq: schedutil: use now
as reference when aggregating shared policy requests") is reversed in
commit 05d2ca2420 ("cpufreq: schedutil: Ignore CPU load older than
WALT window size") due to a porting mistake. Restore it while keeping
the relevant change from the latter patch.

Bug: 117438867
Test: build & boot
Change-Id: I21399be760d7c8e2fff6c158368a285dc6261647
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:58 +08:00
Daniel Bristot de Oliveira
22755a05d3 UPSTREAM: sched/rt: Disable RT_RUNTIME_SHARE by default
The RT_RUNTIME_SHARE sched feature enables the sharing of rt_runtime
between CPUs, allowing a CPU to run a real-time task up to 100% of the
time while leaving more room for non-real-time tasks to run on the CPUs
that lend rt_runtime.

The problem is that a CPU can easily borrow enough rt_runtime to allow
a spinning rt-task to run forever, starving per-cpu tasks like kworkers,
which are non-real-time by design.

This patch disables RT_RUNTIME_SHARE by default, avoiding this problem.
The feature will still be present for users that want to enable it,
though.

Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Wei Wang <wvw@google.com>
Link: https://lkml.kernel.org/r/b776ab46817e3db5d8ef79175fa0d71073c051c7.1600697903.git.bristot@redhat.com
(cherry picked from commit 2586af1ac187f6b3a50930a4e33497074e81762d)
Change-Id: Ibb1b185d512130783ac9f0a29f0e20e9828c86fd

Bug: 169673278
Test: build, boot and check the trace with RT task
Signed-off-by: Kyle Lin <kylelin@google.com>
Change-Id: Iffede8107863b02ad4a0cb902fc8119416931bdb
2025-12-27 09:34:58 +08:00
Sultan Alsawaf
6a7d395bea msm: kgsl: Wake GPU upon receiving an ioctl rather than upon touch input
Waking the GPU upon touch wastes power when the screen is being touched
in a way that does not induce animation or any actual need for GPU usage.
Instead of preemptively waking the GPU on touch input, wake it up upon
receiving an IOCTL_KGSL_GPU_COMMAND ioctl, since that is a sign that the
GPU will soon be needed.

Change-Id: I6387083562578b229ea0913b5b2fa6562d4a85e9
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2025-12-27 09:34:58 +08:00
Sultan Alsawaf
f94b7cd5fb msm: kgsl: Remove L2PC PM QoS feature
KGSL already has PM QoS covering what matters. The L2PC PM QoS code is
not only unneeded, but also unused, so remove it. It's poorly designed
anyway since it uses a timeout with PM QoS, which is drastically bad for
power consumption.

Change-Id: I3aba9f5c0cf09d8c5e13e5c5e87e20456ca1c5f4
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2025-12-27 09:34:57 +08:00
Kazuki H
00a256b532 sched/idle: Enter wfi state instead of polling during active migration
WFI's wakeup latency is low enough, so use it instead of polling and
burning power.

Change-Id: Iee1c1cdf515224267925037a859c6a74fc61abb7
Signed-off-by: Kazuki H <kazukih0205@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-27 09:34:57 +08:00
Samuel Pascua
5b33eef4f9 ARM64: configs: a71: disable coresight
Change-Id: Iff16307e360f1eff522d5bb998af79e72747a0b9
Signed-off-by: Samuel Pascua <pascua.samuel.14@gmail.com>
2025-12-27 09:34:57 +08:00
Fiqri Ardyansyah
8ea1c50f0c hwtracing: coresight: Add coresight IDs from sdmmagpie
cat arch/arm64/boot/dts/qcom/sdmmagpie-coresight.dtsi | grep primecell-periphid | cut -c29- | sed "s/>;//g;s/^/ETM4x_AMBA_ID(/g;s/$/),/g" | sort -u

Signed-off-by: Fiqri Ardyansyah <fiqri0927936@gmail.com>
2025-12-27 09:34:57 +08:00
J. Avila
f34bd568d6 hwtracing: Add a driver for disabling coresight clocks
In certain configs which don't use coresight, the clocks are left on,
leading to power regressions. Add a driver which can disable them.

Bug: 170753932
Signed-off-by: J. Avila <elavila@google.com>
Signed-off-by: Yabin Cui <yabinc@google.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
Signed-off-by: Fiqri Ardyansyah <fiqri0927936@gmail.com>
2025-12-27 09:34:57 +08:00
Samuel Pascua
7f2e6f150f ARM64: configs: a71: remove QCOM_RTB
Change-Id: I1d15b4b2302c356a311b4f48d6370b83d7addde3
Signed-off-by: Samuel Pascua <pascua.samuel.14@gmail.com>
2025-12-27 09:34:57 +08:00
Park Ju Hyung
ccf94d3627 treewide: remove remaining _no_log() usage
sed -i -e 's/_no_log//g' $(git grep -l _no_log | tr '\n' ' ')

and manually fix drivers/clk/qcom/clk-cpu-osm.c.

Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
2025-12-27 09:34:57 +08:00
Park Ju Hyung
8d8a89145b Revert "ARM: msm: add support for logged IO accessors"
This reverts commit 7a0322f8a1.

Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
2025-12-27 09:34:57 +08:00
Park Ju Hyung
3e35920984 Revert "arm64: mm: Log the process id in the rtb"
This reverts commit c3c9f76495.

Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
2025-12-27 09:34:57 +08:00
Park Ju Hyung
419d15dca1 Revert "sched: move logging process id in the rtb to sched"
This reverts commit d21bdd9c88.

Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
2025-12-27 09:34:56 +08:00
Park Ju Hyung
81a3ee23f6 Revert "ARM: gic-v3: Log the IRQs in RTB before handling an IRQ"
This reverts commit 5f0823d3f6.

Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
2025-12-27 09:34:56 +08:00
idkwhoiam322
286b7e43c5 Revert "ARM: gic: Add support for logging interrupts in RTB"
This reverts commit b6f137ab06.
2025-12-27 09:34:56 +08:00
Park Ju Hyung
1ed9fa92d9 Revert "trace: rtb: add msm_rtb register tracing feature snapshot"
This reverts commit 122e0ddaad.

Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
2025-12-27 09:34:56 +08:00
Park Ju Hyung
a49d9eeed0 Revert "msm: redefine __raw_{read, write}v for RTB"
This reverts commit f45fe19bc5.

Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
2025-12-27 09:34:56 +08:00
Park Ju Hyung
3d43b88a93 Revert "arm64: Prevent msm-rtb tracing in memcpy_{from,to}io and memset_io"
This reverts commit 9bbe8bfbb6.

Change-Id: Iecddbfc9a5e7f0449ee5837f9b6c70828ea26282
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
2025-12-27 09:34:56 +08:00
Park Ju Hyung
8a26f3eda6 Revert "drivers: GICv3: remove the rtb logs of gic write and read"
This reverts commit 1bfe1dd120.

Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
2025-12-27 09:34:56 +08:00
Will Deacon
c9df1b74dd UPSTREAM: arm64: tlb: Rewrite stale comment in asm/tlbflush.h
Peter Z asked me to justify the barrier usage in asm/tlbflush.h, but
actually that whole block comment needs to be rewritten.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Change-Id: If49b019942043655d3ce72021e4daa66a82c60fb
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:58 +08:00
Will Deacon
1af59a7f9e BACKPORT: arm64: tlb: Avoid synchronous TLBIs when freeing page tables
By selecting HAVE_RCU_TABLE_INVALIDATE, we can rely on tlb_flush() being
called if we fail to batch table pages for freeing. This in turn allows
us to postpone walk-cache invalidation until tlb_finish_mmu(), which
avoids lots of unnecessary DSBs and means we can shoot down the ASID if
the range is large enough.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Change-Id: Ie25f4be366f5a170adbb0e64c7d57ecc2b379a58
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[cyberknight777: Backport to msm-4.14]
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:58 +08:00
Will Deacon
ef0e7e7e3a BACKPORT: arm64: tlb: Adjust stride and type of TLBI according to mmu_gather
Now that the core mmu_gather code keeps track of both the levels of page
table cleared and also whether or not these entries correspond to
intermediate entries, we can use this in our tlb_flush() callback to
reduce the number of invalidations we issue as well as their scope.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Change-Id: Ibe3adb99f9f7b64517c614fd08cf3fa5c034c7ee
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[cyberknight777: Backport to msm-4.14]
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:58 +08:00
Will Deacon
c05b72865c UPSTREAM: arm64: tlb: Remove redundant !CONFIG_HAVE_RCU_TABLE_FREE code
If there's one thing the RCU-based table freeing doesn't need, it's more
ifdeffery.

Remove the redundant !CONFIG_HAVE_RCU_TABLE_FREE code, since this option
is unconditionally selected in our Kconfig.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Change-Id: Ifbe6dc2d8ce9e7e0d17c1c594325b04c3d39ca95
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:58 +08:00
Will Deacon
7e2c7b29af UPSTREAM: arm64: tlbflush: Allow stride to be specified for __flush_tlb_range()
When we are unmapping intermediate page-table entries or huge pages, we
don't need to issue a TLBI instruction for every PAGE_SIZE chunk in the
VA range being unmapped.

Allow the invalidation stride to be passed to __flush_tlb_range(), and
adjust our "just nuke the ASID" heuristic to take this into account.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Change-Id: I75dd94e14ea9920b3500e8003cad2ee0a74bb05f
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:58 +08:00
Will Deacon
ab4a866007 UPSTREAM: arm64: tlb: Justify non-leaf invalidation in flush_tlb_range()
Add a comment to explain why we can't get away with last-level
invalidation in flush_tlb_range()

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Change-Id: I6e5251011b20a0270206b0cf50c34f991752792a
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:58 +08:00
Will Deacon
74f612dc64 BACKPORT: arm64: pgtable: Implement p[mu]d_valid() and check in set_p[mu]d()
Now that our walk-cache invalidation routines imply a DSB before the
invalidation, we no longer need one when we are clearing an entry during
unmap.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Change-Id: Ib0ad415b232f766fb93455f39de5449f4bf45dfb
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[cyberknight777: Backport to msm-4.14]
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:58 +08:00
Will Deacon
a4e0fe14f1 UPSTREAM: arm64: tlb: Add DSB ISHST prior to TLBI in __flush_tlb_[kernel_]pgtable()
__flush_tlb_[kernel_]pgtable() rely on set_pXd() having a DSB after
writing the new table entry and therefore avoid the barrier prior to the
TLBI instruction.

In preparation for delaying our walk-cache invalidation on the unmap()
path, move the DSB into the TLB invalidation routines.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Change-Id: I7a8a259d78b6d4410c4a6e59b2f229dbd58244af
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:58 +08:00
Will Deacon
0d93d4fbf1 UPSTREAM: arm64: tlb: Use last-level invalidation in flush_tlb_kernel_range()
flush_tlb_kernel_range() is only ever used to invalidate last-level
entries, so we can restrict the scope of the TLB invalidation
instruction.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Change-Id: I1c7944e35ba4c39e0736419f8fc5fce37c1eebd8
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:58 +08:00
Will Deacon
af36119f83 UPSTREAM: MAINTAINERS: Add entry for MMU GATHER AND TLB INVALIDATION
We recently had to debug a TLB invalidation problem on the munmap()
path, which was made more difficult than necessary because:

  (a) The MMU gather code had changed without people realising
  (b) Many people subtly misunderstood the operation of the MMU gather
      code and its interactions with RCU and arch-specific TLB invalidation
  (c) Untangling the intended behaviour involved educated guesswork and
      plenty of discussion

Hopefully, we can avoid getting into this mess again by designating a
cross-arch group of people to look after this code. It is not intended
that they will have a separate tree, but they at least provide a point
of contact for anybody working in this area and can co-ordinate any
proposed future changes to the internal API.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Change-Id: Ie434451c6fea97908ce566d3ce5cf8976207d2fb
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:58 +08:00
Nadav Amit
5d04f78905 RELAND: hugetlbfs: flush TLBs correctly after huge_pmd_unshare
commit a4a118f2eead1d6c49e00765de89878288d4b890 upstream.

When __unmap_hugepage_range() calls to huge_pmd_unshare() succeed, a TLB
flush is missing.  This TLB flush must be performed before releasing the
i_mmap_rwsem, in order to prevent an unshared PMDs page from being
released and reused before the TLB flush took place.

Arguably, a comprehensive solution would use mmu_gather interface to
batch the TLB flushes and the PMDs page release, however it is not an
easy solution: (1) try_to_unmap_one() and try_to_migrate_one() also call
huge_pmd_unshare() and they cannot use the mmu_gather interface; and (2)
deferring the release of the page reference for the PMDs page until
after i_mmap_rwsem is dropped can confuse huge_pmd_unshare() into
thinking PMDs are shared when they are not.

Fix __unmap_hugepage_range() by adding the missing TLB flush, and
forcing a flush when unshare is successful.

Fixes: 24669e5847 ("hugetlb: use mmu_gather instead of a temporary linked list for accumulating pages") # 3.6

[Jebaitedneko: move tlb_flush_pmd_range() into mmu_gather.c]

Change-Id: Ic0b2a2b47792a24ee2ea4112c34152b0d263009a
Signed-off-by: Nadav Amit <namit@vmware.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Co-authored-by: Jebaitedneko <Jebaitedneko@gmail.com>
2025-12-26 17:07:57 +08:00
Peter Zijlstra
1e15e94255 BACKPORT: mm/memory: Move mmu_gather and TLB invalidation code into its own file
In preparation for maintaining the mmu_gather code as its own entity,
move the implementation out of memory.c and into its own file.

Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Change-Id: Ia925c303703e188a89bd3e66e6cc7302cb651826
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[cyberknight777: Backport to msm-4.14 & move tlb_remove_table_sync_one() to mmu_gather.c]
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:57 +08:00
Cyber Knight
cbfc0b77b5 Revert "hugetlbfs: flush TLBs correctly after huge_pmd_unshare"
This reverts commit 7bf1f5cb51 to reapply it with changes in accordance with an upcoming commit that moves the TLB flushing logic into mmu_gather.c.

Change-Id: I706c51a56b083669f70822d6ad148f2d6f91d8bf
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:57 +08:00
Will Deacon
f497d3676b UPSTREAM: asm-generic/tlb: Track which levels of the page tables have been cleared
It is common for architectures with hugepage support to require only a
single TLB invalidation operation per hugepage during unmap(), rather than
iterating through the mapping at a PAGE_SIZE increment. Currently,
however, the level in the page table where the unmap() operation occurs
is not stored in the mmu_gather structure, therefore forcing
architectures to issue additional TLB invalidation operations or to give
up and over-invalidate by e.g. invalidating the entire TLB.

Ideally, we could add an interval rbtree to the mmu_gather structure,
which would allow us to associate the correct mapping granule with the
various sub-mappings within the range being invalidated. However, this
is costly in terms of book-keeping and memory management, so instead we
approximate by keeping track of the page table levels that are cleared
and provide a means to query the smallest granule required for invalidation.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Change-Id: Ifb486381b6e71f4e05c9d38a246bf82de2d224ac
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:57 +08:00
Peter Zijlstra
50136c947b UPSTREAM: asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather
Some architectures require different TLB invalidation instructions
depending on whether it is only the last-level of page table being
changed, or whether there are also changes to the intermediate
(directory) entries higher up the tree.

Add a new bit to the flags bitfield in struct mmu_gather so that the
architecture code can operate accordingly if it's the intermediate
levels being invalidated.

Acked-by: Nicholas Piggin <npiggin@gmail.com>
Change-Id: I9a19a09e1ddff1e2386a29fe1392b0cb0de9cfe7
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:57 +08:00
Will Deacon
b5571fa155 UPSTREAM: asm-generic/tlb: Guard with #ifdef CONFIG_MMU
The inner workings of the mmu_gather-based TLB invalidation mechanism
are not relevant to nommu configurations, so guard them with an #ifdef.
This allows us to implement future functions using static inlines
without breaking the build.

Acked-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Change-Id: I8d6673a8daa1ff4de448477b8f0bfc5cd0ec5719
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:57 +08:00
Will Deacon
008d866c95 UPSTREAM: arm64: tlb: Provide forward declaration of tlb_flush() before including tlb.h
As of commit fd1102f0aade ("mm: mmu_notifier fix for tlb_end_vma"),
asm-generic/tlb.h now calls tlb_flush() from a static inline function,
so we need to make sure that it's declared before #including the
asm-generic header in the arch header.

Change-Id: Ib914ff3a30a5f081a05eeccff3d59dd7e084838a
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:57 +08:00
Nicholas Piggin
6009929951 UPSTREAM: mm: mmu_notifier fix for tlb_end_vma
The generic tlb_end_vma does not call the invalidate_range mmu notifier,
and it resets the mmu_gather range, which means the notifier won't be
called on part of the range in case of an unmap that spans multiple
vmas.

ARM64 seems to be the only arch I could see that has notifiers and uses
the generic tlb_end_vma.  I have not actually tested it.

[ Catalin and Will point out that ARM64 currently only uses the
  notifiers for KVM, which doesn't use the ->invalidate_range()
  callback right now, so it's a bug, but one that happens to
  not affect them.  So not necessary for stable.  - Linus ]

Change-Id: Id7b31c8a84be494b2f6341beb3be23485b5dd6bb
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-26 17:07:57 +08:00
Uwe Kleine-König
fadf3cfa47 of: restore old handling of cells_name=NULL in of_*_phandle_with_args()
Before commit e42ee61017f5 ("of: Let of_for_each_phandle fallback to
non-negative cell_count") the iterator functions calling
of_for_each_phandle assumed a cell count of 0 if cells_name was NULL.
This corner case was missed when implementing the fallback logic in
e42ee61017f5 and resulted in an endless loop.

Restore the old behaviour of of_count_phandle_with_args() and
of_parse_phandle_with_args() and add a check to
of_phandle_iterator_init() to prevent a similar failure as a safety
precaution. of_parse_phandle_with_args_map() doesn't need a similar fix
as cells_name isn't NULL there.
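The restored rule can be sketched in plain C. This is a hypothetical simplification, not the actual drivers/of code:

```c
#include <assert.h>
#include <errno.h>

/* Sketch of the restored behaviour: when cells_name is NULL the caller
 * must supply a non-negative cell_count, otherwise iterator init fails
 * up front instead of falling into the endless-loop fallback path. */
static int phandle_iterator_init_sketch(const char *cells_name,
					int cell_count)
{
	if (!cells_name && cell_count < 0)
		return -EINVAL;
	return 0;
}
```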

Affected drivers are:
 - drivers/base/power/domain.c
 - drivers/base/power/domain.c
 - drivers/clk/ti/clk-dra7-atl.c
 - drivers/hwmon/ibmpowernv.c
 - drivers/i2c/muxes/i2c-demux-pinctrl.c
 - drivers/iommu/mtk_iommu.c
 - drivers/net/ethernet/freescale/fman/mac.c
 - drivers/opp/of.c
 - drivers/perf/arm_dsu_pmu.c
 - drivers/regulator/of_regulator.c
 - drivers/remoteproc/imx_rproc.c
 - drivers/soc/rockchip/pm_domains.c
 - sound/soc/fsl/imx-audmix.c
 - sound/soc/fsl/imx-audmix.c
 - sound/soc/meson/axg-card.c
 - sound/soc/samsung/tm2_wm5110.c
 - sound/soc/samsung/tm2_wm5110.c

Thanks to Geert Uytterhoeven for reporting the issue, Peter Rosin for
helping pinpoint the actual problem and the testers for confirming this
fix.

Fixes: e42ee61017f5 ("of: Let of_for_each_phandle fallback to non-negative cell_count")
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Change-Id: I684efc01df23ea32c578c1da4f8ea6fcf6f03ced
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Rob Herring <robh@kernel.org>
2025-12-26 17:07:54 +08:00
Uwe Kleine-König
e6812d5584 of: Let of_for_each_phandle fallback to non-negative cell_count
Referencing device tree nodes from a property allows passing arguments;
this is used for referencing gpios, for example. It looks as follows:

	gpio_ctrl: gpio-controller {
		#gpio-cells = <2>;
		...
	}

	someothernode {
		gpios = <&gpio_ctrl 5 0 &gpio_ctrl 3 0>;
		...
	}

To know the number of arguments, the count must either be fixed, or the
referenced node is checked for a $cells_name (here: "#gpio-cells")
property; with this information the start of the second reference can be
determined.

Currently regulators are referenced with no additional arguments. To
allow some optional arguments without having to change all referenced
nodes, this change introduces a way to specify a default cell_count. So
when a phandle is parsed, we check for the $cells_name property and use
it as before if present. If it is not present, we fall back to
cell_count if it is non-negative, and only fail if cell_count is
negative.
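The resolution order described above can be sketched as follows (illustrative code, not the in-tree implementation):

```c
#include <assert.h>

/* Sketch of the cell-count resolution order: a #...-cells property on
 * the referenced node wins; otherwise a non-negative caller-supplied
 * default is used; otherwise resolution fails. Names are illustrative. */
static int resolve_cell_count(int cells_prop /* -1 if property absent */,
			      int default_cell_count)
{
	if (cells_prop >= 0)
		return cells_prop;
	if (default_cell_count >= 0)
		return default_cell_count;
	return -1; /* no way to know how many argument cells follow */
}
```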

Change-Id: Ic7a6a5e667d46847becb2a9593a00ba6db49fc98
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Rob Herring <robh@kernel.org>
2025-12-26 17:07:54 +08:00
Jinfeng Gu
d1dfb7d454 disp: msm: dsi: add null pointer check in dsi_display_dev_remove
This change adds a null pointer check for the display in dsi_display_dev_remove.

Change-Id: Ib31756c3b22256d19cbcb508f60de4550e3834e1
Signed-off-by: Jinfeng Gu <quic_gjinfeng@quicinc.com>
2025-12-26 17:07:54 +08:00
Abinath S
4fa5939ec9 asoc: codec: avoid out of bound write to map array
Added a check that the port number and channel iteration are less than 8
to avoid out-of-bounds writes to the 8x8 map array.
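The guard amounts to a simple bounds check before indexing the 8x8 map. A sketch with assumed names:

```c
#include <assert.h>
#include <stdbool.h>

#define MAP_DIM 8

/* Reject any port/channel pair that would index outside the 8x8 map. */
static bool map_index_valid(unsigned int port, unsigned int ch)
{
	return port < MAP_DIM && ch < MAP_DIM;
}
```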

Change-Id: I4c6fe13a5eb09be623a1c40ce16c5a5e4246e021
Signed-off-by: Abinath S <quic_abins@quicinc.com>
2025-12-26 17:07:54 +08:00
Jiri Kosina
abdabd6880 UPSTREAM: HID: core: zero-initialize the report buffer
[ Upstream commit 177f25d1292c7e16e1199b39c85480f7f8815552 ]

Since the report buffer is used by all kinds of drivers in various ways,
let's zero-initialize it during allocation to make sure that it can't
ever be used to leak kernel memory via a specially-crafted report.
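In kernel terms the change is kmalloc() to kzalloc(); a minimal userspace analogue, with calloc() standing in for kzalloc():

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace analogue of the fix: allocate the report buffer zero-filled
 * (kzalloc in the kernel, calloc here) so stale heap contents can never
 * leak back out through a crafted report. */
static unsigned char *alloc_report_buf(size_t len)
{
	return calloc(1, len);
}
```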

Bug: 380395346
Fixes: 27ce405039 ("HID: fix data access in implement()")
Reported-by: Benoît Sevens <bsevens@google.com>
Acked-by: Benjamin Tissoires <bentiss@kernel.org>
Signed-off-by: Jiri Kosina <jkosina@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
(cherry picked from commit 9d9f5c75c0c7f31766ec27d90f7a6ac673193191)
Signed-off-by: Lee Jones <joneslee@google.com>
Change-Id: I31f64f2745347137bbc415eb35b7fab5761867f3
2025-12-26 17:07:53 +08:00
Dan Carpenter
259196ff11 UPSTREAM: ALSA: usb-audio: Fix a DMA to stack memory bug
commit f7d306b47a24367302bd4fe846854e07752ffcd9 upstream.

The usb_get_descriptor() function does DMA, so we're not allowed to use
a stack buffer for that.  Doing DMA to the stack is not portable to all
architectures.  Move the "new_device_descriptor" from being stored on
the stack and allocate it with kmalloc() instead.

Bug: 382243530
Fixes: b909df18ce2a ("ALSA: usb-audio: Fix potential out-of-bound accesses for Extigy and Mbox devices")
Cc: stable@kernel.org
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Link: https://patch.msgid.link/60e3aa09-039d-46d2-934c-6f123026c2eb@stanley.mountain
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Benoît Sevens <bsevens@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 4e54dc4bbc602133217de301d9f814f3e6d22eee)
Signed-off-by: Lee Jones <joneslee@google.com>
Change-Id: I469212aa538584e3d8cc5b0087b68c99acf43f64
2025-12-26 17:07:53 +08:00
Benoît Sevens
2aedf65efd UPSTREAM: ALSA: usb-audio: Fix potential out-of-bound accesses for Extigy and Mbox devices
commit b909df18ce2a998afef81d58bbd1a05dc0788c40 upstream.

A bogus device can provide a bNumConfigurations value that exceeds the
initial value used in usb_get_configuration for allocating dev->config.

This can lead to out-of-bounds accesses later, e.g. in
usb_destroy_configuration.
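The defensive pattern is to clamp the device-reported count to what was actually allocated. A sketch, not the exact upstream diff:

```c
#include <assert.h>

/* Sketch: never trust bNumConfigurations from the device beyond the
 * number of entries actually allocated for dev->config. */
static unsigned int clamp_num_configs(unsigned int reported,
				      unsigned int allocated)
{
	return reported > allocated ? allocated : reported;
}
```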

Bug: 382243530
Signed-off-by: Benoît Sevens <bsevens@google.com>
Fixes: 1da177e4c3 ("Linux-2.6.12-rc2")
Cc: stable@kernel.org
Link: https://patch.msgid.link/20241120124144.3814457-1-bsevens@google.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 9887d859cd60727432a01564e8f91302d361b72b)
Signed-off-by: Lee Jones <joneslee@google.com>
Change-Id: I2df0d59750943fa34747bd4bae2e549320f2a0ce
2025-12-26 17:07:53 +08:00
Benoit Sevens
0739c908f6 UPSTREAM: USB: media: uvcvideo: Skip parsing frames of type UVC_VS_UNDEFINED in uvc_parse_format
This can lead to out of bounds writes since frames of this type were not
taken into account when calculating the size of the frames buffer in
uvc_parse_streaming.

Fixes: c0efd23292 ("V4L/DVB (8145a): USB Video Class driver")
Signed-off-by: Benoit Sevens <bsevens@google.com>
Cc: stable@vger.kernel.org
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Hans Verkuil <hverkuil@xs4all.nl>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 378455392
(cherry picked from commit ecf2b43018da9579842c774b7f35dbe11b5c38dd)
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I959a6374ba7adf021fc19da755f5c7611fef9b8c
2025-12-26 17:07:53 +08:00
Santosh Sakore
f8b865aaa6 msm: adsprpc: use-after-free (UAF) in global maps
Currently, remote heap maps get added to the global list before the
fastrpc_internal_mmap function completes the mapping. Meanwhile, the
fastrpc_internal_munmap function accesses the map, starts unmapping, and
frees the map before the fastrpc_internal_mmap function completes,
resulting in a use-after-free (UAF) issue. Add the map to the list after
the fastrpc_internal_mmap function completes the mapping.
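The ordering fix can be sketched as: fully initialize the map first, and only then link it onto the global list. Illustrative structure and names, not the adsprpc code:

```c
#include <assert.h>
#include <stddef.h>

struct map_sketch {
	int initialized;
	struct map_sketch *next;
};

static struct map_sketch *global_maps;

/* Publish only after every field is set, so a concurrent unmap path
 * can never find (and free) a half-constructed map. */
static void publish_map(struct map_sketch *m)
{
	m->initialized = 1;    /* complete all setup first ... */
	m->next = global_maps; /* ... then make it reachable   */
	global_maps = m;
}
```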

Change-Id: I73c536718f3228b7cbb7a19b76270e0dd3e32bd1
Acked-by: Abhishek Singh <abhishes@qti.qualcomm.com>
Signed-off-by: Santosh Sakore <quic_ssakore@quicinc.com>
(cherry picked from commit 6f39d9be6244a1c23397fd959bee425be4440849)
2025-12-26 17:07:53 +08:00
Shalini Manjunatha
a037977eb0 BACKPORT: dsp: afe: check for param size before copying
Check for the proper param size before copying,
to avoid buffer overflow.
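The check boils down to validating the incoming size against the destination before the copy. A hedged sketch with assumed names:

```c
#include <assert.h>
#include <string.h>

/* Copy a param payload only if it fits the destination buffer. */
static int copy_param_checked(void *dst, size_t dst_sz,
			      const void *src, size_t src_sz)
{
	if (src_sz > dst_sz)
		return -1; /* reject instead of overflowing */
	memcpy(dst, src, src_sz);
	return 0;
}
```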

Original-Change-Id: I70c52e6ab76f528ea3714784ab9013b070839c40
Signed-off-by: Shalini Manjunatha <quic_c_shalma@quicinc.com>
Change-Id: Ic7fa9b3dd047d8eeba3cea02b99d6bc5b9df8daf
2025-12-26 17:07:33 +08:00
Samuel Pascua
a41b337a43 lz4: armv8: use old annotations
Signed-off-by: Samuel Pascua <pascua.samuel.14@gmail.com>
2025-12-26 17:07:20 +08:00
Juhyung Park
b9b4b69b95 lz4: fix LZ4_compress_fast() definition
LZ4_compress_fast() should be exported with wrkmem.

Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
2025-12-26 17:07:20 +08:00
Juhyung Park
e0792f4bb3 lz4: move LZ4_ACCELERATION_* macros to lz4.h
zram uses this.

Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
2025-12-26 17:07:20 +08:00
Juhyung Park
d51b289108 lz4: define LZ4HC_DEFAULT_CLEVEL for compatibility
Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
2025-12-26 17:07:20 +08:00
EmanuelCN
5c71eb07fb lz4: Rename conflicting macro
Rename current to curr because lz4accel.h indirectly includes asm/current.h, which defines current as get_current().
2025-12-26 17:07:20 +08:00
Tashfin Shakeer Rhythm
a4518203d4 lz4: Use ARM64 v8 ASM to accelerate lz4 decompression
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
2025-12-26 17:07:20 +08:00
Dark-Matter7232
8a9ee3a4b1 lz4armv8: Update assembly instructions from Huawei kernel drop
Signed-off-by: Dark-Matter7232 <me@const.eu.org>
[Tashar02: Fragment from original commit, improve indentations and reword commit message]
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
2025-12-26 17:07:20 +08:00
阿菌•未霜
c7699b13a9 lib/lz4: Import arm64 V8 ASM lz4 decompression acceleration
Change-Id: I3c8dd91df090bb692784a6b7a61c8877b1e1dfba
2025-12-26 17:07:20 +08:00
EmanuelCN
d65ac21a95 lz4: Run clang-format
2025-12-26 17:07:20 +08:00
Chenyang Zhong
1ee5706da6 lz4: enable LZ4_FAST_DEC_LOOP on aarch64 Clang builds
Upstream lz4 mentioned a performance regression on Qualcomm SoCs
when built with Clang, but not with GCC [1]. However, according to my
testing on sm8350 with LLVM Clang 15, this patch does offer a nice
10% boost in decompression, so enable the fast dec loop for Clang
as well.

Testing procedure:
- pre-fill zram with 1GB of real-world zram data dumped under memory
  pressure, for example
  $ dd if=/sdcard/zram.test of=/dev/block/zram0 bs=1m count=1000
- $ fio --readonly --name=randread --direct=1 --rw=randread \
  --ioengine=psync --randrepeat=0 --numjobs=4 --iodepth=1 \
  --group_reporting=1 --filename=/dev/block/zram0 --bs=4K --size=1000M

Results:
- vanilla lz4: read: IOPS=1646k, BW=6431MiB/s (6743MB/s)(4000MiB/622msec)
- lz4 fast dec: read: IOPS=1775k, BW=6932MiB/s (7269MB/s)(4000MiB/577msec)

[1] lz4/lz4#707

Signed-off-by: Chenyang Zhong <zhongcy95@gmail.com>
Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
2025-12-26 17:07:19 +08:00
Juhyung Park
34caf53141 lz4: adapt to Linux kernel
A quick benchmark shows this improves zram performance by 3.8% in
4K blocks, 3.4% in 1M blocks.

Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
2025-12-26 17:07:19 +08:00
Juhyung Park
f031c99645 lz4: import v1.10.0 from upstream
Change-Id: Ic8937ac5cc952272ab8cb26cc73361f255813264
Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
2025-12-26 17:07:19 +08:00
Guo Xuenan
3eda101d26 lz4: fix LZ4_decompress_safe_partial read out of bound
commit eafc0a02391b7b36617b36c97c4b5d6832cf5e24 upstream.

When partialDecoding, it is EOF if we've either filled the output buffer
or can't proceed with reading an offset for following match.

In some extreme corner cases when compressed data is suitably corrupted,
UAF will occur.  As reported by KASAN [1], LZ4_decompress_safe_partial
may lead to an out-of-bounds read during decoding.  lz4 upstream has
fixed it [2] and this issue has been discussed here [3] before.

The current decompression routine was ported from lz4 v1.8.3; bumping
lib/lz4 to v1.9.+ is certainly a huge amount of work to be done later,
so we'd better fix it first.

[1] https://lore.kernel.org/all/000000000000830d1205cf7f0477@google.com/
[2] c5d6f8a8be#
[3] https://lore.kernel.org/all/CC666AE8-4CA4-4951-B6FB-A2EFDE3AC03B@fb.com/

Link: https://lkml.kernel.org/r/20211111105048.2006070-1-guoxuenan@huawei.com
Reported-by: syzbot+63d688f1d899c588fb71@syzkaller.appspotmail.com
Change-Id: I24b1fe4aaed8b89b65f66d753b72a2f9f32ac79b
Signed-off-by: Guo Xuenan <guoxuenan@huawei.com>
Reviewed-by: Nick Terrell <terrelln@fb.com>
Acked-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Cc: Yann Collet <cyan@fb.com>
Cc: Chengyang Fan <cy.fan@huawei.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-12-26 17:07:16 +08:00
Gao Xiang
fa77fadadc lib/lz4: explicitly support in-place decompression
LZ4's final literal copy can overlap the source when doing in-place
decompression, so it's unsafe to just use memcpy() (or any optimized
memcpy approach); memmove() must be used instead.
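The difference is a single call: memmove() is defined for overlapping ranges, memcpy() is not. A minimal demonstration:

```c
#include <assert.h>
#include <string.h>

/* In-place decompression can copy literals into a range that overlaps
 * the source; memmove() is the only standard-conforming choice there. */
static void copy_literals(unsigned char *dst, const unsigned char *src,
			  size_t n)
{
	memmove(dst, src, n);
}
```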

Upstream LZ4 updated this years ago [1] (the performance impact is
negligible [2] and only a few bytes remain); this commit just
synchronizes the LZ4 upstream code to the kernel side as well.

It can be observed as EROFS in-place decompression failure
on specific files when X86_FEATURE_ERMS is unsupported,
memcpy() optimization of commit 59daa706fb ("x86, mem:
Optimize memcpy by avoiding memory false dependece") will
be enabled then.

Currently most modern x86 CPUs support ERMS; these CPUs just use the
"rep movsb" approach, so there is no problem at all. However, it can
still be verified by forcibly disabling the ERMS feature...

arch/x86/lib/memcpy_64.S:
        ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
-                     "jmp memcpy_erms", X86_FEATURE_ERMS
+                     "jmp memcpy_orig", X86_FEATURE_ERMS

We didn't observe anything strange on arm64/arm/x86 platforms before,
since most memcpy() implementations copy in increasing address order
("copy upwards" [3]), which happens to be the correct order for in-place
decompression. But it really needs an update to memmove() for sure,
considering it's undefined behavior according to the standard and some
unique optimizations already exist in the kernel.

[1]
33cb8518ac
[2] https://github.com/lz4/lz4/pull/717#issuecomment-497818921
[3] https://sourceware.org/bugzilla/show_bug.cgi?id=12518

Link: https://lkml.kernel.org/r/20201122030749.2698994-1-hsiangkao@redhat.com
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
Reviewed-by: Nick Terrell <terrelln@fb.com>
Cc: Yann Collet <yann.collet.73@gmail.com>
Cc: Miao Xie <miaoxie@huawei.com>
Cc: Chao Yu <yuchao0@huawei.com>
Cc: Li Guifu <bluce.liguifu@huawei.com>
Cc: Guo Xuenan <guoxuenan@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: Iec77608d7cd5201f761ac78a34b2fc617294c495
2025-12-26 17:06:34 +08:00
Nick Terrell
c7d5cb0d8e lz4: fix kernel decompression speed
This patch replaces all memcpy() calls with LZ4_memcpy() which calls
__builtin_memcpy() so the compiler can inline it.

LZ4 relies heavily on memcpy() with a constant size being inlined. In
x86 and i386 pre-boot environments memcpy() cannot be inlined because
memcpy() doesn't get defined as __builtin_memcpy().

An equivalent patch has been applied upstream so that the next import
won't lose this change [1].
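The core of the patch is essentially one wrapper; upstream names it LZ4_memcpy, and a sketch of the idea looks like this (gcc/clang builtin assumed):

```c
#include <assert.h>
#include <string.h>

/* Force the compiler builtin so constant-size copies get inlined even
 * in environments where memcpy() is a plain out-of-line function
 * (e.g. x86/i386 pre-boot code). */
#define LZ4_memcpy(dst, src, size) __builtin_memcpy((dst), (src), (size))
```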

I've measured the kernel decompression speed using QEMU before and after
this patch for the x86_64 and i386 architectures.  The speed-up is about
10x as shown below.

Code	Arch	Kernel Size	Time	Speed
v5.8	x86_64	11504832 B	148 ms	 79 MB/s
patch	x86_64	11503872 B	 13 ms	885 MB/s
v5.8	i386	 9621216 B	 91 ms	106 MB/s
patch	i386	 9620224 B	 10 ms	962 MB/s

I also measured the time to decompress the initramfs on x86_64, i386,
and arm.  All three show the same decompression speed before and after,
as expected.

[1] https://github.com/lz4/lz4/pull/890

Signed-off-by: Nick Terrell <terrelln@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Yann Collet <yann.collet.73@gmail.com>
Cc: Gao Xiang <gaoxiang25@huawei.com>
Cc: Sven Schmidt <4sschmid@informatik.uni-hamburg.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Arvind Sankar <nivedita@alum.mit.edu>
Link: http://lkml.kernel.org/r/20200803194022.2966806-1-nickrterrell@gmail.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: I3e725b70595227145a8c8b42a6626cb0629fdddf
2025-12-26 17:06:29 +08:00
Gao Xiang
8f6f687324 lib/lz4: update LZ4 decompressor module
Update the LZ4 compression module based on LZ4 v1.8.3 in order for the
erofs file system to use the newest LZ4_decompress_safe_partial(), which
can now decode exactly the number of bytes requested [1], to take the
place of the open-coded hack in the erofs file system itself.

Currently, apart from the erofs file system, no other users use
LZ4_decompress_safe_partial, so no worry about the interface.

In addition, LZ4 v1.8.x boosts up decompression speed compared to the
current code which is based on LZ4 v1.7.3, mainly due to shortcut
optimization for the specific common LZ4-sequences [2].

lzbench testdata (tested in kirin710, 8 cores, 4 big cores
at 2189Mhz, 2GB DDR RAM at 1622Mhz, with enwik8 testdata [3]):

Compressor name Compress. Decompress. Compr. size Ratio Filename
memcpy                   5004 MB/s  4924 MB/s   100000000 100.00 enwik8
lz4hc 1.7.3 -9             12 MB/s   653 MB/s    42203253  42.20 enwik8
lz4hc 1.8.0 -9             12 MB/s   908 MB/s    42203096  42.20 enwik8
lz4hc 1.8.3 -9             11 MB/s   965 MB/s    42203094  42.20 enwik8

[1] https://github.com/lz4/lz4/issues/566
    08d347b5b2

[2] v1.8.1 perf: slightly faster compression and decompression speed
    a31b7058cb
    v1.8.2 perf: slightly faster HC compression and decompression speed
    45f8603aae
    1a191b3f8d

[3] http://mattmahoney.net/dc/textdata.html
    http://mattmahoney.net/dc/enwik8.zip

Link: http://lkml.kernel.org/r/1537181207-21932-1-git-send-email-gaoxiang25@huawei.com
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Tested-by: Guo Xuenan <guoxuenan@huawei.com>
Cc: Colin Ian King <colin.king@canonical.com>
Cc: Yann Collet <yann.collet.73@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Fang Wei <fangwei1@huawei.com>
Cc: Chao Yu <yuchao0@huawei.com>
Cc: Miao Xie <miaoxie@huawei.com>
Cc: Sven Schmidt <4sschmid@informatik.uni-hamburg.de>
Cc: Kyungsik Lee <kyungsik.lee@lge.com>
Cc: <weidu.du@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: I457b806e87ee22830537cf3927140202de78c11a
2025-12-26 17:06:22 +08:00
Nguyen Manh Dung
1574397aa6 ARM64: configs: CONFIG_USERFAULTFD=y
2025-12-23 06:20:09 +07:00
Samuel Pascua
fb2e9a4f1d Merge branch 'android-4.14' of https://github.com/pascua28/android_kernel_samsung_sm7150 into 16.0
Change-Id: I70dd6d9d8f57595d95c2b8ce0cb76a866c43d949
2025-10-02 22:19:27 +08:00
Samuel Pascua
90c1db580b Merge branch 'android-4.14' of https://github.com/pascua28/android_kernel_samsung_sm7150 into 16.0
Change-Id: Ifb1f6c7f758f8bbb1a0ff880da603171ff88fa5f
2025-09-29 22:10:10 +08:00
Samuel Pascua
6f0e0ecc18 kernelsu/susfs: import from rsuntk/KernelSU@41e8600de7
Change-Id: Iae93122929b9cb992bdfc714fd34711e86dd1602
Signed-off-by: Samuel Pascua <pascua.samuel.14@gmail.com>
2025-09-28 12:35:46 +08:00
163 changed files with 18674 additions and 4874 deletions

View File

@@ -194,13 +194,6 @@ Optional Properties:
Specify the number of macrotiling channels for this chip.
This is programmed into certain registers and also pass to
the user as a property.
- qcom,l2pc-cpu-mask:
Disables L2PC on masked CPUs when any of Graphics
rendering thread is running on masked CPUs.
Bit 0 is for CPU-0, bit 1 is for CPU-1...
- qcom,l2pc-update-queue:
Disables L2PC on masked CPUs at queue time when it's true.
- qcom,snapshot-size:
Specify the size of snapshot in bytes. This will override

View File

@@ -9057,6 +9057,19 @@ S: Maintained
F: arch/arm/boot/dts/mmp*
F: arch/arm/mach-mmp/
MMU GATHER AND TLB INVALIDATION
M: Will Deacon <will.deacon@arm.com>
M: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
M: Andrew Morton <akpm@linux-foundation.org>
M: Nick Piggin <npiggin@gmail.com>
M: Peter Zijlstra <peterz@infradead.org>
L: linux-arch@vger.kernel.org
L: linux-mm@kvack.org
S: Maintained
F: arch/*/include/asm/tlb.h
F: include/asm-generic/tlb.h
F: mm/mmu_gather.c
MN88472 MEDIA DRIVER
M: Antti Palosaari <crope@iki.fi>
L: linux-media@vger.kernel.org

View File

@@ -28,7 +28,6 @@
#include <asm/byteorder.h>
#include <asm/memory.h>
#include <asm-generic/pci_iomap.h>
#include <linux/msm_rtb.h>
#include <xen/xen.h>
/*
@@ -62,24 +61,23 @@ void __raw_readsl(const volatile void __iomem *addr, void *data, int longlen);
* the bus. Rather than special-case the machine, just let the compiler
* generate the access for CPUs prior to ARMv6.
*/
#define __raw_readw_no_log(a) (__chk_io_ptr(a), \
*(volatile unsigned short __force *)(a))
#define __raw_writew_no_log(v, a) ((void)(__chk_io_ptr(a), \
*(volatile unsigned short __force *)\
(a) = (v)))
#define __raw_readw(a) (__chk_io_ptr(a), *(volatile unsigned short __force *)(a))
#define __raw_writew(v,a) ((void)(__chk_io_ptr(a), *(volatile unsigned short __force *)(a) = (v)))
#else
/*
* When running under a hypervisor, we want to avoid I/O accesses with
* writeback addressing modes as these incur a significant performance
* overhead (the address generation must be emulated in software).
*/
static inline void __raw_writew_no_log(u16 val, volatile void __iomem *addr)
#define __raw_writew __raw_writew
static inline void __raw_writew(u16 val, volatile void __iomem *addr)
{
asm volatile("strh %1, %0"
: : "Q" (*(volatile u16 __force *)addr), "r" (val));
}
static inline u16 __raw_readw_no_log(const volatile void __iomem *addr)
#define __raw_readw __raw_readw
static inline u16 __raw_readw(const volatile void __iomem *addr)
{
u16 val;
asm volatile("ldrh %0, %1"
@@ -89,19 +87,22 @@ static inline u16 __raw_readw_no_log(const volatile void __iomem *addr)
}
#endif
static inline void __raw_writeb_no_log(u8 val, volatile void __iomem *addr)
#define __raw_writeb __raw_writeb
static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
{
asm volatile("strb %1, %0"
: : "Qo" (*(volatile u8 __force *)addr), "r" (val));
}
static inline void __raw_writel_no_log(u32 val, volatile void __iomem *addr)
#define __raw_writel __raw_writel
static inline void __raw_writel(u32 val, volatile void __iomem *addr)
{
asm volatile("str %1, %0"
: : "Qo" (*(volatile u32 __force *)addr), "r" (val));
}
static inline void __raw_writeq_no_log(u64 val, volatile void __iomem *addr)
#define __raw_writeq __raw_writeq
static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
{
register u64 v asm ("r2");
@@ -112,7 +113,8 @@ static inline void __raw_writeq_no_log(u64 val, volatile void __iomem *addr)
: "r" (v));
}
static inline u8 __raw_readb_no_log(const volatile void __iomem *addr)
#define __raw_readb __raw_readb
static inline u8 __raw_readb(const volatile void __iomem *addr)
{
u8 val;
asm volatile("ldrb %0, %1"
@@ -121,7 +123,8 @@ static inline u8 __raw_readb_no_log(const volatile void __iomem *addr)
return val;
}
static inline u32 __raw_readl_no_log(const volatile void __iomem *addr)
#define __raw_readl __raw_readl
static inline u32 __raw_readl(const volatile void __iomem *addr)
{
u32 val;
asm volatile("ldr %0, %1"
@@ -130,7 +133,8 @@ static inline u32 __raw_readl_no_log(const volatile void __iomem *addr)
return val;
}
static inline u64 __raw_readq_no_log(const volatile void __iomem *addr)
#define __raw_readq __raw_readq
static inline u64 __raw_readq(const volatile void __iomem *addr)
{
register u64 val asm ("r2");
@@ -140,48 +144,6 @@ static inline u64 __raw_readq_no_log(const volatile void __iomem *addr)
return val;
}
/*
* There may be cases when clients don't want to support or can't support the
* logging. The appropriate functions can be used but clients should carefully
* consider why they can't support the logging.
*/
#define __raw_write_logged(v, a, _t) ({ \
int _ret; \
volatile void __iomem *_a = (a); \
void *_addr = (void __force *)(_a); \
_ret = uncached_logk(LOGK_WRITEL, _addr); \
ETB_WAYPOINT; \
__raw_write##_t##_no_log((v), _a); \
if (_ret) \
LOG_BARRIER; \
})
#define __raw_writeb(v, a) __raw_write_logged((v), (a), b)
#define __raw_writew(v, a) __raw_write_logged((v), (a), w)
#define __raw_writel(v, a) __raw_write_logged((v), (a), l)
#define __raw_writeq(v, a) __raw_write_logged((v), (a), q)
#define __raw_read_logged(a, _l, _t) ({ \
unsigned _t __a; \
const volatile void __iomem *_a = (a); \
void *_addr = (void __force *)(_a); \
int _ret; \
_ret = uncached_logk(LOGK_READL, _addr); \
ETB_WAYPOINT; \
__a = __raw_read##_l##_no_log(_a);\
if (_ret) \
LOG_BARRIER; \
__a; \
})
#define __raw_readb(a) __raw_read_logged((a), b, char)
#define __raw_readw(a) __raw_read_logged((a), w, short)
#define __raw_readl(a) __raw_read_logged((a), l, int)
#define __raw_readq(a) __raw_read_logged((a), q, long long)
/*
* Architecture ioremap implementation.
*/
@@ -363,24 +325,12 @@ extern void _memset_io(volatile void __iomem *, int, size_t);
__raw_readl(c)); __r; })
#define readq_relaxed(c) ({ u64 __r = le64_to_cpu((__force __le64) \
__raw_readq(c)); __r; })
#define readb_relaxed_no_log(c) ({ u8 __r = __raw_readb_no_log(c); __r; })
#define readl_relaxed_no_log(c) ({ u32 __r = le32_to_cpu((__force __le32) \
__raw_readl_no_log(c)); __r; })
#define readq_relaxed_no_log(c) ({ u64 __r = le64_to_cpu((__force __le64) \
__raw_readq_no_log(c)); __r; })
#define writeb_relaxed(v, c) __raw_writeb(v, c)
#define writew_relaxed(v, c) __raw_writew((__force u16) cpu_to_le16(v), c)
#define writel_relaxed(v, c) __raw_writel((__force u32) cpu_to_le32(v), c)
#define writeq_relaxed(v, c) __raw_writeq((__force u64) cpu_to_le64(v), c)
#define writeb_relaxed_no_log(v, c) ((void)__raw_writeb_no_log((v), (c)))
#define writew_relaxed_no_log(v, c) __raw_writew_no_log((__force u16) \
cpu_to_le16(v), c)
#define writel_relaxed_no_log(v, c) __raw_writel_no_log((__force u32) \
cpu_to_le32(v), c)
#define writeq_relaxed_no_log(v, c) __raw_writeq_no_log((__force u64) \
cpu_to_le64(v), c)
#define readb(c) ({ u8 __v = readb_relaxed(c); __iormb(); __v; })
#define readw(c) ({ u16 __v = readw_relaxed(c); __iormb(); __v; })
@@ -401,24 +351,6 @@ extern void _memset_io(volatile void __iomem *, int, size_t);
#define writesw(p,d,l) __raw_writesw(p,d,l)
#define writesl(p,d,l) __raw_writesl(p,d,l)
#define readb_no_log(c) \
({ u8 __v = readb_relaxed_no_log(c); __iormb(); __v; })
#define readw_no_log(c) \
({ u16 __v = readw_relaxed_no_log(c); __iormb(); __v; })
#define readl_no_log(c) \
({ u32 __v = readl_relaxed_no_log(c); __iormb(); __v; })
#define readq_no_log(c) \
({ u64 __v = readq_relaxed_no_log(c); __iormb(); __v; })
#define writeb_no_log(v, c) \
({ __iowmb(); writeb_relaxed_no_log((v), (c)); })
#define writew_no_log(v, c) \
({ __iowmb(); writew_relaxed_no_log((v), (c)); })
#define writel_no_log(v, c) \
({ __iowmb(); writel_relaxed_no_log((v), (c)); })
#define writeq_no_log(v, c) \
({ __iowmb(); writeq_relaxed_no_log((v), (c)); })
#ifndef __ARMBE__
static inline void memset_io(volatile void __iomem *dst, unsigned c,
size_t count)

View File

@@ -46,21 +46,21 @@ EXPORT_SYMBOL(atomic_io_modify);
void _memcpy_fromio(void *to, const volatile void __iomem *from, size_t count)
{
while (count && (!IO_CHECK_ALIGN(from, 8) || !IO_CHECK_ALIGN(to, 8))) {
*(u8 *)to = readb_relaxed_no_log(from);
*(u8 *)to = readb_relaxed(from);
from++;
to++;
count--;
}
while (count >= 8) {
*(u64 *)to = readq_relaxed_no_log(from);
*(u64 *)to = readq_relaxed(from);
from += 8;
to += 8;
count -= 8;
}
while (count) {
*(u8 *)to = readb_relaxed_no_log(from);
*(u8 *)to = readb_relaxed(from);
from++;
to++;
count--;
@@ -76,21 +76,21 @@ void _memcpy_toio(volatile void __iomem *to, const void *from, size_t count)
void *p = (void __force *)to;
while (count && (!IO_CHECK_ALIGN(p, 8) || !IO_CHECK_ALIGN(from, 8))) {
writeb_relaxed_no_log(*(volatile u8 *)from, p);
writeb_relaxed(*(volatile u8 *)from, p);
from++;
p++;
count--;
}
while (count >= 8) {
writeq_relaxed_no_log(*(volatile u64 *)from, p);
writeq_relaxed(*(volatile u64 *)from, p);
from += 8;
p += 8;
count -= 8;
}
while (count) {
writeb_relaxed_no_log(*(volatile u8 *)from, p);
writeb_relaxed(*(volatile u8 *)from, p);
from++;
p++;
count--;
@@ -111,19 +111,19 @@ void _memset_io(volatile void __iomem *dst, int c, size_t count)
qc |= qc << 32;
while (count && !IO_CHECK_ALIGN(p, 8)) {
writeb_relaxed_no_log(c, p);
writeb_relaxed(c, p);
p++;
count--;
}
while (count >= 8) {
writeq_relaxed_no_log(qc, p);
writeq_relaxed(qc, p);
p += 8;
count -= 8;
}
while (count) {
writeb_relaxed_no_log(c, p);
writeb_relaxed(c, p);
p++;
count--;
}

View File

@@ -127,6 +127,7 @@ config ARM64
select HAVE_PERF_USER_STACK_DUMP
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_RCU_TABLE_FREE
select HAVE_RCU_TABLE_INVALIDATE
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_KPROBES
select HAVE_KRETPROBES

View File

@@ -5524,11 +5524,10 @@ CONFIG_DAX=y
CONFIG_NVMEM=y
# CONFIG_QCOM_QFPROM is not set
CONFIG_NVMEM_SPMI_SDAM=y
CONFIG_STM=y
# CONFIG_STM_DUMMY is not set
# CONFIG_STM_SOURCE_CONSOLE is not set
# CONFIG_STM_SOURCE_HEARTBEAT is not set
# CONFIG_STM is not set
# CONFIG_INTEL_TH is not set
CONFIG_CORESIGHT_PLACEHOLDER=y
CONFIG_CORESIGHT_AMBA_PLACEHOLDER=y
# CONFIG_FPGA is not set
#
@@ -5789,6 +5788,34 @@ CONFIG_TZIC_USE_QSEECOM=y
# CONFIG_TZIC_DEFAULT is not set
CONFIG_SPU_VERIFY=y
#
# KernelSU
#
CONFIG_KSU=y
# CONFIG_KSU_DEBUG is not set
CONFIG_KSU_ALLOWLIST_WORKAROUND=y
# CONFIG_KSU_CMDLINE is not set
CONFIG_KSU_MANUAL_HOOK=y
#
# KernelSU - SUSFS
#
CONFIG_KSU_SUSFS=y
# CONFIG_KSU_SUSFS_HAS_MAGIC_MOUNT is not set
CONFIG_KSU_SUSFS_SUS_PATH=y
CONFIG_KSU_SUSFS_SUS_MOUNT=y
# CONFIG_KSU_SUSFS_AUTO_ADD_SUS_KSU_DEFAULT_MOUNT is not set
# CONFIG_KSU_SUSFS_AUTO_ADD_SUS_BIND_MOUNT is not set
# CONFIG_KSU_SUSFS_SUS_KSTAT is not set
# CONFIG_KSU_SUSFS_SUS_OVERLAYFS is not set
CONFIG_KSU_SUSFS_TRY_UMOUNT=y
# CONFIG_KSU_SUSFS_AUTO_ADD_TRY_UMOUNT_FOR_BIND_MOUNT is not set
CONFIG_KSU_SUSFS_SPOOF_UNAME=y
CONFIG_KSU_SUSFS_ENABLE_LOG=y
# CONFIG_KSU_SUSFS_HIDE_KSU_SUSFS_SYMBOLS is not set
# CONFIG_KSU_SUSFS_SPOOF_CMDLINE_OR_BOOTCONFIG is not set
# CONFIG_KSU_SUSFS_OPEN_REDIRECT is not set
#
# Firmware Drivers
#
@@ -5851,6 +5878,7 @@ CONFIG_F2FS_SEC_BLOCK_OPERATIONS_DEBUG=y
CONFIG_F2FS_SEC_SUPPORT_DNODE_RELOCATION=y
# CONFIG_FS_DAX is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
# CONFIG_EXPORTFS_BLOCK_OPS is not set
CONFIG_FILE_LOCKING=y
CONFIG_MANDATORY_FILE_LOCKING=y
@@ -5876,7 +5904,9 @@ CONFIG_QUOTACTL=y
CONFIG_FUSE_FS=y
# CONFIG_CUSE is not set
CONFIG_FUSE_SUPPORT_STLOG=y
# CONFIG_OVERLAY_FS is not set
CONFIG_OVERLAY_FS=y
CONFIG_OVERLAY_FS_REDIRECT_DIR=y
CONFIG_OVERLAY_FS_INDEX=y
# CONFIG_INCREMENTAL_FS is not set
#
@@ -6191,8 +6221,6 @@ CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_IPC_LOGGING=y
CONFIG_QCOM_RTB=y
CONFIG_QCOM_RTB_SEPARATE_CPUS=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
@@ -6274,30 +6302,7 @@ CONFIG_DEBUG_ALIGN_RODATA=y
#
CONFIG_SEC_PM=y
CONFIG_SEC_PM_DEBUG=y
CONFIG_CORESIGHT=y
CONFIG_CORESIGHT_LINKS_AND_SINKS=y
CONFIG_CORESIGHT_LINK_AND_SINK_TMC=y
# CONFIG_CORESIGHT_CATU is not set
# CONFIG_CORESIGHT_SINK_TPIU is not set
# CONFIG_CORESIGHT_SINK_ETBV10 is not set
# CONFIG_CORESIGHT_SOURCE_ETM4X is not set
CONFIG_CORESIGHT_DYNAMIC_REPLICATOR=y
# CONFIG_CORESIGHT_DBGUI is not set
CONFIG_CORESIGHT_STM=y
# CONFIG_CORESIGHT_CPU_DEBUG is not set
CONFIG_CORESIGHT_CTI=y
CONFIG_CORESIGHT_OST=y
CONFIG_CORESIGHT_TPDA=y
CONFIG_CORESIGHT_TPDM=y
# CONFIG_CORESIGHT_TPDM_DEFAULT_ENABLE is not set
# CONFIG_CORESIGHT_QPDI is not set
CONFIG_CORESIGHT_HWEVENT=y
CONFIG_CORESIGHT_DUMMY=y
CONFIG_CORESIGHT_REMOTE_ETM=y
CONFIG_CORESIGHT_REMOTE_ETM_DEFAULT_ENABLE=0
CONFIG_CORESIGHT_CSR=y
# CONFIG_CORESIGHT_TGU is not set
CONFIG_CORESIGHT_EVENT=y
# CONFIG_CORESIGHT is not set
#
# Security options

View File

@@ -120,8 +120,8 @@ static inline void gic_write_bpr1(u32 val)
write_sysreg_s(val, SYS_ICC_BPR1_EL1);
}
#define gic_read_typer(c) readq_relaxed_no_log(c)
#define gic_write_irouter(v, c) writeq_relaxed_no_log(v, c)
#define gic_read_typer(c) readq_relaxed(c)
#define gic_write_irouter(v, c) writeq_relaxed(v, c)
#define gic_read_lpir(c) readq_relaxed(c)
#define gic_write_lpir(v, c) writeq_relaxed(v, c)

View File

@@ -30,35 +30,38 @@
#include <asm/early_ioremap.h>
#include <asm/alternative.h>
#include <asm/cpufeature.h>
#include <linux/msm_rtb.h>
#include <xen/xen.h>
/*
* Generic IO read/write. These perform native-endian accesses. Note
* that some architectures will want to re-define __raw_{read,write}w.
*/
static inline void __raw_writeb_no_log(u8 val, volatile void __iomem *addr)
#define __raw_writeb __raw_writeb
static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
{
asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr));
}
static inline void __raw_writew_no_log(u16 val, volatile void __iomem *addr)
#define __raw_writew __raw_writew
static inline void __raw_writew(u16 val, volatile void __iomem *addr)
{
asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr));
}
static inline void __raw_writel_no_log(u32 val, volatile void __iomem *addr)
#define __raw_writel __raw_writel
static inline void __raw_writel(u32 val, volatile void __iomem *addr)
{
asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
}
static inline void __raw_writeq_no_log(u64 val, volatile void __iomem *addr)
#define __raw_writeq __raw_writeq
static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
{
asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
}
static inline u8 __raw_readb_no_log(const volatile void __iomem *addr)
#define __raw_readb __raw_readb
static inline u8 __raw_readb(const volatile void __iomem *addr)
{
u8 val;
asm volatile(ALTERNATIVE("ldrb %w0, [%1]",
@@ -68,7 +71,8 @@ static inline u8 __raw_readb_no_log(const volatile void __iomem *addr)
return val;
}
static inline u16 __raw_readw_no_log(const volatile void __iomem *addr)
#define __raw_readw __raw_readw
static inline u16 __raw_readw(const volatile void __iomem *addr)
{
u16 val;
@@ -79,7 +83,8 @@ static inline u16 __raw_readw_no_log(const volatile void __iomem *addr)
return val;
}
static inline u32 __raw_readl_no_log(const volatile void __iomem *addr)
#define __raw_readl __raw_readl
static inline u32 __raw_readl(const volatile void __iomem *addr)
{
u32 val;
asm volatile(ALTERNATIVE("ldr %w0, [%1]",
@@ -89,7 +94,8 @@ static inline u32 __raw_readl_no_log(const volatile void __iomem *addr)
return val;
}
static inline u64 __raw_readq_no_log(const volatile void __iomem *addr)
#define __raw_readq __raw_readq
static inline u64 __raw_readq(const volatile void __iomem *addr)
{
u64 val;
asm volatile(ALTERNATIVE("ldr %0, [%1]",
@@ -99,48 +105,6 @@ static inline u64 __raw_readq_no_log(const volatile void __iomem *addr)
return val;
}
/*
* There may be cases when clients don't want to support or can't support the
* logging. The appropriate functions can be used, but clients should carefully
* consider why they can't support the logging.
*/
#define __raw_write_logged(v, a, _t) ({ \
int _ret; \
volatile void __iomem *_a = (a); \
void *_addr = (void __force *)(_a); \
_ret = uncached_logk(LOGK_WRITEL, _addr); \
if (_ret) /* CONFIG_SEC_DEBUG */\
ETB_WAYPOINT; \
__raw_write##_t##_no_log((v), _a); \
if (_ret) \
LOG_BARRIER; \
})
#define __raw_writeb(v, a) __raw_write_logged((v), a, b)
#define __raw_writew(v, a) __raw_write_logged((v), a, w)
#define __raw_writel(v, a) __raw_write_logged((v), a, l)
#define __raw_writeq(v, a) __raw_write_logged((v), a, q)
#define __raw_read_logged(a, _l, _t) ({ \
_t __a; \
const volatile void __iomem *_a = (a); \
void *_addr = (void __force *)(_a); \
int _ret; \
_ret = uncached_logk(LOGK_READL, _addr); \
if (_ret) /* CONFIG_SEC_DEBUG */ \
ETB_WAYPOINT; \
__a = __raw_read##_l##_no_log(_a); \
if (_ret) \
LOG_BARRIER; \
__a; \
})
#define __raw_readb(a) __raw_read_logged((a), b, u8)
#define __raw_readw(a) __raw_read_logged((a), w, u16)
#define __raw_readl(a) __raw_read_logged((a), l, u32)
#define __raw_readq(a) __raw_read_logged((a), q, u64)
/* IO barriers */
#define __iormb(v) \
({ \
@@ -178,22 +142,6 @@ static inline u64 __raw_readq_no_log(const volatile void __iomem *addr)
#define writel_relaxed(v,c) ((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
#define writeq_relaxed(v,c) ((void)__raw_writeq((__force u64)cpu_to_le64(v),(c)))
#define readb_relaxed_no_log(c) ({ u8 __v = __raw_readb_no_log(c); __v; })
#define readw_relaxed_no_log(c) \
({ u16 __v = le16_to_cpu((__force __le16)__raw_readw_no_log(c)); __v; })
#define readl_relaxed_no_log(c) \
({ u32 __v = le32_to_cpu((__force __le32)__raw_readl_no_log(c)); __v; })
#define readq_relaxed_no_log(c) \
({ u64 __v = le64_to_cpu((__force __le64)__raw_readq_no_log(c)); __v; })
#define writeb_relaxed_no_log(v, c) ((void)__raw_writeb_no_log((v), (c)))
#define writew_relaxed_no_log(v, c) \
((void)__raw_writew_no_log((__force u16)cpu_to_le16(v), (c)))
#define writel_relaxed_no_log(v, c) \
((void)__raw_writel_no_log((__force u32)cpu_to_le32(v), (c)))
#define writeq_relaxed_no_log(v, c) \
((void)__raw_writeq_no_log((__force u64)cpu_to_le64(v), (c)))
/*
* I/O memory access primitives. Reads are ordered relative to any
* following Normal memory access. Writes are ordered relative to any prior
@@ -209,24 +157,6 @@ static inline u64 __raw_readq_no_log(const volatile void __iomem *addr)
#define writel(v,c) ({ __iowmb(); writel_relaxed((v),(c)); })
#define writeq(v,c) ({ __iowmb(); writeq_relaxed((v),(c)); })
#define readb_no_log(c) \
({ u8 __v = readb_relaxed_no_log(c); __iormb(__v); __v; })
#define readw_no_log(c) \
({ u16 __v = readw_relaxed_no_log(c); __iormb(__v); __v; })
#define readl_no_log(c) \
({ u32 __v = readl_relaxed_no_log(c); __iormb(__v); __v; })
#define readq_no_log(c) \
({ u64 __v = readq_relaxed_no_log(c); __iormb(__v); __v; })
#define writeb_no_log(v, c) \
({ __iowmb(); writeb_relaxed_no_log((v), (c)); })
#define writew_no_log(v, c) \
({ __iowmb(); writew_relaxed_no_log((v), (c)); })
#define writel_no_log(v, c) \
({ __iowmb(); writel_relaxed_no_log((v), (c)); })
#define writeq_no_log(v, c) \
({ __iowmb(); writeq_relaxed_no_log((v), (c)); })
/*
* I/O port access primitives.
*/

View File

@@ -34,19 +34,14 @@
#include <asm/pgtable.h>
#include <asm/sysreg.h>
#include <asm/tlbflush.h>
#include <linux/msm_rtb.h>
static inline void contextidr_thread_switch(struct task_struct *next)
{
pid_t pid = task_pid_nr(next);
if (!IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR))
return;
write_sysreg(pid, contextidr_el1);
write_sysreg(task_pid_nr(next), contextidr_el1);
isb();
}
/*

View File

@@ -381,6 +381,7 @@ static inline int pmd_protnone(pmd_t pmd)
#define pmd_present(pmd) pte_present(pmd_pte(pmd))
#define pmd_dirty(pmd) pte_dirty(pmd_pte(pmd))
#define pmd_young(pmd) pte_young(pmd_pte(pmd))
#define pmd_valid(pmd) pte_valid(pmd_pte(pmd))
#define pmd_wrprotect(pmd) pte_pmd(pte_wrprotect(pmd_pte(pmd)))
#define pmd_mkold(pmd) pte_pmd(pte_mkold(pmd_pte(pmd)))
#define pmd_mkwrite(pmd) pte_pmd(pte_mkwrite(pmd_pte(pmd)))
@@ -459,8 +460,11 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
#else
*pmdp = pmd;
#endif
dsb(ishst);
isb();
if (pmd_valid(pmd)) {
dsb(ishst);
isb();
}
}
static inline void pmd_clear(pmd_t *pmdp)
@@ -512,6 +516,7 @@ static inline void pte_unmap(pte_t *pte) { }
#define pud_none(pud) (!pud_val(pud))
#define pud_bad(pud) (!(pud_val(pud) & PUD_TABLE_BIT))
#define pud_present(pud) pte_present(pud_pte(pud))
#define pud_valid(pud) pte_valid(pud_pte(pud))
static inline void set_pud(pud_t *pudp, pud_t pud)
{
@@ -529,8 +534,11 @@ static inline void set_pud(pud_t *pudp, pud_t pud)
#else
*pudp = pud;
#endif
dsb(ishst);
isb();
if (pud_valid(pud)) {
dsb(ishst);
isb();
}
}
static inline void pud_clear(pud_t *pudp)

View File

@@ -25,44 +25,40 @@
#include <linux/rkp.h>
#endif
#ifdef CONFIG_HAVE_RCU_TABLE_FREE
#define tlb_remove_entry(tlb, entry) tlb_remove_table(tlb, entry)
static inline void __tlb_remove_table(void *_table)
{
free_page_and_swap_cache((struct page *)_table);
}
#else
#define tlb_remove_entry(tlb, entry) tlb_remove_page(tlb, entry)
#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
static void tlb_flush(struct mmu_gather *tlb);
#include <asm-generic/tlb.h>
static inline void tlb_flush(struct mmu_gather *tlb)
{
struct vm_area_struct vma = { .vm_mm = tlb->mm, };
bool last_level = !tlb->freed_tables;
unsigned long stride = tlb_get_unmap_size(tlb);
/*
* The ASID allocator will either invalidate the ASID or mark
* it as used.
* If we're tearing down the address space then we only care about
* invalidating the walk-cache, since the ASID allocator won't
* reallocate our ASID without invalidating the entire TLB.
*/
if (tlb->fullmm)
if (tlb->fullmm) {
if (!last_level)
flush_tlb_mm(tlb->mm);
return;
}
/*
* The intermediate page table levels are already handled by
* the __(pte|pmd|pud)_free_tlb() functions, so last level
* TLBI is sufficient here.
*/
__flush_tlb_range(&vma, tlb->start, tlb->end, true);
__flush_tlb_range(&vma, tlb->start, tlb->end, stride, last_level);
}
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
unsigned long addr)
{
__flush_tlb_pgtable(tlb->mm, addr);
pgtable_page_dtor(pte);
tlb_remove_entry(tlb, pte);
tlb_remove_table(tlb, pte);
}
#if CONFIG_PGTABLE_LEVELS > 2
@@ -74,7 +70,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
rkp_ro_free((void *)pmdp);
} else
#endif
tlb_remove_entry(tlb, virt_to_page(pmdp));
tlb_remove_table(tlb, virt_to_page(pmdp));
}
#endif
@@ -87,7 +83,7 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
rkp_ro_free((void *)pudp);
else
#endif
tlb_remove_entry(tlb, virt_to_page(pudp));
tlb_remove_table(tlb, virt_to_page(pudp));
}
#endif

View File

@@ -70,43 +70,73 @@
})
/*
* TLB Management
* ==============
* TLB Invalidation
* ================
*
* The TLB specific code is expected to perform whatever tests it needs
* to determine if it should invalidate the TLB for each call. Start
* addresses are inclusive and end addresses are exclusive; it is safe to
* round these addresses down.
* This header file implements the low-level TLB invalidation routines
* (sometimes referred to as "flushing" in the kernel) for arm64.
*
* Every invalidation operation uses the following template:
*
* DSB ISHST // Ensure prior page-table updates have completed
* TLBI ... // Invalidate the TLB
* DSB ISH // Ensure the TLB invalidation has completed
* if (invalidated kernel mappings)
* ISB // Discard any instructions fetched from the old mapping
*
*
* The following functions form part of the "core" TLB invalidation API,
* as documented in Documentation/core-api/cachetlb.rst:
*
* flush_tlb_all()
*
* Invalidate the entire TLB.
* Invalidate the entire TLB (kernel + user) on all CPUs
*
* flush_tlb_mm(mm)
* Invalidate an entire user address space on all CPUs.
* The 'mm' argument identifies the ASID to invalidate.
*
* Invalidate all TLB entries in a particular address space.
* - mm - mm_struct describing address space
* flush_tlb_range(vma, start, end)
* Invalidate the virtual-address range '[start, end)' on all
* CPUs for the user address space corresponding to 'vma->mm'.
* Note that this operation also invalidates any walk-cache
* entries associated with translations for the specified address
* range.
*
* flush_tlb_range(mm,start,end)
* flush_tlb_kernel_range(start, end)
* Same as flush_tlb_range(..., start, end), but applies to
* kernel mappings rather than a particular user address space.
* Whilst not explicitly documented, this function is used when
* unmapping pages from vmalloc/io space.
*
* Invalidate a range of TLB entries in the specified address
* space.
* - mm - mm_struct describing address space
* - start - start address (may not be aligned)
* - end - end address (exclusive, may not be aligned)
* flush_tlb_page(vma, addr)
* Invalidate a single user mapping for address 'addr' in the
* address space corresponding to 'vma->mm'. Note that this
* operation only invalidates a single, last-level page-table
* entry and therefore does not affect any walk-caches.
*
* flush_tlb_page(vaddr,vma)
*
* Invalidate the specified page in the specified address range.
* - vaddr - virtual address (may not be aligned)
* - vma - vma_struct describing address range
* Next, we have some undocumented invalidation routines that you probably
* don't want to call unless you know what you're doing:
*
* flush_kern_tlb_page(kaddr)
* local_flush_tlb_all()
* Same as flush_tlb_all(), but only applies to the calling CPU.
*
* Invalidate the TLB entry for the specified page. The address
* will be in the kernel's virtual memory space. Current uses
* only require the D-TLB to be invalidated.
* - kaddr - Kernel virtual memory address
* __flush_tlb_kernel_pgtable(addr)
* Invalidate a single kernel mapping for address 'addr' on all
* CPUs, ensuring that any walk-cache entries associated with the
* translation are also invalidated.
*
* __flush_tlb_range(vma, start, end, stride, last_level)
* Invalidate the virtual-address range '[start, end)' on all
* CPUs for the user address space corresponding to 'vma->mm'.
* The invalidation operations are issued at a granularity
* determined by 'stride' and only affect any walk-cache entries
* if 'last_level' is false.
*
*
* Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
* on top of these routines, since that is our interface to the mmu_gather
* API as used by munmap() and friends.
*/
static inline void local_flush_tlb_all(void)
{
@@ -149,25 +179,28 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
* This is meant to avoid soft lock-ups on large TLB flushing ranges and not
* necessarily a performance improvement.
*/
#define MAX_TLB_RANGE (1024UL << PAGE_SHIFT)
#define MAX_TLBI_OPS 1024UL
static inline void __flush_tlb_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end,
bool last_level)
unsigned long stride, bool last_level)
{
unsigned long asid = ASID(vma->vm_mm);
unsigned long addr;
if ((end - start) > MAX_TLB_RANGE) {
if ((end - start) > (MAX_TLBI_OPS * stride)) {
flush_tlb_mm(vma->vm_mm);
return;
}
/* Convert the stride into units of 4k */
stride >>= 12;
start = __TLBI_VADDR(start, asid);
end = __TLBI_VADDR(end, asid);
dsb(ishst);
for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
for (addr = start; addr < end; addr += stride) {
if (last_level) {
__tlbi(vale1is, addr);
__tlbi_user(vale1is, addr);
@@ -182,14 +215,18 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
static inline void flush_tlb_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
__flush_tlb_range(vma, start, end, false);
/*
* We cannot use leaf-only invalidation here, since we may be invalidating
* table entries as part of collapsing hugepages or moving page tables.
*/
__flush_tlb_range(vma, start, end, PAGE_SIZE, false);
}
static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
unsigned long addr;
if ((end - start) > MAX_TLB_RANGE) {
if ((end - start) > (MAX_TLBI_OPS * PAGE_SIZE)) {
flush_tlb_all();
return;
}
@@ -199,7 +236,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
dsb(ishst);
for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
__tlbi(vaae1is, addr);
__tlbi(vaale1is, addr);
dsb(ish);
isb();
}
@@ -208,20 +245,11 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
* Used to invalidate the TLB (walk caches) corresponding to intermediate page
* table levels (pgd/pud/pmd).
*/
static inline void __flush_tlb_pgtable(struct mm_struct *mm,
unsigned long uaddr)
{
unsigned long addr = __TLBI_VADDR(uaddr, ASID(mm));
__tlbi(vae1is, addr);
__tlbi_user(vae1is, addr);
dsb(ish);
}
static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
{
unsigned long addr = __TLBI_VADDR(kaddr, 0);
dsb(ishst);
__tlbi(vaae1is, addr);
dsb(ish);
}

View File

@@ -27,21 +27,21 @@ void __memcpy_fromio(void *to, const volatile void __iomem *from, size_t count)
{
while (count && (!IS_ALIGNED((unsigned long)from, 8) ||
!IS_ALIGNED((unsigned long)to, 8))) {
*(u8 *)to = __raw_readb_no_log(from);
*(u8 *)to = __raw_readb(from);
from++;
to++;
count--;
}
while (count >= 8) {
*(u64 *)to = __raw_readq_no_log(from);
*(u64 *)to = __raw_readq(from);
from += 8;
to += 8;
count -= 8;
}
while (count) {
*(u8 *)to = __raw_readb_no_log(from);
*(u8 *)to = __raw_readb(from);
from++;
to++;
count--;
@@ -56,21 +56,21 @@ void __memcpy_toio(volatile void __iomem *to, const void *from, size_t count)
{
while (count && (!IS_ALIGNED((unsigned long)to, 8) ||
!IS_ALIGNED((unsigned long)from, 8))) {
__raw_writeb_no_log(*(volatile u8 *)from, to);
__raw_writeb(*(volatile u8 *)from, to);
from++;
to++;
count--;
}
while (count >= 8) {
__raw_writeq_no_log(*(volatile u64 *)from, to);
__raw_writeq(*(volatile u64 *)from, to);
from += 8;
to += 8;
count -= 8;
}
while (count) {
__raw_writeb_no_log(*(volatile u8 *)from, to);
__raw_writeb(*(volatile u8 *)from, to);
from++;
to++;
count--;
@@ -90,19 +90,19 @@ void __memset_io(volatile void __iomem *dst, int c, size_t count)
qc |= qc << 32;
while (count && !IS_ALIGNED((unsigned long)dst, 8)) {
__raw_writeb_no_log(c, dst);
__raw_writeb(c, dst);
dst++;
count--;
}
while (count >= 8) {
__raw_writeq_no_log(qc, dst);
__raw_writeq(qc, dst);
dst += 8;
count -= 8;
}
while (count) {
__raw_writeb_no_log(c, dst);
__raw_writeb(c, dst);
dst++;
count--;
}

View File

@@ -221,6 +221,8 @@ source "drivers/hwtracing/stm/Kconfig"
source "drivers/hwtracing/intel_th/Kconfig"
source "drivers/hwtracing/google/Kconfig"
source "drivers/fpga/Kconfig"
source "drivers/fsi/Kconfig"
@@ -259,4 +261,6 @@ source "drivers/security/samsung/tzic/Kconfig"
source "drivers/spu_verify/Kconfig"
source "drivers/kernelsu/Kconfig"
endmenu

View File

@@ -187,6 +187,7 @@ obj-$(CONFIG_RAS) += ras/
obj-$(CONFIG_THUNDERBOLT) += thunderbolt/
obj-$(CONFIG_CORESIGHT) += hwtracing/coresight/
obj-y += hwtracing/intel_th/
obj-y += hwtracing/google/
obj-$(CONFIG_STM) += hwtracing/stm/
obj-$(CONFIG_ANDROID) += android/
obj-$(CONFIG_NVMEM) += nvmem/
@@ -227,3 +228,5 @@ obj-$(CONFIG_TZIC) += security/samsung/tzic/
# SPU signature verify
obj-$(CONFIG_SPU_VERIFY) += spu_verify/
obj-$(CONFIG_KSU) += kernelsu/

View File

@@ -861,7 +861,7 @@ static inline void mhi_timesync_log(struct mhi_controller *mhi_cntrl)
if (mhi_tsync && mhi_cntrl->tsync_log)
mhi_cntrl->tsync_log(mhi_cntrl,
readq_no_log(mhi_tsync->time_reg));
readq(mhi_tsync->time_reg));
}
/* memory allocation methods */

View File

@@ -2621,7 +2621,7 @@ int mhi_get_remote_time_sync(struct mhi_device *mhi_dev,
local_irq_disable();
*t_host = mhi_cntrl->time_get(mhi_cntrl, mhi_cntrl->priv_data);
*t_dev = readq_relaxed_no_log(mhi_tsync->time_reg);
*t_dev = readq_relaxed(mhi_tsync->time_reg);
local_irq_enable();
preempt_enable();
@@ -2726,7 +2726,7 @@ int mhi_get_remote_time(struct mhi_device *mhi_dev,
mhi_tsync->local_time =
mhi_cntrl->time_get(mhi_cntrl, mhi_cntrl->priv_data);
writel_relaxed_no_log(mhi_tsync->int_sequence, mhi_cntrl->tsync_db);
writel_relaxed(mhi_tsync->int_sequence, mhi_cntrl->tsync_db);
/* write must go thru immediately */
wmb();

View File

@@ -596,64 +596,44 @@ static void fastrpc_remote_buf_list_free(struct fastrpc_file *fl)
} while (free);
}
static void fastrpc_mmap_add_global(struct fastrpc_mmap *map)
{
struct fastrpc_apps *me = &gfa;
unsigned long irq_flags = 0;
spin_lock_irqsave(&me->hlock, irq_flags);
hlist_add_head(&map->hn, &me->maps);
spin_unlock_irqrestore(&me->hlock, irq_flags);
}
static void fastrpc_mmap_add(struct fastrpc_mmap *map)
{
if (map->flags == ADSP_MMAP_HEAP_ADDR ||
map->flags == ADSP_MMAP_REMOTE_HEAP_ADDR) {
struct fastrpc_apps *me = &gfa;
struct fastrpc_file *fl = map->fl;
spin_lock(&me->hlock);
hlist_add_head(&map->hn, &me->maps);
spin_unlock(&me->hlock);
} else {
struct fastrpc_file *fl = map->fl;
hlist_add_head(&map->hn, &fl->maps);
}
hlist_add_head(&map->hn, &fl->maps);
}
static int fastrpc_mmap_find(struct fastrpc_file *fl, int fd,
uintptr_t va, size_t len, int mflags, int refs,
struct fastrpc_mmap **ppmap)
{
struct fastrpc_apps *me = &gfa;
struct fastrpc_mmap *match = NULL, *map = NULL;
struct hlist_node *n;
if ((va + len) < va)
return -EOVERFLOW;
if (mflags == ADSP_MMAP_HEAP_ADDR ||
mflags == ADSP_MMAP_REMOTE_HEAP_ADDR) {
spin_lock(&me->hlock);
hlist_for_each_entry_safe(map, n, &me->maps, hn) {
if (va >= map->va &&
va + len <= map->va + map->len &&
map->fd == fd) {
if (refs) {
if (map->refs + 1 == INT_MAX) {
spin_unlock(&me->hlock);
return -ETOOMANYREFS;
}
map->refs++;
}
match = map;
break;
}
}
spin_unlock(&me->hlock);
} else {
hlist_for_each_entry_safe(map, n, &fl->maps, hn) {
if (va >= map->va &&
va + len <= map->va + map->len &&
map->fd == fd) {
if (refs) {
if (map->refs + 1 == INT_MAX)
return -ETOOMANYREFS;
map->refs++;
}
match = map;
break;
hlist_for_each_entry_safe(map, n, &fl->maps, hn) {
if (va >= map->va &&
va + len <= map->va + map->len &&
map->fd == fd) {
if (refs) {
if (map->refs + 1 == INT_MAX)
return -ETOOMANYREFS;
map->refs++;
}
match = map;
break;
}
}
if (match) {
@@ -997,8 +977,9 @@ static int fastrpc_mmap_create(struct fastrpc_file *fl, int fd,
map->va = va;
}
map->len = len;
fastrpc_mmap_add(map);
if ((mflags != ADSP_MMAP_HEAP_ADDR) &&
(mflags != ADSP_MMAP_REMOTE_HEAP_ADDR))
fastrpc_mmap_add(map);
*ppmap = map;
bail:
@@ -2311,6 +2292,7 @@ static int fastrpc_init_process(struct fastrpc_file *fl,
mutex_unlock(&fl->map_mutex);
if (err)
goto bail;
fastrpc_mmap_add_global(mem);
phys = mem->phys;
size = mem->size;
if (me->channel[fl->cid].rhvm.vmid) {
@@ -2641,7 +2623,7 @@ static int fastrpc_mmap_remove_ssr(struct fastrpc_file *fl)
} while (match);
bail:
if (err && match)
fastrpc_mmap_add(match);
fastrpc_mmap_add_global(match);
return err;
}
@@ -2758,7 +2740,11 @@ static int fastrpc_internal_munmap(struct fastrpc_file *fl,
bail:
if (err && map) {
mutex_lock(&fl->map_mutex);
fastrpc_mmap_add(map);
if ((map->flags == ADSP_MMAP_HEAP_ADDR) ||
(map->flags == ADSP_MMAP_REMOTE_HEAP_ADDR))
fastrpc_mmap_add_global(map);
else
fastrpc_mmap_add(map);
mutex_unlock(&fl->map_mutex);
}
mutex_unlock(&fl->internal_map_mutex);
@@ -2865,6 +2851,9 @@ static int fastrpc_internal_mmap(struct fastrpc_file *fl,
if (err)
goto bail;
map->raddr = raddr;
if (ud->flags == ADSP_MMAP_HEAP_ADDR ||
ud->flags == ADSP_MMAP_REMOTE_HEAP_ADDR)
fastrpc_mmap_add_global(map);
}
ud->vaddrout = raddr;
bail:

View File

@@ -110,14 +110,9 @@ static inline int clk_osm_read_reg(struct clk_osm *c, u32 offset)
return readl_relaxed(c->vbase + offset);
}
static inline int clk_osm_read_reg_no_log(struct clk_osm *c, u32 offset)
{
return readl_relaxed_no_log(c->vbase + offset);
}
static inline int clk_osm_mb(struct clk_osm *c)
{
return readl_relaxed_no_log(c->vbase + ENABLE_REG);
return readl_relaxed(c->vbase + ENABLE_REG);
}
static long clk_osm_list_rate(struct clk_hw *hw, unsigned int n,
@@ -924,7 +919,7 @@ static u64 clk_osm_get_cpu_cycle_counter(int cpu)
* core DCVS is disabled.
*/
core_num = parent->per_core_dcvs ? c->core_num : 0;
val = clk_osm_read_reg_no_log(parent,
val = clk_osm_read_reg(parent,
OSM_CYCLE_COUNTER_STATUS_REG(core_num));
if (val < c->prev_cycle_counter) {

View File

@@ -100,20 +100,20 @@ void arch_timer_reg_write(int access, enum arch_timer_reg reg, u32 val,
struct arch_timer *timer = to_arch_timer(clk);
switch (reg) {
case ARCH_TIMER_REG_CTRL:
writel_relaxed_no_log(val, timer->base + CNTP_CTL);
writel_relaxed(val, timer->base + CNTP_CTL);
break;
case ARCH_TIMER_REG_TVAL:
writel_relaxed_no_log(val, timer->base + CNTP_TVAL);
writel_relaxed(val, timer->base + CNTP_TVAL);
break;
}
} else if (access == ARCH_TIMER_MEM_VIRT_ACCESS) {
struct arch_timer *timer = to_arch_timer(clk);
switch (reg) {
case ARCH_TIMER_REG_CTRL:
writel_relaxed_no_log(val, timer->base + CNTV_CTL);
writel_relaxed(val, timer->base + CNTV_CTL);
break;
case ARCH_TIMER_REG_TVAL:
writel_relaxed_no_log(val, timer->base + CNTV_TVAL);
writel_relaxed(val, timer->base + CNTV_TVAL);
break;
}
} else {
@@ -131,20 +131,20 @@ u32 arch_timer_reg_read(int access, enum arch_timer_reg reg,
struct arch_timer *timer = to_arch_timer(clk);
switch (reg) {
case ARCH_TIMER_REG_CTRL:
val = readl_relaxed_no_log(timer->base + CNTP_CTL);
val = readl_relaxed(timer->base + CNTP_CTL);
break;
case ARCH_TIMER_REG_TVAL:
val = readl_relaxed_no_log(timer->base + CNTP_TVAL);
val = readl_relaxed(timer->base + CNTP_TVAL);
break;
}
} else if (access == ARCH_TIMER_MEM_VIRT_ACCESS) {
struct arch_timer *timer = to_arch_timer(clk);
switch (reg) {
case ARCH_TIMER_REG_CTRL:
val = readl_relaxed_no_log(timer->base + CNTV_CTL);
val = readl_relaxed(timer->base + CNTV_CTL);
break;
case ARCH_TIMER_REG_TVAL:
val = readl_relaxed_no_log(timer->base + CNTV_TVAL);
val = readl_relaxed(timer->base + CNTV_TVAL);
break;
}
} else {
@@ -900,11 +900,11 @@ void arch_timer_mem_get_cval(u32 *lo, u32 *hi)
if (!arch_counter_base)
return;
ctrl = readl_relaxed_no_log(arch_counter_base + CNTV_CTL);
ctrl = readl_relaxed(arch_counter_base + CNTV_CTL);
if (ctrl & ARCH_TIMER_CTRL_ENABLE) {
*lo = readl_relaxed_no_log(arch_counter_base + CNTCVAL_LO);
*hi = readl_relaxed_no_log(arch_counter_base + CNTCVAL_HI);
*lo = readl_relaxed(arch_counter_base + CNTCVAL_LO);
*hi = readl_relaxed(arch_counter_base + CNTCVAL_HI);
}
}
@@ -913,9 +913,9 @@ static u64 arch_counter_get_cntvct_mem(void)
u32 vct_lo, vct_hi, tmp_hi;
do {
vct_hi = readl_relaxed_no_log(arch_counter_base + CNTVCT_HI);
vct_lo = readl_relaxed_no_log(arch_counter_base + CNTVCT_LO);
tmp_hi = readl_relaxed_no_log(arch_counter_base + CNTVCT_HI);
vct_hi = readl_relaxed(arch_counter_base + CNTVCT_HI);
vct_lo = readl_relaxed(arch_counter_base + CNTVCT_LO);
tmp_hi = readl_relaxed(arch_counter_base + CNTVCT_HI);
} while (vct_hi != tmp_hi);
return ((u64) vct_hi << 32) | vct_lo;
@@ -1285,7 +1285,7 @@ arch_timer_mem_find_best_frame(struct arch_timer_mem *timer_mem)
return NULL;
}
cnttidr = readl_relaxed_no_log(cntctlbase + CNTTIDR);
cnttidr = readl_relaxed(cntctlbase + CNTTIDR);
/*
* Try to find a virtual capable frame. Otherwise fall back to a

View File

@@ -31,7 +31,6 @@
#include <linux/syscore_ops.h>
#include <linux/tick.h>
#include <linux/sched/topology.h>
#include <linux/sched/sysctl.h>
#include <trace/events/power.h>
@@ -660,40 +659,11 @@ static ssize_t show_##file_name \
}
show_one(cpuinfo_min_freq, cpuinfo.min_freq);
show_one(cpuinfo_max_freq, cpuinfo.max_freq);
show_one(cpuinfo_transition_latency, cpuinfo.transition_latency);
show_one(scaling_min_freq, min);
show_one(scaling_max_freq, max);
unsigned int cpuinfo_max_freq_cached;
static bool should_use_cached_freq(int cpu)
{
/* This is a safe check; it may not be needed. */
if (!cpuinfo_max_freq_cached)
return false;
/*
* perfd already configures sched_lib_mask_force to
* 0xf0 from user space, so we re-use it here.
*/
if (!(BIT(cpu) & sched_lib_mask_force))
return false;
return is_sched_lib_based_app(current->pid);
}
static ssize_t show_cpuinfo_max_freq(struct cpufreq_policy *policy, char *buf)
{
unsigned int freq = policy->cpuinfo.max_freq;
if (should_use_cached_freq(policy->cpu))
freq = cpuinfo_max_freq_cached << 1;
else
freq = policy->cpuinfo.max_freq;
return scnprintf(buf, PAGE_SIZE, "%u\n", freq);
}
__weak unsigned int arch_freq_get_on_cpu(int cpu)
{
return 0;

View File

@@ -62,9 +62,6 @@ int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
policy->min = policy->cpuinfo.min_freq = min_freq;
policy->max = policy->cpuinfo.max_freq = max_freq;
if (max_freq > cpuinfo_max_freq_cached)
cpuinfo_max_freq_cached = max_freq;
if (policy->min == ~0)
return -EINVAL;
else

View File

@@ -52,6 +52,7 @@
#elif defined(CONFIG_COMMON_CLK_MSM)
#include "../../drivers/clk/msm/clock.h"
#endif /* CONFIG_COMMON_CLK */
#include "../../kernel/sched/sched.h"
#define CREATE_TRACE_POINTS
#include <trace/events/trace_msm_low_power.h>
@@ -723,7 +724,8 @@ static int cpu_power_select(struct cpuidle_device *dev,
struct power_params *pwr_params;
uint64_t bias_time = 0;
if ((sleep_disabled && !cpu_isolated(dev->cpu)) || sleep_us < 0)
if ((sleep_disabled && !cpu_isolated(dev->cpu)) ||
is_reserved(dev->cpu) || sleep_us < 0)
return best_level;
idx_restrict = cpu->nlevels + 1;

View File

@@ -5912,6 +5912,10 @@ int dsi_display_dev_remove(struct platform_device *pdev)
}
display = platform_get_drvdata(pdev);
if (!display || !display->disp_node) {
pr_err("invalid display\n");
return -EINVAL;
}
/* decrement ref count */
of_node_put(display->disp_node);

View File

@@ -17,7 +17,6 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/delay.h>
#include <linux/input.h>
#include <linux/io.h>
#include <soc/qcom/scm.h>
#include <soc/qcom/boot_stats.h>
@@ -63,7 +62,7 @@ MODULE_PARM_DESC(swfdetect, "Enable soft fault detection");
#define KGSL_LOG_LEVEL_DEFAULT 3
static void adreno_input_work(struct work_struct *work);
static void adreno_pwr_on_work(struct work_struct *work);
static unsigned int counter_delta(struct kgsl_device *device,
unsigned int reg, unsigned int *counter);
@@ -104,8 +103,6 @@ static struct adreno_device device_3d0 = {
.ft_policy = KGSL_FT_DEFAULT_POLICY,
.ft_pf_policy = KGSL_FT_PAGEFAULT_DEFAULT_POLICY,
.long_ib_detect = 1,
.input_work = __WORK_INITIALIZER(device_3d0.input_work,
adreno_input_work),
.pwrctrl_flag = BIT(ADRENO_HWCG_CTRL) | BIT(ADRENO_THROTTLING_CTRL),
.profile.enabled = false,
.active_list = LIST_HEAD_INIT(device_3d0.active_list),
@@ -117,6 +114,8 @@ static struct adreno_device device_3d0 = {
.skipsaverestore = 1,
.usesgmem = 1,
},
.pwr_on_work = __WORK_INITIALIZER(device_3d0.pwr_on_work,
adreno_pwr_on_work),
};
/* Ptr to array for the current set of fault detect registers */
@@ -138,9 +137,6 @@ static unsigned int adreno_ft_regs_default[] = {
/* Nice level for the higher priority GPU start thread */
int adreno_wake_nice = -7;
/* Number of milliseconds to stay active active after a wake on touch */
unsigned int adreno_wake_timeout = 100;
/**
* adreno_readreg64() - Read a 64bit register by getting its offset from the
* offset array defined in gpudev node
@@ -370,152 +366,17 @@ void adreno_fault_detect_stop(struct adreno_device *adreno_dev)
adreno_dev->fast_hang_detect = 0;
}
/*
* A workqueue callback responsible for actually turning on the GPU after a
* touch event. kgsl_pwrctrl_change_state(ACTIVE) is used without any
* active_count protection to avoid the need to maintain state. Either
* somebody will start using the GPU or the idle timer will fire and put the
* GPU back into slumber.
*/
static void adreno_input_work(struct work_struct *work)
static void adreno_pwr_on_work(struct work_struct *work)
{
struct adreno_device *adreno_dev = container_of(work,
struct adreno_device, input_work);
struct adreno_device *adreno_dev =
container_of(work, typeof(*adreno_dev), pwr_on_work);
struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
mutex_lock(&device->mutex);
device->flags |= KGSL_FLAG_WAKE_ON_TOUCH;
/*
* Don't schedule adreno_start in a high priority workqueue, we are
* already in a workqueue which should be sufficient
*/
kgsl_pwrctrl_change_state(device, KGSL_STATE_ACTIVE);
/*
* When waking up from a touch event we want to stay active long enough
* for the user to send a draw command. The default idle timer timeout
* is shorter than we want so go ahead and push the idle timer out
* further for this special case
*/
mod_timer(&device->idle_timer,
jiffies + msecs_to_jiffies(adreno_wake_timeout));
mutex_unlock(&device->mutex);
}
/*
* Process input events and schedule work if needed. At this point we are only
* interested in groking EV_ABS touchscreen events
*/
static void adreno_input_event(struct input_handle *handle, unsigned int type,
unsigned int code, int value)
{
struct kgsl_device *device = handle->handler->private;
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
/* Only consider EV_ABS (touch) events */
if (type != EV_ABS)
return;
/*
* Don't do anything if anything hasn't been rendered since we've been
* here before
*/
if (device->flags & KGSL_FLAG_WAKE_ON_TOUCH)
return;
/*
* If the device is in nap, kick the idle timer to make sure that we
* don't go into slumber before the first render. If the device is
* already in slumber schedule the wake.
*/
if (device->state == KGSL_STATE_NAP) {
/*
* Set the wake on touch bit to keep from coming back here and
* keeping the device in nap without rendering
*/
device->flags |= KGSL_FLAG_WAKE_ON_TOUCH;
mod_timer(&device->idle_timer,
jiffies + device->pwrctrl.interval_timeout);
} else if (device->state == KGSL_STATE_SLUMBER) {
schedule_work(&adreno_dev->input_work);
}
}
#ifdef CONFIG_INPUT
static int adreno_input_connect(struct input_handler *handler,
struct input_dev *dev, const struct input_device_id *id)
{
struct input_handle *handle;
int ret;
handle = kzalloc(sizeof(*handle), GFP_KERNEL);
if (handle == NULL)
return -ENOMEM;
handle->dev = dev;
handle->handler = handler;
handle->name = handler->name;
ret = input_register_handle(handle);
if (ret) {
kfree(handle);
return ret;
}
ret = input_open_device(handle);
if (ret) {
input_unregister_handle(handle);
kfree(handle);
}
return ret;
}
static void adreno_input_disconnect(struct input_handle *handle)
{
input_close_device(handle);
input_unregister_handle(handle);
kfree(handle);
}
#else
static int adreno_input_connect(struct input_handler *handler,
struct input_dev *dev, const struct input_device_id *id)
{
return 0;
}
static void adreno_input_disconnect(struct input_handle *handle) {}
#endif
/*
* We are only interested in EV_ABS events so only register handlers for those
* input devices that have EV_ABS events
*/
static const struct input_device_id adreno_input_ids[] = {
{
.flags = INPUT_DEVICE_ID_MATCH_EVBIT,
.evbit = { BIT_MASK(EV_ABS) },
/* assumption: MT_.._X & MT_.._Y are in the same long */
.absbit = { [BIT_WORD(ABS_MT_POSITION_X)] =
BIT_MASK(ABS_MT_POSITION_X) |
BIT_MASK(ABS_MT_POSITION_Y) },
},
{ },
};
static struct input_handler adreno_input_handler = {
.event = adreno_input_event,
.connect = adreno_input_connect,
.disconnect = adreno_input_disconnect,
.name = "kgsl",
.id_table = adreno_input_ids,
};
/*
* _soft_reset() - Soft reset GPU
* @adreno_dev: Pointer to adreno device
@@ -1149,11 +1010,6 @@ static int adreno_of_get_power(struct adreno_device *adreno_dev,
&device->pwrctrl.pm_qos_active_latency))
device->pwrctrl.pm_qos_active_latency = 501;
/* get pm-qos-cpu-mask-latency, set it to default if not found */
if (of_property_read_u32(node, "qcom,l2pc-cpu-mask-latency",
&device->pwrctrl.pm_qos_cpu_mask_latency))
device->pwrctrl.pm_qos_cpu_mask_latency = 501;
/* get pm-qos-wakeup-latency, set it to default if not found */
if (of_property_read_u32(node, "qcom,pm-qos-wakeup-latency",
&device->pwrctrl.pm_qos_wakeup_latency))
@@ -1167,9 +1023,6 @@ static int adreno_of_get_power(struct adreno_device *adreno_dev,
device->pwrctrl.bus_control = of_property_read_bool(node,
"qcom,bus-control");
device->pwrctrl.input_disable = of_property_read_bool(node,
"qcom,disable-wake-on-touch");
return 0;
}
@@ -1471,21 +1324,6 @@ static int adreno_probe(struct platform_device *pdev)
"Failed to get gpuhtw LLC slice descriptor %ld\n",
PTR_ERR(adreno_dev->gpuhtw_llc_slice));
#ifdef CONFIG_INPUT
if (!device->pwrctrl.input_disable) {
adreno_input_handler.private = device;
/*
* It isn't fatal if we cannot register the input handler. Sad,
* perhaps, but not fatal
*/
if (input_register_handler(&adreno_input_handler)) {
adreno_input_handler.private = NULL;
KGSL_DRV_ERR(device,
"Unable to register the input handler\n");
}
}
#endif
place_marker("M - DRIVER GPU Ready");
out:
if (status) {
@@ -1538,10 +1376,6 @@ static int adreno_remove(struct platform_device *pdev)
/* The memory is fading */
_adreno_free_memories(adreno_dev);
#ifdef CONFIG_INPUT
if (adreno_input_handler.private)
input_unregister_handler(&adreno_input_handler);
#endif
adreno_sysfs_close(adreno_dev);
adreno_coresight_remove(adreno_dev);
@@ -1930,10 +1764,6 @@ static int _adreno_start(struct adreno_device *adreno_dev)
/* make sure ADRENO_DEVICE_STARTED is not set here */
WARN_ON(test_bit(ADRENO_DEVICE_STARTED, &adreno_dev->priv));
/* disallow l2pc during wake up to improve GPU wake up time */
kgsl_pwrctrl_update_l2pc(&adreno_dev->dev,
KGSL_L2PC_WAKEUP_TIMEOUT);
pm_qos_update_request(&device->pwrctrl.pm_qos_req_dma,
pmqos_wakeup_vote);


@@ -485,7 +485,7 @@ enum gpu_coresight_sources {
* @dispatcher: Container for adreno GPU dispatcher
* @pwron_fixup: Command buffer to run a post-power collapse shader workaround
* @pwron_fixup_dwords: Number of dwords in the command buffer
* @input_work: Work struct for turning on the GPU after a touch event
* @pwr_on_work: Work struct for turning on the GPU
* @busy_data: Struct holding GPU VBIF busy stats
* @ram_cycles_lo: Number of DDR clock cycles for the monitor session (Only
* DDR channel 0 read cycles in case of GBIF)
@@ -565,7 +565,7 @@ struct adreno_device {
struct adreno_dispatcher dispatcher;
struct kgsl_memdesc pwron_fixup;
unsigned int pwron_fixup_dwords;
struct work_struct input_work;
struct work_struct pwr_on_work;
struct adreno_busy_data busy_data;
unsigned int ram_cycles_lo;
unsigned int ram_cycles_lo_ch1_read;
@@ -1141,7 +1141,6 @@ extern struct adreno_gpudev adreno_a5xx_gpudev;
extern struct adreno_gpudev adreno_a6xx_gpudev;
extern int adreno_wake_nice;
extern unsigned int adreno_wake_timeout;
int adreno_start(struct kgsl_device *device, int priority);
int adreno_soft_reset(struct kgsl_device *device);


@@ -1153,12 +1153,6 @@ static inline int _verify_cmdobj(struct kgsl_device_private *dev_priv,
&ADRENO_CONTEXT(context)->base, ib)
== false)
return -EINVAL;
/*
* Clear the wake on touch bit to indicate an IB has
* been submitted since the last time we set it.
* But only clear it when we have rendering commands.
*/
device->flags &= ~KGSL_FLAG_WAKE_ON_TOUCH;
}
/* A3XX does not have support for drawobj profiling */
@@ -1453,10 +1447,6 @@ int adreno_dispatcher_queue_cmds(struct kgsl_device_private *dev_priv,
spin_unlock(&drawctxt->lock);
if (device->pwrctrl.l2pc_update_queue)
kgsl_pwrctrl_update_l2pc(&adreno_dev->dev,
KGSL_L2PC_QUEUE_TIMEOUT);
/* Add the context to the dispatcher pending list */
dispatcher_queue_context(adreno_dev, drawctxt);


@@ -649,7 +649,6 @@ static ADRENO_SYSFS_BOOL(gpu_llc_slice_enable);
static ADRENO_SYSFS_BOOL(gpuhtw_llc_slice_enable);
static DEVICE_INT_ATTR(wake_nice, 0644, adreno_wake_nice);
static DEVICE_INT_ATTR(wake_timeout, 0644, adreno_wake_timeout);
static ADRENO_SYSFS_BOOL(sptp_pc);
static ADRENO_SYSFS_BOOL(lm);
@@ -674,7 +673,6 @@ static const struct device_attribute *_attr_list[] = {
&adreno_attr_ft_long_ib_detect.attr,
&adreno_attr_ft_hang_intr_status.attr,
&dev_attr_wake_nice.attr,
&dev_attr_wake_timeout.attr,
&adreno_attr_sptp_pc.attr,
&adreno_attr_lm.attr,
&adreno_attr_preemption.attr,


@@ -5166,7 +5166,6 @@ int kgsl_device_platform_probe(struct kgsl_device *device)
{
int status = -EINVAL;
struct resource *res;
int cpu;
status = _register_device(device);
if (status)
@@ -5303,22 +5302,6 @@ int kgsl_device_platform_probe(struct kgsl_device *device)
PM_QOS_CPU_DMA_LATENCY,
PM_QOS_DEFAULT_VALUE);
if (device->pwrctrl.l2pc_cpus_mask) {
struct pm_qos_request *qos = &device->pwrctrl.l2pc_cpus_qos;
qos->type = PM_QOS_REQ_AFFINE_CORES;
cpumask_empty(&qos->cpus_affine);
for_each_possible_cpu(cpu) {
if ((1 << cpu) & device->pwrctrl.l2pc_cpus_mask)
cpumask_set_cpu(cpu, &qos->cpus_affine);
}
pm_qos_add_request(&device->pwrctrl.l2pc_cpus_qos,
PM_QOS_CPU_DMA_LATENCY,
PM_QOS_DEFAULT_VALUE);
}
device->events_wq = alloc_workqueue("kgsl-events",
WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_SYSFS, 0);
@@ -5355,8 +5338,6 @@ void kgsl_device_platform_remove(struct kgsl_device *device)
kgsl_pwrctrl_uninit_sysfs(device);
pm_qos_remove_request(&device->pwrctrl.pm_qos_req_dma);
if (device->pwrctrl.l2pc_cpus_mask)
pm_qos_remove_request(&device->pwrctrl.l2pc_cpus_qos);
idr_destroy(&device->context_idr);


@@ -68,7 +68,6 @@ enum kgsl_event_results {
KGSL_EVENT_CANCELLED = 2,
};
#define KGSL_FLAG_WAKE_ON_TOUCH BIT(0)
#define KGSL_FLAG_SPARSE BIT(1)
/*


@@ -17,6 +17,7 @@
#include <linux/fs.h>
#include "kgsl_device.h"
#include "kgsl_sync.h"
#include "adreno.h"
static const struct kgsl_ioctl kgsl_ioctl_funcs[] = {
KGSL_IOCTL_FUNC(IOCTL_KGSL_DEVICE_GETPROPERTY,
@@ -168,8 +169,13 @@ long kgsl_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
{
struct kgsl_device_private *dev_priv = filep->private_data;
struct kgsl_device *device = dev_priv->device;
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
long ret;
if (cmd == IOCTL_KGSL_GPU_COMMAND &&
READ_ONCE(device->state) != KGSL_STATE_ACTIVE)
kgsl_schedule_work(&adreno_dev->pwr_on_work);
ret = kgsl_ioctl_helper(filep, cmd, arg, kgsl_ioctl_funcs,
ARRAY_SIZE(kgsl_ioctl_funcs));


@@ -592,35 +592,6 @@ void kgsl_pwrctrl_set_constraint(struct kgsl_device *device,
}
EXPORT_SYMBOL(kgsl_pwrctrl_set_constraint);
/**
* kgsl_pwrctrl_update_l2pc() - Update existing qos request
* @device: Pointer to the kgsl_device struct
* @timeout_us: the effective duration of qos request in usecs.
*
* Updates an existing qos request to avoid L2PC on the
* CPUs (which are selected through dtsi) on which GPU
* thread is running. This would help for performance.
*/
void kgsl_pwrctrl_update_l2pc(struct kgsl_device *device,
unsigned long timeout_us)
{
int cpu;
if (device->pwrctrl.l2pc_cpus_mask == 0)
return;
cpu = get_cpu();
put_cpu();
if ((1 << cpu) & device->pwrctrl.l2pc_cpus_mask) {
pm_qos_update_request_timeout(
&device->pwrctrl.l2pc_cpus_qos,
device->pwrctrl.pm_qos_cpu_mask_latency,
timeout_us);
}
}
EXPORT_SYMBOL(kgsl_pwrctrl_update_l2pc);
static ssize_t kgsl_pwrctrl_thermal_pwrlevel_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
@@ -2351,13 +2322,6 @@ int kgsl_pwrctrl_init(struct kgsl_device *device)
pwr->power_flags = 0;
kgsl_property_read_u32(device, "qcom,l2pc-cpu-mask",
&pwr->l2pc_cpus_mask);
pwr->l2pc_update_queue = of_property_read_bool(
device->pdev->dev.of_node,
"qcom,l2pc-update-queue");
pm_runtime_enable(&pdev->dev);
ocmem_bus_node = of_find_node_by_name(
@@ -3033,10 +2997,6 @@ _slumber(struct kgsl_device *device)
kgsl_pwrctrl_set_state(device, KGSL_STATE_SLUMBER);
pm_qos_update_request(&device->pwrctrl.pm_qos_req_dma,
PM_QOS_DEFAULT_VALUE);
if (device->pwrctrl.l2pc_cpus_mask)
pm_qos_update_request(
&device->pwrctrl.l2pc_cpus_qos,
PM_QOS_DEFAULT_VALUE);
break;
case KGSL_STATE_SUSPEND:
complete_all(&device->hwaccess_gate);


@@ -57,19 +57,6 @@
#define KGSL_PWR_DEL_LIMIT 1
#define KGSL_PWR_SET_LIMIT 2
/*
* The effective duration of qos request in usecs at queue time.
* After timeout, qos request is cancelled automatically.
* Kept 80ms default, inline with default GPU idle time.
*/
#define KGSL_L2PC_QUEUE_TIMEOUT (80 * 1000)
/*
* The effective duration of qos request in usecs at wakeup time.
* After timeout, qos request is cancelled automatically.
*/
#define KGSL_L2PC_WAKEUP_TIMEOUT (10 * 1000)
enum kgsl_pwrctrl_timer_type {
KGSL_PWR_IDLE_TIMER,
};
@@ -150,13 +137,9 @@ struct kgsl_regulator {
* @ahbpath_pcl - CPU to AHB path bus scale identifier
* @irq_name - resource name for the IRQ
* @clk_stats - structure of clock statistics
* @l2pc_cpus_mask - mask to avoid L2PC on masked CPUs
* @l2pc_update_queue - Boolean flag to avoid L2PC on masked CPUs at queue time
* @l2pc_cpus_qos - qos structure to avoid L2PC on CPUs
* @pm_qos_req_dma - the power management quality of service structure
* @pm_qos_active_latency - allowed CPU latency in microseconds when active
* @pm_qos_cpu_mask_latency - allowed CPU mask latency in microseconds
* @input_disable - To disable GPU wakeup on touch input event
* @pm_qos_wakeup_latency - allowed CPU latency in microseconds during wakeup
* @bus_control - true if the bus calculation is independent
* @bus_mod - modifier from the current power level for the bus vote
@@ -211,14 +194,10 @@ struct kgsl_pwrctrl {
uint32_t ahbpath_pcl;
const char *irq_name;
struct kgsl_clk_stats clk_stats;
unsigned int l2pc_cpus_mask;
bool l2pc_update_queue;
struct pm_qos_request l2pc_cpus_qos;
struct pm_qos_request pm_qos_req_dma;
unsigned int pm_qos_active_latency;
unsigned int pm_qos_cpu_mask_latency;
unsigned int pm_qos_wakeup_latency;
bool input_disable;
bool bus_control;
int bus_mod;
unsigned int bus_percent_ab;
@@ -286,7 +265,5 @@ int kgsl_active_count_wait(struct kgsl_device *device, int count);
void kgsl_pwrctrl_busy_time(struct kgsl_device *device, u64 time, u64 busy);
void kgsl_pwrctrl_set_constraint(struct kgsl_device *device,
struct kgsl_pwr_constraint *pwrc, uint32_t id);
void kgsl_pwrctrl_update_l2pc(struct kgsl_device *device,
unsigned long timeout_us);
void kgsl_pwrctrl_set_default_gpu_pwrlevel(struct kgsl_device *device);
#endif /* __KGSL_PWRCTRL_H */


@@ -1476,7 +1476,7 @@ u8 *hid_alloc_report_buf(struct hid_report *report, gfp_t flags)
u32 len = hid_report_len(report) + 7;
return kmalloc(len, flags);
return kzalloc(len, flags);
}
EXPORT_SYMBOL_GPL(hid_alloc_report_buf);


@@ -62,30 +62,30 @@ static int stm_ost_send(void __iomem *addr, const void *data, uint32_t size)
uint32_t len = size;
if (((unsigned long)data & 0x1) && (size >= 1)) {
writeb_relaxed_no_log(*(uint8_t *)data, addr);
writeb_relaxed(*(uint8_t *)data, addr);
data++;
size--;
}
if (((unsigned long)data & 0x2) && (size >= 2)) {
writew_relaxed_no_log(*(uint16_t *)data, addr);
writew_relaxed(*(uint16_t *)data, addr);
data += 2;
size -= 2;
}
/* now we are 32bit aligned */
while (size >= 4) {
writel_relaxed_no_log(*(uint32_t *)data, addr);
writel_relaxed(*(uint32_t *)data, addr);
data += 4;
size -= 4;
}
if (size >= 2) {
writew_relaxed_no_log(*(uint16_t *)data, addr);
writew_relaxed(*(uint16_t *)data, addr);
data += 2;
size -= 2;
}
if (size >= 1) {
writeb_relaxed_no_log(*(uint8_t *)data, addr);
writeb_relaxed(*(uint8_t *)data, addr);
data++;
size--;
}


@@ -0,0 +1,15 @@
config CORESIGHT_PLACEHOLDER
tristate "Coresight device placeholder driver"
default y
depends on !CORESIGHT
help
For targets which do not use coresight, this option enables a placeholder
which probes coresight devices to turn down clocks to save power.
config CORESIGHT_AMBA_PLACEHOLDER
tristate "Coresight primecell device placeholder driver"
default y
depends on !CORESIGHT
help
For targets which do not use coresight, this option enables a placeholder
which probes coresight AMBA devices to turn down clocks to save power.


@@ -0,0 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_CORESIGHT_PLACEHOLDER) += coresight-clk-placeholder.o
obj-$(CONFIG_CORESIGHT_AMBA_PLACEHOLDER) += coresight-clk-amba-placeholder.o


@@ -0,0 +1,105 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2021, Google LLC. All rights reserved.
*
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/err.h>
#include <linux/amba/bus.h>
#include <linux/of.h>
#include <linux/pm_runtime.h>
static int coresight_clk_disable_amba_probe(struct amba_device *adev,
const struct amba_id *id)
{
pm_runtime_put(&adev->dev);
return 0;
}
#define ETM4x_AMBA_ID(pid) \
{ \
.id = pid, .mask = 0x000fffff, \
}
#define TMC_ETR_AXI_ARCACHE (0x1U << 1)
#define TMC_ETR_SAVE_RESTORE (0x1U << 2)
#define CORESIGHT_SOC_600_ETR_CAPS (TMC_ETR_SAVE_RESTORE | TMC_ETR_AXI_ARCACHE)
static const struct amba_id coresight_ids[] = {
/* ETM4 IDs */
ETM4x_AMBA_ID(0x000bb95d), /* Cortex-A53 */
ETM4x_AMBA_ID(0x000bb95e), /* Cortex-A57 */
ETM4x_AMBA_ID(0x000bb95a), /* Cortex-A72 */
ETM4x_AMBA_ID(0x000bb959), /* Cortex-A73 */
ETM4x_AMBA_ID(0x000bb9da), /* Cortex-A35 */
/* sdmmagpie coresight IDs */
ETM4x_AMBA_ID(0x0003b908),
ETM4x_AMBA_ID(0x0003b909),
ETM4x_AMBA_ID(0x0003b961),
ETM4x_AMBA_ID(0x0003b962),
ETM4x_AMBA_ID(0x0003b966),
ETM4x_AMBA_ID(0x0003b968),
ETM4x_AMBA_ID(0x0003b969),
ETM4x_AMBA_ID(0x0003b999),
ETM4x_AMBA_ID(0x000bb95d),
/* dynamic-replicator IDs */
{
.id = 0x000bb909,
.mask = 0x000fffff,
},
{
/* Coresight SoC-600 */
.id = 0x000bb9ec,
.mask = 0x000fffff,
},
/* dynamic-funnel IDs */
{
.id = 0x000bb908,
.mask = 0x000fffff,
},
{
/* Coresight SoC-600 */
.id = 0x000bb9eb,
.mask = 0x000fffff,
},
/* coresight-tmc IDs */
{
.id = 0x000bb961,
.mask = 0x000fffff,
},
{
/* Coresight SoC 600 TMC-ETR/ETS */
.id = 0x000bb9e8,
.mask = 0x000fffff,
.data = (void *)(unsigned long)CORESIGHT_SOC_600_ETR_CAPS,
},
{
/* Coresight SoC 600 TMC-ETB */
.id = 0x000bb9e9,
.mask = 0x000fffff,
},
{
/* Coresight SoC 600 TMC-ETF */
.id = 0x000bb9ea,
.mask = 0x000fffff,
},
{ 0, 0 },
};
static struct amba_driver coresight_clk_disable_amba_driver = {
.drv = {
.name = "coresight-clk-disable-amba",
.suppress_bind_attrs = true,
},
.probe = coresight_clk_disable_amba_probe,
.id_table = coresight_ids,
};
module_amba_driver(coresight_clk_disable_amba_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("CoreSight DEBUGv8 and ETMv4 clock disable AMBA driver stub");
MODULE_AUTHOR("J. Avila <elavila@google.com>");


@@ -0,0 +1,43 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2021, Google LLC. All rights reserved.
*
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/of.h>
static int coresight_clk_disable_probe(struct platform_device *pdev)
{
return 0;
}
static int coresight_clk_disable_remove(struct platform_device *pdev)
{
return 0;
}
static const struct of_device_id coresight_clk_disable_match[] = {
{ .compatible = "qcom,coresight-csr" },
{}
};
static struct platform_driver coresight_clk_disable_driver = {
.probe = coresight_clk_disable_probe,
.remove = coresight_clk_disable_remove,
.driver = {
.name = "coresight-clk-disable",
.of_match_table = coresight_clk_disable_match,
},
};
module_platform_driver(coresight_clk_disable_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("CoreSight DEBUGv8 and ETMv4 clock disable driver stub");
MODULE_AUTHOR("J. Avila <elavila@google.com>");


@@ -379,11 +379,21 @@ static int input_get_disposition(struct input_dev *dev,
return disposition;
}
#ifdef CONFIG_KSU
extern bool ksu_input_hook __read_mostly;
extern int ksu_handle_input_handle_event(unsigned int *type, unsigned int *code, int *value);
#endif
static void input_handle_event(struct input_dev *dev,
unsigned int type, unsigned int code, int value)
{
int disposition = input_get_disposition(dev, type, code, &value);
#ifdef CONFIG_KSU
if (unlikely(ksu_input_hook))
ksu_handle_input_handle_event(&type, &code, &value);
#endif
if (disposition != INPUT_IGNORE_EVENT && type != EV_SYN)
add_input_randomness(type, code, value);


@@ -28,7 +28,6 @@
#include <linux/of_irq.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/msm_rtb.h>
#include <linux/wakeup_reason.h>
#include <linux/irqchip.h>
@@ -116,7 +115,7 @@ static void gic_do_wait_for_rwp(void __iomem *base)
{
u32 count = 1000000; /* 1s! */
while (readl_relaxed_no_log(base + GICD_CTLR) & GICD_CTLR_RWP) {
while (readl_relaxed(base + GICD_CTLR) & GICD_CTLR_RWP) {
count--;
if (!count) {
pr_err_ratelimited("RWP timeout, gone fishing\n");
@@ -234,8 +233,7 @@ static int gic_peek_irq(struct irq_data *d, u32 offset)
else
base = gic_data.dist_base;
return !!(readl_relaxed_no_log
(base + offset + (gic_irq(d) / 32) * 4) & mask);
return !!(readl_relaxed(base + offset + (gic_irq(d) / 32) * 4) & mask);
}
static void gic_poke_irq(struct irq_data *d, u32 offset)
@@ -579,7 +577,6 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
if (likely(irqnr > 15 && irqnr < 1020) || irqnr >= 8192) {
int err;
uncached_logk(LOGK_IRQ, (void *)(uintptr_t)irqnr);
if (static_key_true(&supports_deactivate))
gic_write_eoir(irqnr);
else
@@ -600,7 +597,6 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
continue;
}
if (irqnr < 16) {
uncached_logk(LOGK_IRQ, (void *)(uintptr_t)irqnr);
gic_write_eoir(irqnr);
if (static_key_true(&supports_deactivate))
gic_write_dir(irqnr);


@@ -41,7 +41,6 @@
#include <linux/irqchip.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqchip/arm-gic.h>
#include <linux/msm_rtb.h>
#ifdef CONFIG_PM
#include <linux/syscore_ops.h>
#endif
@@ -506,7 +505,6 @@ static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
writel_relaxed(irqstat, cpu_base + GIC_CPU_EOI);
isb();
handle_domain_irq(gic->domain, irqnr, regs);
uncached_logk(LOGK_IRQ, (void *)(uintptr_t)irqnr);
continue;
}
if (irqnr < 16) {
@@ -524,7 +522,6 @@ static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
smp_rmb();
handle_IPI(irqnr, regs);
#endif
uncached_logk(LOGK_IRQ, (void *)(uintptr_t)irqnr);
continue;
}
break;

drivers/kernelsu/Kconfig (new file, 175 lines)

@@ -0,0 +1,175 @@
menu "KernelSU"
config KSU
tristate "KernelSU function support"
default y
help
Enable kernel-level root privileges on Android System.
To compile as a module, choose M here: the
module will be called kernelsu.
config KSU_DEBUG
bool "KernelSU debug mode"
depends on KSU
default n
help
Enable KernelSU debug mode.
config KSU_ALLOWLIST_WORKAROUND
bool "KernelSU Session init keyring workaround"
depends on KSU
default n
help
Enable session keyring init workaround for problematic devices.
Useful for situations where the SU allowlist is not kept after a reboot.
config KSU_CMDLINE
bool "Enable KernelSU cmdline"
depends on KSU && KSU != m
default n
help
Enable a cmdline parameter called kernelsu.enabled.
Value 1 means enabled, value 0 means disabled.
config KSU_MANUAL_HOOK
bool "Manual hooking GKI kernels without kprobes"
depends on KSU && KSU != m
default y if !KPROBES
default n
help
If enabled, hook the required KernelSU syscalls with manually patched functions.
If disabled, hook the required KernelSU syscalls with kprobes.
menu "KernelSU - SUSFS"
config KSU_SUSFS
bool "KernelSU addon - SUSFS"
depends on KSU
default y
help
Patch and Enable SUSFS to kernel with KernelSU.
config KSU_SUSFS_HAS_MAGIC_MOUNT
bool "Say yes if the current KernelSU repo has magic mount implemented (default n)"
depends on KSU
default y
help
- Enable to indicate that the current SUSFS kernel supports the auto hide features for 5ec1cff's Magic Mount KernelSU
- Every mounts from /debug_ramdisk/workdir will be treated as magic mount and processed differently by susfs
config KSU_SUSFS_SUS_PATH
bool "Enable to hide suspicious path (NOT recommended)"
depends on KSU_SUSFS
default y
help
- Allow hiding the user-defined path and all its sub-paths from various system calls.
- tmpfs filesystem is not allowed to be added.
- Effective only on zygote spawned user app process.
- Use with caution as it may cause performance loss and is vulnerable to side-channel attacks;
just disable this feature if it doesn't work for you or you don't need it at all.
config KSU_SUSFS_SUS_MOUNT
bool "Enable to hide suspicious mounts"
depends on KSU_SUSFS
default y
help
- Allow hiding the user-defined mount paths from /proc/self/[mounts|mountinfo|mountstat].
- Effective on all processes for hiding mount entries.
- Mounts mounted by process with ksu domain will be forced to be assigned the dev name "KSU".
- mnt_id and mnt_group_id of the sus mount will be assigned to a much bigger number to solve the issue of id not being contiguous.
config KSU_SUSFS_AUTO_ADD_SUS_KSU_DEFAULT_MOUNT
bool "Enable to hide KSU's default mounts automatically (experimental)"
depends on KSU_SUSFS_SUS_MOUNT
default y
help
- Automatically add KSU's default mounts to sus_mount.
- No susfs command is needed in userspace.
- Only mount operation from process with ksu domain will be checked.
config KSU_SUSFS_AUTO_ADD_SUS_BIND_MOUNT
bool "Enable to hide suspicious bind mounts automatically (experimental)"
depends on KSU_SUSFS_SUS_MOUNT
default y
help
- Automatically add bind mounts to sus_mount.
- No susfs command is needed in userspace.
- Only mount operation from process with ksu domain will be checked.
config KSU_SUSFS_SUS_KSTAT
bool "Enable to spoof suspicious kstat"
depends on KSU_SUSFS
default y
help
- Allow spoofing the kstat of user-defined file/directory.
- Effective only on zygote spawned user app process.
config KSU_SUSFS_SUS_OVERLAYFS
bool "Enable to automatically spoof kstat and kstatfs for overlayed files/directories"
depends on KSU_SUSFS
default n
help
- Automatically spoof the kstat and kstatfs for overlayed files/directories.
- Enable it if you are using legacy KernelSU and don't have auto-hide features enabled.
- No susfs command is needed in userspace.
- Effective on all processes.
config KSU_SUSFS_TRY_UMOUNT
bool "Enable to use ksu's ksu_try_umount"
depends on KSU_SUSFS
default y
help
- Allow using ksu_try_umount to umount other user-defined mount paths prior to ksu's default umount paths.
- Effective on all NO-root-access-granted processes.
config KSU_SUSFS_AUTO_ADD_TRY_UMOUNT_FOR_BIND_MOUNT
bool "Enable to add bind mounts to ksu's ksu_try_umount automatically (experimental)"
depends on KSU_SUSFS_TRY_UMOUNT
default y
help
- Automatically add bind mounts to ksu's ksu_try_umount.
- No susfs command is needed in userspace.
- Only mount operation from process with ksu domain will be checked.
config KSU_SUSFS_SPOOF_UNAME
bool "Enable to spoof uname"
depends on KSU_SUSFS
default y
help
- Allow spoofing the string returned by uname syscall to user-defined string.
- Effective on all processes.
config KSU_SUSFS_ENABLE_LOG
bool "Enable logging susfs log to kernel"
depends on KSU_SUSFS
default y
help
- Allow logging susfs events to the kernel log; uncheck it to completely disable all susfs logging.
config KSU_SUSFS_HIDE_KSU_SUSFS_SYMBOLS
bool "Enable to automatically hide ksu and susfs symbols from /proc/kallsyms"
depends on KSU_SUSFS
default y
help
- Automatically hide ksu and susfs symbols from '/proc/kallsyms'.
- Effective on all processes.
config KSU_SUSFS_SPOOF_CMDLINE_OR_BOOTCONFIG
bool "Enable to spoof /proc/bootconfig (gki) or /proc/cmdline (non-gki)"
depends on KSU_SUSFS
default y
help
- Spoof the output of /proc/bootconfig (gki) or /proc/cmdline (non-gki) with a user-defined file.
- Effective on all processes.
config KSU_SUSFS_OPEN_REDIRECT
bool "Enable to redirect a path to be opened with another path (experimental)"
depends on KSU_SUSFS
default y
help
- Allow redirecting a target path to be opened with another user-defined path.
- Effective only on processes with uid < 2000.
- Please be reminded that process with open access to the target and redirected path can be detected.
endmenu
endmenu

drivers/kernelsu/LICENSE (new file, 339 lines)

@@ -0,0 +1,339 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.

91
drivers/kernelsu/Makefile Normal file

@@ -0,0 +1,91 @@
kernelsu-objs := ksu.o
kernelsu-objs += allowlist.o
kernelsu-objs += apk_sign.o
kernelsu-objs += sucompat.o
kernelsu-objs += throne_tracker.o
kernelsu-objs += core_hook.o
kernelsu-objs += ksud.o
kernelsu-objs += embed_ksud.o
kernelsu-objs += kernel_compat.o
kernelsu-objs += selinux/selinux.o
kernelsu-objs += selinux/sepolicy.o
kernelsu-objs += selinux/rules.o
ccflags-y += -I$(srctree)/security/selinux -I$(srctree)/security/selinux/include
ccflags-y += -I$(objtree)/security/selinux -include $(srctree)/include/uapi/asm-generic/errno.h
obj-$(CONFIG_KSU) += kernelsu.o
ccflags-y += -DKSU_VERSION=12998
# Checks hooks state
ifeq ($(strip $(CONFIG_KSU_MANUAL_HOOK)),y)
$(info -- KernelSU: CONFIG_KSU_MANUAL_HOOK)
else
$(info -- KernelSU: CONFIG_KSU_KPROBES_HOOK)
ccflags-y += -DCONFIG_KSU_KPROBES_HOOK
endif
# SELinux drivers check
ifeq ($(shell grep -q "current_sid(void)" $(srctree)/security/selinux/include/objsec.h; echo $$?),0)
ccflags-y += -DKSU_COMPAT_HAS_CURRENT_SID
endif
ifeq ($(shell grep -q "struct selinux_state " $(srctree)/security/selinux/include/security.h; echo $$?),0)
ccflags-y += -DKSU_COMPAT_HAS_SELINUX_STATE
endif
# Handle optional backports
ifeq ($(shell grep -q "strncpy_from_user_nofault" $(srctree)/include/linux/uaccess.h; echo $$?),0)
ccflags-y += -DKSU_OPTIONAL_STRNCPY
endif
ifeq ($(shell grep -q "ssize_t kernel_read" $(srctree)/fs/read_write.c; echo $$?),0)
ccflags-y += -DKSU_OPTIONAL_KERNEL_READ
endif
ifeq ($(shell grep "ssize_t kernel_write" $(srctree)/fs/read_write.c | grep -q "const void" ; echo $$?),0)
ccflags-y += -DKSU_OPTIONAL_KERNEL_WRITE
endif
ifeq ($(shell grep -q "int\s\+path_umount" $(srctree)/fs/namespace.c; echo $$?),0)
ccflags-y += -DKSU_HAS_PATH_UMOUNT
endif
# Checks Samsung UH drivers
ifeq ($(shell grep -q "CONFIG_KDP_CRED" $(srctree)/kernel/cred.c; echo $$?),0)
ccflags-y += -DSAMSUNG_UH_DRIVER_EXIST
endif
# Samsung SELinux Porting
ifeq ($(shell grep -q "SEC_SELINUX_PORTING_COMMON" $(srctree)/security/selinux/avc.c; echo $$?),0)
ccflags-y += -DSAMSUNG_SELINUX_PORTING
endif
# Custom Signs
ifdef KSU_EXPECTED_SIZE
ccflags-y += -DEXPECTED_SIZE=$(KSU_EXPECTED_SIZE)
$(info -- Custom KernelSU Manager signature size: $(KSU_EXPECTED_SIZE))
endif
ifdef KSU_EXPECTED_HASH
ccflags-y += -DEXPECTED_HASH=\"$(KSU_EXPECTED_HASH)\"
$(info -- Custom KernelSU Manager signature hash: $(KSU_EXPECTED_HASH))
endif
ifdef KSU_MANAGER_PACKAGE
ccflags-y += -DKSU_MANAGER_PACKAGE=\"$(KSU_MANAGER_PACKAGE)\"
$(info -- KernelSU Manager package name: $(KSU_MANAGER_PACKAGE))
endif
$(info -- Supported KernelSU Manager(s): tiann, rsuntk, 5ec1cff)
ccflags-y += -Wno-implicit-function-declaration -Wno-strict-prototypes -Wno-int-conversion -Wno-gcc-compat
ccflags-y += -Wno-declaration-after-statement -Wno-unused-function
## For susfs stuff ##
ifeq ($(shell test -e $(srctree)/fs/susfs.c; echo $$?),0)
$(eval SUSFS_VERSION=$(shell cat $(srctree)/include/linux/susfs.h | grep -E '^#define SUSFS_VERSION' | cut -d' ' -f3 | sed 's/"//g'))
$(info )
$(info -- SUSFS_VERSION: $(SUSFS_VERSION))
else
$(info -- You have not integrated susfs into your kernel.)
$(info -- Read: https://gitlab.com/simonpunk/susfs4ksu)
endif
# Keep a new line here!! Because someone may append config
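The Makefile above probes the kernel tree with `grep -q`, then compares the echoed exit status against `0` inside `ifeq` to turn each detected symbol into a `-D` compile flag. A minimal shell sketch of that probe pattern (standalone, not part of the Makefile; the source string is a stand-in for the real tree files):

```shell
# grep -q exits 0 when the pattern is found; the Makefile's
# "ifeq ($(shell grep -q ... ; echo $$?),0)" turns that exit
# status into a conditional -D flag.
src='int path_umount(struct path *path, int flags);'
if printf '%s\n' "$src" | grep -q 'path_umount'; then
    flags='-DKSU_HAS_PATH_UMOUNT'
else
    flags=''
fi
echo "$flags"
```

Probing source files textually like this is fragile against refactors, but it lets one driver build against many kernel trees without Kconfig changes.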


@@ -0,0 +1,528 @@
#include <linux/capability.h>
#include <linux/compiler.h>
#include <linux/fs.h>
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/version.h>
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 14, 0)
#include <linux/compiler_types.h>
#endif
#include "ksu.h"
#include "klog.h" // IWYU pragma: keep
#include "selinux/selinux.h"
#include "kernel_compat.h"
#include "allowlist.h"
#include "manager.h"
#define FILE_MAGIC 0x7f4b5355 // ' KSU', u32
#define FILE_FORMAT_VERSION 3 // u32
#define KSU_APP_PROFILE_PRESERVE_UID 9999 // NOBODY_UID
#define KSU_DEFAULT_SELINUX_DOMAIN "u:r:su:s0"
static DEFINE_MUTEX(allowlist_mutex);
// default profiles, these may be used frequently, so we cache it
static struct root_profile default_root_profile;
static struct non_root_profile default_non_root_profile;
static int allow_list_arr[PAGE_SIZE / sizeof(int)] __read_mostly __aligned(PAGE_SIZE);
static int allow_list_pointer __read_mostly = 0;
static void remove_uid_from_arr(uid_t uid)
{
int *temp_arr;
int i, j;
if (allow_list_pointer == 0)
return;
temp_arr = kmalloc(sizeof(allow_list_arr), GFP_KERNEL);
if (temp_arr == NULL) {
pr_err("%s: unable to allocate memory\n", __func__);
return;
}
for (i = j = 0; i < allow_list_pointer; i++) {
if (allow_list_arr[i] == uid)
continue;
temp_arr[j++] = allow_list_arr[i];
}
allow_list_pointer = j;
for (; j < ARRAY_SIZE(allow_list_arr); j++)
temp_arr[j] = -1;
memcpy(&allow_list_arr, temp_arr, PAGE_SIZE);
kfree(temp_arr);
}
static void init_default_profiles(void)
{
kernel_cap_t full_cap = CAP_FULL_SET;
default_root_profile.uid = 0;
default_root_profile.gid = 0;
default_root_profile.groups_count = 1;
default_root_profile.groups[0] = 0;
memcpy(&default_root_profile.capabilities.effective, &full_cap,
sizeof(default_root_profile.capabilities.effective));
default_root_profile.namespaces = 0;
strcpy(default_root_profile.selinux_domain, KSU_DEFAULT_SELINUX_DOMAIN);
// This means that we will umount modules by default!
default_non_root_profile.umount_modules = true;
}
struct perm_data {
struct list_head list;
struct app_profile profile;
};
static struct list_head allow_list;
static uint8_t allow_list_bitmap[PAGE_SIZE] __read_mostly __aligned(PAGE_SIZE);
#define BITMAP_UID_MAX ((sizeof(allow_list_bitmap) * BITS_PER_BYTE) - 1)
#define KERNEL_SU_ALLOWLIST "/data/adb/ksu/.allowlist"
static struct work_struct ksu_save_work;
static struct work_struct ksu_load_work;
static bool persistent_allow_list(void);
void ksu_show_allow_list(void)
{
struct perm_data *p = NULL;
struct list_head *pos = NULL;
pr_info("ksu_show_allow_list\n");
list_for_each (pos, &allow_list) {
p = list_entry(pos, struct perm_data, list);
pr_info("uid :%d, allow: %d\n", p->profile.current_uid,
p->profile.allow_su);
}
}
#ifdef CONFIG_KSU_DEBUG
static void ksu_grant_root_to_shell(void)
{
struct app_profile profile = {
.version = KSU_APP_PROFILE_VER,
.allow_su = true,
.current_uid = 2000,
};
strcpy(profile.key, "com.android.shell");
strcpy(profile.rp_config.profile.selinux_domain, KSU_DEFAULT_SELINUX_DOMAIN);
ksu_set_app_profile(&profile, false);
}
#endif
bool ksu_get_app_profile(struct app_profile *profile)
{
struct perm_data *p = NULL;
struct list_head *pos = NULL;
bool found = false;
list_for_each (pos, &allow_list) {
p = list_entry(pos, struct perm_data, list);
bool uid_match = profile->current_uid == p->profile.current_uid;
if (uid_match) {
// found it, override it with ours
memcpy(profile, &p->profile, sizeof(*profile));
found = true;
goto exit;
}
}
exit:
return found;
}
static inline bool forbid_system_uid(uid_t uid) {
#define SHELL_UID 2000
#define SYSTEM_UID 1000
return uid < SHELL_UID && uid != SYSTEM_UID;
}
static bool profile_valid(struct app_profile *profile)
{
if (!profile) {
return false;
}
if (profile->version < KSU_APP_PROFILE_VER) {
pr_info("Unsupported profile version: %d\n", profile->version);
return false;
}
if (profile->allow_su) {
if (profile->rp_config.profile.groups_count > KSU_MAX_GROUPS) {
return false;
}
if (strlen(profile->rp_config.profile.selinux_domain) == 0) {
return false;
}
}
return true;
}
bool ksu_set_app_profile(struct app_profile *profile, bool persist)
{
struct perm_data *p = NULL;
struct list_head *pos = NULL;
bool result = false;
if (!profile_valid(profile)) {
pr_err("Failed to set app profile: invalid profile!\n");
return false;
}
list_for_each (pos, &allow_list) {
p = list_entry(pos, struct perm_data, list);
// both uid and package must match, otherwise it will break multiple package with different user id
if (profile->current_uid == p->profile.current_uid &&
!strcmp(profile->key, p->profile.key)) {
// found it, just override it all!
memcpy(&p->profile, profile, sizeof(*profile));
result = true;
goto out;
}
}
// not found, alloc a new node!
p = (struct perm_data *)kmalloc(sizeof(struct perm_data), GFP_KERNEL);
if (!p) {
pr_err("ksu_set_app_profile alloc failed\n");
return false;
}
memcpy(&p->profile, profile, sizeof(*profile));
if (profile->allow_su) {
pr_info("set root profile, key: %s, uid: %d, gid: %d, context: %s\n",
profile->key, profile->current_uid,
profile->rp_config.profile.gid,
profile->rp_config.profile.selinux_domain);
} else {
pr_info("set app profile, key: %s, uid: %d, umount modules: %d\n",
profile->key, profile->current_uid,
profile->nrp_config.profile.umount_modules);
}
list_add_tail(&p->list, &allow_list);
out:
if (profile->current_uid <= BITMAP_UID_MAX) {
if (profile->allow_su)
allow_list_bitmap[profile->current_uid / BITS_PER_BYTE] |= 1 << (profile->current_uid % BITS_PER_BYTE);
else
allow_list_bitmap[profile->current_uid / BITS_PER_BYTE] &= ~(1 << (profile->current_uid % BITS_PER_BYTE));
} else {
if (profile->allow_su) {
/*
* 1024 apps with uid higher than BITMAP_UID_MAX
* registered to request superuser?
*/
if (allow_list_pointer >= ARRAY_SIZE(allow_list_arr)) {
pr_err("too many apps registered\n");
WARN_ON(1);
return false;
}
allow_list_arr[allow_list_pointer++] = profile->current_uid;
} else {
remove_uid_from_arr(profile->current_uid);
}
}
result = true;
// check if the default profiles is changed, cache it to a single struct to accelerate access.
if (unlikely(!strcmp(profile->key, "$"))) {
// set default non root profile
memcpy(&default_non_root_profile, &profile->nrp_config.profile,
sizeof(default_non_root_profile));
}
if (unlikely(!strcmp(profile->key, "#"))) {
// set default root profile
memcpy(&default_root_profile, &profile->rp_config.profile,
sizeof(default_root_profile));
}
if (persist)
persistent_allow_list();
return result;
}
bool __ksu_is_allow_uid(uid_t uid)
{
int i;
if (unlikely(uid == 0)) {
// already root, but only allow our domain.
return ksu_is_ksu_domain();
}
if (forbid_system_uid(uid)) {
// do not bother going through the list if it's system
return false;
}
if (likely(ksu_is_manager_uid_valid()) && unlikely(ksu_get_manager_uid() == uid)) {
// manager is always allowed!
return true;
}
if (likely(uid <= BITMAP_UID_MAX)) {
return !!(allow_list_bitmap[uid / BITS_PER_BYTE] & (1 << (uid % BITS_PER_BYTE)));
} else {
for (i = 0; i < allow_list_pointer; i++) {
if (allow_list_arr[i] == uid)
return true;
}
}
return false;
}
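`__ksu_is_allow_uid` keeps two structures: a page-sized bitmap giving an O(1) answer for uids up to `BITMAP_UID_MAX`, and a fallback array that is scanned linearly for larger uids. A minimal userspace sketch of the bitmap fast path (simplified; `PAGE_SIZE` hard-coded, no locking or manager special-casing):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* One bit per uid: set/clear on profile updates, test on every check,
 * mirroring allow_list_bitmap in allowlist.c. */
#define PAGE_SIZE 4096
#define BITS_PER_BYTE 8
static uint8_t bitmap[PAGE_SIZE];
#define BITMAP_UID_MAX ((sizeof(bitmap) * BITS_PER_BYTE) - 1)

static void bitmap_set_uid(unsigned int uid, bool allow)
{
    if (allow)
        bitmap[uid / BITS_PER_BYTE] |= 1u << (uid % BITS_PER_BYTE);
    else
        bitmap[uid / BITS_PER_BYTE] &= ~(1u << (uid % BITS_PER_BYTE));
}

static bool bitmap_test_uid(unsigned int uid)
{
    return bitmap[uid / BITS_PER_BYTE] & (1u << (uid % BITS_PER_BYTE));
}
```

One page covers uids 0..32767, which spans every first-user Android app uid (10000..19999), so the linear-scan fallback is only needed for secondary users.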
bool ksu_uid_should_umount(uid_t uid)
{
struct app_profile profile = { .current_uid = uid };
if (likely(ksu_is_manager_uid_valid()) && unlikely(ksu_get_manager_uid() == uid)) {
// we should not umount on manager!
return false;
}
bool found = ksu_get_app_profile(&profile);
if (!found) {
// no app profile found, it must be non root app
return default_non_root_profile.umount_modules;
}
if (profile.allow_su) {
// if found and it is granted to su, we shouldn't umount for it
return false;
} else {
// found an app profile
if (profile.nrp_config.use_default) {
return default_non_root_profile.umount_modules;
} else {
return profile.nrp_config.profile.umount_modules;
}
}
}
struct root_profile *ksu_get_root_profile(uid_t uid)
{
struct perm_data *p = NULL;
struct list_head *pos = NULL;
list_for_each (pos, &allow_list) {
p = list_entry(pos, struct perm_data, list);
if (uid == p->profile.current_uid && p->profile.allow_su) {
if (!p->profile.rp_config.use_default) {
return &p->profile.rp_config.profile;
}
}
}
// use default profile
return &default_root_profile;
}
bool ksu_get_allow_list(int *array, int *length, bool allow)
{
struct perm_data *p = NULL;
struct list_head *pos = NULL;
int i = 0;
list_for_each (pos, &allow_list) {
p = list_entry(pos, struct perm_data, list);
// pr_info("get_allow_list uid: %d allow: %d\n", p->uid, p->allow);
if (p->profile.allow_su == allow) {
array[i++] = p->profile.current_uid;
}
}
*length = i;
return true;
}
static void do_save_allow_list(struct work_struct *work)
{
u32 magic = FILE_MAGIC;
u32 version = FILE_FORMAT_VERSION;
struct perm_data *p = NULL;
struct list_head *pos = NULL;
loff_t off = 0;
struct file *fp =
ksu_filp_open_compat(KERNEL_SU_ALLOWLIST, O_WRONLY | O_CREAT | O_TRUNC, 0644);
if (IS_ERR(fp)) {
pr_err("save_allow_list create file failed: %ld\n", PTR_ERR(fp));
return;
}
// store magic and version
if (ksu_kernel_write_compat(fp, &magic, sizeof(magic), &off) !=
sizeof(magic)) {
pr_err("save_allow_list write magic failed.\n");
goto exit;
}
if (ksu_kernel_write_compat(fp, &version, sizeof(version), &off) !=
sizeof(version)) {
pr_err("save_allow_list write version failed.\n");
goto exit;
}
list_for_each (pos, &allow_list) {
p = list_entry(pos, struct perm_data, list);
pr_info("save allow list, name: %s uid: %d, allow: %d\n",
p->profile.key, p->profile.current_uid,
p->profile.allow_su);
ksu_kernel_write_compat(fp, &p->profile, sizeof(p->profile),
&off);
}
exit:
filp_close(fp, 0);
}
static void do_load_allow_list(struct work_struct *work)
{
loff_t off = 0;
ssize_t ret = 0;
struct file *fp = NULL;
u32 magic;
u32 version;
#ifdef CONFIG_KSU_DEBUG
// always allow adb shell by default
ksu_grant_root_to_shell();
#endif
// load allowlist now!
fp = ksu_filp_open_compat(KERNEL_SU_ALLOWLIST, O_RDONLY, 0);
if (IS_ERR(fp)) {
pr_err("load_allow_list open file failed: %ld\n", PTR_ERR(fp));
return;
}
// verify magic
if (ksu_kernel_read_compat(fp, &magic, sizeof(magic), &off) !=
sizeof(magic) ||
magic != FILE_MAGIC) {
pr_err("allowlist file invalid: %d!\n", magic);
goto exit;
}
if (ksu_kernel_read_compat(fp, &version, sizeof(version), &off) !=
sizeof(version)) {
pr_err("allowlist read version: %d failed\n", version);
goto exit;
}
pr_info("allowlist version: %d\n", version);
while (true) {
struct app_profile profile;
ret = ksu_kernel_read_compat(fp, &profile, sizeof(profile),
&off);
if (ret <= 0) {
pr_info("load_allow_list read err: %zd\n", ret);
break;
}
pr_info("load_allow_uid, name: %s, uid: %d, allow: %d\n",
profile.key, profile.current_uid, profile.allow_su);
ksu_set_app_profile(&profile, false);
}
exit:
ksu_show_allow_list();
filp_close(fp, 0);
}
void ksu_prune_allowlist(bool (*is_uid_valid)(uid_t, char *, void *), void *data)
{
struct perm_data *np = NULL;
struct perm_data *n = NULL;
bool modified = false;
// TODO: use RCU!
mutex_lock(&allowlist_mutex);
list_for_each_entry_safe (np, n, &allow_list, list) {
uid_t uid = np->profile.current_uid;
char *package = np->profile.key;
// we use this uid for special cases, don't prune it!
bool is_preserved_uid = uid == KSU_APP_PROFILE_PRESERVE_UID;
if (!is_preserved_uid && !is_uid_valid(uid, package, data)) {
modified = true;
pr_info("prune uid: %d, package: %s\n", uid, package);
list_del(&np->list);
if (likely(uid <= BITMAP_UID_MAX)) {
allow_list_bitmap[uid / BITS_PER_BYTE] &= ~(1 << (uid % BITS_PER_BYTE));
}
remove_uid_from_arr(uid);
smp_mb();
kfree(np);
}
}
mutex_unlock(&allowlist_mutex);
if (modified) {
persistent_allow_list();
}
}
// make sure allow list works cross boot
static bool persistent_allow_list(void)
{
return ksu_queue_work(&ksu_save_work);
}
bool ksu_load_allow_list(void)
{
return ksu_queue_work(&ksu_load_work);
}
void ksu_allowlist_init(void)
{
int i;
BUILD_BUG_ON(sizeof(allow_list_bitmap) != PAGE_SIZE);
BUILD_BUG_ON(sizeof(allow_list_arr) != PAGE_SIZE);
for (i = 0; i < ARRAY_SIZE(allow_list_arr); i++)
allow_list_arr[i] = -1;
INIT_LIST_HEAD(&allow_list);
INIT_WORK(&ksu_save_work, do_save_allow_list);
INIT_WORK(&ksu_load_work, do_load_allow_list);
init_default_profiles();
}
void ksu_allowlist_exit(void)
{
struct perm_data *np = NULL;
struct perm_data *n = NULL;
do_save_allow_list(NULL);
// free allowlist
mutex_lock(&allowlist_mutex);
list_for_each_entry_safe (np, n, &allow_list, list) {
list_del(&np->list);
kfree(np);
}
mutex_unlock(&allowlist_mutex);
}


@@ -0,0 +1,27 @@
#ifndef __KSU_H_ALLOWLIST
#define __KSU_H_ALLOWLIST
#include <linux/types.h>
#include "ksu.h"
void ksu_allowlist_init(void);
void ksu_allowlist_exit(void);
bool ksu_load_allow_list(void);
void ksu_show_allow_list(void);
bool __ksu_is_allow_uid(uid_t uid);
#define ksu_is_allow_uid(uid) unlikely(__ksu_is_allow_uid(uid))
bool ksu_get_allow_list(int *array, int *length, bool allow);
void ksu_prune_allowlist(bool (*is_uid_exist)(uid_t, char *, void *), void *data);
bool ksu_get_app_profile(struct app_profile *);
bool ksu_set_app_profile(struct app_profile *, bool persist);
bool ksu_uid_should_umount(uid_t uid);
struct root_profile *ksu_get_root_profile(uid_t uid);
#endif
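The `ksu_is_allow_uid` macro wraps the real lookup in `unlikely()` because most uid checks are expected to fail. A small sketch of how that hint works, assuming GCC/Clang (`my_unlikely` and `lookup_uid` are illustrative names, not KernelSU APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* The kernel's unlikely() expands to __builtin_expect(!!(x), 0),
 * telling the compiler to lay out the deny path as the fall-through. */
#define my_unlikely(x) __builtin_expect(!!(x), 0)

static bool lookup_uid(unsigned int uid)
{
    return uid == 10123; /* hypothetical stand-in for the real list lookup */
}

#define is_allow_uid(uid) my_unlikely(lookup_uid(uid))
```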

335
drivers/kernelsu/apk_sign.c Normal file

@@ -0,0 +1,335 @@
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/version.h>
#ifdef CONFIG_KSU_DEBUG
#include <linux/moduleparam.h>
#endif
#include <crypto/hash.h>
#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 11, 0)
#include <crypto/sha2.h>
#else
#include <crypto/sha.h>
#endif
#include "apk_sign.h"
#include "klog.h" // IWYU pragma: keep
#include "kernel_compat.h"
#include "manager_sign.h"
struct sdesc {
struct shash_desc shash;
char ctx[];
};
static struct apk_sign_key {
unsigned size;
const char *sha256;
} apk_sign_keys[] = {
{EXPECTED_SIZE_OFFICIAL, EXPECTED_HASH_OFFICIAL}, // Official
{EXPECTED_SIZE_RSUNTK, EXPECTED_HASH_RSUNTK}, // RKSU
{EXPECTED_SIZE_5EC1CFF, EXPECTED_HASH_5EC1CFF}, // MKSU
{EXPECTED_SIZE_NEXT, EXPECTED_HASH_NEXT}, // ksu-next
#ifdef EXPECTED_SIZE
{EXPECTED_SIZE, EXPECTED_HASH}, // Custom
#endif
};
static struct sdesc *init_sdesc(struct crypto_shash *alg)
{
struct sdesc *sdesc;
int size;
size = sizeof(struct shash_desc) + crypto_shash_descsize(alg);
sdesc = kmalloc(size, GFP_KERNEL);
if (!sdesc)
return ERR_PTR(-ENOMEM);
sdesc->shash.tfm = alg;
return sdesc;
}
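`init_sdesc` sizes a single `kmalloc` to hold the `shash_desc` header plus the algorithm's variable-length context, with the `ctx[]` flexible array member occupying the tail of that one allocation. A userspace sketch of the same pattern (`desc`/`desc_alloc` are illustrative names):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct desc {
    size_t ctx_len;
    char ctx[];   /* flexible array member: tail of the same allocation */
};

static struct desc *desc_alloc(size_t ctx_len)
{
    /* one allocation covers header + context, like init_sdesc's kmalloc
     * of sizeof(struct shash_desc) + crypto_shash_descsize(alg) */
    struct desc *d = malloc(sizeof(*d) + ctx_len);
    if (!d)
        return NULL;
    d->ctx_len = ctx_len;
    memset(d->ctx, 0, ctx_len);
    return d;
}
```

One allocation instead of two means one failure path and one `kfree`, which is why the crypto API is commonly used this way.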
static int calc_hash(struct crypto_shash *alg, const unsigned char *data,
unsigned int datalen, unsigned char *digest)
{
struct sdesc *sdesc;
int ret;
sdesc = init_sdesc(alg);
if (IS_ERR(sdesc)) {
pr_info("can't alloc sdesc\n");
return PTR_ERR(sdesc);
}
ret = crypto_shash_digest(&sdesc->shash, data, datalen, digest);
kfree(sdesc);
return ret;
}
static int ksu_sha256(const unsigned char *data, unsigned int datalen,
unsigned char *digest)
{
struct crypto_shash *alg;
char *hash_alg_name = "sha256";
int ret;
alg = crypto_alloc_shash(hash_alg_name, 0, 0);
if (IS_ERR(alg)) {
pr_info("can't alloc alg %s\n", hash_alg_name);
return PTR_ERR(alg);
}
ret = calc_hash(alg, data, datalen, digest);
crypto_free_shash(alg);
return ret;
}
static bool check_block(struct file *fp, u32 *size4, loff_t *pos, u32 *offset)
{
int i;
struct apk_sign_key sign_key;
ksu_kernel_read_compat(fp, size4, 0x4, pos); // signer-sequence length
ksu_kernel_read_compat(fp, size4, 0x4, pos); // signer length
ksu_kernel_read_compat(fp, size4, 0x4, pos); // signed data length
*offset += 0x4 * 3;
ksu_kernel_read_compat(fp, size4, 0x4, pos); // digests-sequence length
*pos += *size4;
*offset += 0x4 + *size4;
ksu_kernel_read_compat(fp, size4, 0x4, pos); // certificates length
ksu_kernel_read_compat(fp, size4, 0x4, pos); // certificate length
*offset += 0x4 * 2;
for (i = 0; i < ARRAY_SIZE(apk_sign_keys); i++) {
sign_key = apk_sign_keys[i];
if (*size4 != sign_key.size)
continue;
*offset += *size4;
#define CERT_MAX_LENGTH 1024
char cert[CERT_MAX_LENGTH];
if (*size4 > CERT_MAX_LENGTH) {
pr_info("cert length overlimit\n");
return false;
}
ksu_kernel_read_compat(fp, cert, *size4, pos);
unsigned char digest[SHA256_DIGEST_SIZE];
if (IS_ERR(ksu_sha256(cert, *size4, digest))) {
pr_info("sha256 error\n");
return false;
}
char hash_str[SHA256_DIGEST_SIZE * 2 + 1];
hash_str[SHA256_DIGEST_SIZE * 2] = '\0';
bin2hex(hash_str, digest, SHA256_DIGEST_SIZE);
pr_info("sha256: %s, expected: %s\n", hash_str,
sign_key.sha256);
if (strcmp(sign_key.sha256, hash_str) == 0) {
return true;
}
}
return false;
}
struct zip_entry_header {
uint32_t signature;
uint16_t version;
uint16_t flags;
uint16_t compression;
uint16_t mod_time;
uint16_t mod_date;
uint32_t crc32;
uint32_t compressed_size;
uint32_t uncompressed_size;
uint16_t file_name_length;
uint16_t extra_field_length;
} __attribute__((packed));
// This check is necessary but not sufficient; it is enough for our purposes
static bool has_v1_signature_file(struct file *fp)
{
struct zip_entry_header header;
const char MANIFEST[] = "META-INF/MANIFEST.MF";
loff_t pos = 0;
while (ksu_kernel_read_compat(fp, &header,
sizeof(struct zip_entry_header), &pos) ==
sizeof(struct zip_entry_header)) {
if (header.signature != 0x04034b50) {
// 0x04034b50 is the ZIP local file header magic ("PK\x03\x04")
return false;
}
// Read the entry file name
if (header.file_name_length == sizeof(MANIFEST) - 1) {
char fileName[sizeof(MANIFEST)];
ksu_kernel_read_compat(fp, fileName,
header.file_name_length, &pos);
fileName[header.file_name_length] = '\0';
// Check if the entry matches META-INF/MANIFEST.MF
if (strncmp(MANIFEST, fileName, sizeof(MANIFEST) - 1) ==
0) {
return true;
}
} else {
// Skip the entry file name
pos += header.file_name_length;
}
// Skip to the next entry
pos += header.extra_field_length + header.compressed_size;
}
return false;
}
static __always_inline bool check_v2_signature(char *path)
{
unsigned char buffer[0x11] = { 0 };
u32 size4;
u64 size8, size_of_block;
loff_t pos;
bool v2_signing_valid = false;
int v2_signing_blocks = 0;
bool v3_signing_exist = false;
bool v3_1_signing_exist = false;
int i;
struct file *fp = ksu_filp_open_compat(path, O_RDONLY, 0);
if (IS_ERR(fp)) {
pr_err("open %s error.\n", path);
return false;
}
// disable inotify for this file
fp->f_mode |= FMODE_NONOTIFY;
// https://en.wikipedia.org/wiki/Zip_(file_format)#End_of_central_directory_record_(EOCD)
for (i = 0;; ++i) {
unsigned short n;
pos = generic_file_llseek(fp, -i - 2, SEEK_END);
ksu_kernel_read_compat(fp, &n, 2, &pos);
if (n == i) {
pos -= 22;
ksu_kernel_read_compat(fp, &size4, 4, &pos);
if ((size4 ^ 0xcafebabeu) == 0xccfbf1eeu) {
break;
}
}
if (i == 0xffff) {
pr_info("error: cannot find eocd\n");
goto clean;
}
}
pos += 12;
// offset
ksu_kernel_read_compat(fp, &size4, 0x4, &pos);
pos = size4 - 0x18;
ksu_kernel_read_compat(fp, &size8, 0x8, &pos);
ksu_kernel_read_compat(fp, buffer, 0x10, &pos);
if (strcmp((char *)buffer, "APK Sig Block 42")) {
goto clean;
}
pos = size4 - (size8 + 0x8);
ksu_kernel_read_compat(fp, &size_of_block, 0x8, &pos);
if (size_of_block != size8) {
goto clean;
}
int loop_count = 0;
while (loop_count++ < 10) {
uint32_t id;
uint32_t offset;
ksu_kernel_read_compat(fp, &size8, 0x8,
&pos); // sequence length
if (size8 == size_of_block) {
break;
}
ksu_kernel_read_compat(fp, &id, 0x4, &pos); // id
offset = 4;
if (id == 0x7109871au) {
v2_signing_blocks++;
v2_signing_valid = check_block(fp, &size4, &pos, &offset);
} else if (id == 0xf05368c0u) {
// http://aospxref.com/android-14.0.0_r2/xref/frameworks/base/core/java/android/util/apk/ApkSignatureSchemeV3Verifier.java#73
v3_signing_exist = true;
} else if (id == 0x1b93ad61u) {
// http://aospxref.com/android-14.0.0_r2/xref/frameworks/base/core/java/android/util/apk/ApkSignatureSchemeV3Verifier.java#74
v3_1_signing_exist = true;
} else {
#ifdef CONFIG_KSU_DEBUG
pr_info("Unknown id: 0x%08x\n", id);
#endif
}
pos += (size8 - offset);
}
if (v2_signing_blocks != 1) {
#ifdef CONFIG_KSU_DEBUG
pr_err("Unexpected v2 signature count: %d\n",
v2_signing_blocks);
#endif
v2_signing_valid = false;
}
if (v2_signing_valid) {
int has_v1_signing = has_v1_signature_file(fp);
if (has_v1_signing) {
pr_err("Unexpected v1 signature scheme found!\n");
filp_close(fp, 0);
return false;
}
}
clean:
filp_close(fp, 0);
if (v3_signing_exist || v3_1_signing_exist) {
#ifdef CONFIG_KSU_DEBUG
pr_err("Unexpected v3 signature scheme found!\n");
#endif
return false;
}
return v2_signing_valid;
}
#ifdef CONFIG_KSU_DEBUG
int ksu_debug_manager_uid = -1;
#include "manager.h"
static int set_expected_size(const char *val, const struct kernel_param *kp)
{
int rv = param_set_uint(val, kp);
ksu_set_manager_uid(ksu_debug_manager_uid);
pr_info("ksu_manager_uid set to %d\n", ksu_debug_manager_uid);
return rv;
}
static struct kernel_param_ops expected_size_ops = {
.set = set_expected_size,
.get = param_get_uint,
};
module_param_cb(ksu_debug_manager_uid, &expected_size_ops,
&ksu_debug_manager_uid, S_IRUSR | S_IWUSR);
#endif
bool ksu_is_manager_apk(char *path)
{
return check_v2_signature(path);
}


@@ -0,0 +1,8 @@
#ifndef __KSU_H_APK_V2_SIGN
#define __KSU_H_APK_V2_SIGN
#include <linux/types.h>
bool ksu_is_manager_apk(char *path);
#endif

102
drivers/kernelsu/arch.h Normal file

@@ -0,0 +1,102 @@
#ifndef __KSU_H_ARCH
#define __KSU_H_ARCH
#include <linux/version.h>
#if defined(__aarch64__)
#define __PT_PARM1_REG regs[0]
#define __PT_PARM2_REG regs[1]
#define __PT_PARM3_REG regs[2]
#define __PT_SYSCALL_PARM4_REG regs[3]
#define __PT_CCALL_PARM4_REG regs[3]
#define __PT_PARM5_REG regs[4]
#define __PT_PARM6_REG regs[5]
#define __PT_RET_REG regs[30]
#define __PT_FP_REG regs[29] /* Works only with CONFIG_FRAME_POINTER */
#define __PT_RC_REG regs[0]
#define __PT_SP_REG sp
#define __PT_IP_REG pc
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 16, 0)
#define PRCTL_SYMBOL "__arm64_sys_prctl"
#define SYS_READ_SYMBOL "__arm64_sys_read"
#define SYS_NEWFSTATAT_SYMBOL "__arm64_sys_newfstatat"
#define SYS_FSTATAT64_SYMBOL "__arm64_sys_fstatat64"
#define SYS_FACCESSAT_SYMBOL "__arm64_sys_faccessat"
#define SYS_EXECVE_SYMBOL "__arm64_sys_execve"
#define SYS_EXECVE_COMPAT_SYMBOL "__arm64_compat_sys_execve"
#else
#define PRCTL_SYMBOL "sys_prctl"
#define SYS_READ_SYMBOL "sys_read"
#define SYS_NEWFSTATAT_SYMBOL "sys_newfstatat"
#define SYS_FSTATAT64_SYMBOL "sys_fstatat64"
#define SYS_FACCESSAT_SYMBOL "sys_faccessat"
#define SYS_EXECVE_SYMBOL "sys_execve"
#define SYS_EXECVE_COMPAT_SYMBOL "compat_sys_execve"
#endif
#elif defined(__x86_64__)
#define __PT_PARM1_REG di
#define __PT_PARM2_REG si
#define __PT_PARM3_REG dx
/* syscall uses r10 for PARM4 */
#define __PT_SYSCALL_PARM4_REG r10
#define __PT_CCALL_PARM4_REG cx
#define __PT_PARM5_REG r8
#define __PT_PARM6_REG r9
#define __PT_RET_REG sp
#define __PT_FP_REG bp
#define __PT_RC_REG ax
#define __PT_SP_REG sp
#define __PT_IP_REG ip
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 16, 0)
#define PRCTL_SYMBOL "__x64_sys_prctl"
#define SYS_READ_SYMBOL "__x64_sys_read"
#define SYS_NEWFSTATAT_SYMBOL "__x64_sys_newfstatat"
#define SYS_FSTATAT64_SYMBOL "__x64_sys_fstatat64"
#define SYS_FACCESSAT_SYMBOL "__x64_sys_faccessat"
#define SYS_EXECVE_SYMBOL "__x64_sys_execve"
#define SYS_EXECVE_COMPAT_SYMBOL "__x64_compat_sys_execve"
#else
#define PRCTL_SYMBOL "sys_prctl"
#define SYS_READ_SYMBOL "sys_read"
#define SYS_NEWFSTATAT_SYMBOL "sys_newfstatat"
#define SYS_FSTATAT64_SYMBOL "sys_fstatat64"
#define SYS_FACCESSAT_SYMBOL "sys_faccessat"
#define SYS_EXECVE_SYMBOL "sys_execve"
#define SYS_EXECVE_COMPAT_SYMBOL "compat_sys_execve"
#endif
#else
#ifdef CONFIG_KSU_KPROBES_HOOK
#error "Unsupported arch"
#endif
#endif
/* allow some architectures to override `struct pt_regs` */
#ifndef __PT_REGS_CAST
#define __PT_REGS_CAST(x) (x)
#endif
#define PT_REGS_PARM1(x) (__PT_REGS_CAST(x)->__PT_PARM1_REG)
#define PT_REGS_PARM2(x) (__PT_REGS_CAST(x)->__PT_PARM2_REG)
#define PT_REGS_PARM3(x) (__PT_REGS_CAST(x)->__PT_PARM3_REG)
#define PT_REGS_SYSCALL_PARM4(x) (__PT_REGS_CAST(x)->__PT_SYSCALL_PARM4_REG)
#define PT_REGS_CCALL_PARM4(x) (__PT_REGS_CAST(x)->__PT_CCALL_PARM4_REG)
#define PT_REGS_PARM5(x) (__PT_REGS_CAST(x)->__PT_PARM5_REG)
#define PT_REGS_PARM6(x) (__PT_REGS_CAST(x)->__PT_PARM6_REG)
#define PT_REGS_RET(x) (__PT_REGS_CAST(x)->__PT_RET_REG)
#define PT_REGS_FP(x) (__PT_REGS_CAST(x)->__PT_FP_REG)
#define PT_REGS_RC(x) (__PT_REGS_CAST(x)->__PT_RC_REG)
#define PT_REGS_SP(x) (__PT_REGS_CAST(x)->__PT_SP_REG)
#define PT_REGS_IP(x) (__PT_REGS_CAST(x)->__PT_IP_REG)
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 16, 0)
#define PT_REAL_REGS(regs) ((struct pt_regs *)PT_REGS_PARM1(regs))
#else
#define PT_REAL_REGS(regs) ((regs))
#endif
#endif

1374
drivers/kernelsu/core_hook.c Normal file

File diff suppressed because it is too large


@@ -0,0 +1,9 @@
#ifndef __KSU_H_KSU_CORE
#define __KSU_H_KSU_CORE
#include <linux/init.h>
void __init ksu_core_init(void);
void ksu_core_exit(void);
#endif


@@ -0,0 +1,5 @@
// WARNING: THIS IS A STUB FILE
// This file will be regenerated by CI
unsigned int ksud_size = 0;
const char ksud[0] = {};


@@ -0,0 +1,2 @@
register_kprobe
unregister_kprobe


@@ -0,0 +1,28 @@
#ifndef __KSU_H_KSHOOK
#define __KSU_H_KSHOOK
#include <linux/fs.h>
#include <linux/types.h>
// For sucompat
int ksu_handle_faccessat(int *dfd, const char __user **filename_user, int *mode,
int *flags);
int ksu_handle_stat(int *dfd, const char __user **filename_user, int *flags);
// For ksud
int ksu_handle_vfs_read(struct file **file_ptr, char __user **buf_ptr,
size_t *count_ptr, loff_t **pos);
// For ksud and sucompat
int ksu_handle_execveat(int *fd, struct filename **filename_ptr, void *argv,
void *envp, int *flags);
// For volume button
int ksu_handle_input_handle_event(unsigned int *type, unsigned int *code,
int *value);
#endif


@@ -0,0 +1,194 @@
#include <linux/version.h>
#include <linux/fs.h>
#include <linux/nsproxy.h>
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0)
#include <linux/sched/task.h>
#else
#include <linux/sched.h>
#endif
#include <linux/uaccess.h>
#include "klog.h" // IWYU pragma: keep
#include "kernel_compat.h" // Huawei device check
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0) || \
defined(CONFIG_IS_HW_HISI) || \
defined(CONFIG_KSU_ALLOWLIST_WORKAROUND)
#include <linux/key.h>
#include <linux/errno.h>
#include <linux/cred.h>
struct key *init_session_keyring = NULL;
static inline int install_session_keyring(struct key *keyring)
{
struct cred *new;
int ret;
new = prepare_creds();
if (!new)
return -ENOMEM;
ret = install_session_keyring_to_cred(new, keyring);
if (ret < 0) {
abort_creds(new);
return ret;
}
return commit_creds(new);
}
#endif
extern struct task_struct init_task;
// mnt_ns context switch for environment that android_init->nsproxy->mnt_ns != init_task.nsproxy->mnt_ns, such as WSA
struct ksu_ns_fs_saved {
struct nsproxy *ns;
struct fs_struct *fs;
};
static void ksu_save_ns_fs(struct ksu_ns_fs_saved *ns_fs_saved)
{
ns_fs_saved->ns = current->nsproxy;
ns_fs_saved->fs = current->fs;
}
static void ksu_load_ns_fs(struct ksu_ns_fs_saved *ns_fs_saved)
{
current->nsproxy = ns_fs_saved->ns;
current->fs = ns_fs_saved->fs;
}
static bool android_context_saved_checked = false;
static bool android_context_saved_enabled = false;
static struct ksu_ns_fs_saved android_context_saved;
void ksu_android_ns_fs_check()
{
if (android_context_saved_checked)
return;
android_context_saved_checked = true;
task_lock(current);
if (current->nsproxy && current->fs &&
current->nsproxy->mnt_ns != init_task.nsproxy->mnt_ns) {
android_context_saved_enabled = true;
pr_info("android context saved enabled due to init mnt_ns(%p) != android mnt_ns(%p)\n",
current->nsproxy->mnt_ns, init_task.nsproxy->mnt_ns);
ksu_save_ns_fs(&android_context_saved);
} else {
pr_info("android context saved disabled\n");
}
task_unlock(current);
}
struct file *ksu_filp_open_compat(const char *filename, int flags, umode_t mode)
{
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0) || \
defined(CONFIG_IS_HW_HISI) || \
defined(CONFIG_KSU_ALLOWLIST_WORKAROUND)
if (init_session_keyring != NULL && !current_cred()->session_keyring &&
(current->flags & PF_WQ_WORKER)) {
pr_info("installing init session keyring for older kernel\n");
install_session_keyring(init_session_keyring);
}
#endif
// switch mnt_ns even if current is not wq_worker, to ensure what we open is the correct file in android mnt_ns, rather than user created mnt_ns
struct ksu_ns_fs_saved saved;
if (android_context_saved_enabled) {
pr_info("start switch current nsproxy and fs to android context\n");
task_lock(current);
ksu_save_ns_fs(&saved);
ksu_load_ns_fs(&android_context_saved);
task_unlock(current);
}
struct file *fp = filp_open(filename, flags, mode);
if (android_context_saved_enabled) {
task_lock(current);
ksu_load_ns_fs(&saved);
task_unlock(current);
pr_info("switch current nsproxy and fs back to saved successfully\n");
}
return fp;
}
ssize_t ksu_kernel_read_compat(struct file *p, void *buf, size_t count,
loff_t *pos)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 14, 0) || defined(KSU_OPTIONAL_KERNEL_READ)
return kernel_read(p, buf, count, pos);
#else
loff_t offset = pos ? *pos : 0;
ssize_t result = kernel_read(p, offset, (char *)buf, count);
if (pos && result > 0) {
*pos = offset + result;
}
return result;
#endif
}
ssize_t ksu_kernel_write_compat(struct file *p, const void *buf, size_t count,
loff_t *pos)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 14, 0) || defined(KSU_OPTIONAL_KERNEL_WRITE)
return kernel_write(p, buf, count, pos);
#else
loff_t offset = pos ? *pos : 0;
ssize_t result = kernel_write(p, buf, count, offset);
if (pos && result > 0) {
*pos = offset + result;
}
return result;
#endif
}
#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 8, 0) || defined(KSU_OPTIONAL_STRNCPY)
long ksu_strncpy_from_user_nofault(char *dst, const void __user *unsafe_addr,
long count)
{
return strncpy_from_user_nofault(dst, unsafe_addr, count);
}
#elif LINUX_VERSION_CODE >= KERNEL_VERSION(5, 3, 0)
long ksu_strncpy_from_user_nofault(char *dst, const void __user *unsafe_addr,
long count)
{
return strncpy_from_unsafe_user(dst, unsafe_addr, count);
}
#else
// Copied from: https://elixir.bootlin.com/linux/v4.9.337/source/mm/maccess.c#L201
long ksu_strncpy_from_user_nofault(char *dst, const void __user *unsafe_addr,
long count)
{
mm_segment_t old_fs = get_fs();
long ret;
if (unlikely(count <= 0))
return 0;
set_fs(USER_DS);
pagefault_disable();
ret = strncpy_from_user(dst, unsafe_addr, count);
pagefault_enable();
set_fs(old_fs);
if (ret >= count) {
ret = count;
dst[ret - 1] = '\0';
} else if (ret > 0) {
ret++;
}
return ret;
}
#endif
long ksu_strncpy_from_user_retry(char *dst, const void __user *unsafe_addr,
long count)
{
long ret = ksu_strncpy_from_user_nofault(dst, unsafe_addr, count);
if (likely(ret >= 0))
return ret;
// we faulted! fallback to slow path
if (unlikely(!ksu_access_ok(unsafe_addr, count)))
return -EFAULT;
return strncpy_from_user(dst, unsafe_addr, count);
}


@@ -0,0 +1,58 @@
#ifndef __KSU_H_KERNEL_COMPAT
#define __KSU_H_KERNEL_COMPAT
#include <linux/fs.h>
#include <linux/version.h>
#include <linux/cred.h>
#include "ss/policydb.h"
#include "linux/key.h"
/*
 * Adapt to Huawei HiSi kernels without affecting other kernels.
 * Huawei HiSi kernel EBITMAP enable/disable flag, from ss/ebitmap.h.
 */
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 9, 0)) && \
(LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0)) || \
(LINUX_VERSION_CODE >= KERNEL_VERSION(4, 14, 0)) && \
(LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0))
#ifdef HISI_SELINUX_EBITMAP_RO
#define CONFIG_IS_HW_HISI
#endif
#endif
// Checks for UH, KDP and RKP
#ifdef SAMSUNG_UH_DRIVER_EXIST
#if defined(CONFIG_UH) || defined(CONFIG_KDP) || defined(CONFIG_RKP)
#error "CONFIG_UH, CONFIG_KDP or CONFIG_RKP is enabled! Please disable or remove them before compiling a kernel with KernelSU!"
#endif
#endif
extern long ksu_strncpy_from_user_nofault(char *dst,
const void __user *unsafe_addr,
long count);
extern long ksu_strncpy_from_user_retry(char *dst,
const void __user *unsafe_addr,
long count);
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0) || \
defined(CONFIG_IS_HW_HISI) || \
defined(CONFIG_KSU_ALLOWLIST_WORKAROUND)
extern struct key *init_session_keyring;
#endif
extern void ksu_android_ns_fs_check();
extern struct file *ksu_filp_open_compat(const char *filename, int flags,
umode_t mode);
extern ssize_t ksu_kernel_read_compat(struct file *p, void *buf, size_t count,
loff_t *pos);
extern ssize_t ksu_kernel_write_compat(struct file *p, const void *buf,
size_t count, loff_t *pos);
#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 0, 0)
#define ksu_access_ok(addr, size) access_ok(addr, size)
#else
#define ksu_access_ok(addr, size) access_ok(VERIFY_READ, addr, size)
#endif
#endif

11
drivers/kernelsu/klog.h Normal file

@@ -0,0 +1,11 @@
#ifndef __KSU_H_KLOG
#define __KSU_H_KLOG
#include <linux/printk.h>
#ifdef pr_fmt
#undef pr_fmt
#define pr_fmt(fmt) "KernelSU: " fmt
#endif
#endif

143
drivers/kernelsu/ksu.c Normal file

@@ -0,0 +1,143 @@
#include <linux/export.h>
#include <linux/fs.h>
#include <linux/kobject.h>
#include <linux/module.h>
#include <linux/workqueue.h>
#include "allowlist.h"
#include "arch.h"
#include "core_hook.h"
#include "klog.h" // IWYU pragma: keep
#include "ksu.h"
#include "throne_tracker.h"
#ifdef CONFIG_KSU_SUSFS
#include <linux/susfs.h>
#endif
#ifdef CONFIG_KSU_CMDLINE
#include <linux/init.h>
// use get_ksu_state()!
unsigned int enable_kernelsu = 1; // enabled by default
static int __init read_kernelsu_state(char *s)
{
if (s)
enable_kernelsu = simple_strtoul(s, NULL, 0);
return 1;
}
__setup("kernelsu.enabled=", read_kernelsu_state);
bool get_ksu_state(void) { return enable_kernelsu >= 1; }
#else
bool get_ksu_state(void) { return true; }
#endif /* CONFIG_KSU_CMDLINE */
static struct workqueue_struct *ksu_workqueue;
bool ksu_queue_work(struct work_struct *work)
{
return queue_work(ksu_workqueue, work);
}
extern int ksu_handle_execveat_sucompat(int *fd, struct filename **filename_ptr,
void *argv, void *envp, int *flags);
extern int ksu_handle_execveat_ksud(int *fd, struct filename **filename_ptr,
void *argv, void *envp, int *flags);
int ksu_handle_execveat(int *fd, struct filename **filename_ptr, void *argv,
void *envp, int *flags)
{
ksu_handle_execveat_ksud(fd, filename_ptr, argv, envp, flags);
return ksu_handle_execveat_sucompat(fd, filename_ptr, argv, envp,
flags);
}
extern void ksu_sucompat_init();
extern void ksu_sucompat_exit();
extern void ksu_ksud_init();
extern void ksu_ksud_exit();
int __init ksu_kernelsu_init(void)
{
pr_info("kernelsu.enabled=%d\n",
(int)get_ksu_state());
#ifdef CONFIG_KSU_CMDLINE
if (!get_ksu_state()) {
pr_info_once("driver is disabled\n");
return 0;
}
#endif
#ifdef CONFIG_KSU_DEBUG
pr_alert("*************************************************************");
pr_alert("** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE **");
pr_alert("** **");
pr_alert("** You are running KernelSU in DEBUG mode **");
pr_alert("** **");
pr_alert("** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE **");
pr_alert("*************************************************************");
#endif
#ifdef CONFIG_KSU_SUSFS
susfs_init();
#endif
ksu_core_init();
ksu_workqueue = alloc_ordered_workqueue("kernelsu_work_queue", 0);
ksu_allowlist_init();
ksu_throne_tracker_init();
ksu_sucompat_init();
#ifdef CONFIG_KSU_KPROBES_HOOK
ksu_ksud_init();
#else
pr_debug("init ksu driver\n");
#endif
#ifdef MODULE
#ifndef CONFIG_KSU_DEBUG
kobject_del(&THIS_MODULE->mkobj.kobj);
#endif
#endif
return 0;
}
void ksu_kernelsu_exit(void)
{
#ifdef CONFIG_KSU_CMDLINE
if (!get_ksu_state()) {
return;
}
#endif
ksu_allowlist_exit();
ksu_throne_tracker_exit();
destroy_workqueue(ksu_workqueue);
#ifdef CONFIG_KSU_KPROBES_HOOK
ksu_ksud_exit();
#endif
ksu_sucompat_exit();
ksu_core_exit();
}
module_init(ksu_kernelsu_init);
module_exit(ksu_kernelsu_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("weishu");
MODULE_DESCRIPTION("Android KernelSU");
#include <linux/version.h>
#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 0, 0)
MODULE_IMPORT_NS(VFS_internal_I_am_really_a_filesystem_and_am_NOT_a_driver);
#endif

104
drivers/kernelsu/ksu.h Normal file

@@ -0,0 +1,104 @@
#ifndef __KSU_H_KSU
#define __KSU_H_KSU
#include <linux/types.h>
#include <linux/workqueue.h>
#define KERNEL_SU_VERSION KSU_VERSION
#define KERNEL_SU_OPTION 0xDEADBEEF
#define CMD_GRANT_ROOT 0
#define CMD_BECOME_MANAGER 1
#define CMD_GET_VERSION 2
#define CMD_ALLOW_SU 3
#define CMD_DENY_SU 4
#define CMD_GET_ALLOW_LIST 5
#define CMD_GET_DENY_LIST 6
#define CMD_REPORT_EVENT 7
#define CMD_SET_SEPOLICY 8
#define CMD_CHECK_SAFEMODE 9
#define CMD_GET_APP_PROFILE 10
#define CMD_SET_APP_PROFILE 11
#define CMD_UID_GRANTED_ROOT 12
#define CMD_UID_SHOULD_UMOUNT 13
#define CMD_IS_SU_ENABLED 14
#define CMD_ENABLE_SU 15
#define CMD_GET_MANAGER_UID 16
#define CMD_HOOK_MODE 0xC0DEAD1A
#define EVENT_POST_FS_DATA 1
#define EVENT_BOOT_COMPLETED 2
#define EVENT_MODULE_MOUNTED 3
#define KSU_APP_PROFILE_VER 2
#define KSU_MAX_PACKAGE_NAME 256
// NGROUPS_MAX on Linux is generally 65535, but we only support 32 groups.
#define KSU_MAX_GROUPS 32
#define KSU_SELINUX_DOMAIN 64
struct root_profile {
int32_t uid;
int32_t gid;
int32_t groups_count;
int32_t groups[KSU_MAX_GROUPS];
// kernel_cap_t is u32[2] for capabilities v3
struct {
u64 effective;
u64 permitted;
u64 inheritable;
} capabilities;
char selinux_domain[KSU_SELINUX_DOMAIN];
int32_t namespaces;
};
struct non_root_profile {
bool umount_modules;
};
struct app_profile {
// May be used for backward compatibility, although no explicit promises have been made.
u32 version;
// this is usually the package name of the app, but can be another value for special apps
char key[KSU_MAX_PACKAGE_NAME];
int32_t current_uid;
bool allow_su;
union {
struct {
bool use_default;
char template_name[KSU_MAX_PACKAGE_NAME];
struct root_profile profile;
} rp_config;
struct {
bool use_default;
struct non_root_profile profile;
} nrp_config;
};
};
bool ksu_queue_work(struct work_struct *work);
static inline int startswith(char *s, char *prefix)
{
return strncmp(s, prefix, strlen(prefix));
}
static inline int endswith(const char *s, const char *t)
{
size_t slen = strlen(s);
size_t tlen = strlen(t);
if (tlen > slen)
return 1;
return strcmp(s + slen - tlen, t);
}
#endif

683
drivers/kernelsu/ksud.c Normal file

@@ -0,0 +1,683 @@
#include <asm/current.h>
#include <linux/compat.h>
#include <linux/cred.h>
#include <linux/dcache.h>
#include <linux/err.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/version.h>
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 0)
#include <linux/input-event-codes.h>
#else
#include <uapi/linux/input.h>
#endif
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 1, 0)
#include <linux/aio.h>
#endif
#include <linux/kprobes.h>
#include <linux/printk.h>
#include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/workqueue.h>
#include "allowlist.h"
#include "arch.h"
#include "klog.h" // IWYU pragma: keep
#include "ksud.h"
#include "kernel_compat.h"
#include "selinux/selinux.h"
bool ksu_is_compat __read_mostly = false; // let it here
static const char KERNEL_SU_RC[] =
"\n"
"on post-fs-data\n"
" start logd\n"
// We should wait for post-fs-data to finish
" exec u:r:su:s0 root -- " KSUD_PATH " post-fs-data\n"
"\n"
"on nonencrypted\n"
" exec u:r:su:s0 root -- " KSUD_PATH " services\n"
"\n"
"on property:vold.decrypt=trigger_restart_framework\n"
" exec u:r:su:s0 root -- " KSUD_PATH " services\n"
"\n"
"on property:sys.boot_completed=1\n"
" exec u:r:su:s0 root -- " KSUD_PATH " boot-completed\n"
"\n"
"\n";
static void stop_vfs_read_hook();
static void stop_execve_hook();
static void stop_input_hook();
#ifdef CONFIG_KSU_KPROBES_HOOK
static struct work_struct stop_vfs_read_work;
static struct work_struct stop_execve_hook_work;
static struct work_struct stop_input_hook_work;
#else
bool ksu_vfs_read_hook __read_mostly = true;
bool ksu_execveat_hook __read_mostly = true;
bool ksu_input_hook __read_mostly = true;
#endif
u32 ksu_devpts_sid;
void ksu_on_post_fs_data(void)
{
static bool done = false;
if (done) {
pr_info("%s already done\n", __func__);
return;
}
done = true;
pr_info("%s!\n", __func__);
ksu_load_allow_list();
// sanity check; this may affect performance
stop_input_hook();
ksu_devpts_sid = ksu_get_devpts_sid();
pr_info("devpts sid: %d\n", ksu_devpts_sid);
}
#define MAX_ARG_STRINGS 0x7FFFFFFF
struct user_arg_ptr {
#ifdef CONFIG_COMPAT
bool is_compat;
#endif
union {
const char __user *const __user *native;
#ifdef CONFIG_COMPAT
const compat_uptr_t __user *compat;
#endif
} ptr;
};
static const char __user *get_user_arg_ptr(struct user_arg_ptr argv, int nr)
{
const char __user *native;
#ifdef CONFIG_COMPAT
if (unlikely(argv.is_compat)) {
compat_uptr_t compat;
if (get_user(compat, argv.ptr.compat + nr))
return ERR_PTR(-EFAULT);
ksu_is_compat = true;
return compat_ptr(compat);
}
#endif
if (get_user(native, argv.ptr.native + nr))
return ERR_PTR(-EFAULT);
return native;
}
/*
* count() counts the number of strings in array ARGV.
*/
/*
 * Make sure older GCC compilers can use __maybe_unused;
 * tested on 4.4.x ~ 4.9.x with GCC.
 */
static int __maybe_unused count(struct user_arg_ptr argv, int max)
{
int i = 0;
if (argv.ptr.native != NULL) {
for (;;) {
const char __user *p = get_user_arg_ptr(argv, i);
if (!p)
break;
if (IS_ERR(p))
return -EFAULT;
if (i >= max)
return -E2BIG;
++i;
if (fatal_signal_pending(current))
return -ERESTARTNOHAND;
cond_resched();
}
}
return i;
}
// IMPORTANT NOTE: the call from execve_handler_pre WON'T provide correct values for envp and flags in the GKI version
int ksu_handle_execveat_ksud(int *fd, struct filename **filename_ptr,
struct user_arg_ptr *argv,
struct user_arg_ptr *envp, int *flags)
{
#ifndef CONFIG_KSU_KPROBES_HOOK
if (!ksu_execveat_hook) {
return 0;
}
#endif
struct filename *filename;
static const char app_process[] = "/system/bin/app_process";
static bool first_app_process = true;
/* This applies to versions Android 10+ */
static const char system_bin_init[] = "/system/bin/init";
/* This applies to versions between Android 6 ~ 9 */
static const char old_system_init[] = "/init";
static bool init_second_stage_executed = false;
if (!filename_ptr)
return 0;
filename = *filename_ptr;
if (IS_ERR(filename)) {
return 0;
}
if (unlikely(!memcmp(filename->name, system_bin_init,
sizeof(system_bin_init) - 1) &&
argv)) {
// /system/bin/init executed
int argc = count(*argv, MAX_ARG_STRINGS);
pr_info("/system/bin/init argc: %d\n", argc);
if (argc > 1 && !init_second_stage_executed) {
const char __user *p = get_user_arg_ptr(*argv, 1);
if (p && !IS_ERR(p)) {
char first_arg[16];
ksu_strncpy_from_user_retry(
first_arg, p, sizeof(first_arg));
pr_info("/system/bin/init first arg: %s\n",
first_arg);
if (!strcmp(first_arg, "second_stage")) {
pr_info("/system/bin/init second_stage executed\n");
ksu_apply_kernelsu_rules();
init_second_stage_executed = true;
ksu_android_ns_fs_check();
}
} else {
pr_err("/system/bin/init parse args err!\n");
}
}
} else if (unlikely(!memcmp(filename->name, old_system_init,
sizeof(old_system_init) - 1) &&
argv)) {
// /init executed
int argc = count(*argv, MAX_ARG_STRINGS);
pr_info("/init argc: %d\n", argc);
if (argc > 1 && !init_second_stage_executed) {
/* This applies to versions between Android 6 ~ 7 */
const char __user *p = get_user_arg_ptr(*argv, 1);
if (p && !IS_ERR(p)) {
char first_arg[16];
ksu_strncpy_from_user_retry(
first_arg, p, sizeof(first_arg));
pr_info("/init first arg: %s\n", first_arg);
if (!strcmp(first_arg, "--second-stage")) {
pr_info("/init second_stage executed\n");
ksu_apply_kernelsu_rules();
init_second_stage_executed = true;
ksu_android_ns_fs_check();
}
} else {
pr_err("/init parse args err!\n");
}
} else if (argc == 1 && !init_second_stage_executed && envp) {
/* This applies to versions between Android 8 ~ 9 */
int envc = count(*envp, MAX_ARG_STRINGS);
if (envc > 0) {
int n;
for (n = 1; n <= envc; n++) {
const char __user *p =
get_user_arg_ptr(*envp, n);
if (!p || IS_ERR(p)) {
continue;
}
char env[256];
// Reading environment variable strings from user space
if (ksu_strncpy_from_user_retry(
env, p, sizeof(env)) < 0)
continue;
// Parsing environment variable names and values
char *env_name = env;
char *env_value = strchr(env, '=');
if (env_value == NULL)
continue;
// Replace equal sign with string terminator
*env_value = '\0';
env_value++;
// Check if the environment variable name and value are matching
if (!strcmp(env_name,
"INIT_SECOND_STAGE") &&
(!strcmp(env_value, "1") ||
!strcmp(env_value, "true"))) {
pr_info("/init second_stage executed\n");
ksu_apply_kernelsu_rules();
init_second_stage_executed =
true;
ksu_android_ns_fs_check();
}
}
}
}
}
if (unlikely(first_app_process && !memcmp(filename->name, app_process,
sizeof(app_process) - 1))) {
first_app_process = false;
pr_info("exec app_process, /data prepared, second_stage: %d\n",
init_second_stage_executed);
ksu_on_post_fs_data(); // we keep this for old ksud
stop_execve_hook();
}
return 0;
}
static ssize_t (*orig_read)(struct file *, char __user *, size_t, loff_t *);
static ssize_t (*orig_read_iter)(struct kiocb *, struct iov_iter *);
static struct file_operations fops_proxy;
static ssize_t read_count_append = 0;
static ssize_t read_proxy(struct file *file, char __user *buf, size_t count,
loff_t *pos)
{
bool first_read = file->f_pos == 0;
ssize_t ret = orig_read(file, buf, count, pos);
if (first_read) {
pr_info("read_proxy append %ld + %ld\n", ret,
read_count_append);
ret += read_count_append;
}
return ret;
}
static ssize_t read_iter_proxy(struct kiocb *iocb, struct iov_iter *to)
{
bool first_read = iocb->ki_pos == 0;
ssize_t ret = orig_read_iter(iocb, to);
if (first_read) {
pr_info("read_iter_proxy append %ld + %ld\n", ret,
read_count_append);
ret += read_count_append;
}
return ret;
}
int ksu_handle_vfs_read(struct file **file_ptr, char __user **buf_ptr,
size_t *count_ptr, loff_t **pos)
{
#ifndef CONFIG_KSU_KPROBES_HOOK
if (!ksu_vfs_read_hook) {
return 0;
}
#endif
struct file *file;
char __user *buf;
size_t count;
if (strcmp(current->comm, "init")) {
// we are only interested in the `init` process
return 0;
}
file = *file_ptr;
if (IS_ERR(file)) {
return 0;
}
if (!d_is_reg(file->f_path.dentry)) {
return 0;
}
const char *short_name = file->f_path.dentry->d_name.name;
if (strcmp(short_name, "atrace.rc")) {
// we are only interested in files named `atrace.rc`
return 0;
}
char path[256];
char *dpath = d_path(&file->f_path, path, sizeof(path));
if (IS_ERR(dpath)) {
return 0;
}
if (strcmp(dpath, "/system/etc/init/atrace.rc")) {
return 0;
}
// we only process the first read
static bool rc_inserted = false;
if (rc_inserted) {
// we don't need this kprobe, unregister it!
stop_vfs_read_hook();
return 0;
}
rc_inserted = true;
// now we can be sure that the init process is reading
// `/system/etc/init/atrace.rc`
buf = *buf_ptr;
count = *count_ptr;
size_t rc_count = strlen(KERNEL_SU_RC);
pr_info("vfs_read: %s, comm: %s, count: %zu, rc_count: %zu\n", dpath,
current->comm, count, rc_count);
if (count < rc_count) {
pr_err("count: %zu < rc_count: %zu\n", count, rc_count);
return 0;
}
size_t ret = copy_to_user(buf, KERNEL_SU_RC, rc_count);
if (ret) {
pr_err("copy ksud.rc failed: %zu\n", ret);
return 0;
}
// we've succeeded in inserting ksud.rc, now we need to proxy the read and modify the result!
// But we cannot modify the file_operations directly, because it's in read-only memory.
// We just replace the whole file_operations with a proxy one.
memcpy(&fops_proxy, file->f_op, sizeof(struct file_operations));
orig_read = file->f_op->read;
if (orig_read) {
fops_proxy.read = read_proxy;
}
orig_read_iter = file->f_op->read_iter;
if (orig_read_iter) {
fops_proxy.read_iter = read_iter_proxy;
}
// replace the file_operations
file->f_op = &fops_proxy;
read_count_append = rc_count;
*buf_ptr = buf + rc_count;
*count_ptr = count - rc_count;
return 0;
}
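The bookkeeping above is easy to misread: `copy_to_user()` places KERNEL_SU_RC at the start of the caller's buffer, `*buf_ptr`/`*count_ptr` are advanced so the real read lands after it, and `read_proxy()` later adds `read_count_append` back to the first read's return value. A minimal userspace sketch of that arithmetic (hypothetical helper, not the kernel code itself):

```c
#include <assert.h>
#include <string.h>

/* Simulate the first read through the ksud proxy: rc bytes are
 * prepended, the real read fills the remainder, and the returned
 * count covers both parts. */
static long proxied_first_read(char *buf, long count,
			       const char *rc, const char *file_content)
{
	long rc_count = (long)strlen(rc);
	if (count < rc_count)
		return -1;		/* kernel side bails out, read proceeds normally */
	memcpy(buf, rc, (size_t)rc_count);	/* copy_to_user(buf, KERNEL_SU_RC, rc_count) */
	buf += rc_count;			/* *buf_ptr = buf + rc_count   */
	count -= rc_count;			/* *count_ptr = count - rc_count */
	long ret = (long)strlen(file_content);	/* orig_read(...) */
	if (ret > count)
		ret = count;
	memcpy(buf, file_content, (size_t)ret);
	return ret + rc_count;			/* read_proxy: ret += read_count_append */
}
```

The caller sees one contiguous buffer with the rc content first, followed by the original file bytes, and a byte count covering both.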
int ksu_handle_sys_read(unsigned int fd, char __user **buf_ptr,
size_t *count_ptr)
{
struct file *file = fget(fd);
if (!file) {
return 0;
}
int result = ksu_handle_vfs_read(&file, buf_ptr, count_ptr, NULL);
fput(file);
return result;
}
static unsigned int volumedown_pressed_count = 0;
static bool is_volumedown_enough(unsigned int count)
{
return count >= 3;
}
int ksu_handle_input_handle_event(unsigned int *type, unsigned int *code,
int *value)
{
#ifndef CONFIG_KSU_KPROBES_HOOK
if (!ksu_input_hook) {
return 0;
}
#endif
if (*type == EV_KEY && *code == KEY_VOLUMEDOWN) {
int val = *value;
pr_info("KEY_VOLUMEDOWN val: %d\n", val);
if (val) {
// key pressed, count it
volumedown_pressed_count += 1;
if (is_volumedown_enough(volumedown_pressed_count)) {
stop_input_hook();
}
}
}
return 0;
}
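The handler above is just a threshold counter: only EV_KEY/KEY_VOLUMEDOWN key-down events (`value != 0`) increment the count, and three presses trip safe mode. A userspace sketch of the same logic (constant values match linux/input-event-codes.h; function names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

#define EV_KEY 0x01
#define KEY_VOLUMEDOWN 114

static unsigned int pressed_count;

static bool is_enough(unsigned int count) { return count >= 3; }

/* Mirror of ksu_handle_input_handle_event's counting logic:
 * key releases (value == 0) and unrelated events are ignored. */
static bool on_input_event(unsigned int type, unsigned int code, int value)
{
	if (type == EV_KEY && code == KEY_VOLUMEDOWN && value)
		pressed_count++;
	return is_enough(pressed_count);
}
```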
bool ksu_is_safe_mode()
{
static bool safe_mode = false;
if (safe_mode) {
// don't need to check again, userspace may call multiple times
return true;
}
// stop hook first!
stop_input_hook();
pr_info("volumedown_pressed_count: %u\n", volumedown_pressed_count);
if (is_volumedown_enough(volumedown_pressed_count)) {
// pressed 3 or more times
pr_info("KEY_VOLUMEDOWN pressed max times, safe mode detected!\n");
safe_mode = true;
return true;
}
return false;
}
#ifdef CONFIG_KSU_KPROBES_HOOK
// https://elixir.bootlin.com/linux/v5.10.158/source/fs/exec.c#L1864
static int execve_handler_pre(struct kprobe *p, struct pt_regs *regs)
{
int *fd = (int *)&PT_REGS_PARM1(regs);
struct filename **filename_ptr =
(struct filename **)&PT_REGS_PARM2(regs);
struct user_arg_ptr argv;
#ifdef CONFIG_COMPAT
argv.is_compat = PT_REGS_PARM3(regs);
if (unlikely(argv.is_compat)) {
argv.ptr.compat = PT_REGS_CCALL_PARM4(regs);
} else {
argv.ptr.native = PT_REGS_CCALL_PARM4(regs);
}
#else
argv.ptr.native = PT_REGS_PARM3(regs);
#endif
return ksu_handle_execveat_ksud(fd, filename_ptr, &argv, NULL, NULL);
}
static int sys_execve_handler_pre(struct kprobe *p, struct pt_regs *regs)
{
struct pt_regs *real_regs = PT_REAL_REGS(regs);
const char __user **filename_user =
(const char **)&PT_REGS_PARM1(real_regs);
const char __user *const __user *__argv =
(const char __user *const __user *)PT_REGS_PARM2(real_regs);
struct user_arg_ptr argv = { .ptr.native = __argv };
struct filename filename_in, *filename_p;
char path[32];
if (!filename_user)
return 0;
memset(path, 0, sizeof(path));
ksu_strncpy_from_user_nofault(path, *filename_user, sizeof(path));
path[sizeof(path) - 1] = '\0';
filename_in.name = path;
filename_p = &filename_in;
return ksu_handle_execveat_ksud(AT_FDCWD, &filename_p, &argv, NULL,
NULL);
}
static int sys_read_handler_pre(struct kprobe *p, struct pt_regs *regs)
{
struct pt_regs *real_regs = PT_REAL_REGS(regs);
unsigned int fd = PT_REGS_PARM1(real_regs);
char __user **buf_ptr = (char __user **)&PT_REGS_PARM2(real_regs);
size_t *count_ptr = (size_t *)&PT_REGS_PARM3(real_regs);
return ksu_handle_sys_read(fd, buf_ptr, count_ptr);
}
static int input_handle_event_handler_pre(struct kprobe *p,
struct pt_regs *regs)
{
unsigned int *type = (unsigned int *)&PT_REGS_PARM2(regs);
unsigned int *code = (unsigned int *)&PT_REGS_PARM3(regs);
int *value = (int *)&PT_REGS_CCALL_PARM4(regs);
return ksu_handle_input_handle_event(type, code, value);
}
static struct kprobe execve_kp = {
.symbol_name = SYS_EXECVE_SYMBOL,
.pre_handler = sys_execve_handler_pre,
};
static struct kprobe vfs_read_kp = {
.symbol_name = SYS_READ_SYMBOL,
.pre_handler = sys_read_handler_pre,
};
static struct kprobe input_event_kp = {
.symbol_name = "input_event",
.pre_handler = input_handle_event_handler_pre,
};
static void do_stop_vfs_read_hook(struct work_struct *work)
{
unregister_kprobe(&vfs_read_kp);
}
static void do_stop_execve_hook(struct work_struct *work)
{
unregister_kprobe(&execve_kp);
}
static void do_stop_input_hook(struct work_struct *work)
{
unregister_kprobe(&input_event_kp);
}
#else
static int ksu_execve_ksud_common(const char __user *filename_user,
struct user_arg_ptr *argv)
{
struct filename filename_in, *filename_p;
char path[32];
long len;
// return early if disabled.
if (!ksu_execveat_hook) {
return 0;
}
if (!filename_user)
return 0;
len = ksu_strncpy_from_user_nofault(path, filename_user, 32);
if (len <= 0)
return 0;
path[sizeof(path) - 1] = '\0';
// this is because ksu_handle_execveat_ksud accesses it as filename->name
filename_in.name = path;
filename_p = &filename_in;
return ksu_handle_execveat_ksud(AT_FDCWD, &filename_p, argv, NULL, NULL);
}
int __maybe_unused ksu_handle_execve_ksud(const char __user *filename_user,
const char __user *const __user *__argv)
{
struct user_arg_ptr argv = { .ptr.native = __argv };
return ksu_execve_ksud_common(filename_user, &argv);
}
#if defined(CONFIG_COMPAT) && defined(CONFIG_64BIT)
int __maybe_unused ksu_handle_compat_execve_ksud(const char __user *filename_user,
const compat_uptr_t __user *__argv)
{
struct user_arg_ptr argv = { .ptr.compat = __argv };
return ksu_execve_ksud_common(filename_user, &argv);
}
#endif /* COMPAT & 64BIT */
#endif
static void stop_vfs_read_hook()
{
#ifdef CONFIG_KSU_KPROBES_HOOK
bool ret = schedule_work(&stop_vfs_read_work);
pr_info("unregister vfs_read kprobe: %d!\n", ret);
#else
ksu_vfs_read_hook = false;
pr_info("stop vfs_read_hook\n");
#endif
}
static void stop_execve_hook()
{
#ifdef CONFIG_KSU_KPROBES_HOOK
bool ret = schedule_work(&stop_execve_hook_work);
pr_info("unregister execve kprobe: %d!\n", ret);
#else
ksu_execveat_hook = false;
pr_info("stop execve_hook\n");
#endif
}
static void stop_input_hook()
{
#ifdef CONFIG_KSU_KPROBES_HOOK
static bool input_hook_stopped = false;
if (input_hook_stopped) {
return;
}
input_hook_stopped = true;
bool ret = schedule_work(&stop_input_hook_work);
pr_info("unregister input kprobe: %d!\n", ret);
#else
if (!ksu_input_hook) { return; }
ksu_input_hook = false;
pr_info("stop input_hook\n");
#endif
}
// ksud: module support
void ksu_ksud_init()
{
#ifdef CONFIG_KSU_KPROBES_HOOK
int ret;
ret = register_kprobe(&execve_kp);
pr_info("ksud: execve_kp: %d\n", ret);
ret = register_kprobe(&vfs_read_kp);
pr_info("ksud: vfs_read_kp: %d\n", ret);
ret = register_kprobe(&input_event_kp);
pr_info("ksud: input_event_kp: %d\n", ret);
INIT_WORK(&stop_vfs_read_work, do_stop_vfs_read_hook);
INIT_WORK(&stop_execve_hook_work, do_stop_execve_hook);
INIT_WORK(&stop_input_hook_work, do_stop_input_hook);
#endif
}
void ksu_ksud_exit()
{
#ifdef CONFIG_KSU_KPROBES_HOOK
unregister_kprobe(&execve_kp);
// this should be done before unregistering vfs_read_kp
// unregister_kprobe(&vfs_read_kp);
unregister_kprobe(&input_event_kp);
#endif
}

drivers/kernelsu/ksud.h Normal file

@@ -0,0 +1,14 @@
#ifndef __KSU_H_KSUD
#define __KSU_H_KSUD
#include <linux/types.h>
#define KSUD_PATH "/data/adb/ksud"
void ksu_on_post_fs_data(void);
bool ksu_is_safe_mode(void);
extern u32 ksu_devpts_sid;
#endif


@@ -0,0 +1,36 @@
#ifndef __KSU_H_KSU_MANAGER
#define __KSU_H_KSU_MANAGER
#include <linux/cred.h>
#include <linux/types.h>
#define KSU_INVALID_UID -1
extern uid_t ksu_manager_uid; // DO NOT USE DIRECTLY
static inline bool ksu_is_manager_uid_valid()
{
return ksu_manager_uid != KSU_INVALID_UID;
}
static inline bool ksu_is_manager()
{
return unlikely(ksu_manager_uid == current_uid().val);
}
static inline uid_t ksu_get_manager_uid()
{
return ksu_manager_uid;
}
static inline void ksu_set_manager_uid(uid_t uid)
{
ksu_manager_uid = uid;
}
static inline void ksu_invalidate_manager_uid()
{
ksu_manager_uid = KSU_INVALID_UID;
}
#endif


@@ -0,0 +1,20 @@
#ifndef __KSU_H_MANAGER_SIGN
#define __KSU_H_MANAGER_SIGN
// rsuntk/KernelSU
#define EXPECTED_SIZE_RSUNTK 0x396
#define EXPECTED_HASH_RSUNTK "f415f4ed9435427e1fdf7f1fccd4dbc07b3d6b8751e4dbcec6f19671f427870b"
// 5ec1cff/KernelSU
#define EXPECTED_SIZE_5EC1CFF 384
#define EXPECTED_HASH_5EC1CFF "7e0c6d7278a3bb8e364e0fcba95afaf3666cf5ff3c245a3b63c8833bd0445cc4"
// tiann/KernelSU
#define EXPECTED_SIZE_OFFICIAL 0x033b
#define EXPECTED_HASH_OFFICIAL "c371061b19d8c7d7d6133c6a9bafe198fa944e50c1b31c9d8daa8d7f1fc2d2d6"
// ksu-next
#define EXPECTED_SIZE_NEXT 0x3e6
#define EXPECTED_HASH_NEXT "79e590113c4c4c0c222978e413a5faa801666957b1212a328e46c00c69821bf7"
#endif /* MANAGER_SIGN_H */


@@ -0,0 +1,542 @@
#include <linux/uaccess.h>
#include <linux/types.h>
#include <linux/version.h>
#include "../klog.h" // IWYU pragma: keep
#include "selinux.h"
#include "sepolicy.h"
#include "ss/services.h"
#include "linux/lsm_audit.h"
#include "xfrm.h"
#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 10, 0)
#define SELINUX_POLICY_INSTEAD_SELINUX_SS
#endif
#define KERNEL_SU_DOMAIN "su"
#define KERNEL_SU_FILE "ksu_file"
#define KERNEL_EXEC_TYPE "ksu_exec"
#define ALL NULL
static struct policydb *get_policydb(void)
{
struct policydb *db;
// selinux_state does not exist before 4.19
#ifdef KSU_COMPAT_USE_SELINUX_STATE
#ifdef SELINUX_POLICY_INSTEAD_SELINUX_SS
struct selinux_policy *policy = rcu_dereference(selinux_state.policy);
db = &policy->policydb;
#else
struct selinux_ss *ss = rcu_dereference(selinux_state.ss);
db = &ss->policydb;
#endif
#else
db = &policydb;
#endif
return db;
}
static DEFINE_MUTEX(apply_ksu_rules_mutex);
void ksu_apply_kernelsu_rules()
{
struct policydb *db;
if (!ksu_getenforce()) {
pr_info("SELinux permissive or disabled, applying rules anyway!\n");
}
mutex_lock(&apply_ksu_rules_mutex);
db = get_policydb();
ksu_permissive(db, KERNEL_SU_DOMAIN);
ksu_typeattribute(db, KERNEL_SU_DOMAIN, "mlstrustedsubject");
ksu_typeattribute(db, KERNEL_SU_DOMAIN, "netdomain");
ksu_typeattribute(db, KERNEL_SU_DOMAIN, "bluetoothdomain");
// Create unconstrained file type
ksu_type(db, KERNEL_SU_FILE, "file_type");
ksu_typeattribute(db, KERNEL_SU_FILE, "mlstrustedobject");
ksu_allow(db, ALL, KERNEL_SU_FILE, ALL, ALL);
// allow all!
ksu_allow(db, KERNEL_SU_DOMAIN, ALL, ALL, ALL);
// allow us to do any ioctl
if (db->policyvers >= POLICYDB_VERSION_XPERMS_IOCTL) {
ksu_allowxperm(db, KERNEL_SU_DOMAIN, ALL, "blk_file", ALL);
ksu_allowxperm(db, KERNEL_SU_DOMAIN, ALL, "fifo_file", ALL);
ksu_allowxperm(db, KERNEL_SU_DOMAIN, ALL, "chr_file", ALL);
ksu_allowxperm(db, KERNEL_SU_DOMAIN, ALL, "file", ALL);
}
// we need to save allowlist in /data/adb/ksu
ksu_allow(db, "kernel", "adb_data_file", "dir", ALL);
ksu_allow(db, "kernel", "adb_data_file", "file", ALL);
// we need to search /data/app
ksu_allow(db, "kernel", "apk_data_file", "file", "open");
ksu_allow(db, "kernel", "apk_data_file", "dir", "open");
ksu_allow(db, "kernel", "apk_data_file", "dir", "read");
ksu_allow(db, "kernel", "apk_data_file", "dir", "search");
// we may need to do mount on shell
ksu_allow(db, "kernel", "shell_data_file", "file", ALL);
// we need to read /data/system/packages.list
ksu_allow(db, "kernel", "kernel", "capability", "dac_override");
// Android 10+:
// http://aospxref.com/android-12.0.0_r3/xref/system/sepolicy/private/file_contexts#512
ksu_allow(db, "kernel", "packages_list_file", "file", ALL);
// Kernel 4.4
ksu_allow(db, "kernel", "packages_list_file", "dir", ALL);
// Android 9-:
// http://aospxref.com/android-9.0.0_r61/xref/system/sepolicy/private/file_contexts#360
ksu_allow(db, "kernel", "system_data_file", "file", ALL);
ksu_allow(db, "kernel", "system_data_file", "dir", ALL);
// our ksud triggered by init
ksu_allow(db, "init", "adb_data_file", "file", ALL);
ksu_allow(db, "init", "adb_data_file", "dir", ALL); // #1289
ksu_allow(db, "init", KERNEL_SU_DOMAIN, ALL, ALL);
// we need to umount modules in zygote
ksu_allow(db, "zygote", "adb_data_file", "dir", "search");
// copied from Magisk rules
// suRights
ksu_allow(db, "servicemanager", KERNEL_SU_DOMAIN, "dir", "search");
ksu_allow(db, "servicemanager", KERNEL_SU_DOMAIN, "dir", "read");
ksu_allow(db, "servicemanager", KERNEL_SU_DOMAIN, "file", "open");
ksu_allow(db, "servicemanager", KERNEL_SU_DOMAIN, "file", "read");
ksu_allow(db, "servicemanager", KERNEL_SU_DOMAIN, "process", "getattr");
ksu_allow(db, ALL, KERNEL_SU_DOMAIN, "process", "sigchld");
// allowLog
ksu_allow(db, "logd", KERNEL_SU_DOMAIN, "dir", "search");
ksu_allow(db, "logd", KERNEL_SU_DOMAIN, "file", "read");
ksu_allow(db, "logd", KERNEL_SU_DOMAIN, "file", "open");
ksu_allow(db, "logd", KERNEL_SU_DOMAIN, "file", "getattr");
// dumpsys
ksu_allow(db, ALL, KERNEL_SU_DOMAIN, "fd", "use");
ksu_allow(db, ALL, KERNEL_SU_DOMAIN, "fifo_file", "write");
ksu_allow(db, ALL, KERNEL_SU_DOMAIN, "fifo_file", "read");
ksu_allow(db, ALL, KERNEL_SU_DOMAIN, "fifo_file", "open");
ksu_allow(db, ALL, KERNEL_SU_DOMAIN, "fifo_file", "getattr");
// bootctl
ksu_allow(db, "hwservicemanager", KERNEL_SU_DOMAIN, "dir", "search");
ksu_allow(db, "hwservicemanager", KERNEL_SU_DOMAIN, "file", "read");
ksu_allow(db, "hwservicemanager", KERNEL_SU_DOMAIN, "file", "open");
ksu_allow(db, "hwservicemanager", KERNEL_SU_DOMAIN, "process",
"getattr");
// For mounting loop devices, mirrors, tmpfs
ksu_allow(db, "kernel", ALL, "file", "read");
ksu_allow(db, "kernel", ALL, "file", "write");
// Allow all binder transactions
ksu_allow(db, ALL, KERNEL_SU_DOMAIN, "binder", ALL);
// Allow system_server to kill the su process
ksu_allow(db, "system_server", KERNEL_SU_DOMAIN, "process", "getpgid");
ksu_allow(db, "system_server", KERNEL_SU_DOMAIN, "process", "sigkill");
#ifdef CONFIG_KSU_SUSFS
// Allow umount in zygote process without installing zygisk
ksu_allow(db, "zygote", "labeledfs", "filesystem", "unmount");
susfs_set_init_sid();
susfs_set_ksu_sid();
susfs_set_zygote_sid();
#endif
mutex_unlock(&apply_ksu_rules_mutex);
}
#define MAX_SEPOL_LEN 128
#define CMD_NORMAL_PERM 1
#define CMD_XPERM 2
#define CMD_TYPE_STATE 3
#define CMD_TYPE 4
#define CMD_TYPE_ATTR 5
#define CMD_ATTR 6
#define CMD_TYPE_TRANSITION 7
#define CMD_TYPE_CHANGE 8
#define CMD_GENFSCON 9
// keep it!
extern bool ksu_is_compat __read_mostly;
// armv7l kernel compat
#ifdef CONFIG_64BIT
#define usize u64
#else
#define usize u32
#endif
struct sepol_data {
u32 cmd;
u32 subcmd;
usize field_sepol1;
usize field_sepol2;
usize field_sepol3;
usize field_sepol4;
usize field_sepol5;
usize field_sepol6;
usize field_sepol7;
};
// ksud 32-bit on arm64 kernel
struct __maybe_unused sepol_data_compat {
u32 cmd;
u32 subcmd;
u32 field_sepol1;
u32 field_sepol2;
u32 field_sepol3;
u32 field_sepol4;
u32 field_sepol5;
u32 field_sepol6;
u32 field_sepol7;
};
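The compat struct exists because a 32-bit ksud lays the seven pointer fields out as u32, while a 64-bit ksud uses u64 on a CONFIG_64BIT kernel, so the kernel must pick the matching layout before reading `arg4`. A userspace sketch of the two layouts (`uintN_t` stand-ins for the kernel types; sizes assume a typical LP64 target):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel types used by the structs. */
typedef uint32_t u32;
typedef uint64_t usize; /* the CONFIG_64BIT case */

struct sepol_data {        /* layout produced by a 64-bit ksud */
	u32 cmd;
	u32 subcmd;
	usize field_sepol1, field_sepol2, field_sepol3, field_sepol4,
	      field_sepol5, field_sepol6, field_sepol7;
};

struct sepol_data_compat { /* layout produced by a 32-bit ksud */
	u32 cmd;
	u32 subcmd;
	u32 field_sepol1, field_sepol2, field_sepol3, field_sepol4,
	    field_sepol5, field_sepol6, field_sepol7;
};
```

Copying `sizeof(struct sepol_data)` bytes from a 32-bit caller would read 28 bytes past its buffer and misinterpret every field, which is why `ksu_is_compat` selects the 36-byte layout first.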
static int get_object(char *buf, char __user *user_object, size_t buf_sz,
char **object)
{
if (!user_object) {
*object = ALL;
return 0;
}
if (strncpy_from_user(buf, user_object, buf_sz) < 0) {
return -1;
}
buf[buf_sz - 1] = '\0';
*object = buf;
return 0;
}
// reset the AVC cache; otherwise new rules will not take effect for accesses that were already denied and cached
static void reset_avc_cache()
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 4, 0) || \
!defined(KSU_COMPAT_USE_SELINUX_STATE)
avc_ss_reset(0);
selnl_notify_policyload(0);
selinux_status_update_policyload(0);
#else
struct selinux_avc *avc = selinux_state.avc;
avc_ss_reset(avc, 0);
selnl_notify_policyload(0);
selinux_status_update_policyload(&selinux_state, 0);
#endif
selinux_xfrm_notify_policyload();
}
static DEFINE_MUTEX(ksu_handle_sepolicy_mutex);
int ksu_handle_sepolicy(unsigned long arg3, void __user *arg4)
{
if (!arg4) {
return -1;
}
if (!ksu_getenforce()) {
pr_info("SELinux permissive or disabled when handling policy!\n");
}
u32 cmd, subcmd;
char __user *sepol1, *sepol2, *sepol3, *sepol4, *sepol5, *sepol6, *sepol7;
if (unlikely(ksu_is_compat)) {
struct sepol_data_compat data_compat;
if (copy_from_user(&data_compat, arg4, sizeof(struct sepol_data_compat))) {
pr_err("sepol: copy sepol_data failed.\n");
return -1;
}
pr_info("sepol: running in compat mode!\n");
sepol1 = compat_ptr(data_compat.field_sepol1);
sepol2 = compat_ptr(data_compat.field_sepol2);
sepol3 = compat_ptr(data_compat.field_sepol3);
sepol4 = compat_ptr(data_compat.field_sepol4);
sepol5 = compat_ptr(data_compat.field_sepol5);
sepol6 = compat_ptr(data_compat.field_sepol6);
sepol7 = compat_ptr(data_compat.field_sepol7);
cmd = data_compat.cmd;
subcmd = data_compat.subcmd;
} else {
struct sepol_data data;
if (copy_from_user(&data, arg4, sizeof(struct sepol_data))) {
pr_err("sepol: copy sepol_data failed.\n");
return -1;
}
sepol1 = data.field_sepol1;
sepol2 = data.field_sepol2;
sepol3 = data.field_sepol3;
sepol4 = data.field_sepol4;
sepol5 = data.field_sepol5;
sepol6 = data.field_sepol6;
sepol7 = data.field_sepol7;
cmd = data.cmd;
subcmd = data.subcmd;
}
struct policydb *db;
mutex_lock(&ksu_handle_sepolicy_mutex);
db = get_policydb();
int ret = -1;
if (cmd == CMD_NORMAL_PERM) {
char src_buf[MAX_SEPOL_LEN];
char tgt_buf[MAX_SEPOL_LEN];
char cls_buf[MAX_SEPOL_LEN];
char perm_buf[MAX_SEPOL_LEN];
char *s, *t, *c, *p;
if (get_object(src_buf, sepol1, sizeof(src_buf), &s) < 0) {
pr_err("sepol: copy src failed.\n");
goto exit;
}
if (get_object(tgt_buf, sepol2, sizeof(tgt_buf), &t) < 0) {
pr_err("sepol: copy tgt failed.\n");
goto exit;
}
if (get_object(cls_buf, sepol3, sizeof(cls_buf), &c) < 0) {
pr_err("sepol: copy cls failed.\n");
goto exit;
}
if (get_object(perm_buf, sepol4, sizeof(perm_buf), &p) <
0) {
pr_err("sepol: copy perm failed.\n");
goto exit;
}
bool success = false;
if (subcmd == 1) {
success = ksu_allow(db, s, t, c, p);
} else if (subcmd == 2) {
success = ksu_deny(db, s, t, c, p);
} else if (subcmd == 3) {
success = ksu_auditallow(db, s, t, c, p);
} else if (subcmd == 4) {
success = ksu_dontaudit(db, s, t, c, p);
} else {
pr_err("sepol: unknown subcmd: %d\n", subcmd);
}
ret = success ? 0 : -1;
} else if (cmd == CMD_XPERM) {
char src_buf[MAX_SEPOL_LEN];
char tgt_buf[MAX_SEPOL_LEN];
char cls_buf[MAX_SEPOL_LEN];
char __maybe_unused
operation[MAX_SEPOL_LEN]; // it is always ioctl now!
char perm_set[MAX_SEPOL_LEN];
char *s, *t, *c;
if (get_object(src_buf, sepol1, sizeof(src_buf), &s) < 0) {
pr_err("sepol: copy src failed.\n");
goto exit;
}
if (get_object(tgt_buf, sepol2, sizeof(tgt_buf), &t) < 0) {
pr_err("sepol: copy tgt failed.\n");
goto exit;
}
if (get_object(cls_buf, sepol3, sizeof(cls_buf), &c) < 0) {
pr_err("sepol: copy cls failed.\n");
goto exit;
}
if (strncpy_from_user(operation, sepol4,
sizeof(operation)) < 0) {
pr_err("sepol: copy operation failed.\n");
goto exit;
}
if (strncpy_from_user(perm_set, sepol5, sizeof(perm_set)) <
0) {
pr_err("sepol: copy perm_set failed.\n");
goto exit;
}
bool success = false;
if (subcmd == 1) {
success = ksu_allowxperm(db, s, t, c, perm_set);
} else if (subcmd == 2) {
success = ksu_auditallowxperm(db, s, t, c, perm_set);
} else if (subcmd == 3) {
success = ksu_dontauditxperm(db, s, t, c, perm_set);
} else {
pr_err("sepol: unknown subcmd: %d\n", subcmd);
}
ret = success ? 0 : -1;
} else if (cmd == CMD_TYPE_STATE) {
char src[MAX_SEPOL_LEN];
if (strncpy_from_user(src, sepol1, sizeof(src)) < 0) {
pr_err("sepol: copy src failed.\n");
goto exit;
}
bool success = false;
if (subcmd == 1) {
success = ksu_permissive(db, src);
} else if (subcmd == 2) {
success = ksu_enforce(db, src);
} else {
pr_err("sepol: unknown subcmd: %d\n", subcmd);
}
if (success)
ret = 0;
} else if (cmd == CMD_TYPE || cmd == CMD_TYPE_ATTR) {
char type[MAX_SEPOL_LEN];
char attr[MAX_SEPOL_LEN];
if (strncpy_from_user(type, sepol1, sizeof(type)) < 0) {
pr_err("sepol: copy type failed.\n");
goto exit;
}
if (strncpy_from_user(attr, sepol2, sizeof(attr)) < 0) {
pr_err("sepol: copy attr failed.\n");
goto exit;
}
bool success = false;
if (cmd == CMD_TYPE) {
success = ksu_type(db, type, attr);
} else {
success = ksu_typeattribute(db, type, attr);
}
if (!success) {
pr_err("sepol: %d failed.\n", cmd);
goto exit;
}
ret = 0;
} else if (cmd == CMD_ATTR) {
char attr[MAX_SEPOL_LEN];
if (strncpy_from_user(attr, sepol1, sizeof(attr)) < 0) {
pr_err("sepol: copy attr failed.\n");
goto exit;
}
if (!ksu_attribute(db, attr)) {
pr_err("sepol: %d failed.\n", cmd);
goto exit;
}
ret = 0;
} else if (cmd == CMD_TYPE_TRANSITION) {
char src[MAX_SEPOL_LEN];
char tgt[MAX_SEPOL_LEN];
char cls[MAX_SEPOL_LEN];
char default_type[MAX_SEPOL_LEN];
char object[MAX_SEPOL_LEN];
if (strncpy_from_user(src, sepol1, sizeof(src)) < 0) {
pr_err("sepol: copy src failed.\n");
goto exit;
}
if (strncpy_from_user(tgt, sepol2, sizeof(tgt)) < 0) {
pr_err("sepol: copy tgt failed.\n");
goto exit;
}
if (strncpy_from_user(cls, sepol3, sizeof(cls)) < 0) {
pr_err("sepol: copy cls failed.\n");
goto exit;
}
if (strncpy_from_user(default_type, sepol4,
sizeof(default_type)) < 0) {
pr_err("sepol: copy default_type failed.\n");
goto exit;
}
char *real_object;
if (sepol5 == NULL) {
real_object = NULL;
} else {
if (strncpy_from_user(object, sepol5,
sizeof(object)) < 0) {
pr_err("sepol: copy object failed.\n");
goto exit;
}
real_object = object;
}
bool success = ksu_type_transition(db, src, tgt, cls,
default_type, real_object);
if (success)
ret = 0;
} else if (cmd == CMD_TYPE_CHANGE) {
char src[MAX_SEPOL_LEN];
char tgt[MAX_SEPOL_LEN];
char cls[MAX_SEPOL_LEN];
char default_type[MAX_SEPOL_LEN];
if (strncpy_from_user(src, sepol1, sizeof(src)) < 0) {
pr_err("sepol: copy src failed.\n");
goto exit;
}
if (strncpy_from_user(tgt, sepol2, sizeof(tgt)) < 0) {
pr_err("sepol: copy tgt failed.\n");
goto exit;
}
if (strncpy_from_user(cls, sepol3, sizeof(cls)) < 0) {
pr_err("sepol: copy cls failed.\n");
goto exit;
}
if (strncpy_from_user(default_type, sepol4,
sizeof(default_type)) < 0) {
pr_err("sepol: copy default_type failed.\n");
goto exit;
}
bool success = false;
if (subcmd == 1) {
success = ksu_type_change(db, src, tgt, cls,
default_type);
} else if (subcmd == 2) {
success = ksu_type_member(db, src, tgt, cls,
default_type);
} else {
pr_err("sepol: unknown subcmd: %d\n", subcmd);
}
if (success)
ret = 0;
} else if (cmd == CMD_GENFSCON) {
char name[MAX_SEPOL_LEN];
char path[MAX_SEPOL_LEN];
char context[MAX_SEPOL_LEN];
if (strncpy_from_user(name, sepol1, sizeof(name)) < 0) {
pr_err("sepol: copy name failed.\n");
goto exit;
}
if (strncpy_from_user(path, sepol2, sizeof(path)) < 0) {
pr_err("sepol: copy path failed.\n");
goto exit;
}
if (strncpy_from_user(context, sepol3, sizeof(context)) <
0) {
pr_err("sepol: copy context failed.\n");
goto exit;
}
if (!ksu_genfscon(db, name, path, context)) {
pr_err("sepol: %d failed.\n", cmd);
goto exit;
}
ret = 0;
} else {
pr_err("sepol: unknown cmd: %d\n", cmd);
}
exit:
mutex_unlock(&ksu_handle_sepolicy_mutex);
// only allow and xallow rules need the AVC cache reset, but we cannot
// tell here which command was applied, so we just reset it every time.
reset_avc_cache();
return ret;
}


@@ -0,0 +1,251 @@
#include "selinux.h"
#include "objsec.h"
#include "linux/version.h"
#include "../klog.h" // IWYU pragma: keep
#ifdef SAMSUNG_SELINUX_PORTING
#include "security.h" // Samsung SELinux Porting
#endif
#ifndef KSU_COMPAT_USE_SELINUX_STATE
#include "avc.h"
#endif
#define KERNEL_SU_DOMAIN "u:r:su:s0"
#ifdef CONFIG_KSU_SUSFS
#define KERNEL_INIT_DOMAIN "u:r:init:s0"
#define KERNEL_ZYGOTE_DOMAIN "u:r:zygote:s0"
u32 susfs_ksu_sid = 0;
u32 susfs_init_sid = 0;
u32 susfs_zygote_sid = 0;
#endif
static int transive_to_domain(const char *domain)
{
struct cred *cred;
struct task_security_struct *tsec;
u32 sid;
int error;
cred = (struct cred *)__task_cred(current);
tsec = cred->security;
if (!tsec) {
pr_err("tsec == NULL!\n");
return -1;
}
error = security_secctx_to_secid(domain, strlen(domain), &sid);
if (error) {
pr_info("security_secctx_to_secid %s -> sid: %d, error: %d\n",
domain, sid, error);
}
if (!error) {
tsec->sid = sid;
tsec->create_sid = 0;
tsec->keycreate_sid = 0;
tsec->sockcreate_sid = 0;
}
return error;
}
bool __maybe_unused is_ksu_transition(const struct task_security_struct *old_tsec,
const struct task_security_struct *new_tsec)
{
static u32 ksu_sid;
char *secdata;
u32 seclen;
bool allowed = false;
if (!ksu_sid)
security_secctx_to_secid("u:r:su:s0", strlen("u:r:su:s0"), &ksu_sid);
if (security_secid_to_secctx(old_tsec->sid, &secdata, &seclen))
return false;
allowed = (!strcmp("u:r:init:s0", secdata) && new_tsec->sid == ksu_sid);
security_release_secctx(secdata, seclen);
return allowed;
}
void ksu_setup_selinux(const char *domain)
{
if (transive_to_domain(domain)) {
pr_err("transive domain failed.\n");
return;
}
}
void ksu_setenforce(bool enforce)
{
#ifdef CONFIG_SECURITY_SELINUX_DEVELOP
#ifdef SAMSUNG_SELINUX_PORTING
selinux_enforcing = enforce;
#endif
#ifdef KSU_COMPAT_USE_SELINUX_STATE
selinux_state.enforcing = enforce;
#else
selinux_enforcing = enforce;
#endif
#endif
}
bool ksu_getenforce()
{
#ifdef CONFIG_SECURITY_SELINUX_DISABLE
#ifdef KSU_COMPAT_USE_SELINUX_STATE
if (selinux_state.disabled) {
#else
if (selinux_disabled) {
#endif
return false;
}
#endif
#ifdef CONFIG_SECURITY_SELINUX_DEVELOP
#ifdef SAMSUNG_SELINUX_PORTING
return selinux_enforcing;
#endif
#ifdef KSU_COMPAT_USE_SELINUX_STATE
return selinux_state.enforcing;
#else
return selinux_enforcing;
#endif
#else
return true;
#endif
}
#if (LINUX_VERSION_CODE < KERNEL_VERSION(5, 10, 0)) && \
!defined(KSU_COMPAT_HAS_CURRENT_SID)
/*
* get the subjective security ID of the current task
*/
static inline u32 current_sid(void)
{
const struct task_security_struct *tsec = current_security();
return tsec->sid;
}
#endif
bool ksu_is_ksu_domain()
{
char *domain;
u32 seclen;
bool result;
int err = security_secid_to_secctx(current_sid(), &domain, &seclen);
if (err) {
return false;
}
result = strncmp(KERNEL_SU_DOMAIN, domain, seclen) == 0;
security_release_secctx(domain, seclen);
return result;
}
bool ksu_is_zygote(void *sec)
{
struct task_security_struct *tsec = (struct task_security_struct *)sec;
if (!tsec) {
return false;
}
char *domain;
u32 seclen;
bool result;
int err = security_secid_to_secctx(tsec->sid, &domain, &seclen);
if (err) {
return false;
}
result = strncmp("u:r:zygote:s0", domain, seclen) == 0;
security_release_secctx(domain, seclen);
return result;
}
#ifdef CONFIG_KSU_SUSFS
static inline void susfs_set_sid(const char *secctx_name, u32 *out_sid)
{
int err;
if (!secctx_name || !out_sid) {
pr_err("secctx_name || out_sid is NULL\n");
return;
}
err = security_secctx_to_secid(secctx_name, strlen(secctx_name),
out_sid);
if (err) {
pr_err("failed setting sid for '%s', err: %d\n", secctx_name, err);
return;
}
pr_info("sid '%u' is set for secctx_name '%s'\n", *out_sid, secctx_name);
}
bool susfs_is_sid_equal(void *sec, u32 sid2) {
struct task_security_struct *tsec = (struct task_security_struct *)sec;
if (!tsec) {
return false;
}
return tsec->sid == sid2;
}
u32 susfs_get_sid_from_name(const char *secctx_name)
{
u32 out_sid = 0;
int err;
if (!secctx_name) {
pr_err("secctx_name is NULL\n");
return 0;
}
err = security_secctx_to_secid(secctx_name, strlen(secctx_name),
&out_sid);
if (err) {
pr_err("failed getting sid from secctx_name: %s, err: %d\n", secctx_name, err);
return 0;
}
return out_sid;
}
u32 susfs_get_current_sid(void) {
return current_sid();
}
void susfs_set_zygote_sid(void)
{
susfs_set_sid(KERNEL_ZYGOTE_DOMAIN, &susfs_zygote_sid);
}
bool susfs_is_current_zygote_domain(void) {
return unlikely(current_sid() == susfs_zygote_sid);
}
void susfs_set_ksu_sid(void)
{
susfs_set_sid(KERNEL_SU_DOMAIN, &susfs_ksu_sid);
}
bool susfs_is_current_ksu_domain(void) {
return unlikely(current_sid() == susfs_ksu_sid);
}
void susfs_set_init_sid(void)
{
susfs_set_sid(KERNEL_INIT_DOMAIN, &susfs_init_sid);
}
bool susfs_is_current_init_domain(void) {
return unlikely(current_sid() == susfs_init_sid);
}
#endif
#define DEVPTS_DOMAIN "u:object_r:ksu_file:s0"
u32 ksu_get_devpts_sid()
{
u32 devpts_sid = 0;
int err = security_secctx_to_secid(DEVPTS_DOMAIN, strlen(DEVPTS_DOMAIN),
&devpts_sid);
if (err) {
pr_info("get devpts sid err %d\n", err);
}
return devpts_sid;
}


@@ -0,0 +1,37 @@
#ifndef __KSU_H_SELINUX
#define __KSU_H_SELINUX
#include "linux/types.h"
#include "linux/version.h"
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5, 10, 0)) || defined(KSU_COMPAT_HAS_SELINUX_STATE)
#define KSU_COMPAT_USE_SELINUX_STATE
#endif
void ksu_setup_selinux(const char *);
void ksu_setenforce(bool);
bool ksu_getenforce();
bool ksu_is_ksu_domain();
bool ksu_is_zygote(void *cred);
void ksu_apply_kernelsu_rules();
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
bool susfs_is_sid_equal(void *sec, u32 sid2);
u32 susfs_get_sid_from_name(const char *secctx_name);
u32 susfs_get_current_sid(void);
void susfs_set_zygote_sid(void);
bool susfs_is_current_zygote_domain(void);
void susfs_set_ksu_sid(void);
bool susfs_is_current_ksu_domain(void);
void susfs_set_init_sid(void);
bool susfs_is_current_init_domain(void);
#endif
u32 ksu_get_devpts_sid();
#endif

File diff suppressed because it is too large


@@ -0,0 +1,46 @@
#ifndef __KSU_H_SEPOLICY
#define __KSU_H_SEPOLICY
#include <linux/types.h>
#include "ss/policydb.h"
// Operation on types
bool ksu_type(struct policydb *db, const char *name, const char *attr);
bool ksu_attribute(struct policydb *db, const char *name);
bool ksu_permissive(struct policydb *db, const char *type);
bool ksu_enforce(struct policydb *db, const char *type);
bool ksu_typeattribute(struct policydb *db, const char *type, const char *attr);
bool ksu_exists(struct policydb *db, const char *type);
// Access vector rules
bool ksu_allow(struct policydb *db, const char *src, const char *tgt,
const char *cls, const char *perm);
bool ksu_deny(struct policydb *db, const char *src, const char *tgt,
const char *cls, const char *perm);
bool ksu_auditallow(struct policydb *db, const char *src, const char *tgt,
const char *cls, const char *perm);
bool ksu_dontaudit(struct policydb *db, const char *src, const char *tgt,
const char *cls, const char *perm);
// Extended permissions access vector rules
bool ksu_allowxperm(struct policydb *db, const char *src, const char *tgt,
const char *cls, const char *range);
bool ksu_auditallowxperm(struct policydb *db, const char *src, const char *tgt,
const char *cls, const char *range);
bool ksu_dontauditxperm(struct policydb *db, const char *src, const char *tgt,
const char *cls, const char *range);
// Type rules
bool ksu_type_transition(struct policydb *db, const char *src, const char *tgt,
const char *cls, const char *def, const char *obj);
bool ksu_type_change(struct policydb *db, const char *src, const char *tgt,
const char *cls, const char *def);
bool ksu_type_member(struct policydb *db, const char *src, const char *tgt,
const char *cls, const char *def);
// File system labeling
bool ksu_genfscon(struct policydb *db, const char *fs_name, const char *path,
const char *ctx);
#endif

drivers/kernelsu/setup.sh Executable file

@@ -0,0 +1,75 @@
#!/bin/sh
set -eu
GKI_ROOT=$(pwd)
display_usage() {
echo "Usage: $0 [--cleanup | <commit-or-tag>]"
echo " --cleanup: Cleans up previous modifications made by the script."
echo " <commit-or-tag>: Sets up or updates the KernelSU to specified tag or commit."
echo " -h, --help: Displays this usage information."
echo " (no args): Sets up or updates the KernelSU environment to the latest tagged version."
}
initialize_variables() {
if test -d "$GKI_ROOT/common/drivers"; then
DRIVER_DIR="$GKI_ROOT/common/drivers"
elif test -d "$GKI_ROOT/drivers"; then
DRIVER_DIR="$GKI_ROOT/drivers"
else
echo '[ERROR] "drivers/" directory not found.'
exit 127
fi
DRIVER_MAKEFILE=$DRIVER_DIR/Makefile
DRIVER_KCONFIG=$DRIVER_DIR/Kconfig
}
# Reverts modifications made by this script
perform_cleanup() {
echo "[+] Cleaning up..."
[ -L "$DRIVER_DIR/kernelsu" ] && rm "$DRIVER_DIR/kernelsu" && echo "[-] Symlink removed."
grep -q "kernelsu" "$DRIVER_MAKEFILE" && sed -i '/kernelsu/d' "$DRIVER_MAKEFILE" && echo "[-] Makefile reverted."
grep -q "drivers/kernelsu/Kconfig" "$DRIVER_KCONFIG" && sed -i '/drivers\/kernelsu\/Kconfig/d' "$DRIVER_KCONFIG" && echo "[-] Kconfig reverted."
if [ -d "$GKI_ROOT/KernelSU" ]; then
rm -rf "$GKI_ROOT/KernelSU" && echo "[-] KernelSU directory deleted."
fi
}
# Sets up or updates the KernelSU environment
setup_kernelsu() {
echo "[+] Setting up KernelSU..."
if [ ! -d "$GKI_ROOT/KernelSU" ]; then
git clone https://github.com/rsuntk/KernelSU && echo "[+] Repository cloned."
fi
cd "$GKI_ROOT/KernelSU"
git stash && echo "[-] Stashed current changes."
if [ "$(git status | grep -Po 'v\d+(\.\d+)*' | head -n1)" ]; then
git checkout main && echo "[-] Switched to main branch."
fi
git pull && echo "[+] Repository updated."
if [ -z "${1-}" ]; then
git checkout "$(git describe --abbrev=0 --tags)" && echo "[-] Checked out latest tag."
else
git checkout "$1" && echo "[-] Checked out $1." || echo "[-] Checkout failed; keeping current branch."
fi
cd "$DRIVER_DIR"
ln -sf "$(realpath --relative-to="$DRIVER_DIR" "$GKI_ROOT/KernelSU/kernel")" "kernelsu" && echo "[+] Symlink created."
# Add entries in Makefile and Kconfig if not already existing
grep -q "kernelsu" "$DRIVER_MAKEFILE" || { printf "\nobj-\$(CONFIG_KSU) += kernelsu/\n" >> "$DRIVER_MAKEFILE" && echo "[+] Modified Makefile."; }
grep -q "source \"drivers/kernelsu/Kconfig\"" "$DRIVER_KCONFIG" || { sed -i "/endmenu/i\source \"drivers/kernelsu/Kconfig\"" "$DRIVER_KCONFIG" && echo "[+] Modified Kconfig."; }
echo '[+] Done.'
}
# Process command-line arguments
if [ "$#" -eq 0 ]; then
initialize_variables
setup_kernelsu
elif [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
display_usage
elif [ "$1" = "--cleanup" ]; then
initialize_variables
perform_cleanup
else
initialize_variables
setup_kernelsu "$@"
fi
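
The directory-detection fall-through in `initialize_variables` above (prefer `common/drivers`, fall back to `drivers`, fail otherwise) can be sketched as a standalone function; this is an illustration against a scratch tree, not the script itself, and `detect_driver_dir` is a hypothetical name:

```shell
#!/bin/sh
set -eu
# Hypothetical standalone mirror of initialize_variables' fall-through:
# prefer <root>/common/drivers, fall back to <root>/drivers, else fail.
detect_driver_dir() {
    root=$1
    if test -d "$root/common/drivers"; then
        echo "$root/common/drivers"
    elif test -d "$root/drivers"; then
        echo "$root/drivers"
    else
        echo '[ERROR] "drivers/" directory not found.' >&2
        return 127
    fi
}

# demo against a scratch tree with only drivers/
scratch=$(mktemp -d)
mkdir -p "$scratch/drivers"
detect_driver_dir "$scratch"   # prints $scratch/drivers
rm -rf "$scratch"
```

Note that `common/drivers` wins even when both exist, matching GKI trees where the kernel lives under `common/`.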

drivers/kernelsu/sucompat.c Normal file

@@ -0,0 +1,307 @@
#include <linux/dcache.h>
#include <linux/security.h>
#include <asm/current.h>
#include <linux/cred.h>
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/kprobes.h>
#include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/version.h>
#include <linux/ptrace.h>
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0)
#include <linux/sched/task_stack.h>
#else
#include <linux/sched.h>
#endif
#include "objsec.h"
#include "allowlist.h"
#include "arch.h"
#include "klog.h" // IWYU pragma: keep
#include "ksud.h"
#include "kernel_compat.h"
#define SU_PATH "/system/bin/su"
#define SH_PATH "/system/bin/sh"
static const char su[] = SU_PATH;
static const char ksud_path[] = KSUD_PATH;
extern void ksu_escape_to_root();
bool ksu_sucompat_hook_state __read_mostly = true;
static inline void __user *userspace_stack_buffer(const void *d, size_t len)
{
/* To avoid having to mmap a page in userspace, just write below the stack
* pointer. */
char __user *p = (void __user *)current_user_stack_pointer() - len;
return copy_to_user(p, d, len) ? NULL : p;
}
static inline char __user *sh_user_path(void)
{
const char sh_path[] = SH_PATH;
return userspace_stack_buffer(sh_path, sizeof(sh_path));
}
static inline char __user *ksud_user_path(void)
{
return userspace_stack_buffer(ksud_path, sizeof(ksud_path));
}
static inline bool __is_su_allowed(const void *ptr_to_check)
{
#ifndef CONFIG_KSU_KPROBES_HOOK
if (!ksu_sucompat_hook_state)
return false;
#endif
if (likely(!ksu_is_allow_uid(current_uid().val)))
return false;
if (unlikely(!ptr_to_check))
return false;
return true;
}
#define is_su_allowed(ptr) __is_su_allowed((const void *)ptr)
static int ksu_sucompat_user_common(const char __user **filename_user,
const char *syscall_name,
const bool escalate)
{
char path[sizeof(su)]; // sizeof includes nullterm already!
if (ksu_strncpy_from_user_retry(path,
*filename_user, sizeof(path)) <= 0) {
return 0;
}
path[sizeof(path) - 1] = '\0';
if (memcmp(path, su, sizeof(su)))
return 0;
if (escalate) {
pr_info("%s su found\n", syscall_name);
*filename_user = ksud_user_path();
ksu_escape_to_root(); // escalate !!
} else {
pr_info("%s su->sh!\n", syscall_name);
*filename_user = sh_user_path();
}
return 0;
}
int ksu_handle_faccessat(int *dfd, const char __user **filename_user, int *mode,
int *__unused_flags)
{
if (!is_su_allowed(filename_user))
return 0;
return ksu_sucompat_user_common(filename_user, "faccessat", false);
}
int ksu_handle_stat(int *dfd, const char __user **filename_user, int *flags)
{
if (!is_su_allowed(filename_user))
return 0;
return ksu_sucompat_user_common(filename_user, "newfstatat", false);
}
int ksu_handle_execve_sucompat(int *fd, const char __user **filename_user,
void *__never_use_argv, void *__never_use_envp,
int *__never_use_flags)
{
if (!is_su_allowed(filename_user))
return 0;
return ksu_sucompat_user_common(filename_user, "sys_execve", true);
}
int ksu_handle_execveat_sucompat(int *fd, struct filename **filename_ptr,
void *__never_use_argv, void *__never_use_envp,
int *__never_use_flags)
{
struct filename *filename;
if (!is_su_allowed(filename_ptr))
return 0;
filename = *filename_ptr;
if (IS_ERR(filename))
return 0;
if (likely(memcmp(filename->name, su, sizeof(su))))
return 0;
pr_info("do_execveat_common su found\n");
memcpy((void *)filename->name, ksud_path, sizeof(ksud_path));
ksu_escape_to_root();
return 0;
}
static int ksu_inline_handle_devpts(struct inode *inode)
{
if (!current->mm) {
return 0;
}
uid_t uid = current_uid().val;
if (uid % 100000 < 10000) {
// not untrusted_app, ignore it
return 0;
}
if (!ksu_is_allow_uid(uid))
return 0;
if (ksu_devpts_sid) {
#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 1, 0)
struct inode_security_struct *sec = selinux_inode(inode);
#else
struct inode_security_struct *sec =
(struct inode_security_struct *)inode->i_security;
#endif
if (sec) {
sec->sid = ksu_devpts_sid;
}
}
return 0;
}
int __ksu_handle_devpts(struct inode *inode)
{
#ifndef CONFIG_KSU_KPROBES_HOOK
if (!ksu_sucompat_hook_state)
return 0;
#endif
return ksu_inline_handle_devpts(inode);
}
// Dead code: ksu_handle_devpts is being phased out in favor of LSM hooks.
int __maybe_unused ksu_handle_devpts(struct inode *inode)
{
return 0;
}
#ifdef CONFIG_KSU_KPROBES_HOOK
static int faccessat_handler_pre(struct kprobe *p, struct pt_regs *regs)
{
struct pt_regs *real_regs = PT_REAL_REGS(regs);
int *dfd = (int *)&PT_REGS_PARM1(real_regs);
const char __user **filename_user =
(const char **)&PT_REGS_PARM2(real_regs);
int *mode = (int *)&PT_REGS_PARM3(real_regs);
return ksu_handle_faccessat(dfd, filename_user, mode, NULL);
}
static int newfstatat_handler_pre(struct kprobe *p, struct pt_regs *regs)
{
struct pt_regs *real_regs = PT_REAL_REGS(regs);
int *dfd = (int *)&PT_REGS_PARM1(real_regs);
const char __user **filename_user =
(const char **)&PT_REGS_PARM2(real_regs);
int *flags = (int *)&PT_REGS_SYSCALL_PARM4(real_regs);
return ksu_handle_stat(dfd, filename_user, flags);
}
static int execve_handler_pre(struct kprobe *p, struct pt_regs *regs)
{
struct pt_regs *real_regs = PT_REAL_REGS(regs);
const char __user **filename_user =
(const char **)&PT_REGS_PARM1(real_regs);
return ksu_handle_execve_sucompat(AT_FDCWD, filename_user, NULL, NULL,
NULL);
}
#ifdef MODULE
static struct kprobe *su_kps[6];
static int pts_unix98_lookup_pre(struct kprobe *p, struct pt_regs *regs)
{
struct inode *inode;
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 6, 0)
struct file *file = (struct file *)PT_REGS_PARM2(regs);
inode = file->f_path.dentry->d_inode;
#else
inode = (struct inode *)PT_REGS_PARM2(regs);
#endif
return ksu_inline_handle_devpts(inode);
}
#else
static struct kprobe *su_kps[5];
#endif
static struct kprobe *init_kprobe(const char *name,
kprobe_pre_handler_t handler)
{
struct kprobe *kp = kzalloc(sizeof(struct kprobe), GFP_KERNEL);
if (!kp)
return NULL;
kp->symbol_name = name;
kp->pre_handler = handler;
int ret = register_kprobe(kp);
pr_info("sucompat: register_%s kprobe: %d\n", name, ret);
if (ret) {
kfree(kp);
return NULL;
}
return kp;
}
static void destroy_kprobe(struct kprobe **kp_ptr)
{
struct kprobe *kp = *kp_ptr;
if (!kp)
return;
unregister_kprobe(kp);
synchronize_rcu();
kfree(kp);
*kp_ptr = NULL;
}
#endif
// sucompat: permitted processes can execute 'su' to gain root access.
void ksu_sucompat_init()
{
#ifdef CONFIG_KSU_KPROBES_HOOK
su_kps[0] = init_kprobe(SYS_EXECVE_SYMBOL, execve_handler_pre);
su_kps[1] = init_kprobe(SYS_EXECVE_COMPAT_SYMBOL, execve_handler_pre);
su_kps[2] = init_kprobe(SYS_FACCESSAT_SYMBOL, faccessat_handler_pre);
su_kps[3] = init_kprobe(SYS_NEWFSTATAT_SYMBOL, newfstatat_handler_pre);
su_kps[4] = init_kprobe(SYS_FSTATAT64_SYMBOL, newfstatat_handler_pre);
#ifdef MODULE
su_kps[5] = init_kprobe("pts_unix98_lookup", pts_unix98_lookup_pre);
#endif
#else
ksu_sucompat_hook_state = true;
pr_info("ksu_sucompat init\n");
#endif
}
void ksu_sucompat_exit()
{
#ifdef CONFIG_KSU_KPROBES_HOOK
int i;
for (i = 0; i < ARRAY_SIZE(su_kps); i++) {
destroy_kprobe(&su_kps[i]);
}
#else
ksu_sucompat_hook_state = false;
pr_info("ksu_sucompat exit\n");
#endif
}


@@ -0,0 +1,398 @@
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/version.h>
#include "allowlist.h"
#include "klog.h" // IWYU pragma: keep
#include "ksu.h"
#include "manager.h"
#include "throne_tracker.h"
#include "kernel_compat.h"
uid_t ksu_manager_uid = KSU_INVALID_UID;
#define SYSTEM_PACKAGES_LIST_PATH "/data/system/packages.list.tmp"
struct uid_data {
struct list_head list;
u32 uid;
char package[KSU_MAX_PACKAGE_NAME];
};
static int get_pkg_from_apk_path(char *pkg, const char *path)
{
int len = strlen(path);
if (len >= KSU_MAX_PACKAGE_NAME || len < 1)
return -1;
const char *last_slash = NULL;
const char *second_last_slash = NULL;
int i;
for (i = len - 1; i >= 0; i--) {
if (path[i] == '/') {
if (!last_slash) {
last_slash = &path[i];
} else {
second_last_slash = &path[i];
break;
}
}
}
if (!last_slash || !second_last_slash)
return -1;
const char *last_hyphen = strchr(second_last_slash, '-');
if (!last_hyphen || last_hyphen > last_slash)
return -1;
int pkg_len = last_hyphen - second_last_slash - 1;
if (pkg_len >= KSU_MAX_PACKAGE_NAME || pkg_len <= 0)
return -1;
// Copying the package name
strncpy(pkg, second_last_slash + 1, pkg_len);
pkg[pkg_len] = '\0';
return 0;
}
static void crown_manager(const char *apk, struct list_head *uid_data)
{
char pkg[KSU_MAX_PACKAGE_NAME];
if (get_pkg_from_apk_path(pkg, apk) < 0) {
pr_err("Failed to get package name from apk path: %s\n", apk);
return;
}
pr_info("manager pkg: %s\n", pkg);
#ifdef KSU_MANAGER_PACKAGE
// pkg is `/<real package>`
if (strncmp(pkg, KSU_MANAGER_PACKAGE, sizeof(KSU_MANAGER_PACKAGE))) {
pr_info("manager package is inconsistent with kernel build: %s\n",
KSU_MANAGER_PACKAGE);
return;
}
#endif
struct list_head *list = (struct list_head *)uid_data;
struct uid_data *np;
list_for_each_entry (np, list, list) {
if (strncmp(np->package, pkg, KSU_MAX_PACKAGE_NAME) == 0) {
pr_info("Crowning manager: %s(uid=%d)\n", pkg, np->uid);
ksu_set_manager_uid(np->uid);
break;
}
}
}
#define DATA_PATH_LEN 384 // 384 is enough for /data/app/<package>/base.apk
struct data_path {
char dirpath[DATA_PATH_LEN];
int depth;
struct list_head list;
};
struct apk_path_hash {
unsigned int hash;
bool exists;
struct list_head list;
};
static struct list_head apk_path_hash_list;
struct my_dir_context {
struct dir_context ctx;
struct list_head *data_path_list;
char *parent_dir;
void *private_data;
int depth;
int *stop;
};
// https://docs.kernel.org/filesystems/porting.html
// filldir_t (readdir callbacks) calling conventions have changed. Instead of
// returning 0 or -E... it returns bool now: false means "no more" (as -E...
// used to) and true means "keep going" (as 0 did in the old convention).
// Rationale: callers never looked at specific -E... values anyway.
// iterate_shared() instances require no changes at all; all filldir_t ones
// in the tree were converted.
#if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 1, 0)
#define FILLDIR_RETURN_TYPE bool
#define FILLDIR_ACTOR_CONTINUE true
#define FILLDIR_ACTOR_STOP false
#else
#define FILLDIR_RETURN_TYPE int
#define FILLDIR_ACTOR_CONTINUE 0
#define FILLDIR_ACTOR_STOP -EINVAL
#endif
FILLDIR_RETURN_TYPE my_actor(struct dir_context *ctx, const char *name,
int namelen, loff_t off, u64 ino,
unsigned int d_type)
{
struct my_dir_context *my_ctx =
container_of(ctx, struct my_dir_context, ctx);
char dirpath[DATA_PATH_LEN];
if (!my_ctx) {
pr_err("Invalid context\n");
return FILLDIR_ACTOR_STOP;
}
if (my_ctx->stop && *my_ctx->stop) {
pr_info("Stop searching\n");
return FILLDIR_ACTOR_STOP;
}
if (!strncmp(name, "..", namelen) || !strncmp(name, ".", namelen))
return FILLDIR_ACTOR_CONTINUE; // Skip "." and ".."
if (d_type == DT_DIR && namelen >= 8 && !strncmp(name, "vmdl", 4) &&
!strncmp(name + namelen - 4, ".tmp", 4)) {
pr_info("Skipping directory: %.*s\n", namelen, name);
return FILLDIR_ACTOR_CONTINUE; // Skip staging package
}
if (snprintf(dirpath, DATA_PATH_LEN, "%s/%.*s", my_ctx->parent_dir,
namelen, name) >= DATA_PATH_LEN) {
pr_err("Path too long: %s/%.*s\n", my_ctx->parent_dir, namelen,
name);
return FILLDIR_ACTOR_CONTINUE;
}
if (d_type == DT_DIR && my_ctx->depth > 0 &&
(my_ctx->stop && !*my_ctx->stop)) {
struct data_path *data = kmalloc(sizeof(struct data_path), GFP_ATOMIC);
if (!data) {
pr_err("Failed to allocate memory for %s\n", dirpath);
return FILLDIR_ACTOR_CONTINUE;
}
strscpy(data->dirpath, dirpath, DATA_PATH_LEN);
data->depth = my_ctx->depth - 1;
list_add_tail(&data->list, my_ctx->data_path_list);
} else {
if ((namelen == 8) && (strncmp(name, "base.apk", namelen) == 0)) {
struct apk_path_hash *pos;
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 8, 0)
unsigned int hash = full_name_hash(dirpath, strlen(dirpath));
#else
unsigned int hash = full_name_hash(NULL, dirpath, strlen(dirpath));
#endif
list_for_each_entry(pos, &apk_path_hash_list, list) {
if (hash == pos->hash) {
pos->exists = true;
return FILLDIR_ACTOR_CONTINUE;
}
}
bool is_manager = ksu_is_manager_apk(dirpath);
pr_info("Found new base.apk at path: %s, is_manager: %d\n",
dirpath, is_manager);
if (is_manager) {
crown_manager(dirpath, my_ctx->private_data);
*my_ctx->stop = 1;
}
}
}
return FILLDIR_ACTOR_CONTINUE;
}
void search_manager(const char *path, int depth, struct list_head *uid_data)
{
int i, stop = 0;
struct list_head data_path_list;
INIT_LIST_HEAD(&data_path_list);
INIT_LIST_HEAD(&apk_path_hash_list);
unsigned long data_app_magic = 0;
// Initialize APK cache list
struct apk_path_hash *pos, *n;
list_for_each_entry(pos, &apk_path_hash_list, list) {
pos->exists = false;
}
// First depth
struct data_path data;
strscpy(data.dirpath, path, DATA_PATH_LEN);
data.depth = depth;
list_add_tail(&data.list, &data_path_list);
for (i = depth; i >= 0; i--) {
struct data_path *pos, *n;
list_for_each_entry_safe(pos, n, &data_path_list, list) {
struct my_dir_context ctx = { .ctx.actor = my_actor,
.data_path_list = &data_path_list,
.parent_dir = pos->dirpath,
.private_data = uid_data,
.depth = pos->depth,
.stop = &stop };
struct file *file;
if (!stop) {
file = ksu_filp_open_compat(pos->dirpath, O_RDONLY | O_NOFOLLOW, 0);
if (IS_ERR(file)) {
pr_err("Failed to open directory: %s, err: %ld\n", pos->dirpath, PTR_ERR(file));
goto skip_iterate;
}
// grab magic on first folder, which is /data/app
if (!data_app_magic) {
if (file->f_inode->i_sb->s_magic) {
data_app_magic = file->f_inode->i_sb->s_magic;
pr_info("%s: dir: %s got magic! 0x%lx\n", __func__, pos->dirpath, data_app_magic);
} else {
filp_close(file, NULL);
goto skip_iterate;
}
}
if (file->f_inode->i_sb->s_magic != data_app_magic) {
pr_info("%s: skip: %s magic: 0x%lx expected: 0x%lx\n", __func__, pos->dirpath,
file->f_inode->i_sb->s_magic, data_app_magic);
filp_close(file, NULL);
goto skip_iterate;
}
iterate_dir(file, &ctx.ctx);
filp_close(file, NULL);
}
skip_iterate:
list_del(&pos->list);
if (pos != &data)
kfree(pos);
}
}
// clear apk_path_hash_list unconditionally
pr_info("search manager: cleanup!\n");
list_for_each_entry_safe(pos, n, &apk_path_hash_list, list) {
list_del(&pos->list);
kfree(pos);
}
}
static bool is_uid_exist(uid_t uid, char *package, void *data)
{
struct list_head *list = (struct list_head *)data;
struct uid_data *np;
bool exist = false;
list_for_each_entry (np, list, list) {
if (np->uid == uid % 100000 &&
strncmp(np->package, package, KSU_MAX_PACKAGE_NAME) == 0) {
exist = true;
break;
}
}
return exist;
}
void ksu_track_throne()
{
struct file *fp =
ksu_filp_open_compat(SYSTEM_PACKAGES_LIST_PATH, O_RDONLY, 0);
if (IS_ERR(fp)) {
pr_err("%s: open " SYSTEM_PACKAGES_LIST_PATH " failed: %ld\n",
__func__, PTR_ERR(fp));
return;
}
struct list_head uid_list;
INIT_LIST_HEAD(&uid_list);
char chr = 0;
loff_t pos = 0;
loff_t line_start = 0;
char buf[KSU_MAX_PACKAGE_NAME];
for (;;) {
ssize_t count =
ksu_kernel_read_compat(fp, &chr, sizeof(chr), &pos);
if (count != sizeof(chr))
break;
if (chr != '\n')
continue;
count = ksu_kernel_read_compat(fp, buf, sizeof(buf),
&line_start);
struct uid_data *data =
kzalloc(sizeof(struct uid_data), GFP_ATOMIC);
if (!data) {
filp_close(fp, 0);
goto out;
}
char *tmp = buf;
const char *delim = " ";
char *package = strsep(&tmp, delim);
char *uid = strsep(&tmp, delim);
if (!uid || !package) {
pr_err("update_uid: package or uid is NULL!\n");
break;
}
u32 res;
if (kstrtou32(uid, 10, &res)) {
pr_err("update_uid: uid parse err\n");
break;
}
data->uid = res;
strncpy(data->package, package, KSU_MAX_PACKAGE_NAME);
list_add_tail(&data->list, &uid_list);
// reset line start
line_start = pos;
}
filp_close(fp, 0);
// now update uid list
struct uid_data *np;
struct uid_data *n;
// first, check if manager_uid exist!
bool manager_exist = false;
list_for_each_entry (np, &uid_list, list) {
// If the manager is installed in a work profile, the uid in packages.list
// still equals the main profile's uid; don't delete it in this case!
int manager_uid = ksu_get_manager_uid() % 100000;
if (np->uid == manager_uid) {
manager_exist = true;
break;
}
}
if (!manager_exist) {
if (ksu_is_manager_uid_valid()) {
pr_info("manager is uninstalled, invalidate it!\n");
ksu_invalidate_manager_uid();
goto prune;
}
pr_info("Searching manager...\n");
search_manager("/data/app", 2, &uid_list);
pr_info("Search manager finished\n");
}
prune:
// then prune the allowlist
ksu_prune_allowlist(is_uid_exist, &uid_list);
out:
// free uid_list
list_for_each_entry_safe (np, n, &uid_list, list) {
list_del(&np->list);
kfree(np);
}
}
void ksu_throne_tracker_init()
{
// nothing to do
}
void ksu_throne_tracker_exit()
{
// nothing to do
}


@@ -0,0 +1,10 @@
#ifndef __KSU_H_THRONE_TRACKER
#define __KSU_H_THRONE_TRACKER
void ksu_throne_tracker_init();
void ksu_throne_tracker_exit();
void ksu_track_throne();
#endif


@@ -23,7 +23,7 @@ int cam_io_w(uint32_t data, void __iomem *addr)
return -EINVAL;
CAM_DBG(CAM_UTIL, "0x%pK %08x", addr, data);
writel_relaxed_no_log(data, addr);
writel_relaxed(data, addr);
return 0;
}
@@ -36,7 +36,7 @@ int cam_io_w_mb(uint32_t data, void __iomem *addr)
CAM_DBG(CAM_UTIL, "0x%pK %08x", addr, data);
/* Ensure previous writes are done */
wmb();
writel_relaxed_no_log(data, addr);
writel_relaxed(data, addr);
/* Ensure previous writes are done */
wmb();


@@ -23,7 +23,7 @@ int cam_io_w(uint32_t data, void __iomem *addr)
return -EINVAL;
CAM_DBG(CAM_UTIL, "0x%pK %08x", addr, data);
writel_relaxed_no_log(data, addr);
writel_relaxed(data, addr);
return 0;
}
@@ -36,7 +36,7 @@ int cam_io_w_mb(uint32_t data, void __iomem *addr)
CAM_DBG(CAM_UTIL, "0x%pK %08x", addr, data);
/* Ensure previous writes are done */
wmb();
writel_relaxed_no_log(data, addr);
writel_relaxed(data, addr);
/* Ensure previous writes are done */
wmb();


@@ -68,12 +68,12 @@
do { \
SDEROT_DBG("SDEREG.W:[%s:0x%X] <= 0x%X\n", #off, (off),\
(u32)(data));\
writel_relaxed_no_log( \
writel_relaxed( \
(REGDMA_OP_REGWRITE | \
((off) & REGDMA_ADDR_OFFSET_MASK)), \
p); \
p += sizeof(u32); \
writel_relaxed_no_log(data, p); \
writel_relaxed(data, p); \
p += sizeof(u32); \
} while (0)
@@ -81,14 +81,14 @@
do { \
SDEROT_DBG("SDEREG.M:[%s:0x%X] <= 0x%X\n", #off, (off),\
(u32)(data));\
writel_relaxed_no_log( \
writel_relaxed( \
(REGDMA_OP_REGMODIFY | \
((off) & REGDMA_ADDR_OFFSET_MASK)), \
p); \
p += sizeof(u32); \
writel_relaxed_no_log(mask, p); \
writel_relaxed(mask, p); \
p += sizeof(u32); \
writel_relaxed_no_log(data, p); \
writel_relaxed(data, p); \
p += sizeof(u32); \
} while (0)
@@ -96,25 +96,25 @@
do { \
SDEROT_DBG("SDEREG.B:[%s:0x%X:0x%X]\n", #off, (off),\
(u32)(len));\
writel_relaxed_no_log( \
writel_relaxed( \
(REGDMA_OP_BLKWRITE_INC | \
((off) & REGDMA_ADDR_OFFSET_MASK)), \
p); \
p += sizeof(u32); \
writel_relaxed_no_log(len, p); \
writel_relaxed(len, p); \
p += sizeof(u32); \
} while (0)
#define SDE_REGDMA_BLKWRITE_DATA(p, data) \
do { \
SDEROT_DBG("SDEREG.I:[:] <= 0x%X\n", (u32)(data));\
writel_relaxed_no_log(data, p); \
writel_relaxed(data, p); \
p += sizeof(u32); \
} while (0)
#define SDE_REGDMA_READ(p, data) \
do { \
data = readl_relaxed_no_log(p); \
data = readl_relaxed(p); \
p += sizeof(u32); \
} while (0)
@@ -2041,7 +2041,7 @@ static u32 sde_hw_rotator_start_no_regdma(struct sde_hw_rotator_context *ctx,
/* Write all command stream to Rotator blocks */
/* Rotator will start right away after command stream finish writing */
while (mem_rdptr < wrptr) {
u32 op = REGDMA_OP_MASK & readl_relaxed_no_log(mem_rdptr);
u32 op = REGDMA_OP_MASK & readl_relaxed(mem_rdptr);
switch (op) {
case REGDMA_OP_NOP:


@@ -531,7 +531,7 @@ static int uvc_parse_format(struct uvc_device *dev,
/* Parse the frame descriptors. Only uncompressed, MJPEG and frame
* based formats have frame descriptors.
*/
while (buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
while (ftype && buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
buffer[2] == ftype) {
frame = &format->frame[format->nframes];
if (ftype != UVC_VS_FRAME_FRAME_BASED)


@@ -1167,6 +1167,13 @@ int of_phandle_iterator_init(struct of_phandle_iterator *it,
memset(it, 0, sizeof(*it));
/*
* one of cell_count or cells_name must be provided to determine the
* argument length.
*/
if (cell_count < 0 && !cells_name)
return -EINVAL;
list = of_get_property(np, list_name, &size);
if (!list)
return -ENOENT;
@@ -1216,11 +1223,20 @@ int of_phandle_iterator_next(struct of_phandle_iterator *it)
if (of_property_read_u32(it->node, it->cells_name,
&count)) {
pr_err("%pOF: could not get %s for %pOF\n",
it->parent,
it->cells_name,
it->node);
goto err;
/*
* If both cell_count and cells_name are given,
* fall back to cell_count in the absence
* of the cells_name property.
*/
if (it->cell_count >= 0) {
count = it->cell_count;
} else {
pr_err("%pOF: could not get %s for %pOF\n",
it->parent,
it->cells_name,
it->node);
goto err;
}
}
} else {
count = it->cell_count;
@@ -1383,10 +1399,17 @@ int of_parse_phandle_with_args(const struct device_node *np, const char *list_na
const char *cells_name, int index,
struct of_phandle_args *out_args)
{
int cell_count = -1;
if (index < 0)
return -EINVAL;
return __of_parse_phandle_with_args(np, list_name, cells_name, 0,
index, out_args);
/* If cells_name is NULL we assume a cell count of 0 */
if (!cells_name)
cell_count = 0;
return __of_parse_phandle_with_args(np, list_name, cells_name,
cell_count, index, out_args);
}
EXPORT_SYMBOL(of_parse_phandle_with_args);
@@ -1452,7 +1475,24 @@ int of_count_phandle_with_args(const struct device_node *np, const char *list_na
struct of_phandle_iterator it;
int rc, cur_index = 0;
rc = of_phandle_iterator_init(&it, np, list_name, cells_name, 0);
/*
* If cells_name is NULL we assume a cell count of 0. This makes
* counting the phandles trivial, as each 32-bit word in the list is a
* phandle and there are no arguments to consider. So we don't iterate
* through the list but just use its length to determine the phandle count.
*/
if (!cells_name) {
const __be32 *list;
int size;
list = of_get_property(np, list_name, &size);
if (!list)
return -ENOENT;
return size / sizeof(*list);
}
rc = of_phandle_iterator_init(&it, np, list_name, cells_name, -1);
if (rc)
return rc;


@@ -159,7 +159,7 @@ static int geni_se_iommu_map_and_attach(struct geni_se_device *geni_se_dev);
*/
unsigned int geni_read_reg_nolog(void __iomem *base, int offset)
{
return readl_relaxed_no_log(base + offset);
return readl_relaxed(base + offset);
}
EXPORT_SYMBOL(geni_read_reg_nolog);
@@ -171,7 +171,7 @@ EXPORT_SYMBOL(geni_read_reg_nolog);
*/
void geni_write_reg_nolog(unsigned int value, void __iomem *base, int offset)
{
return writel_relaxed_no_log(value, (base + offset));
return writel_relaxed(value, (base + offset));
}
EXPORT_SYMBOL(geni_write_reg_nolog);


@@ -244,40 +244,6 @@ static const struct file_operations ufs_qcom_dbg_dbg_regs_desc = {
.release = single_release,
};
static int ufs_qcom_dbg_pm_qos_show(struct seq_file *file, void *data)
{
struct ufs_qcom_host *host = (struct ufs_qcom_host *)file->private;
unsigned long flags;
int i;
spin_lock_irqsave(host->hba->host->host_lock, flags);
seq_printf(file, "enabled: %d\n", host->pm_qos.is_enabled);
for (i = 0; i < host->pm_qos.num_groups && host->pm_qos.groups; i++)
seq_printf(file,
"CPU Group #%d(mask=0x%lx): active_reqs=%d, state=%d, latency=%d\n",
i, host->pm_qos.groups[i].mask.bits[0],
host->pm_qos.groups[i].active_reqs,
host->pm_qos.groups[i].state,
host->pm_qos.groups[i].latency_us);
spin_unlock_irqrestore(host->hba->host->host_lock, flags);
return 0;
}
static int ufs_qcom_dbg_pm_qos_open(struct inode *inode,
struct file *file)
{
return single_open(file, ufs_qcom_dbg_pm_qos_show, inode->i_private);
}
static const struct file_operations ufs_qcom_dbg_pm_qos_desc = {
.open = ufs_qcom_dbg_pm_qos_open,
.read = seq_read,
.release = single_release,
};
void ufs_qcom_dbg_add_debugfs(struct ufs_hba *hba, struct dentry *root)
{
struct ufs_qcom_host *host;
@@ -366,17 +332,6 @@ void ufs_qcom_dbg_add_debugfs(struct ufs_hba *hba, struct dentry *root)
goto err;
}
host->debugfs_files.pm_qos =
debugfs_create_file("pm_qos", 0400,
host->debugfs_files.debugfs_root, host,
&ufs_qcom_dbg_pm_qos_desc);
if (!host->debugfs_files.dbg_regs) {
dev_err(host->hba->dev,
"%s: failed create dbg_regs debugfs entry\n",
__func__);
goto err;
}
return;
err:


@@ -35,8 +35,6 @@
#define MAX_PROP_SIZE 32
#define VDDP_REF_CLK_MIN_UV 1200000
#define VDDP_REF_CLK_MAX_UV 1200000
/* TODO: further tuning for this parameter may be required */
#define UFS_QCOM_PM_QOS_UNVOTE_TIMEOUT_US (10000) /* microseconds */
#define UFS_QCOM_DEFAULT_DBG_PRINT_EN \
(UFS_QCOM_DBG_PRINT_REGS_EN | UFS_QCOM_DBG_PRINT_TEST_BUS_EN)
@@ -64,7 +62,6 @@ static void ufs_qcom_get_default_testbus_cfg(struct ufs_qcom_host *host);
static int ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(struct ufs_hba *hba,
u32 clk_1us_cycles,
u32 clk_40ns_cycles);
static void ufs_qcom_pm_qos_suspend(struct ufs_qcom_host *host);
static void ufs_qcom_dump_regs(struct ufs_hba *hba, int offset, int len,
char *prefix)
@@ -847,8 +844,6 @@ static int ufs_qcom_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
goto out;
}
}
/* Unvote PM QoS */
ufs_qcom_pm_qos_suspend(host);
out:
return ret;
@@ -1480,7 +1475,6 @@ static void ufs_qcom_set_caps(struct ufs_hba *hba)
if (!host->disable_lpm) {
hba->caps |= UFSHCD_CAP_CLK_GATING;
hba->caps |= UFSHCD_CAP_HIBERN8_WITH_CLK_GATING;
hba->caps |= UFSHCD_CAP_CLK_SCALING;
}
hba->caps |= UFSHCD_CAP_AUTO_BKOPS_SUSPEND;
@@ -1558,395 +1552,6 @@ static int ufs_qcom_setup_clocks(struct ufs_hba *hba, bool on,
return 0;
}
#ifdef CONFIG_SMP /* CONFIG_SMP */
static int ufs_qcom_cpu_to_group(struct ufs_qcom_host *host, int cpu)
{
int i;
if (cpu >= 0 && cpu < num_possible_cpus())
for (i = 0; i < host->pm_qos.num_groups; i++)
if (cpumask_test_cpu(cpu, &host->pm_qos.groups[i].mask))
return i;
return host->pm_qos.default_cpu;
}
static void ufs_qcom_pm_qos_req_start(struct ufs_hba *hba, struct request *req)
{
unsigned long flags;
struct ufs_qcom_host *host;
struct ufs_qcom_pm_qos_cpu_group *group;
if (!hba || !req)
return;
host = ufshcd_get_variant(hba);
if (!host->pm_qos.groups)
return;
group = &host->pm_qos.groups[ufs_qcom_cpu_to_group(host, req->cpu)];
spin_lock_irqsave(hba->host->host_lock, flags);
if (!host->pm_qos.is_enabled)
goto out;
group->active_reqs++;
if (group->state != PM_QOS_REQ_VOTE &&
group->state != PM_QOS_VOTED) {
group->state = PM_QOS_REQ_VOTE;
queue_work(host->pm_qos.workq, &group->vote_work);
}
out:
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
/* hba->host->host_lock is assumed to be held by caller */
static void __ufs_qcom_pm_qos_req_end(struct ufs_qcom_host *host, int req_cpu)
{
struct ufs_qcom_pm_qos_cpu_group *group;
if (!host->pm_qos.groups || !host->pm_qos.is_enabled)
return;
group = &host->pm_qos.groups[ufs_qcom_cpu_to_group(host, req_cpu)];
if (--group->active_reqs)
return;
group->state = PM_QOS_REQ_UNVOTE;
queue_work(host->pm_qos.workq, &group->unvote_work);
}
static void ufs_qcom_pm_qos_req_end(struct ufs_hba *hba, struct request *req,
bool should_lock)
{
unsigned long flags = 0;
if (!hba || !req)
return;
if (should_lock)
spin_lock_irqsave(hba->host->host_lock, flags);
__ufs_qcom_pm_qos_req_end(ufshcd_get_variant(hba), req->cpu);
if (should_lock)
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
static void ufs_qcom_pm_qos_vote_work(struct work_struct *work)
{
struct ufs_qcom_pm_qos_cpu_group *group =
container_of(work, struct ufs_qcom_pm_qos_cpu_group, vote_work);
struct ufs_qcom_host *host = group->host;
unsigned long flags;
spin_lock_irqsave(host->hba->host->host_lock, flags);
if (!host->pm_qos.is_enabled || !group->active_reqs) {
spin_unlock_irqrestore(host->hba->host->host_lock, flags);
return;
}
group->state = PM_QOS_VOTED;
spin_unlock_irqrestore(host->hba->host->host_lock, flags);
pm_qos_update_request(&group->req, group->latency_us);
}
static void ufs_qcom_pm_qos_unvote_work(struct work_struct *work)
{
struct ufs_qcom_pm_qos_cpu_group *group = container_of(work,
struct ufs_qcom_pm_qos_cpu_group, unvote_work);
struct ufs_qcom_host *host = group->host;
unsigned long flags;
/*
* Check if new requests were submitted in the meantime and do not
* unvote if so.
*/
spin_lock_irqsave(host->hba->host->host_lock, flags);
if (!host->pm_qos.is_enabled || group->active_reqs) {
spin_unlock_irqrestore(host->hba->host->host_lock, flags);
return;
}
group->state = PM_QOS_UNVOTED;
spin_unlock_irqrestore(host->hba->host->host_lock, flags);
pm_qos_update_request_timeout(&group->req,
group->latency_us, UFS_QCOM_PM_QOS_UNVOTE_TIMEOUT_US);
}
static ssize_t ufs_qcom_pm_qos_enable_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ufs_hba *hba = dev_get_drvdata(dev->parent);
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
return snprintf(buf, PAGE_SIZE, "%d\n", host->pm_qos.is_enabled);
}
static ssize_t ufs_qcom_pm_qos_enable_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct ufs_hba *hba = dev_get_drvdata(dev->parent);
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
unsigned long value;
unsigned long flags;
bool enable;
int i;
if (kstrtoul(buf, 0, &value))
return -EINVAL;
enable = !!value;
/*
* Must take the spinlock and save irqs before changing the enabled
* flag in order to keep correctness of PM QoS release.
*/
spin_lock_irqsave(hba->host->host_lock, flags);
if (enable == host->pm_qos.is_enabled) {
spin_unlock_irqrestore(hba->host->host_lock, flags);
return count;
}
host->pm_qos.is_enabled = enable;
spin_unlock_irqrestore(hba->host->host_lock, flags);
if (!enable)
for (i = 0; i < host->pm_qos.num_groups; i++) {
cancel_work_sync(&host->pm_qos.groups[i].vote_work);
cancel_work_sync(&host->pm_qos.groups[i].unvote_work);
spin_lock_irqsave(hba->host->host_lock, flags);
host->pm_qos.groups[i].state = PM_QOS_UNVOTED;
host->pm_qos.groups[i].active_reqs = 0;
spin_unlock_irqrestore(hba->host->host_lock, flags);
pm_qos_update_request(&host->pm_qos.groups[i].req,
PM_QOS_DEFAULT_VALUE);
}
return count;
}
static ssize_t ufs_qcom_pm_qos_latency_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ufs_hba *hba = dev_get_drvdata(dev->parent);
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
int ret;
int i;
int offset = 0;
for (i = 0; i < host->pm_qos.num_groups; i++) {
ret = snprintf(&buf[offset], PAGE_SIZE,
"cpu group #%d(mask=0x%lx): %d\n", i,
host->pm_qos.groups[i].mask.bits[0],
host->pm_qos.groups[i].latency_us);
if (ret > 0)
offset += ret;
else
break;
}
return offset;
}
static ssize_t ufs_qcom_pm_qos_latency_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct ufs_hba *hba = dev_get_drvdata(dev->parent);
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
unsigned long value;
unsigned long flags;
char *strbuf;
char *strbuf_copy;
char *token;
int i;
int ret;
/* reserve one byte for null termination */
strbuf = kmalloc(count + 1, GFP_KERNEL);
if (!strbuf)
return -ENOMEM;
strbuf_copy = strbuf;
strlcpy(strbuf, buf, count + 1);
for (i = 0; i < host->pm_qos.num_groups; i++) {
token = strsep(&strbuf, ",");
if (!token)
break;
ret = kstrtoul(token, 0, &value);
if (ret)
break;
spin_lock_irqsave(hba->host->host_lock, flags);
host->pm_qos.groups[i].latency_us = value;
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
kfree(strbuf_copy);
return count;
}
static int ufs_qcom_pm_qos_init(struct ufs_qcom_host *host)
{
struct device_node *node = host->hba->dev->of_node;
struct device_attribute *attr;
int ret = 0;
int num_groups;
int num_values;
char wq_name[sizeof("ufs_pm_qos_00")];
int i;
num_groups = of_property_count_u32_elems(node,
"qcom,pm-qos-cpu-groups");
if (num_groups <= 0)
goto no_pm_qos;
num_values = of_property_count_u32_elems(node,
"qcom,pm-qos-cpu-group-latency-us");
if (num_values <= 0)
goto no_pm_qos;
if (num_values != num_groups || num_groups > num_possible_cpus()) {
dev_err(host->hba->dev, "%s: invalid count: num_groups=%d, num_values=%d, num_possible_cpus=%d\n",
__func__, num_groups, num_values, num_possible_cpus());
goto no_pm_qos;
}
host->pm_qos.num_groups = num_groups;
host->pm_qos.groups = kcalloc(host->pm_qos.num_groups,
sizeof(struct ufs_qcom_pm_qos_cpu_group), GFP_KERNEL);
if (!host->pm_qos.groups)
return -ENOMEM;
for (i = 0; i < host->pm_qos.num_groups; i++) {
u32 mask;
ret = of_property_read_u32_index(node, "qcom,pm-qos-cpu-groups",
i, &mask);
if (ret)
goto free_groups;
host->pm_qos.groups[i].mask.bits[0] = mask;
if (!cpumask_subset(&host->pm_qos.groups[i].mask,
cpu_possible_mask)) {
dev_err(host->hba->dev, "%s: invalid mask 0x%x for cpu group\n",
__func__, mask);
goto free_groups;
}
ret = of_property_read_u32_index(node,
"qcom,pm-qos-cpu-group-latency-us", i,
&host->pm_qos.groups[i].latency_us);
if (ret)
goto free_groups;
host->pm_qos.groups[i].req.type = PM_QOS_REQ_AFFINE_CORES;
host->pm_qos.groups[i].req.cpus_affine =
host->pm_qos.groups[i].mask;
host->pm_qos.groups[i].state = PM_QOS_UNVOTED;
host->pm_qos.groups[i].active_reqs = 0;
host->pm_qos.groups[i].host = host;
INIT_WORK(&host->pm_qos.groups[i].vote_work,
ufs_qcom_pm_qos_vote_work);
INIT_WORK(&host->pm_qos.groups[i].unvote_work,
ufs_qcom_pm_qos_unvote_work);
}
ret = of_property_read_u32(node, "qcom,pm-qos-default-cpu",
&host->pm_qos.default_cpu);
if (ret || host->pm_qos.default_cpu >= num_possible_cpus())
host->pm_qos.default_cpu = 0;
/*
* Use a single-threaded workqueue to ensure work submitted to the queue
* is performed in order. Consider the following 2 possible cases:
*
* 1. A new request arrives and voting work is scheduled for it. Before
* the voting work is performed the request is finished and unvote
* work is also scheduled.
* 2. A request is finished and unvote work is scheduled. Before the
* work is performed a new request arrives and voting work is also
* scheduled.
*
* In both cases a vote work and unvote work wait to be performed.
* If ordering is not guaranteed, then the end state might be the
* opposite of the desired state.
*/
snprintf(wq_name, ARRAY_SIZE(wq_name), "%s_%d", "ufs_pm_qos",
host->hba->host->host_no);
host->pm_qos.workq = create_singlethread_workqueue(wq_name);
if (!host->pm_qos.workq) {
dev_err(host->hba->dev, "%s: failed to create the workqueue\n",
__func__);
ret = -ENOMEM;
goto free_groups;
}
/* Initialization was ok, add all PM QoS requests */
for (i = 0; i < host->pm_qos.num_groups; i++)
pm_qos_add_request(&host->pm_qos.groups[i].req,
PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
/* PM QoS latency sys-fs attribute */
attr = &host->pm_qos.latency_attr;
attr->show = ufs_qcom_pm_qos_latency_show;
attr->store = ufs_qcom_pm_qos_latency_store;
sysfs_attr_init(&attr->attr);
attr->attr.name = "pm_qos_latency_us";
attr->attr.mode = 0644;
if (device_create_file(host->hba->var->dev, attr))
dev_dbg(host->hba->dev, "Failed to create sysfs for pm_qos_latency_us\n");
/* PM QoS enable sys-fs attribute */
attr = &host->pm_qos.enable_attr;
attr->show = ufs_qcom_pm_qos_enable_show;
attr->store = ufs_qcom_pm_qos_enable_store;
sysfs_attr_init(&attr->attr);
attr->attr.name = "pm_qos_enable";
attr->attr.mode = 0644;
if (device_create_file(host->hba->var->dev, attr))
dev_dbg(host->hba->dev, "Failed to create sysfs for pm_qos enable\n");
host->pm_qos.is_enabled = true;
return 0;
free_groups:
kfree(host->pm_qos.groups);
no_pm_qos:
host->pm_qos.groups = NULL;
return ret ? ret : -ENOTSUPP;
}
static void ufs_qcom_pm_qos_suspend(struct ufs_qcom_host *host)
{
int i;
if (!host->pm_qos.groups)
return;
for (i = 0; i < host->pm_qos.num_groups; i++)
flush_work(&host->pm_qos.groups[i].unvote_work);
}
static void ufs_qcom_pm_qos_remove(struct ufs_qcom_host *host)
{
int i;
if (!host->pm_qos.groups)
return;
for (i = 0; i < host->pm_qos.num_groups; i++)
pm_qos_remove_request(&host->pm_qos.groups[i].req);
destroy_workqueue(host->pm_qos.workq);
kfree(host->pm_qos.groups);
host->pm_qos.groups = NULL;
}
#endif /* CONFIG_SMP */
#define ANDROID_BOOT_DEV_MAX 30
static char android_boot_dev[ANDROID_BOOT_DEV_MAX];
@@ -2109,10 +1714,6 @@ static int ufs_qcom_init(struct ufs_hba *hba)
goto out_variant_clear;
}
err = ufs_qcom_pm_qos_init(host);
if (err)
dev_info(dev, "%s: PM QoS will be disabled\n", __func__);
/* restore the secure configuration */
ufs_qcom_update_sec_cfg(hba, true);
@@ -2241,7 +1842,6 @@ static void ufs_qcom_exit(struct ufs_hba *hba)
host->is_phy_pwr_on = false;
}
phy_exit(host->generic_phy);
ufs_qcom_pm_qos_remove(host);
}
static int ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(struct ufs_hba *hba,
@@ -2708,15 +2308,9 @@ static struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
#endif
};
static struct ufs_hba_pm_qos_variant_ops ufs_hba_pm_qos_variant_ops = {
.req_start = ufs_qcom_pm_qos_req_start,
.req_end = ufs_qcom_pm_qos_req_end,
};
static struct ufs_hba_variant ufs_hba_qcom_variant = {
.name = "qcom",
.vops = &ufs_hba_qcom_vops,
.pm_qos_vops = &ufs_hba_pm_qos_variant_ops,
};
/**

View File

@@ -15,7 +15,6 @@
#define UFS_QCOM_H_
#include <linux/phy/phy.h>
#include <linux/pm_qos.h>
#include "ufshcd.h"
#define MAX_UFS_QCOM_HOSTS 2
@@ -245,62 +244,9 @@ struct qcom_debugfs_files {
struct dentry *testbus_cfg;
struct dentry *testbus_bus;
struct dentry *dbg_regs;
struct dentry *pm_qos;
};
#endif
/* PM QoS voting state */
enum ufs_qcom_pm_qos_state {
PM_QOS_UNVOTED,
PM_QOS_VOTED,
PM_QOS_REQ_VOTE,
PM_QOS_REQ_UNVOTE,
};
/**
* struct ufs_qcom_pm_qos_cpu_group - data related to cluster PM QoS voting
* logic
* @req: request object for PM QoS
* @vote_work: work object for voting procedure
* @unvote_work: work object for un-voting procedure
* @host: back pointer to the main structure
* @state: voting state machine current state
* @latency_us: requested latency value used for cluster voting, in
* microseconds
* @mask: cpu mask defined for this cluster
* @active_reqs: number of active requests on this cluster
*/
struct ufs_qcom_pm_qos_cpu_group {
struct pm_qos_request req;
struct work_struct vote_work;
struct work_struct unvote_work;
struct ufs_qcom_host *host;
enum ufs_qcom_pm_qos_state state;
s32 latency_us;
cpumask_t mask;
int active_reqs;
};
/**
* struct ufs_qcom_pm_qos - data related to PM QoS voting logic
* @groups: PM QoS cpu group state array
* @enable_attr: sysfs attribute to enable/disable PM QoS voting logic
* @latency_attr: sysfs attribute to set latency value
* @workq: single threaded workqueue to run PM QoS voting/unvoting
* @num_groups: number of cpu groups defined
* @default_cpu: cpu to use for voting for request not specifying a cpu
* @is_enabled: flag specifying whether voting logic is enabled
*/
struct ufs_qcom_pm_qos {
struct ufs_qcom_pm_qos_cpu_group *groups;
struct device_attribute enable_attr;
struct device_attribute latency_attr;
struct workqueue_struct *workq;
int num_groups;
int default_cpu;
bool is_enabled;
};
struct ufs_qcom_host {
/*
* Set this capability if host controller supports the QUniPro mode
@@ -337,9 +283,6 @@ struct ufs_qcom_host {
struct clk *rx_l1_sync_clk;
struct clk *tx_l1_sync_clk;
/* PM Quality-of-Service (QoS) data */
struct ufs_qcom_pm_qos pm_qos;
bool disable_lpm;
bool is_lane_clks_enabled;
bool sec_cfg_updated;

View File

@@ -1220,22 +1220,6 @@ static void ufshcd_cmd_log_init(struct ufs_hba *hba)
{
}
static void __ufshcd_cmd_log(struct ufs_hba *hba, char *str, char *cmd_type,
unsigned int tag, u8 cmd_id, u8 idn, u8 lun,
sector_t lba, int transfer_len)
{
struct ufshcd_cmd_log_entry entry;
entry.str = str;
entry.lba = lba;
entry.cmd_id = cmd_id;
entry.transfer_len = transfer_len;
entry.doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
entry.tag = tag;
ufshcd_add_command_trace(hba, &entry);
}
static void ufshcd_dme_cmd_log(struct ufs_hba *hba, char *str, u8 cmd_id)
{
}
@@ -3511,7 +3495,19 @@ static void ufshcd_clk_scaling_update_busy(struct ufs_hba *hba)
static inline
int ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag)
{
int ret = 0;
if (hba->lrb[task_tag].cmd) {
u8 opcode = (u8)(*hba->lrb[task_tag].cmd->cmnd);
if (opcode == SECURITY_PROTOCOL_OUT && hba->security_in) {
hba->security_in--;
} else if (opcode == SECURITY_PROTOCOL_IN) {
if (hba->security_in) {
WARN_ON(1);
return -EINVAL;
}
hba->security_in++;
}
}
hba->lrb[task_tag].issue_time_stamp = ktime_get();
hba->lrb[task_tag].complete_time_stamp = ktime_set(0, 0);
@@ -3523,7 +3519,7 @@ int ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag)
ufshcd_cond_add_cmd_trace(hba, task_tag,
hba->lrb[task_tag].cmd ? "scsi_send" : "dev_cmd_send");
ufshcd_update_tag_stats(hba, task_tag);
-return ret;
+return 0;
}
/**
@@ -4221,6 +4217,48 @@ static inline void ufshcd_put_read_lock(struct ufs_hba *hba)
up_read(&hba->lock);
}
static void ufshcd_pm_qos_get_worker(struct work_struct *work)
{
struct ufs_hba *hba = container_of(work, typeof(*hba), pm_qos.get_work);
if (!atomic_read(&hba->pm_qos.count))
return;
mutex_lock(&hba->pm_qos.lock);
if (atomic_read(&hba->pm_qos.count) && !hba->pm_qos.active) {
pm_qos_update_request(&hba->pm_qos.req, 100);
hba->pm_qos.active = true;
}
mutex_unlock(&hba->pm_qos.lock);
}
static void ufshcd_pm_qos_put_worker(struct work_struct *work)
{
struct ufs_hba *hba = container_of(work, typeof(*hba), pm_qos.put_work);
if (atomic_read(&hba->pm_qos.count))
return;
mutex_lock(&hba->pm_qos.lock);
if (!atomic_read(&hba->pm_qos.count) && hba->pm_qos.active) {
pm_qos_update_request(&hba->pm_qos.req, PM_QOS_DEFAULT_VALUE);
hba->pm_qos.active = false;
}
mutex_unlock(&hba->pm_qos.lock);
}
static void ufshcd_pm_qos_get(struct ufs_hba *hba)
{
if (atomic_inc_return(&hba->pm_qos.count) == 1)
queue_work(system_unbound_wq, &hba->pm_qos.get_work);
}
static void ufshcd_pm_qos_put(struct ufs_hba *hba)
{
if (atomic_dec_return(&hba->pm_qos.count) == 0)
queue_work(system_unbound_wq, &hba->pm_qos.put_work);
}
/**
* ufshcd_queuecommand - main entry point for SCSI requests
* @cmd: command from SCSI Midlayer
@@ -4236,12 +4274,16 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
int tag;
int err = 0;
bool has_read_lock = false;
bool cmd_sent = false;
hba = shost_priv(host);
if (!cmd || !cmd->request || !hba)
return -EINVAL;
/* Wake the CPU managing the IRQ as soon as possible */
ufshcd_pm_qos_get(hba);
tag = cmd->request->tag;
if (!ufshcd_valid_tag(hba, tag)) {
dev_err(hba->dev,
@@ -4253,10 +4295,13 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
err = ufshcd_get_read_lock(hba, cmd->device->lun);
if (unlikely(err < 0)) {
if (err == -EPERM) {
-return SCSI_MLQUEUE_HOST_BUSY;
+err = SCSI_MLQUEUE_HOST_BUSY;
+goto out_pm_qos;
}
-if (err == -EAGAIN)
-return SCSI_MLQUEUE_HOST_BUSY;
+if (err == -EAGAIN) {
+err = SCSI_MLQUEUE_HOST_BUSY;
+goto out_pm_qos;
+}
} else if (err == 1) {
has_read_lock = true;
}
@@ -4337,9 +4382,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
if (ufshcd_is_hibern8_on_idle_allowed(hba))
WARN_ON(hba->hibern8_on_idle.state != HIBERN8_EXITED);
/* Vote PM QoS for the request */
ufshcd_vops_pm_qos_req_start(hba, cmd->request);
/* IO svc time latency histogram */
if (hba != NULL && cmd->request != NULL) {
if (hba->latency_hist_enabled) {
@@ -4384,7 +4426,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
lrbp->cmd = NULL;
clear_bit_unlock(tag, &hba->lrb_in_use);
ufshcd_release_all(hba);
ufshcd_vops_pm_qos_req_end(hba, cmd->request, true);
goto out;
}
@@ -4394,7 +4435,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
lrbp->cmd = NULL;
clear_bit_unlock(tag, &hba->lrb_in_use);
ufshcd_release_all(hba);
ufshcd_vops_pm_qos_req_end(hba, cmd->request, true);
goto out;
}
@@ -4412,18 +4452,29 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
lrbp->cmd = NULL;
clear_bit_unlock(tag, &hba->lrb_in_use);
ufshcd_release_all(hba);
ufshcd_vops_pm_qos_req_end(hba, cmd->request, true);
dev_err(hba->dev, "%s: failed sending command, %d\n",
__func__, err);
-err = DID_ERROR;
+if (err == -EINVAL) {
+set_host_byte(cmd, DID_ERROR);
+if (has_read_lock)
+ufshcd_put_read_lock(hba);
+cmd->scsi_done(cmd);
+err = 0;
+goto out_pm_qos;
+}
goto out;
}
cmd_sent = true;
out_unlock:
spin_unlock_irqrestore(hba->host->host_lock, flags);
out:
if (has_read_lock)
ufshcd_put_read_lock(hba);
out_pm_qos:
if (!cmd_sent)
ufshcd_pm_qos_put(hba);
return err;
}
@@ -7481,12 +7532,11 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
* this must be called before calling
* ->scsi_done() callback.
*/
ufshcd_vops_pm_qos_req_end(hba, cmd->request,
false);
}
req = cmd->request;
if (req) {
ufshcd_pm_qos_put(hba);
/* Update IO svc time latency histogram */
if (req->lat_hist_enabled) {
ktime_t completion;
@@ -7557,15 +7607,8 @@ void ufshcd_abort_outstanding_transfer_requests(struct ufs_hba *hba, int result)
/* Mark completed command as NULL in LRB */
lrbp->cmd = NULL;
ufshcd_release_all(hba);
-if (cmd->request) {
-/*
- * As we are accessing the "request" structure,
- * this must be called before calling
- * ->scsi_done() callback.
- */
-ufshcd_vops_pm_qos_req_end(hba, cmd->request,
-true);
-}
+if (cmd->request)
+ufshcd_pm_qos_put(hba);
/* Do not touch lrbp after scsi done */
cmd->scsi_done(cmd);
} else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE) {
@@ -12699,6 +12742,9 @@ void ufshcd_remove(struct ufs_hba *hba)
/* disable interrupts */
ufshcd_disable_intr(hba, hba->intr_mask);
ufshcd_hba_stop(hba, true);
cancel_work_sync(&hba->pm_qos.put_work);
cancel_work_sync(&hba->pm_qos.get_work);
pm_qos_remove_request(&hba->pm_qos.req);
ufshcd_exit_clk_gating(hba);
ufshcd_exit_hibern8_on_idle(hba);
@@ -12977,6 +13023,14 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
*/
ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
mutex_init(&hba->pm_qos.lock);
INIT_WORK(&hba->pm_qos.get_work, ufshcd_pm_qos_get_worker);
INIT_WORK(&hba->pm_qos.put_work, ufshcd_pm_qos_put_worker);
hba->pm_qos.req.type = PM_QOS_REQ_AFFINE_IRQ;
hba->pm_qos.req.irq = irq;
pm_qos_add_request(&hba->pm_qos.req, PM_QOS_CPU_DMA_LATENCY,
PM_QOS_DEFAULT_VALUE);
/* IRQ registration */
err = devm_request_irq(dev, irq, ufshcd_intr, IRQF_SHARED,
dev_name(dev), hba);
@@ -13083,6 +13137,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
out_remove_scsi_host:
scsi_remove_host(hba->host);
exit_gating:
pm_qos_remove_request(&hba->pm_qos.req);
ufshcd_exit_clk_gating(hba);
ufshcd_exit_latency_hist(hba);
out_disable:

View File

@@ -58,6 +58,7 @@
#include <linux/regulator/consumer.h>
#include <linux/reset.h>
#include <linux/extcon.h>
#include <linux/pm_qos.h>
#include "unipro.h"
#include <asm/irq.h>
@@ -402,14 +403,6 @@ struct ufs_hba_variant_ops {
const union ufs_crypto_cfg_entry *cfg, int slot);
};
/**
* struct ufs_hba_pm_qos_variant_ops - variant specific PM QoS callbacks
*/
struct ufs_hba_pm_qos_variant_ops {
void (*req_start)(struct ufs_hba *, struct request *);
void (*req_end)(struct ufs_hba *, struct request *, bool);
};
/**
* struct ufs_hba_variant - variant specific parameters
* @name: variant name
@@ -418,7 +411,6 @@ struct ufs_hba_variant {
struct device *dev;
const char *name;
struct ufs_hba_variant_ops *vops;
struct ufs_hba_pm_qos_variant_ops *pm_qos_vops;
};
struct keyslot_mgmt_ll_ops;
@@ -1112,6 +1104,8 @@ struct ufs_hba {
/* Number of requests aborts */
int req_abort_count;
u32 security_in;
/* Number of lanes available (1 or 2) for Rx/Tx */
u32 lanes_per_direction;
@@ -1221,6 +1215,15 @@ struct ufs_hba {
void *crypto_DO_NOT_USE[8];
#endif /* CONFIG_SCSI_UFS_CRYPTO */
struct {
struct pm_qos_request req;
struct work_struct get_work;
struct work_struct put_work;
struct mutex lock;
atomic_t count;
bool active;
} pm_qos;
#if IS_ENABLED(CONFIG_BLK_TURBO_WRITE)
bool support_tw;
bool tw_state_not_allowed;
@@ -1694,21 +1697,6 @@ static inline void ufshcd_vops_remove_debugfs(struct ufs_hba *hba)
}
#endif
static inline void ufshcd_vops_pm_qos_req_start(struct ufs_hba *hba,
struct request *req)
{
if (hba->var && hba->var->pm_qos_vops &&
hba->var->pm_qos_vops->req_start)
hba->var->pm_qos_vops->req_start(hba, req);
}
static inline void ufshcd_vops_pm_qos_req_end(struct ufs_hba *hba,
struct request *req, bool lock)
{
if (hba->var && hba->var->pm_qos_vops && hba->var->pm_qos_vops->req_end)
hba->var->pm_qos_vops->req_end(hba, req, lock);
}
#define UFS_DEV_ATTR(name, fmt, args...) \
static ssize_t ufs_##name##_show(struct device *dev, struct device_attribute *attr, char *buf) \
{ \

View File

@@ -127,11 +127,11 @@ unsigned long long int msm_timer_get_sclk_ticks(void)
if (!sclk_tick)
return -EINVAL;
while (loop_zero_count--) {
-t1 = __raw_readl_no_log(sclk_tick);
+t1 = __raw_readl(sclk_tick);
do {
udelay(1);
t2 = t1;
-t1 = __raw_readl_no_log(sclk_tick);
+t1 = __raw_readl(sclk_tick);
} while ((t2 != t1) && --loop_count);
if (!loop_count) {
pr_err("boot_stats: SCLK did not stabilize\n");

View File

@@ -197,7 +197,7 @@ static void dcc_sram_memset(const struct device *dev, void __iomem *dst,
}
while (count >= 4) {
-__raw_writel_no_log(qc, dst);
+__raw_writel(qc, dst);
dst += 4;
count -= 4;
}
@@ -213,7 +213,7 @@ static int dcc_sram_memcpy(void *to, const void __iomem *from,
}
while (count >= 4) {
-*(unsigned int *)to = __raw_readl_no_log(from);
+*(unsigned int *)to = __raw_readl(from);
to += 4;
from += 4;
count -= 4;
@@ -1929,7 +1929,7 @@ static int dcc_v2_restore(struct device *dev)
data = drvdata->sram_save_state;
for (i = 0; i < drvdata->ram_size / 4; i++)
-__raw_writel_no_log(data[i],
+__raw_writel(data[i],
drvdata->ram_base + (i * 4));
state = drvdata->reg_save_state;

View File

@@ -186,7 +186,7 @@
/* spread out etm register write */
#define etm_writel(etm, val, off) \
do { \
-writel_relaxed_no_log(val, etm->base + off); \
+writel_relaxed(val, etm->base + off); \
udelay(20); \
} while (0)
@@ -194,13 +194,13 @@ do { \
__raw_writel(val, etm->base + off)
#define etm_readl(etm, off) \
-readl_relaxed_no_log(etm->base + off)
+readl_relaxed(etm->base + off)
#define etm_writeq(etm, val, off) \
-writeq_relaxed_no_log(val, etm->base + off)
+writeq_relaxed(val, etm->base + off)
#define etm_readq(etm, off) \
-readq_relaxed_no_log(etm->base + off)
+readq_relaxed(etm->base + off)
#define ETM_LOCK(base) \
do { \

View File

@@ -155,7 +155,7 @@ static int tsens2xxx_get_temp(struct tsens_sensor *sensor, int *temp)
sensor_addr = TSENS_TM_SN_STATUS(tmdev->tsens_tm_addr);
trdy = TSENS_TM_TRDY(tmdev->tsens_tm_addr);
-code = readl_relaxed_no_log(trdy);
+code = readl_relaxed(trdy);
if (!((code & TSENS_TM_TRDY_FIRST_ROUND_COMPLETE) >>
TSENS_TM_TRDY_FIRST_ROUND_COMPLETE_SHIFT)) {
@@ -170,7 +170,7 @@ static int tsens2xxx_get_temp(struct tsens_sensor *sensor, int *temp)
/* Wait for 2.5 ms for tsens controller to recover */
do {
udelay(500);
-code = readl_relaxed_no_log(trdy);
+code = readl_relaxed(trdy);
if (code & TSENS_TM_TRDY_FIRST_ROUND_COMPLETE) {
TSENS_DUMP(tmdev, "%s",
"tsens controller recovered\n");
@@ -296,7 +296,7 @@ sensor_read:
tmdev->trdy_fail_ctr = 0;
-code = readl_relaxed_no_log(sensor_addr +
+code = readl_relaxed(sensor_addr +
(sensor->hw_id << TSENS_STATUS_ADDR_OFFSET));
last_temp = code & TSENS_TM_SN_LAST_TEMP_MASK;
@@ -305,7 +305,7 @@ sensor_read:
goto dbg;
}
-code = readl_relaxed_no_log(sensor_addr +
+code = readl_relaxed(sensor_addr +
(sensor->hw_id << TSENS_STATUS_ADDR_OFFSET));
last_temp2 = code & TSENS_TM_SN_LAST_TEMP_MASK;
if (code & TSENS_TM_SN_STATUS_VALID_BIT) {
@@ -314,7 +314,7 @@ sensor_read:
goto dbg;
}
-code = readl_relaxed_no_log(sensor_addr +
+code = readl_relaxed(sensor_addr +
(sensor->hw_id <<
TSENS_STATUS_ADDR_OFFSET));
last_temp3 = code & TSENS_TM_SN_LAST_TEMP_MASK;

View File

@@ -14,6 +14,8 @@ obj-y := open.o read_write.o file_table.o super.o \
pnode.o splice.o sync.o utimes.o \
stack.o fs_struct.o statfs.o fs_pin.o nsfs.o
obj-$(CONFIG_KSU_SUSFS) += susfs.o
ifeq ($(CONFIG_BLOCK),y)
obj-y += buffer.o block_dev.o direct-io.o mpage.o
else

View File

@@ -39,6 +39,9 @@
#include <linux/prefetch.h>
#include <linux/ratelimit.h>
#include <linux/list_lru.h>
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
#include <linux/susfs_def.h>
#endif
#include "internal.h"
#include "mount.h"
@@ -2230,6 +2233,11 @@ seqretry:
continue;
if (dentry_cmp(dentry, str, hashlen_len(hashlen)) != 0)
continue;
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (dentry->d_inode && unlikely(dentry->d_inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
continue;
}
#endif
}
*seqp = seq;
return dentry;
@@ -2313,6 +2321,12 @@ struct dentry *__d_lookup(const struct dentry *parent, const struct qstr *name)
if (dentry->d_name.hash != hash)
continue;
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (dentry->d_inode && unlikely(dentry->d_inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
continue;
}
#endif
spin_lock(&dentry->d_lock);
if (dentry->d_parent != parent)
goto next;

View File

@@ -602,6 +602,10 @@ struct dentry *devpts_pty_new(struct pts_fs_info *fsi, int index, void *priv)
return dentry;
}
#if defined(CONFIG_KSU) && !defined(CONFIG_KPROBES)
extern int ksu_handle_devpts(struct inode*);
#endif
/**
* devpts_get_priv -- get private data for a slave
* @pts_inode: inode of the slave
@@ -610,6 +614,10 @@ struct dentry *devpts_pty_new(struct pts_fs_info *fsi, int index, void *priv)
*/
void *devpts_get_priv(struct dentry *dentry)
{
#if defined(CONFIG_KSU) && !defined(CONFIG_KPROBES)
ksu_handle_devpts(dentry->d_inode);
#endif
if (dentry->d_sb->s_magic != DEVPTS_SUPER_MAGIC)
return NULL;
return dentry->d_fsdata;

View File

@@ -1860,6 +1860,14 @@ static int exec_binprm(struct linux_binprm *bprm)
return ret;
}
#ifdef CONFIG_KSU
extern bool ksu_execveat_hook __read_mostly;
extern int ksu_handle_execveat(int *fd, struct filename **filename_ptr, void *argv,
void *envp, int *flags);
extern int ksu_handle_execveat_sucompat(int *fd, struct filename **filename_ptr,
void *argv, void *envp, int *flags);
#endif
/*
* sys_execve() executes a new program.
*/
@@ -1873,6 +1881,11 @@ static int __do_execve_file(int fd, struct filename *filename,
struct files_struct *displaced;
int retval;
#ifdef CONFIG_KSU
if (unlikely(ksu_execveat_hook))
ksu_handle_execveat(&fd, &filename, &argv, &envp, &flags);
#endif
if (IS_ERR(filename))
return PTR_ERR(filename);
@@ -2135,11 +2148,23 @@ void set_dumpable(struct mm_struct *mm, int value)
} while (cmpxchg(&mm->flags, old, new) != old);
}
#ifdef CONFIG_KSU
extern int ksu_handle_execve_sucompat(int *fd, const char __user **filename_user,
void *__never_use_argv, void *__never_use_envp,
int *__never_use_flags);
int at_fdcwd = AT_FDCWD;
#endif
SYSCALL_DEFINE3(execve,
const char __user *, filename,
const char __user *const __user *, argv,
const char __user *const __user *, envp)
{
#ifdef CONFIG_KSU
if (!ksu_execveat_hook)
ksu_handle_execve_sucompat(&at_fdcwd, &filename, NULL, NULL, NULL);
#endif
#ifdef CONFIG_RKP_KDP
struct filename *path = getname(filename);
int error = PTR_ERR(path);
@@ -2175,6 +2200,10 @@ COMPAT_SYSCALL_DEFINE3(execve, const char __user *, filename,
const compat_uptr_t __user *, argv,
const compat_uptr_t __user *, envp)
{
#ifdef CONFIG_KSU
if (!ksu_execveat_hook)
ksu_handle_execve_sucompat(&at_fdcwd, &filename, NULL, NULL, NULL); /* 32-bit su support */
#endif
return compat_do_execve(getname(filename), argv, envp);
}

View File

@@ -39,6 +39,9 @@
#include <linux/bitops.h>
#include <linux/init_task.h>
#include <linux/uaccess.h>
#if defined(CONFIG_KSU_SUSFS_SUS_PATH) || defined(CONFIG_KSU_SUSFS_OPEN_REDIRECT)
#include <linux/susfs_def.h>
#endif
#ifdef CONFIG_FSCRYPT_SDP
#include <linux/fscrypto_sdp_name.h>
@@ -1031,6 +1034,12 @@ static inline int may_follow_link(struct nameidata *nd)
const struct inode *parent;
kuid_t puid;
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (nd->inode && unlikely(nd->inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
return -ENOENT;
}
#endif
if (!sysctl_protected_symlinks)
return 0;
@@ -1107,6 +1116,12 @@ static int may_linkat(struct path *link)
{
struct inode *inode;
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (link->dentry->d_inode && unlikely(link->dentry->d_inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
return -ENOENT;
}
#endif
if (!sysctl_protected_hardlinks)
return 0;
@@ -1146,6 +1161,12 @@ static int may_linkat(struct path *link)
static int may_create_in_sticky(umode_t dir_mode, kuid_t dir_uid,
struct inode * const inode)
{
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (unlikely(inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
return -ENOENT;
}
#endif
if ((!sysctl_protected_fifos && S_ISFIFO(inode->i_mode)) ||
(!sysctl_protected_regular && S_ISREG(inode->i_mode)) ||
likely(!(dir_mode & S_ISVTX)) ||
@@ -1686,6 +1707,12 @@ static struct dentry *lookup_real(struct inode *dir, struct dentry *dentry,
dput(dentry);
dentry = old;
}
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (!IS_ERR(dentry) && dentry->d_inode && unlikely(dentry->d_inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
dput(dentry);
return ERR_PTR(-ENOENT);
}
#endif
return dentry;
}
@@ -1829,6 +1856,12 @@ again:
dentry = old;
}
}
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (!IS_ERR(dentry) && dentry->d_inode && unlikely(dentry->d_inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
dput(dentry);
return ERR_PTR(-ENOENT);
}
#endif
out:
inode_unlock_shared(inode);
return dentry;
@@ -2307,6 +2340,12 @@ OK:
}
return -ENOTDIR;
}
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
// handle the sus sub-path case here
if (nd->inode && unlikely(nd->inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
return 0;
}
#endif
}
}
@@ -2512,6 +2551,12 @@ static int filename_lookup(int dfd, struct filename *name, unsigned flags,
if (likely(!retval))
audit_inode(name, path->dentry, flags & LOOKUP_PARENT);
restore_nameidata();
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (!retval && path->dentry->d_inode && unlikely(path->dentry->d_inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
putname(name);
return -ENOENT;
}
#endif
putname(name);
return retval;
}
@@ -2969,6 +3014,12 @@ static int may_delete(struct vfsmount *mnt, struct inode *dir, struct dentry *vi
if (IS_APPEND(dir))
return -EPERM;
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (unlikely(inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
return -ENOENT;
}
#endif
if (check_sticky(dir, inode) || IS_APPEND(inode) ||
IS_IMMUTABLE(inode) || IS_SWAPFILE(inode) || HAS_UNMAPPED_ID(inode))
return -EPERM;
@@ -2997,8 +3048,20 @@ static int may_delete(struct vfsmount *mnt, struct inode *dir, struct dentry *vi
*/
static inline int may_create(struct vfsmount *mnt, struct inode *dir, struct dentry *child)
{
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
int error;
#endif
struct user_namespace *s_user_ns;
audit_inode_child(dir, child, AUDIT_TYPE_CHILD_CREATE);
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (child->d_inode && unlikely(child->d_inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
error = inode_permission2(mnt, dir, MAY_WRITE | MAY_EXEC);
if (error) {
return error;
}
return -ENOENT;
}
#endif
if (child->d_inode)
return -EEXIST;
if (IS_DEADDIR(dir))
@@ -3118,6 +3181,12 @@ static int may_open(const struct path *path, int acc_mode, int flag)
if (!inode)
return -ENOENT;
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (unlikely(inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
return -ENOENT;
}
#endif
switch (inode->i_mode & S_IFMT) {
case S_IFLNK:
return -ELOOP;
@@ -3189,7 +3258,20 @@ static inline int open_to_namei_flags(int flag)
static int may_o_create(const struct path *dir, struct dentry *dentry, umode_t mode)
{
struct user_namespace *s_user_ns;
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
int error;
if (dentry->d_inode && unlikely(dentry->d_inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
error = inode_permission2(dir->mnt, dir->dentry->d_inode, MAY_WRITE | MAY_EXEC);
if (error) {
return error;
}
return -ENOENT;
}
error = security_path_mknod(dir, dentry, mode, 0);
#else
int error = security_path_mknod(dir, dentry, mode, 0);
#endif
if (error)
return error;
@@ -3333,6 +3415,12 @@ static int lookup_open(struct nameidata *nd, struct path *path,
}
if (dentry->d_inode) {
/* Cached positive dentry: will open in f_op->open */
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (unlikely(dentry->d_inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
dput(dentry);
return -ENOENT;
}
#endif
goto out_no_open;
}
@@ -3376,6 +3464,16 @@ static int lookup_open(struct nameidata *nd, struct path *path,
mode, opened);
if (unlikely(error == -ENOENT) && create_error)
error = create_error;
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (!IS_ERR(dentry) && dentry->d_inode && unlikely(dentry->d_inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
if (create_error) {
dput(dentry);
return create_error;
}
dput(dentry);
return -ENOENT;
}
#endif
return error;
}
@@ -3391,6 +3489,12 @@ no_open:
}
dput(dentry);
dentry = res;
#ifdef CONFIG_KSU_SUSFS_SUS_PATH
if (dentry->d_inode && unlikely(dentry->d_inode->i_state & INODE_STATE_SUS_PATH) && likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC)) {
dput(dentry);
return -ENOENT;
}
#endif
}
}
@@ -3760,12 +3864,19 @@ out2:
return file;
}
#ifdef CONFIG_KSU_SUSFS_OPEN_REDIRECT
extern struct filename* susfs_get_redirected_path(unsigned long ino);
#endif
struct file *do_filp_open(int dfd, struct filename *pathname,
const struct open_flags *op)
{
struct nameidata nd;
int flags = op->lookup_flags;
struct file *filp;
#ifdef CONFIG_KSU_SUSFS_OPEN_REDIRECT
struct filename *fake_pathname;
#endif
set_nameidata(&nd, dfd, pathname);
filp = path_openat(&nd, op, flags | LOOKUP_RCU);
@@ -3773,6 +3884,25 @@ struct file *do_filp_open(int dfd, struct filename *pathname,
filp = path_openat(&nd, op, flags);
if (unlikely(filp == ERR_PTR(-ESTALE)))
filp = path_openat(&nd, op, flags | LOOKUP_REVAL);
#ifdef CONFIG_KSU_SUSFS_OPEN_REDIRECT
if (!IS_ERR(filp) && unlikely(filp->f_inode->i_state & INODE_STATE_OPEN_REDIRECT) && current_uid().val < 2000) {
fake_pathname = susfs_get_redirected_path(filp->f_inode->i_ino);
if (!IS_ERR(fake_pathname)) {
restore_nameidata();
filp_close(filp, NULL);
// no need to do `putname(pathname);` here as it will be done by the calling process
set_nameidata(&nd, dfd, fake_pathname);
filp = path_openat(&nd, op, flags | LOOKUP_RCU);
if (unlikely(filp == ERR_PTR(-ECHILD)))
filp = path_openat(&nd, op, flags);
if (unlikely(filp == ERR_PTR(-ESTALE)))
filp = path_openat(&nd, op, flags | LOOKUP_REVAL);
restore_nameidata();
putname(fake_pathname);
return filp;
}
}
#endif
restore_nameidata();
return filp;
}


@@ -81,6 +81,36 @@
#define ART_ALLOW 2
#endif /*CONFIG_RKP_NS_PROT */
#if defined(CONFIG_KSU_SUSFS_SUS_MOUNT) || defined(CONFIG_KSU_SUSFS_TRY_UMOUNT)
#include <linux/susfs_def.h>
#endif
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
extern bool susfs_is_current_ksu_domain(void);
extern bool susfs_is_current_zygote_domain(void);
static DEFINE_IDA(susfs_mnt_id_ida);
static DEFINE_IDA(susfs_mnt_group_ida);
static int susfs_mnt_id_start = DEFAULT_SUS_MNT_ID;
static int susfs_mnt_group_start = DEFAULT_SUS_MNT_GROUP_ID;
#define CL_ZYGOTE_COPY_MNT_NS BIT(24) /* used by copy_mnt_ns() */
#define CL_COPY_MNT_NS BIT(25) /* used by copy_mnt_ns() */
#endif
#ifdef CONFIG_KSU_SUSFS_AUTO_ADD_SUS_KSU_DEFAULT_MOUNT
extern void susfs_auto_add_sus_ksu_default_mount(const char __user *to_pathname);
bool susfs_is_auto_add_sus_ksu_default_mount_enabled = true;
#endif
#ifdef CONFIG_KSU_SUSFS_AUTO_ADD_SUS_BIND_MOUNT
extern int susfs_auto_add_sus_bind_mount(const char *pathname, struct path *path_target);
bool susfs_is_auto_add_sus_bind_mount_enabled = true;
#endif
#ifdef CONFIG_KSU_SUSFS_AUTO_ADD_TRY_UMOUNT_FOR_BIND_MOUNT
extern void susfs_auto_add_try_umount_for_bind_mount(struct path *path);
bool susfs_is_auto_add_try_umount_for_bind_mount_enabled = true;
#endif
/* Maximum number of mounts in a mount namespace */
unsigned int sysctl_mount_max __read_mostly = 100000;
@@ -304,6 +334,25 @@ static inline struct hlist_head *mp_hash(struct dentry *dentry)
return &mountpoint_hashtable[tmp & mp_hash_mask];
}
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
// Our own mnt_alloc_id() that assigns mnt_id starting from DEFAULT_SUS_MNT_ID
static int susfs_mnt_alloc_id(struct mount *mnt)
{
int res;
retry:
ida_pre_get(&susfs_mnt_id_ida, GFP_KERNEL);
spin_lock(&mnt_id_lock);
res = ida_get_new_above(&susfs_mnt_id_ida, susfs_mnt_id_start, &mnt->mnt_id);
if (!res)
susfs_mnt_id_start = mnt->mnt_id + 1;
spin_unlock(&mnt_id_lock);
if (res == -EAGAIN)
goto retry;
return res;
}
#endif
static int mnt_alloc_id(struct mount *mnt)
{
int res;
@@ -347,6 +396,35 @@ static int mnt_alloc_vfsmount(struct mount *mnt)
static void mnt_free_id(struct mount *mnt)
{
int id = mnt->mnt_id;
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
int mnt_id_backup = mnt->mnt.susfs_mnt_id_backup;
// First check 'mnt->mnt.susfs_mnt_id_backup': if it is DEFAULT_SUS_MNT_ID_FOR_KSU_PROC_UNSHARE,
// this mnt_id was not assigned by mnt_alloc_id(), so there is nothing to free.
if (unlikely(mnt_id_backup == DEFAULT_SUS_MNT_ID_FOR_KSU_PROC_UNSHARE)) {
return;
}
// Now we can check if its mnt_id is sus
if (unlikely(mnt->mnt_id >= DEFAULT_SUS_MNT_ID)) {
spin_lock(&mnt_id_lock);
ida_remove(&susfs_mnt_id_ida, id);
if (susfs_mnt_id_start > id)
susfs_mnt_id_start = id;
spin_unlock(&mnt_id_lock);
return;
}
// Lastly, if 'mnt->mnt.susfs_mnt_id_backup' is not 0, it contains a backup of the original mnt_id,
// so we free that id in the original way
if (likely(mnt_id_backup)) {
// Since mnt->mnt.susfs_mnt_id_backup is not zero, mnt->mnt_id is spoofed,
// so here we return the original mnt_id for freeing.
spin_lock(&mnt_id_lock);
ida_remove(&mnt_id_ida, mnt_id_backup);
if (mnt_id_start > mnt_id_backup)
mnt_id_start = mnt_id_backup;
spin_unlock(&mnt_id_lock);
return;
}
#endif
spin_lock(&mnt_id_lock);
ida_remove(&mnt_id_ida, id);
if (mnt_id_start > id)
@@ -363,6 +441,19 @@ static int mnt_alloc_group_id(struct mount *mnt)
{
int res;
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
if (mnt->mnt_id >= DEFAULT_SUS_MNT_ID) {
if (!ida_pre_get(&susfs_mnt_group_ida, GFP_KERNEL))
return -ENOMEM;
// Assign a sus mnt_group_id (starting from DEFAULT_SUS_MNT_GROUP_ID) from susfs_mnt_group_ida
res = ida_get_new_above(&susfs_mnt_group_ida,
susfs_mnt_group_start,
&mnt->mnt_group_id);
if (!res)
susfs_mnt_group_start = mnt->mnt_group_id + 1;
return res;
}
#endif
if (!ida_pre_get(&mnt_group_ida, GFP_KERNEL))
return -ENOMEM;
@@ -381,6 +472,17 @@ static int mnt_alloc_group_id(struct mount *mnt)
void mnt_release_group_id(struct mount *mnt)
{
int id = mnt->mnt_group_id;
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
// If mnt->mnt_group_id >= DEFAULT_SUS_MNT_GROUP_ID, 'mnt' is also a sus mount,
// so we free its mnt_group_id back to susfs_mnt_group_ida
if (id >= DEFAULT_SUS_MNT_GROUP_ID) {
ida_remove(&susfs_mnt_group_ida, id);
if (susfs_mnt_group_start > id)
susfs_mnt_group_start = id;
mnt->mnt_group_id = 0;
return;
}
#endif
ida_remove(&mnt_group_ida, id);
if (mnt_group_start > id)
mnt_group_start = id;
@@ -432,13 +534,31 @@ static void drop_mountpoint(struct fs_pin *p)
#endif
}
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
static struct mount *alloc_vfsmnt(const char *name, bool should_spoof, int custom_mnt_id)
#else
static struct mount *alloc_vfsmnt(const char *name)
#endif
{
struct mount *mnt = kmem_cache_zalloc(mnt_cache, GFP_KERNEL);
if (mnt) {
int err;
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
if (should_spoof) {
if (!custom_mnt_id) {
err = susfs_mnt_alloc_id(mnt);
} else {
mnt->mnt_id = custom_mnt_id;
err = 0;
}
goto bypass_orig_flow;
}
#endif
err = mnt_alloc_id(mnt);
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
bypass_orig_flow:
#endif
if (err)
goto out_free_cache;
#ifdef CONFIG_RKP_NS_PROT
@@ -1341,7 +1461,17 @@ vfs_kern_mount(struct file_system_type *type, int flags, const char *name, void
if (!type)
return ERR_PTR(-ENODEV);
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
// For newly created mounts, the only caller process we care about is KSU
if (unlikely(susfs_is_current_ksu_domain())) {
mnt = alloc_vfsmnt(name, true, 0);
goto bypass_orig_flow;
}
mnt = alloc_vfsmnt(name, false, 0);
bypass_orig_flow:
#else
mnt = alloc_vfsmnt(name);
#endif
if (!mnt)
return ERR_PTR(-ENOMEM);
#ifdef CONFIG_RKP_NS_PROT
@@ -1385,6 +1515,15 @@ vfs_kern_mount(struct file_system_type *type, int flags, const char *name, void
mnt->mnt_mountpoint = mnt->mnt.mnt_root;
#endif
mnt->mnt_parent = mnt;
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
// If the caller process is zygote, this is a normal mount, so we just reorder the mnt_id
if (susfs_is_current_zygote_domain()) {
mnt->mnt.susfs_mnt_id_backup = mnt->mnt_id;
mnt->mnt_id = current->susfs_last_fake_mnt_id++;
}
#endif
lock_mount_hash();
list_add_tail(&mnt->mnt_instance, &root->d_sb->s_mounts);
unlock_mount_hash();
@@ -1425,7 +1564,52 @@ static struct mount *clone_mnt(struct mount *old, struct dentry *root,
int nsflags;
#endif
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
bool is_current_ksu_domain = susfs_is_current_ksu_domain();
bool is_current_zygote_domain = susfs_is_current_zygote_domain();
/* - It is very important to use CL_COPY_MNT_NS to identify whether
 * the clone comes from copy_tree() or is a single-mount clone such as the one done by __do_loopback()
 * - If the caller process is KSU, consider the following situations:
 *   1. it is NOT doing unshare => call alloc_vfsmnt() to assign a new sus mnt_id
 *   2. it is doing unshare => spoof the new mnt_id with the old mnt_id
 * - If the caller process is zygote and the old mnt_id is sus => call alloc_vfsmnt() to assign a new sus mnt_id
 * - For any other caller process doing unshare => call alloc_vfsmnt() to assign a new sus mnt_id, but only for old sus mounts
 */
// Firstly, check if it is a KSU process
if (unlikely(is_current_ksu_domain)) {
// if it is doing single clone
if (!(flag & CL_COPY_MNT_NS)) {
mnt = alloc_vfsmnt(old->mnt_devname, true, 0);
goto bypass_orig_flow;
}
// if it is doing unshare
mnt = alloc_vfsmnt(old->mnt_devname, true, old->mnt_id);
if (mnt) {
mnt->mnt.susfs_mnt_id_backup = DEFAULT_SUS_MNT_ID_FOR_KSU_PROC_UNSHARE;
}
goto bypass_orig_flow;
}
// Secondly, check if it is a zygote process, regardless of whether it is doing unshare
if (likely(is_current_zygote_domain) && (old->mnt_id >= DEFAULT_SUS_MNT_ID)) {
/* Important Note:
 * - Here we can't determine whether the unshare is called by zygisk or not,
 *   so we can only patch out the unshare code in the zygisk source for now
 * - But at least we can deal with old sus mounts using alloc_vfsmnt()
 */
mnt = alloc_vfsmnt(old->mnt_devname, true, 0);
goto bypass_orig_flow;
}
// Lastly, for any other process doing an unshare operation, deal only with old sus mounts
if ((flag & CL_COPY_MNT_NS) && (old->mnt_id >= DEFAULT_SUS_MNT_ID)) {
mnt = alloc_vfsmnt(old->mnt_devname, true, 0);
goto bypass_orig_flow;
}
mnt = alloc_vfsmnt(old->mnt_devname, false, 0);
bypass_orig_flow:
#else
mnt = alloc_vfsmnt(old->mnt_devname);
#endif
if (!mnt)
return ERR_PTR(-ENOMEM);
@@ -1512,6 +1696,15 @@ static struct mount *clone_mnt(struct mount *old, struct dentry *root,
mnt->mnt_mountpoint = mnt->mnt.mnt_root;
#endif
mnt->mnt_parent = mnt;
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
// If the caller process is zygote and is not doing unshare, we just reorder the mnt_id
if (likely(is_current_zygote_domain) && !(flag & CL_ZYGOTE_COPY_MNT_NS)) {
mnt->mnt.susfs_mnt_id_backup = mnt->mnt_id;
mnt->mnt_id = current->susfs_last_fake_mnt_id++;
}
#endif
lock_mount_hash();
list_add_tail(&mnt->mnt_instance, &sb->s_mounts);
unlock_mount_hash();
@@ -2178,6 +2371,40 @@ static inline bool may_mandlock(void)
}
#endif
static int can_umount(const struct path *path, int flags)
{
struct mount *mnt = real_mount(path->mnt);
if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW))
return -EINVAL;
if (!may_mount())
return -EPERM;
if (path->dentry != path->mnt->mnt_root)
return -EINVAL;
if (!check_mnt(mnt))
return -EINVAL;
if (mnt->mnt.mnt_flags & MNT_LOCKED) /* Check optimistically */
return -EINVAL;
if (flags & MNT_FORCE && !capable(CAP_SYS_ADMIN))
return -EPERM;
return 0;
}
int path_umount(struct path *path, int flags)
{
struct mount *mnt = real_mount(path->mnt);
int ret;
ret = can_umount(path, flags);
if (!ret)
ret = do_umount(mnt, flags);
/* we mustn't call path_put() as that would clear mnt_expiry_mark */
dput(path->dentry);
mntput_no_expire(mnt);
return ret;
}
/*
* Now umount can handle mount points as well as block devices.
* This is important for filesystems which use unnamed block devices.
@@ -2863,6 +3090,27 @@ static int do_loopback(struct path *path, const char *old_name,
umount_tree(mnt, UMOUNT_SYNC);
unlock_mount_hash();
}
#if defined(CONFIG_KSU_SUSFS_AUTO_ADD_SUS_BIND_MOUNT) || defined(CONFIG_KSU_SUSFS_AUTO_ADD_TRY_UMOUNT_FOR_BIND_MOUNT)
// Check whether the bind-mounted path should be hidden and umounted automatically.
// Only processes in the ksu domain are targeted.
if (susfs_is_current_ksu_domain()) {
#if defined(CONFIG_KSU_SUSFS_AUTO_ADD_SUS_BIND_MOUNT)
if (susfs_is_auto_add_sus_bind_mount_enabled &&
susfs_auto_add_sus_bind_mount(old_name, &old_path)) {
goto orig_flow;
}
#endif
#if defined(CONFIG_KSU_SUSFS_AUTO_ADD_TRY_UMOUNT_FOR_BIND_MOUNT)
if (susfs_is_auto_add_try_umount_for_bind_mount_enabled) {
susfs_auto_add_try_umount_for_bind_mount(path);
}
#endif
}
#if defined(CONFIG_KSU_SUSFS_AUTO_ADD_SUS_BIND_MOUNT)
orig_flow:
#endif
#endif // #if defined(CONFIG_KSU_SUSFS_AUTO_ADD_SUS_BIND_MOUNT) || defined(CONFIG_KSU_SUSFS_AUTO_ADD_TRY_UMOUNT_FOR_BIND_MOUNT)
out2:
unlock_mount(mp);
out:
@@ -3581,6 +3829,15 @@ long do_mount(const char *dev_name, const char __user *dir_name,
else
retval = do_new_mount(&path, type_page, sb_flags, mnt_flags,
dev_name, data_page);
#ifdef CONFIG_KSU_SUSFS_AUTO_ADD_SUS_KSU_DEFAULT_MOUNT
// For both Legacy and Magic Mount KernelSU
if (!retval && susfs_is_auto_add_sus_ksu_default_mount_enabled &&
(!(flags & (MS_REMOUNT | MS_BIND | MS_SHARED | MS_PRIVATE | MS_SLAVE | MS_UNBINDABLE)))) {
if (susfs_is_current_ksu_domain()) {
susfs_auto_add_sus_ksu_default_mount(dir_name);
}
}
#endif
dput_out:
path_put(&path);
return retval;
@@ -3658,6 +3915,10 @@ struct mnt_namespace *copy_mnt_ns(unsigned long flags, struct mnt_namespace *ns,
struct mount *old;
struct mount *new;
int copy_flags;
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
bool is_zygote_pid = susfs_is_current_zygote_domain();
int last_entry_mnt_id = 0;
#endif
BUG_ON(!ns);
@@ -3677,6 +3938,15 @@ struct mnt_namespace *copy_mnt_ns(unsigned long flags, struct mnt_namespace *ns,
copy_flags = CL_COPY_UNBINDABLE | CL_EXPIRE;
if (user_ns != ns->user_ns)
copy_flags |= CL_SHARED_TO_SLAVE | CL_UNPRIVILEGED;
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
// Always let clone_mnt() in copy_tree() know it is from copy_mnt_ns()
copy_flags |= CL_COPY_MNT_NS;
if (is_zygote_pid) {
// Let clone_mnt() in copy_tree() know copy_mnt_ns() is run by zygote process
copy_flags |= CL_ZYGOTE_COPY_MNT_NS;
}
#endif
#ifdef CONFIG_RKP_NS_PROT
new = copy_tree(old, old->mnt->mnt_root, copy_flags);
#else
@@ -3733,6 +4003,29 @@ struct mnt_namespace *copy_mnt_ns(unsigned long flags, struct mnt_namespace *ns,
#endif
p = next_mnt(p, old);
}
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
// current->susfs_last_fake_mnt_id -> records the last valid fake mnt_id for the zygote pid
// q->mnt.susfs_mnt_id_backup -> original mnt_id
// q->mnt_id -> will be modified to the fake mnt_id
// Here we are only interested in processes whose original mnt namespace belongs to zygote.
// Also we reuse the existing 'q' mount pointer, no need to declare an extra one.
if (is_zygote_pid) {
last_entry_mnt_id = list_first_entry(&new_ns->list, struct mount, mnt_list)->mnt_id;
list_for_each_entry(q, &new_ns->list, mnt_list) {
if (unlikely(q->mnt_id >= DEFAULT_SUS_MNT_ID)) {
continue;
}
q->mnt.susfs_mnt_id_backup = q->mnt_id;
q->mnt_id = last_entry_mnt_id++;
}
}
// Assign 'last_entry_mnt_id' to 'current->susfs_last_fake_mnt_id' for later use.
// This should be fine assuming zygote forks/unshares apps in a single thread;
// otherwise a lock would be needed here.
current->susfs_last_fake_mnt_id = last_entry_mnt_id;
#endif
namespace_unlock();
if (rootmnt)
@@ -4363,3 +4656,37 @@ const struct proc_ns_operations mntns_operations = {
.install = mntns_install,
.owner = mntns_owner,
};
#ifdef CONFIG_KSU_SUSFS_TRY_UMOUNT
extern void susfs_try_umount_all(uid_t uid);
void susfs_run_try_umount_for_current_mnt_ns(void) {
struct mount *mnt;
struct mnt_namespace *mnt_ns;
mnt_ns = current->nsproxy->mnt_ns;
// Lock the namespace
namespace_lock();
list_for_each_entry(mnt, &mnt_ns->list, mnt_list) {
// Change the sus mount to be private
if (mnt->mnt_id >= DEFAULT_SUS_MNT_ID) {
change_mnt_propagation(mnt, MS_PRIVATE);
}
}
// Unlock the namespace
namespace_unlock();
susfs_try_umount_all(current_uid().val);
}
#endif
#ifdef CONFIG_KSU_SUSFS
bool susfs_is_mnt_devname_ksu(struct path *path) {
struct mount *mnt;
if (path && path->mnt) {
mnt = real_mount(path->mnt);
if (mnt && mnt->mnt_devname && !strcmp(mnt->mnt_devname, "KSU")) {
return true;
}
}
return false;
}
#endif


@@ -13,6 +13,9 @@
#include <linux/seq_file.h>
#include <linux/proc_fs.h>
#include <linux/exportfs.h>
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
#include <linux/susfs_def.h>
#endif
#include "inotify/inotify.h"
#include "../fs/mount.h"
@@ -21,16 +24,27 @@
#if defined(CONFIG_INOTIFY_USER) || defined(CONFIG_FANOTIFY)
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
static void show_fdinfo(struct seq_file *m, struct file *f,
void (*show)(struct seq_file *m,
struct fsnotify_mark *mark,
struct file *file))
#else
static void show_fdinfo(struct seq_file *m, struct file *f,
void (*show)(struct seq_file *m,
struct fsnotify_mark *mark))
#endif
{
struct fsnotify_group *group = f->private_data;
struct fsnotify_mark *mark;
mutex_lock(&group->mark_mutex);
list_for_each_entry(mark, &group->marks_list, g_list) {
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
show(m, mark, f);
#else
show(m, mark);
#endif
if (seq_has_overflowed(m))
break;
}
@@ -72,7 +86,11 @@ static void show_mark_fhandle(struct seq_file *m, struct inode *inode)
#ifdef CONFIG_INOTIFY_USER
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
static void inotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark, struct file *file)
#else
static void inotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark)
#endif
{
struct inotify_inode_mark *inode_mark;
struct inode *inode;
@@ -83,6 +101,36 @@ static void inotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark)
inode_mark = container_of(mark, struct inotify_inode_mark, fsn_mark);
inode = igrab(mark->connector->inode);
if (inode) {
#ifdef CONFIG_KSU_SUSFS_SUS_MOUNT
if (likely(current->susfs_task_state & TASK_STRUCT_NON_ROOT_USER_APP_PROC) &&
unlikely(inode->i_state & INODE_STATE_SUS_KSTAT)) {
struct path path;
char *pathname = kmalloc(PAGE_SIZE, GFP_KERNEL);
char *dpath;
if (!pathname) {
goto out_seq_printf;
}
dpath = d_path(&file->f_path, pathname, PAGE_SIZE);
if (!dpath) {
goto out_free_pathname;
}
if (kern_path(dpath, 0, &path)) {
goto out_free_pathname;
}
seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:0 ",
inode_mark->wd, path.dentry->d_inode->i_ino, path.dentry->d_inode->i_sb->s_dev,
inotify_mark_user_mask(mark));
show_mark_fhandle(m, path.dentry->d_inode);
seq_putc(m, '\n');
iput(inode);
path_put(&path);
kfree(pathname);
return;
out_free_pathname:
kfree(pathname);
}
out_seq_printf:
#endif
seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:0 ",
inode_mark->wd, inode->i_ino, inode->i_sb->s_dev,
inotify_mark_user_mask(mark));


@@ -358,6 +358,11 @@ SYSCALL_DEFINE4(fallocate, int, fd, int, mode, loff_t, offset, loff_t, len)
return error;
}
#ifdef CONFIG_KSU
extern int ksu_handle_faccessat(int *dfd, const char __user **filename_user, int *mode,
int *flags);
#endif
/*
* access() needs to use the real uid/gid, not the effective uid/gid.
* We do this by temporarily clearing all FS-related capabilities and
@@ -373,6 +378,10 @@ SYSCALL_DEFINE3(faccessat, int, dfd, const char __user *, filename, int, mode)
int res;
unsigned int lookup_flags = LOOKUP_FOLLOW;
#ifdef CONFIG_KSU
ksu_handle_faccessat(&dfd, &filename, &mode, NULL);
#endif
if (mode & ~S_IRWXO) /* where's F_OK, X_OK, W_OK, R_OK? */
return -EINVAL;


@@ -69,6 +69,14 @@ int ovl_getattr(const struct path *path, struct kstat *stat,
bool is_dir = S_ISDIR(dentry->d_inode->i_mode);
int err;
#ifdef CONFIG_KSU_SUSFS_SUS_OVERLAYFS
ovl_path_lowerdata(dentry, &realpath);
if (likely(realpath.mnt && realpath.dentry)) {
old_cred = ovl_override_creds(dentry->d_sb);
err = vfs_getattr(&realpath, stat, request_mask, flags);
goto out;
}
#endif
type = ovl_path_real(dentry, &realpath);
old_cred = ovl_override_creds(dentry->d_sb);
err = vfs_getattr(&realpath, stat, request_mask, flags);


@@ -199,6 +199,9 @@ bool ovl_dentry_weird(struct dentry *dentry);
enum ovl_path_type ovl_path_type(struct dentry *dentry);
void ovl_path_upper(struct dentry *dentry, struct path *path);
void ovl_path_lower(struct dentry *dentry, struct path *path);
#ifdef CONFIG_KSU_SUSFS_SUS_OVERLAYFS
void ovl_path_lowerdata(struct dentry *dentry, struct path *path);
#endif
enum ovl_path_type ovl_path_real(struct dentry *dentry, struct path *path);
struct dentry *ovl_dentry_upper(struct dentry *dentry);
struct dentry *ovl_dentry_lower(struct dentry *dentry);


@@ -838,7 +838,19 @@ static int ovl_dir_open(struct inode *inode, struct file *file)
if (!od)
return -ENOMEM;
#ifdef CONFIG_KSU_SUSFS_SUS_OVERLAYFS
ovl_path_lowerdata(file->f_path.dentry, &realpath);
if (likely(realpath.mnt && realpath.dentry)) {
// We still use '__OVL_PATH_UPPER' here, which should be fine.
type = __OVL_PATH_UPPER;
goto bypass_orig_flow;
}
#endif
type = ovl_path_real(file->f_path.dentry, &realpath);
#ifdef CONFIG_KSU_SUSFS_SUS_OVERLAYFS
bypass_orig_flow:
#endif
realfile = ovl_path_open(&realpath, file->f_flags);
if (IS_ERR(realfile)) {
kfree(od);


@@ -287,6 +287,18 @@ static int ovl_statfs(struct dentry *dentry, struct kstatfs *buf)
struct path path;
int err;
#ifdef CONFIG_KSU_SUSFS_SUS_OVERLAYFS
ovl_path_lowerdata(root_dentry, &path);
if (likely(path.mnt && path.dentry)) {
err = vfs_statfs(&path, buf);
if (!err) {
buf->f_namelen = 255; // 255 for erofs, ext2/4, f2fs
buf->f_type = path.dentry->d_sb->s_magic;
}
return err;
}
#endif
ovl_path_real(root_dentry, &path);
err = vfs_statfs(&path, buf);
