Merge branch 'android13-5.15' into 'android13-5.15-lts'

Catch up on the android13-5.15 branch changes now that the LTS merge has
completed. This consists of the following changes:

* ebdcd11edc ANDROID: ABI: Update symbols to unisoc whitelist
* c09d2c605e ANDROID: abi_gki_aarch64_qcom: Add rproc_set_firmware
* 00b1ba8b15 ANDROID: vendor_hooks: vendor hook for MM
* 6a33db6d06 UPSTREAM: net: cdc_ncm: Deal with too low values of dwNtbOutMaxSize
* b4ebde0fe3 UPSTREAM: mailbox: mailbox-test: fix a locking issue in mbox_test_message_write()
* 19fe9f6907 Revert "ANDROID: GKI: add vendor hooks to avoid unsupported usb device probing"
* f3284ea22b UPSTREAM: 9p/xen : Fix use after free bug in xen_9pfs_front_remove due to race condition
* 42fa58b9a3 UPSTREAM: net: qcom/emac: Fix use after free bug in emac_remove due to race condition
* afa948d5af ANDROID: GKI: add vendor hooks to avoid unsupported usb device probing
* 2f4e942037 BACKPORT: power: supply: bq24190: Fix use after free bug in bq24190_remove due to race condition
* ce2c66d2e2 UPSTREAM: mailbox: mailbox-test: Fix potential double-free in mbox_test_message_write()
* 3c816bcc11 UPSTREAM: ALSA: usb-audio: Split endpoint setups for hw_params and prepare
* 27903582a2 UPSTREAM: usb: gadget: uvc: queue empty isoc requests if no video buffer is available
* c27abae938 FROMGIT: pstore: Revert pmsg_lock back to a normal mutex
* be4040bed1 ANDROID: vendor_hook: Avoid clearing protect-flag before waking waiters
* f713281aa0 ANDROID: ABI: Add to QCOM symbols list
* a40bcba1c5 UPSTREAM: usb: gadget: f_fs: Add unbind event before functionfs_unbind
* df32918f7e ANDROID: GKI: Update symbols to symbol list
* 2dffee86ee ANDROID: block: export tracepoints
* 22a7f585d0 FROMGIT: usb: gadget: u_serial: Avoid spinlock recursion in __gs_console_push
* 3ff3fb3e75 ANDROID: GKI: Add symbols and update symbol list for Unisoc
* f51f079fe3 FROMGIT: usb: gadget: u_serial: Add null pointer check in gserial_suspend
* f62ba41ffa ANDROID: GKI: Update symbol list for sunxi
* b805b2f705 BACKPORT: mm: multi-gen LRU: retry pages written back while isolated
* b721f43a76 UPSTREAM: mm: multi-gen LRU: fix crash during cgroup migration
* 76ef696696 ANDROID: GKI: Revert "ANDROID: GKI: Enable HCTR2"
* bc73c4bb5d ANDROID: GKI: Update symbol list for mtk
* 8022ab8aa8 ANDROID: fix ABI breakage caused by per_cpu_pages
* dfc6b63877 ANDROID: fix ABI breakage caused by adding union type in struct page
* 2bf2b667d9 UPSTREAM: mm/page_alloc: replace local_lock with normal spinlock
* 3cce865cde UPSTREAM: mm/page_alloc: remotely drain per-cpu lists
* cf666fb569 BACKPORT: mm/page_alloc: protect PCP lists with a spinlock
* bd093f8791 UPSTREAM: mm/page_alloc: remove mistaken page == NULL check in rmqueue
* 30888d2792 BACKPORT: mm/page_alloc: split out buddy removal code from rmqueue into separate helper
* a1cab27001 BACKPORT: mm/page_alloc: add page->buddy_list and page->pcp_list
* 3a5551e6ca ANDROID: abi_gki_aarch64_qcom: Update symbol list
* 9c533fb707 ANDROID: gki_defconfig: enable CONFIG_SYN_COOKIES
* 81edb450dd ANDROID: update the .xml file based on previous LTS merge
*   d00dcb7d76 Merge "Merge tag 'android13-5.15.104_r00' into android13-5.15" into android13-5.15
|\
| * 23818c192b Merge tag 'android13-5.15.104_r00' into android13-5.15
* | 1247e4a9ca BACKPORT: FROMGIT: Multi-gen LRU: fix workingset accounting
* | 99e45d1651 ANDROID: ABI: Update symbols to unisoc whitelist
|/
* 9fdde2b21a ANDROID: remove CONFIG_NET_CLS_TCINDEX from gki_defconfig
* f4bcd63716 BACKPORT: net/sched: Retire tcindex classifier
* 75d202bb9b UPSTREAM: ext4: avoid a potential slab-out-of-bounds in ext4_group_desc_csum
* a9903644f0 ANDROID: ABI: Update allowed list for QCOM
* 0461430273 UPSTREAM: usb: dwc3: fix gadget mode suspend interrupt handler issue
* 8faa860f55 BACKPORT: usb: gadget: udc: Handle gadget_connect failure during bind operation
* 3a0a7c82a9 FROMGIT: usb: dwc3: gadget: Bail out in pullup if soft reset timeout happens
* 10d315f835 BACKPORT: mm: Multi-gen LRU: remove wait_event_killable()
* 2e5e23042f UPSTREAM: perf: fix perf_event_context->time
* de46338f53 UPSTREAM: perf/core: Fix perf_output_begin parameter is incorrectly invoked in perf_event_bpf_output
* dc031e19fa UPSTREAM: perf: Fix check before add_event_to_groups() in perf_group_detach()
* 9e92bfe8fd ANDROID: GKI: Update symbols to symbol list
* b5d2e9c99d ANDROID: vendor_hook: add hooks in dm_bufio.c
* ddfd56a6ad UPSTREAM: of: reserved_mem: Use proper binary prefix
* 7d6c6a1715 BACKPORT: of: reserved-mem: print out reserved-mem details during boot
* 5daddf0e06 BACKPORT: swiotlb: relocate PageHighMem test away from rmem_swiotlb_setup
* 8ccda1f683 UPSTREAM: ext4: fix invalid free tracking in ext4_xattr_move_to_block()
* 343808251d BACKPORT: FROMGIT: binder: add lockless binder_alloc_(set|get)_vma()
* 7d51cccdd9 BACKPORT: FROMGIT: Revert "binder_alloc: add missing mmap_lock calls when using the VMA"
* 43b43053a4 ANDROID: fix merge issue in binder_alloc_set_vma()
* adaabe3996 UPSTREAM: usb: dwc3: debugfs: Resume dwc3 before accessing registers
* a34daa1c47 UPSTREAM: kvm: initialize all of the kvm_debugregs structure before sending it to userspace
* f993c1a2b0 UPSTREAM: netfilter: nf_tables: deactivate anonymous set from preparation phase
* 0f765cae4a UPSTREAM: usb: dwc3: gadget: Refactor EP0 forced stall/restart into a separate API
* c5de3d68b0 FROMGIT: locking/rwsem: Add __always_inline annotation to __down_read_common() and inlined callers
* 1ce1603175 BACKPORT: UPSTREAM: usb: dwc3: gadget: Execute gadget stop after halting the controller
* 3dd76c4a0d ANDROID: irqchip/irq-gic-v3: Fixed gic_suspend() stub for !CONFIG_PM
* c2d82f46fc ANDROID: ABI: Update symbol list for the symbols used by the unisoc for A13-k5.15
* 82aad30f43 UPSTREAM: usb: dwc3: gadget: Stall and restart EP0 if host is unresponsive
* a881d6f4e5 BACKPORT: FROMLIST: thermal/core/power_allocator: avoid thermal cdev can not be reset
* 424075e4ef Revert "ANDROID: uid_sys_stat: split the global lock uid_lock to the fine-grained"
* e38f3666ea BACKPORT: FROMGIT: wifi: cfg80211/mac80211: report link ID on control port RX
* 9caa51de34 FROMLIST: binder: fix UAF caused by faulty buffer cleanup
* 9ad803f257 ANDROID: usb: gadget: configfs: Protect composite_setup in a spinlock
* db8d05e8f0 ANDROID: ABI: update allowed list for galaxy
* 5227c47617 ANDROID: GKI: Increase max 8250 uarts
* b70e2af3bd BACKPORT: f2fs: give priority to select unpinned section for foreground GC
* 7c4a265d2a UPSTREAM: f2fs: check pinfile in gc_data_segment() in advance
* 1e1a532845 ANDROID: GKI: add missing vendor hook symbols
* e6dabdbadf ANDROID: GKI: reorder symbols within ABI files
* d7d2be8fd5 ANDROID: uid_sys_stat: split the global lock uid_lock to the fine-grained locks for each hlist in hash_table.
* 77f51b1655 ANDROID: fuse-bpf: Fix bpf_test_xattr testcase error
* 2c1967007d ANDROID: fuse-bpf: Remove OWNERS file
* e7df7ebf40 ANDROID: ABI: Add to QCOM symbols list
* 7a661c41cc ANDROID: fuse-bpf: Simplify and fix setting bpf program
* 7671fd7ee9 BACKPORT: FROMLIST: arm64: Also reset KASAN tag if page is not PG_mte_tagged
* c5044e240d ANDROID: fuse-bpf: Make fuse_test compile and pass
* b35a061824 ANDROID: KVM: arm64: Move addr_is_allowed_memory() check into host callback
* 53625a846a ANDROID: KVM: arm64: Pass addr to get_page_state() helper
* 2c5e832436 ANDROID: abi_gki_aarch64_qcom: Add android_vh_ufs_prepare_command
* c86e2416fd ANDROID: fix use of plain integer as NULL pointer
* 9172303261 UPSTREAM: usb: gadget: udc: core: remove usage of list iterator past the loop body
* ac3cf8a41a UPSTREAM: usb: gadget: udc: core: Print error code in usb_gadget_probe_driver()
* d3e95905ce FROMLIST: usb: xhci: Remove unused udev from xhci_log_ctx trace event
* 1539137fce FROMGIT: usb: dwc3: gadget: Add 1ms delay after end transfer command without IOC
* 4f1a122937 UPSTREAM: usb: gadget: udc: core: Use pr_fmt() to prefix messages
* 283ccf3c28 ANDROID: GKI: Update symbol list for Amlogic
* 69e55fed94 ANDROID: GKI: Update symbol list for mtk
* 67510f5083 ANDROID: setlocalversion: Add a flag to keep tag info
* 5e6e9c596b ANDROID: clear memory trylock-bit when page_locked.
* 805cf52991 UPSTREAM: fs: drop peer group ids under namespace lock
* 4158b1508f Revert "Revert "mm/rmap: Fix anon_vma->degree ambiguity leading to double-reuse""
* e269893a9b ANDROID: GKI: Update symbol list for mtk
* 875c053251 ANDROID: dma-buf: heaps: Don't lock unused dmabuf_page_pool mutex
* 2f051979ea ANDROID: Updatae the GKI symbol list and ABI XML.
* 30edea77f7 ANDROID: gki_defconfig: enable CONFIG_BLK_CGROUP_IOPRIO
* ec96f22414 FROMLIST: [PATCH v2] tick/broadcast: Do not set oneshot_mask except was_periodic was true
* ff29d7e59d UPSTREAM: KVM: VMX: Move preemption timer <=> hrtimer dance to common x86
* 428069e9c6 ANDROID: GKI: Update symbol list for Unisoc
* a6dcbbd57f ANDROID: abi_gki_aarch64_qcom: update abi
* 6ccb91c80a BACKPORT: FROMGIT: rcu: Avoid freeing new kfree_rcu() memory after old grace period
* 0491ec319e ANDROID: MGLRU: Avoid reactivation of anon pages on swap full
* 5959a6946f ANDROID: fuse-bpf: Run bpf with migration disabled
* f01e7da91f ANDROID: incremental fs: Evict inodes before freeing mount data
* fe8e1408d9 ANDROID: GKI: Update symbol list for Amlogic
* fd28863aa4 ANDROID: fuse-bpf: Correctly put backing files
* 8e6265391e UPSTREAM: media: rc: Fix use-after-free bugs caused by ene_tx_irqsim()
* 617c5ccc25 UPSTREAM: hid: bigben_probe(): validate report count
* e422c244a9 UPSTREAM: HID: bigben: use spinlock to safely schedule workers
* 7516b4d0ff ANDROID: Fix kernelci break: eventfd_signal_mask redefined
* f3a30a028e ANDROID: fuse: fix struct path zero initialization
* ee002ea6ad UPSTREAM: Makefile: use -gdwarf-{4|5} for assembler for DEBUG_INFO_DWARF{4|5}
* 1fd3cdb1c2 UPSTREAM: HID: bigben_worker() remove unneeded check on report_field
* 2cabed5f02 UPSTREAM: HID: bigben: use spinlock to protect concurrent accesses
* 35ff3e8cb6 ANDROID: gki_defconfig: enable CONFIG_CRYPTO_GHASH_ARM64_CE
* d3b24dd2c7 ANDROID: dm-default-key: update for blk_crypto_evict_key() returning void
* 75a9412100 BACKPORT: FROMGIT: blk-crypto: make blk_crypto_evict_key() more robust
* 4f1318871f BACKPORT: FROMGIT: blk-crypto: make blk_crypto_evict_key() return void
* 1f978b5216 BACKPORT: FROMGIT: blk-mq: release crypto keyslot before reporting I/O complete
* f08d600c31 ANDROID: GKI: Update symbol list for mtk
* afcf7ac2f3 UPSTREAM: of: base: Skip CPU nodes with "fail"/"fail-..." status
* 44a94ece47 ANDROID: fuse: Support errors from fuse daemon in canonical path
* 30c810b809 ANDROID: fsnotify: Notify lower fs of open
* 00d76c2ca4 UPSTREAM: ARM: 9203/1: kconfig: fix MODULE_PLTS for KASAN with KASAN_VMALLOC
* 37bcdf00ab UPSTREAM: ARM: 9202/1: kasan: support CONFIG_KASAN_VMALLOC
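For readers unfamiliar with the branch flow, the catch-up above amounts to merging the development branch into the LTS branch. A minimal sketch in a throwaway repository (branch names come from the subject line; the commits themselves are purely illustrative):

```shell
# Toy reproduction of the catch-up merge: android13-5.15 -> android13-5.15-lts.
repo=$(mktemp -d) && cd "$repo"
git init -q -b android13-5.15
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'base'
git branch android13-5.15-lts            # LTS branch forks from here
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'android13-5.15 change'
git checkout -q android13-5.15-lts
git merge -q --no-edit android13-5.15    # catch the LTS branch up
```

Because the LTS branch has no commits of its own in this sketch, the merge fast-forwards; in the real tree both branches have diverged, so the merge above produces a merge commit like the one shown here.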

Change-Id: I8bf5883879fc5bcb4a72fe90e61e27caeaa59cd0
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Committed by Greg Kroah-Hartman on 2023-06-16 09:17:32 +00:00.
107 changed files with 4156 additions and 2901 deletions.


@@ -1298,7 +1298,7 @@ $(sort $(vmlinux-deps) $(subdir-modorder)): descend ;
filechk_kernel.release = \
echo "$(KERNELVERSION)$$($(CONFIG_SHELL) $(srctree)/scripts/setlocalversion \
-  $(srctree) $(BRANCH) $(KMI_GENERATION))"
+  --save-tag $(srctree) $(BRANCH) $(KMI_GENERATION))"
# Store (new) KERNELRELEASE string in include/config/kernel.release
include/config/kernel.release: FORCE
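The hunk above passes a new `--save-tag` flag (added by commit 67510f5083, "ANDROID: setlocalversion: Add a flag to keep tag info") to `scripts/setlocalversion` when composing the kernel release string. A rough sketch of the plumbing, using a hypothetical stub in place of the real script — the version number and suffix format here are illustrative, not taken from an actual tree:

```shell
# Hypothetical stand-in for scripts/setlocalversion; the real script derives
# the local version from git state. --save-tag keeps tag information in the
# suffix instead of discarding it.
setlocalversion() {
  if [ "$1" = "--save-tag" ]; then shift; fi
  printf -- '-android13-%s' "$3"   # args: <srctree> <branch> <kmi_generation>
}
KERNELVERSION=5.15.110             # illustrative value
echo "${KERNELVERSION}$(setlocalversion --save-tag /src android13-5.15 9)"
# -> 5.15.110-android13-9
```

The concatenation mirrors the `filechk_kernel.release` recipe: the fixed `$(KERNELVERSION)` prefix plus whatever suffix `setlocalversion` emits for the branch and KMI generation.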

(File diff suppressed because it is too large.)


@@ -688,9 +688,9 @@
flow_rule_match_vlan
flush_dcache_page
flush_delayed_work
flush_signals
flush_work
flush_workqueue
flush_signals
fpsimd_context_busy
fput
free_irq
@@ -1434,6 +1434,7 @@
register_wide_hw_breakpoint
regmap_field_read
regmap_field_update_bits_base
regmap_irq_chip_get_base
regmap_multi_reg_write
regmap_raw_write
regmap_read
@@ -1684,6 +1685,8 @@
__stack_chk_fail
stack_trace_print
stack_trace_save
static_key_disable
static_key_enable
static_key_slow_dec
static_key_slow_inc
stpcpy
@@ -1787,10 +1790,12 @@
__traceiter_mmap_lock_start_locking
__traceiter_xdp_exception
trace_output_call
__tracepoint_android_rvh_panic_unhandled
__tracepoint_android_vh_cpu_idle_enter
__tracepoint_android_vh_cpu_idle_exit
__tracepoint_android_vh_do_traversal_lruvec
__tracepoint_android_vh_dump_throttled_rt_tasks
__tracepoint_android_vh_ftrace_format_check
__tracepoint_android_vh_mmc_sd_update_cmdline_timing
__tracepoint_android_vh_mmc_sd_update_dataline_timing
__tracepoint_android_vh_rmqueue
@@ -1877,6 +1882,7 @@
usleep_range_state
utf16s_to_utf8s
utf8_to_utf32
v4l2_ctrl_add_handler
v4l2_ctrl_handler_free
v4l2_ctrl_handler_init_class
v4l2_ctrl_handler_setup


@@ -72,10 +72,10 @@
blocking_notifier_call_chain
blocking_notifier_chain_register
blocking_notifier_chain_unregister
bpf_trace_run1
bpf_trace_run10
bpf_trace_run11
bpf_trace_run12
bpf_trace_run1
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
@@ -1049,9 +1049,9 @@
is_dma_buf_file
is_vmalloc_addr
iterate_fd
jiffies
jiffies_64_to_clock_t
jiffies64_to_msecs
jiffies
jiffies_to_msecs
jiffies_to_usecs
kasan_flag_enabled
@@ -1162,8 +1162,8 @@
memory_read_from_buffer
memparse
mem_section
memset
memset64
memset
__memset_io
memstart_addr
mfd_add_devices
@@ -1245,8 +1245,8 @@
nla_find
nla_memcpy
__nla_parse
nla_put
nla_put_64bit
nla_put
nla_put_nohdr
nla_reserve
__nla_validate


@@ -63,9 +63,9 @@
blocking_notifier_call_chain
blocking_notifier_chain_register
blocking_notifier_chain_unregister
bpf_trace_run1
bpf_trace_run10
bpf_trace_run11
bpf_trace_run1
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
@@ -719,8 +719,8 @@
memory_read_from_buffer
memparse
mem_section
memset
memset64
memset
__memset_io
memstart_addr
mfd_add_devices
@@ -790,8 +790,8 @@
nla_find
nla_memcpy
__nla_parse
nla_put
nla_put_64bit
nla_put
nla_put_nohdr
nla_reserve
__nla_validate


@@ -65,6 +65,7 @@
name_to_dev_t
netlink_ack
of_css
of_find_all_nodes
phy_connect_direct
phy_find_first
phy_get_pause
@@ -80,14 +81,38 @@
skb_copy_ubufs
smpboot_unregister_percpu_thread
snd_soc_add_card_controls
snd_soc_component_enable_pin
snd_soc_component_get_pin_status
stack_trace_save_regs
_trace_android_vh_record_pcpu_rwsem_starttime
__traceiter_android_rvh_arm64_serror_panic
__traceiter_android_rvh_die_kernel_fault
__traceiter_android_rvh_do_mem_abort
__traceiter_android_rvh_do_ptrauth_fault
__traceiter_android_rvh_do_sea
__traceiter_android_rvh_do_sp_pc_abort
__traceiter_android_rvh_do_undefinstr
__traceiter_android_rvh_panic_unhandled
__traceiter_android_rvh_ufs_complete_init
__traceiter_android_vh_meminfo_proc_show
__traceiter_android_vh_ptype_head
__traceiter_android_vh_rtmutex_wait_finish
__traceiter_android_vh_rtmutex_wait_start
__traceiter_android_vh_rwsem_read_wait_finish
__traceiter_android_vh_rwsem_write_wait_finish
__traceiter_android_vh_sched_show_task
__traceiter_android_vh_show_mem
__traceiter_android_vh_try_to_freeze_todo
__traceiter_android_vh_try_to_freeze_todo_unfrozen
__traceiter_android_vh_watchdog_timer_softlockup
__traceiter_android_vh_wq_lockup_pool
__traceiter_block_rq_insert
__traceiter_hrtimer_expire_entry
__traceiter_hrtimer_expire_exit
__traceiter_irq_handler_entry
__traceiter_irq_handler_exit
__traceiter_kfree_skb
__traceiter_workqueue_execute_start
__tracepoint_android_rvh_arm64_serror_panic
__tracepoint_android_rvh_die_kernel_fault
__tracepoint_android_rvh_do_mem_abort
@@ -96,6 +121,7 @@
__tracepoint_android_rvh_do_sp_pc_abort
__tracepoint_android_rvh_do_undefinstr
__tracepoint_android_rvh_panic_unhandled
__tracepoint_android_rvh_ufs_complete_init
__tracepoint_android_vh_meminfo_proc_show
__tracepoint_android_vh_ptype_head
__tracepoint_android_vh_rtmutex_wait_finish
@@ -115,7 +141,6 @@
__tracepoint_irq_handler_exit
__tracepoint_kfree_skb
__tracepoint_workqueue_execute_start
_trace_android_vh_record_pcpu_rwsem_starttime
usb_alloc_dev
usb_deregister_dev
usb_find_interface


@@ -65,10 +65,10 @@
blocking_notifier_call_chain
blocking_notifier_chain_register
blocking_notifier_chain_unregister
bpf_trace_run1
bpf_trace_run10
bpf_trace_run11
bpf_trace_run12
bpf_trace_run1
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
@@ -1103,8 +1103,8 @@
mempool_free
memremap
mem_section
memset
memset64
memset
__memset_io
memstart_addr
memunmap


@@ -1,21 +1,21 @@
[abi_symbol_list]
__traceiter_android_rvh_dma_buf_stats_teardown
__traceiter_android_vh_alter_mutex_list_add
__traceiter_android_vh_alter_rwsem_list_add
__traceiter_android_vh_mutex_init
__traceiter_android_vh_mutex_unlock_slowpath
__traceiter_android_vh_mutex_wait_finish
__traceiter_android_vh_mutex_wait_start
__traceiter_android_vh_rwsem_init
__traceiter_android_vh_rwsem_wake
__traceiter_android_vh_rwsem_write_finished
__traceiter_android_vh_alter_rwsem_list_add
__traceiter_android_vh_mutex_init
__traceiter_android_vh_alter_mutex_list_add
__traceiter_android_vh_mutex_unlock_slowpath
__traceiter_android_vh_mutex_wait_start
__traceiter_android_vh_mutex_wait_finish
__traceiter_android_rvh_dma_buf_stats_teardown
__tracepoint_android_rvh_dma_buf_stats_teardown
__tracepoint_android_vh_alter_mutex_list_add
__tracepoint_android_vh_alter_rwsem_list_add
__tracepoint_android_vh_mutex_init
__tracepoint_android_vh_mutex_unlock_slowpath
__tracepoint_android_vh_mutex_wait_finish
__tracepoint_android_vh_mutex_wait_start
__tracepoint_android_vh_rwsem_init
__tracepoint_android_vh_rwsem_wake
__tracepoint_android_vh_rwsem_write_finished
__tracepoint_android_vh_alter_rwsem_list_add
__tracepoint_android_vh_mutex_init
__tracepoint_android_vh_alter_mutex_list_add
__tracepoint_android_vh_mutex_unlock_slowpath
__tracepoint_android_vh_mutex_wait_start
__tracepoint_android_vh_mutex_wait_finish
__tracepoint_android_rvh_dma_buf_stats_teardown


@@ -29,14 +29,15 @@
__arch_copy_to_user
arch_freq_scale
arch_timer_read_counter
arm64_const_caps_ready
arm64_use_ng_mappings
arm_smccc_1_1_get_conduit
arm_smccc_1_2_hvc
arm_smccc_1_2_smc
arm_smccc_get_version
arm64_const_caps_ready
arm64_use_ng_mappings
__arm_smccc_hvc
__arm_smccc_smc
arp_create
arp_tbl
async_schedule_node
atomic_notifier_call_chain
@@ -44,7 +45,9 @@
atomic_notifier_chain_unregister
autoremove_wake_function
balance_push_callback
_bcd2bin
bcmp
_bin2bcd
bio_add_page
bio_alloc_bioset
bio_associate_blkg
@@ -65,6 +68,7 @@
bitmap_print_to_pagebuf
bitmap_release_region
__bitmap_set
__bitmap_subset
bitmap_to_arr32
__bitmap_weight
bitmap_zalloc
@@ -348,6 +352,7 @@
devm_clk_bulk_get_optional
devm_clk_get
devm_clk_get_optional
devm_clk_get_optional_enabled
devm_clk_put
devm_clk_register
devm_devfreq_add_device
@@ -375,6 +380,7 @@
devm_iio_channel_get_all
devm_iio_device_alloc
__devm_iio_device_register
devm_iio_triggered_buffer_setup_ext
devm_input_allocate_device
devm_ioremap
devm_ioremap_resource
@@ -944,6 +950,7 @@
init_wait_entry
__init_waitqueue_head
input_allocate_device
input_device_enabled
input_event
input_free_device
input_mt_init_slots
@@ -1249,6 +1256,7 @@
__netdev_alloc_skb
netdev_err
netdev_info
netdev_stats_to_stats64
netdev_upper_get_next_dev_rcu
netdev_warn
netif_carrier_off
@@ -1402,10 +1410,10 @@
pci_msi_create_irq_domain
pci_msi_mask_irq
pci_msi_unmask_irq
pci_pio_to_address
pci_remove_root_bus
pci_stop_root_bus
pci_unlock_rescan_remove
pci_pio_to_address
PDE_DATA
__per_cpu_offset
perf_event_create_kernel_counter
@@ -1953,7 +1961,6 @@
system_wq
sys_tz
task_active_pid_ns
task_sched_runtime
__tasklet_hi_schedule
tasklet_init
tasklet_kill
@@ -1961,6 +1968,7 @@
tasklet_setup
__task_pid_nr_ns
__task_rq_lock
task_sched_runtime
thermal_cooling_device_unregister
thermal_of_cooling_device_register
thermal_pressure
@@ -2017,9 +2025,9 @@
__traceiter_android_rvh_selinux_avc_node_delete
__traceiter_android_rvh_selinux_avc_node_replace
__traceiter_android_rvh_selinux_is_initialized
__traceiter_android_rvh_setscheduler
__traceiter_android_rvh_set_cpus_allowed_ptr_locked
__traceiter_android_rvh_set_cpus_allowed_by_task
__traceiter_android_rvh_set_cpus_allowed_ptr_locked
__traceiter_android_rvh_setscheduler
__traceiter_android_rvh_set_user_nice
__traceiter_android_rvh_tick_entry
__traceiter_android_rvh_update_cpu_capacity
@@ -2064,6 +2072,7 @@
__traceiter_xhci_urb_giveback
trace_output_call
__tracepoint_android_rvh_after_enqueue_task
__tracepoint_android_rvh_audio_usb_offload_disconnect
__tracepoint_android_rvh_can_migrate_task
__tracepoint_android_rvh_commit_creds
__tracepoint_android_rvh_cpu_overutilized
@@ -2095,9 +2104,9 @@
__tracepoint_android_rvh_selinux_avc_node_delete
__tracepoint_android_rvh_selinux_avc_node_replace
__tracepoint_android_rvh_selinux_is_initialized
__tracepoint_android_rvh_setscheduler
__tracepoint_android_rvh_set_cpus_allowed_ptr_locked
__tracepoint_android_rvh_set_cpus_allowed_by_task
__tracepoint_android_rvh_set_cpus_allowed_ptr_locked
__tracepoint_android_rvh_setscheduler
__tracepoint_android_rvh_set_user_nice
__tracepoint_android_rvh_tick_entry
__tracepoint_android_rvh_update_cpu_capacity
@@ -2303,7 +2312,6 @@
usb_hcd_start_port_resume
usb_hub_clear_tt_buffer
usb_interface_id
usb_wakeup_notification
usbnet_change_mtu
usbnet_disconnect
usbnet_get_drvinfo
@@ -2346,6 +2354,7 @@
usb_role_switch_unregister
usb_speed_string
usb_string_id
usb_wakeup_notification
__usecs_to_jiffies
usleep_range_state
uuid_null
@@ -2575,8 +2584,8 @@
pci_set_master
pci_set_power_state
pci_unregister_driver
_raw_spin_trylock_bh
radix_tree_maybe_preload
_raw_spin_trylock_bh
register_inetaddr_notifier
regmap_multi_reg_write
regmap_register_patch


@@ -145,8 +145,8 @@
ipv6_skip_exthdr
is_dma_buf_file
iterate_fd
jiffies
jiffies_64
jiffies
jiffies_to_msecs
kasan_flag_enabled
kasprintf
@@ -197,8 +197,8 @@
memory_cgrp_subsys
memory_cgrp_subsys_enabled_key
memparse
memset
memset64
memset
memstart_addr
migrate_page_copy
misc_deregister
@@ -222,10 +222,10 @@
nla_find
nla_memcpy
__nla_parse
nla_put
nla_put_64bit
nla_reserve
nla_put
nla_reserve_64bit
nla_reserve
nonseekable_open
nr_cpu_ids
__num_online_cpus
@@ -234,8 +234,8 @@
__page_file_index
__page_mapcount
page_mapping
param_ops_uint
page_to_lruvec
param_ops_uint
__per_cpu_offset
platform_device_add
platform_device_alloc
@@ -266,7 +266,6 @@
radix_tree_next_chunk
radix_tree_preload
radix_tree_replace_slot
rtc_read_alarm
_raw_read_lock
_raw_read_unlock
_raw_spin_lock
@@ -285,6 +284,7 @@
register_sysctl_table
register_tcf_proto_ops
remove_proc_subtree
rtc_read_alarm
__rtnl_link_unregister
sched_clock
sched_setscheduler_nocheck
@@ -385,12 +385,14 @@
__traceiter_android_vh_build_sched_domains
__traceiter_android_vh_cache_show
__traceiter_android_vh_check_uninterruptible_tasks_dn
__traceiter_android_vh_cleanup_old_buffers_bypass
__traceiter_android_vh_cma_drain_all_pages_bypass
__traceiter_android_vh_cpufreq_acct_update_power
__traceiter_android_vh_del_page_from_lrulist
__traceiter_android_vh_do_futex
__traceiter_android_vh_do_page_trylock
__traceiter_android_vh_do_traversal_lruvec
__traceiter_android_vh_dm_bufio_shrink_scan_bypass
__traceiter_android_vh_drain_all_pages_bypass
__traceiter_android_vh_dup_task_struct
__traceiter_android_vh_exit_mm
@@ -414,8 +416,8 @@
__traceiter_android_vh_mem_cgroup_id_remove
__traceiter_android_vh_meminfo_proc_show
__traceiter_android_vh_modify_thermal_cpu_get_power
__traceiter_android_vh_mutex_init
__traceiter_android_vh_mutex_can_spin_on_owner
__traceiter_android_vh_mutex_init
__traceiter_android_vh_mutex_opt_spin_finish
__traceiter_android_vh_mutex_opt_spin_start
__traceiter_android_vh_page_referenced_check_bypass
@@ -443,6 +445,17 @@
__traceiter_android_vh_tune_scan_type
__traceiter_android_vh_tune_swappiness
__traceiter_android_vh_update_page_mapcount
__traceiter_block_bio_complete
__traceiter_block_bio_queue
__traceiter_block_getrq
__traceiter_block_rq_issue
__traceiter_block_rq_merge
__traceiter_block_rq_requeue
__traceiter_block_split
__traceiter_net_dev_queue
__traceiter_net_dev_xmit
__traceiter_netif_receive_skb
__traceiter_netif_rx
__traceiter_sched_stat_blocked
__traceiter_sched_stat_iowait
__traceiter_sched_stat_runtime
@@ -476,9 +489,12 @@
__tracepoint_android_vh_binder_thread_release
__tracepoint_android_vh_build_sched_domains
__tracepoint_android_vh_cache_show
__tracepoint_android_vh_check_uninterruptible_tasks_dn
__tracepoint_android_vh_cleanup_old_buffers_bypass
__tracepoint_android_vh_cma_drain_all_pages_bypass
__tracepoint_android_vh_cpufreq_acct_update_power
__tracepoint_android_vh_del_page_from_lrulist
__tracepoint_android_vh_dm_bufio_shrink_scan_bypass
__tracepoint_android_vh_do_futex
__tracepoint_android_vh_do_page_trylock
__tracepoint_android_vh_do_traversal_lruvec
@@ -505,8 +521,8 @@
__tracepoint_android_vh_mem_cgroup_id_remove
__tracepoint_android_vh_meminfo_proc_show
__tracepoint_android_vh_modify_thermal_cpu_get_power
__tracepoint_android_vh_mutex_init
__tracepoint_android_vh_mutex_can_spin_on_owner
__tracepoint_android_vh_mutex_init
__tracepoint_android_vh_mutex_opt_spin_finish
__tracepoint_android_vh_mutex_opt_spin_start
__tracepoint_android_vh_page_referenced_check_bypass
@@ -525,8 +541,8 @@
__tracepoint_android_vh_rwsem_opt_spin_finish
__tracepoint_android_vh_rwsem_opt_spin_start
__tracepoint_android_vh_rwsem_wake_finish
__tracepoint_android_vh_sched_show_task
__tracepoint_android_vh_save_track_hash
__tracepoint_android_vh_sched_show_task
__tracepoint_android_vh_sched_stat_runtime_rt
__tracepoint_android_vh_show_mapcount_pages
__tracepoint_android_vh_sync_txn_recvd
@@ -534,6 +550,13 @@
__tracepoint_android_vh_tune_scan_type
__tracepoint_android_vh_tune_swappiness
__tracepoint_android_vh_update_page_mapcount
__tracepoint_block_bio_complete
__tracepoint_block_bio_queue
__tracepoint_block_getrq
__tracepoint_block_rq_issue
__tracepoint_block_rq_merge
__tracepoint_block_rq_requeue
__tracepoint_block_split
__tracepoint_net_dev_queue
__tracepoint_net_dev_xmit
__tracepoint_netif_receive_skb


@@ -1,18 +1,18 @@
[abi_symbol_list]
__hid_register_driver
__hid_request
hid_add_device
hid_alloc_report_buf
hid_allocate_device
hid_alloc_report_buf
hid_destroy_device
hid_hw_start
hid_hw_stop
hidinput_calc_abs_res
hid_input_report
hid_open_report
hid_parse_report
__hid_register_driver
hid_report_raw_event
__hid_request
hid_unregister_driver
hidinput_calc_abs_res
iio_trigger_generic_data_rdy_poll
input_device_enabled


@@ -720,8 +720,8 @@
_find_next_bit
find_pid_ns
find_task_by_vpid
find_vm_area
__find_vma
find_vm_area
finish_wait
flush_dcache_page
flush_delayed_work


@@ -79,12 +79,15 @@
bitmap_allocate_region
__bitmap_and
__bitmap_clear
__bitmap_complement
bitmap_find_next_zero_area_off
bitmap_free
__bitmap_or
bitmap_print_to_pagebuf
bitmap_release_region
__bitmap_replace
__bitmap_set
__bitmap_xor
bitmap_zalloc
__blk_alloc_disk
blk_cleanup_disk
@@ -145,10 +148,10 @@
blocking_notifier_chain_register
blocking_notifier_chain_unregister
bmap
bpf_trace_run1
bpf_trace_run10
bpf_trace_run11
bpf_trace_run12
bpf_trace_run1
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
@@ -423,6 +426,7 @@
devm_iio_channel_get_all
devm_iio_device_alloc
__devm_iio_device_register
devm_iio_triggered_buffer_setup_ext
devm_input_allocate_device
devm_ioremap
devm_ioremap_resource
@@ -688,8 +692,11 @@
edac_device_handle_ue_count
enable_irq
enable_percpu_irq
ethnl_cable_test_amplitude
ethnl_cable_test_fault_length
ethnl_cable_test_pulse
ethnl_cable_test_result
ethnl_cable_test_step
ethtool_convert_legacy_u32_to_link_mode
ethtool_convert_link_mode_to_legacy_u32
eventfd_ctx_fdget
@@ -772,14 +779,20 @@
geni_se_select_mode
geni_se_tx_dma_prep
geni_se_tx_dma_unprep
genphy_check_and_restart_aneg
__genphy_config_aneg
genphy_c45_read_status
genphy_read_abilities
genphy_read_lpa
genphy_read_mmd_unsupported
genphy_read_status
genphy_read_status_fixed
genphy_restart_aneg
genphy_resume
genphy_setup_forced
genphy_soft_reset
genphy_suspend
genphy_update_link
genphy_write_mmd_unsupported
gen_pool_add_owner
gen_pool_alloc_algo_owner
@@ -1032,6 +1045,7 @@
iommu_group_set_iommudata
iommu_iova_to_phys
iommu_map
iommu_map_atomic
iommu_map_sg
iommu_present
iommu_put_dma_cookie
@@ -1236,8 +1250,10 @@
mdiobus_alloc_size
mdiobus_free
mdiobus_get_phy
__mdiobus_read
mdiobus_read
mdiobus_unregister
__mdiobus_write
mdiobus_write
mdio_device_create
mdio_device_free
@@ -1267,8 +1283,8 @@
mempool_kmalloc
memremap
mem_section
memset
memset64
memset
__memset_io
memstart_addr
memunmap
@@ -1295,8 +1311,8 @@
mmc_retune_release
mmc_select_bus_width
mmc_select_card
mmc_select_hs
mmc_select_hs400
mmc_select_hs
mmc_select_hs_ddr
mmc_select_timing
mmc_send_status
@@ -1329,8 +1345,8 @@
netdev_rss_key_fill
netif_receive_skb_list
nla_find
nla_reserve
nla_reserve_64bit
nla_reserve
__nla_validate
no_llseek
nonseekable_open
@@ -1528,12 +1544,15 @@
perf_trace_run_bpf_submit
phy_attached_info
phy_calibrate
phy_config_aneg
phy_drivers_register
phy_drivers_unregister
phy_error
phy_ethtool_get_eee
phy_ethtool_get_wol
phy_ethtool_set_wol
phy_exit
phy_gbit_fibre_features
phy_init
phy_init_eee
phy_init_hw
@@ -1563,14 +1582,27 @@
phylink_stop
phylink_suspend
phy_mac_interrupt
__phy_modify
phy_modify
phy_modify_changed
phy_modify_mmd
phy_modify_paged
phy_modify_paged_changed
phy_power_off
phy_power_on
phy_read_mmd
phy_read_paged
phy_resolve_aneg_pause
phy_restore_page
phy_save_page
phy_select_page
phy_set_mode_ext
phy_sfp_attach
phy_sfp_detach
phy_sfp_probe
phy_trigger_machine
phy_write_mmd
phy_write_paged
pick_highest_pushable_task
pick_migrate_task
pid_nr_ns
@@ -1867,6 +1899,7 @@
rproc_put
rproc_remove_subdev
rproc_report_crash
rproc_set_firmware
rproc_shutdown
rtc_time64_to_tm
rtc_tm_to_time64
@@ -2153,6 +2186,7 @@
task_may_not_preempt
__task_pid_nr_ns
__task_rq_lock
task_rq_lock
tcp_hashinfo
thermal_cooling_device_register
thermal_cooling_device_unregister
@@ -2285,6 +2319,7 @@
__traceiter_android_vh_ufs_check_int_errors
__traceiter_android_vh_ufs_clock_scaling
__traceiter_android_vh_ufs_compl_command
__traceiter_android_vh_ufs_prepare_command
__traceiter_android_vh_ufs_send_command
__traceiter_android_vh_ufs_send_tm_command
__traceiter_android_vh_ufs_send_uic_command
@@ -2407,6 +2442,7 @@
__tracepoint_android_vh_ufs_check_int_errors
__tracepoint_android_vh_ufs_clock_scaling
__tracepoint_android_vh_ufs_compl_command
__tracepoint_android_vh_ufs_prepare_command
__tracepoint_android_vh_ufs_send_command
__tracepoint_android_vh_ufs_send_tm_command
__tracepoint_android_vh_ufs_send_uic_command
@@ -2518,6 +2554,7 @@
up_write
usb_add_phy_dev
usb_alloc_coherent
usb_alloc_dev
usb_assign_descriptors
usb_bus_idr
usb_bus_idr_lock
@@ -2526,6 +2563,7 @@
usb_control_msg_send
usb_debug_root
usb_decode_ctrl
usb_driver_set_configuration
usb_ep_alloc_request
usb_ep_autoconfig
usb_ep_dequeue
@@ -2534,6 +2572,7 @@
usb_ep_free_request
usb_ep_queue
usb_ep_set_halt
usb_find_common_endpoints
usb_free_all_descriptors
usb_free_coherent
usb_function_register
@@ -2556,6 +2595,7 @@
usb_role_switch_register
usb_role_switch_set_role
usb_role_switch_unregister
usb_set_device_state
usb_speed_string
usb_string_id
usb_unregister_notify


@@ -49,8 +49,8 @@
blocking_notifier_call_chain
blocking_notifier_chain_register
blocking_notifier_chain_unregister
bpf_trace_run1
bpf_trace_run10
bpf_trace_run1
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
@@ -416,8 +416,8 @@
irq_to_desc
is_console_locked
is_vmalloc_addr
jiffies
jiffies_64
jiffies
jiffies_to_msecs
jiffies_to_usecs
kasan_flag_enabled
@@ -496,8 +496,8 @@
memparse
memremap
mem_section
memset
memset64
memset
__memset_io
memstart_addr
memunmap
@@ -572,8 +572,8 @@
nla_find
nla_memcpy
__nla_parse
nla_put
nla_put_64bit
nla_put
nla_reserve
__nla_validate
no_llseek


@@ -58,9 +58,9 @@
blk_queue_max_segment_size
blk_queue_max_write_zeroes_sectors
blk_queue_physical_block_size
blk_status_to_errno
blk_queue_rq_timeout
blk_queue_write_cache
blk_status_to_errno
blk_update_request
blockdev_superblock
blocking_notifier_call_chain
@@ -233,9 +233,9 @@
devfreq_event_get_event
devfreq_recommended_opp
devfreq_remove_device
devfreq_unregister_opp_notifier
devfreq_resume_device
devfreq_suspend_device
devfreq_unregister_opp_notifier
__dev_get_by_index
dev_get_by_index
device_add
@@ -375,9 +375,9 @@
dma_fence_array_ops
dma_fence_context_alloc
dma_fence_default_wait
dma_fence_get_status
dma_fence_enable_sw_signaling
dma_fence_free
dma_fence_get_status
dma_fence_init
dma_fence_release
dma_fence_remove_callback
@@ -452,10 +452,10 @@
event_triggers_call
fb_mode_option
fd_install
fget
file_path
filp_close
filp_open_block
fget
_find_first_bit
find_get_pid
_find_next_bit
@@ -888,8 +888,8 @@
of_iomap
of_io_request_and_map
of_irq_find_parent
of_machine_is_compatible
of_irq_get
of_machine_is_compatible
of_match_device
of_match_node
of_nvmem_cell_get
@@ -924,6 +924,7 @@
pci_dev_put
pci_disable_device
pci_disable_msi
pcie_link_speed
pci_enable_device
pci_enable_msi
pci_get_device
@@ -943,7 +944,6 @@
pci_unregister_driver
pci_write_config_byte
pci_write_config_word
pcie_link_speed
PDE_DATA
__per_cpu_offset
perf_trace_buf_alloc
@@ -1213,9 +1213,9 @@
seq_write
set_capacity
set_capacity_and_notify
set_page_dirty_lock
set_disk_ro
set_freezable
set_page_dirty_lock
__SetPageMovable
set_user_nice
sg_alloc_table
@@ -1226,9 +1226,9 @@
sg_next
__sg_page_iter_next
__sg_page_iter_start
si_meminfo
simple_attr_open
simple_attr_release
si_meminfo
simple_open
simple_read_from_buffer
simple_strtol
@@ -1396,6 +1396,7 @@
tasklet_setup
tasklet_unlock_wait
__task_pid_nr_ns
tcpm_tcpc_reset
thermal_zone_get_temp
thermal_zone_get_zone_by_name
time64_to_tm
@@ -1413,6 +1414,7 @@
__traceiter_android_vh_dma_buf_release
__traceiter_android_vh_map_util_freq
__traceiter_android_vh_meminfo_proc_show
__traceiter_android_vh_page_cache_forced_ra
__traceiter_android_vh_show_mem
__traceiter_android_vh_tune_inactive_ratio
__traceiter_android_vh_tune_swappiness
@@ -1654,9 +1656,9 @@
vm_get_page_prot
vm_insert_page
vm_iomap_memory
vm_map_ram
vm_mmap
vm_munmap
vm_map_ram
vm_unmap_ram
vsnprintf
vunmap


@@ -267,8 +267,8 @@ up_read
up_write
vfree
vfs_fsync_range
vmalloc
__vmalloc
vmalloc
vsnprintf
vzalloc
__wait_on_buffer


@@ -1,6 +1,6 @@
[abi_symbol_list]
# for type visibility
GKI_struct_selinux_state
GKI_struct_readahead_control
GKI_struct_blk_mq_alloc_data
GKI_struct_readahead_control
GKI_struct_selinux_state


@@ -546,6 +546,7 @@
kernel_kobj
kernel_neon_begin
kernel_neon_end
kernel_sock_shutdown
kern_mount
kern_unmount
key_create_or_update
@@ -808,13 +809,13 @@
proc_dointvec_minmax
proc_dostring
proc_mkdir
__pskb_copy_fclone
psi_system
__pskb_copy_fclone
pskb_expand_head
__put_task_struct
put_device
put_pages_list
put_pid
__put_task_struct
queue_delayed_work_on
queue_work_on
radix_tree_delete
@@ -1129,11 +1130,11 @@
__traceiter_android_vh_binder_restore_priority
__traceiter_android_vh_binder_set_priority
__traceiter_android_vh_binder_transaction_init
__traceiter_android_vh_check_uninterruptible_tasks
__traceiter_android_vh_check_uninterruptible_tasks_dn
__traceiter_android_vh_cpufreq_fast_switch
__traceiter_android_vh_cpufreq_resolve_freq
__traceiter_android_vh_cpufreq_target
__traceiter_android_vh_check_uninterruptible_tasks
__traceiter_android_vh_check_uninterruptible_tasks_dn
__traceiter_android_vh_disable_thermal_cooling_stats
__traceiter_android_vh_drm_atomic_check_modeset
__traceiter_android_vh_dump_throttled_rt_tasks
@@ -1225,11 +1226,11 @@
__tracepoint_android_vh_binder_restore_priority
__tracepoint_android_vh_binder_set_priority
__tracepoint_android_vh_binder_transaction_init
__tracepoint_android_vh_check_uninterruptible_tasks
__tracepoint_android_vh_check_uninterruptible_tasks_dn
__tracepoint_android_vh_cpufreq_fast_switch
__tracepoint_android_vh_cpufreq_resolve_freq
__tracepoint_android_vh_cpufreq_target
__tracepoint_android_vh_check_uninterruptible_tasks
__tracepoint_android_vh_check_uninterruptible_tasks_dn
__tracepoint_android_vh_disable_thermal_cooling_stats
__tracepoint_android_vh_drm_atomic_check_modeset
__tracepoint_android_vh_dump_throttled_rt_tasks
@@ -1250,6 +1251,8 @@
__tracepoint_android_vh_thermal_register
__tracepoint_android_vh_thermal_unregister
__tracepoint_android_vh_update_topology_flags_workfn
__tracepoint_android_vh_usb_new_device_added
__tracepoint_clock_set_rate
__tracepoint_cpu_frequency
__tracepoint_cpu_frequency_limits
__tracepoint_pelt_se_tp
@@ -1816,6 +1819,9 @@
__traceiter_android_vh_printk_caller_id
__traceiter_android_vh_printk_ext_header
__traceiter_android_vh_regmap_update
__traceiter_android_vh_usb_new_device_added
__traceiter_clock_set_rate
__traceiter_gpu_mem_total
trace_output_call
__tracepoint_android_rvh_report_bug
__tracepoint_android_rvh_tk_based_time_sync
@@ -1843,6 +1849,7 @@
dev_pm_qos_read_value
__find_vma
__traceiter_gpu_mem_total
__tracepoint_gpu_mem_total
# required by microarray_fp.ko
cdev_alloc
@@ -2314,6 +2321,10 @@
sdio_writel
sdio_writesb
# required by sensorhub.ko
devm_iio_kfifo_buffer_setup_ext
iio_kfifo_free
# required by seth.ko
napi_complete_done
napi_disable
@@ -2326,6 +2337,7 @@
unregister_netdev
# required by sfp_core.ko
br_fdb_find_port
csum_tcpudp_nofold
dev_get_by_index_rcu
ip_send_check
@@ -2340,6 +2352,7 @@
seq_open_private
seq_release
skb_copy_bits
skb_vlan_untag
unregister_netdevice_notifier
# required by sha1-ce.ko
@@ -3067,6 +3080,12 @@
typec_get_negotiated_svdm_version
# required by ufs-sprd.ko
scsi_autopm_get_device
scsi_autopm_put_device
scsi_block_when_processing_errors
scsi_cmd_allowed
scsi_device_quiesce
__scsi_iterate_devices
__traceiter_android_vh_ufs_check_int_errors
__traceiter_android_vh_ufs_compl_command
__traceiter_android_vh_ufs_fill_prdt
@@ -3085,12 +3104,6 @@
__tracepoint_android_vh_ufs_send_uic_command
__tracepoint_android_vh_ufs_update_sdev
__tracepoint_android_vh_ufs_update_sysfs
scsi_autopm_get_device
scsi_autopm_put_device
scsi_block_when_processing_errors
scsi_cmd_allowed
scsi_device_quiesce
__scsi_iterate_devices
ufshcd_add_command_trace
ufshcd_auto_hibern8_update
ufshcd_config_pwr_mode
@@ -3126,6 +3139,28 @@
# required by unisoc_dump_io.ko
blk_stat_enable_accounting
# required by unisoc_mm.ko
__traceiter_android_vh_tune_swappiness
__tracepoint_android_vh_tune_swappiness
# required by unisoc_mm_emem.ko
__traceiter_android_vh_oom_check_panic
__tracepoint_android_vh_oom_check_panic
# required by unisoc_mm_reclaim.ko
__traceiter_android_vh_do_page_trylock
__traceiter_android_vh_handle_failed_page_trylock
__traceiter_android_vh_page_trylock_clear
__traceiter_android_vh_page_trylock_get_result
__traceiter_android_vh_page_trylock_set
__traceiter_android_vh_shrink_slab_bypass
__tracepoint_android_vh_do_page_trylock
__tracepoint_android_vh_handle_failed_page_trylock
__tracepoint_android_vh_page_trylock_clear
__tracepoint_android_vh_page_trylock_get_result
__tracepoint_android_vh_page_trylock_set
__tracepoint_android_vh_shrink_slab_bypass
# required by unisoc_multi_control.ko
cpufreq_table_index_unsorted


@@ -2,10 +2,10 @@
# abi_gki_aarch64_virtual_device contains all the symbols that are used by the
# virtual device modules. Here goes all the symbols that were used
# in abi_gki_aarch64_virtual_device but currently retired (e.g Intel HDA).
_snd_ctl_add_follower
get_device_system_crosststamp
snd_card_disconnect
snd_component_add
_snd_ctl_add_follower
snd_ctl_add_vmaster_hook
snd_ctl_apply_vmaster_followers
snd_ctl_make_virtual_master
@@ -17,6 +17,6 @@
snd_pci_quirk_lookup_id
snd_pcm_hw_limit_rates
snd_pcm_set_sync
snd_pcm_std_chmaps
snd_pcm_suspend_all
snd_sgbuf_get_chunk_size
snd_pcm_std_chmaps


@@ -96,10 +96,10 @@
blocking_notifier_call_chain
blocking_notifier_chain_register
blocking_notifier_chain_unregister
bpf_trace_run1
bpf_trace_run10
bpf_trace_run11
bpf_trace_run12
bpf_trace_run1
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
@@ -1357,8 +1357,8 @@
mempool_free_slab
memremap
mem_section
memset
memset64
memset
__memset_io
memstart_addr
memunmap
@@ -1430,8 +1430,8 @@
netlink_unicast
nla_memcpy
__nla_parse
nla_put
nla_put_64bit
nla_put
nla_strscpy
__nlmsg_put
no_llseek
@@ -2305,33 +2305,48 @@
__traceiter_android_rvh_account_irq
__traceiter_android_rvh_after_dequeue_task
__traceiter_android_rvh_after_enqueue_task
__traceiter_android_rvh_audio_usb_offload_disconnect
__traceiter_android_rvh_build_perf_domains
__traceiter_android_rvh_can_migrate_task
__traceiter_android_rvh_check_preempt_wakeup
__traceiter_android_rvh_cpufreq_transition
__traceiter_android_rvh_cpu_cgroup_attach
__traceiter_android_rvh_cpu_cgroup_online
__traceiter_android_rvh_cpufreq_transition
__traceiter_android_rvh_do_sched_yield
__traceiter_android_rvh_find_busiest_queue
__traceiter_android_rvh_find_lowest_rq
__traceiter_android_rvh_flush_task
__traceiter_android_rvh_get_nohz_timer_target
__traceiter_android_rvh_iommu_setup_dma_ops
__traceiter_android_rvh_is_cpu_allowed
__traceiter_android_rvh_migrate_queued_task
__traceiter_android_rvh_new_task_stats
__traceiter_android_rvh_refrigerator
__traceiter_android_rvh_replace_next_task_fair
__traceiter_android_rvh_rto_next_cpu
__traceiter_android_rvh_sched_cpu_dying
__traceiter_android_rvh_sched_cpu_starting
__traceiter_android_rvh_sched_exec
__traceiter_android_rvh_sched_fork_init
__traceiter_android_rvh_sched_newidle_balance
__traceiter_android_rvh_sched_nohz_balancer_kick
__traceiter_android_rvh_sched_setaffinity
__traceiter_android_rvh_schedule
__traceiter_android_rvh_select_task_rq_fair
__traceiter_android_rvh_select_task_rq_rt
__traceiter_android_rvh_set_balance_anon_file_reclaim
__traceiter_android_rvh_set_cpus_allowed_ptr_locked
__traceiter_android_rvh_set_gfp_zone_flags
__traceiter_android_rvh_set_readahead_gfp_mask
__traceiter_android_rvh_set_skip_swapcache_flags
__traceiter_android_rvh_set_task_cpu
__traceiter_android_rvh_show_max_freq
__traceiter_android_rvh_tick_entry
__traceiter_android_rvh_try_to_wake_up
__traceiter_android_rvh_try_to_wake_up_success
__traceiter_android_rvh_ttwu_cond
__traceiter_android_rvh_update_cpu_capacity
__traceiter_android_rvh_update_cpus_allowed
__traceiter_android_rvh_update_misfit_status
__traceiter_android_rvh_wake_up_new_task
__traceiter_android_vh_account_task_time
@@ -2339,13 +2354,23 @@
__traceiter_android_vh_binder_restore_priority
__traceiter_android_vh_binder_set_priority
__traceiter_android_vh_binder_trans
__traceiter_android_vh_binder_wakeup_ilocked
__traceiter_android_vh_blk_alloc_rqs
__traceiter_android_vh_blk_rq_ctx_init
__traceiter_android_vh_cpu_idle_enter
__traceiter_android_vh_cpu_idle_exit
__traceiter_android_vh_cpuidle_psci_enter
__traceiter_android_vh_cpuidle_psci_exit
__traceiter_android_vh_dup_task_struct
__traceiter_android_vh_ftrace_dump_buffer
__traceiter_android_vh_ftrace_format_check
__traceiter_android_vh_ftrace_oops_enter
__traceiter_android_vh_ftrace_oops_exit
__traceiter_android_vh_ftrace_size_check
__traceiter_android_vh_gic_resume
__traceiter_android_vh_ipi_stop
__traceiter_android_vh_irqtime_account_process_tick
__traceiter_android_vh_jiffies_update
__traceiter_android_vh_logbuf
__traceiter_android_vh_logbuf_pr_cont
__traceiter_android_vh_mmap_region
@@ -2353,12 +2378,15 @@
__traceiter_android_vh_mmc_blk_mq_rw_recovery
__traceiter_android_vh_mmc_blk_reset
__traceiter_android_vh_mmc_gpio_cd_irqt
__traceiter_android_vh_printk_hotplug
__traceiter_android_vh_rproc_recovery
__traceiter_android_vh_scheduler_tick
__traceiter_android_vh_sdhci_get_cd
__traceiter_android_vh_sd_update_bus_speed_mode
__traceiter_android_vh_show_resume_epoch_val
__traceiter_android_vh_show_suspend_epoch_val
__traceiter_android_vh_shrink_slab_bypass
__traceiter_android_vh_timer_calc_index
__traceiter_android_vh_try_to_unmap_one
__traceiter_android_vh_tune_scan_type
__traceiter_android_vh_ufs_check_int_errors
@@ -2371,6 +2399,7 @@
__traceiter_android_vh_ufs_update_sdev
__traceiter_android_vh_update_topology_flags_workfn
__traceiter_android_vh_vmpressure
__traceiter_binder_transaction_received
__traceiter_block_rq_insert
__traceiter_cpu_frequency_limits
__traceiter_dwc3_complete_trb
@@ -2381,9 +2410,12 @@
__traceiter_gpu_mem_total
__traceiter_ipi_entry
__traceiter_ipi_raise
__traceiter_irq_handler_entry
__traceiter_mmap_lock_acquire_returned
__traceiter_mmap_lock_released
__traceiter_mmap_lock_start_locking
__traceiter_rwmmio_read
__traceiter_rwmmio_write
__traceiter_sched_overutilized_tp
__traceiter_sched_switch
__traceiter_suspend_resume
@@ -2396,9 +2428,9 @@
__tracepoint_android_rvh_build_perf_domains
__tracepoint_android_rvh_can_migrate_task
__tracepoint_android_rvh_check_preempt_wakeup
__tracepoint_android_rvh_cpufreq_transition
__tracepoint_android_rvh_cpu_cgroup_attach
__tracepoint_android_rvh_cpu_cgroup_online
__tracepoint_android_rvh_cpufreq_transition
__tracepoint_android_rvh_do_sched_yield
__tracepoint_android_rvh_find_busiest_queue
__tracepoint_android_rvh_find_lowest_rq


@@ -2,9 +2,9 @@
# commonly used symbols
# required by touch module
proc_mkdir_data
proc_create_seq_private
power_supply_is_system_supplied
proc_create_seq_private
proc_mkdir_data
# required by aw8697-haptic.ko
devm_gpio_free
@@ -12,40 +12,88 @@
i2c_smbus_write_byte_data
#required by memory module
blk_execute_rq
blk_rq_map_kern
nr_free_buffer_pages
mmc_set_blocklen
scsi_device_lookup
scsi_host_lookup
scsi_host_put
ufshcd_read_desc_param
utf16s_to_utf8s
async_schedule_node
blk_execute_rq
blk_ksm_get_slot_idx
blk_ksm_register
blk_ksm_reprogram_all_keys
blk_mq_alloc_tag_set
blk_mq_free_tag_set
blk_mq_init_queue
blk_mq_tagset_busy_iter
blk_pm_runtime_init
blk_queue_update_dma_alignment
blk_queue_update_dma_pad
blk_rq_map_kern
bsg_job_done
bsg_remove_queue
bsg_setup_queue
dev_pm_opp_remove
kobject_get
mempool_alloc_pages
mempool_free_pages
mempool_resize
mmc_set_blocklen
nr_free_buffer_pages
__scsi_add_device
scsi_add_host_with_dma
scsi_block_requests
scsi_change_queue_depth
scsi_device_lookup
scsi_dma_map
scsi_dma_unmap
__scsi_execute
scsi_host_alloc
scsi_host_lookup
scsi_host_put
scsi_is_host_device
scsi_normalize_sense
scsi_print_command
scsi_remove_device
scsi_remove_host
scsi_report_bus_reset
scsi_scan_host
scsi_unblock_requests
scsi_change_queue_depth
scsi_print_command
scsi_dma_map
scsi_host_alloc
scsi_normalize_sense
sg_copy_from_buffer
sg_copy_to_buffer
__traceiter_android_vh_direct_io_update_bio
__traceiter_android_vh_dm_update_clone_bio
__traceiter_android_vh_loop_prepare_cmd
__traceiter_android_vh_ufs_mcq_abort
__traceiter_android_vh_ufs_mcq_clear_cmd
__traceiter_android_vh_ufs_mcq_clear_pending
__traceiter_android_vh_ufs_mcq_config
__traceiter_android_vh_ufs_mcq_get_outstanding_reqs
__traceiter_android_vh_ufs_mcq_handler
__traceiter_android_vh_ufs_mcq_has_oustanding_reqs
__traceiter_android_vh_ufs_mcq_hba_capabilities
__traceiter_android_vh_ufs_mcq_make_hba_operational
__traceiter_android_vh_ufs_mcq_map_tag
__traceiter_android_vh_ufs_mcq_max_tag
__traceiter_android_vh_ufs_mcq_print_trs
__traceiter_android_vh_ufs_mcq_send_command
__traceiter_android_vh_ufs_mcq_set_sqid
__traceiter_android_vh_ufs_update_sdev
__traceiter_android_vh_ufs_use_mcq_hooks
__tracepoint_android_vh_direct_io_update_bio
__tracepoint_android_vh_dm_update_clone_bio
__tracepoint_android_vh_loop_prepare_cmd
__tracepoint_android_vh_ufs_mcq_abort
__tracepoint_android_vh_ufs_mcq_clear_cmd
__tracepoint_android_vh_ufs_mcq_clear_pending
__tracepoint_android_vh_ufs_mcq_config
__tracepoint_android_vh_ufs_mcq_get_outstanding_reqs
__tracepoint_android_vh_ufs_mcq_handler
__tracepoint_android_vh_ufs_mcq_has_oustanding_reqs
__tracepoint_android_vh_ufs_mcq_hba_capabilities
__tracepoint_android_vh_ufs_mcq_make_hba_operational
__tracepoint_android_vh_ufs_mcq_map_tag
__tracepoint_android_vh_ufs_mcq_max_tag
__tracepoint_android_vh_ufs_mcq_print_trs
__tracepoint_android_vh_ufs_mcq_send_command
__tracepoint_android_vh_ufs_mcq_set_sqid
__tracepoint_android_vh_ufs_update_sdev
__tracepoint_android_vh_ufs_use_mcq_hooks
ufshcd_alloc_host
ufshcd_config_pwr_mode
ufshcd_dealloc_host
@@ -54,99 +102,56 @@
ufshcd_map_desc_id_to_length
ufshcd_query_attr_retry
ufshcd_query_flag_retry
ufshcd_read_desc_param
ufshcd_update_evt_hist
utf16s_to_utf8s
wait_for_completion_io_timeout
__scsi_add_device
__scsi_execute
blk_mq_free_tag_set
blk_queue_update_dma_alignment
blk_queue_update_dma_pad
blk_ksm_get_slot_idx
mempool_resize
mempool_alloc_pages
mempool_free_pages
blk_pm_runtime_init
scsi_remove_device
kobject_get
__traceiter_android_vh_ufs_update_sdev
__tracepoint_android_vh_ufs_mcq_handler
__tracepoint_android_vh_ufs_mcq_print_trs
__tracepoint_android_vh_ufs_mcq_config
__tracepoint_android_vh_ufs_mcq_max_tag
__tracepoint_android_vh_ufs_mcq_hba_capabilities
__tracepoint_android_vh_ufs_mcq_clear_pending
__tracepoint_android_vh_ufs_mcq_abort
__tracepoint_android_vh_ufs_mcq_map_tag
__tracepoint_android_vh_ufs_mcq_make_hba_operational
__tracepoint_android_vh_ufs_use_mcq_hooks
__tracepoint_android_vh_ufs_mcq_get_outstanding_reqs
__tracepoint_android_vh_ufs_mcq_clear_cmd
__tracepoint_android_vh_ufs_mcq_send_command
__tracepoint_android_vh_ufs_mcq_set_sqid
__tracepoint_android_vh_ufs_mcq_has_oustanding_reqs
__tracepoint_android_vh_dm_update_clone_bio
__tracepoint_android_vh_direct_io_update_bio
__tracepoint_android_vh_loop_prepare_cmd
__traceiter_android_vh_ufs_mcq_handler
__traceiter_android_vh_ufs_mcq_print_trs
__traceiter_android_vh_ufs_mcq_config
__traceiter_android_vh_ufs_mcq_max_tag
__traceiter_android_vh_ufs_mcq_hba_capabilities
__traceiter_android_vh_ufs_mcq_clear_pending
__traceiter_android_vh_ufs_mcq_abort
__traceiter_android_vh_ufs_mcq_map_tag
__traceiter_android_vh_ufs_mcq_make_hba_operational
__traceiter_android_vh_ufs_use_mcq_hooks
__traceiter_android_vh_ufs_mcq_get_outstanding_reqs
__traceiter_android_vh_ufs_mcq_clear_cmd
__traceiter_android_vh_ufs_mcq_send_command
__traceiter_android_vh_ufs_mcq_set_sqid
__traceiter_android_vh_ufs_mcq_has_oustanding_reqs
__traceiter_android_vh_dm_update_clone_bio
__traceiter_android_vh_direct_io_update_bio
__traceiter_android_vh_loop_prepare_cmd
#required by bfq module
__blkg_prfill_rwstat
blkg_rwstat_recursive_sum
blkg_prfill_rwstat
bdi_dev_name
blkcg_print_blkgs
blkg_conf_finish
blkg_conf_prep
__blkg_prfill_rwstat
blkg_prfill_rwstat
__blkg_prfill_u64
blkcg_print_blkgs
blkg_rwstat_exit
blkg_rwstat_init
percpu_counter_add_batch
blkg_rwstat_recursive_sum
io_cgrp_subsys_on_dfl_key
ioc_lookup_icq
bdi_dev_name
percpu_counter_add_batch
#required by cs35l41 module
regmap_raw_write_async
snd_soc_bytes_tlv_callback
regcache_drop_region
regmap_async_complete
regmap_multi_reg_write
regmap_multi_reg_write_bypassed
regmap_raw_read
regmap_raw_write
regmap_raw_write_async
regulator_bulk_enable
snd_compr_stop_error
snd_soc_component_disable_pin
snd_soc_component_force_enable_pin
snd_ctl_boolean_mono_info
snd_pcm_format_physical_width
snd_pcm_hw_constraint_list
regmap_multi_reg_write_bypassed
snd_ctl_boolean_mono_info
snd_soc_put_volsw_range
snd_soc_bytes_tlv_callback
snd_soc_component_disable_pin
snd_soc_component_force_enable_pin
snd_soc_get_volsw_range
snd_soc_info_volsw_range
regmap_raw_write
regcache_drop_region
regmap_raw_read
regmap_multi_reg_write
regulator_bulk_enable
snd_soc_put_volsw_range
#required by mtd module
__blk_mq_end_request
balance_dirty_pages_ratelimited
bdi_alloc
bdi_put
bdi_register
blkdev_get_by_dev
blkdev_get_by_path
blkdev_put
blk_mq_alloc_sq_tag_set
__blk_mq_end_request
blk_mq_freeze_queue
blk_mq_quiesce_queue
blk_mq_start_request
@@ -154,9 +159,6 @@
blk_mq_unquiesce_queue
blk_queue_write_cache
blk_update_request
blkdev_get_by_dev
blkdev_get_by_path
blkdev_put
deactivate_locked_super
fixed_size_llseek
generic_shutdown_super
@@ -180,40 +182,39 @@
simple_strtoul
sync_blockdev
wait_for_device_probe
blk_mq_alloc_sq_tag_set
#required by millet.ko
__traceiter_android_vh_binder_wait_for_work
__tracepoint_android_vh_binder_wait_for_work
__traceiter_android_vh_do_send_sig_info
__traceiter_android_vh_binder_preset
__traceiter_android_vh_binder_trans
__traceiter_android_vh_binder_reply
__traceiter_android_vh_binder_alloc_new_buf_locked
__tracepoint_android_vh_do_send_sig_info
__tracepoint_android_vh_binder_preset
__tracepoint_android_vh_binder_trans
__tracepoint_android_vh_binder_reply
__tracepoint_android_vh_binder_alloc_new_buf_locked
freezer_cgrp_subsys
__traceiter_android_vh_binder_alloc_new_buf_locked
__traceiter_android_vh_binder_preset
__traceiter_android_vh_binder_reply
__traceiter_android_vh_binder_trans
__traceiter_android_vh_binder_wait_for_work
__traceiter_android_vh_do_send_sig_info
__tracepoint_android_vh_binder_alloc_new_buf_locked
__tracepoint_android_vh_binder_preset
__tracepoint_android_vh_binder_reply
__tracepoint_android_vh_binder_trans
__tracepoint_android_vh_binder_wait_for_work
__tracepoint_android_vh_do_send_sig_info
#required by mi_sched.ko
__traceiter_android_vh_free_task
__tracepoint_android_vh_free_task
__traceiter_android_vh_scheduler_tick
__tracepoint_android_vh_scheduler_tick
jiffies_64
free_uid
find_user
free_uid
jiffies_64
__traceiter_android_vh_free_task
__traceiter_android_vh_scheduler_tick
__tracepoint_android_vh_free_task
__tracepoint_android_vh_scheduler_tick
#required by migt.ko
__traceiter_android_rvh_after_enqueue_task
__traceiter_android_rvh_after_dequeue_task
__traceiter_android_rvh_after_enqueue_task
__traceiter_android_vh_map_util_freq
__tracepoint_android_rvh_after_enqueue_task
__tracepoint_android_rvh_after_dequeue_task
__tracepoint_android_vh_map_util_freq
__traceiter_android_vh_map_util_freq_new
__tracepoint_android_rvh_after_dequeue_task
__tracepoint_android_rvh_after_enqueue_task
__tracepoint_android_vh_map_util_freq
__tracepoint_android_vh_map_util_freq_new
#required by turbo.ko
@@ -231,17 +232,17 @@
console_verbose
#required by binderinfo.ko module
__traceiter_android_vh_binder_transaction_init
__tracepoint_android_vh_binder_transaction_init
__traceiter_android_vh_binder_print_transaction_info
__tracepoint_android_vh_binder_print_transaction_info
__traceiter_android_vh_binder_transaction_init
__traceiter_binder_txn_latency_free
__tracepoint_android_vh_binder_print_transaction_info
__tracepoint_android_vh_binder_transaction_init
__tracepoint_binder_txn_latency_free
#required by reclaim module
__traceiter_android_vh_tune_scan_type
__tracepoint_android_vh_tune_scan_type
__traceiter_android_vh_tune_swappiness
__tracepoint_android_vh_tune_scan_type
__tracepoint_android_vh_tune_swappiness
#required by msm_drm.ko module
@@ -254,41 +255,41 @@
#required by xm_power_debug.ko module
wakeup_sources_read_lock
wakeup_sources_read_unlock
wakeup_sources_walk_start
wakeup_sources_walk_next
wakeup_sources_walk_start
#required by swinfo.ko module
proc_set_size
#required by msm_rtb.ko module
__tracepoint_rwmmio_read
__traceiter_irq_handler_entry
__traceiter_rwmmio_read
__tracepoint_rwmmio_write
__traceiter_rwmmio_write
__tracepoint_irq_handler_entry
__traceiter_irq_handler_entry
__tracepoint_rwmmio_read
__tracepoint_rwmmio_write
#required by ax88796b.ko module
phy_resolve_aneg_linkmode
#required by metis.ko module
cpuset_cpus_allowed
__traceiter_android_rvh_cpuset_fork
__traceiter_android_rvh_dequeue_task
__traceiter_android_rvh_set_cpus_allowed_comm
__traceiter_android_vh_alter_mutex_list_add
__traceiter_android_vh_mutex_wait_start
__traceiter_android_vh_rwsem_read_wait_start
__traceiter_android_vh_rwsem_write_wait_start
__traceiter_android_vh_mutex_wait_start
__traceiter_android_vh_alter_mutex_list_add
__traceiter_android_rvh_cpuset_fork
__traceiter_android_vh_sched_setaffinity_early
__traceiter_android_rvh_set_cpus_allowed_comm
__traceiter_android_rvh_dequeue_task
__tracepoint_android_rvh_cpuset_fork
__tracepoint_android_rvh_dequeue_task
__tracepoint_android_rvh_set_cpus_allowed_comm
__tracepoint_android_vh_alter_mutex_list_add
__tracepoint_android_vh_mutex_wait_start
__tracepoint_android_vh_rwsem_read_wait_start
__tracepoint_android_vh_rwsem_write_wait_start
__tracepoint_android_vh_mutex_wait_start
__tracepoint_android_vh_alter_mutex_list_add
__tracepoint_android_rvh_cpuset_fork
__tracepoint_android_vh_sched_setaffinity_early
__tracepoint_android_rvh_set_cpus_allowed_comm
__tracepoint_android_rvh_dequeue_task
cpuset_cpus_allowed
#required by perf_helper.ko
try_to_free_mem_cgroup_pages
@@ -297,80 +298,52 @@
of_find_all_nodes
#required by mi_freqwdg.ko
__traceiter_android_rvh_dequeue_task_fair
__traceiter_android_rvh_entity_tick
__traceiter_android_vh_freq_qos_add_request
__traceiter_android_vh_freq_qos_remove_request
__traceiter_android_vh_freq_qos_update_request
__traceiter_android_vh_freq_qos_add_request
__traceiter_android_rvh_entity_tick
__traceiter_android_rvh_dequeue_task_fair
__tracepoint_android_vh_freq_qos_remove_request
__tracepoint_android_vh_freq_qos_update_request
__tracepoint_android_vh_freq_qos_add_request
__tracepoint_android_rvh_dequeue_task_fair
__tracepoint_android_rvh_entity_tick
__tracepoint_android_vh_freq_qos_add_request
__tracepoint_android_vh_freq_qos_remove_request
__tracepoint_android_vh_freq_qos_update_request
#required by binder_prio module
__traceiter_android_vh_binder_priority_skip
__tracepoint_android_vh_binder_priority_skip
#required by mi_mempool.ko module
__traceiter_android_vh_mmput
__tracepoint_android_vh_mmput
__traceiter_android_vh_alloc_pages_reclaim_bypass
__tracepoint_android_vh_alloc_pages_reclaim_bypass
__traceiter_android_vh_alloc_pages_failure_bypass
__traceiter_android_vh_alloc_pages_reclaim_bypass
__traceiter_android_vh_mmput
__tracepoint_android_vh_alloc_pages_failure_bypass
__tracepoint_android_vh_alloc_pages_reclaim_bypass
__tracepoint_android_vh_mmput
#required by mifs.ko module
__cleancache_get_page
__dquot_alloc_space
__dquot_free_space
__dquot_transfer
__filemap_set_wb_err
__fscrypt_encrypt_symlink
__fscrypt_inode_uses_inline_crypto
__fscrypt_prepare_link
__fscrypt_prepare_lookup
__fscrypt_prepare_readdir
__fscrypt_prepare_rename
__fscrypt_prepare_setattr
__iomap_dio_rw
__page_file_mapping
__pagevec_release
__percpu_counter_init
__percpu_counter_sum
__set_page_dirty_nobuffers
__sync_dirty_buffer
__test_set_page_writeback
__traceiter_android_fs_dataread_end
__traceiter_android_fs_dataread_start
__traceiter_android_fs_datawrite_end
__traceiter_android_fs_datawrite_start
__tracepoint_android_fs_dataread_end
__tracepoint_android_fs_dataread_start
__tracepoint_android_fs_datawrite_end
__tracepoint_android_fs_datawrite_start
__xa_clear_mark
add_swap_extent
bdev_read_only
bio_associate_blkg_from_css
bioset_exit
bioset_init
blk_op_str
blkdev_issue_discard
blkdev_issue_zeroout
blk_op_str
capable_wrt_inode_uidgid
__cleancache_get_page
clear_page_dirty_for_io
current_umask
dentry_path_raw
d_instantiate_new
d_invalidate
d_tmpfile
dentry_path_raw
dotdot_name
dqget
dqput
dquot_acquire
dquot_alloc
dquot_alloc_inode
__dquot_alloc_space
dquot_claim_space_nodirty
dquot_commit
dquot_commit_info
@@ -379,6 +352,7 @@
dquot_drop
dquot_file_open
dquot_free_inode
__dquot_free_space
dquot_get_dqblk
dquot_get_next_dqblk
dquot_get_next_id
@@ -394,31 +368,34 @@
dquot_resume
dquot_set_dqblk
dquot_set_dqinfo
__dquot_transfer
dquot_transfer
dquot_writeback_dquots
d_tmpfile
end_page_writeback
errseq_set
evict_inodes
fault_in_iov_iter_readable
fiemap_fill_next_extent
fiemap_prep
file_modified
file_update_time
fileattr_fill_flags
filemap_check_errors
filemap_fault
filemap_fdatawrite
filemap_map_pages
filemap_read
__filemap_set_wb_err
filemap_write_and_wait_range
file_modified
file_update_time
find_inode_nowait
freeze_bdev
freeze_super
fs_kobj
fscrypt_decrypt_bio
fscrypt_dio_supported
fscrypt_drop_inode
fscrypt_encrypt_pagecache_blocks
__fscrypt_encrypt_symlink
fscrypt_file_open
fscrypt_fname_alloc_buffer
fscrypt_fname_disk_to_usr
@@ -428,6 +405,7 @@
fscrypt_free_inode
fscrypt_get_symlink
fscrypt_has_permitted_context
__fscrypt_inode_uses_inline_crypto
fscrypt_ioctl_add_key
fscrypt_ioctl_get_key_status
fscrypt_ioctl_get_nonce
@@ -439,7 +417,12 @@
fscrypt_limit_io_blocks
fscrypt_match_name
fscrypt_mergeable_bio
__fscrypt_prepare_link
__fscrypt_prepare_lookup
fscrypt_prepare_new_inode
__fscrypt_prepare_readdir
__fscrypt_prepare_rename
__fscrypt_prepare_setattr
fscrypt_prepare_symlink
fscrypt_put_encryption_info
fscrypt_set_bio_crypt_ctx
@@ -449,6 +432,7 @@
fscrypt_show_test_dummy_encryption
fscrypt_symlink_getattr
fscrypt_zeroout_range
fs_kobj
fsverity_cleanup_inode
fsverity_enqueue_verify_work
fsverity_file_open
@@ -471,10 +455,14 @@
inode_set_flags
insert_inode_locked
iomap_dio_complete
__iomap_dio_rw
iov_iter_alignment
iter_file_splice_write
kernfs_get
kset_register
LZ4_compress_default
LZ4_compress_HC
LZ4_decompress_safe
migrate_page_move_mapping
migrate_page_states
mnt_drop_write_file
@@ -483,15 +471,19 @@
noop_direct_IO
page_cache_ra_unbounded
page_cache_sync_ra
page_symlink
pagecache_write_begin
pagecache_write_end
__page_file_mapping
page_symlink
pagevec_lookup_range
pagevec_lookup_range_tag
__pagevec_release
percpu_counter_add_batch
percpu_counter_batch
percpu_counter_destroy
__percpu_counter_init
percpu_counter_set
__percpu_counter_sum
posix_acl_alloc
posix_acl_chmod
posix_acl_equiv_mode
@@ -499,14 +491,25 @@
security_inode_init_security
seq_escape
set_cached_acl
__set_page_dirty_nobuffers
set_task_ioprio
shrink_dcache_sb
__sync_dirty_buffer
sync_inode_metadata
sync_inodes_sb
tag_pages_for_writeback
__test_set_page_writeback
thaw_bdev
thaw_super
touch_atime
__traceiter_android_fs_dataread_end
__traceiter_android_fs_dataread_start
__traceiter_android_fs_datawrite_end
__traceiter_android_fs_datawrite_start
__tracepoint_android_fs_dataread_end
__tracepoint_android_fs_dataread_start
__tracepoint_android_fs_datawrite_end
__tracepoint_android_fs_datawrite_start
truncate_inode_pages_range
truncate_pagecache_range
utf8_casefold
@@ -518,14 +521,12 @@
wait_for_stable_page
wait_on_page_writeback
wbc_account_cgroup_owner
__xa_clear_mark
xa_get_mark
LZ4_compress_HC
LZ4_compress_default
LZ4_decompress_safe
ZSTD_CStreamWorkspaceBound
ZSTD_DStreamWorkspaceBound
ZSTD_compressStream
ZSTD_CStreamWorkspaceBound
ZSTD_decompressStream
ZSTD_DStreamWorkspaceBound
ZSTD_endStream
ZSTD_getParams
ZSTD_initCStream
@@ -533,10 +534,10 @@
ZSTD_maxCLevel
#required by cache module
__mod_lruvec_state
__mod_zone_page_state
d_delete
mem_cgroup_update_lru_size
__mod_lruvec_state
__mod_zone_page_state
__traceiter_android_rvh_ctl_dirty_rate
__tracepoint_android_rvh_ctl_dirty_rate


@@ -72,6 +72,7 @@ config ARM
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
select HAVE_ARCH_MMAP_RND_BITS if MMU
select HAVE_ARCH_PFN_VALID
select HAVE_ARCH_SECCOMP
@@ -1507,6 +1508,7 @@ config ARCH_WANT_GENERAL_HUGETLB
config ARM_MODULE_PLTS
bool "Use PLTs to allow module memory to spill over into vmalloc area"
depends on MODULES
select KASAN_VMALLOC if KASAN
default y
help
Allocate PLTs when loading modules so that jumps and calls whose


@@ -236,7 +236,11 @@ void __init kasan_init(void)
clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
kasan_mem_to_shadow((void *)-1UL) + 1);
if (!IS_ENABLED(CONFIG_KASAN_VMALLOC))
kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
kasan_mem_to_shadow((void *)VMALLOC_END));
kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_END),
kasan_mem_to_shadow((void *)-1UL) + 1);
for_each_mem_range(i, &pa_start, &pa_end) {


@@ -89,7 +89,7 @@ CONFIG_KVM=y
CONFIG_KVM_S2MPU=y
CONFIG_CRYPTO_SHA2_ARM64_CE=y
CONFIG_CRYPTO_SHA512_ARM64_CE=y
CONFIG_CRYPTO_POLYVAL_ARM64_CE=y
CONFIG_CRYPTO_GHASH_ARM64_CE=y
CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
@@ -103,6 +103,7 @@ CONFIG_MODULE_SCMVERSION=y
CONFIG_MODULE_SIG=y
CONFIG_MODULE_SIG_PROTECT=y
CONFIG_BLK_CGROUP_IOCOST=y
CONFIG_BLK_CGROUP_IOPRIO=y
CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
CONFIG_IOSCHED_BFQ=y
@@ -142,6 +143,7 @@ CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_NET_IPIP=y
CONFIG_NET_IPGRE_DEMUX=y
CONFIG_NET_IPGRE=y
CONFIG_SYN_COOKIES=y
CONFIG_NET_IPVTI=y
CONFIG_INET_ESP=y
CONFIG_INET_UDP_DIAG=y
@@ -380,6 +382,7 @@ CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_CONSOLE=y
# CONFIG_SERIAL_8250_EXAR is not set
CONFIG_SERIAL_8250_NR_UARTS=32
CONFIG_SERIAL_8250_RUNTIME_UARTS=0
CONFIG_SERIAL_8250_DW=y
CONFIG_SERIAL_OF_PLATFORM=y
@@ -657,7 +660,6 @@ CONFIG_SECURITY_SELINUX=y
CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
CONFIG_CRYPTO_CHACHA20POLY1305=y
CONFIG_CRYPTO_ADIANTUM=y
CONFIG_CRYPTO_HCTR2=y
CONFIG_CRYPTO_XCBC=y
CONFIG_CRYPTO_BLAKE2B=y
CONFIG_CRYPTO_MD5=y


@@ -781,7 +781,7 @@ static pkvm_id completer_owner_id(const struct pkvm_mem_transition *tx)
struct check_walk_data {
enum pkvm_page_state desired;
-enum pkvm_page_state (*get_page_state)(kvm_pte_t pte);
+enum pkvm_page_state (*get_page_state)(kvm_pte_t pte, u64 addr);
};
static int __check_page_state_visitor(u64 addr, u64 end, u32 level,
@@ -792,10 +792,7 @@ static int __check_page_state_visitor(u64 addr, u64 end, u32 level,
struct check_walk_data *d = arg;
kvm_pte_t pte = *ptep;
-if (kvm_pte_valid(pte) && !addr_is_allowed_memory(kvm_pte_to_phys(pte)))
-	return -EINVAL;
-return d->get_page_state(pte) == d->desired ? 0 : -EPERM;
+return d->get_page_state(pte, addr) == d->desired ? 0 : -EPERM;
}
static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -810,8 +807,11 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
return kvm_pgtable_walk(pgt, addr, size, &walker);
}
-static enum pkvm_page_state host_get_page_state(kvm_pte_t pte)
+static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr)
{
+if (!addr_is_allowed_memory(addr))
+	return PKVM_NOPAGE;
if (!kvm_pte_valid(pte) && pte)
return PKVM_NOPAGE;
@@ -954,7 +954,7 @@ static int host_complete_donation(u64 addr, const struct pkvm_mem_transition *tx
return host_stage2_set_owner_locked(addr, size, host_id);
}
-static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte)
+static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte, u64 addr)
{
if (!kvm_pte_valid(pte))
return PKVM_NOPAGE;
@@ -1066,7 +1066,7 @@ static int hyp_complete_donation(u64 addr,
return pkvm_create_mappings_locked(start, end, prot);
}
-static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte)
+static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte, u64 addr)
{
if (!kvm_pte_valid(pte))
return PKVM_NOPAGE;
@@ -1180,7 +1180,7 @@ static int __guest_request_page_transition(u64 *completer_addr,
if (ret)
return ret;
-state = guest_get_page_state(pte);
+state = guest_get_page_state(pte, tx->initiator.addr);
if (state == PKVM_NOPAGE)
return -EFAULT;
@@ -1946,7 +1946,7 @@ int __pkvm_host_reclaim_page(u64 pfn)
if (ret)
goto unlock;
-if (host_get_page_state(pte) == PKVM_PAGE_OWNED)
+if (host_get_page_state(pte, addr) == PKVM_PAGE_OWNED)
goto unlock;
page = hyp_phys_to_page(addr);


@@ -21,9 +21,11 @@ void copy_highpage(struct page *to, struct page *from)
copy_page(kto, kfrom);
+if (kasan_hw_tags_enabled())
+	page_kasan_tag_reset(to);
if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
set_bit(PG_mte_tagged, &to->flags);
-page_kasan_tag_reset(to);
/*
* We need smp_wmb() in between setting the flags and clearing the
* tags because if another thread reads page->flags and builds a


@@ -93,6 +93,7 @@ CONFIG_MODULE_SCMVERSION=y
CONFIG_MODULE_SIG=y
CONFIG_MODULE_SIG_PROTECT=y
CONFIG_BLK_CGROUP_IOCOST=y
CONFIG_BLK_CGROUP_IOPRIO=y
CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
CONFIG_IOSCHED_BFQ=y
@@ -131,6 +132,7 @@ CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_NET_IPIP=y
CONFIG_NET_IPGRE_DEMUX=y
CONFIG_NET_IPGRE=y
CONFIG_SYN_COOKIES=y
CONFIG_NET_IPVTI=y
CONFIG_INET_ESP=y
CONFIG_INET_UDP_DIAG=y
@@ -358,6 +360,7 @@ CONFIG_INPUT_UINPUT=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_NR_UARTS=32
CONFIG_SERIAL_8250_RUNTIME_UARTS=0
CONFIG_SERIAL_OF_PLATFORM=y
CONFIG_SERIAL_SAMSUNG=y
@@ -601,10 +604,8 @@ CONFIG_SECURITY_SELINUX=y
CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
CONFIG_CRYPTO_CHACHA20POLY1305=y
CONFIG_CRYPTO_ADIANTUM=y
CONFIG_CRYPTO_HCTR2=y
CONFIG_CRYPTO_XCBC=y
CONFIG_CRYPTO_BLAKE2B=y
CONFIG_CRYPTO_POLYVAL_CLMUL_NI=y
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_SHA256_SSSE3=y
CONFIG_CRYPTO_SHA512_SSSE3=y


@@ -61,6 +61,11 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(block_bio_complete);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_split);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_unplug);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_rq_insert);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_bio_queue);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_getrq);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_rq_issue);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_rq_merge);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_rq_requeue);
EXPORT_TRACEPOINT_SYMBOL_GPL(block_rq_complete);
DEFINE_IDA(blk_queue_ida);
@@ -1424,6 +1429,13 @@ bool blk_update_request(struct request *req, blk_status_t error,
req->q->integrity.profile->complete_fn(req, nr_bytes);
#endif
/*
* Upper layers may call blk_crypto_evict_key() anytime after the last
* bio_endio(). Therefore, the keyslot must be released before that.
*/
if (blk_crypto_rq_has_keyslot(req) && nr_bytes >= blk_rq_bytes(req))
__blk_crypto_rq_put_keyslot(req);
if (unlikely(error && !blk_rq_is_passthrough(req) &&
!(req->rq_flags & RQF_QUIET)))
print_req_error(req, error, __func__);


@@ -60,6 +60,11 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
return rq->crypt_ctx;
}
static inline bool blk_crypto_rq_has_keyslot(struct request *rq)
{
return rq->crypt_keyslot;
}
#else /* CONFIG_BLK_INLINE_ENCRYPTION */
static inline bool bio_crypt_rq_ctx_compatible(struct request *rq,
@@ -93,6 +98,11 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
return false;
}
static inline bool blk_crypto_rq_has_keyslot(struct request *rq)
{
return false;
}
#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
void __bio_crypt_advance(struct bio *bio, unsigned int bytes);
@@ -127,14 +137,21 @@ static inline bool blk_crypto_bio_prep(struct bio **bio_ptr)
return true;
}
-blk_status_t __blk_crypto_init_request(struct request *rq);
-static inline blk_status_t blk_crypto_init_request(struct request *rq)
+blk_status_t __blk_crypto_rq_get_keyslot(struct request *rq);
+static inline blk_status_t blk_crypto_rq_get_keyslot(struct request *rq)
{
if (blk_crypto_rq_is_encrypted(rq))
-return __blk_crypto_init_request(rq);
+return __blk_crypto_rq_get_keyslot(rq);
return BLK_STS_OK;
}
void __blk_crypto_rq_put_keyslot(struct request *rq);
static inline void blk_crypto_rq_put_keyslot(struct request *rq)
{
if (blk_crypto_rq_has_keyslot(rq))
__blk_crypto_rq_put_keyslot(rq);
}
void __blk_crypto_free_request(struct request *rq);
static inline void blk_crypto_free_request(struct request *rq)
{
@@ -173,7 +190,7 @@ static inline blk_status_t blk_crypto_insert_cloned_request(struct request *rq)
{
if (blk_crypto_rq_is_encrypted(rq))
-return blk_crypto_init_request(rq);
+return blk_crypto_rq_get_keyslot(rq);
return BLK_STS_OK;
}


@@ -13,6 +13,7 @@
#include <linux/blkdev.h>
#include <linux/keyslot-manager.h>
#include <linux/module.h>
#include <linux/ratelimit.h>
#include <linux/slab.h>
#include "blk-crypto-internal.h"
@@ -217,26 +218,26 @@ static bool bio_crypt_check_alignment(struct bio *bio)
return true;
}
-blk_status_t __blk_crypto_init_request(struct request *rq)
+blk_status_t __blk_crypto_rq_get_keyslot(struct request *rq)
{
return blk_ksm_get_slot_for_key(rq->q->ksm, rq->crypt_ctx->bc_key,
&rq->crypt_keyslot);
}
-/**
-* __blk_crypto_free_request - Uninitialize the crypto fields of a request.
-*
-* @rq: The request whose crypto fields to uninitialize.
-*
-* Completely uninitializes the crypto fields of a request. If a keyslot has
-* been programmed into some inline encryption hardware, that keyslot is
-* released. The rq->crypt_ctx is also freed.
-*/
-void __blk_crypto_free_request(struct request *rq)
+void __blk_crypto_rq_put_keyslot(struct request *rq)
{
blk_ksm_put_slot(rq->crypt_keyslot);
rq->crypt_keyslot = NULL;
}
void __blk_crypto_free_request(struct request *rq)
{
/* The keyslot, if one was needed, should have been released earlier. */
if (WARN_ON_ONCE(rq->crypt_keyslot))
__blk_crypto_rq_put_keyslot(rq);
mempool_free(rq->crypt_ctx, bio_crypt_ctx_pool);
blk_crypto_rq_set_defaults(rq);
rq->crypt_ctx = NULL;
}
/**
@@ -409,29 +410,39 @@ int blk_crypto_start_using_key(const struct blk_crypto_key *key,
EXPORT_SYMBOL_GPL(blk_crypto_start_using_key);
/**
-* blk_crypto_evict_key() - Evict a key from any inline encryption hardware
-*			   it may have been programmed into
-* @q: The request queue who's associated inline encryption hardware this key
-*     might have been programmed into
-* @key: The key to evict
+* blk_crypto_evict_key() - Evict a blk_crypto_key from a request_queue
+* @q: a request_queue on which I/O using the key may have been done
+* @key: the key to evict
*
-* Upper layers (filesystems) must call this function to ensure that a key is
-* evicted from any hardware that it might have been programmed into. The key
-* must not be in use by any in-flight IO when this function is called.
+* For a given request_queue, this function removes the given blk_crypto_key
+* from the keyslot management structures and evicts it from any underlying
+* hardware keyslot(s) or blk-crypto-fallback keyslot it may have been
+* programmed into.
*
-* Return: 0 on success or if key is not present in the q's ksm, -err on error.
+* Upper layers must call this before freeing the blk_crypto_key. It must be
+* called for every request_queue the key may have been used on. The key must
+* no longer be in use by any I/O when this function is called.
*
+* Context: May sleep.
*/
-int blk_crypto_evict_key(struct request_queue *q,
-			 const struct blk_crypto_key *key)
+void blk_crypto_evict_key(struct request_queue *q,
+			  const struct blk_crypto_key *key)
{
-if (blk_ksm_crypto_cfg_supported(q->ksm, &key->crypto_cfg))
-	return blk_ksm_evict_key(q->ksm, key);
+int err;
+
+if (blk_ksm_crypto_cfg_supported(q->ksm, &key->crypto_cfg))
+	err = blk_ksm_evict_key(q->ksm, key);
+else
+	err = blk_crypto_fallback_evict_key(key);
-/*
-* If the request queue's associated inline encryption hardware didn't
-* have support for the key, then the key might have been programmed
-* into the fallback keyslot manager, so try to evict from there.
-*/
-return blk_crypto_fallback_evict_key(key);
+/*
+ * An error can only occur here if the key failed to be evicted from a
+ * keyslot (due to a hardware or driver issue) or is allegedly still in
+ * use by I/O (due to a kernel bug). Even in these cases, the key is
+ * still unlinked from the keyslot management structures, and the caller
+ * is allowed and expected to free it right away. There's nothing
+ * callers can do to handle errors, so just log them and return void.
+ */
+if (err)
+	pr_warn_ratelimited("error %d evicting key\n", err);
}
EXPORT_SYMBOL_GPL(blk_crypto_evict_key);


@@ -818,6 +818,8 @@ static struct request *attempt_merge(struct request_queue *q,
if (!blk_discard_mergable(req))
elv_merge_requests(q, req, next);
blk_crypto_rq_put_keyslot(next);
/*
* 'next' is going away, so update stats accordingly
*/


@@ -2231,7 +2231,7 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
blk_mq_bio_to_request(rq, bio, nr_segs);
-ret = blk_crypto_init_request(rq);
+ret = blk_crypto_rq_get_keyslot(rq);
if (ret != BLK_STS_OK) {
bio->bi_status = ret;
bio_endio(bio);


@@ -350,25 +350,16 @@ bool blk_ksm_crypto_cfg_supported(struct blk_keyslot_manager *ksm,
return true;
}
-/**
-* blk_ksm_evict_key() - Evict a key from the lower layer device.
-* @ksm: The keyslot manager to evict from
-* @key: The key to evict
-*
-* Find the keyslot that the specified key was programmed into, and evict that
-* slot from the lower layer device. The slot must not be in use by any
-* in-flight IO when this function is called.
-*
-* Context: Process context. Takes and releases ksm->lock.
-* Return: 0 on success or if there's no keyslot with the specified key, -EBUSY
-*	   if the keyslot is still in use, or another -errno value on other
-*	   error.
+/*
+ * This is an internal function that evicts a key from an inline encryption
+ * device that can be either a real device or the blk-crypto-fallback "device".
+ * It is used only by blk_crypto_evict_key(); see that function for details.
 */
int blk_ksm_evict_key(struct blk_keyslot_manager *ksm,
const struct blk_crypto_key *key)
{
struct blk_ksm_keyslot *slot;
-int err = 0;
+int err;
if (blk_ksm_is_passthrough(ksm)) {
if (ksm->ksm_ll_ops.keyslot_evict) {
@@ -382,22 +373,30 @@ int blk_ksm_evict_key(struct blk_keyslot_manager *ksm,
blk_ksm_hw_enter(ksm);
slot = blk_ksm_find_keyslot(ksm, key);
-if (!slot)
-	goto out_unlock;
+if (!slot) {
+	/*
+	 * Not an error, since a key not in use by I/O is not guaranteed
+	 * to be in a keyslot. There can be more keys than keyslots.
+	 */
+	err = 0;
+	goto out;
+}
if (WARN_ON_ONCE(atomic_read(&slot->slot_refs) != 0)) {
/* BUG: key is still in use by I/O */
err = -EBUSY;
-goto out_unlock;
+goto out_remove;
}
err = ksm->ksm_ll_ops.keyslot_evict(ksm, key,
blk_ksm_get_slot_idx(slot));
-if (err)
-	goto out_unlock;
+out_remove:
+/*
+ * Callers free the key even on error, so unlink the key from the hash
+ * table and clear slot->key even on error.
+ */
hlist_del(&slot->hash_node);
slot->key = NULL;
-err = 0;
-out_unlock:
+out:
blk_ksm_hw_exit(ksm);
return err;
}


@@ -1443,7 +1443,8 @@ err_no_ref:
*/
static void binder_free_ref(struct binder_ref *ref)
{
-trace_android_vh_binder_del_ref(ref->proc ? ref->proc->tsk : 0, ref->data.desc);
+trace_android_vh_binder_del_ref(ref->proc ? ref->proc->tsk : NULL,
+				ref->data.desc);
if (ref->node)
binder_free_node(ref->node);
kfree(ref->death);
@@ -2093,24 +2094,23 @@ static void binder_deferred_fd_close(int fd)
static void binder_transaction_buffer_release(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_buffer *buffer,
-binder_size_t failed_at,
+binder_size_t off_end_offset,
bool is_failure)
{
int debug_id = buffer->debug_id;
-binder_size_t off_start_offset, buffer_offset, off_end_offset;
+binder_size_t off_start_offset, buffer_offset;
binder_debug(BINDER_DEBUG_TRANSACTION,
"%d buffer release %d, size %zd-%zd, failed at %llx\n",
proc->pid, buffer->debug_id,
buffer->data_size, buffer->offsets_size,
-(unsigned long long)failed_at);
+(unsigned long long)off_end_offset);
if (buffer->target_node)
binder_dec_node(buffer->target_node, 1, 0);
off_start_offset = ALIGN(buffer->data_size, sizeof(void *));
-off_end_offset = is_failure && failed_at ? failed_at :
-		 off_start_offset + buffer->offsets_size;
for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
buffer_offset += sizeof(binder_size_t)) {
struct binder_object_header *hdr;
@@ -2270,6 +2270,21 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
}
}
/* Clean up all the objects in the buffer */
static inline void binder_release_entire_buffer(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_buffer *buffer,
bool is_failure)
{
binder_size_t off_end_offset;
off_end_offset = ALIGN(buffer->data_size, sizeof(void *));
off_end_offset += buffer->offsets_size;
binder_transaction_buffer_release(proc, thread, buffer,
off_end_offset, is_failure);
}
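The new helper recomputes the end of the offsets array from `data_size` rounded up to pointer alignment plus `offsets_size`. As a sanity check, the same rounding can be reproduced in userspace C; `ALIGN_UP` below is a stand-in for the kernel's `ALIGN()` macro and the helper name is invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's ALIGN(): round x up to a
 * power-of-two boundary a. */
#define ALIGN_UP(x, a) (((uintptr_t)(x) + ((a) - 1)) & ~((uintptr_t)(a) - 1))

/* Mirror of binder_release_entire_buffer()'s arithmetic: the offsets
 * array starts at data_size rounded up to pointer alignment and ends
 * offsets_size bytes later. */
static uintptr_t entire_buffer_end(uintptr_t data_size, uintptr_t offsets_size)
{
	return ALIGN_UP(data_size, sizeof(void *)) + offsets_size;
}
```

On an LP64 target, a 13-byte data section rounds up to 16, so a 24-byte offsets array ends at offset 40.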
static int binder_translate_binder(struct flat_binder_object *fp,
struct binder_transaction *t,
struct binder_thread *thread)
@@ -2929,7 +2944,8 @@ static int binder_proc_transaction(struct binder_transaction *t,
thread = binder_select_thread_ilocked(proc);
trace_android_vh_binder_proc_transaction(current, proc->tsk,
-thread ? thread->task : 0, node->debug_id, t->code, pending_async);
+thread ? thread->task : NULL, node->debug_id, t->code,
+pending_async);
if (thread) {
binder_transaction_priority(thread, t, node);
@@ -2970,7 +2986,7 @@ static int binder_proc_transaction(struct binder_transaction *t,
t_outdated->buffer = NULL;
buffer->transaction = NULL;
trace_binder_transaction_update_buffer_release(buffer);
-binder_transaction_buffer_release(proc, NULL, buffer, 0, 0);
+binder_release_entire_buffer(proc, NULL, buffer, false);
binder_alloc_free_buf(&proc->alloc, buffer);
kfree(t_outdated);
binder_stats_deleted(BINDER_STAT_TRANSACTION);
@@ -3882,7 +3898,7 @@ binder_free_buf(struct binder_proc *proc,
binder_node_inner_unlock(buf_node);
}
trace_binder_transaction_buffer_release(buffer);
-binder_transaction_buffer_release(proc, thread, buffer, 0, is_failure);
+binder_release_entire_buffer(proc, thread, buffer, is_failure);
binder_alloc_free_buf(&proc->alloc, buffer);
}


@@ -314,29 +314,15 @@ err_no_vma:
static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
struct vm_area_struct *vma)
{
-if (vma)
-	alloc->vma_vm_mm = vma->vm_mm;
-/*
- * If we see alloc->vma is not NULL, buffer data structures set up
- * completely. Look at smp_rmb side binder_alloc_get_vma.
- * We also want to guarantee new alloc->vma_vm_mm is always visible
- * if alloc->vma is set.
- */
-smp_wmb();
-alloc->vma = vma;
+/* pairs with smp_load_acquire in binder_alloc_get_vma() */
+smp_store_release(&alloc->vma, vma);
}
static inline struct vm_area_struct *binder_alloc_get_vma(
struct binder_alloc *alloc)
{
-struct vm_area_struct *vma = NULL;
-
-if (alloc->vma) {
-	/* Look at description in binder_alloc_set_vma */
-	smp_rmb();
-	vma = alloc->vma;
-}
-return vma;
+/* pairs with smp_store_release in binder_alloc_set_vma() */
+return smp_load_acquire(&alloc->vma);
}
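The rewrite replaces the hand-rolled smp_wmb()/smp_rmb() pair with smp_store_release()/smp_load_acquire(), which order the payload writes before the pointer publication and the pointer read before the payload reads. The same publication discipline can be sketched with portable C11 atomics; the struct and names below are illustrative stand-ins, not binder code:

```c
#include <assert.h>
#include <stdatomic.h>

/* "ready" stands in for alloc->vma: once a consumer observes it set via
 * an acquire load, the payload written before the release store is
 * guaranteed to be visible too. */
struct box {
	int payload;		/* written before publication */
	_Atomic int ready;	/* publication flag */
};

static void publish(struct box *b, int value)
{
	b->payload = value;
	/* pairs with the acquire load in consume() */
	atomic_store_explicit(&b->ready, 1, memory_order_release);
}

static int consume(struct box *b)
{
	/* pairs with the release store in publish() */
	if (atomic_load_explicit(&b->ready, memory_order_acquire))
		return b->payload;
	return -1;	/* not published yet */
}
```

The release/acquire pair is both simpler and stronger to reason about than separate write/read barriers, because the ordering is attached to the one variable that actually publishes the state.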
static bool debug_low_async_space_locked(struct binder_alloc *alloc, int pid)
@@ -399,15 +385,13 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
size_t size, data_offsets_size;
int ret;
-mmap_read_lock(alloc->vma_vm_mm);
+/* Check binder_alloc is fully initialized */
if (!binder_alloc_get_vma(alloc)) {
-	mmap_read_unlock(alloc->vma_vm_mm);
	binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
			   "%d: binder_alloc_buf, no vma\n",
			   alloc->pid);
	return ERR_PTR(-ESRCH);
}
-mmap_read_unlock(alloc->vma_vm_mm);
data_offsets_size = ALIGN(data_size, sizeof(void *)) +
ALIGN(offsets_size, sizeof(void *));
@@ -798,6 +782,8 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
buffer->free = 1;
binder_insert_free_buffer(alloc, buffer);
alloc->free_async_space = alloc->buffer_size / 2;
/* Signal binder_alloc is fully initialized */
binder_alloc_set_vma(alloc, vma);
return 0;
@@ -935,25 +921,17 @@ void binder_alloc_print_pages(struct seq_file *m,
* Make sure the binder_alloc is fully initialized, otherwise we might
* read inconsistent state.
*/
-mmap_read_lock(alloc->vma_vm_mm);
-if (binder_alloc_get_vma(alloc) == NULL) {
-	mmap_read_unlock(alloc->vma_vm_mm);
-	goto uninitialized;
-}
+if (binder_alloc_get_vma(alloc) != NULL) {
+	for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+		page = &alloc->pages[i];
+		if (!page->page_ptr)
+			free++;
+		else if (list_empty(&page->lru))
+			active++;
+		else
+			lru++;
+	}
+}
-mmap_read_unlock(alloc->vma_vm_mm);
-for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
-	page = &alloc->pages[i];
-	if (!page->page_ptr)
-		free++;
-	else if (list_empty(&page->lru))
-		active++;
-	else
-		lru++;
-}
-uninitialized:
mutex_unlock(&alloc->mutex);
seq_printf(m, " pages: %d:%d:%d\n", active, lru, free);
seq_printf(m, " pages high watermark: %zu\n", alloc->pages_high);


@@ -250,6 +250,8 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_page_trylock_set);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_page_trylock_clear);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_page_trylock_get_result);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_do_page_trylock);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_dm_bufio_shrink_scan_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cleanup_old_buffers_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_page_referenced_check_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_drain_all_pages_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cma_drain_all_pages_bypass);
@@ -426,6 +428,8 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_direct_io_update_bio);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_loop_prepare_cmd);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_psi_event);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_psi_group);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rmqueue_smallest_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_one_page_bypass);
/*
* For type visibility
*/


@@ -144,7 +144,6 @@ struct dmabuf_page_pool *dmabuf_page_pool_create(gfp_t gfp_mask, unsigned int or
pool->gfp_mask = gfp_mask | __GFP_COMP;
pool->order = order;
mutex_init(&pool->mutex); /* No longer used! */
-mutex_lock(&pool->mutex); /* Make sure anyone who attempts to acquire this hangs */
mutex_lock(&pool_list_lock);
list_add(&pool->list, &pool_list);


@@ -1385,7 +1385,7 @@ static void gic_syscore_init(void)
#else
static inline void gic_syscore_init(void) { }
void gic_resume(void) { }
-static int gic_suspend(void) { return 0; }
+static inline int gic_suspend(void) { return 0; }
#endif


@@ -12,6 +12,7 @@
#include <linux/kernel.h>
#include <linux/mailbox_client.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/poll.h>
@@ -38,6 +39,7 @@ struct mbox_test_device {
char *signal;
char *message;
spinlock_t lock;
struct mutex mutex;
wait_queue_head_t waitq;
struct fasync_struct *async_queue;
struct dentry *root_debugfs_dir;
@@ -95,6 +97,7 @@ static ssize_t mbox_test_message_write(struct file *filp,
size_t count, loff_t *ppos)
{
struct mbox_test_device *tdev = filp->private_data;
char *message;
void *data;
int ret;
@@ -110,10 +113,13 @@ static ssize_t mbox_test_message_write(struct file *filp,
return -EINVAL;
}
-tdev->message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL);
-if (!tdev->message)
+message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL);
+if (!message)
	return -ENOMEM;
+
+mutex_lock(&tdev->mutex);
+
+tdev->message = message;
ret = copy_from_user(tdev->message, userbuf, count);
if (ret) {
ret = -EFAULT;
@@ -144,6 +150,8 @@ out:
kfree(tdev->message);
tdev->signal = NULL;
mutex_unlock(&tdev->mutex);
return ret < 0 ? ret : count;
}
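The mailbox-test fix follows a common locking discipline: allocate into a local variable first, then take the lock before publishing to the shared field, and free plus clear the field under the same lock, so two concurrent writers cannot double-free. A minimal userspace sketch of that discipline, with invented names and malloc/pthread standing in for kzalloc and the kernel mutex:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

/* Toy model of the mbox_test_device message path. */
struct dev {
	pthread_mutex_t mutex;
	char *message;		/* shared, protected by mutex */
};

static int send_message(struct dev *d, const char *src)
{
	/* Allocate into a local first, like the patched code. */
	char *message = malloc(strlen(src) + 1);

	if (!message)
		return -1;
	strcpy(message, src);

	pthread_mutex_lock(&d->mutex);
	d->message = message;
	/* ... hand the buffer to the controller here ... */
	free(d->message);
	d->message = NULL;	/* never leave a dangling pointer */
	pthread_mutex_unlock(&d->mutex);
	return 0;
}
```

Because both the publish and the free happen under one critical section, a second writer either sees a fully valid buffer or a NULL field, never a freed pointer.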
@@ -392,6 +400,7 @@ static int mbox_test_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, tdev);
spin_lock_init(&tdev->lock);
mutex_init(&tdev->mutex);
if (tdev->rx_channel) {
tdev->rx_buffer = devm_kzalloc(&pdev->dev,


@@ -19,6 +19,8 @@
#include <linux/rbtree.h>
#include <linux/stacktrace.h>
#include <trace/hooks/mm.h>
#define DM_MSG_PREFIX "bufio"
/*
@@ -1683,6 +1685,13 @@ static void shrink_work(struct work_struct *w)
static unsigned long dm_bufio_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
{
struct dm_bufio_client *c;
bool bypass = false;
trace_android_vh_dm_bufio_shrink_scan_bypass(
dm_bufio_current_allocated,
&bypass);
if (bypass)
return 0;
c = container_of(shrink, struct dm_bufio_client, shrinker);
atomic_long_add(sc->nr_to_scan, &c->need_shrink);
@@ -2009,6 +2018,14 @@ static void cleanup_old_buffers(void)
{
unsigned long max_age_hz = get_max_age_hz();
struct dm_bufio_client *c;
bool bypass = false;
trace_android_vh_cleanup_old_buffers_bypass(
dm_bufio_current_allocated,
&max_age_hz,
&bypass);
if (bypass)
return;
mutex_lock(&dm_bufio_clients_lock);


@@ -67,13 +67,9 @@ lookup_cipher(const char *cipher_string)
static void default_key_dtr(struct dm_target *ti)
{
struct default_key_c *dkc = ti->private;
-int err;
-
if (dkc->dev) {
-	err = blk_crypto_evict_key(bdev_get_queue(dkc->dev->bdev),
-				   &dkc->key);
-	if (err && err != -ENOKEY)
-		DMWARN("Failed to evict crypto key: %d", err);
+	blk_crypto_evict_key(bdev_get_queue(dkc->dev->bdev), &dkc->key);
dm_put_device(ti, dkc->dev);
}
kfree_sensitive(dkc->cipher_string);


@@ -1191,21 +1191,12 @@ struct dm_keyslot_manager {
struct mapped_device *md;
};
-struct dm_keyslot_evict_args {
-	const struct blk_crypto_key *key;
-	int err;
-};
-
static int dm_keyslot_evict_callback(struct dm_target *ti, struct dm_dev *dev,
				     sector_t start, sector_t len, void *data)
{
-	struct dm_keyslot_evict_args *args = data;
-	int err;
+	const struct blk_crypto_key *key = data;
-	err = blk_crypto_evict_key(bdev_get_queue(dev->bdev), args->key);
-	if (!args->err)
-		args->err = err;
+	/* Always try to evict the key from all devices. */
+	blk_crypto_evict_key(bdev_get_queue(dev->bdev), key);
return 0;
}
@@ -1220,7 +1211,6 @@ static int dm_keyslot_evict(struct blk_keyslot_manager *ksm,
struct dm_keyslot_manager,
ksm);
struct mapped_device *md = dksm->md;
-struct dm_keyslot_evict_args args = { key };
struct dm_table *t;
int srcu_idx;
int i;
@@ -1233,10 +1223,11 @@ static int dm_keyslot_evict(struct blk_keyslot_manager *ksm,
ti = dm_table_get_target(t, i);
if (!ti->type->iterate_devices)
continue;
-ti->type->iterate_devices(ti, dm_keyslot_evict_callback, &args);
+ti->type->iterate_devices(ti, dm_keyslot_evict_callback,
+			  (void *)key);
}
dm_put_live_table(md, srcu_idx);
-return args.err;
+return 0;
}
struct dm_derive_raw_secret_args {


@@ -180,9 +180,12 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
else
min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
-max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
-if (max == 0)
+if (le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize) == 0)
	max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
+else
+	max = clamp_t(u32, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize),
+		      USB_CDC_NCM_NTB_MIN_OUT_SIZE,
+		      CDC_NCM_NTB_MAX_SIZE_TX);
/* some devices set dwNtbOutMaxSize too low for the above default */
min = min(min, max);
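The fix stops trusting a device-supplied dwNtbOutMaxSize outright and clamps it into a sane window. clamp_t's behavior is easy to model in userspace C; the 2048/32768 bounds below mirror the patch's USB_CDC_NCM_NTB_MIN_OUT_SIZE and CDC_NCM_NTB_MAX_SIZE_TX constants but are restated here as assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the kernel's clamp_t(u32, v, lo, hi). */
static uint32_t clamp_u32(uint32_t v, uint32_t lo, uint32_t hi)
{
	if (v < lo)
		return lo;
	if (v > hi)
		return hi;
	return v;
}

/* Assumed bounds, mirroring the patch's constants. */
#define NTB_MIN_OUT_SIZE 2048u		/* USB_CDC_NCM_NTB_MIN_OUT_SIZE */
#define NTB_MAX_SIZE_TX  32768u		/* CDC_NCM_NTB_MAX_SIZE_TX */
```

A device advertising a 512-byte NTB is raised to 2048 instead of letting the framing arithmetic underflow around a too-small buffer.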
@@ -1243,6 +1246,9 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
* further.
*/
if (skb_out == NULL) {
/* If even the smallest allocation fails, abort. */
if (ctx->tx_curr_size == USB_CDC_NCM_NTB_MIN_OUT_SIZE)
goto alloc_failed;
ctx->tx_low_mem_max_cnt = min(ctx->tx_low_mem_max_cnt + 1,
(unsigned)CDC_NCM_LOW_MEM_MAX_CNT);
ctx->tx_low_mem_val = ctx->tx_low_mem_max_cnt;
@@ -1261,13 +1267,8 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
skb_out = alloc_skb(ctx->tx_curr_size, GFP_ATOMIC);
/* No allocation possible so we will abort */
-if (skb_out == NULL) {
-	if (skb != NULL) {
-		dev_kfree_skb_any(skb);
-		dev->net->stats.tx_dropped++;
-	}
-	goto exit_no_skb;
-}
+if (!skb_out)
+	goto alloc_failed;
ctx->tx_low_mem_val--;
}
if (ctx->is_ndp16) {
@@ -1460,6 +1461,11 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign)
return skb_out;
alloc_failed:
if (skb) {
dev_kfree_skb_any(skb);
dev->net->stats.tx_dropped++;
}
exit_no_skb:
/* Start timer, if there is a remaining non-empty skb */
if (ctx->tx_curr_skb != NULL && n > 0)


@@ -628,6 +628,28 @@ bool of_device_is_available(const struct device_node *device)
}
EXPORT_SYMBOL(of_device_is_available);
/**
* __of_device_is_fail - check if a device has status "fail" or "fail-..."
*
* @device: Node to check status for, with locks already held
*
* Return: True if the status property is set to "fail" or "fail-..." (for any
* error code suffix), false otherwise
*/
static bool __of_device_is_fail(const struct device_node *device)
{
const char *status;
if (!device)
return false;
status = __of_get_property(device, "status", NULL);
if (status == NULL)
return false;
return !strcmp(status, "fail") || !strncmp(status, "fail-", 5);
}
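The test above accepts "fail" exactly or any "fail-<error-code>" form, while rejecting unrelated strings that merely begin with "fail". A standalone version of the same predicate, minus the device-tree lookup:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Same match as __of_device_is_fail(): "fail" exactly, or anything
 * beginning with the five characters "fail-". */
static bool status_is_fail(const char *status)
{
	if (!status)
		return false;
	return strcmp(status, "fail") == 0 || strncmp(status, "fail-", 5) == 0;
}
```

Note that "failed" does not match: strcmp fails on length and strncmp compares its fifth character 'e' against '-'.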
/**
* of_device_is_big_endian - check if a device has BE registers
*
@@ -774,6 +796,9 @@ EXPORT_SYMBOL(of_get_next_available_child);
* of_get_next_cpu_node - Iterate on cpu nodes
* @prev: previous child of the /cpus node, or NULL to get first
*
* Unusable CPUs (those with the status property set to "fail" or "fail-...")
* will be skipped.
*
* Return: A cpu node pointer with refcount incremented, use of_node_put()
* on it when done. Returns NULL when prev is the last child. Decrements
* the refcount of prev.
@@ -795,6 +820,8 @@ struct device_node *of_get_next_cpu_node(struct device_node *prev)
of_node_put(node);
}
for (; next; next = next->sibling) {
if (__of_device_is_fail(next))
continue;
if (!(of_node_name_eq(next, "cpu") ||
__of_node_is_type(next, "cpu")))
continue;


@@ -286,6 +286,16 @@ void __init fdt_init_reserved_mem(void)
memblock_clear_nomap(rmem->base, rmem->size);
else
memblock_free(rmem->base, rmem->size);
} else {
phys_addr_t end = rmem->base + rmem->size - 1;
bool reusable =
(of_get_flat_dt_prop(node, "reusable", NULL)) != NULL;
pr_info("%pa..%pa (%lu KiB) %s %s %s\n",
&rmem->base, &end, (unsigned long)(rmem->size / SZ_1K),
nomap ? "nomap" : "map",
reusable ? "reusable" : "non-reusable",
rmem->name ? rmem->name : "unknown");
}
}
}


@@ -62,6 +62,8 @@ static inline s64 div_frac(s64 x, s64 y)
* governor switches on when this trip point is crossed.
* If the thermal zone only has one passive trip point,
* @trip_switch_on should be INVALID_TRIP.
* @last_switch_on_temp:	Record the last switch_on_temp, only when trips
*			are writable.
* @trip_max_desired_temperature: last passive trip point of the thermal
* zone. The temperature we are
* controlling for.
@@ -73,6 +75,9 @@ struct power_allocator_params {
s64 err_integral;
s32 prev_err;
int trip_switch_on;
#ifdef CONFIG_THERMAL_WRITABLE_TRIPS
int last_switch_on_temp;
#endif
int trip_max_desired_temperature;
u32 sustainable_power;
};
@@ -567,6 +572,25 @@ static void get_governor_trips(struct thermal_zone_device *tz,
}
}
#ifdef CONFIG_THERMAL_WRITABLE_TRIPS
static bool power_allocator_throttle_update(struct thermal_zone_device *tz, int switch_on_temp)
{
bool update;
struct power_allocator_params *params = tz->governor_data;
int last_switch_on_temp = params->last_switch_on_temp;
update = (tz->last_temperature >= last_switch_on_temp);
params->last_switch_on_temp = switch_on_temp;
return update;
}
#else
static inline bool power_allocator_throttle_update(struct thermal_zone_device *tz, int switch_on_temp)
{
return false;
}
#endif
static void reset_pid_controller(struct power_allocator_params *params)
{
params->err_integral = 0;
@@ -735,16 +759,18 @@ static int power_allocator_throttle(struct thermal_zone_device *tz, int trip)
* requirement.
*/
trace_android_vh_enable_thermal_power_throttle(&enable, &override);
-ret = tz->ops->get_trip_temp(tz, params->trip_switch_on,
-			     &switch_on_temp);
-if (!enable || (!ret && (tz->temperature < switch_on_temp) &&
-    !override)) {
-	update = (tz->last_temperature >= switch_on_temp);
-	trace_android_vh_modify_thermal_throttle_update(tz, &update);
-	tz->passive = 0;
-	reset_pid_controller(params);
-	allow_maximum_power(tz, update);
-	return 0;
+ret = tz->ops->get_trip_temp(tz, params->trip_switch_on, &switch_on_temp);
+if (!ret) {
+	update = power_allocator_throttle_update(tz, switch_on_temp);
+	if (!enable || ((tz->temperature < switch_on_temp) && !override)) {
+		update |= (tz->last_temperature >= switch_on_temp);
+		trace_android_vh_modify_thermal_throttle_update(tz, &update);
+		tz->passive = 0;
+		reset_pid_controller(params);
+		allow_maximum_power(tz, update);
+		return 0;
+	}
}
tz->passive = 1;


@@ -1582,15 +1582,17 @@ static int dwc3_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct resource *res, dwc_res;
struct dwc3 *dwc;
struct dwc3_vendor *vdwc;
int ret;
void __iomem *regs;
-dwc = devm_kzalloc(dev, sizeof(*dwc), GFP_KERNEL);
-if (!dwc)
+vdwc = devm_kzalloc(dev, sizeof(*vdwc), GFP_KERNEL);
+if (!vdwc)
	return -ENOMEM;
+dwc = &vdwc->dwc;
dwc->dev = dev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);


@@ -1324,6 +1324,16 @@ struct dwc3 {
ANDROID_KABI_RESERVE(4);
};
/**
* struct dwc3_vendor - contains parameters without modifying the format of DWC3 core
* @dwc: contains dwc3 core reference
* @suspended: set to track suspend event due to U3/L2.
*/
struct dwc3_vendor {
struct dwc3 dwc;
unsigned suspended:1;
};
#define INCRX_BURST_MODE 0
#define INCRX_UNDEF_LENGTH_BURST_MODE 1
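Placing struct dwc3 as the first member of struct dwc3_vendor lets the probe path allocate the larger wrapper while every existing caller keeps a plain struct dwc3 *, and vendor code can recover the wrapper by pointer arithmetic without changing the core struct's layout (the KABI concern). A toy version of the pattern, with names invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for struct dwc3 and struct dwc3_vendor. */
struct core {
	int id;
};

struct wrapper {
	struct core core;	/* must remain the first member */
	unsigned suspended:1;	/* vendor-private state, invisible to core users */
};

/* Recover the wrapper from a core pointer; the kernel spells this
 * container_of(). */
static struct wrapper *to_wrapper(struct core *c)
{
	return (struct wrapper *)((char *)c - offsetof(struct wrapper, core));
}
```

Code that only ever sees the embedded struct core * keeps working unchanged, which is exactly why the probe function allocates vdwc but continues to pass around dwc = &vdwc->dwc.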


@@ -327,6 +327,11 @@ static int dwc3_lsp_show(struct seq_file *s, void *unused)
unsigned int current_mode;
unsigned long flags;
u32 reg;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
reg = dwc3_readl(dwc->regs, DWC3_GSTS);
@@ -345,6 +350,8 @@ static int dwc3_lsp_show(struct seq_file *s, void *unused)
}
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -390,6 +397,11 @@ static int dwc3_mode_show(struct seq_file *s, void *unused)
struct dwc3 *dwc = s->private;
unsigned long flags;
u32 reg;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
reg = dwc3_readl(dwc->regs, DWC3_GCTL);
@@ -409,6 +421,8 @@ static int dwc3_mode_show(struct seq_file *s, void *unused)
seq_printf(s, "UNKNOWN %08x\n", DWC3_GCTL_PRTCAP(reg));
}
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -458,6 +472,11 @@ static int dwc3_testmode_show(struct seq_file *s, void *unused)
struct dwc3 *dwc = s->private;
unsigned long flags;
u32 reg;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
reg = dwc3_readl(dwc->regs, DWC3_DCTL);
@@ -488,6 +507,8 @@ static int dwc3_testmode_show(struct seq_file *s, void *unused)
seq_printf(s, "UNKNOWN %d\n", reg);
}
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -504,6 +525,7 @@ static ssize_t dwc3_testmode_write(struct file *file,
unsigned long flags;
u32 testmode = 0;
char buf[32];
int ret;
if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
return -EFAULT;
@@ -521,10 +543,16 @@ static ssize_t dwc3_testmode_write(struct file *file,
else
testmode = 0;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
dwc3_gadget_set_test_mode(dwc, testmode);
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return count;
}
@@ -543,12 +571,18 @@ static int dwc3_link_state_show(struct seq_file *s, void *unused)
enum dwc3_link_state state;
u32 reg;
u8 speed;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
reg = dwc3_readl(dwc->regs, DWC3_GSTS);
if (DWC3_GSTS_CURMOD(reg) != DWC3_GSTS_CURMOD_DEVICE) {
seq_puts(s, "Not available\n");
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -561,6 +595,8 @@ static int dwc3_link_state_show(struct seq_file *s, void *unused)
dwc3_gadget_hs_link_string(state));
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -579,6 +615,7 @@ static ssize_t dwc3_link_state_write(struct file *file,
char buf[32];
u32 reg;
u8 speed;
int ret;
if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
return -EFAULT;
@@ -598,10 +635,15 @@ static ssize_t dwc3_link_state_write(struct file *file,
else
return -EINVAL;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
reg = dwc3_readl(dwc->regs, DWC3_GSTS);
if (DWC3_GSTS_CURMOD(reg) != DWC3_GSTS_CURMOD_DEVICE) {
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return -EINVAL;
}
@@ -611,12 +653,15 @@ static ssize_t dwc3_link_state_write(struct file *file,
if (speed < DWC3_DSTS_SUPERSPEED &&
state != DWC3_LINK_STATE_RECOV) {
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return -EINVAL;
}
dwc3_gadget_set_link_state(dwc, state);
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return count;
}
@@ -640,6 +685,11 @@ static int dwc3_tx_fifo_size_show(struct seq_file *s, void *unused)
unsigned long flags;
u32 mdwidth;
u32 val;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
val = dwc3_core_fifo_space(dep, DWC3_TXFIFO);
@@ -652,6 +702,8 @@ static int dwc3_tx_fifo_size_show(struct seq_file *s, void *unused)
seq_printf(s, "%u\n", val);
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -662,6 +714,11 @@ static int dwc3_rx_fifo_size_show(struct seq_file *s, void *unused)
unsigned long flags;
u32 mdwidth;
u32 val;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
val = dwc3_core_fifo_space(dep, DWC3_RXFIFO);
@@ -674,6 +731,8 @@ static int dwc3_rx_fifo_size_show(struct seq_file *s, void *unused)
seq_printf(s, "%u\n", val);
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -683,12 +742,19 @@ static int dwc3_tx_request_queue_show(struct seq_file *s, void *unused)
struct dwc3 *dwc = dep->dwc;
unsigned long flags;
u32 val;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
val = dwc3_core_fifo_space(dep, DWC3_TXREQQ);
seq_printf(s, "%u\n", val);
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -698,12 +764,19 @@ static int dwc3_rx_request_queue_show(struct seq_file *s, void *unused)
struct dwc3 *dwc = dep->dwc;
unsigned long flags;
u32 val;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
val = dwc3_core_fifo_space(dep, DWC3_RXREQQ);
seq_printf(s, "%u\n", val);
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -713,12 +786,19 @@ static int dwc3_rx_info_queue_show(struct seq_file *s, void *unused)
struct dwc3 *dwc = dep->dwc;
unsigned long flags;
u32 val;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
val = dwc3_core_fifo_space(dep, DWC3_RXINFOQ);
seq_printf(s, "%u\n", val);
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -728,12 +808,19 @@ static int dwc3_descriptor_fetch_queue_show(struct seq_file *s, void *unused)
struct dwc3 *dwc = dep->dwc;
unsigned long flags;
u32 val;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
val = dwc3_core_fifo_space(dep, DWC3_DESCFETCHQ);
seq_printf(s, "%u\n", val);
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -743,12 +830,19 @@ static int dwc3_event_queue_show(struct seq_file *s, void *unused)
struct dwc3 *dwc = dep->dwc;
unsigned long flags;
u32 val;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
val = dwc3_core_fifo_space(dep, DWC3_EVENTQ);
seq_printf(s, "%u\n", val);
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -793,6 +887,11 @@ static int dwc3_trb_ring_show(struct seq_file *s, void *unused)
struct dwc3 *dwc = dep->dwc;
unsigned long flags;
int i;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
if (dep->number <= 1) {
@@ -822,6 +921,8 @@ static int dwc3_trb_ring_show(struct seq_file *s, void *unused)
out:
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -834,6 +935,11 @@ static int dwc3_ep_info_register_show(struct seq_file *s, void *unused)
u32 lower_32_bits;
u32 upper_32_bits;
u32 reg;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
if (ret < 0)
return ret;
spin_lock_irqsave(&dwc->lock, flags);
reg = DWC3_GDBGLSPMUX_EPSELECT(dep->number);
@@ -846,6 +952,8 @@ static int dwc3_ep_info_register_show(struct seq_file *s, void *unused)
seq_printf(s, "0x%016llx\n", ep_info);
spin_unlock_irqrestore(&dwc->lock, flags);
pm_runtime_put_sync(dwc->dev);
return 0;
}
@@ -905,6 +1013,7 @@ void dwc3_debugfs_init(struct dwc3 *dwc)
dwc->regset->regs = dwc3_regs;
dwc->regset->nregs = ARRAY_SIZE(dwc3_regs);
dwc->regset->base = dwc->regs - DWC3_GLOBALS_REGS_START;
dwc->regset->dev = dwc->dev;
root = debugfs_create_dir(dev_name(dwc->dev), usb_debug_root);
dwc->debug_root = root;
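Every debugfs show/write handler in the hunks above gains the same guard: `pm_runtime_resume_and_get()` before touching registers, an early return when it fails, and `pm_runtime_put_sync()` on every exit path. A minimal userspace sketch of that discipline (the helper names and the `-5` error stand-in are illustrative, not the kernel API):

```c
/* Toy stand-ins for pm_runtime_resume_and_get()/pm_runtime_put_sync():
 * the getter bumps a usage count, and on failure it drops that count
 * itself, so the caller may simply return the error. */
static int usage_count;
static int resume_fails;

static int resume_and_get(void)
{
	usage_count++;
	if (resume_fails) {
		usage_count--;		/* failure drops the reference */
		return -5;		/* stand-in for -EIO */
	}
	return 0;
}

static void put_sync(void)
{
	usage_count--;
}

/* Mirrors the dwc3 debugfs pattern: get, read, put on every path. */
static int show_register(unsigned int *out)
{
	int ret = resume_and_get();

	if (ret < 0)
		return ret;		/* nothing to put: the get already failed */
	*out = 0xdeadu;			/* stand-in for dwc3_readl() */
	put_sync();
	return 0;
}
```

The point of the pattern is that the usage count is balanced on both the success and the failure path.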


@@ -139,6 +139,24 @@ int dwc3_gadget_set_link_state(struct dwc3 *dwc, enum dwc3_link_state state)
return -ETIMEDOUT;
}
static void dwc3_ep0_reset_state(struct dwc3 *dwc)
{
unsigned int dir;
if (dwc->ep0state != EP0_SETUP_PHASE) {
dir = !!dwc->ep0_expect_in;
if (dwc->ep0state == EP0_DATA_PHASE)
dwc3_ep0_end_control_data(dwc, dwc->eps[dir]);
else
dwc3_ep0_end_control_data(dwc, dwc->eps[!dir]);
dwc->eps[0]->trb_enqueue = 0;
dwc->eps[1]->trb_enqueue = 0;
dwc3_ep0_stall_and_restart(dwc);
}
}
/**
* dwc3_ep_inc_trb - increment a trb index.
* @index: Pointer to the TRB index to increment.
@@ -1681,6 +1699,7 @@ static int __dwc3_gadget_get_frame(struct dwc3 *dwc)
*/
static int __dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force, bool interrupt)
{
struct dwc3 *dwc = dep->dwc;
struct dwc3_gadget_ep_cmd_params params;
u32 cmd;
int ret;
@@ -1704,10 +1723,13 @@ static int __dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force, bool int
WARN_ON_ONCE(ret);
dep->resource_index = 0;
-	if (!interrupt)
+	if (!interrupt) {
+		if (!DWC3_IP_IS(DWC3) || DWC3_VER_IS_PRIOR(DWC3, 310A))
+			mdelay(1);
 		dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
-	else if (!ret)
+	} else if (!ret) {
 		dep->flags |= DWC3_EP_END_TRANSFER_PENDING;
+	}
dep->flags &= ~DWC3_EP_DELAY_STOP;
return ret;
@@ -2506,29 +2528,17 @@ static int __dwc3_gadget_start(struct dwc3 *dwc);
static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
{
unsigned long flags;
+	int ret;

 	spin_lock_irqsave(&dwc->lock, flags);
 	dwc->connected = false;

 	/*
-	 * Per databook, when we want to stop the gadget, if a control transfer
-	 * is still in process, complete it and get the core into setup phase.
+	 * Attempt to end pending SETUP status phase, and not wait for the
+	 * function to do so.
 	 */
-	if (dwc->ep0state != EP0_SETUP_PHASE) {
-		int ret;
-
-		if (dwc->delayed_status)
-			dwc3_ep0_send_delayed_status(dwc);
-
-		reinit_completion(&dwc->ep0_in_setup);
-
-		spin_unlock_irqrestore(&dwc->lock, flags);
-		ret = wait_for_completion_timeout(&dwc->ep0_in_setup,
-				msecs_to_jiffies(DWC3_PULL_UP_TIMEOUT));
-		spin_lock_irqsave(&dwc->lock, flags);
-		if (ret == 0)
-			dev_warn(dwc->dev, "timed out waiting for SETUP phase\n");
-	}
+	if (dwc->delayed_status)
+		dwc3_ep0_send_delayed_status(dwc);
/*
* In the Synopsys DesignWare Cores USB3 Databook Rev. 3.30a
@@ -2538,9 +2548,28 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
* bit.
*/
 	dwc3_stop_active_transfers(dwc);
-	__dwc3_gadget_stop(dwc);

 	spin_unlock_irqrestore(&dwc->lock, flags);
/*
* Per databook, when we want to stop the gadget, if a control transfer
* is still in process, complete it and get the core into setup phase.
* In case the host is unresponsive to a SETUP transaction, forcefully
* stall the transfer, and move back to the SETUP phase, so that any
* pending endxfers can be executed.
*/
if (dwc->ep0state != EP0_SETUP_PHASE) {
reinit_completion(&dwc->ep0_in_setup);
ret = wait_for_completion_timeout(&dwc->ep0_in_setup,
msecs_to_jiffies(DWC3_PULL_UP_TIMEOUT));
if (ret == 0) {
dev_warn(dwc->dev, "wait for SETUP phase timed out\n");
spin_lock_irqsave(&dwc->lock, flags);
dwc3_ep0_reset_state(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
}
}
/*
* Note: if the GEVNTCOUNT indicates events in the event buffer, the
* driver needs to acknowledge them before the controller can halt.
@@ -2548,7 +2577,19 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
* remaining event generated by the controller while polling for
* DSTS.DEVCTLHLT.
*/
-	return dwc3_gadget_run_stop(dwc, false, false);
+	ret = dwc3_gadget_run_stop(dwc, false, false);
/*
* Stop the gadget after controller is halted, so that if needed, the
* events to update EP0 state can still occur while the run/stop
* routine polls for the halted state. DEVTEN is cleared as part of
* gadget stop.
*/
spin_lock_irqsave(&dwc->lock, flags);
__dwc3_gadget_stop(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
return ret;
}
static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
@@ -2597,13 +2638,16 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
* device-initiated disconnect requires a core soft reset
* (DCTL.CSftRst) before enabling the run/stop bit.
*/
-		dwc3_core_soft_reset(dwc);
+		ret = dwc3_core_soft_reset(dwc);
+		if (ret)
+			goto done;
dwc3_event_buffers_setup(dwc);
__dwc3_gadget_start(dwc);
ret = dwc3_gadget_run_stop(dwc, true, false);
}
done:
pm_runtime_put(dwc->dev);
return ret;
@@ -3743,7 +3787,11 @@ void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
* enabled, the EndTransfer command will have completed upon
* returning from this function.
*
-	 * This mode is NOT available on the DWC_usb31 IP.
+	 * This mode is NOT available on the DWC_usb31 IP. In this
+	 * case, if the IOC bit is not set, then delay by 1ms
+	 * after issuing the EndTransfer command. This allows for the
+	 * controller to handle the command completely before DWC3
+	 * remove requests attempts to unmap USB request buffers.
*/
__dwc3_stop_active_transfer(dep, force, interrupt);
}
@@ -3772,8 +3820,11 @@ static void dwc3_clear_stall_all_ep(struct dwc3 *dwc)
static void dwc3_gadget_disconnect_interrupt(struct dwc3 *dwc)
{
struct dwc3_vendor *vdwc = container_of(dwc, struct dwc3_vendor, dwc);
int reg;
vdwc->suspended = false;
dwc3_gadget_set_link_state(dwc, DWC3_LINK_STATE_RX_DET);
reg = dwc3_readl(dwc->regs, DWC3_DCTL);
@@ -3789,22 +3840,16 @@ static void dwc3_gadget_disconnect_interrupt(struct dwc3 *dwc)
dwc->setup_packet_pending = false;
usb_gadget_set_state(dwc->gadget, USB_STATE_NOTATTACHED);
-	if (dwc->ep0state != EP0_SETUP_PHASE) {
-		unsigned int dir;
-
-		dir = !!dwc->ep0_expect_in;
-		if (dwc->ep0state == EP0_DATA_PHASE)
-			dwc3_ep0_end_control_data(dwc, dwc->eps[dir]);
-		else
-			dwc3_ep0_end_control_data(dwc, dwc->eps[!dir]);
-		dwc3_ep0_stall_and_restart(dwc);
-	}
+	dwc3_ep0_reset_state(dwc);
}
static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
{
struct dwc3_vendor *vdwc = container_of(dwc, struct dwc3_vendor, dwc);
u32 reg;
vdwc->suspended = false;
/*
* Ideally, dwc3_reset_gadget() would trigger the function
* drivers to stop any active transfers through ep disable.
@@ -3852,20 +3897,7 @@ static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
* phase. So ensure that EP0 is in setup phase by issuing a stall
* and restart if EP0 is not in setup phase.
*/
-	if (dwc->ep0state != EP0_SETUP_PHASE) {
-		unsigned int dir;
-
-		dir = !!dwc->ep0_expect_in;
-		if (dwc->ep0state == EP0_DATA_PHASE)
-			dwc3_ep0_end_control_data(dwc, dwc->eps[dir]);
-		else
-			dwc3_ep0_end_control_data(dwc, dwc->eps[!dir]);
-		dwc->eps[0]->trb_enqueue = 0;
-		dwc->eps[1]->trb_enqueue = 0;
-		dwc3_ep0_stall_and_restart(dwc);
-	}
+	dwc3_ep0_reset_state(dwc);
/*
* In the Synopsys DesignWare Cores USB3 Databook Rev. 3.30a
@@ -4034,6 +4066,10 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
static void dwc3_gadget_wakeup_interrupt(struct dwc3 *dwc)
{
struct dwc3_vendor *vdwc = container_of(dwc, struct dwc3_vendor, dwc);
vdwc->suspended = false;
/*
* TODO take core out of low power mode when that's
* implemented.
@@ -4147,10 +4183,13 @@ static void dwc3_gadget_linksts_change_interrupt(struct dwc3 *dwc,
static void dwc3_gadget_suspend_interrupt(struct dwc3 *dwc,
unsigned int evtinfo)
{
struct dwc3_vendor *vdwc = container_of(dwc, struct dwc3_vendor, dwc);
enum dwc3_link_state next = evtinfo & DWC3_LINK_STATE_MASK;
-	if (dwc->link_state != next && next == DWC3_LINK_STATE_U3)
+	if (!vdwc->suspended && next == DWC3_LINK_STATE_U3) {
+		vdwc->suspended = true;
 		dwc3_suspend_gadget(dwc);
+	}
dwc->link_state = next;
}


@@ -1571,8 +1571,11 @@ static int android_setup(struct usb_gadget *gadget,
value = acc_ctrlrequest_composite(cdev, c);
#endif
-	if (value < 0)
+	if (value < 0) {
+		spin_lock_irqsave(&gi->spinlock, flags);
 		value = composite_setup(gadget, c);
+		spin_unlock_irqrestore(&gi->spinlock, flags);
+	}
spin_lock_irqsave(&cdev->lock, flags);
if (c->bRequest == USB_REQ_SET_CONFIGURATION &&


@@ -3620,6 +3620,7 @@ static void ffs_func_unbind(struct usb_configuration *c,
/* Drain any pending AIO completions */
 	drain_workqueue(ffs->io_completion_wq);
+
+	ffs_event_add(ffs, FUNCTIONFS_UNBIND);
if (!--opts->refcnt)
functionfs_unbind(ffs);
@@ -3644,7 +3645,6 @@ static void ffs_func_unbind(struct usb_configuration *c,
func->function.ssp_descriptors = NULL;
 	func->interfaces_nums = NULL;
-
-	ffs_event_add(ffs, FUNCTIONFS_UNBIND);
 }
static struct usb_function *ffs_alloc(struct usb_function_instance *fi)


@@ -915,8 +915,11 @@ static void __gs_console_push(struct gs_console *cons)
}
 	req->length = size;
+
+	spin_unlock_irq(&cons->lock);
 	if (usb_ep_queue(ep, req, GFP_ATOMIC))
 		req->length = 0;
+	spin_lock_irq(&cons->lock);
}
static void gs_console_work(struct work_struct *work)
@@ -1419,10 +1422,19 @@ EXPORT_SYMBOL_GPL(gserial_disconnect);
void gserial_suspend(struct gserial *gser)
{
-	struct gs_port *port = gser->ioport;
+	struct gs_port *port;
 	unsigned long flags;

-	spin_lock_irqsave(&port->port_lock, flags);
+	spin_lock_irqsave(&serial_port_lock, flags);
+	port = gser->ioport;
+
+	if (!port) {
+		spin_unlock_irqrestore(&serial_port_lock, flags);
+		return;
+	}
+
+	spin_lock(&port->port_lock);
+	spin_unlock(&serial_port_lock);
+
 	port->suspended = true;
 	spin_unlock_irqrestore(&port->port_lock, flags);
}
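The gserial_suspend() change above is a classic lock-ordering fix: take the global lock that protects the pointer, validate it, then hand off to the per-object lock before releasing the global one. A hypothetical userspace analogue with pthreads (the struct layout and return values are invented for illustration):

```c
#include <pthread.h>
#include <stddef.h>

/* Toy analogue of the gserial_suspend() fix: a global lock guards the
 * ioport pointer, which a disconnect path may set to NULL. The original
 * bug was dereferencing the pointer before checking it under that lock. */
static pthread_mutex_t serial_port_lock = PTHREAD_MUTEX_INITIALIZER;

struct port {
	pthread_mutex_t port_lock;
	int suspended;
};

struct serial {
	struct port *ioport;	/* cleared under serial_port_lock on disconnect */
};

static int serial_suspend(struct serial *ser)
{
	struct port *port;

	pthread_mutex_lock(&serial_port_lock);
	port = ser->ioport;
	if (!port) {	/* already disconnected: nothing to suspend */
		pthread_mutex_unlock(&serial_port_lock);
		return 0;
	}
	pthread_mutex_lock(&port->port_lock);	 /* pin the port first... */
	pthread_mutex_unlock(&serial_port_lock); /* ...then drop the global lock */
	port->suspended = 1;
	pthread_mutex_unlock(&port->port_lock);
	return 1;
}
```

Taking `port_lock` before dropping `serial_port_lock` is what keeps the port from being freed in the window between the two locks.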


@@ -386,6 +386,9 @@ static void uvcg_video_pump(struct work_struct *work)
struct uvc_buffer *buf;
unsigned long flags;
int ret;
bool buf_int;
/* video->max_payload_size is only set when using bulk transfer */
bool is_bulk = video->max_payload_size;
while (video->ep->enabled) {
/*
@@ -408,20 +411,35 @@ static void uvcg_video_pump(struct work_struct *work)
*/
spin_lock_irqsave(&queue->irqlock, flags);
buf = uvcg_queue_head(queue);
-		if (buf == NULL) {
+		if (buf != NULL) {
+			video->encode(req, video, buf);
+			/* Always interrupt for the last request of a video buffer */
+			buf_int = buf->state == UVC_BUF_STATE_DONE;
+		} else if (!(queue->flags & UVC_QUEUE_DISCONNECTED) && !is_bulk) {
+			/*
+			 * No video buffer available; the queue is still connected and
+			 * we're transferring over ISOC. Queue a 0 length request to
+			 * prevent missed ISOC transfers.
+			 */
+			req->length = 0;
+			buf_int = false;
+		} else {
+			/*
+			 * Either the queue has been disconnected or no video buffer
+			 * is available for bulk transfer. Either way, stop processing
+			 * further.
+			 */
 			spin_unlock_irqrestore(&queue->irqlock, flags);
 			break;
 		}
-
-		video->encode(req, video, buf);
/*
* With usb3 we have more requests. This will decrease the
* interrupt load to a quarter but also catches the corner
* cases, which needs to be handled.
*/
-		if (list_empty(&video->req_free) ||
-		    buf->state == UVC_BUF_STATE_DONE ||
+		if (list_empty(&video->req_free) || buf_int ||
!(video->req_int_count %
DIV_ROUND_UP(video->uvc_num_requests, 4))) {
video->req_int_count = 0;
@@ -441,8 +459,7 @@ static void uvcg_video_pump(struct work_struct *work)
/* Endpoint now owns the request */
req = NULL;
-		if (buf->state != UVC_BUF_STATE_DONE)
-			video->req_int_count++;
+		video->req_int_count++;
}
if (!req)
@@ -527,4 +544,3 @@ int uvcg_video_init(struct uvc_video *video, struct uvc_device *uvc)
V4L2_BUF_TYPE_VIDEO_OUTPUT, &video->mutex);
return 0;
}


@@ -6,6 +6,8 @@
* Author: Felipe Balbi <balbi@ti.com>
*/
#define pr_fmt(fmt) "UDC core: " fmt
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/device.h>
@@ -1043,12 +1045,16 @@ EXPORT_SYMBOL_GPL(usb_gadget_set_state);
/* ------------------------------------------------------------------------- */
-static void usb_udc_connect_control(struct usb_udc *udc)
+static int usb_udc_connect_control(struct usb_udc *udc)
 {
+	int ret;
+
 	if (udc->vbus)
-		usb_gadget_connect(udc->gadget);
+		ret = usb_gadget_connect(udc->gadget);
 	else
-		usb_gadget_disconnect(udc->gadget);
+		ret = usb_gadget_disconnect(udc->gadget);
+
+	return ret;
 }
/**
@@ -1503,15 +1509,26 @@ static int udc_bind_to_driver(struct usb_udc *udc, struct usb_gadget_driver *dri
if (ret)
goto err1;
 	ret = usb_gadget_udc_start(udc);
-	if (ret) {
-		driver->unbind(udc->gadget);
-		goto err1;
-	}
+	if (ret)
+		goto err_start;
+
 	usb_gadget_enable_async_callbacks(udc);
-	usb_udc_connect_control(udc);
+	ret = usb_udc_connect_control(udc);
+	if (ret)
+		goto err_connect_control;
kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
return 0;
err_connect_control:
usb_gadget_disable_async_callbacks(udc);
if (udc->gadget->irq)
synchronize_irq(udc->gadget->irq);
usb_gadget_udc_stop(udc);
err_start:
driver->unbind(udc->gadget);
err1:
if (ret != -EISNAM)
dev_err(&udc->dev, "failed to start %s: %d\n",
@@ -1523,7 +1540,7 @@ err1:
int usb_gadget_probe_driver(struct usb_gadget_driver *driver)
{
-	struct usb_udc *udc = NULL;
+	struct usb_udc *udc = NULL, *iter;
int ret = -ENODEV;
if (!driver || !driver->bind || !driver->setup)
@@ -1531,10 +1548,12 @@ int usb_gadget_probe_driver(struct usb_gadget_driver *driver)
mutex_lock(&udc_lock);
if (driver->udc_name) {
-		list_for_each_entry(udc, &udc_list, list) {
-			ret = strcmp(driver->udc_name, dev_name(&udc->dev));
-			if (!ret)
-				break;
+		list_for_each_entry(iter, &udc_list, list) {
+			ret = strcmp(driver->udc_name, dev_name(&iter->dev));
+			if (ret)
+				continue;
+			udc = iter;
+			break;
 		}
if (ret)
ret = -ENODEV;
@@ -1543,23 +1562,25 @@ int usb_gadget_probe_driver(struct usb_gadget_driver *driver)
else
goto found;
} else {
-		list_for_each_entry(udc, &udc_list, list) {
+		list_for_each_entry(iter, &udc_list, list) {
 			/* For now we take the first one */
-			if (!udc->driver)
-				goto found;
+			if (iter->driver)
+				continue;
+			udc = iter;
+			goto found;
 		}
}
if (!driver->match_existing_only) {
list_add_tail(&driver->pending, &gadget_driver_pending_list);
-		pr_info("udc-core: couldn't find an available UDC - added [%s] to list of pending drivers\n",
+		pr_info("couldn't find an available UDC - added [%s] to list of pending drivers\n",
 			driver->function);
ret = 0;
}
mutex_unlock(&udc_lock);
 	if (ret)
-		pr_warn("udc-core: couldn't find an available UDC or it's busy\n");
+		pr_warn("couldn't find an available UDC or it's busy: %d\n", ret);
return ret;
found:
ret = udc_bind_to_driver(udc, driver);
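The `list_for_each_entry()` rework above exists because the loop cursor never becomes NULL: when the walk finds nothing, `udc` would point at bogus memory computed from the list head. Using a separate iterator and only publishing it on a match sidesteps that. A sketch of the same idiom with a plain singly linked list (the names are invented, not the kernel list API):

```c
#include <stddef.h>
#include <string.h>

/* After an unsuccessful walk the cursor is not NULL in the kernel's
 * list_for_each_entry(); publishing into 'found' only on a real match
 * gives the caller a trustworthy NULL-or-entry result. */
struct udc {
	const char *name;
	struct udc *next;
};

static struct udc *find_udc(struct udc *head, const char *name)
{
	struct udc *found = NULL, *iter;

	for (iter = head; iter; iter = iter->next) {
		if (strcmp(iter->name, name))
			continue;
		found = iter;	/* publish only on a real match */
		break;
	}
	return found;		/* stays NULL when nothing matched */
}
```

This is the same reasoning behind the upstream "use a dedicated list iterator variable" series: never use the loop cursor after the loop.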


@@ -80,20 +80,16 @@ DECLARE_EVENT_CLASS(xhci_log_ctx,
__field(dma_addr_t, ctx_dma)
__field(u8 *, ctx_va)
 		__field(unsigned, ctx_ep_num)
-		__field(int, slot_id)
__dynamic_array(u32, ctx_data,
((HCC_64BYTE_CONTEXT(xhci->hcc_params) + 1) * 8) *
((ctx->type == XHCI_CTX_TYPE_INPUT) + ep_num + 1))
),
 	TP_fast_assign(
-		struct usb_device *udev;
-
-		udev = to_usb_device(xhci_to_hcd(xhci)->self.controller);
 		__entry->ctx_64 = HCC_64BYTE_CONTEXT(xhci->hcc_params);
 		__entry->ctx_type = ctx->type;
 		__entry->ctx_dma = ctx->dma;
 		__entry->ctx_va = ctx->bytes;
-		__entry->slot_id = udev->slot_id;
 		__entry->ctx_ep_num = ep_num;
memcpy(__get_dynamic_array(ctx_data), ctx->bytes,
((HCC_64BYTE_CONTEXT(xhci->hcc_params) + 1) * 32) *


@@ -2835,11 +2835,9 @@ static __le16 ext4_group_desc_csum(struct super_block *sb, __u32 block_group,
crc = crc16(crc, (__u8 *)gdp, offset);
offset += sizeof(gdp->bg_checksum); /* skip checksum */
/* for checksum of struct ext4_group_desc do the rest...*/
-	if (ext4_has_feature_64bit(sb) &&
-	    offset < le16_to_cpu(sbi->s_es->s_desc_size))
+	if (ext4_has_feature_64bit(sb) && offset < sbi->s_desc_size)
 		crc = crc16(crc, (__u8 *)gdp + offset,
-			    le16_to_cpu(sbi->s_es->s_desc_size) -
-			    offset);
+			    sbi->s_desc_size - offset);
out:
return cpu_to_le16(crc);
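ext4_group_desc_csum() checksums the descriptor in two runs, skipping the bg_checksum field itself; the hunk above merely switches to the cached, CPU-byte-order s_desc_size. The skip-the-field shape, sketched in userspace with an ad-hoc CRC-16 (ext4's table-driven crc16 and the real descriptor layout differ; the struct here is invented):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Reflected CRC-16 (0xA001 polynomial) - a stand-in for the kernel's
 * crc16(); only the two-pass, skip-a-field shape matters here. */
static uint16_t crc16(uint16_t crc, const uint8_t *p, size_t len)
{
	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0xA001 : 0);
	}
	return crc;
}

struct desc {
	uint8_t  before[30];
	uint16_t checksum;	/* excluded from its own checksum */
	uint8_t  after[32];
};

/* Mirrors the ext4_group_desc_csum() shape: crc the bytes before the
 * checksum field, skip the field, then crc whatever the descriptor
 * size still covers beyond it. */
static uint16_t desc_csum(const struct desc *d, size_t desc_size)
{
	size_t offset = offsetof(struct desc, checksum);
	uint16_t crc = crc16(0xFFFF, (const uint8_t *)d, offset);

	offset += sizeof(d->checksum);	/* skip the checksum itself */
	if (offset < desc_size)
		crc = crc16(crc, (const uint8_t *)d + offset,
			    desc_size - offset);
	return crc;
}
```

Skipping the field is what lets the stored checksum be validated against a recomputation over the same bytes.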


@@ -2563,6 +2563,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
.in_inode = !!entry->e_value_inum,
};
struct ext4_xattr_ibody_header *header = IHDR(inode, raw_inode);
int needs_kvfree = 0;
int error;
is = kzalloc(sizeof(struct ext4_xattr_ibody_find), GFP_NOFS);
@@ -2585,7 +2586,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
error = -ENOMEM;
goto out;
}
needs_kvfree = 1;
error = ext4_xattr_inode_get(inode, entry, buffer, value_size);
if (error)
goto out;
@@ -2624,7 +2625,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
out:
kfree(b_entry_name);
-	if (entry->e_value_inum && buffer)
+	if (needs_kvfree && buffer)
kvfree(buffer);
if (is)
brelse(is->iloc.bh);


@@ -642,6 +642,54 @@ static void release_victim_entry(struct f2fs_sb_info *sbi)
f2fs_bug_on(sbi, !list_empty(&am->victim_list));
}
static bool f2fs_pin_section(struct f2fs_sb_info *sbi, unsigned int segno)
{
struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
if (!dirty_i->enable_pin_section)
return false;
if (!test_and_set_bit(secno, dirty_i->pinned_secmap))
dirty_i->pinned_secmap_cnt++;
return true;
}
static bool f2fs_pinned_section_exists(struct dirty_seglist_info *dirty_i)
{
return dirty_i->pinned_secmap_cnt;
}
static bool f2fs_section_is_pinned(struct dirty_seglist_info *dirty_i,
unsigned int secno)
{
return dirty_i->enable_pin_section &&
f2fs_pinned_section_exists(dirty_i) &&
test_bit(secno, dirty_i->pinned_secmap);
}
static void f2fs_unpin_all_sections(struct f2fs_sb_info *sbi, bool enable)
{
unsigned int bitmap_size = f2fs_bitmap_size(MAIN_SECS(sbi));
if (f2fs_pinned_section_exists(DIRTY_I(sbi))) {
memset(DIRTY_I(sbi)->pinned_secmap, 0, bitmap_size);
DIRTY_I(sbi)->pinned_secmap_cnt = 0;
}
DIRTY_I(sbi)->enable_pin_section = enable;
}
static int f2fs_gc_pinned_control(struct inode *inode, int gc_type,
unsigned int segno)
{
if (!f2fs_is_pinned_file(inode))
return 0;
if (gc_type != FG_GC)
return -EBUSY;
if (!f2fs_pin_section(F2FS_I_SB(inode), segno))
f2fs_pin_file_control(inode, true);
return -EAGAIN;
}
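f2fs_pin_section() above keeps pinned_secmap_cnt consistent with the bitmap by counting only on the 0-to-1 transition (the test_and_set_bit pattern), and f2fs_unpin_all_sections() resets both together. A compact userspace model of that accounting (one 64-bit word stands in for the kvzalloc'd secmap; names only loosely shadow the kernel ones):

```c
/* Counter moves only on a 0 -> 1 bit transition, so it always equals
 * the number of set bits; reset clears both together. */
static unsigned long long pinned_secmap;
static unsigned int pinned_secmap_cnt;
static int enable_pin_section = 1;

static int pin_section(unsigned int secno)
{
	if (!enable_pin_section)
		return 0;	/* caller falls back to pin-file control */
	if (!(pinned_secmap & (1ULL << secno))) {
		pinned_secmap |= 1ULL << secno;
		pinned_secmap_cnt++;
	}
	return 1;
}

static void unpin_all_sections(int enable)
{
	pinned_secmap = 0;
	pinned_secmap_cnt = 0;
	enable_pin_section = enable;
}
```

Keeping the counter in lockstep with the bitmap is what makes the cheap `pinned_secmap_cnt` existence check in victim selection safe.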
/*
* This function is called from two paths.
* One is garbage collection and the other is SSR segment selection.
@@ -783,6 +831,9 @@ retry:
if (gc_type == BG_GC && test_bit(secno, dirty_i->victim_secmap))
goto next;
if (gc_type == FG_GC && f2fs_section_is_pinned(dirty_i, secno))
goto next;
if (is_atgc) {
add_victim_entry(sbi, &p, segno);
goto next;
@@ -1212,11 +1263,9 @@ static int move_data_block(struct inode *inode, block_t bidx,
goto out;
}
-	if (f2fs_is_pinned_file(inode)) {
-		f2fs_pin_file_control(inode, true);
-		err = -EAGAIN;
+	err = f2fs_gc_pinned_control(inode, gc_type, segno);
+	if (err)
 		goto out;
-	}
set_new_dnode(&dn, inode, NULL, NULL, 0);
err = f2fs_get_dnode_of_data(&dn, bidx, LOOKUP_NODE);
@@ -1361,12 +1410,9 @@ static int move_data_page(struct inode *inode, block_t bidx, int gc_type,
err = -EAGAIN;
goto out;
}
-	if (f2fs_is_pinned_file(inode)) {
-		if (gc_type == FG_GC)
-			f2fs_pin_file_control(inode, true);
-		err = -EAGAIN;
+	err = f2fs_gc_pinned_control(inode, gc_type, segno);
+	if (err)
 		goto out;
-	}
if (gc_type == BG_GC) {
if (PageWriteback(page)) {
@@ -1487,11 +1533,19 @@ next_step:
ofs_in_node = le16_to_cpu(entry->ofs_in_node);
if (phase == 3) {
int err;
inode = f2fs_iget(sb, dni.ino);
if (IS_ERR(inode) || is_bad_inode(inode) ||
special_file(inode->i_mode))
continue;
err = f2fs_gc_pinned_control(inode, gc_type, segno);
if (err == -EAGAIN) {
iput(inode);
return submitted;
}
if (!f2fs_down_write_trylock(
&F2FS_I(inode)->i_gc_rwsem[WRITE])) {
iput(inode);
@@ -1772,9 +1826,17 @@ gc_more:
ret = -EINVAL;
goto stop;
}
+retry:
 	ret = __get_victim(sbi, &segno, gc_type);
-	if (ret)
+	if (ret) {
+		/* allow to search for victims in sections that have pinned data */
+		if (ret == -ENODATA && gc_type == FG_GC &&
+				f2fs_pinned_section_exists(DIRTY_I(sbi))) {
+			f2fs_unpin_all_sections(sbi, false);
+			goto retry;
+		}
 		goto stop;
+	}
seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type, force);
if (gc_type == FG_GC &&
@@ -1817,6 +1879,9 @@ stop:
SIT_I(sbi)->last_victim[ALLOC_NEXT] = 0;
SIT_I(sbi)->last_victim[FLUSH_DEVICE] = init_segno;
if (gc_type == FG_GC)
f2fs_unpin_all_sections(sbi, true);
trace_f2fs_gc_end(sbi->sb, ret, total_freed, sec_freed,
get_pages(sbi, F2FS_DIRTY_NODES),
get_pages(sbi, F2FS_DIRTY_DENTS),


@@ -4774,6 +4774,13 @@ static int init_victim_secmap(struct f2fs_sb_info *sbi)
dirty_i->victim_secmap = f2fs_kvzalloc(sbi, bitmap_size, GFP_KERNEL);
if (!dirty_i->victim_secmap)
return -ENOMEM;
dirty_i->pinned_secmap = f2fs_kvzalloc(sbi, bitmap_size, GFP_KERNEL);
if (!dirty_i->pinned_secmap)
return -ENOMEM;
dirty_i->pinned_secmap_cnt = 0;
dirty_i->enable_pin_section = true;
return 0;
}
@@ -5362,6 +5369,7 @@ static void destroy_victim_secmap(struct f2fs_sb_info *sbi)
{
struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
kvfree(dirty_i->pinned_secmap);
kvfree(dirty_i->victim_secmap);
}


@@ -295,6 +295,9 @@ struct dirty_seglist_info {
struct mutex seglist_lock; /* lock for segment bitmaps */
int nr_dirty[NR_DIRTY_TYPE]; /* # of dirty segments */
unsigned long *victim_secmap; /* background GC victims */
unsigned long *pinned_secmap; /* pinned victims from foreground GC */
unsigned int pinned_secmap_cnt; /* count of victims which has pinned data */
bool enable_pin_section; /* enable pinning section */
};
/* victim selection function for cleaning and SSR */


@@ -1172,8 +1172,6 @@ int fuse_handle_backing(struct fuse_entry_bpf *feb, struct inode **backing_inode
path_put(backing_path);
*backing_path = backing_file->f_path;
path_get(backing_path);
fput(backing_file);
break;
}
@@ -1187,39 +1185,36 @@ int fuse_handle_backing(struct fuse_entry_bpf *feb, struct inode **backing_inode
int fuse_handle_bpf_prog(struct fuse_entry_bpf *feb, struct inode *parent,
struct bpf_prog **bpf)
{
-	struct bpf_prog *new_bpf;
-
-	/* Parent isn't presented, but we want to keep
-	 * Don't touch bpf program at all in this case
-	 */
-	if (feb->out.bpf_action == FUSE_ACTION_KEEP && !parent)
-		return 0;
+	struct bpf_prog *new_bpf = NULL;

 	switch (feb->out.bpf_action) {
 	case FUSE_ACTION_KEEP: {
-		struct fuse_inode *pi = get_fuse_inode(parent);
+		/* Parent isn't present, but the action is to keep the bpf
+		 * program. Don't touch the program at all in this case.
+		 */
+		if (!parent)
+			return 0;

-		new_bpf = pi->bpf;
+		new_bpf = get_fuse_inode(parent)->bpf;
 		if (new_bpf)
 			bpf_prog_inc(new_bpf);
 		break;
}
 	case FUSE_ACTION_REMOVE:
-		new_bpf = NULL;
 		break;
 	case FUSE_ACTION_REPLACE: {
 		struct file *bpf_file = feb->bpf_file;
-		struct bpf_prog *bpf_prog = ERR_PTR(-EINVAL);

-		if (bpf_file && !IS_ERR(bpf_file))
-			bpf_prog = fuse_get_bpf_prog(bpf_file);
+		if (!bpf_file)
+			return -EINVAL;
+		if (IS_ERR(bpf_file))
+			return PTR_ERR(bpf_file);

-		if (IS_ERR(bpf_prog))
-			return PTR_ERR(bpf_prog);
-
-		new_bpf = bpf_prog;
+		new_bpf = fuse_get_bpf_prog(bpf_file);
+		if (IS_ERR(new_bpf))
+			return PTR_ERR(new_bpf);
 		break;
 	}
@@ -1228,11 +1223,14 @@ int fuse_handle_bpf_prog(struct fuse_entry_bpf *feb, struct inode *parent,
}
/* Cannot change existing program */
-	if (*bpf) {
+	if (*bpf && new_bpf) {
 		bpf_prog_put(new_bpf);
 		return new_bpf == *bpf ? 0 : -EINVAL;
 	}

+	if (*bpf)
+		bpf_prog_put(*bpf);
+
 	*bpf = new_bpf;
return 0;
}
@@ -1249,36 +1247,55 @@ struct dentry *fuse_lookup_finalize(struct fuse_bpf_args *fa, struct inode *dir,
struct fuse_entry_bpf *feb = container_of(febo, struct fuse_entry_bpf, out);
int error = -1;
u64 target_nodeid = 0;
+	struct dentry *ret;

 	fd = get_fuse_dentry(entry);
-	if (!fd)
-		return ERR_PTR(-EIO);
+	if (!fd) {
+		ret = ERR_PTR(-EIO);
+		goto out;
+	}

 	bd = fd->backing_path.dentry;
-	if (!bd)
-		return ERR_PTR(-ENOENT);
+	if (!bd) {
+		ret = ERR_PTR(-ENOENT);
+		goto out;
+	}

 	backing_inode = bd->d_inode;
-	if (!backing_inode)
-		return 0;
+	if (!backing_inode) {
+		ret = 0;
+		goto out;
+	}

 	if (d_inode)
 		target_nodeid = get_fuse_inode(d_inode)->nodeid;

 	inode = fuse_iget_backing(dir->i_sb, target_nodeid, backing_inode);
-	if (IS_ERR(inode))
-		return ERR_PTR(PTR_ERR(inode));
+	if (IS_ERR(inode)) {
+		ret = ERR_PTR(PTR_ERR(inode));
+		goto out;
+	}

 	error = fuse_handle_bpf_prog(feb, dir, &get_fuse_inode(inode)->bpf);
-	if (error)
-		return ERR_PTR(error);
+	if (error) {
+		ret = ERR_PTR(error);
+		goto out;
+	}

 	error = fuse_handle_backing(feb, &get_fuse_inode(inode)->backing_inode, &fd->backing_path);
-	if (error)
-		return ERR_PTR(error);
+	if (error) {
+		ret = ERR_PTR(error);
+		goto out;
+	}

 	get_fuse_inode(inode)->nodeid = feo->nodeid;
-	return d_splice_alias(inode, entry);
+	ret = d_splice_alias(inode, entry);
+out:
+	if (feb->backing_file)
+		fput(feb->backing_file);
+	return ret;
 }
int fuse_revalidate_backing(struct dentry *entry, unsigned int flags)


@@ -1941,7 +1941,7 @@ static ssize_t fuse_dev_do_write(struct fuse_dev *fud,
err = copy_out_args(cs, req->args, nbytes);
fuse_copy_finish(cs);
-	if (!err && req->in.h.opcode == FUSE_CANONICAL_PATH) {
+	if (!err && req->in.h.opcode == FUSE_CANONICAL_PATH && !oh.error) {
char *path = (char *)req->args->out_args[0].value;
path[req->args->out_args[0].size - 1] = 0;


@@ -186,8 +186,10 @@ static bool backing_data_changed(struct fuse_inode *fi, struct dentry *entry,
int err;
bool ret = true;
-	if (!entry)
-		return false;
+	if (!entry) {
+		ret = false;
+		goto put_backing_file;
+	}
get_fuse_backing_path(entry, &new_backing_path);
new_backing_inode = fi->backing_inode;
@@ -210,6 +212,9 @@ put_bpf:
put_inode:
iput(new_backing_inode);
path_put(&new_backing_path);
put_backing_file:
if (bpf_arg->backing_file)
fput(bpf_arg->backing_file);
return ret;
}
#endif
@@ -416,13 +421,18 @@ static void fuse_dentry_canonical_path(const struct path *path,
fuse_canonical_path_backing,
fuse_canonical_path_finalize, path,
canonical_path);
-	if (fer.ret)
+	if (fer.ret) {
+		if (IS_ERR(fer.result))
+			canonical_path->dentry = fer.result;
 		return;
+	}
 #endif
 	path_name = (char *)get_zeroed_page(GFP_KERNEL);
-	if (!path_name)
-		goto default_path;
+	if (!path_name) {
+		canonical_path->dentry = ERR_PTR(-ENOMEM);
+		return;
+	}
args.opcode = FUSE_CANONICAL_PATH;
args.nodeid = get_node_id(inode);
@@ -437,10 +447,15 @@ static void fuse_dentry_canonical_path(const struct path *path,
free_page((unsigned long)path_name);
if (err > 0)
return;
default_path:
if (err < 0) {
canonical_path->dentry = ERR_PTR(err);
return;
}
canonical_path->dentry = path->dentry;
canonical_path->mnt = path->mnt;
path_get(canonical_path);
return;
}
const struct dentry_operations fuse_dentry_operations = {
@@ -524,7 +539,7 @@ int fuse_lookup_name(struct super_block *sb, u64 nodeid, const struct qstr *name
backing_inode = backing_file->f_inode;
*inode = fuse_iget_backing(sb, outarg->nodeid, backing_inode);
if (!*inode)
goto bpf_arg_out;
goto out;
err = fuse_handle_backing(&bpf_arg,
&get_fuse_inode(*inode)->backing_inode,
@@ -535,8 +550,6 @@ int fuse_lookup_name(struct super_block *sb, u64 nodeid, const struct qstr *name
err = fuse_handle_bpf_prog(&bpf_arg, NULL, &get_fuse_inode(*inode)->bpf);
if (err)
goto out;
bpf_arg_out:
fput(backing_file);
} else
#endif
{
@@ -568,6 +581,8 @@ out_queue_forget:
out_put_forget:
kfree(forget);
out:
if (bpf_arg.backing_file)
fput(bpf_arg.backing_file);
return err;
}

@@ -1884,6 +1884,16 @@ void __exit fuse_bpf_cleanup(void);
ssize_t fuse_bpf_simple_request(struct fuse_mount *fm, struct fuse_bpf_args *args);
static inline int fuse_bpf_run(struct bpf_prog *prog, struct fuse_bpf_args *fba)
{
int ret;
migrate_disable();
ret = bpf_prog_run(prog, fba);
migrate_enable();
return ret;
}
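A userspace sketch of the bracketing pattern `fuse_bpf_run()` introduces: the program invocation is wrapped in a disable/enable pair so the call always runs with the guard held and the guard is always released. Here a plain counter stands in for the `migrate_disable()`/`migrate_enable()` state; the names are illustrative only:

```c
#include <assert.h>

static int guard_depth;	/* stands in for the migrate-disable nesting depth */

/* Mirrors the shape of fuse_bpf_run(): take the guard, run the
 * program, release the guard, hand back the program's result. */
static int run_guarded(int (*prog)(int), int arg)
{
	int ret;

	guard_depth++;		/* migrate_disable() */
	ret = prog(arg);
	guard_depth--;		/* migrate_enable() */
	return ret;
}

static int double_it(int x)
{
	/* The "program" can rely on the guard being held while it runs. */
	return guard_depth > 0 ? 2 * x : -1;
}
```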
/*
* expression statement to wrap the backing filter logic
* struct inode *inode: inode with bpf and backing inode
@@ -1935,7 +1945,7 @@ ssize_t fuse_bpf_simple_request(struct fuse_mount *fm, struct fuse_bpf_args *arg
fa.out_numargs = fa.in_numargs; \
\
ext_flags = fuse_inode->bpf ? \
bpf_prog_run(fuse_inode->bpf, &fa) : \
fuse_bpf_run(fuse_inode->bpf, &fa) : \
FUSE_BPF_BACKING; \
if (ext_flags < 0) { \
fer = (struct fuse_err_ret) { \
@@ -1990,7 +2000,7 @@ ssize_t fuse_bpf_simple_request(struct fuse_mount *fm, struct fuse_bpf_args *arg
.size = fa.out_args[i].size, \
.value = fa.out_args[i].value, \
}; \
ext_flags = bpf_prog_run(fuse_inode->bpf, &fa); \
ext_flags = fuse_bpf_run(fuse_inode->bpf, &fa); \
if (ext_flags < 0) { \
fer = (struct fuse_err_ret) { \
ERR_PTR(ext_flags), \

@@ -34,12 +34,14 @@ DECLARE_FEATURE_FLAG(corefs);
DECLARE_FEATURE_FLAG(zstd);
DECLARE_FEATURE_FLAG(v2);
DECLARE_FEATURE_FLAG(bugfix_throttling);
DECLARE_FEATURE_FLAG(bugfix_inode_eviction);
static struct attribute *attributes[] = {
&corefs_attr.attr,
&zstd_attr.attr,
&v2_attr.attr,
&bugfix_throttling_attr.attr,
&bugfix_inode_eviction_attr.attr,
NULL,
};

@@ -1945,6 +1945,13 @@ void incfs_kill_sb(struct super_block *sb)
pr_debug("incfs: unmount\n");
/*
* We must kill the super before freeing mi, since killing the super
* triggers inode eviction, which triggers the final update of the
* backing file, which uses certain information from mi
*/
kill_anon_super(sb);
if (mi) {
if (mi->mi_backing_dir_path.dentry)
dinode = d_inode(mi->mi_backing_dir_path.dentry);
@@ -1962,7 +1969,6 @@ void incfs_kill_sb(struct super_block *sb)
incfs_free_mount_info(mi);
sb->s_fs_info = NULL;
}
kill_anon_super(sb);
}
static int show_options(struct seq_file *m, struct dentry *root)
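The reordering above moves `kill_anon_super()` ahead of `incfs_free_mount_info(mi)` because super-block teardown still dereferences `mi` (inode eviction triggers the final backing-file update). A hypothetical sketch of that teardown-ordering rule, with illustrative names: run the consumer before freeing what it consumes.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for struct mount_info *mi. */
struct mount_info_sketch {
	int backing_updates;
};

/* Teardown that still uses mi, like the inode eviction triggered by
 * kill_anon_super(): it must run while mi is still valid. */
static void kill_super_sketch(struct mount_info_sketch *mi)
{
	mi->backing_updates++;	/* the final update of the backing file */
}

static int unmount_sketch(void)
{
	struct mount_info_sketch *mi = calloc(1, sizeof(*mi));
	int updates;

	kill_super_sketch(mi);	/* correct order: use first ... */
	updates = mi->backing_updates;
	free(mi);		/* ... then free */
	return updates;
}
```

With the old order (free before kill), the teardown would have read freed memory, which is exactly the class of use-after-free this merge fixes in several drivers.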

@@ -4121,9 +4121,9 @@ static int do_mount_setattr(struct path *path, struct mount_kattr *kattr)
unlock_mount_hash();
if (kattr->propagation) {
namespace_unlock();
if (err)
cleanup_group_ids(mnt, NULL);
namespace_unlock();
}
return err;

@@ -712,7 +712,7 @@ SYSCALL_DEFINE3(inotify_add_watch, int, fd, const char __user *, pathname,
struct fsnotify_group *group;
struct inode *inode;
struct path path;
struct path alteredpath;
struct path alteredpath = {};
struct path *canonical_path = &path;
struct fd f;
int ret;
@@ -765,6 +765,11 @@ SYSCALL_DEFINE3(inotify_add_watch, int, fd, const char __user *, pathname,
if (path.dentry->d_op->d_canonical_path) {
path.dentry->d_op->d_canonical_path(&path,
&alteredpath);
if (IS_ERR(alteredpath.dentry)) {
ret = PTR_ERR(alteredpath.dentry);
goto path_put_and_out;
}
canonical_path = &alteredpath;
path_put(&path);
}
@@ -776,6 +781,7 @@ SYSCALL_DEFINE3(inotify_add_watch, int, fd, const char __user *, pathname,
/* create/update an inode mark */
ret = inotify_update_watch(group, inode, mask);
path_put_and_out:
path_put(canonical_path);
fput_and_out:
fdput(f);
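The `struct path alteredpath = {};` change matters because the new `path_put_and_out` error path can run before `d_canonical_path()` ever fills the struct; `path_put()` on a zeroed path must then be a harmless no-op. A sketch with a minimal stand-in for `struct path`:

```c
#include <assert.h>

/* Minimal stand-in for struct path; not the kernel definition. */
struct path_sketch {
	void *mnt;
	void *dentry;
};

static int put_count;

/* Like path_put(): dropping a reference only makes sense if one is
 * held, so a zero-initialized path is treated as a safe no-op. */
static void path_put_sketch(struct path_sketch *p)
{
	if (p->dentry) {
		put_count++;
		p->dentry = 0;
	}
}
```

Without the `= {}` initializer, the shared cleanup path would read indeterminate stack contents and could "put" a reference that was never taken.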

@@ -7,10 +7,9 @@
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/uaccess.h>
#include <linux/rtmutex.h>
#include "internal.h"
static DEFINE_RT_MUTEX(pmsg_lock);
static DEFINE_MUTEX(pmsg_lock);
static ssize_t write_pmsg(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
@@ -29,9 +28,9 @@ static ssize_t write_pmsg(struct file *file, const char __user *buf,
if (!access_ok(buf, count))
return -EFAULT;
rt_mutex_lock(&pmsg_lock);
mutex_lock(&pmsg_lock);
ret = psinfo->write_user(&record, buf);
rt_mutex_unlock(&pmsg_lock);
mutex_unlock(&pmsg_lock);
return ret ? ret : count;
}

@@ -104,8 +104,8 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key,
int blk_crypto_start_using_key(const struct blk_crypto_key *key,
struct request_queue *q);
int blk_crypto_evict_key(struct request_queue *q,
const struct blk_crypto_key *key);
void blk_crypto_evict_key(struct request_queue *q,
const struct blk_crypto_key *key);
bool blk_crypto_config_supported(struct request_queue *q,
const struct blk_crypto_config *cfg);

@@ -73,12 +73,6 @@ static inline int eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n,
return -ENOSYS;
}
static inline int eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n,
unsigned mask)
{
return -ENOSYS;
}
static inline void eventfd_ctx_put(struct eventfd_ctx *ctx)
{

@@ -96,6 +96,28 @@ static inline int fsnotify_file(struct file *file, __u32 mask)
if (file->f_mode & FMODE_NONOTIFY)
return 0;
/*
* Open calls notify early on, so lower file system must be notified
*/
if (mask & FS_OPEN) {
if (path->dentry->d_op &&
path->dentry->d_op->d_canonical_path) {
struct path lower_path = {};
int ret;
path->dentry->d_op->d_canonical_path(path, &lower_path);
if (IS_ERR(lower_path.dentry))
return PTR_ERR(lower_path.dentry);
ret = fsnotify_parent(lower_path.dentry, mask,
&lower_path, FSNOTIFY_EVENT_PATH);
path_put(&lower_path);
if (ret)
return ret;
}
}
return fsnotify_parent(path->dentry, mask, path, FSNOTIFY_EVENT_PATH);
}

@@ -122,7 +122,11 @@ extern void kobject_put(struct kobject *kobj);
extern const void *kobject_namespace(struct kobject *kobj);
extern void kobject_get_ownership(struct kobject *kobj,
kuid_t *uid, kgid_t *gid);
#ifdef __GENKSYMS__ // ANDROID KABI CRC preservation
extern char *kobject_get_path(struct kobject *kobj, gfp_t flag);
#else
extern char *kobject_get_path(const struct kobject *kobj, gfp_t flag);
#endif
/**
* kobject_has_children - Returns whether a kobject has children.

@@ -89,6 +89,7 @@ struct page {
* by the page owner.
*/
struct list_head lru;
/* See page-flags.h for PAGE_MAPPING_FLAGS */
struct address_space *mapping;
pgoff_t index; /* Our offset within mapping. */

@@ -437,17 +437,17 @@ enum {
struct lru_gen_mm_state {
/* set to max_seq after each iteration */
unsigned long seq;
/* where the current iteration continues (inclusive) */
/* where the current iteration continues after */
struct list_head *head;
/* where the last iteration ended (exclusive) */
/* where the last iteration ended before */
struct list_head *tail;
/* to wait for the last page table walker to finish */
/* Unused - keep for ABI compatibility */
struct wait_queue_head wait;
/* Bloom filters flip after each iteration */
unsigned long *filters[NR_BLOOM_FILTERS];
/* the mm stats for debugging */
unsigned long stats[NR_HIST_GENS][NR_MM_STATS];
/* the number of concurrent page table walkers */
/* Unused - keep for ABI compatibility */
int nr_walkers;
};
@@ -583,6 +583,11 @@ struct per_cpu_pages {
struct list_head lists[NR_PCP_LISTS];
};
struct per_cpu_pages_ext {
spinlock_t lock; /* Protects pcp.lists field */
struct per_cpu_pages pcp;
};
struct per_cpu_zonestat {
#ifdef CONFIG_SMP
s8 vm_stat_diff[NR_VM_ZONE_STAT_ITEMS];

@@ -109,7 +109,6 @@ static inline bool percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
{
_trace_android_vh_record_pcpu_rwsem_starttime(current, 0);
rwsem_release(&sem->dep_map, _RET_IP_);
preempt_disable();
@@ -132,6 +131,7 @@ static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
this_cpu_dec(*sem->read_count);
rcuwait_wake_up(&sem->writer);
}
_trace_android_vh_record_pcpu_rwsem_starttime(current, 0);
preempt_enable();
}

@@ -42,13 +42,7 @@ struct anon_vma {
*/
atomic_t refcount;
/*
* Count of child anon_vmas and VMAs which points to this anon_vma.
*
* This counter is used for making decision about reusing anon_vma
* instead of forking new one. See comments in function anon_vma_clone.
*/
unsigned degree;
unsigned degree; /* ANDROID: KABI preservation, DO NOT USE! */
struct anon_vma *parent; /* Parent of this anon_vma */
@@ -63,6 +57,25 @@ struct anon_vma {
/* Interval tree of private "related" vmas */
struct rb_root_cached rb_root;
/*
* ANDROID: KABI preservation, it's safe to put these at the end of this structure as it's
* only passed by a pointer everywhere, the size and internal structures are local to the
* core kernel.
*/
#ifndef __GENKSYMS__
/*
* Count of child anon_vmas. Equals to the count of all anon_vmas that
* have ->parent pointing to this one, including itself.
*
* This counter is used for making decision about reusing anon_vma
* instead of forking new one. See comments in function anon_vma_clone.
*/
unsigned long num_children;
/* Count of VMAs whose ->anon_vma pointer points to this object. */
unsigned long num_active_vmas;
#endif
};
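The `#ifndef __GENKSYMS__` block appends the replacement counters at the end of `struct anon_vma` so the offsets of every pre-existing field — the layout out-of-tree modules were built against — stay unchanged. A sketch of why append-only extension preserves layout (both struct definitions here are hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical "old" layout that existing modules were built against. */
struct anon_vma_v1 {
	int refcount;
	unsigned degree;
	void *parent;
};

/* "New" layout: fields only appended at the end, never inserted or
 * removed, so every shared field keeps its original offset. */
struct anon_vma_v2 {
	int refcount;
	unsigned degree;	/* kept as padding for KABI, no longer used */
	void *parent;
	unsigned long num_children;
	unsigned long num_active_vmas;
};
```

This is safe here, as the comment in the hunk notes, because the struct is only ever passed by pointer, so its size is not part of the ABI contract.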
/*

@@ -8398,6 +8398,7 @@ void cfg80211_control_port_tx_status(struct wireless_dev *wdev, u64 cookie,
* responsible for any cleanup. The caller must also ensure that
* skb->protocol is set appropriately.
* @unencrypted: Whether the frame was received unencrypted
* @link_id: the link the frame was received on, -1 if not applicable or unknown
*
* This function is used to inform userspace about a received control port
* frame. It should only be used if userspace indicated it wants to receive
@@ -8408,8 +8409,8 @@ void cfg80211_control_port_tx_status(struct wireless_dev *wdev, u64 cookie,
*
* Return: %true if the frame was passed to userspace
*/
bool cfg80211_rx_control_port(struct net_device *dev,
struct sk_buff *skb, bool unencrypted);
bool cfg80211_rx_control_port(struct net_device *dev, struct sk_buff *skb,
bool unencrypted, int link_id);
/**
* cfg80211_cqm_rssi_notify - connection quality monitoring rssi event

@@ -584,6 +584,7 @@ struct nft_set_binding {
};
enum nft_trans_phase;
void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set);
void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
struct nft_set_binding *binding,
enum nft_trans_phase phase);

@@ -72,6 +72,14 @@ DECLARE_HOOK(android_vh_drain_all_pages_bypass,
int migratetype, unsigned long did_some_progress,
bool *bypass),
TP_ARGS(gfp_mask, order, alloc_flags, migratetype, did_some_progress, bypass));
DECLARE_HOOK(android_vh_dm_bufio_shrink_scan_bypass,
TP_PROTO(unsigned long dm_bufio_current_allocated, bool *bypass),
TP_ARGS(dm_bufio_current_allocated, bypass));
DECLARE_HOOK(android_vh_cleanup_old_buffers_bypass,
TP_PROTO(unsigned long dm_bufio_current_allocated,
unsigned long *max_age_hz,
bool *bypass),
TP_ARGS(dm_bufio_current_allocated, max_age_hz, bypass));
DECLARE_HOOK(android_vh_cma_drain_all_pages_bypass,
TP_PROTO(unsigned int migratetype, bool *bypass),
TP_ARGS(migratetype, bypass));
@@ -162,6 +170,13 @@ DECLARE_HOOK(android_vh_madvise_cold_or_pageout,
DECLARE_RESTRICTED_HOOK(android_rvh_ctl_dirty_rate,
TP_PROTO(void *unused),
TP_ARGS(unused), 1);
DECLARE_HOOK(android_vh_rmqueue_smallest_bypass,
TP_PROTO(struct page **page, struct zone *zone, int order, int migratetype),
TP_ARGS(page, zone, order, migratetype));
DECLARE_HOOK(android_vh_free_one_page_bypass,
TP_PROTO(struct page *page, struct zone *zone, int order, int migratetype,
int fpi_flags, bool *bypass),
TP_ARGS(page, zone, order, migratetype, fpi_flags, bypass));
#endif /* _TRACE_HOOK_MM_H */
/* This part must be outside protection */

@@ -808,6 +808,11 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
struct io_tlb_mem *mem = rmem->priv;
unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
dev_err(dev, "Restricted DMA pool must be accessible within the linear mapping.");
return -EINVAL;
}
/*
* Since multiple devices can share the same pool, the private data,
* io_tlb_mem struct, will be initialized by the first device attached
@@ -862,11 +867,6 @@ static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
of_get_flat_dt_prop(node, "no-map", NULL))
return -EINVAL;
if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
pr_err("Restricted DMA pool must be accessible within the linear mapping.");
return -EINVAL;
}
rmem->ops = &rmem_swiotlb_ops;
pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
&rmem->base, (unsigned long)rmem->size / SZ_1M);

@@ -547,12 +547,12 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
*/
void __sched mutex_unlock(struct mutex *lock)
{
trace_android_vh_record_mutex_lock_starttime(current, 0);
#ifndef CONFIG_DEBUG_LOCK_ALLOC
if (__mutex_unlock_fast(lock))
return;
#endif
__mutex_unlock_slowpath(lock, _RET_IP_);
trace_android_vh_record_mutex_lock_starttime(current, 0);
}
EXPORT_SYMBOL(mutex_unlock);

@@ -259,7 +259,6 @@ EXPORT_SYMBOL_GPL(percpu_down_write);
void percpu_up_write(struct percpu_rw_semaphore *sem)
{
trace_android_vh_record_pcpu_rwsem_starttime(current, 0);
rwsem_release(&sem->dep_map, _RET_IP_);
/*
@@ -285,6 +284,7 @@ void percpu_up_write(struct percpu_rw_semaphore *sem)
* exclusive write lock because its counting.
*/
rcu_sync_exit(&sem->rss);
trace_android_vh_record_pcpu_rwsem_starttime(current, 0);
}
EXPORT_SYMBOL_GPL(percpu_up_write);

@@ -1273,7 +1273,7 @@ static struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
/*
* lock for reading
*/
static inline int __down_read_common(struct rw_semaphore *sem, int state)
static __always_inline int __down_read_common(struct rw_semaphore *sem, int state)
{
int ret = 0;
long count;
@@ -1291,17 +1291,17 @@ out:
return ret;
}
static inline void __down_read(struct rw_semaphore *sem)
static __always_inline void __down_read(struct rw_semaphore *sem)
{
__down_read_common(sem, TASK_UNINTERRUPTIBLE);
}
static inline int __down_read_interruptible(struct rw_semaphore *sem)
static __always_inline int __down_read_interruptible(struct rw_semaphore *sem)
{
return __down_read_common(sem, TASK_INTERRUPTIBLE);
}
static inline int __down_read_killable(struct rw_semaphore *sem)
static __always_inline int __down_read_killable(struct rw_semaphore *sem)
{
return __down_read_common(sem, TASK_KILLABLE);
}
@@ -1367,7 +1367,6 @@ static inline void __up_read(struct rw_semaphore *sem)
DEBUG_RWSEMS_WARN_ON(sem->magic != sem, sem);
DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
trace_android_vh_record_rwsem_lock_starttime(current, 0);
preempt_disable();
rwsem_clear_reader_owned(sem);
tmp = atomic_long_add_return_release(-RWSEM_READER_BIAS, &sem->count);
@@ -1377,6 +1376,7 @@ static inline void __up_read(struct rw_semaphore *sem)
clear_nonspinnable(sem);
rwsem_wake(sem);
}
trace_android_vh_record_rwsem_lock_starttime(current, 0);
preempt_enable();
}
@@ -1395,13 +1395,13 @@ static inline void __up_write(struct rw_semaphore *sem)
DEBUG_RWSEMS_WARN_ON((rwsem_owner(sem) != current) &&
!rwsem_test_oflags(sem, RWSEM_NONSPINNABLE), sem);
trace_android_vh_record_rwsem_lock_starttime(current, 0);
preempt_disable();
rwsem_clear_owner(sem);
tmp = atomic_long_fetch_add_release(-RWSEM_WRITER_LOCKED, &sem->count);
preempt_enable();
if (unlikely(tmp & RWSEM_FLAG_WAITERS))
rwsem_wake(sem);
trace_android_vh_record_rwsem_lock_starttime(current, 0);
}
/*

@@ -3327,6 +3327,30 @@ static void kfree_rcu_work(struct work_struct *work)
}
}
static bool
need_offload_krc(struct kfree_rcu_cpu *krcp)
{
int i;
for (i = 0; i < FREE_N_CHANNELS; i++)
if (krcp->bkvhead[i])
return true;
return !!krcp->head;
}
static bool
need_wait_for_krwp_work(struct kfree_rcu_cpu_work *krwp)
{
int i;
for (i = 0; i < FREE_N_CHANNELS; i++)
if (krwp->bkvhead_free[i])
return true;
return !!krwp->head_free;
}
/*
* This function is invoked after the KFREE_DRAIN_JIFFIES timeout.
*/
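The two predicates added above both reduce to the same check: "any bulk channel non-empty, or the plain list non-empty". A compact userspace sketch of that shape (struct and names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define N_CHANNELS 2	/* stands in for FREE_N_CHANNELS */

struct krc_sketch {
	void *bkvhead[N_CHANNELS];	/* bulk channels */
	void *head;			/* plain list */
};

/* Mirrors need_offload_krc(): true if anything is pending in any
 * bulk channel or on the plain head list. */
static bool need_offload_sketch(const struct krc_sketch *krcp)
{
	for (size_t i = 0; i < N_CHANNELS; i++)
		if (krcp->bkvhead[i])
			return true;
	return krcp->head != NULL;
}
```

Factoring the check out lets `kfree_rcu_monitor()` state its new policy plainly: skip a krwp that still has work in flight, offload only when something is actually pending.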
@@ -3343,14 +3367,13 @@ static void kfree_rcu_monitor(struct work_struct *work)
for (i = 0; i < KFREE_N_BATCHES; i++) {
struct kfree_rcu_cpu_work *krwp = &(krcp->krw_arr[i]);
// Try to detach bkvhead or head and attach it over any
// available corresponding free channel. It can be that
// a previous RCU batch is in progress, it means that
// immediately to queue another one is not possible so
// in that case the monitor work is rearmed.
if ((krcp->bkvhead[0] && !krwp->bkvhead_free[0]) ||
(krcp->bkvhead[1] && !krwp->bkvhead_free[1]) ||
(krcp->head && !krwp->head_free)) {
// Try to detach bulk_head or head and attach it, only when
// all channels are free. Any channel is not free means at krwp
// there is on-going rcu work to handle krwp's free business.
if (need_wait_for_krwp_work(krwp))
continue;
if (need_offload_krc(krcp)) {
// Channel 1 corresponds to the SLAB-pointer bulk path.
// Channel 2 corresponds to vmalloc-pointer bulk path.
for (j = 0; j < FREE_N_CHANNELS; j++) {

@@ -1037,12 +1037,13 @@ static void tick_broadcast_setup_oneshot(struct clock_event_device *bc)
*/
cpumask_copy(tmpmask, tick_broadcast_mask);
cpumask_clear_cpu(cpu, tmpmask);
cpumask_or(tick_broadcast_oneshot_mask,
tick_broadcast_oneshot_mask, tmpmask);
if (was_periodic && !cpumask_empty(tmpmask)) {
ktime_t nextevt = tick_get_next_period();
cpumask_or(tick_broadcast_oneshot_mask,
tick_broadcast_oneshot_mask, tmpmask);
clockevents_switch_state(bc, CLOCK_EVT_STATE_ONESHOT);
tick_broadcast_init_next_event(tmpmask, nextevt);
tick_broadcast_set_event(bc, cpu, nextevt);

@@ -84,6 +84,17 @@
/* Free Page Internal flags: for internal, non-pcp variants of free_pages(). */
typedef int __bitwise fpi_t;
static inline struct per_cpu_pages_ext *pcp_to_pcpext(struct per_cpu_pages *pcp)
{
return container_of(pcp, struct per_cpu_pages_ext, pcp);
}
static inline
struct per_cpu_pages_ext __percpu *zone_per_cpu_pageset(struct zone *zone)
{
return (struct per_cpu_pages_ext __percpu *)zone->per_cpu_pageset;
}
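`pcp_to_pcpext()` is the classic `container_of` pattern: given a pointer to the embedded `pcp` member, subtract the member's offset to recover the enclosing `per_cpu_pages_ext`. A self-contained userspace sketch of the same trick, with illustrative struct names:

```c
#include <assert.h>
#include <stddef.h>

/* Same arithmetic as the kernel's container_of(). */
#define container_of_sketch(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Illustrative stand-ins for the kernel structures. */
struct pcp_sketch {
	int count;
};

struct pcp_ext_sketch {
	int lock;		/* stands in for the spinlock */
	struct pcp_sketch pcp;	/* embedded member */
};

/* Mirrors pcp_to_pcpext(): recover the outer struct from the
 * pointer to its embedded member. */
static struct pcp_ext_sketch *to_ext(struct pcp_sketch *pcp)
{
	return container_of_sketch(pcp, struct pcp_ext_sketch, pcp);
}
```

This is what lets the rest of the allocator keep passing bare `struct per_cpu_pages *` around while the new lock lives in the wrapping `per_cpu_pages_ext`.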
/* No special request */
#define FPI_NONE ((__force fpi_t)0)
@@ -124,13 +135,97 @@ typedef int __bitwise fpi_t;
static DEFINE_MUTEX(pcp_batch_high_lock);
#define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8)
struct pagesets {
local_lock_t lock;
};
static DEFINE_PER_CPU(struct pagesets, pagesets) = {
.lock = INIT_LOCAL_LOCK(lock),
};
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT)
/*
* On SMP, spin_trylock is sufficient protection.
* On PREEMPT_RT, spin_trylock is equivalent on both SMP and UP.
*/
#define pcp_trylock_prepare(flags) do { } while (0)
#define pcp_trylock_finish(flag) do { } while (0)
#else
/* UP spin_trylock always succeeds so disable IRQs to prevent re-entrancy. */
#define pcp_trylock_prepare(flags) local_irq_save(flags)
#define pcp_trylock_finish(flags) local_irq_restore(flags)
#endif
/*
* Locking a pcp requires a PCP lookup followed by a spinlock. To avoid
* a migration causing the wrong PCP to be locked and remote memory being
* potentially allocated, pin the task to the CPU for the lookup+lock.
* preempt_disable is used on !RT because it is faster than migrate_disable.
* migrate_disable is used on RT because otherwise RT spinlock usage is
* interfered with and a high priority task cannot preempt the allocator.
*/
#ifndef CONFIG_PREEMPT_RT
#define pcpu_task_pin() preempt_disable()
#define pcpu_task_unpin() preempt_enable()
#else
#define pcpu_task_pin() migrate_disable()
#define pcpu_task_unpin() migrate_enable()
#endif
/*
* Generic helper to look up a per-cpu variable with an embedded spinlock.
* Return value should be used with equivalent unlock helper.
*/
#define pcpu_spin_lock(type, member, ptr) \
({ \
type *_ret; \
pcpu_task_pin(); \
_ret = this_cpu_ptr(ptr); \
spin_lock(&_ret->member); \
&_ret->pcp; \
})
#define pcpu_spin_lock_irqsave(type, member, ptr, flags) \
({ \
type *_ret; \
pcpu_task_pin(); \
_ret = this_cpu_ptr(ptr); \
spin_lock_irqsave(&_ret->member, flags); \
&_ret->pcp; \
})
#define pcpu_spin_trylock_irqsave(type, member, ptr, flags) \
({ \
type *_ret; \
pcpu_task_pin(); \
_ret = this_cpu_ptr(ptr); \
if (!spin_trylock_irqsave(&_ret->member, flags)) { \
pcpu_task_unpin(); \
_ret = NULL; \
} \
_ret ? &_ret->pcp : NULL; \
})
#define pcpu_spin_unlock(member, ptr) \
({ \
spin_unlock(&ptr->member); \
pcpu_task_unpin(); \
})
#define pcpu_spin_unlock_irqrestore(member, ptr, flags) \
({ \
spin_unlock_irqrestore(&ptr->member, flags); \
pcpu_task_unpin(); \
})
/* struct per_cpu_pages_ext specific helpers. */
#define pcp_spin_lock(ptr) \
pcpu_spin_lock(struct per_cpu_pages_ext, lock, ptr)
#define pcp_spin_lock_irqsave(ptr, flags) \
pcpu_spin_lock_irqsave(struct per_cpu_pages_ext, lock, ptr, flags)
#define pcp_spin_trylock_irqsave(ptr, flags) \
pcpu_spin_trylock_irqsave(struct per_cpu_pages_ext, lock, ptr, flags)
#define pcp_spin_unlock(ptr) \
pcpu_spin_unlock(lock, ptr)
#define pcp_spin_unlock_irqrestore(ptr, flags) \
pcpu_spin_unlock_irqrestore(lock, pcp_to_pcpext(ptr), flags)
#ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID
DEFINE_PER_CPU(int, numa_node);
EXPORT_PER_CPU_SYMBOL(numa_node);
@@ -149,13 +244,7 @@ DEFINE_PER_CPU(int, _numa_mem_); /* Kernel "local memory" node */
EXPORT_PER_CPU_SYMBOL(_numa_mem_);
#endif
/* work_structs for global per-cpu drains */
struct pcpu_drain {
struct zone *zone;
struct work_struct work;
};
static DEFINE_MUTEX(pcpu_drain_mutex);
static DEFINE_PER_CPU(struct pcpu_drain, pcpu_drain);
#ifdef CONFIG_GCC_PLUGIN_LATENT_ENTROPY
volatile unsigned long latent_entropy __latent_entropy;
@@ -1072,6 +1161,13 @@ static inline void __free_one_page(struct page *page,
unsigned int max_order;
struct page *buddy;
bool to_tail;
bool bypass = false;
trace_android_vh_free_one_page_bypass(page, zone, order,
migratetype, (int)fpi_flags, &bypass);
if (bypass)
return;
max_order = min_t(unsigned int, MAX_ORDER - 1, pageblock_order);
@@ -1541,10 +1637,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
}
pcp->count -= nr_freed;
/*
* local_lock_irq held so equivalent to spin_lock_irqsave for
* both PREEMPT_RT and non-PREEMPT_RT configurations.
*/
/* Caller must hold IRQ-safe pcp->lock so IRQs are disabled. */
spin_lock(&zone->lock);
isolated_pageblocks = has_isolate_pageblock(zone);
@@ -3057,7 +3150,11 @@ static __always_inline struct page *
__rmqueue(struct zone *zone, unsigned int order, int migratetype,
unsigned int alloc_flags)
{
struct page *page;
struct page *page = NULL;
trace_android_vh_rmqueue_smallest_bypass(&page, zone, order, migratetype);
if (page)
return page;
retry:
page = __rmqueue_smallest(zone, order, migratetype);
@@ -3102,10 +3199,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
{
int i, allocated = 0;
/*
* local_lock_irq held so equivalent to spin_lock_irqsave for
* both PREEMPT_RT and non-PREEMPT_RT configurations.
*/
/* Caller must hold IRQ-safe pcp->lock so IRQs are disabled. */
spin_lock(&zone->lock);
for (i = 0; i < count; ++i) {
struct page *page;
@@ -3189,51 +3283,51 @@ static struct list_head *get_populated_pcp_list(struct zone *zone,
* Called from the vmstat counter updater to drain pagesets of this
* currently executing processor on remote nodes after they have
* expired.
*
* Note that this function must be called with the thread pinned to
* a single processor.
*/
void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
{
unsigned long flags;
int to_drain, batch;
local_lock_irqsave(&pagesets.lock, flags);
batch = READ_ONCE(pcp->batch);
to_drain = min(pcp->count, batch);
if (to_drain > 0)
if (to_drain > 0) {
unsigned long flags;
struct per_cpu_pages_ext *pcp_ext = pcp_to_pcpext(pcp);
/*
* free_pcppages_bulk expects IRQs disabled for zone->lock
* so even though pcp->lock is not intended to be IRQ-safe,
* it's needed in this context.
*/
spin_lock_irqsave(&pcp_ext->lock, flags);
free_pcppages_bulk(zone, to_drain, pcp);
local_unlock_irqrestore(&pagesets.lock, flags);
spin_unlock_irqrestore(&pcp_ext->lock, flags);
}
}
#endif
/*
* Drain pcplists of the indicated processor and zone.
*
* The processor must either be the current processor and the
* thread pinned to the current processor or a processor that
* is not online.
*/
static void drain_pages_zone(unsigned int cpu, struct zone *zone)
{
unsigned long flags;
struct per_cpu_pages *pcp;
struct per_cpu_pages_ext *pcp_ext;
local_lock_irqsave(&pagesets.lock, flags);
pcp_ext = per_cpu_ptr(zone_per_cpu_pageset(zone), cpu);
pcp = &pcp_ext->pcp;
if (pcp->count) {
unsigned long flags;
pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
if (pcp->count)
/* See drain_zone_pages on why this is disabling IRQs */
spin_lock_irqsave(&pcp_ext->lock, flags);
free_pcppages_bulk(zone, pcp->count, pcp);
local_unlock_irqrestore(&pagesets.lock, flags);
spin_unlock_irqrestore(&pcp_ext->lock, flags);
}
}
/*
* Drain pcplists of all zones on the indicated processor.
*
* The processor must either be the current processor and the
* thread pinned to the current processor or a processor that
* is not online.
*/
static void drain_pages(unsigned int cpu)
{
@@ -3246,9 +3340,6 @@ static void drain_pages(unsigned int cpu)
/*
* Spill all of this CPU's per-cpu pages back into the buddy allocator.
*
* The CPU has to be pinned. When zone parameter is non-NULL, spill just
* the single zone's pages.
*/
void drain_local_pages(struct zone *zone)
{
@@ -3260,24 +3351,6 @@ void drain_local_pages(struct zone *zone)
drain_pages(cpu);
}
static void drain_local_pages_wq(struct work_struct *work)
{
struct pcpu_drain *drain;
drain = container_of(work, struct pcpu_drain, work);
/*
* drain_all_pages doesn't use proper cpu hotplug protection so
* we can race with cpu offline when the WQ can move this from
* a cpu pinned worker to an unbound one. We can operate on a different
* cpu which is alright but we also have to make sure to not move to
* a different one.
*/
preempt_disable();
drain_local_pages(drain->zone);
preempt_enable();
}
/*
* The implementation of drain_all_pages(), exposing an extra parameter to
* drain on all cpus.
@@ -3298,13 +3371,6 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
*/
static cpumask_t cpus_with_pcps;
/*
* Make sure nobody triggers this path before mm_percpu_wq is fully
* initialized.
*/
if (WARN_ON_ONCE(!mm_percpu_wq))
return;
/*
* Do not drain if one is already in progress unless it's specific to
* a zone. Such callers are primarily CMA and memory hotplug and need
@@ -3334,12 +3400,12 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
*/
has_pcps = true;
} else if (zone) {
pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
pcp = &per_cpu_ptr(zone_per_cpu_pageset(zone), cpu)->pcp;
if (pcp->count)
has_pcps = true;
} else {
for_each_populated_zone(z) {
pcp = per_cpu_ptr(z->per_cpu_pageset, cpu);
pcp = &per_cpu_ptr(zone_per_cpu_pageset(z), cpu)->pcp;
if (pcp->count) {
has_pcps = true;
break;
@@ -3354,14 +3420,11 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
}
for_each_cpu(cpu, &cpus_with_pcps) {
struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);
drain->zone = zone;
INIT_WORK(&drain->work, drain_local_pages_wq);
queue_work_on(cpu, mm_percpu_wq, &drain->work);
if (zone)
drain_pages_zone(cpu, zone);
else
drain_pages(cpu);
}
for_each_cpu(cpu, &cpus_with_pcps)
flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);
mutex_unlock(&pcpu_drain_mutex);
}
@@ -3370,8 +3433,6 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
* Spill all the per-cpu pages from all CPUs back into the buddy allocator.
*
* When zone parameter is non-NULL, spill just the single zone's pages.
*
* Note that this can be extremely slow as the draining happens in a workqueue.
*/
void drain_all_pages(struct zone *zone)
{
@@ -3487,16 +3548,14 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone)
return min(READ_ONCE(pcp->batch) << 2, high);
}
static void free_unref_page_commit(struct page *page, unsigned long pfn,
static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
struct page *page, unsigned long pfn,
int migratetype, unsigned int order)
{
struct zone *zone = page_zone(page);
struct per_cpu_pages *pcp;
int high;
int pindex;
__count_vm_event(PGFREE);
pcp = this_cpu_ptr(zone->per_cpu_pageset);
pindex = order_to_pindex(migratetype, order);
list_add(&page->lru, &pcp->lists[pindex]);
pcp->count += 1 << order;
@@ -3514,6 +3573,9 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn,
void free_unref_page(struct page *page, unsigned int order)
{
unsigned long flags;
unsigned long __maybe_unused UP_flags;
struct per_cpu_pages *pcp;
struct zone *zone;
unsigned long pfn = page_to_pfn(page);
int migratetype;
bool pcp_skip_cma_pages = false;
@@ -3540,9 +3602,16 @@ void free_unref_page(struct page *page, unsigned int order)
migratetype = MIGRATE_MOVABLE;
}
local_lock_irqsave(&pagesets.lock, flags);
free_unref_page_commit(page, pfn, migratetype, order);
local_unlock_irqrestore(&pagesets.lock, flags);
zone = page_zone(page);
pcp_trylock_prepare(UP_flags);
pcp = pcp_spin_trylock_irqsave(zone_per_cpu_pageset(zone), flags);
if (pcp) {
free_unref_page_commit(zone, pcp, page, pfn, migratetype, order);
pcp_spin_unlock_irqrestore(pcp, flags);
} else {
free_one_page(zone, page, pfn, order, migratetype, FPI_NONE);
}
pcp_trylock_finish(UP_flags);
}
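`free_unref_page()` now takes the pcp lock with a trylock and, on contention (for example a parallel drain), falls back to freeing straight to the buddy allocator rather than spinning. A single-threaded sketch of that trylock-or-fallback shape, where a plain flag stands in for the spinlock and the function names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

static int fast_frees, slow_frees;

/* Trylock on a plain flag: succeeds only if the lock is free. */
static bool try_lock(int *lock)
{
	if (*lock)
		return false;
	*lock = 1;
	return true;
}

static void unlock(int *lock)
{
	*lock = 0;
}

/* Mirrors the new free path: fast per-cpu path when the lock is
 * uncontended, otherwise fall back to the slow (buddy) path. */
static void free_page_sketch(int *pcp_lock)
{
	if (try_lock(pcp_lock)) {
		fast_frees++;	/* free_unref_page_commit() */
		unlock(pcp_lock);
	} else {
		slow_frees++;	/* free_one_page() fallback */
	}
}
```

The fallback is what makes the trylock safe: the page is always freed, just sometimes via the slower zone-locked path.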
/*
@@ -3551,6 +3620,8 @@ void free_unref_page(struct page *page, unsigned int order)
void free_unref_page_list(struct list_head *list)
{
struct page *page, *next;
struct per_cpu_pages *pcp = NULL;
struct zone *locked_zone = NULL;
unsigned long flags, pfn;
int batch_count = 0;
int migratetype;
@@ -3581,8 +3652,18 @@ void free_unref_page_list(struct list_head *list)
set_page_private(page, pfn);
}
local_lock_irqsave(&pagesets.lock, flags);
list_for_each_entry_safe(page, next, list, lru) {
struct zone *zone = page_zone(page);
/* Different zone, different pcp lock. */
if (zone != locked_zone) {
if (pcp)
pcp_spin_unlock_irqrestore(pcp, flags);
locked_zone = zone;
pcp = pcp_spin_lock_irqsave(zone_per_cpu_pageset(locked_zone), flags);
}
pfn = page_private(page);
set_page_private(page, 0);
@@ -3595,19 +3676,21 @@ void free_unref_page_list(struct list_head *list)
migratetype = MIGRATE_MOVABLE;
trace_mm_page_free_batched(page);
free_unref_page_commit(page, pfn, migratetype, 0);
free_unref_page_commit(zone, pcp, page, pfn, migratetype, 0);
/*
* Guard against excessive IRQ disabled times when we get
* a large list of pages to free.
*/
if (++batch_count == SWAP_CLUSTER_MAX) {
local_unlock_irqrestore(&pagesets.lock, flags);
pcp_spin_unlock_irqrestore(pcp, flags);
batch_count = 0;
local_lock_irqsave(&pagesets.lock, flags);
pcp = pcp_spin_lock_irqsave(zone_per_cpu_pageset(locked_zone), flags);
}
}
local_unlock_irqrestore(&pagesets.lock, flags);
if (pcp)
pcp_spin_unlock_irqrestore(pcp, flags);
}
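`free_unref_page_list()` now holds one pcp lock per *run* of same-zone pages, releasing and retaking it only when the zone changes (and periodically after `SWAP_CLUSTER_MAX` pages, to bound IRQ-off time). A sketch that counts lock switches for a batch, representing each page by just its zone id (names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Count lock acquisitions while "freeing" a batch of pages.
 * Mirrors the locked_zone logic above: the lock is dropped and
 * retaken only when the current page's zone differs. */
static int free_list_sketch(const int *zones, size_t n)
{
	int locked_zone = -1;	/* -1: no lock held yet */
	int acquisitions = 0;

	for (size_t i = 0; i < n; i++) {
		if (zones[i] != locked_zone) {
			/* drop the old zone's lock, take the new one */
			locked_zone = zones[i];
			acquisitions++;
		}
		/* free_unref_page_commit() under the held lock */
	}
	return acquisitions;
}
```

For typical batches, where consecutive pages come from the same zone, this keeps the lock traffic close to one acquisition per batch instead of one per page.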
/*
@@ -3729,6 +3812,51 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z,
#endif
}
static __always_inline
struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
unsigned int order, unsigned int alloc_flags,
int migratetype)
{
struct page *page;
unsigned long flags;
do {
page = NULL;
spin_lock_irqsave(&zone->lock, flags);
/*
* order-0 request can reach here when the pcplist is skipped
* due to non-CMA allocation context. HIGHATOMIC area is
* reserved for high-order atomic allocation, so order-0
* request should skip it.
*/
if (order > 0 && alloc_flags & ALLOC_HARDER) {
page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
if (page)
trace_mm_page_alloc_zone_locked(page, order, migratetype);
}
if (!page) {
if (alloc_flags & ALLOC_CMA && migratetype == MIGRATE_MOVABLE)
page = __rmqueue_cma(zone, order, migratetype,
alloc_flags);
if (!page)
page = __rmqueue(zone, order, migratetype,
alloc_flags);
}
if (!page) {
spin_unlock_irqrestore(&zone->lock, flags);
return NULL;
}
__mod_zone_freepage_state(zone, -(1 << order),
get_pcppage_migratetype(page));
spin_unlock_irqrestore(&zone->lock, flags);
} while (check_new_pages(page, order));
__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
zone_statistics(preferred_zone, zone, 1);
return page;
}
/* Remove page from the per-cpu list, caller must protect the list */
static inline
struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
@@ -3772,18 +3900,28 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
struct per_cpu_pages *pcp;
struct page *page;
unsigned long flags;
unsigned long __maybe_unused UP_flags;
local_lock_irqsave(&pagesets.lock, flags);
/*
* spin_trylock may fail due to a parallel drain. In the future, the
* trylock will also protect against IRQ reentrancy.
*/
pcp_trylock_prepare(UP_flags);
pcp = pcp_spin_trylock_irqsave(zone_per_cpu_pageset(zone), flags);
if (!pcp) {
pcp_trylock_finish(UP_flags);
return NULL;
}
/*
* On allocation, reduce the number of pages that are batch freed.
* See nr_pcp_free() where free_factor is increased for subsequent
* frees.
*/
pcp = this_cpu_ptr(zone->per_cpu_pageset);
pcp->free_factor >>= 1;
page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp, gfp_flags);
local_unlock_irqrestore(&pagesets.lock, flags);
pcp_spin_unlock_irqrestore(pcp, flags);
pcp_trylock_finish(UP_flags);
if (page) {
__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
zone_statistics(preferred_zone, zone, 1);
@@ -3800,13 +3938,13 @@ struct page *rmqueue(struct zone *preferred_zone,
gfp_t gfp_flags, unsigned int alloc_flags,
int migratetype)
{
unsigned long flags;
struct page *page;
if (likely(pcp_allowed_order(order))) {
page = rmqueue_pcplist(preferred_zone, zone, order,
gfp_flags, migratetype, alloc_flags);
goto out;
if (likely(page))
goto out;
}
/*
@@ -3814,55 +3952,20 @@ struct page *rmqueue(struct zone *preferred_zone,
* allocate greater than order-1 page units with __GFP_NOFAIL.
*/
WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
spin_lock_irqsave(&zone->lock, flags);
do {
page = NULL;
/*
* order-0 request can reach here when the pcplist is skipped
* due to non-CMA allocation context. HIGHATOMIC area is
* reserved for high-order atomic allocation, so order-0
* request should skip it.
*/
if (order > 0 && alloc_flags & ALLOC_HARDER) {
page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
if (page)
trace_mm_page_alloc_zone_locked(page, order, migratetype);
}
if (!page) {
if (alloc_flags & ALLOC_CMA && migratetype == MIGRATE_MOVABLE)
page = __rmqueue_cma(zone, order, migratetype,
alloc_flags);
if (!page)
page = __rmqueue(zone, order, migratetype,
alloc_flags);
}
} while (page && check_new_pages(page, order));
if (!page)
goto failed;
__mod_zone_freepage_state(zone, -(1 << order),
get_pcppage_migratetype(page));
spin_unlock_irqrestore(&zone->lock, flags);
__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
zone_statistics(preferred_zone, zone, 1);
page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
migratetype);
trace_android_vh_rmqueue(preferred_zone, zone, order,
gfp_flags, alloc_flags, migratetype);
out:
/* Separate test+clear to avoid unnecessary atomics */
if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) {
if (unlikely(test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags))) {
clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
wakeup_kswapd(zone, 0, 0, zone_idx(zone));
}
VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
return page;
failed:
spin_unlock_irqrestore(&zone->lock, flags);
return NULL;
}
#ifdef CONFIG_FAIL_PAGE_ALLOC
@@ -5391,6 +5494,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
{
struct page *page;
unsigned long flags;
unsigned long __maybe_unused UP_flags;
struct zone *zone;
struct zoneref *z;
struct per_cpu_pages *pcp;
@@ -5470,10 +5574,13 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
if (unlikely(!zone))
goto failed;
/* Attempt the batch allocation */
local_lock_irqsave(&pagesets.lock, flags);
pcp = this_cpu_ptr(zone->per_cpu_pageset);
/* Is a parallel drain in progress? */
pcp_trylock_prepare(UP_flags);
pcp = pcp_spin_trylock_irqsave(zone_per_cpu_pageset(zone), flags);
if (!pcp)
goto failed_irq;
/* Attempt the batch allocation */
while (nr_populated < nr_pages) {
/* Skip existing pages */
@@ -5486,8 +5593,10 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
pcp, alloc_gfp);
if (unlikely(!page)) {
/* Try and allocate at least one page */
if (!nr_account)
if (!nr_account) {
pcp_spin_unlock_irqrestore(pcp, flags);
goto failed_irq;
}
break;
}
nr_account++;
@@ -5500,7 +5609,8 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
nr_populated++;
}
local_unlock_irqrestore(&pagesets.lock, flags);
pcp_spin_unlock_irqrestore(pcp, flags);
pcp_trylock_finish(UP_flags);
__count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
zone_statistics(ac.preferred_zoneref->zone, zone, nr_account);
@@ -5509,7 +5619,7 @@ out:
return nr_populated;
failed_irq:
local_unlock_irqrestore(&pagesets.lock, flags);
pcp_trylock_finish(UP_flags);
failed:
page = __alloc_pages(gfp, 0, preferred_nid, nodemask);
@@ -6089,7 +6199,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
continue;
for_each_online_cpu(cpu)
free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
free_pcp += per_cpu_ptr(zone_per_cpu_pageset(zone), cpu)->pcp.count;
}
printk("active_anon:%lu inactive_anon:%lu isolated_anon:%lu\n"
@@ -6184,7 +6294,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
free_pcp = 0;
for_each_online_cpu(cpu)
free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
free_pcp += per_cpu_ptr(zone_per_cpu_pageset(zone), cpu)->pcp.count;
show_node(zone);
printk(KERN_CONT
@@ -6225,7 +6335,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
K(zone_page_state(zone, NR_MLOCK)),
K(zone_page_state(zone, NR_BOUNCE)),
K(free_pcp),
K(this_cpu_read(zone->per_cpu_pageset->count)),
K(this_cpu_read((zone_per_cpu_pageset(zone))->pcp.count)),
K(zone_page_state(zone, NR_FREE_CMA_PAGES)));
printk("lowmem_reserve[]:");
for (i = 0; i < MAX_NR_ZONES; i++)
@@ -6556,7 +6666,7 @@ static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonesta
/* These effectively disable the pcplists in the boot pageset completely */
#define BOOT_PAGESET_HIGH 0
#define BOOT_PAGESET_BATCH 1
static DEFINE_PER_CPU(struct per_cpu_pages, boot_pageset);
static DEFINE_PER_CPU(struct per_cpu_pages_ext, boot_pageset);
static DEFINE_PER_CPU(struct per_cpu_zonestat, boot_zonestats);
static DEFINE_PER_CPU(struct per_cpu_nodestat, boot_nodestats);
@@ -6623,7 +6733,7 @@ build_all_zonelists_init(void)
* (a chicken-egg dilemma).
*/
for_each_possible_cpu(cpu)
per_cpu_pages_init(&per_cpu(boot_pageset, cpu), &per_cpu(boot_zonestats, cpu));
per_cpu_pages_init(&per_cpu(boot_pageset, cpu).pcp, &per_cpu(boot_zonestats, cpu));
mminit_verify_zonelist();
cpuset_init_current_mems_allowed();
@@ -7083,11 +7193,13 @@ static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonestat *pzstats)
{
struct per_cpu_pages_ext *pcp_ext = pcp_to_pcpext(pcp);
int pindex;
memset(pcp, 0, sizeof(*pcp));
memset(pzstats, 0, sizeof(*pzstats));
spin_lock_init(&pcp_ext->lock);
for (pindex = 0; pindex < NR_PCP_LISTS; pindex++)
INIT_LIST_HEAD(&pcp->lists[pindex]);
@@ -7109,7 +7221,7 @@ static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long h
int cpu;
for_each_possible_cpu(cpu) {
pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
pcp = &per_cpu_ptr(zone_per_cpu_pageset(zone), cpu)->pcp;
pageset_update(pcp, high, batch);
}
}
@@ -7143,12 +7255,13 @@ void __meminit setup_zone_pageset(struct zone *zone)
if (sizeof(struct per_cpu_zonestat) > 0)
zone->per_cpu_zonestats = alloc_percpu(struct per_cpu_zonestat);
zone->per_cpu_pageset = alloc_percpu(struct per_cpu_pages);
zone->per_cpu_pageset = (struct per_cpu_pages __percpu *)
alloc_percpu(struct per_cpu_pages_ext);
for_each_possible_cpu(cpu) {
struct per_cpu_pages *pcp;
struct per_cpu_zonestat *pzstats;
pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
pcp = &per_cpu_ptr(zone_per_cpu_pageset(zone), cpu)->pcp;
pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
per_cpu_pages_init(pcp, pzstats);
}
@@ -7195,7 +7308,7 @@ static __meminit void zone_pcp_init(struct zone *zone)
* relies on the ability of the linker to provide the
* offset of a (static) per cpu variable into the per cpu area.
*/
zone->per_cpu_pageset = &boot_pageset;
zone->per_cpu_pageset = (struct per_cpu_pages __percpu *)&boot_pageset;
zone->per_cpu_zonestats = &boot_zonestats;
zone->pageset_high = BOOT_PAGESET_HIGH;
zone->pageset_batch = BOOT_PAGESET_BATCH;
@@ -9528,14 +9641,14 @@ void zone_pcp_reset(struct zone *zone)
int cpu;
struct per_cpu_zonestat *pzstats;
if (zone->per_cpu_pageset != &boot_pageset) {
if (zone_per_cpu_pageset(zone) != &boot_pageset) {
for_each_online_cpu(cpu) {
pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
drain_zonestat(zone, pzstats);
}
free_percpu(zone->per_cpu_pageset);
free_percpu(zone->per_cpu_zonestats);
zone->per_cpu_pageset = &boot_pageset;
zone->per_cpu_pageset = (struct per_cpu_pages __percpu *)&boot_pageset;
zone->per_cpu_zonestats = &boot_zonestats;
}
}


@@ -93,7 +93,8 @@ static inline struct anon_vma *anon_vma_alloc(void)
anon_vma = kmem_cache_alloc(anon_vma_cachep, GFP_KERNEL);
if (anon_vma) {
atomic_set(&anon_vma->refcount, 1);
anon_vma->degree = 1; /* Reference for first vma */
anon_vma->num_children = 0;
anon_vma->num_active_vmas = 0;
anon_vma->parent = anon_vma;
/*
* Initialise the anon_vma root to point to itself. If called
@@ -201,6 +202,7 @@ int __anon_vma_prepare(struct vm_area_struct *vma)
anon_vma = anon_vma_alloc();
if (unlikely(!anon_vma))
goto out_enomem_free_avc;
anon_vma->num_children++; /* self-parent link for new root */
allocated = anon_vma;
}
@@ -210,8 +212,7 @@ int __anon_vma_prepare(struct vm_area_struct *vma)
if (likely(!vma->anon_vma)) {
vma->anon_vma = anon_vma;
anon_vma_chain_link(vma, avc, anon_vma);
/* vma reference or self-parent link for new root */
anon_vma->degree++;
anon_vma->num_active_vmas++;
allocated = NULL;
avc = NULL;
}
@@ -296,19 +297,19 @@ int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
anon_vma_chain_link(dst, avc, anon_vma);
/*
* Reuse existing anon_vma if its degree lower than two,
* that means it has no vma and only one anon_vma child.
* Reuse existing anon_vma if it has no vma and only one
* anon_vma child.
*
* Do not chose parent anon_vma, otherwise first child
* will always reuse it. Root anon_vma is never reused:
* Root anon_vma is never reused:
* it has self-parent reference and at least one child.
*/
if (!dst->anon_vma && src->anon_vma &&
anon_vma != src->anon_vma && anon_vma->degree < 2)
anon_vma->num_children < 2 &&
anon_vma->num_active_vmas == 0)
dst->anon_vma = anon_vma;
}
if (dst->anon_vma)
dst->anon_vma->degree++;
dst->anon_vma->num_active_vmas++;
unlock_anon_vma_root(root);
return 0;
@@ -358,6 +359,7 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
anon_vma = anon_vma_alloc();
if (!anon_vma)
goto out_error;
anon_vma->num_active_vmas++;
avc = anon_vma_chain_alloc(GFP_KERNEL);
if (!avc)
goto out_error_free_anon_vma;
@@ -378,7 +380,7 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
vma->anon_vma = anon_vma;
anon_vma_lock_write(anon_vma);
anon_vma_chain_link(vma, avc, anon_vma);
anon_vma->parent->degree++;
anon_vma->parent->num_children++;
anon_vma_unlock_write(anon_vma);
return 0;
@@ -410,7 +412,7 @@ void unlink_anon_vmas(struct vm_area_struct *vma)
* to free them outside the lock.
*/
if (RB_EMPTY_ROOT(&anon_vma->rb_root.rb_root)) {
anon_vma->parent->degree--;
anon_vma->parent->num_children--;
continue;
}
@@ -418,7 +420,7 @@ void unlink_anon_vmas(struct vm_area_struct *vma)
anon_vma_chain_free(avc);
}
if (vma->anon_vma) {
vma->anon_vma->degree--;
vma->anon_vma->num_active_vmas--;
#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
/*
@@ -438,7 +440,8 @@ void unlink_anon_vmas(struct vm_area_struct *vma)
list_for_each_entry_safe(avc, next, &vma->anon_vma_chain, same_vma) {
struct anon_vma *anon_vma = avc->anon_vma;
VM_WARN_ON(anon_vma->degree);
VM_WARN_ON(anon_vma->num_children);
VM_WARN_ON(anon_vma->num_active_vmas);
put_anon_vma(anon_vma);
list_del(&avc->same_vma);


@@ -1409,6 +1409,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
unsigned int nr_reclaimed = 0;
unsigned int pgactivate = 0;
bool do_demote_pass;
bool page_trylock_result;
memset(stat, 0, sizeof(*stat));
cond_resched();
@@ -1831,6 +1832,21 @@ activate_locked:
count_memcg_page_event(page, PGACTIVATE);
}
keep_locked:
/*
* Pages with the trylock bit set are added to ret_pages and
* handled in trace_android_vh_handle_failed_page_trylock.
* A page leaving shrink_page_list with the trylock bit still
* set can cause errors elsewhere, so clear the bit here.
* trace_android_vh_page_trylock_get_result clears the trylock
* bit and reports whether the page trylock failed during
* reclaim; here we only want the bit cleared, so
* page_trylock_result is ignored.
* TODO: trace_android_vh_page_trylock_get_result should be
* replaced by a hook that correctly reflects the usage here,
* which is to clear the trylock bit.
*/
trace_android_vh_page_trylock_get_result(page, &page_trylock_result);
unlock_page(page);
keep:
list_add(&page->lru, &ret_pages);
@@ -2952,7 +2968,8 @@ static int get_swappiness(struct lruvec *lruvec, struct scan_control *sc)
struct mem_cgroup *memcg = lruvec_memcg(lruvec);
struct pglist_data *pgdat = lruvec_pgdat(lruvec);
if (!can_demote(pgdat->node_id, sc))
if (!can_demote(pgdat->node_id, sc) &&
mem_cgroup_get_nr_swap_pages(memcg) <= 0)
return 0;
return mem_cgroup_swappiness(memcg);
@@ -3042,18 +3059,13 @@ void lru_gen_del_mm(struct mm_struct *mm)
if (!lruvec)
continue;
/* where the last iteration ended (exclusive) */
/* where the current iteration continues after */
if (lruvec->mm_state.head == &mm->lru_gen.list)
lruvec->mm_state.head = lruvec->mm_state.head->prev;
/* where the last iteration ended before */
if (lruvec->mm_state.tail == &mm->lru_gen.list)
lruvec->mm_state.tail = lruvec->mm_state.tail->next;
/* where the current iteration continues (inclusive) */
if (lruvec->mm_state.head != &mm->lru_gen.list)
continue;
lruvec->mm_state.head = lruvec->mm_state.head->next;
/* the deletion ends the current iteration */
if (lruvec->mm_state.head == &mm_list->fifo)
WRITE_ONCE(lruvec->mm_state.seq, lruvec->mm_state.seq + 1);
}
list_del_init(&mm->lru_gen.list);
@@ -3079,13 +3091,16 @@ void lru_gen_migrate_mm(struct mm_struct *mm)
if (mem_cgroup_disabled())
return;
/* migration can happen before addition */
if (!mm->lru_gen.memcg)
return;
rcu_read_lock();
memcg = mem_cgroup_from_task(task);
rcu_read_unlock();
if (memcg == mm->lru_gen.memcg)
return;
VM_WARN_ON_ONCE(!mm->lru_gen.memcg);
VM_WARN_ON_ONCE(list_empty(&mm->lru_gen.list));
lru_gen_del_mm(mm);
@@ -3234,68 +3249,54 @@ static bool iterate_mm_list(struct lruvec *lruvec, struct lru_gen_mm_walk *walk,
struct mm_struct **iter)
{
bool first = false;
bool last = true;
bool last = false;
struct mm_struct *mm = NULL;
struct mem_cgroup *memcg = lruvec_memcg(lruvec);
struct lru_gen_mm_list *mm_list = get_mm_list(memcg);
struct lru_gen_mm_state *mm_state = &lruvec->mm_state;
/*
* There are four interesting cases for this page table walker:
* 1. It tries to start a new iteration of mm_list with a stale max_seq;
* there is nothing left to do.
* 2. It's the first of the current generation, and it needs to reset
* the Bloom filter for the next generation.
* 3. It reaches the end of mm_list, and it needs to increment
* mm_state->seq; the iteration is done.
* 4. It's the last of the current generation, and it needs to reset the
* mm stats counters for the next generation.
* mm_state->seq is incremented after each iteration of mm_list. There
* are three interesting cases for this page table walker:
* 1. It tries to start a new iteration with a stale max_seq: there is
* nothing left to do.
* 2. It started the next iteration: it needs to reset the Bloom filter
* so that a fresh set of PTE tables can be recorded.
* 3. It ended the current iteration: it needs to reset the mm stats
* counters and tell its caller to increment max_seq.
*/
spin_lock(&mm_list->lock);
VM_WARN_ON_ONCE(mm_state->seq + 1 < walk->max_seq);
VM_WARN_ON_ONCE(*iter && mm_state->seq > walk->max_seq);
VM_WARN_ON_ONCE(*iter && !mm_state->nr_walkers);
if (walk->max_seq <= mm_state->seq) {
if (!*iter)
last = false;
if (walk->max_seq <= mm_state->seq)
goto done;
}
if (!mm_state->nr_walkers) {
VM_WARN_ON_ONCE(mm_state->head && mm_state->head != &mm_list->fifo);
if (!mm_state->head)
mm_state->head = &mm_list->fifo;
mm_state->head = mm_list->fifo.next;
if (mm_state->head == &mm_list->fifo)
first = true;
}
while (!mm && mm_state->head != &mm_list->fifo) {
mm = list_entry(mm_state->head, struct mm_struct, lru_gen.list);
do {
mm_state->head = mm_state->head->next;
if (mm_state->head == &mm_list->fifo) {
WRITE_ONCE(mm_state->seq, mm_state->seq + 1);
last = true;
break;
}
/* force scan for those added after the last iteration */
if (!mm_state->tail || mm_state->tail == &mm->lru_gen.list) {
mm_state->tail = mm_state->head;
if (!mm_state->tail || mm_state->tail == mm_state->head) {
mm_state->tail = mm_state->head->next;
walk->full_scan = true;
}
mm = list_entry(mm_state->head, struct mm_struct, lru_gen.list);
if (should_skip_mm(mm, walk))
mm = NULL;
}
if (mm_state->head == &mm_list->fifo)
WRITE_ONCE(mm_state->seq, mm_state->seq + 1);
} while (!mm);
done:
if (*iter && !mm)
mm_state->nr_walkers--;
if (!*iter && mm)
mm_state->nr_walkers++;
if (mm_state->nr_walkers)
last = false;
if (*iter || last)
reset_mm_stats(lruvec, walk, last);
@@ -3323,9 +3324,9 @@ static bool iterate_mm_list_nowalk(struct lruvec *lruvec, unsigned long max_seq)
VM_WARN_ON_ONCE(mm_state->seq + 1 < max_seq);
if (max_seq > mm_state->seq && !mm_state->nr_walkers) {
VM_WARN_ON_ONCE(mm_state->head && mm_state->head != &mm_list->fifo);
if (max_seq > mm_state->seq) {
mm_state->head = NULL;
mm_state->tail = NULL;
WRITE_ONCE(mm_state->seq, mm_state->seq + 1);
reset_mm_stats(lruvec, NULL, true);
success = true;
@@ -3931,10 +3932,6 @@ restart:
walk_pmd_range(&val, addr, next, args);
/* a racy check to curtail the waiting time */
if (wq_has_sleeper(&walk->lruvec->mm_state.wait))
return 1;
if (need_resched() || walk->batched >= MAX_LRU_BATCH) {
end = (addr | ~PUD_MASK) + 1;
goto done;
@@ -3967,8 +3964,14 @@ static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_
walk->next_addr = FIRST_USER_ADDRESS;
do {
DEFINE_MAX_SEQ(lruvec);
err = -EBUSY;
/* another thread might have called inc_max_seq() */
if (walk->max_seq != max_seq)
break;
/* page_update_gen() requires stable page_memcg() */
if (!mem_cgroup_trylock_pages(memcg))
break;
@@ -4201,26 +4204,12 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
success = iterate_mm_list(lruvec, walk, &mm);
if (mm)
walk_mm(lruvec, mm, walk);
cond_resched();
} while (mm);
done:
if (!success) {
if (sc->priority <= DEF_PRIORITY - 2)
wait_event_killable(lruvec->mm_state.wait,
max_seq < READ_ONCE(lrugen->max_seq));
if (success)
inc_max_seq(lruvec, can_swap, full_scan);
return max_seq < READ_ONCE(lrugen->max_seq);
}
VM_WARN_ON_ONCE(max_seq != READ_ONCE(lrugen->max_seq));
inc_max_seq(lruvec, can_swap, full_scan);
/* either this sees any waiters or they will see updated max_seq */
if (wq_has_sleeper(&lruvec->mm_state.wait))
wake_up_all(&lruvec->mm_state.wait);
return true;
return success;
}
static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq, unsigned long *min_seq,
@@ -4554,7 +4543,6 @@ static bool sort_page(struct lruvec *lruvec, struct page *page, int tier_idx)
WRITE_ONCE(lrugen->protected[hist][type][tier - 1],
lrugen->protected[hist][type][tier - 1] + delta);
__mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type, delta);
return true;
}
@@ -4779,10 +4767,13 @@ static int evict_pages(struct lruvec *lruvec, struct scan_control *sc, int swapp
int scanned;
int reclaimed;
LIST_HEAD(list);
LIST_HEAD(clean);
struct page *page;
struct page *next;
enum vm_event_item item;
struct reclaim_stat stat;
struct lru_gen_mm_walk *walk;
bool skip_retry = false;
struct mem_cgroup *memcg = lruvec_memcg(lruvec);
struct pglist_data *pgdat = lruvec_pgdat(lruvec);
@@ -4799,20 +4790,37 @@ static int evict_pages(struct lruvec *lruvec, struct scan_control *sc, int swapp
if (list_empty(&list))
return scanned;
retry:
reclaimed = shrink_page_list(&list, pgdat, sc, &stat, false);
sc->nr_reclaimed += reclaimed;
list_for_each_entry(page, &list, lru) {
/* restore LRU_REFS_FLAGS cleared by isolate_page() */
if (PageWorkingset(page))
SetPageReferenced(page);
list_for_each_entry_safe_reverse(page, next, &list, lru) {
if (!page_evictable(page)) {
list_del(&page->lru);
putback_lru_page(page);
continue;
}
/* don't add rejected pages to the oldest generation */
if (PageReclaim(page) &&
(PageDirty(page) || PageWriteback(page)))
ClearPageActive(page);
else
SetPageActive(page);
(PageDirty(page) || PageWriteback(page))) {
/* restore LRU_REFS_FLAGS cleared by isolate_page() */
if (PageWorkingset(page))
SetPageReferenced(page);
continue;
}
if (skip_retry || PageActive(page) || PageReferenced(page) ||
page_mapped(page) || PageLocked(page) ||
PageDirty(page) || PageWriteback(page)) {
/* don't add rejected pages to the oldest generation */
set_mask_bits(&page->flags, LRU_REFS_MASK | LRU_REFS_FLAGS,
BIT(PG_active));
continue;
}
/* retry pages that may have missed rotate_reclaimable_page() */
list_move(&page->lru, &clean);
sc->nr_scanned -= thp_nr_pages(page);
}
spin_lock_irq(&lruvec->lru_lock);
@@ -4834,7 +4842,13 @@ static int evict_pages(struct lruvec *lruvec, struct scan_control *sc, int swapp
mem_cgroup_uncharge_list(&list);
free_unref_page_list(&list);
sc->nr_reclaimed += reclaimed;
INIT_LIST_HEAD(&list);
list_splice_init(&clean, &list);
if (!list_empty(&list)) {
skip_retry = true;
goto retry;
}
if (need_swapping && type == LRU_GEN_ANON)
*need_swapping = true;
@@ -5589,7 +5603,6 @@ void lru_gen_init_lruvec(struct lruvec *lruvec)
INIT_LIST_HEAD(&lrugen->lists[gen][type][zone]);
lruvec->mm_state.seq = MIN_NR_GENS;
init_waitqueue_head(&lruvec->mm_state.wait);
}
#ifdef CONFIG_MEMCG


@@ -1724,7 +1724,8 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
struct per_cpu_pages *pcp;
struct per_cpu_zonestat __maybe_unused *pzstats;
pcp = per_cpu_ptr(zone->per_cpu_pageset, i);
pcp = &per_cpu_ptr((struct per_cpu_pages_ext __percpu *)zone->per_cpu_pageset,
i)->pcp;
seq_printf(m,
"\n cpu: %i"
"\n count: %i"


@@ -272,6 +272,8 @@ static void lru_gen_refault(struct page *page, void *shadow)
lruvec = mem_cgroup_lruvec(memcg, pgdat);
lrugen = &lruvec->lrugen;
mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + type, delta);
min_seq = READ_ONCE(lrugen->min_seq[type]);
if ((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)))
goto unlock;
@@ -282,7 +284,7 @@ static void lru_gen_refault(struct page *page, void *shadow)
tier = lru_tier_from_refs(refs);
atomic_long_add(delta, &lrugen->refaulted[hist][type][tier]);
mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + type, delta);
mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type, delta);
/*
* Count the following two cases as stalls:


@@ -300,6 +300,10 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
write_unlock(&xen_9pfs_lock);
for (i = 0; i < priv->num_rings; i++) {
struct xen_9pfs_dataring *ring = &priv->rings[i];
cancel_work_sync(&ring->work);
if (!priv->rings[i].intf)
break;
if (priv->rings[i].irq > 0)


@@ -2584,7 +2584,7 @@ static void ieee80211_deliver_skb_to_local_stack(struct sk_buff *skb,
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
bool noencrypt = !(status->flag & RX_FLAG_DECRYPTED);
cfg80211_rx_control_port(dev, skb, noencrypt);
cfg80211_rx_control_port(dev, skb, noencrypt, -1);
dev_kfree_skb(skb);
} else {
struct ethhdr *ehdr = (void *)skb_mac_header(skb);


@@ -4787,12 +4787,24 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
}
}
void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
{
if (nft_set_is_anonymous(set))
nft_clear(ctx->net, set);
set->use++;
}
EXPORT_SYMBOL_GPL(nf_tables_activate_set);
void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
struct nft_set_binding *binding,
enum nft_trans_phase phase)
{
switch (phase) {
case NFT_TRANS_PREPARE:
if (nft_set_is_anonymous(set))
nft_deactivate_next(ctx->net, set);
set->use--;
return;
case NFT_TRANS_ABORT:


@@ -342,7 +342,7 @@ static void nft_dynset_activate(const struct nft_ctx *ctx,
{
struct nft_dynset *priv = nft_expr_priv(expr);
priv->set->use++;
nf_tables_activate_set(ctx, priv->set);
}
static void nft_dynset_destroy(const struct nft_ctx *ctx,


@@ -167,7 +167,7 @@ static void nft_lookup_activate(const struct nft_ctx *ctx,
{
struct nft_lookup *priv = nft_expr_priv(expr);
priv->set->use++;
nf_tables_activate_set(ctx, priv->set);
}
static void nft_lookup_destroy(const struct nft_ctx *ctx,


@@ -183,7 +183,7 @@ static void nft_objref_map_activate(const struct nft_ctx *ctx,
{
struct nft_objref_map *priv = nft_expr_priv(expr);
priv->set->use++;
nf_tables_activate_set(ctx, priv->set);
}
static void nft_objref_map_destroy(const struct nft_ctx *ctx,


@@ -18684,7 +18684,9 @@ EXPORT_SYMBOL(cfg80211_mgmt_tx_status_ext);
static int __nl80211_rx_control_port(struct net_device *dev,
struct sk_buff *skb,
bool unencrypted, gfp_t gfp)
bool unencrypted,
int link_id,
gfp_t gfp)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
@@ -18716,6 +18718,8 @@ static int __nl80211_rx_control_port(struct net_device *dev,
NL80211_ATTR_PAD) ||
nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, addr) ||
nla_put_u16(msg, NL80211_ATTR_CONTROL_PORT_ETHERTYPE, proto) ||
(link_id >= 0 &&
nla_put_u8(msg, NL80211_ATTR_MLO_LINK_ID, link_id)) ||
(unencrypted && nla_put_flag(msg,
NL80211_ATTR_CONTROL_PORT_NO_ENCRYPT)))
goto nla_put_failure;
@@ -18734,13 +18738,14 @@ static int __nl80211_rx_control_port(struct net_device *dev,
return -ENOBUFS;
}
bool cfg80211_rx_control_port(struct net_device *dev,
struct sk_buff *skb, bool unencrypted)
bool cfg80211_rx_control_port(struct net_device *dev, struct sk_buff *skb,
bool unencrypted, int link_id)
{
int ret;
trace_cfg80211_rx_control_port(dev, skb, unencrypted);
ret = __nl80211_rx_control_port(dev, skb, unencrypted, GFP_ATOMIC);
trace_cfg80211_rx_control_port(dev, skb, unencrypted, link_id);
ret = __nl80211_rx_control_port(dev, skb, unencrypted, link_id,
GFP_ATOMIC);
trace_cfg80211_return_bool(ret == 0);
return ret == 0;
}


@@ -3165,14 +3165,15 @@ TRACE_EVENT(cfg80211_control_port_tx_status,
TRACE_EVENT(cfg80211_rx_control_port,
TP_PROTO(struct net_device *netdev, struct sk_buff *skb,
bool unencrypted),
TP_ARGS(netdev, skb, unencrypted),
bool unencrypted, int link_id),
TP_ARGS(netdev, skb, unencrypted, link_id),
TP_STRUCT__entry(
NETDEV_ENTRY
__field(int, len)
MAC_ENTRY(from)
__field(u16, proto)
__field(bool, unencrypted)
__field(int, link_id)
),
TP_fast_assign(
NETDEV_ASSIGN;
@@ -3180,10 +3181,12 @@ TRACE_EVENT(cfg80211_rx_control_port,
MAC_ASSIGN(from, eth_hdr(skb)->h_source);
__entry->proto = be16_to_cpu(skb->protocol);
__entry->unencrypted = unencrypted;
__entry->link_id = link_id;
),
TP_printk(NETDEV_PR_FMT ", len=%d, %pM, proto: 0x%x, unencrypted: %s",
TP_printk(NETDEV_PR_FMT ", len=%d, %pM, proto: 0x%x, unencrypted: %s, link: %d",
NETDEV_PR_ARG, __entry->len, __entry->from,
__entry->proto, BOOL_TO_STR(__entry->unencrypted))
__entry->proto, BOOL_TO_STR(__entry->unencrypted),
__entry->link_id)
);
TRACE_EVENT(cfg80211_cqm_rssi_notify,
