* 4.9/tmp-8dd0f52:
Linux 4.9.72
sparc32: Export vac_cache_size to fix build error
bpf: fix incorrect sign extension in check_alu_op()
bpf: reject out-of-bounds stack pointer calculation
bpf: fix branch pruning logic
bpf: adjust insn_aux_data when patching insns
Revert "Bluetooth: btusb: driver to enable the usb-wakeup feature"
platform/x86: asus-wireless: send an EV_SYN/SYN_REPORT between state changes
MIPS: math-emu: Fix final emulation phase for certain instructions
thermal/drivers/hisi: Fix multiple alarm interrupts firing
thermal/drivers/hisi: Simplify the temperature/step computation
thermal/drivers/hisi: Fix kernel panic on alarm interrupt
thermal/drivers/hisi: Fix missing interrupt enablement
thermal: hisilicon: Handle return value of clk_prepare_enable
cpuidle: fix broadcast control when broadcast can not be entered
rtc: set the alarm to the next expiring timer
tcp: fix under-evaluated ssthresh in TCP Vegas
clk: sunxi-ng: sun6i: Rename HDMI DDC clock to avoid name collision
staging: greybus: light: Release memory obtained by kasprintf
net: ipv6: send NS for DAD when link operationally up
fm10k: ensure we process SM mbx when processing VF mbx
vfio/pci: Virtualize Maximum Payload Size
scsi: lpfc: PLOGI failures during NPIV testing
scsi: lpfc: Fix secure firmware updates
fm10k: fix mis-ordered parameters in declaration for .ndo_set_vf_bw
ASoC: img-parallel-out: Add pm_runtime_get/put to set_fmt callback
tracing: Exclude 'generic fields' from histograms
PCI/AER: Report non-fatal errors only to the affected endpoint
IB/rxe: check for allocation failure on elem
ixgbe: fix use of uninitialized padding
igb: check memory allocation failure
PM / OPP: Move error message to debug level
PCI: Create SR-IOV virtfn/physfn links before attaching driver
scsi: mpt3sas: Fix IO error occurs on pulling out a drive from RAID1 volume created on two SATA drive
scsi: cxgb4i: fix Tx skb leak
PCI: Avoid bus reset if bridge itself is broken
net: phy: at803x: Change error to EINVAL for invalid MAC
kvm, mm: account kvm related kmem slabs to kmemcg
rtc: pl031: make interrupt optional
crypto: crypto4xx - increase context and scatter ring buffer elements
backlight: pwm_bl: Fix overflow condition
bnxt_en: Fix NULL pointer dereference in reopen failure path
cpuidle: powernv: Pass correct drv->cpumask for registration
ARM: dma-mapping: disallow dma_get_sgtable() for non-kernel managed memory
Btrfs: fix an integer overflow check
netfilter: nfnetlink_queue: fix secctx memory leak
xhci: plat: Register shutdown for xhci_plat
net: moxa: fix TX overrun memory leak
isdn: kcapi: avoid uninitialized data
virtio_balloon: prevent uninitialized variable use
virtio-balloon: use actual number of stats for stats queue buffers
KVM: pci-assign: do not map smm memory slot pages in vt-d page tables
net: ipconfig: fix ic_close_devs() use-after-free
cpufreq: Fix creation of symbolic links to policy directories
ARM: dts: am335x-evmsk: adjust mmc2 param to allow suspend
netfilter: nf_nat_snmp: Fix panic when snmp_trap_helper fails to register
netfilter: nfnl_cthelper: fix a race when walk the nf_ct_helper_hash table
irda: vlsi_ir: fix check for DMA mapping errors
RDMA/iser: Fix possible mr leak on device removal event
i40e: Do not enable NAPI on q_vectors that have no rings
IB/rxe: increment msn only when completing a request
IB/rxe: double free on error
net: Do not allow negative values for busy_read and busy_poll sysctl interfaces
nbd: set queue timeout properly
infiniband: Fix alignment of mmap cookies to support VIPT caching
IB/core: Protect against self-requeue of a cq work item
i40iw: Receive netdev events post INET_NOTIFIER state
bna: avoid writing uninitialized data into hw registers
s390/qeth: no ETH header for outbound AF_IUCV
s390/qeth: size calculation outbound buffers
r8152: prevent the driver from transmitting packets with carrier off
ASoC: STI: Fix reader substream pointer set
HID: xinmo: fix for out of range for THT 2P arcade controller.
hwmon: (asus_atk0110) fix uninitialized data access
ARM: dts: ti: fix PCI bus dtc warnings
KVM: VMX: Fix enable VPID conditions
KVM: x86: correct async page present tracepoint
kvm: vmx: Flush TLB when the APIC-access address changes
scsi: lpfc: Fix PT2PT PRLI reject
pinctrl: st: add irq_request/release_resources callbacks
inet: frag: release spinlock before calling icmp_send()
tipc: fix nametbl deadlock at tipc_nametbl_unsubscribe
r8152: fix the rx early size of RTL8153
iommu/exynos: Workaround FLPD cache flush issues for SYSMMU v5
netfilter: nfnl_cthelper: Fix memory leak
netfilter: nfnl_cthelper: fix runtime expectation policy updates
usb: gadget: udc: remove pointer dereference after free
usb: gadget: f_uvc: Sanity check wMaxPacketSize for SuperSpeed
hwmon: (max31790) Set correct PWM value
net: qmi_wwan: Add USB IDs for MDM6600 modem on Motorola Droid 4
sctp: out_qlen should be updated when pruning unsent queue
bna: integer overflow bug in debugfs
sch_dsmark: fix invalid skb_cow() usage
vsock: cancel packets when failing to connect
vhost-vsock: add pkt cancel capability
vsock: track pkt owner vsock
crypto: deadlock between crypto_alg_sem/rtnl_mutex/genl_mutex
r8152: fix the list rx_done may be used without initialization
cpuidle: Validate cpu_dev in cpuidle_add_sysfs()
nvme-loop: handle cpu unplug when re-establishing the controller
arm: kprobes: Align stack to 8-bytes in test code
arm: kprobes: Fix the return address of multiple kretprobes
HID: corsair: Add driver Scimitar Pro RGB gaming mouse 1b1c:1b3e support to hid-corsair
HID: corsair: support for K65-K70 Rapidfire and Scimitar Pro RGB
kvm: fix usage of uninit spinlock in avic_vm_destroy()
ALSA: hda - add support for docking station for HP 840 G3
ALSA: hda - add support for docking station for HP 820 G2
arm64: Initialise high_memory global variable earlier
cxl: Check if vphb exists before iterating over AFU devices
Linux 4.9.71
ath9k: fix tx99 potential info leak
icmp: don't fail on fragment reassembly time exceeded
IB/ipoib: Grab rtnl lock on heavy flush when calling ndo_open/stop
RDMA/cma: Avoid triggering undefined behavior
macvlan: Only deliver one copy of the frame to the macvlan interface
udf: Avoid overflow when session starts at large offset
scsi: bfa: integer overflow in debugfs
scsi: sd: change allow_restart to bool in sysfs interface
scsi: sd: change manage_start_stop to bool in sysfs interface
rtl8188eu: Fix a possible sleep-in-atomic bug in rtw_disassoc_cmd
rtl8188eu: Fix a possible sleep-in-atomic bug in rtw_createbss_cmd
vt6655: Fix a possible sleep-in-atomic bug in vt6655_suspend
IB/core: Fix calculation of maximum RoCE MTU
scsi: scsi_devinfo: Add REPORTLUN2 to EMC SYMMETRIX blacklist entry
raid5: Set R5_Expanded on parity devices as well as data.
pinctrl: adi2: Fix Kconfig build problem
usb: musb: da8xx: fix babble condition handling
tty fix oops when rmmod 8250
soc: mediatek: pwrap: fix compiler errors
powerpc/perf/hv-24x7: Fix incorrect comparison in memord
scsi: hpsa: destroy sas transport properties before scsi_host
scsi: hpsa: cleanup sas_phy structures in sysfs when unloading
PCI: Detach driver before procfs & sysfs teardown on device remove
RDMA/cxgb4: Declare stag as __be32
xfs: fix incorrect extent state in xfs_bmap_add_extent_unwritten_real
xfs: fix log block underflow during recovery cycle verification
l2tp: cleanup l2tp_tunnel_delete calls
nvme: use kref_get_unless_zero in nvme_find_get_ns
platform/x86: hp_accel: Add quirk for HP ProBook 440 G4
btrfs: tests: Fix a memory leak in error handling path in 'run_test()'
arm64: prevent regressions in compressed kernel image size when upgrading to binutils 2.27
Ib/hfi1: Return actual operational VLs in port info query
bcache: fix wrong cache_misses statistics
bcache: explicitly destroy mutex while exiting
GFS2: Take inode off order_write list when setting jdata flag
scsi: scsi_debug: write_same: fix error report
thermal/drivers/step_wise: Fix temperature regulation misbehavior
ASoC: rsnd: rsnd_ssi_run_mods() needs to care ssi_parent_mod
ppp: Destroy the mutex when cleanup
clk: tegra: Fix cclk_lp divisor register
clk: hi6220: mark clock cs_atb_syspll as critical
clk: imx6: refine hdmi_isfr's parent to make HDMI work on i.MX6 SoCs w/o VPU
clk: mediatek: add the option for determining PLL source clock
mm: Handle 0 flags in _calc_vm_trans() macro
crypto: tcrypt - fix buffer lengths in test_aead_speed()
arm-ccn: perf: Prevent module unload while PMU is in use
xfs: truncate pagecache before writeback in xfs_setattr_size()
iommu/amd: Limit the IOVA page range to the specified addresses
badblocks: fix wrong return value in badblocks_set if badblocks are disabled
target/file: Do not return error for UNMAP if length is zero
target:fix condition return in core_pr_dump_initiator_port()
iscsi-target: fix memory leak in lio_target_tiqn_addtpg()
target/iscsi: Fix a race condition in iscsit_add_reject_from_cmd()
platform/x86: intel_punit_ipc: Fix resource ioremap warning
powerpc/ipic: Fix status get and status clear
powerpc/opal: Fix EBUSY bug in acquiring tokens
netfilter: ipvs: Fix inappropriate output of procfs
iommu/mediatek: Fix driver name
PCI: Do not allocate more buses than available in parent
powerpc/powernv/cpufreq: Fix the frequency read by /proc/cpuinfo
PCI/PME: Handle invalid data when reading Root Status
dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type
ASoC: Intel: Skylake: Fix uuid_module memory leak in failure case
rtc: pcf8563: fix output clock rate
video: fbdev: au1200fb: Return an error code if a memory allocation fails
video: fbdev: au1200fb: Release some resources if a memory allocation fails
video: udlfb: Fix read EDID timeout
fbdev: controlfb: Add missing modes to fix out of bounds access
sfc: don't warn on successful change of MAC
HID: cp2112: fix broken gpio_direction_input callback
Revert "x86/acpi: Set persistent cpuid <-> nodeid mapping when booting"
target: fix race during implicit transition work flushes
target: fix ALUA transition timeout handling
target: Use system workqueue for ALUA transitions
btrfs: add missing memset while reading compressed inline extents
NFSv4.1 respect server's max size in CREATE_SESSION
efi/esrt: Cleanup bad memory map log messages
perf symbols: Fix symbols__fixup_end heuristic for corner cases
tty: fix data race in tty_ldisc_ref_wait()
tty: don't panic on OOM in tty_set_ldisc()
rxrpc: Ignore BUSY packets on old calls
net: mpls: Fix nexthop alive tracking on down events
net/mlx4_core: Avoid delays during VF driver device shutdown
nvmet-rdma: Fix a possible uninitialized variable dereference
nvmet: confirm sq percpu has scheduled and switched to atomic
nvme-loop: fix a possible use-after-free when destroying the admin queue
afs: Fix abort on signal while waiting for call completion
afs: Fix afs_kill_pages()
afs: Fix page leak in afs_write_begin()
afs: Populate and use client modification time
afs: Better abort and net error handling
afs: Invalid op ID should abort with RXGEN_OPCODE
afs: Fix the maths in afs_fs_store_data()
afs: Prevent callback expiry timer overflow
afs: Migrate vlocation fields to 64-bit
afs: Flush outstanding writes when an fd is closed
afs: Deal with an empty callback array
afs: Adjust mode bits processing
afs: Populate group ID from vnode status
afs: Fix missing put_page()
drm/radeon: reinstate oland workaround for sclk
mmc: mediatek: Fixed bug where clock frequency could be set wrong
sched/deadline: Use deadline instead of period when calculating overflow
sched/deadline: Throttle a constrained deadline task activated after the deadline
sched/deadline: Make sure the replenishment timer fires in the next period
sched/deadline: Add missing update_rq_clock() in dl_task_timer()
iwlwifi: mvm: cleanup pending frames in DQA mode
Drivers: hv: util: move waiting for release to hv_utils_transport itself
drm/radeon/si: add dpm quirk for Oland
fjes: Fix wrong netdevice feature flags
scsi: hpsa: do not timeout reset operations
scsi: hpsa: limit outstanding rescans
scsi: hpsa: update check for logical volume status
ASoC: rcar: clear DE bit only in PDMACHCR when it stops
openrisc: fix issue handling 8 byte get_user calls
intel_th: pci: Add Gemini Lake support
drm: amd: remove broken include path
qed: Fix interrupt flags on Rx LL2
qed: Fix mapping leak on LL2 rx flow
qed: Align CIDs according to DORQ requirement
mlxsw: reg: Fix SPVMLR max record count
mlxsw: reg: Fix SPVM max record count
net: Resend IGMP memberships upon peer notification.
irqchip/mvebu-odmi: Select GENERIC_MSI_IRQ_DOMAIN
dmaengine: Fix array index out of bounds warning in __get_unmap_pool()
net: wimax/i2400m: fix NULL-deref at probe
writeback: fix memory leak in wb_queue_work()
blk-mq: Fix tagset reinit in the presence of cpu hot-unplug
ASoC: rsnd: fix sound route path when using SRC6/SRC9
netfilter: bridge: honor frag_max_size when refragmenting
drm/omap: fix dmabuf mmap for dma_alloc'ed buffers
Input: i8042 - add TUXEDO BU1406 (N24_25BU) to the nomux list
NFSD: fix nfsd_reset_versions for NFSv4.
NFSD: fix nfsd_minorversion(.., NFSD_AVAIL)
drm/amdgpu: fix parser init error path to avoid crash in parser fini
iommu/io-pgtable-arm-v7s: Check for leaf entry before dereferencing it
net/mlx5: Don't save PCI state when PCI error is detected
net/mlx5: Fix create autogroup prev initializer
rxrpc: Wake up the transmitter if Rx window size increases on the peer
net: bcmgenet: Power up the internal PHY before probing the MII
net: bcmgenet: synchronize irq0 status between the isr and task
net: bcmgenet: power down internal phy if open or resume fails
net: bcmgenet: reserved phy revisions must be checked first
net: bcmgenet: correct MIB access of UniMAC RUNT counters
net: bcmgenet: correct the RBUF_OVFL_CNT and RBUF_ERR_CNT MIB values
bnxt_en: Ignore 0 value in autoneg supported speed from firmware.
net: initialize msg.msg_flags in recvfrom
userfaultfd: selftest: vm: allow to build in vm/ directory
userfaultfd: shmem: __do_fault requires VM_FAULT_NOPAGE
md-cluster: free md_cluster_info if node leave cluster
usb: xhci-mtk: check hcc_params after adding primary hcd
KVM: nVMX: do not warn when MSR bitmap address is not backed
usb: phy: isp1301: Add OF device ID table
mac80211: Fix addition of mesh configuration element
ext4: fix crash when a directory's i_size is too small
ext4: fix fdatasync(2) after fallocate(2) operation
dmaengine: dmatest: move callback wait queue to thread context
eeprom: at24: change nvmem stride to 1
sched/rt: Do not pull from current CPU if only one CPU to pull
nfs: don't wait on commit in nfs_commit_inode() if there were no commit requests
xhci: Don't add a virt_dev to the devs array before it's fully allocated
Bluetooth: btusb: driver to enable the usb-wakeup feature
usb: xhci: fix TDS for MTK xHCI1.1
ceph: drop negative child dentries before try pruning inode's alias
usbip: fix stub_send_ret_submit() vulnerability to null transfer_buffer
usbip: fix stub_rx: harden CMD_SUBMIT path to handle malicious input
usb: add helper to extract bits 12:11 of wMaxPacketSize
usbip: fix stub_rx: get_pipe() to validate endpoint number
USB: core: prevent malicious bNumInterfaces overflow
USB: uas and storage: Add US_FL_BROKEN_FUA for another JMicron JMS567 ID
tracing: Allocate mask_str buffer dynamically
autofs: fix careless error in recent commit
crypto: salsa20 - fix blkcipher_walk API usage
crypto: hmac - require that the underlying hash algorithm is unkeyed
crypto: rsa - fix buffer overread when stripping leading zeroes
mfd: fsl-imx25: Clean up irq settings during removal
Linux 4.9.70
RDMA/cxgb4: Annotate r2 and stag as __be32
md: free unused memory after bitmap resize
audit: ensure that 'audit=1' actually enables audit for PID 1
ipvlan: fix ipv6 outbound device
kbuild: do not call cc-option before KBUILD_CFLAGS initialization
powerpc/64: Fix checksum folding in csum_tcpudp_nofold and ip_fast_csum_nofold
KVM: arm/arm64: vgic-its: Preserve the revious read from the pending table
fix kcm_clone()
usb: gadget: ffs: Forbid usb_ep_alloc_request from sleeping
s390: always save and restore all registers on context switch
ipmi: Stop timers before cleaning up the module
Fix handling of verdicts after NF_QUEUE
tipc: call tipc_rcv() only if bearer is up in tipc_udp_recv()
s390/qeth: fix thinko in IPv4 multicast address tracking
s390/qeth: fix GSO throughput regression
s390/qeth: build max size GSO skbs on L2 devices
tcp/dccp: block bh before arming time_wait timer
stmmac: reset last TSO segment size after device open
net: remove hlist_nulls_add_tail_rcu()
usbnet: fix alignment for frames with no ethernet header
net/packet: fix a race in packet_bind() and packet_notifier()
packet: fix crash in fanout_demux_rollover()
sit: update frag_off info
rds: Fix NULL pointer dereference in __rds_rdma_map
tipc: fix memory leak in tipc_accept_from_sock()
s390/qeth: fix early exit from error path
net: qmi_wwan: add Quectel BG96 2c7c:0296
ANDROID: dma-buf/sw_sync: Rename active_list to link
FROMLIST: android: binder: Fix null ptr dereference in debug msg
FROMLIST: android: binder: Move buffer out of area shared with user space
FROMLIST: android: binder: Add allocator selftest
FROMLIST: android: binder: Refactor prev and next buffer into a helper function
Linux 4.9.69
afs: Connect up the CB.ProbeUuid
IB/mlx5: Assign send CQ and recv CQ of UMR QP
IB/mlx4: Increase maximal message size under UD QP
xfrm: Copy policy family in clone_policy
jump_label: Invoke jump_label_test() via early_initcall()
atm: horizon: Fix irq release error
clk: uniphier: fix DAPLL2 clock rate of Pro5
bpf: fix lockdep splat
sctp: use the right sk after waking up from wait_buf sleep
sctp: do not free asoc when it is already dead in sctp_sendmsg
zsmalloc: calling zs_map_object() from irq is a bug
sparc64/mm: set fields in deferred pages
block: wake up all tasks blocked in get_request()
dt-bindings: usb: fix reg-property port-number range
xfs: fix forgotten rcu read unlock when skipping inode reclaim
sunrpc: Fix rpc_task_begin trace point
NFS: Fix a typo in nfs_rename()
dynamic-debug-howto: fix optional/omitted ending line number to be LARGE instead of 0
lib/genalloc.c: make the avail variable an atomic_long_t
drivers/rapidio/devices/rio_mport_cdev.c: fix resource leak in error handling path in 'rio_dma_transfer()'
route: update fnhe_expires for redirect when the fnhe exists
route: also update fnhe_genid when updating a route cache
gre6: use log_ecn_error module parameter in ip6_tnl_rcv()
mac80211_hwsim: Fix memory leak in hwsim_new_radio_nl()
x86/mpx/selftests: Fix up weird arrays
coccinelle: fix parallel build with CHECK=scripts/coccicheck
kbuild: pkg: use --transform option to prefix paths in tar
EDAC, i5000, i5400: Fix definition of NRECMEMB register
EDAC, i5000, i5400: Fix use of MTR_DRAM_WIDTH macro
powerpc/powernv/ioda2: Gracefully fail if too many TCE levels requested
drm/amd/amdgpu: fix console deadlock if late init failed
axonram: Fix gendisk handling
netfilter: don't track fragmented packets
zram: set physical queue limits to avoid array out of bounds accesses
blk-mq: initialize mq kobjects in blk_mq_init_allocated_queue()
i2c: riic: fix restart condition
crypto: s5p-sss - Fix completing crypto request in IRQ handler
ipv6: reorder icmpv6_init() and ip6_mr_init()
ibmvnic: Allocate number of rx/tx buffers agreed on by firmware
ibmvnic: Fix overflowing firmware/hardware TX queue
rds: tcp: Sequence teardown of listen and acceptor sockets to avoid races
bnx2x: do not rollback VF MAC/VLAN filters we did not configure
bnx2x: fix detection of VLAN filtering feature for VF
bnx2x: fix possible overrun of VFPF multicast addresses array
bnx2x: prevent crash when accessing PTP with interface down
spi_ks8995: regs_size incorrect for some devices
spi_ks8995: fix "BUG: key accdaa28 not in .data!"
KVM: arm/arm64: VGIC: Fix command handling while ITS being disabled
arm64: KVM: Survive unknown traps from guests
arm: KVM: Survive unknown traps from guests
KVM: nVMX: reset nested_run_pending if the vCPU is going to be reset
irqchip/crossbar: Fix incorrect type of register size
scsi: lpfc: Fix crash during Hardware error recovery on SLI3 adapters
scsi: qla2xxx: Fix ql_dump_buffer
workqueue: trigger WARN if queue_delayed_work() is called with NULL @wq
libata: drop WARN from protocol error in ata_sff_qc_issue()
kvm: nVMX: VMCLEAR should not cause the vCPU to shut down
usb: gadget: udc: net2280: Fix tmp reusage in net2280 driver
usb: gadget: pxa27x: Test for a valid argument pointer
usb: dwc3: gadget: Fix system suspend/resume on TI platforms
USB: gadgetfs: Fix a potential memory leak in 'dev_config()'
usb: gadget: configs: plug memory leak
HID: chicony: Add support for another ASUS Zen AiO keyboard
gpio: altera: Use handle_level_irq when configured as a level_high
ASoC: rcar: avoid SSI_MODEx settings for SSI8
ARM: OMAP2+: Release device node after it is no longer needed.
ARM: OMAP2+: Fix device node reference counts
powerpc/64: Fix checksum folding in csum_add()
module: set __jump_table alignment to 8
lirc: fix dead lock between open and wakeup_filter
powerpc: Fix compiling a BE kernel with a powerpc64le toolchain
selftest/powerpc: Fix false failures for skipped tests
powerpc/64: Invalidate process table caching after setting process table
x86/hpet: Prevent might sleep splat on resume
sched/fair: Make select_idle_cpu() more aggressive
x86/platform/uv/BAU: Fix HUB errors by remove initial write to sw-ack register
x86/selftests: Add clobbers for int80 on x86_64
ARM: OMAP2+: gpmc-onenand: propagate error on initialization failure
vti6: Don't report path MTU below IPV6_MIN_MTU.
ARM: 8657/1: uaccess: consistently check object sizes
Revert "spi: SPI_FSL_DSPI should depend on HAS_DMA"
Revert "drm/armada: Fix compile fail"
mm: drop unused pmdp_huge_get_and_clear_notify()
thp: fix MADV_DONTNEED vs. numa balancing race
thp: reduce indentation level in change_huge_pmd()
ARM: avoid faulting on qemu
ARM: BUG if jumping to usermode address in kernel mode
usb: f_fs: Force Reserved1=1 in OS_DESC_EXT_COMPAT
crypto: talitos - fix ctr-aes-talitos
crypto: talitos - fix use of sg_link_tbl_len
crypto: talitos - fix AEAD for sha224 on non sha224 capable chips
crypto: talitos - fix setkey to check key weakness
crypto: talitos - fix memory corruption on SEC2
crypto: talitos - fix AEAD test failures
bus: arm-ccn: fix module unloading Error: Removing state 147 which has instances left.
bus: arm-ccn: Fix use of smp_processor_id() in preemptible context
bus: arm-ccn: Check memory allocation failure
bus: arm-cci: Fix use of smp_processor_id() in preemptible context
arm64: fpsimd: Prevent registers leaking from dead tasks
KVM: arm/arm64: vgic-its: Check result of allocation before use
KVM: arm/arm64: vgic-irqfd: Fix MSI entry allocation
KVM: arm/arm64: Fix broken GICH_ELRSR big endian conversion
KVM: VMX: remove I/O port 0x80 bypass on Intel hosts
arm: KVM: Fix VTTBR_BADDR_MASK BUG_ON off-by-one
arm64: KVM: fix VTTBR_BADDR_MASK BUG_ON off-by-one
media: dvb: i2c transfers over usb cannot be done from stack
drm/exynos: gem: Drop NONCONTIG flag for buffers allocated without IOMMU
kdb: Fix handling of kallsyms_symbol_next() return value
brcmfmac: change driver unbind order of the sdio function devices
powerpc/64s: Initialize ISAv3 MMU registers before setting partition table
KVM: s390: Fix skey emulation permission check
s390: fix compat system call table
smp/hotplug: Move step CPUHP_AP_SMPCFD_DYING to the correct place
iommu/vt-d: Fix scatterlist offset handling
ALSA: usb-audio: Add check return value for usb_string()
ALSA: usb-audio: Fix out-of-bound error
ALSA: seq: Remove spurious WARN_ON() at timer check
ALSA: pcm: prevent UAF in snd_pcm_info
btrfs: fix missing error return in btrfs_drop_snapshot
KVM: x86: fix APIC page invalidation
x86/PCI: Make broadcom_postcore_init() check acpi_disabled
X.509: fix comparisons of ->pkey_algo
X.509: reject invalid BIT STRING for subjectPublicKey
KEYS: add missing permission check for request_key() destination
ASN.1: check for error from ASN1_OP_END__ACT actions
ASN.1: fix out-of-bounds read when parsing indefinite length item
efi/esrt: Use memunmap() instead of kfree() to free the remapping
efi: Move some sysfs files to be read-only by root
scsi: libsas: align sata_device's rps_resp on a cacheline
scsi: use dma_get_cache_alignment() as minimum DMA alignment
scsi: dma-mapping: always provide dma_get_cache_alignment
isa: Prevent NULL dereference in isa_bus driver callbacks
hv: kvp: Avoid reading past allocated blocks from KVP file
virtio: release virtio index when fail to device_register
can: usb_8dev: cancel urb on -EPIPE and -EPROTO
can: esd_usb2: cancel urb on -EPIPE and -EPROTO
can: ems_usb: cancel urb on -EPIPE and -EPROTO
can: kvaser_usb: cancel urb on -EPIPE and -EPROTO
can: kvaser_usb: ratelimit errors if incomplete messages are received
can: kvaser_usb: Fix comparison bug in kvaser_usb_read_bulk_callback()
can: kvaser_usb: free buf in error paths
can: ti_hecc: Fix napi poll return value for repoll
usb: gadget: udc: renesas_usb3: fix number of the pipes
ANDROID: Revert "arm64: move ELF_ET_DYN_BASE to 4GB / 4MB"
ANDROID: Revert "arm: move ELF_ET_DYN_BASE to 4MB"
Linux 4.9.68
xen-netfront: avoid crashing on resume after a failure in talk_to_netback()
usb: host: fix incorrect updating of offset
USB: usbfs: Filter flags passed in from user space
USB: devio: Prevent integer overflow in proc_do_submiturb()
USB: Increase usbfs transfer limit
USB: core: Add type-specific length check of BOS descriptors
usb: xhci: fix panic in xhci_free_virt_devices_depth_first
usb: hub: Cycle HUB power when initialization fails
dma-buf: Update kerneldoc for sync_file_create
dma-buf/sync_file: hold reference to fence when creating sync_file
dma-buf/sw_sync: force signal all unsignaled fences on dying timeline
dma-fence: Introduce drm_fence_set_error() helper
dma-fence: Wrap querying the fence->status
dma-fence: Clear fence->status during dma_fence_init()
dma-buf/sw_sync: clean up list before signaling the fence
dma-buf/sw_sync: move timeline_fence_ops around
dma-buf/sw-sync: Use an rbtree to sort fences in the timeline
dma-buf/sw-sync: Fix locking around sync_timeline lists
dma-buf/sw-sync: sync_pt is private and of fixed size
dma-buf/sw-sync: Reduce irqsave/irqrestore from known context
dma-buf/sw-sync: Prevent user overflow on timeline advance
dma-buf/sw-sync: Fix the is-signaled test to handle u32 wraparound
dma-buf/dma-fence: Extract __dma_fence_is_later()
net: fec: fix multicast filtering hardware setup
xen-netback: vif counters from int/long to u64
cec: initiator should be the same as the destination for, poll
xen-netfront: Improve error handling during initialization
mm: avoid returning VM_FAULT_RETRY from ->page_mkwrite handlers
vfio/spapr: Fix missing mutex unlock when creating a window
be2net: fix initial MAC setting
net: thunderx: avoid dereferencing xcv when NULL
net: phy: micrel: KSZ8795 do not set SUPPORTED_[Asym_]Pause
gtp: fix cross netns recv on gtp socket
gtp: clear DF bit on GTP packet tx
nvmet: cancel fatal error and flush async work before free controller
i2c: i2c-cadence: Initialize configuration before probing devices
tcp: correct memory barrier usage in tcp_check_space()
dmaengine: pl330: fix double lock
tipc: fix cleanup at module unload
tipc: fix nametbl_lock soft lockup at module exit
RDMA/qedr: Fix RDMA CM loopback
RDMA/qedr: Return success when not changing QP state
mac80211: don't try to sleep in rate_control_rate_init()
drm/amdgpu: fix unload driver issue for virtual display
x86/fpu: Set the xcomp_bv when we fake up a XSAVES area
net: sctp: fix array overrun read on sctp_timer_tbl
drm/exynos/decon5433: set STANDALONE_UPDATE_F on output enablement
drm/amdgpu: fix bug set incorrect value to vce register
qla2xxx: Fix wrong IOCB type assumption
powerpc/mm: Fix memory hotplug BUG() on radix
perf/x86/intel: Account interrupts for PEBS errors
NFSv4: Fix client recovery when server reboots multiple times
mac80211: prevent skb/txq mismatch
KVM: arm/arm64: Fix occasional warning from the timer work function
drm/exynos/decon5433: set STANDALONE_UPDATE_F also if planes are disabled
drm/exynos/decon5433: update shadow registers iff there are active windows
nfs: Don't take a reference on fl->fl_file for LOCK operation
ravb: Remove Rx overflow log messages
mac80211: calculate min channel width correctly
mm: fix remote numa hits statistics
net: qrtr: Mark 'buf' as little endian
libfs: Modify mount_pseudo_xattr to be clear it is not a userspace mount
net/appletalk: Fix kernel memory disclosure
be2net: fix unicast list filling
be2net: fix accesses to unicast list
vti6: fix device register to report IFLA_INFO_KIND
ARM: OMAP1: DMA: Correct the number of logical channels
ARM: OMAP2+: Fix WL1283 Bluetooth Baud Rate
net: systemport: Pad packet before inserting TSB
net: systemport: Utilize skb_put_padto()
libcxgb: fix error check for ip6_route_output()
usb: gadget: f_fs: Fix ExtCompat descriptor validation
dmaengine: stm32-dma: Fix null pointer dereference in stm32_dma_tx_status
dmaengine: stm32-dma: Set correct args number for DMA request from DT
l2tp: take remote address into account in l2tp_ip and l2tp_ip6 socket lookups
net/mlx4_en: Fix type mismatch for 32-bit systems
dax: Avoid page invalidation races and unnecessary radix tree traversals
iio: adc: ti-ads1015: add 10% to conversion wait time
tools include: Do not use poison with C++
kprobes/x86: Disable preemption in ftrace-based jprobes
perf test attr: Fix ignored test case result
usbip: tools: Install all headers needed for libusbip development
sysrq : fix Show Regs call trace on ARM
EDAC, sb_edac: Fix missing break in switch
x86/entry: Use SYSCALL_DEFINE() macros for sys_modify_ldt()
serial: 8250: Preserve DLD[7:4] for PORT_XR17V35X
usb: phy: tahvo: fix error handling in tahvo_usb_probe()
mmc: sdhci-msm: fix issue with power irq
spi: spi-axi: fix potential use-after-free after deregistration
spi: sh-msiof: Fix DMA transfer size check
staging: rtl8188eu: avoid a null dereference on pmlmepriv
serial: 8250_fintek: Fix rs485 disablement on invalid ioctl()
m68k: fix ColdFire node shift size calculation
staging: greybus: loopback: Fix iteration count on async path
selftests/x86/ldt_get: Add a few additional tests for limits
s390/pci: do not require AIS facility
ima: fix hash algorithm initialization
USB: serial: option: add Quectel BG96 id
s390/runtime instrumentation: simplify task exit handling
serial: 8250_pci: Add Amazon PCI serial device ID
usb: quirks: Add no-lpm quirk for KY-688 USB 3.1 Type-C Hub
uas: Always apply US_FL_NO_ATA_1X quirk to Seagate devices
mm, oom_reaper: gather each vma to prevent leaking TLB entry
Revert "crypto: caam - get rid of tasklet"
drm/fsl-dcu: enable IRQ before drm_atomic_helper_resume()
drm/fsl-dcu: avoid disabling pixel clock twice on suspend
bcache: recover data from backing when data is clean
bcache: only permit to recovery read error when cache device is clean
Linux 4.9.67
drm/i915: Prevent zero length "index" write
drm/i915: Don't try indexed reads to alternate slave addresses
NFS: revalidate "." etc correctly on "open".
Revert "x86/entry/64: Add missing irqflags tracing to native_load_gs_index()"
drm/amd/pp: fix typecast error in powerplay.
drm/ttm: once more fix ttm_buffer_object_transfer
drm/hisilicon: Ensure LDI regs are properly configured.
drm/panel: simple: Add missing panel_simple_unprepare() calls
drm/radeon: fix atombios on big endian
drm/amdgpu: Potential uninitialized variable in amdgpu_vm_update_directories()
drm/amdgpu: potential uninitialized variable in amdgpu_vce_ring_parse_cs()
Revert "drm/radeon: dont switch vt on suspend"
nvme-pci: add quirk for delay before CHK RDY for WDC SN200
hwmon: (jc42) optionally try to disable the SMBUS timeout
bcache: Fix building error on MIPS
i2c: i801: Fix Failed to allocate irq -2147483648 error
eeprom: at24: check at24_read/write arguments
eeprom: at24: correctly set the size for at24mac402
eeprom: at24: fix reading from 24MAC402/24MAC602
mmc: core: prepend 0x to OCR entry in sysfs
mmc: core: Do not leave the block driver in a suspended state
KVM: lapic: Fixup LDR on load in x2apic
KVM: lapic: Split out x2apic ldr calculation
KVM: x86: inject exceptions produced by x86_decode_insn
KVM: x86: Exit to user-mode on #UD intercept when emulator requires
KVM: x86: pvclock: Handle first-time write to pvclock-page contains random junk
ARM: OMAP2+: Fix WL1283 Bluetooth Baud Rate
mfd: twl4030-power: Fix pmic for boards that need vmmc1 on reboot
nfsd: fix panic in posix_unblock_lock called from nfs4_laundromat
nfsd: Fix another OPEN stateid race
nfsd: Fix stateid races between OPEN and CLOSE
btrfs: clear space cache inode generation always
mm/madvise.c: fix madvise() infinite loop under special circumstances
mm, hugetlbfs: introduce ->split() to vm_operations_struct
mm/cma: fix alloc_contig_range ret code/potential leak
mm, thp: Do not make page table dirty unconditionally in touch_p[mu]d()
ARM: dts: omap3: logicpd-torpedo-37xx-devkit: Fix MMC1 cd-gpio
ARM: dts: LogicPD Torpedo: Fix camera pin mux
Linux 4.9.66
xen: xenbus driver must not accept invalid transaction ids
nvmet: fix KATO offset in Set Features
cec: update log_addr[] before finishing configuration
cec: CEC_MSG_GIVE_FEATURES should abort for CEC version < 2
cec: when canceling a message, don't overwrite old status info
s390/kbuild: enable modversions for symbols exported from asm
ASoC: wm_adsp: Don't overrun firmware file buffer when reading region data
btrfs: return the actual error value from from btrfs_uuid_tree_iterate
crypto: marvell - Copy IVDIG before launching partial DMA ahash requests
ASoC: rsnd: don't double free kctrl
netfilter: nf_tables: fix oob access
netfilter: nft_queue: use raw_smp_processor_id()
spi: SPI_FSL_DSPI should depend on HAS_DMA
staging: iio: cdc: fix improper return value
iio: light: fix improper return value
adm80211: add checks for dma mapping errors
mac80211: Suppress NEW_PEER_CANDIDATE event if no room
mac80211: Remove invalid flag operations in mesh TSF synchronization
drm/mediatek: don't use drm_put_dev
clk: qcom: ipq4019: Add all the frequencies for apss cpu
drm: Apply range restriction after color adjustment when allocation
gpio: mockup: dynamically allocate memory for chip name
ALSA: hda - Apply ALC269_FIXUP_NO_SHUTUP on HDA_FIXUP_ACT_PROBE
ath10k: set CTS protection VDEV param only if VDEV is up
bnxt_en: Set default completion ring for async events.
pinctrl: sirf: atlas7: Add missing 'of_node_put()' ath10k: fix potential memory leak in ath10k_wmi_tlv_op_pull_fw_stats() ath10k: ignore configuring the incorrect board_id ath10k: fix incorrect txpower set by P2P_DEVICE interface mwifiex: sdio: fix use after free issue for save_adapter adm80211: return an error if adm8211_alloc_rings() fails rt2800: set minimum MPDU and PSDU lengths to sane values drm/armada: Fix compile fail net: 3com: typhoon: typhoon_init_one: fix incorrect return values net: 3com: typhoon: typhoon_init_one: make return values more specific net: Allow IP_MULTICAST_IF to set index to L3 slave fscrypt: use ENOTDIR when setting encryption policy on nondirectory fscrypt: use ENOKEY when file cannot be created w/o key dmaengine: zx: set DMA_CYCLIC cap_mask bit clk: sunxi-ng: fix PLL_CPUX adjusting on A33 clk: sunxi-ng: A31: Fix spdif clock register drm/sun4i: Fix a return value in case of error PCI: Apply _HPX settings only to relevant devices RDS: RDMA: fix the ib_map_mr_sg_zbva() argument RDS: RDMA: return appropriate error on rdma map failures RDS: make message size limit compliant with spec e1000e: Avoid receiver overrun interrupt bursts e1000e: Separate signaling for link check/link up e1000e: Fix return value test e1000e: Fix error path in link detection Revert "drm/i915: Do not rely on wm preservation for ILK watermarks" PM / OPP: Add missing of_node_put(np) net/9p: Switch to wait_event_killable() fscrypt: lock mutex before checking for bounce page pool sched/rt: Simplify the IPI based RT balancing logic media: v4l2-ctrl: Fix flags field on Control events cx231xx-cards: fix NULL-deref on missing association descriptor media: rc: check for integer overflow media: Don't do DMA on stack for firmware upload in the AS102 driver powerpc/signal: Properly handle return value from uprobe_deny_signal() parisc: Fix validity check of pointer size argument in new CAS implementation ixgbe: Fix skb list corruption on Power systems fm10k: Use smp_rmb rather 
than read_barrier_depends i40evf: Use smp_rmb rather than read_barrier_depends ixgbevf: Use smp_rmb rather than read_barrier_depends igbvf: Use smp_rmb rather than read_barrier_depends igb: Use smp_rmb rather than read_barrier_depends i40e: Use smp_rmb rather than read_barrier_depends NFC: fix device-allocation error return IB/srp: Avoid that a cable pull can trigger a kernel crash IB/srpt: Do not accept invalid initiator port names libnvdimm, namespace: make 'resource' attribute only readable by root libnvdimm, namespace: fix label initialization to use valid seq numbers libnvdimm, pfn: make 'resource' attribute only readable by root clk: ti: dra7-atl-clock: fix child-node lookups SUNRPC: Fix tracepoint storage issues with svc_recv and svc_rqst_status KVM: SVM: obey guest PAT KVM: nVMX: set IDTR and GDTR limits when loading L1 host state lockd: double unregister of inetaddr notifiers irqchip/gic-v3: Fix ppi-partitions lookup block: Fix a race between blk_cleanup_queue() and timeout handling p54: don't unregister leds when they are not initialized mtd: nand: mtk: fix infinite ECC decode IRQ issue mtd: nand: Fix writing mtdoops to nand flash. 
mtd: nand: omap2: Fix subpage write target: Fix QUEUE_FULL + SCSI task attribute handling iscsi-target: Fix non-immediate TMR reference leak fs/9p: Compare qid.path in v9fs_test_inode fix a page leak in vhost_scsi_iov_to_sgl() error recovery ALSA: hda/realtek - Fix ALC700 family no sound issue ALSA: hda: Fix too short HDMI/DP chmap reporting ALSA: timer: Remove kernel warning at compat ioctl error paths ALSA: usb-audio: Add sanity checks in v2 clock parsers ALSA: usb-audio: Fix potential out-of-bound access at parsing SU ALSA: usb-audio: Add sanity checks to FE parser ALSA: pcm: update tstamp only if audio_tstamp changed ext4: fix interaction between i_size, fallocate, and delalloc after a crash ata: fixes kernel crash while tracing ata_eh_link_autopsy event rtlwifi: fix uninitialized rtlhal->last_suspend_sec time rtlwifi: rtl8192ee: Fix memory leak when loading firmware nfsd: deal with revoked delegations appropriately NFS: Avoid RCU usage in tracepoints nfs: Fix ugly referral attributes NFS: Fix typo in nomigration mount option isofs: fix timestamps beyond 2027 bcache: check ca->alloc_thread initialized before wake up it libceph: don't WARN() if user tries to add invalid key eCryptfs: use after free in ecryptfs_release_messaging() nilfs2: fix race condition that causes file system corruption autofs: don't fail mount for transient error rt2x00usb: mark device removed when get ENOENT usb error MIPS: BCM47XX: Fix LED inversion for WRT54GSv1 MIPS: Fix an n32 core file generation regset support regression MIPS: dts: remove bogus bcm96358nb4ser.dtb from dtb-y entry MIPS: Fix odd fp register warnings with MIPS64r2 dm: fix race between dm_get_from_kobject() and __dm_destroy() MIPS: pci: Remove KERN_WARN instance inside the mt7620 driver dm: allocate struct mapped_device with kvzalloc dm bufio: fix integer overflow when limiting maximum cache size ALSA: hda: Add Raven PCI ID PCI: Set Cavium ACS capability quirk flags to assert RR/CR/SV/UF MIPS: ralink: Fix typo in mt7628 
pinmux function MIPS: ralink: Fix MT7628 pinmux ARM: 8721/1: mm: dump: check hardware RO bit for LPAE ARM: 8722/1: mm: make STRICT_KERNEL_RWX effective for LPAE arm64: Implement arch-specific pte_access_permitted() x86/entry/64: Add missing irqflags tracing to native_load_gs_index() x86/decoder: Add new TEST instruction pattern lib/mpi: call cond_resched() from mpi_powm() loop sched: Make resched_cpu() unconditional vsock: use new wait API for vsock_stream_sendmsg() ipv6: only call ip6_route_dev_notify() once for NETDEV_UNREGISTER x86/mm: fix use-after-free of vma during userfaultfd fault ACPI / EC: Fix regression related to triggering source of EC event handling s390/disassembler: increase show_code buffer size s390/disassembler: add missing end marker for e7 table s390/runtime instrumention: fix possible memory corruption s390: fix transactional execution control register handling Conflicts: drivers/android/binder_alloc.c drivers/android/binder_alloc.h drivers/android/binder_alloc_selftest.c drivers/mmc/core/bus.c drivers/mmc/host/sdhci-msm.c drivers/thermal/step_wise.c kernel/cpu.c mm/oom_kill.c sound/usb/mixer.c Change-Id: Id01eb66cafc5970b460321e44ec8ffcfa76971a6 Signed-off-by: Kyle Yan <kyan@codeaurora.org>
/*
 * Performance events:
 *
 *    Copyright (C) 2008-2009, Thomas Gleixner <tglx@linutronix.de>
 *    Copyright (C) 2008-2011, Red Hat, Inc., Ingo Molnar
 *    Copyright (C) 2008-2011, Red Hat, Inc., Peter Zijlstra
 *
 * Data type definitions, declarations, prototypes.
 *
 *    Started by: Thomas Gleixner and Ingo Molnar
 *
 * For licensing details see kernel-base/COPYING
 */
#ifndef _LINUX_PERF_EVENT_H
#define _LINUX_PERF_EVENT_H

#include <uapi/linux/perf_event.h>

/*
 * Kernel-internal data types and definitions:
 */

#ifdef CONFIG_PERF_EVENTS
# include <asm/perf_event.h>
# include <asm/local64.h>
#endif

struct perf_guest_info_callbacks {
	int				(*is_in_guest)(void);
	int				(*is_user_mode)(void);
	unsigned long			(*get_guest_ip)(void);
};

#ifdef CONFIG_HAVE_HW_BREAKPOINT
#include <asm/hw_breakpoint.h>
#endif

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/hrtimer.h>
#include <linux/fs.h>
#include <linux/pid_namespace.h>
#include <linux/workqueue.h>
#include <linux/ftrace.h>
#include <linux/cpu.h>
#include <linux/irq_work.h>
#include <linux/static_key.h>
#include <linux/jump_label_ratelimit.h>
#include <linux/atomic.h>
#include <linux/sysfs.h>
#include <linux/perf_regs.h>
#include <linux/workqueue.h>
#include <linux/cgroup.h>
#include <asm/local.h>

struct perf_callchain_entry {
	__u64				nr;
	__u64				ip[0]; /* /proc/sys/kernel/perf_event_max_stack */
};

struct perf_callchain_entry_ctx {
	struct perf_callchain_entry	*entry;
	u32				max_stack;
	u32				nr;
	short				contexts;
	bool				contexts_maxed;
};

typedef unsigned long (*perf_copy_f)(void *dst, const void *src,
				     unsigned long off, unsigned long len);

struct perf_raw_frag {
	union {
		struct perf_raw_frag	*next;
		unsigned long		pad;
	};
	perf_copy_f			copy;
	void				*data;
	u32				size;
} __packed;

struct perf_raw_record {
	struct perf_raw_frag		frag;
	u32				size;
};

/*
 * branch stack layout:
 *  nr: number of taken branches stored in entries[]
 *
 * Note that nr can vary from sample to sample
 * branches (to, from) are stored from most recent
 * to least recent, i.e., entries[0] contains the most
 * recent branch.
 */
struct perf_branch_stack {
	__u64				nr;
	struct perf_branch_entry	entries[0];
};

struct task_struct;

/*
 * extra PMU register associated with an event
 */
struct hw_perf_event_extra {
	u64		config;	/* register value */
	unsigned int	reg;	/* register address or index */
	int		alloc;	/* extra register already allocated */
	int		idx;	/* index in shared_regs->regs[] */
};

/**
 * struct hw_perf_event - performance event hardware details:
 */
struct hw_perf_event {
#ifdef CONFIG_PERF_EVENTS
	union {
		struct { /* hardware */
			u64		config;
			u64		last_tag;
			unsigned long	config_base;
			unsigned long	event_base;
			int		event_base_rdpmc;
			int		idx;
			int		last_cpu;
			int		flags;

			struct hw_perf_event_extra extra_reg;
			struct hw_perf_event_extra branch_reg;
		};
		struct { /* software */
			struct hrtimer	hrtimer;
		};
		struct { /* tracepoint */
			/* for tp_event->class */
			struct list_head	tp_list;
		};
		struct { /* intel_cqm */
			int			cqm_state;
			u32			cqm_rmid;
			int			is_group_event;
			struct list_head	cqm_events_entry;
			struct list_head	cqm_groups_entry;
			struct list_head	cqm_group_entry;
		};
		struct { /* itrace */
			int			itrace_started;
		};
		struct { /* amd_power */
			u64	pwr_acc;
			u64	ptsc;
		};
#ifdef CONFIG_HAVE_HW_BREAKPOINT
		struct { /* breakpoint */
			/*
			 * Crufty hack to avoid the chicken and egg
			 * problem hw_breakpoint has with context
			 * creation and event initialization.
			 */
			struct arch_hw_breakpoint	info;
			struct list_head		bp_list;
		};
#endif
	};
	/*
	 * If the event is a per task event, this will point to the task in
	 * question. See the comment in perf_event_alloc().
	 */
	struct task_struct		*target;

	/*
	 * PMU would store hardware filter configuration
	 * here.
	 */
	void				*addr_filters;

	/* Last sync'ed generation of filters */
	unsigned long			addr_filters_gen;

/*
 * hw_perf_event::state flags; used to track the PERF_EF_* state.
 */
#define PERF_HES_STOPPED	0x01 /* the counter is stopped */
#define PERF_HES_UPTODATE	0x02 /* event->count up-to-date */
#define PERF_HES_ARCH		0x04

	int				state;

	/*
	 * The last observed hardware counter value, updated with a
	 * local64_cmpxchg() such that pmu::read() can be called nested.
	 */
	local64_t			prev_count;

	/*
	 * The period to start the next sample with.
	 */
	u64				sample_period;

	/*
	 * The period we started this sample with.
	 */
	u64				last_period;

	/*
	 * However much is left of the current period; note that this is
	 * a full 64bit value and allows for generation of periods longer
	 * than hardware might allow.
	 */
	local64_t			period_left;

	/*
	 * State for throttling the event, see __perf_event_overflow() and
	 * perf_adjust_freq_unthr_context().
	 */
	u64				interrupts_seq;
	u64				interrupts;

	/*
	 * State for freq target events, see __perf_event_overflow() and
	 * perf_adjust_freq_unthr_context().
	 */
	u64				freq_time_stamp;
	u64				freq_count_stamp;
#endif
};

struct perf_event;

/*
 * Common implementation detail of pmu::{start,commit,cancel}_txn
 */
#define PERF_PMU_TXN_ADD  0x1		/* txn to add/schedule event on PMU */
#define PERF_PMU_TXN_READ 0x2		/* txn to read event group from PMU */

/**
 * pmu::capabilities flags
 */
#define PERF_PMU_CAP_NO_INTERRUPT		0x01
#define PERF_PMU_CAP_NO_NMI			0x02
#define PERF_PMU_CAP_AUX_NO_SG			0x04
#define PERF_PMU_CAP_AUX_SW_DOUBLEBUF		0x08
#define PERF_PMU_CAP_EXCLUSIVE			0x10
#define PERF_PMU_CAP_ITRACE			0x20
#define PERF_PMU_CAP_HETEROGENEOUS_CPUS		0x40

/**
 * struct pmu - generic performance monitoring unit
 */
struct pmu {
	struct list_head		entry;

	struct module			*module;
	struct device			*dev;
	const struct attribute_group	**attr_groups;
	const char			*name;
	int				type;

	/*
	 * various common per-pmu feature flags
	 */
	int				capabilities;

	int * __percpu			pmu_disable_count;
	struct perf_cpu_context * __percpu pmu_cpu_context;
	atomic_t			exclusive_cnt; /* < 0: cpu; > 0: tsk */
	int				task_ctx_nr;
	int				hrtimer_interval_ms;
	u32				events_across_hotplug:1,
					reserved:31;

	/* number of address filters this PMU can do */
	unsigned int			nr_addr_filters;

	/*
	 * Fully disable/enable this PMU, can be used to protect from the PMI
	 * as well as for lazy/batch writing of the MSRs.
	 */
	void (*pmu_enable)		(struct pmu *pmu); /* optional */
	void (*pmu_disable)		(struct pmu *pmu); /* optional */

	/*
	 * Try and initialize the event for this PMU.
	 *
	 * Returns:
	 *  -ENOENT	-- @event is not for this PMU
	 *
	 *  -ENODEV	-- @event is for this PMU but PMU not present
	 *  -EBUSY	-- @event is for this PMU but PMU temporarily unavailable
	 *  -EINVAL	-- @event is for this PMU but @event is not valid
	 *  -EOPNOTSUPP -- @event is for this PMU, @event is valid, but not supported
	 *  -EACCES	-- @event is for this PMU, @event is valid, but no privileges
	 *
	 *  0		-- @event is for this PMU and valid
	 *
	 * Other error return values are allowed.
	 */
	int (*event_init)		(struct perf_event *event);

	/*
	 * Notification that the event was mapped or unmapped.  Called
	 * in the context of the mapping task.
	 */
	void (*event_mapped)		(struct perf_event *event); /*optional*/
	void (*event_unmapped)		(struct perf_event *event); /*optional*/

	/*
	 * Flags for ->add()/->del()/ ->start()/->stop(). There are
	 * matching hw_perf_event::state flags.
	 */
#define PERF_EF_START	0x01		/* start the counter when adding    */
#define PERF_EF_RELOAD	0x02		/* reload the counter when starting */
#define PERF_EF_UPDATE	0x04		/* update the counter when stopping */

	/*
	 * Adds/Removes a counter to/from the PMU, can be done inside a
	 * transaction, see the ->*_txn() methods.
	 *
	 * The add/del callbacks will reserve all hardware resources required
	 * to service the event, this includes any counter constraint
	 * scheduling etc.
	 *
	 * Called with IRQs disabled and the PMU disabled on the CPU the event
	 * is on.
	 *
	 * ->add() called without PERF_EF_START should result in the same state
	 *  as ->add() followed by ->stop().
	 *
	 * ->del() must always PERF_EF_UPDATE stop an event. If it calls
	 *  ->stop() that must deal with already being stopped without
	 *  PERF_EF_UPDATE.
	 */
	int  (*add)			(struct perf_event *event, int flags);
	void (*del)			(struct perf_event *event, int flags);

	/*
	 * Starts/Stops a counter present on the PMU.
	 *
	 * The PMI handler should stop the counter when perf_event_overflow()
	 * returns !0. ->start() will be used to continue.
	 *
	 * Also used to change the sample period.
	 *
	 * Called with IRQs disabled and the PMU disabled on the CPU the event
	 * is on -- will be called from NMI context when the PMU generates
	 * NMIs.
	 *
	 * ->stop() with PERF_EF_UPDATE will read the counter and update
	 *  period/count values like ->read() would.
	 *
	 * ->start() with PERF_EF_RELOAD will reprogram the counter
	 *  value, must be preceded by a ->stop() with PERF_EF_UPDATE.
	 */
	void (*start)			(struct perf_event *event, int flags);
	void (*stop)			(struct perf_event *event, int flags);

	/*
	 * Updates the counter value of the event.
	 *
	 * For sampling capable PMUs this will also update the software period
	 * hw_perf_event::period_left field.
	 */
	void (*read)			(struct perf_event *event);

	/*
	 * Group events scheduling is treated as a transaction, add
	 * group events as a whole and perform one schedulability test.
	 * If the test fails, roll back the whole group
	 *
	 * Start the transaction, after this ->add() doesn't need to
	 * do schedulability tests.
	 *
	 * Optional.
	 */
	void (*start_txn)		(struct pmu *pmu, unsigned int txn_flags);
	/*
	 * If ->start_txn() disabled the ->add() schedulability test
	 * then ->commit_txn() is required to perform one. On success
	 * the transaction is closed. On error the transaction is kept
	 * open until ->cancel_txn() is called.
	 *
	 * Optional.
	 */
	int  (*commit_txn)		(struct pmu *pmu);
	/*
	 * Will cancel the transaction, assumes ->del() is called
	 * for each successful ->add() during the transaction.
	 *
	 * Optional.
	 */
	void (*cancel_txn)		(struct pmu *pmu);

	/*
	 * Will return the value for perf_event_mmap_page::index for this event,
	 * if no implementation is provided it will default to: event->hw.idx + 1.
	 */
	int (*event_idx)		(struct perf_event *event); /*optional */

	/*
	 * context-switches callback
	 */
	void (*sched_task)		(struct perf_event_context *ctx,
					 bool sched_in);
	/*
	 * PMU specific data size
	 */
	size_t				task_ctx_size;


	/*
	 * Return the count value for a counter.
	 */
	u64 (*count)			(struct perf_event *event); /*optional*/

	/*
	 * Set up pmu-private data structures for an AUX area
	 */
	void *(*setup_aux)		(int cpu, void **pages,
					 int nr_pages, bool overwrite);
					/* optional */

	/*
	 * Free pmu-private AUX data structures
	 */
	void (*free_aux)		(void *aux); /* optional */

	/*
	 * Validate address range filters: make sure the HW supports the
	 * requested configuration and number of filters; return 0 if the
	 * supplied filters are valid, -errno otherwise.
	 *
	 * Runs in the context of the ioctl()ing process and is not serialized
	 * with the rest of the PMU callbacks.
	 */
	int (*addr_filters_validate)	(struct list_head *filters);
					/* optional */

	/*
	 * Synchronize address range filter configuration:
	 * translate hw-agnostic filters into hardware configuration in
	 * event::hw::addr_filters.
	 *
	 * Runs as a part of filter sync sequence that is done in ->start()
	 * callback by calling perf_event_addr_filters_sync().
	 *
	 * May (and should) traverse event::addr_filters::list, for which its
	 * caller provides necessary serialization.
	 */
	void (*addr_filters_sync)	(struct perf_event *event);
					/* optional */

	/*
	 * Filter events for PMU-specific reasons.
	 */
	int (*filter_match)		(struct perf_event *event); /* optional */
};

/**
 * struct perf_addr_filter - address range filter definition
 * @entry:	event's filter list linkage
 * @inode:	object file's inode for file-based filters
 * @offset:	filter range offset
 * @size:	filter range size
 * @range:	1: range, 0: address
 * @filter:	1: filter/start, 0: stop
 *
 * This is a hardware-agnostic filter configuration as specified by the user.
 */
struct perf_addr_filter {
	struct list_head	entry;
	struct inode		*inode;
	unsigned long		offset;
	unsigned long		size;
	unsigned int		range	: 1,
				filter	: 1;
};

/**
 * struct perf_addr_filters_head - container for address range filters
 * @list:	list of filters for this event
 * @lock:	spinlock that serializes accesses to the @list and event's
 *		(and its children's) filter generations.
 *
 * A child event will use parent's @list (and therefore @lock), so they are
 * bundled together; see perf_event_addr_filters().
 */
struct perf_addr_filters_head {
	struct list_head	list;
	raw_spinlock_t		lock;
};

/**
 * enum perf_event_active_state - the states of an event
 */
enum perf_event_active_state {
	PERF_EVENT_STATE_DEAD		= -5,
	PERF_EVENT_STATE_ZOMBIE		= -4,
	PERF_EVENT_STATE_EXIT		= -3,
	PERF_EVENT_STATE_ERROR		= -2,
	PERF_EVENT_STATE_OFF		= -1,
	PERF_EVENT_STATE_INACTIVE	=  0,
	PERF_EVENT_STATE_ACTIVE		=  1,
};

struct file;
struct perf_sample_data;

typedef void (*perf_overflow_handler_t)(struct perf_event *,
					struct perf_sample_data *,
					struct pt_regs *regs);

/*
 * Event capabilities. For event_caps and groups caps.
 *
 * PERF_EV_CAP_SOFTWARE: Is a software event.
 * PERF_EV_CAP_READ_ACTIVE_PKG: A CPU event (or cgroup event) that can be read
 * from any CPU in the package where it is active.
 */
#define PERF_EV_CAP_SOFTWARE		BIT(0)
#define PERF_EV_CAP_READ_ACTIVE_PKG	BIT(1)

#define SWEVENT_HLIST_BITS		8
#define SWEVENT_HLIST_SIZE		(1 << SWEVENT_HLIST_BITS)

struct swevent_hlist {
	struct hlist_head		heads[SWEVENT_HLIST_SIZE];
	struct rcu_head			rcu_head;
};

#define PERF_ATTACH_CONTEXT	0x01
#define PERF_ATTACH_GROUP	0x02
#define PERF_ATTACH_TASK	0x04
#define PERF_ATTACH_TASK_DATA	0x08

struct perf_cgroup;
struct ring_buffer;

struct pmu_event_list {
	raw_spinlock_t		lock;
	struct list_head	list;
};

/**
 * struct perf_event - performance event kernel representation:
 */
struct perf_event {
#ifdef CONFIG_PERF_EVENTS
	/*
	 * entry onto perf_event_context::event_list;
	 *   modifications require ctx->lock
	 *   RCU safe iterations.
	 */
	struct list_head		event_entry;

	/*
	 * XXX: group_entry and sibling_list should be mutually exclusive;
	 * either you're a sibling on a group, or you're the group leader.
	 * Rework the code to always use the same list element.
	 *
	 * Locked for modification by both ctx->mutex and ctx->lock; holding
	 * either suffices for read.
	 */
	struct list_head		group_entry;
	struct list_head		sibling_list;

	/*
	 * We need storage to track the entries in perf_pmu_migrate_context; we
	 * cannot use the event_entry because of RCU and we want to keep the
	 * group intact which avoids us using the other two entries.
	 */
	struct list_head		migrate_entry;

	struct hlist_node		hlist_entry;
	struct list_head		active_entry;
	int				nr_siblings;

	/* Not serialized. Only written during event initialization. */
	int				event_caps;
	/* The cumulative AND of all event_caps for events in this group. */
	int				group_caps;

	struct perf_event		*group_leader;
	/*
	 * Protect the pmu, attributes and context of a group leader.
	 * Note: does not protect the pointer to the group_leader.
	 */
	struct mutex			group_leader_mutex;
	struct pmu			*pmu;
	void				*pmu_private;

	enum perf_event_active_state	state;
	unsigned int			attach_state;
	local64_t			count;
	atomic64_t			child_count;

	/*
	 * These are the total time in nanoseconds that the event
	 * has been enabled (i.e. eligible to run, and the task has
	 * been scheduled in, if this is a per-task event)
	 * and running (scheduled onto the CPU), respectively.
	 *
	 * They are computed from tstamp_enabled, tstamp_running and
	 * tstamp_stopped when the event is in INACTIVE or ACTIVE state.
	 */
	u64				total_time_enabled;
	u64				total_time_running;

	/*
	 * These are timestamps used for computing total_time_enabled
	 * and total_time_running when the event is in INACTIVE or
	 * ACTIVE state, measured in nanoseconds from an arbitrary point
	 * in time.
	 * tstamp_enabled: the notional time when the event was enabled
	 * tstamp_running: the notional time when the event was scheduled on
	 * tstamp_stopped: in INACTIVE state, the notional time when the
	 *	event was scheduled off.
	 */
	u64				tstamp_enabled;
	u64				tstamp_running;
	u64				tstamp_stopped;

	/*
	 * timestamp shadows the actual context timing but it can
	 * be safely used in NMI interrupt context. It reflects the
	 * context time as it was when the event was last scheduled in.
	 *
	 * ctx_time already accounts for ctx->timestamp. Therefore to
	 * compute ctx_time for a sample, simply add perf_clock().
	 */
	u64				shadow_ctx_time;

	struct perf_event_attr		attr;
	u16				header_size;
	u16				id_header_size;
	u16				read_size;
	struct hw_perf_event		hw;

	struct perf_event_context	*ctx;
	atomic_long_t			refcount;

	/*
	 * These accumulate total time (in nanoseconds) that children
	 * events have been enabled and running, respectively.
	 */
	atomic64_t			child_total_time_enabled;
	atomic64_t			child_total_time_running;

	/*
	 * Protect attach/detach and child_list:
	 */
	struct mutex			child_mutex;
	struct list_head		child_list;
	struct perf_event		*parent;

	int				oncpu;
	int				cpu;

	struct list_head		owner_entry;
	struct task_struct		*owner;

	/* mmap bits */
	struct mutex			mmap_mutex;
	atomic_t			mmap_count;

	struct ring_buffer		*rb;
	struct list_head		rb_entry;
	unsigned long			rcu_batches;
	int				rcu_pending;

	/* poll related */
	wait_queue_head_t		waitq;
	struct fasync_struct		*fasync;

	/* delayed work for NMIs and such */
	int				pending_wakeup;
	int				pending_kill;
	int				pending_disable;
	struct irq_work			pending;

	atomic_t			event_limit;

	/* address range filters */
	struct perf_addr_filters_head	addr_filters;
	/* vma address array for file-based filters */
	unsigned long			*addr_filters_offs;
	unsigned long			addr_filters_gen;

	void (*destroy)(struct perf_event *);
	struct rcu_head			rcu_head;

	struct pid_namespace		*ns;
	u64				id;

	u64				(*clock)(void);
	perf_overflow_handler_t		overflow_handler;
	void				*overflow_handler_context;
#ifdef CONFIG_BPF_SYSCALL
	perf_overflow_handler_t		orig_overflow_handler;
	struct bpf_prog			*prog;
#endif

#ifdef CONFIG_EVENT_TRACING
	struct trace_event_call		*tp_event;
	struct event_filter		*filter;
#ifdef CONFIG_FUNCTION_TRACER
	struct ftrace_ops		ftrace_ops;
#endif
#endif

#ifdef CONFIG_CGROUP_PERF
	struct perf_cgroup		*cgrp; /* cgroup the event is attached to */
	int				cgrp_defer_enabled;
#endif

	struct list_head		sb_list;

	/* Is this event shared with other events */
	bool				shared;
	struct list_head		zombie_entry;
#endif /* CONFIG_PERF_EVENTS */
};

/**
 * struct perf_event_context - event context structure
 *
 * Used as a container for task events and CPU events as well:
 */
struct perf_event_context {
	struct pmu			*pmu;
	/*
	 * Protect the states of the events in the list,
	 * nr_active, and the list:
	 */
	raw_spinlock_t			lock;
	/*
	 * Protect the list of events.  Locking either mutex or lock
	 * is sufficient to ensure the list doesn't change; to change
	 * the list you need to lock both the mutex and the spinlock.
	 */
	struct mutex			mutex;

	struct list_head		active_ctx_list;
	struct list_head		pinned_groups;
	struct list_head		flexible_groups;
	struct list_head		event_list;
	int				nr_events;
	int				nr_active;
	int				is_active;
	int				nr_stat;
	int				nr_freq;
	int				rotate_disable;
	atomic_t			refcount;
	struct task_struct		*task;

	/*
	 * Context clock, runs when context enabled.
	 */
	u64				time;
	u64				timestamp;

	/*
	 * These fields let us detect when two contexts have both
	 * been cloned (inherited) from a common ancestor.
	 */
	struct perf_event_context	*parent_ctx;
	u64				parent_gen;
	u64				generation;
	int				pin_count;
#ifdef CONFIG_CGROUP_PERF
	int				nr_cgroups;	 /* cgroup evts */
#endif
	void				*task_ctx_data; /* pmu specific data */
	struct rcu_head			rcu_head;
};

/*
 * Number of contexts where an event can trigger:
 *	task, softirq, hardirq, nmi.
 */
#define PERF_NR_CONTEXTS	4

/**
 * struct perf_cpu_context - per cpu event context structure
 */
struct perf_cpu_context {
	struct perf_event_context	ctx;
	struct perf_event_context	*task_ctx;
	int				active_oncpu;
	int				exclusive;

	raw_spinlock_t			hrtimer_lock;
	struct hrtimer			hrtimer;
	ktime_t				hrtimer_interval;
	unsigned int			hrtimer_active;

	struct pmu			*unique_pmu;
#ifdef CONFIG_CGROUP_PERF
	struct perf_cgroup		*cgrp;
#endif

	struct list_head		sched_cb_entry;
	int				sched_cb_usage;
};

struct perf_output_handle {
	struct perf_event		*event;
	struct ring_buffer		*rb;
	unsigned long			wakeup;
	unsigned long			size;
	union {
		void			*addr;
		unsigned long		head;
	};
	int				page;
};

struct bpf_perf_event_data_kern {
	struct pt_regs *regs;
	struct perf_sample_data *data;
};

#ifdef CONFIG_CGROUP_PERF

/*
 * perf_cgroup_info keeps track of time_enabled for a cgroup.
 * This is a per-cpu dynamically allocated data structure.
 */
struct perf_cgroup_info {
	u64				time;
	u64				timestamp;
};

struct perf_cgroup {
	struct cgroup_subsys_state	css;
	struct perf_cgroup_info	__percpu *info;
};

/*
 * Must ensure cgroup is pinned (css_get) before calling
 * this function. In other words, we cannot call this function
 * if there is no cgroup event for the current CPU context.
 */
static inline struct perf_cgroup *
perf_cgroup_from_task(struct task_struct *task, struct perf_event_context *ctx)
{
	return container_of(task_css_check(task, perf_event_cgrp_id,
					   ctx ? lockdep_is_held(&ctx->lock)
					       : true),
			    struct perf_cgroup, css);
}
#endif /* CONFIG_CGROUP_PERF */

#ifdef CONFIG_PERF_EVENTS

extern void *perf_aux_output_begin(struct perf_output_handle *handle,
				   struct perf_event *event);
extern void perf_aux_output_end(struct perf_output_handle *handle,
				unsigned long size, bool truncated);
extern int perf_aux_output_skip(struct perf_output_handle *handle,
				unsigned long size);
extern void *perf_get_aux(struct perf_output_handle *handle);

extern int perf_pmu_register(struct pmu *pmu, const char *name, int type);
extern void perf_pmu_unregister(struct pmu *pmu);

extern int perf_num_counters(void);
extern const char *perf_pmu_name(void);
extern void __perf_event_task_sched_in(struct task_struct *prev,
				       struct task_struct *task);
extern void __perf_event_task_sched_out(struct task_struct *prev,
					struct task_struct *next);
extern int perf_event_init_task(struct task_struct *child);
extern void perf_event_exit_task(struct task_struct *child);
extern void perf_event_free_task(struct task_struct *task);
extern void perf_event_delayed_put(struct task_struct *task);
extern struct file *perf_event_get(unsigned int fd);
extern const struct perf_event_attr *perf_event_attrs(struct perf_event *event);
extern void perf_event_print_debug(void);
extern void perf_pmu_disable(struct pmu *pmu);
extern void perf_pmu_enable(struct pmu *pmu);
extern void perf_sched_cb_dec(struct pmu *pmu);
extern void perf_sched_cb_inc(struct pmu *pmu);
extern int perf_event_task_disable(void);
extern int perf_event_task_enable(void);
extern int perf_event_refresh(struct perf_event *event, int refresh);
extern void perf_event_update_userpage(struct perf_event *event);
extern int perf_event_release_kernel(struct perf_event *event);
extern struct perf_event *
perf_event_create_kernel_counter(struct perf_event_attr *attr,
				int cpu,
				struct task_struct *task,
				perf_overflow_handler_t callback,
				void *context);
extern void perf_pmu_migrate_context(struct pmu *pmu,
				int src_cpu, int dst_cpu);
extern u64 perf_event_read_local(struct perf_event *event);
extern u64 perf_event_read_value(struct perf_event *event,
				 u64 *enabled, u64 *running);

struct perf_sample_data {
	/*
	 * Fields set by perf_sample_data_init(), group so as to
	 * minimize the cachelines touched.
	 */
	u64				addr;
	struct perf_raw_record		*raw;
	struct perf_branch_stack	*br_stack;
	u64				period;
	u64				weight;
	u64				txn;
	union perf_mem_data_src		data_src;

	/*
	 * The other fields, optionally {set,used} by
	 * perf_{prepare,output}_sample().
	 */
	u64				type;
	u64				ip;
	struct {
		u32	pid;
		u32	tid;
	}				tid_entry;
	u64				time;
	u64				id;
	u64				stream_id;
	struct {
		u32	cpu;
		u32	reserved;
	}				cpu_entry;
	struct perf_callchain_entry	*callchain;

	/*
	 * regs_user may point to task_pt_regs or to regs_user_copy, depending
	 * on arch details.
	 */
	struct perf_regs		regs_user;
	struct pt_regs			regs_user_copy;

	struct perf_regs		regs_intr;
	u64				stack_user_size;
} ____cacheline_aligned;

/* default value for data source */
#define PERF_MEM_NA (PERF_MEM_S(OP, NA)   |\
		     PERF_MEM_S(LVL, NA)   |\
		     PERF_MEM_S(SNOOP, NA) |\
		     PERF_MEM_S(LOCK, NA)  |\
		     PERF_MEM_S(TLB, NA))

static inline void perf_sample_data_init(struct perf_sample_data *data,
					 u64 addr, u64 period)
{
	/* remaining struct members initialized in perf_prepare_sample() */
	data->addr = addr;
	data->raw  = NULL;
	data->br_stack = NULL;
	data->period = period;
	data->weight = 0;
	data->data_src.val = PERF_MEM_NA;
	data->txn = 0;
}

extern void perf_output_sample(struct perf_output_handle *handle,
			       struct perf_event_header *header,
			       struct perf_sample_data *data,
			       struct perf_event *event);
extern void perf_prepare_sample(struct perf_event_header *header,
				struct perf_sample_data *data,
				struct perf_event *event,
				struct pt_regs *regs);

extern int perf_event_overflow(struct perf_event *event,
			       struct perf_sample_data *data,
			       struct pt_regs *regs);

extern void perf_event_output_forward(struct perf_event *event,
				      struct perf_sample_data *data,
				      struct pt_regs *regs);
extern void perf_event_output_backward(struct perf_event *event,
				       struct perf_sample_data *data,
				       struct pt_regs *regs);
extern void perf_event_output(struct perf_event *event,
			      struct perf_sample_data *data,
			      struct pt_regs *regs);

static inline bool
is_default_overflow_handler(struct perf_event *event)
{
	if (likely(event->overflow_handler == perf_event_output_forward))
		return true;
	if (unlikely(event->overflow_handler == perf_event_output_backward))
		return true;
	return false;
}

extern void
perf_event_header__init_id(struct perf_event_header *header,
			   struct perf_sample_data *data,
			   struct perf_event *event);
extern void
perf_event__output_id_sample(struct perf_event *event,
			     struct perf_output_handle *handle,
			     struct perf_sample_data *sample);

extern void
perf_log_lost_samples(struct perf_event *event, u64 lost);

static inline bool is_sampling_event(struct perf_event *event)
{
	return event->attr.sample_period != 0;
}

/*
 * Return 1 for a software event, 0 for a hardware event
 */
static inline int is_software_event(struct perf_event *event)
{
	return event->event_caps & PERF_EV_CAP_SOFTWARE;
}

extern struct static_key perf_swevent_enabled[PERF_COUNT_SW_MAX];

extern void ___perf_sw_event(u32, u64, struct pt_regs *, u64);
extern void __perf_sw_event(u32, u64, struct pt_regs *, u64);

#ifndef perf_arch_fetch_caller_regs
static inline void perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned long ip) { }
#endif

/*
 * Take a snapshot of the regs. Skip ip and frame pointer to
 * the nth caller. We only need a few of the regs:
 * - ip for PERF_SAMPLE_IP
 * - cs for user_mode() tests
 * - bp for callchains
 * - eflags, for future purposes, just in case
 */
static inline void perf_fetch_caller_regs(struct pt_regs *regs)
{
	perf_arch_fetch_caller_regs(regs, CALLER_ADDR0);
}

static __always_inline void
perf_sw_event(u32 event_id, u64 nr, struct pt_regs *regs, u64 addr)
{
	if (static_key_false(&perf_swevent_enabled[event_id]))
		__perf_sw_event(event_id, nr, regs, addr);
}

DECLARE_PER_CPU(struct pt_regs, __perf_regs[4]);

/*
 * 'Special' version for the scheduler, it hard assumes no recursion,
 * which is guaranteed by us not actually scheduling inside other swevents
 * because those disable preemption.
 */
static __always_inline void
perf_sw_event_sched(u32 event_id, u64 nr, u64 addr)
{
	if (static_key_false(&perf_swevent_enabled[event_id])) {
		struct pt_regs *regs = this_cpu_ptr(&__perf_regs[0]);

		perf_fetch_caller_regs(regs);
		___perf_sw_event(event_id, nr, regs, addr);
	}
}

extern struct static_key_false perf_sched_events;

static __always_inline bool
perf_sw_migrate_enabled(void)
{
	if (static_key_false(&perf_swevent_enabled[PERF_COUNT_SW_CPU_MIGRATIONS]))
		return true;
	return false;
}

static inline void perf_event_task_migrate(struct task_struct *task)
{
	if (perf_sw_migrate_enabled())
		task->sched_migrated = 1;
}

static inline void perf_event_task_sched_in(struct task_struct *prev,
					    struct task_struct *task)
{
	if (static_branch_unlikely(&perf_sched_events))
		__perf_event_task_sched_in(prev, task);

	if (perf_sw_migrate_enabled() && task->sched_migrated) {
		struct pt_regs *regs = this_cpu_ptr(&__perf_regs[0]);

		perf_fetch_caller_regs(regs);
		___perf_sw_event(PERF_COUNT_SW_CPU_MIGRATIONS, 1, regs, 0);
		task->sched_migrated = 0;
	}
}

static inline void perf_event_task_sched_out(struct task_struct *prev,
					     struct task_struct *next)
{
	perf_sw_event_sched(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 0);

	if (static_branch_unlikely(&perf_sched_events))
		__perf_event_task_sched_out(prev, next);
}

static inline u64 __perf_event_count(struct perf_event *event)
{
	return local64_read(&event->count) + atomic64_read(&event->child_count);
}

extern void perf_event_mmap(struct vm_area_struct *vma);
extern struct perf_guest_info_callbacks *perf_guest_cbs;
extern int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);
extern int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *callbacks);

extern void perf_event_exec(void);
extern void perf_event_comm(struct task_struct *tsk, bool exec);
extern void perf_event_fork(struct task_struct *tsk);

/* Callchains */
DECLARE_PER_CPU(struct perf_callchain_entry, perf_callchain_entry);

extern void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
extern void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
extern struct perf_callchain_entry *
get_perf_callchain(struct pt_regs *regs, u32 init_nr, bool kernel, bool user,
		   u32 max_stack, bool crosstask, bool add_mark);
extern int get_callchain_buffers(int max_stack);
extern void put_callchain_buffers(void);

extern int sysctl_perf_event_max_stack;
extern int sysctl_perf_event_max_contexts_per_stack;

static inline int perf_callchain_store_context(struct perf_callchain_entry_ctx *ctx, u64 ip)
{
	if (ctx->contexts < sysctl_perf_event_max_contexts_per_stack) {
		struct perf_callchain_entry *entry = ctx->entry;

		entry->ip[entry->nr++] = ip;
		++ctx->contexts;
		return 0;
	} else {
		ctx->contexts_maxed = true;
		return -1; /* no more room, stop walking the stack */
	}
}

static inline int perf_callchain_store(struct perf_callchain_entry_ctx *ctx, u64 ip)
{
	if (ctx->nr < ctx->max_stack && !ctx->contexts_maxed) {
		struct perf_callchain_entry *entry = ctx->entry;

		entry->ip[entry->nr++] = ip;
		++ctx->nr;
		return 0;
	} else {
		return -1; /* no more room, stop walking the stack */
	}
}

extern int sysctl_perf_event_paranoid;
extern int sysctl_perf_event_mlock;
extern int sysctl_perf_event_sample_rate;
extern int sysctl_perf_cpu_time_max_percent;

extern void perf_sample_event_took(u64 sample_len_ns);

extern int perf_proc_update_handler(struct ctl_table *table, int write,
				    void __user *buffer, size_t *lenp,
				    loff_t *ppos);
extern int perf_cpu_time_max_percent_handler(struct ctl_table *table, int write,
					     void __user *buffer, size_t *lenp,
					     loff_t *ppos);

int perf_event_max_stack_handler(struct ctl_table *table, int write,
				 void __user *buffer, size_t *lenp, loff_t *ppos);

static inline bool perf_paranoid_any(void)
{
	return sysctl_perf_event_paranoid > 2;
}

static inline bool perf_paranoid_tracepoint_raw(void)
{
	return sysctl_perf_event_paranoid > -1;
}

static inline bool perf_paranoid_cpu(void)
{
	return sysctl_perf_event_paranoid > 0;
}

static inline bool perf_paranoid_kernel(void)
{
	return sysctl_perf_event_paranoid > 1;
}

extern void perf_event_init(void);
extern void perf_tp_event(u16 event_type, u64 count, void *record,
			  int entry_size, struct pt_regs *regs,
			  struct hlist_head *head, int rctx,
			  struct task_struct *task);
extern void perf_bp_event(struct perf_event *event, void *data);

#ifndef perf_misc_flags
# define perf_misc_flags(regs) \
		(user_mode(regs) ? PERF_RECORD_MISC_USER : PERF_RECORD_MISC_KERNEL)
# define perf_instruction_pointer(regs)	instruction_pointer(regs)
#endif

static inline bool has_branch_stack(struct perf_event *event)
{
	return event->attr.sample_type & PERF_SAMPLE_BRANCH_STACK;
}

static inline bool needs_branch_stack(struct perf_event *event)
{
	return event->attr.branch_sample_type != 0;
}

static inline bool has_aux(struct perf_event *event)
{
	return event->pmu->setup_aux;
}

static inline bool is_write_backward(struct perf_event *event)
{
	return !!event->attr.write_backward;
}

static inline bool has_addr_filter(struct perf_event *event)
{
	return event->pmu->nr_addr_filters;
}

/*
 * An inherited event uses parent's filters
 */
static inline struct perf_addr_filters_head *
perf_event_addr_filters(struct perf_event *event)
{
	struct perf_addr_filters_head *ifh = &event->addr_filters;

	if (event->parent)
		ifh = &event->parent->addr_filters;

	return ifh;
}

extern void perf_event_addr_filters_sync(struct perf_event *event);

extern int perf_output_begin(struct perf_output_handle *handle,
			     struct perf_event *event, unsigned int size);
extern int perf_output_begin_forward(struct perf_output_handle *handle,
				     struct perf_event *event,
				     unsigned int size);
extern int perf_output_begin_backward(struct perf_output_handle *handle,
				      struct perf_event *event,
				      unsigned int size);

extern void perf_output_end(struct perf_output_handle *handle);
extern unsigned int perf_output_copy(struct perf_output_handle *handle,
				     const void *buf, unsigned int len);
extern unsigned int perf_output_skip(struct perf_output_handle *handle,
				     unsigned int len);
extern int perf_swevent_get_recursion_context(void);
extern void perf_swevent_put_recursion_context(int rctx);
extern u64 perf_swevent_set_period(struct perf_event *event);
extern void perf_event_enable(struct perf_event *event);
extern void perf_event_disable(struct perf_event *event);
extern void perf_event_disable_local(struct perf_event *event);
extern void perf_event_disable_inatomic(struct perf_event *event);
extern void perf_event_task_tick(void);
extern int perf_event_account_interrupt(struct perf_event *event);
#else /* !CONFIG_PERF_EVENTS: */
static inline void *
perf_aux_output_begin(struct perf_output_handle *handle,
		      struct perf_event *event)				{ return NULL; }
static inline void
perf_aux_output_end(struct perf_output_handle *handle, unsigned long size,
		    bool truncated)					{ }
static inline int
perf_aux_output_skip(struct perf_output_handle *handle,
		     unsigned long size)				{ return -EINVAL; }
static inline void *
perf_get_aux(struct perf_output_handle *handle)				{ return NULL; }
static inline void
perf_event_task_migrate(struct task_struct *task)			{ }
static inline void
perf_event_task_sched_in(struct task_struct *prev,
			 struct task_struct *task)			{ }
static inline void
perf_event_task_sched_out(struct task_struct *prev,
			  struct task_struct *next)			{ }
static inline int perf_event_init_task(struct task_struct *child)	{ return 0; }
static inline void perf_event_exit_task(struct task_struct *child)	{ }
static inline void perf_event_free_task(struct task_struct *task)	{ }
static inline void perf_event_delayed_put(struct task_struct *task)	{ }
static inline struct file *perf_event_get(unsigned int fd)	{ return ERR_PTR(-EINVAL); }
static inline const struct perf_event_attr *perf_event_attrs(struct perf_event *event)
{
	return ERR_PTR(-EINVAL);
}
static inline u64 perf_event_read_local(struct perf_event *event)	{ return -EINVAL; }
static inline void perf_event_print_debug(void)				{ }
static inline int perf_event_task_disable(void)				{ return -EINVAL; }
static inline int perf_event_task_enable(void)				{ return -EINVAL; }
static inline int perf_event_refresh(struct perf_event *event, int refresh)
{
	return -EINVAL;
}

static inline void
perf_sw_event(u32 event_id, u64 nr, struct pt_regs *regs, u64 addr)	{ }
static inline void
perf_sw_event_sched(u32 event_id, u64 nr, u64 addr)			{ }
static inline void
perf_bp_event(struct perf_event *event, void *data)			{ }

static inline int perf_register_guest_info_callbacks
(struct perf_guest_info_callbacks *callbacks)				{ return 0; }
static inline int perf_unregister_guest_info_callbacks
(struct perf_guest_info_callbacks *callbacks)				{ return 0; }

static inline void perf_event_mmap(struct vm_area_struct *vma)		{ }
static inline void perf_event_exec(void)				{ }
static inline void perf_event_comm(struct task_struct *tsk, bool exec)	{ }
static inline void perf_event_fork(struct task_struct *tsk)		{ }
static inline void perf_event_init(void)				{ }
static inline int perf_swevent_get_recursion_context(void)		{ return -1; }
static inline void perf_swevent_put_recursion_context(int rctx)		{ }
static inline u64 perf_swevent_set_period(struct perf_event *event)	{ return 0; }
static inline void perf_event_enable(struct perf_event *event)		{ }
static inline void perf_event_disable(struct perf_event *event)		{ }
static inline int __perf_event_disable(void *info)			{ return -1; }
static inline void perf_event_task_tick(void)				{ }
static inline int perf_event_release_kernel(struct perf_event *event)	{ return 0; }
#endif

#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
extern void perf_restore_debug_store(void);
#else
static inline void perf_restore_debug_store(void)			{ }
#endif

static __always_inline bool perf_raw_frag_last(const struct perf_raw_frag *frag)
{
	return frag->pad < sizeof(u64);
}

#define perf_output_put(handle, x) perf_output_copy((handle), &(x), sizeof(x))

struct perf_pmu_events_attr {
	struct device_attribute attr;
	u64 id;
	const char *event_str;
};

struct perf_pmu_events_ht_attr {
	struct device_attribute			attr;
	u64					id;
	const char				*event_str_ht;
	const char				*event_str_noht;
};

ssize_t perf_event_sysfs_show(struct device *dev, struct device_attribute *attr,
			      char *page);

#define PMU_EVENT_ATTR(_name, _var, _id, _show)				\
static struct perf_pmu_events_attr _var = {				\
	.attr = __ATTR(_name, 0444, _show, NULL),			\
	.id   =  _id,							\
};

#define PMU_EVENT_ATTR_STRING(_name, _var, _str)			\
static struct perf_pmu_events_attr _var = {				\
	.attr		= __ATTR(_name, 0444, perf_event_sysfs_show, NULL), \
	.id		= 0,						\
	.event_str	= _str,						\
};

#define PMU_FORMAT_ATTR(_name, _format)					\
static ssize_t								\
_name##_show(struct device *dev,					\
	     struct device_attribute *attr,				\
	     char *page)						\
{									\
	BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE);			\
	return sprintf(page, _format "\n");				\
}									\
									\
static struct device_attribute format_attr_##_name = __ATTR_RO(_name)

/* Performance counter hotplug functions */
#ifdef CONFIG_PERF_EVENTS
int perf_event_init_cpu(unsigned int cpu);
int perf_event_exit_cpu(unsigned int cpu);
#else
#define perf_event_init_cpu	NULL
#define perf_event_exit_cpu	NULL
#endif

#endif /* _LINUX_PERF_EVENT_H */