Merge 5.15.105 into android13-5.15-lts
Changes in 5.15.105
    interconnect: qcom: osm-l3: fix icc_onecell_data allocation
    perf/core: Fix perf_output_begin parameter is incorrectly invoked in perf_event_bpf_output
    perf: fix perf_event_context->time
    tracing/hwlat: Replace sched_setaffinity with set_cpus_allowed_ptr
    serial: fsl_lpuart: Fix comment typo
    tty: serial: fsl_lpuart: switch to new dmaengine_terminate_* API
    tty: serial: fsl_lpuart: fix race on RX DMA shutdown
    serial: 8250: SERIAL_8250_ASPEED_VUART should depend on ARCH_ASPEED
    serial: 8250: ASPEED_VUART: select REGMAP instead of depending on it
    kthread: add the helper function kthread_run_on_cpu()
    trace/hwlat: make use of the helper function kthread_run_on_cpu()
    trace/hwlat: Do not start per-cpu thread if it is already running
    net: tls: fix possible race condition between do_tls_getsockopt_conf() and do_tls_setsockopt_conf()
    power: supply: bq24190_charger: using pm_runtime_resume_and_get instead of pm_runtime_get_sync
    power: supply: bq24190: Fix use after free bug in bq24190_remove due to race condition
    power: supply: da9150: Fix use after free bug in da9150_charger_remove due to race condition
    ARM: dts: imx6sll: e60k02: fix usbotg1 pinctrl
    ARM: dts: imx6sl: tolino-shine2hd: fix usbotg1 pinctrl
    arm64: dts: imx8mn: specify #sound-dai-cells for SAI nodes
    xsk: Add missing overflow check in xdp_umem_reg
    iavf: fix inverted Rx hash condition leading to disabled hash
    iavf: fix non-tunneled IPv6 UDP packet type and hashing
    intel/igbvf: free irq on the error path in igbvf_request_msix()
    igbvf: Regard vf reset nack as success
    igc: fix the validation logic for taprio's gate list
    i2c: imx-lpi2c: check only for enabled interrupt flags
    i2c: hisi: Only use the completion interrupt to finish the transfer
    scsi: scsi_dh_alua: Fix memleak for 'qdata' in alua_activate()
    net: dsa: b53: mmap: fix device tree support
    net: usb: smsc95xx: Limit packet length to skb->len
    qed/qed_sriov: guard against NULL derefs from qed_iov_get_vf_info
    xirc2ps_cs: Fix use after free bug in xirc2ps_detach
    net: phy: Ensure state transitions are processed from phy_stop()
    net: mdio: fix owner field for mdio buses registered using device-tree
    net: mdio: fix owner field for mdio buses registered using ACPI
    drm/i915/gt: perform uc late init after probe error injection
    net: qcom/emac: Fix use after free bug in emac_remove due to race condition
    net/ps3_gelic_net: Fix RX sk_buff length
    net/ps3_gelic_net: Use dma_mapping_error
    octeontx2-vf: Add missing free for alloc_percpu
    bootconfig: Fix testcase to increase max node
    keys: Do not cache key in task struct if key is requested from kernel thread
    iavf: fix hang on reboot with ice
    i40e: fix flow director packet filter programming
    bpf: Adjust insufficient default bpf_jit_limit
    net/mlx5e: Set uplink rep as NETNS_LOCAL
    net/mlx5: Fix steering rules cleanup
    net/mlx5: Read the TC mapping of all priorities on ETS query
    net/mlx5: E-Switch, Fix an Oops in error handling code
    net: dsa: tag_brcm: legacy: fix daisy-chained switches
    atm: idt77252: fix kmemleak when rmmod idt77252
    erspan: do not use skb_mac_header() in ndo_start_xmit()
    net/sonic: use dma_mapping_error() for error check
    nvme-tcp: fix nvme_tcp_term_pdu to match spec
    hvc/xen: prevent concurrent accesses to the shared ring
    ksmbd: add low bound validation to FSCTL_SET_ZERO_DATA
    ksmbd: add low bound validation to FSCTL_QUERY_ALLOCATED_RANGES
    ksmbd: fix possible refcount leak in smb2_open()
    gve: Cache link_speed value from device
    net: dsa: mt7530: move enabling disabling core clock to mt7530_pll_setup()
    net: dsa: mt7530: move lowering TRGMII driving to mt7530_setup()
    net: dsa: mt7530: move setting ssc_delta to PHY_INTERFACE_MODE_TRGMII case
    net: mdio: thunder: Add missing fwnode_handle_put()
    Bluetooth: btqcomsmd: Fix command timeout after setting BD address
    Bluetooth: L2CAP: Fix responding with wrong PDU type
    Bluetooth: btsdio: fix use after free bug in btsdio_remove due to unfinished work
    platform/chrome: cros_ec_chardev: fix kernel data leak from ioctl
    thread_info: Add helpers to snapshot thread flags
    entry: Snapshot thread flags
    entry/rcu: Check TIF_RESCHED _after_ delayed RCU wake-up
    hwmon: fix potential sensor registration fail if of_node is missing
    hwmon (it87): Fix voltage scaling for chips with 10.9mV ADCs
    scsi: qla2xxx: Synchronize the IOCB count to be in order
    scsi: qla2xxx: Perform lockless command completion in abort path
    uas: Add US_FL_NO_REPORT_OPCODES for JMicron JMS583Gen 2
    thunderbolt: Use scale field when allocating USB3 bandwidth
    thunderbolt: Call tb_check_quirks() after initializing adapters
    thunderbolt: Disable interrupt auto clear for rings
    thunderbolt: Add missing UNSET_INBOUND_SBTX for retimer access
    thunderbolt: Use const qualifier for `ring_interrupt_index`
    thunderbolt: Rename shadowed variables bit to interrupt_bit and auto_clear_bit
    ACPI: x86: utils: Add Cezanne to the list for forcing StorageD3Enable
    riscv: Bump COMMAND_LINE_SIZE value to 1024
    drm/cirrus: NULL-check pipe->plane.state->fb in cirrus_pipe_update()
    HID: cp2112: Fix driver not registering GPIO IRQ chip as threaded
    ca8210: fix mac_len negative array access
    HID: intel-ish-hid: ipc: Fix potential use-after-free in work function
    m68k: Only force 030 bus error if PC not in exception table
    selftests/bpf: check that modifier resolves after pointer
    scsi: target: iscsi: Fix an error message in iscsi_check_key()
    scsi: hisi_sas: Check devm_add_action() return value
    scsi: ufs: core: Add soft dependency on governor_simpleondemand
    scsi: lpfc: Check kzalloc() in lpfc_sli4_cgn_params_read()
    scsi: lpfc: Avoid usage of list iterator variable after loop
    scsi: storvsc: Handle BlockSize change in Hyper-V VHD/VHDX file
    net: usb: cdc_mbim: avoid altsetting toggling for Telit FE990
    net: usb: qmi_wwan: add Telit 0x1080 composition
    sh: sanitize the flags on sigreturn
    net/sched: act_mirred: better wording on protection against excessive stack growth
    act_mirred: use the backlog for nested calls to mirred ingress
    cifs: empty interface list when server doesn't support query interfaces
    cifs: print session id while listing open files
    scsi: core: Add BLIST_SKIP_VPD_PAGES for SKhynix H28U74301AMR
    usb: dwc2: fix a devres leak in hw_enable upon suspend resume
    usb: gadget: u_audio: don't let userspace block driver unbind
    efi: sysfb_efi: Fix DMI quirks not working for simpledrm
    mm/slab: Fix undefined init_cache_node_node() for NUMA and !SMP
    fscrypt: destroy keyring after security_sb_delete()
    fsverity: Remove WQ_UNBOUND from fsverity read workqueue
    lockd: set file_lock start and end when decoding nlm4 testargs
    arm64: dts: imx8mm-nitrogen-r2: fix WM8960 clock name
    igb: revert rtnl_lock() that causes deadlock
    dm thin: fix deadlock when swapping to thin device
    usb: typec: tcpm: fix warning when handle discover_identity message
    usb: cdns3: Fix issue with using incorrect PCI device function
    usb: cdnsp: Fixes issue with redundant Status Stage
    usb: cdnsp: changes PCI Device ID to fix conflict with CNDS3 driver
    usb: chipdea: core: fix return -EINVAL if request role is the same with current role
    usb: chipidea: core: fix possible concurrent when switch role
    usb: ucsi: Fix NULL pointer deref in ucsi_connector_change()
    kfence: avoid passing -g for test
    KVM: x86: hyper-v: Avoid calling kvm_make_vcpus_request_mask() with vcpu_mask==NULL
    ksmbd: set FILE_NAMED_STREAMS attribute in FS_ATTRIBUTE_INFORMATION
    ksmbd: return STATUS_NOT_SUPPORTED on unsupported smb2.0 dialect
    ksmbd: return unsupported error on smb1 mount
    wifi: mac80211: fix qos on mesh interfaces
    nilfs2: fix kernel-infoleak in nilfs_ioctl_wrap_copy()
    drm/bridge: lt8912b: return EPROBE_DEFER if bridge is not found
    drm/meson: fix missing component unbind on bind errors
    drm/amdgpu/nv: Apply ASPM quirk on Intel ADL + AMD Navi
    drm/i915/active: Fix missing debug object activation
    drm/i915: Preserve crtc_state->inherited during state clearing
    riscv: mm: Fix incorrect ASID argument when flushing TLB
    riscv: Handle zicsr/zifencei issues between clang and binutils
    tee: amdtee: fix race condition in amdtee_open_session
    firmware: arm_scmi: Fix device node validation for mailbox transport
    i2c: xgene-slimpro: Fix out-of-bounds bug in xgene_slimpro_i2c_xfer()
    dm stats: check for and propagate alloc_percpu failure
    dm crypt: add cond_resched() to dmcrypt_write()
    dm crypt: avoid accessing uninitialized tasklet
    sched/fair: sanitize vruntime of entity being placed
    sched/fair: Sanitize vruntime of entity being migrated
    mm: kfence: fix using kfence_metadata without initialization in show_object()
    ocfs2: fix data corruption after failed write
    NFSD: fix use-after-free in __nfs42_ssc_open()
    Linux 5.15.105

Change-Id: I95eeed5f1709518478411212ba7318c05016e6a4
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Makefile | 2 +-
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 15
-SUBLEVEL = 104
+SUBLEVEL = 105
 EXTRAVERSION =
 NAME = Trick or Treat
@@ -302,6 +302,7 @@
 
 &usbotg1 {
 	pinctrl-names = "default";
+	pinctrl-0 = <&pinctrl_usbotg1>;
 	disable-over-current;
 	srp-disable;
 	hnp-disable;
@@ -597,6 +597,7 @@
 
 &usbotg1 {
 	pinctrl-names = "default";
+	pinctrl-0 = <&pinctrl_usbotg1>;
 	disable-over-current;
 	srp-disable;
 	hnp-disable;
@@ -247,7 +247,7 @@
 		compatible = "wlf,wm8960";
 		reg = <0x1a>;
 		clocks = <&clk IMX8MM_CLK_SAI1_ROOT>;
-		clock-names = "mclk1";
+		clock-names = "mclk";
 		wlf,shared-lrclk;
 		#sound-dai-cells = <0>;
 	};
@@ -265,6 +265,7 @@
 			sai2: sai@30020000 {
 				compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
 				reg = <0x30020000 0x10000>;
+				#sound-dai-cells = <0>;
 				interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
 				clocks = <&clk IMX8MN_CLK_SAI2_IPG>,
					 <&clk IMX8MN_CLK_DUMMY>,
@@ -279,6 +280,7 @@
 			sai3: sai@30030000 {
 				compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
 				reg = <0x30030000 0x10000>;
+				#sound-dai-cells = <0>;
 				interrupts = <GIC_SPI 50 IRQ_TYPE_LEVEL_HIGH>;
 				clocks = <&clk IMX8MN_CLK_SAI3_IPG>,
					 <&clk IMX8MN_CLK_DUMMY>,
@@ -293,6 +295,7 @@
 			sai5: sai@30050000 {
 				compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
 				reg = <0x30050000 0x10000>;
+				#sound-dai-cells = <0>;
 				interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>;
 				clocks = <&clk IMX8MN_CLK_SAI5_IPG>,
					 <&clk IMX8MN_CLK_DUMMY>,
@@ -309,6 +312,7 @@
 			sai6: sai@30060000 {
 				compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
 				reg = <0x30060000 0x10000>;
+				#sound-dai-cells = <0>;
 				interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>;
 				clocks = <&clk IMX8MN_CLK_SAI6_IPG>,
					 <&clk IMX8MN_CLK_DUMMY>,
@@ -366,6 +370,7 @@
 			sai7: sai@300b0000 {
 				compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
 				reg = <0x300b0000 0x10000>;
+				#sound-dai-cells = <0>;
 				interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>;
 				clocks = <&clk IMX8MN_CLK_SAI7_IPG>,
					 <&clk IMX8MN_CLK_DUMMY>,
@@ -30,6 +30,7 @@
 #include <linux/init.h>
 #include <linux/ptrace.h>
 #include <linux/kallsyms.h>
+#include <linux/extable.h>
 
 #include <asm/setup.h>
 #include <asm/fpu.h>
@@ -544,7 +545,8 @@ static inline void bus_error030 (struct frame *fp)
 			errorcode |= 2;
 
 		if (mmusr & (MMU_I | MMU_WP)) {
-			if (ssw & 4) {
+			/* We might have an exception table for this PC */
+			if (ssw & 4 && !search_exception_tables(fp->ptregs.pc)) {
 				pr_err("Data %s fault at %#010lx in %s (pc=%#lx)\n",
				       ssw & RW ? "read" : "write",
				       fp->un.fmtb.daddr,
@@ -361,6 +361,28 @@ config RISCV_BASE_PMU
 
 endmenu
 
+config TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
+	def_bool y
+	# https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=aed44286efa8ae8717a77d94b51ac3614e2ca6dc
+	depends on AS_IS_GNU && AS_VERSION >= 23800
+	help
+	  Newer binutils versions default to ISA spec version 20191213 which
+	  moves some instructions from the I extension to the Zicsr and Zifencei
+	  extensions.
+
+config TOOLCHAIN_NEEDS_OLD_ISA_SPEC
+	def_bool y
+	depends on TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
+	# https://github.com/llvm/llvm-project/commit/22e199e6afb1263c943c0c0d4498694e15bf8a16
+	depends on CC_IS_CLANG && CLANG_VERSION < 170000
+	help
+	  Certain versions of clang do not support zicsr and zifencei via -march
+	  but newer versions of binutils require it for the reasons noted in the
+	  help text of CONFIG_TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI. This
+	  option causes an older ISA spec compatible with these older versions
+	  of clang to be passed to GAS, which has the same result as passing zicsr
+	  and zifencei to -march.
+
 config FPU
 	bool "FPU support"
 	default y
@@ -59,10 +59,12 @@ riscv-march-$(CONFIG_ARCH_RV64I) := rv64ima
 riscv-march-$(CONFIG_FPU)	:= $(riscv-march-y)fd
 riscv-march-$(CONFIG_RISCV_ISA_C)	:= $(riscv-march-y)c
 
-# Newer binutils versions default to ISA spec version 20191213 which moves some
-# instructions from the I extension to the Zicsr and Zifencei extensions.
-toolchain-need-zicsr-zifencei := $(call cc-option-yn, -march=$(riscv-march-y)_zicsr_zifencei)
-riscv-march-$(toolchain-need-zicsr-zifencei) := $(riscv-march-y)_zicsr_zifencei
+ifdef CONFIG_TOOLCHAIN_NEEDS_OLD_ISA_SPEC
+KBUILD_CFLAGS += -Wa,-misa-spec=2.2
+KBUILD_AFLAGS += -Wa,-misa-spec=2.2
+else
+riscv-march-$(CONFIG_TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI) := $(riscv-march-y)_zicsr_zifencei
+endif
 
 KBUILD_CFLAGS += -march=$(subst fd,,$(riscv-march-y))
 KBUILD_AFLAGS += -march=$(riscv-march-y)
@@ -12,6 +12,8 @@
 #include <asm/errata_list.h>
 
 #ifdef CONFIG_MMU
+extern unsigned long asid_mask;
+
 static inline void local_flush_tlb_all(void)
 {
	__asm__ __volatile__ ("sfence.vma" : : : "memory");
arch/riscv/include/uapi/asm/setup.h (new file) | 8 ++++++++
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+
+#ifndef _UAPI_ASM_RISCV_SETUP_H
+#define _UAPI_ASM_RISCV_SETUP_H
+
+#define COMMAND_LINE_SIZE 1024
+
+#endif /* _UAPI_ASM_RISCV_SETUP_H */
@@ -22,7 +22,7 @@ DEFINE_STATIC_KEY_FALSE(use_asid_allocator);
 
 static unsigned long asid_bits;
 static unsigned long num_asids;
-static unsigned long asid_mask;
+unsigned long asid_mask;
 
 static atomic_long_t current_version;
 
@@ -43,7 +43,7 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
	/* check if the tlbflush needs to be sent to other CPUs */
	broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
	if (static_branch_unlikely(&use_asid_allocator)) {
-		unsigned long asid = atomic_long_read(&mm->context.id);
+		unsigned long asid = atomic_long_read(&mm->context.id) & asid_mask;
 
		if (broadcast) {
			riscv_cpuid_to_hartid_mask(cmask, &hmask);
@@ -50,6 +50,7 @@
 #define SR_FD		0x00008000
 #define SR_MD		0x40000000
 
+#define SR_USER_MASK	0x00000303	// M, Q, S, T bits
 /*
  * DSP structure and data
  */
@@ -115,6 +115,7 @@ static int
 restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *r0_p)
 {
	unsigned int err = 0;
+	unsigned int sr = regs->sr & ~SR_USER_MASK;
 
 #define COPY(x)		err |= __get_user(regs->x, &sc->sc_##x)
			COPY(regs[1]);
@@ -130,6 +131,8 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *r0_p
			COPY(sr);	COPY(pc);
 #undef COPY
 
+	regs->sr = (regs->sr & SR_USER_MASK) | sr;
+
 #ifdef CONFIG_SH_FPU
	if (boot_cpu_data.flags & CPU_HAS_FPU) {
		int owned_fp;
@@ -1846,16 +1846,19 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 
	cpumask_clear(&hv_vcpu->tlb_flush);
 
-	vcpu_mask = all_cpus ? NULL :
-		sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask,
-					vp_bitmap, vcpu_bitmap);
-
	/*
	 * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
	 * analyze it here, flush TLB regardless of the specified address space.
	 */
-	kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
-				    NULL, vcpu_mask, &hv_vcpu->tlb_flush);
+	if (all_cpus) {
+		kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH_GUEST);
+	} else {
+		vcpu_mask = sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask,
+						    vp_bitmap, vcpu_bitmap);
+
+		kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
+					    NULL, vcpu_mask, &hv_vcpu->tlb_flush);
+	}
 
 ret_success:
	/* We always do full TLB flush, set 'Reps completed' = 'Rep Count' */
@@ -191,37 +191,26 @@ bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *s
  * a hardcoded allowlist for D3 support, which was used for these platforms.
  *
  * This allows quirking on Linux in a similar fashion.
+ *
+ * Cezanne systems shouldn't *normally* need this as the BIOS includes
+ * StorageD3Enable.  But for two reasons we have added it.
+ * 1) The BIOS on a number of Dell systems have ambiguity
+ *    between the same value used for _ADR on ACPI nodes GPP1.DEV0 and GPP1.NVME.
+ *    GPP1.NVME is needed to get StorageD3Enable node set properly.
+ *    https://bugzilla.kernel.org/show_bug.cgi?id=216440
+ *    https://bugzilla.kernel.org/show_bug.cgi?id=216773
+ *    https://bugzilla.kernel.org/show_bug.cgi?id=217003
+ * 2) On at least one HP system StorageD3Enable is missing on the second NVME
+ *    disk in the system.
  */
 static const struct x86_cpu_id storage_d3_cpu_ids[] = {
	X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 96, NULL),	/* Renoir */
	X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 104, NULL),	/* Lucienne */
-	{}
-};
-
-static const struct dmi_system_id force_storage_d3_dmi[] = {
-	{
-		/*
-		 * _ADR is ambiguous between GPP1.DEV0 and GPP1.NVME
-		 * but .NVME is needed to get StorageD3Enable node
-		 * https://bugzilla.kernel.org/show_bug.cgi?id=216440
-		 */
-		.matches = {
-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
-			DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 14 7425 2-in-1"),
-		}
-	},
-	{
-		.matches = {
-			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
-			DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 16 5625"),
-		}
-	},
+	X86_MATCH_VENDOR_FAM_MODEL(AMD, 25, 80, NULL),	/* Cezanne */
	{}
 };
 
 bool force_storage_d3(void)
 {
-	const struct dmi_system_id *dmi_id = dmi_first_match(force_storage_d3_dmi);
-
-	return dmi_id || x86_match_cpu(storage_d3_cpu_ids);
+	return x86_match_cpu(storage_d3_cpu_ids);
 }
@@ -2909,6 +2909,7 @@ close_card_oam(struct idt77252_dev *card)
 
				recycle_rx_pool_skb(card, &vc->rcv.rx_pool);
			}
+			kfree(vc);
		}
	}
 }
@@ -2952,6 +2953,15 @@ open_card_ubr0(struct idt77252_dev *card)
	return 0;
 }
 
+static void
+close_card_ubr0(struct idt77252_dev *card)
+{
+	struct vc_map *vc = card->vcs[0];
+
+	free_scq(card, vc->scq);
+	kfree(vc);
+}
+
 static int
 idt77252_dev_open(struct idt77252_dev *card)
 {
@@ -3001,6 +3011,7 @@ static void idt77252_dev_close(struct atm_dev *dev)
	struct idt77252_dev *card = dev->dev_data;
	u32 conf;
 
+	close_card_ubr0(card);
	close_card_oam(card);
 
	conf = SAR_CFG_RXPTH |	/* enable receive path */
@@ -122,6 +122,21 @@ static int btqcomsmd_setup(struct hci_dev *hdev)
	return 0;
 }
 
+static int btqcomsmd_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
+{
+	int ret;
+
+	ret = qca_set_bdaddr_rome(hdev, bdaddr);
+	if (ret)
+		return ret;
+
+	/* The firmware stops responding for a while after setting the bdaddr,
+	 * causing timeouts for subsequent commands. Sleep a bit to avoid this.
+	 */
+	usleep_range(1000, 10000);
+	return 0;
+}
+
 static int btqcomsmd_probe(struct platform_device *pdev)
 {
	struct btqcomsmd *btq;
@@ -162,7 +177,7 @@ static int btqcomsmd_probe(struct platform_device *pdev)
	hdev->close = btqcomsmd_close;
	hdev->send = btqcomsmd_send;
	hdev->setup = btqcomsmd_setup;
-	hdev->set_bdaddr = qca_set_bdaddr_rome;
+	hdev->set_bdaddr = btqcomsmd_set_bdaddr;
 
	ret = hci_register_dev(hdev);
	if (ret < 0)
@@ -352,6 +352,7 @@ static void btsdio_remove(struct sdio_func *func)
 
	BT_DBG("func %p", func);
 
+	cancel_work_sync(&data->work);
	if (!data)
		return;
 
@@ -52,6 +52,39 @@ static bool mailbox_chan_available(struct device *dev, int idx)
					   "#mbox-cells", idx, NULL);
 }
 
+static int mailbox_chan_validate(struct device *cdev)
+{
+	int num_mb, num_sh, ret = 0;
+	struct device_node *np = cdev->of_node;
+
+	num_mb = of_count_phandle_with_args(np, "mboxes", "#mbox-cells");
+	num_sh = of_count_phandle_with_args(np, "shmem", NULL);
+	/* Bail out if mboxes and shmem descriptors are inconsistent */
+	if (num_mb <= 0 || num_sh > 2 || num_mb != num_sh) {
+		dev_warn(cdev, "Invalid channel descriptor for '%s'\n",
+			 of_node_full_name(np));
+		return -EINVAL;
+	}
+
+	if (num_sh > 1) {
+		struct device_node *np_tx, *np_rx;
+
+		np_tx = of_parse_phandle(np, "shmem", 0);
+		np_rx = of_parse_phandle(np, "shmem", 1);
+		/* SCMI Tx and Rx shared mem areas have to be distinct */
+		if (!np_tx || !np_rx || np_tx == np_rx) {
+			dev_warn(cdev, "Invalid shmem descriptor for '%s'\n",
+				 of_node_full_name(np));
+			ret = -EINVAL;
+		}
+
+		of_node_put(np_tx);
+		of_node_put(np_rx);
+	}
+
+	return ret;
+}
+
 static int mailbox_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
			      bool tx)
 {
@@ -64,6 +97,10 @@ static int mailbox_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
	resource_size_t size;
	struct resource res;
 
+	ret = mailbox_chan_validate(cdev);
+	if (ret)
+		return ret;
+
	smbox = devm_kzalloc(dev, sizeof(*smbox), GFP_KERNEL);
	if (!smbox)
		return -ENOMEM;
@@ -343,7 +343,7 @@ static const struct fwnode_operations efifb_fwnode_ops = {
 #ifdef CONFIG_EFI
 static struct fwnode_handle efifb_fwnode;
 
-__init void sysfb_apply_efi_quirks(struct platform_device *pd)
+__init void sysfb_apply_efi_quirks(void)
 {
	if (screen_info.orig_video_isVGA != VIDEO_TYPE_EFI ||
	    !(screen_info.capabilities & VIDEO_CAPABILITY_SKIP_QUIRKS))
@@ -357,7 +357,10 @@ __init void sysfb_apply_efi_quirks(struct platform_device *pd)
		screen_info.lfb_height = temp;
		screen_info.lfb_linelength = 4 * screen_info.lfb_width;
	}
+}
 
+__init void sysfb_set_efifb_fwnode(struct platform_device *pd)
+{
	if (screen_info.orig_video_isVGA == VIDEO_TYPE_EFI && IS_ENABLED(CONFIG_PCI)) {
		fwnode_init(&efifb_fwnode, &efifb_fwnode_ops);
		pd->dev.fwnode = &efifb_fwnode;
@@ -81,6 +81,8 @@ static __init int sysfb_init(void)
	if (disabled)
		goto unlock_mutex;
 
+	sysfb_apply_efi_quirks();
+
	/* try to create a simple-framebuffer device */
	compatible = sysfb_parse_mode(si, &mode);
	if (compatible) {
@@ -103,7 +105,7 @@ static __init int sysfb_init(void)
		goto unlock_mutex;
	}
 
-	sysfb_apply_efi_quirks(pd);
+	sysfb_set_efifb_fwnode(pd);
 
	ret = platform_device_add_data(pd, si, sizeof(*si));
	if (ret)
@@ -110,7 +110,7 @@ __init struct platform_device *sysfb_create_simplefb(const struct screen_info *s
	if (!pd)
		return ERR_PTR(-ENOMEM);
 
-	sysfb_apply_efi_quirks(pd);
+	sysfb_set_efifb_fwnode(pd);
 
	ret = platform_device_add_resources(pd, &res, 1);
	if (ret)
@@ -1286,6 +1286,7 @@ void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
 int amdgpu_device_pci_reset(struct amdgpu_device *adev);
 bool amdgpu_device_need_post(struct amdgpu_device *adev);
 bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev);
+bool amdgpu_device_aspm_support_quirk(void);
 
 void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes,
				  u64 num_vis_bytes);
@@ -75,6 +75,10 @@
 
 #include <drm/drm_drv.h>
 
+#if IS_ENABLED(CONFIG_X86)
+#include <asm/intel-family.h>
+#endif
+
 MODULE_FIRMWARE("amdgpu/vega10_gpu_info.bin");
 MODULE_FIRMWARE("amdgpu/vega12_gpu_info.bin");
 MODULE_FIRMWARE("amdgpu/raven_gpu_info.bin");
@@ -1337,6 +1341,17 @@ bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev)
	return pcie_aspm_enabled(adev->pdev);
 }
 
+bool amdgpu_device_aspm_support_quirk(void)
+{
+#if IS_ENABLED(CONFIG_X86)
+	struct cpuinfo_x86 *c = &cpu_data(0);
+
+	return !(c->x86 == 6 && c->x86_model == INTEL_FAM6_ALDERLAKE);
+#else
+	return true;
+#endif
+}
+
 /* if we get transitioned to only one device, take VGA back */
 /**
  * amdgpu_device_vga_set_decode - enable/disable vga decode
@@ -584,7 +584,7 @@ static void nv_pcie_gen3_enable(struct amdgpu_device *adev)
 
 static void nv_program_aspm(struct amdgpu_device *adev)
 {
-	if (!amdgpu_device_should_use_aspm(adev))
+	if (!amdgpu_device_should_use_aspm(adev) || !amdgpu_device_aspm_support_quirk())
		return;
 
	if (!(adev->flags & AMD_IS_APU) &&
@@ -81,10 +81,6 @@
 #include "mxgpu_vi.h"
 #include "amdgpu_dm.h"
 
-#if IS_ENABLED(CONFIG_X86)
-#include <asm/intel-family.h>
-#endif
-
 #define ixPCIE_LC_L1_PM_SUBSTATE	0x100100C6
 #define PCIE_LC_L1_PM_SUBSTATE__LC_L1_SUBSTATES_OVERRIDE_EN_MASK	0x00000001L
 #define PCIE_LC_L1_PM_SUBSTATE__LC_PCI_PM_L1_2_OVERRIDE_MASK	0x00000002L
@@ -1138,24 +1134,13 @@ static void vi_enable_aspm(struct amdgpu_device *adev)
	WREG32_PCIE(ixPCIE_LC_CNTL, data);
 }
 
-static bool aspm_support_quirk_check(void)
-{
-#if IS_ENABLED(CONFIG_X86)
-	struct cpuinfo_x86 *c = &cpu_data(0);
-
-	return !(c->x86 == 6 && c->x86_model == INTEL_FAM6_ALDERLAKE);
-#else
-	return true;
-#endif
-}
-
 static void vi_program_aspm(struct amdgpu_device *adev)
 {
	u32 data, data1, orig;
	bool bL1SS = false;
	bool bClkReqSupport = true;
 
-	if (!amdgpu_device_should_use_aspm(adev) || !aspm_support_quirk_check())
+	if (!amdgpu_device_should_use_aspm(adev) || !amdgpu_device_aspm_support_quirk())
		return;
 
	if (adev->flags & AMD_IS_APU ||
@@ -670,8 +670,8 @@ static int lt8912_parse_dt(struct lt8912 *lt)
 
	lt->hdmi_port = of_drm_find_bridge(port_node);
	if (!lt->hdmi_port) {
-		dev_err(lt->dev, "%s: Failed to get hdmi port\n", __func__);
-		ret = -ENODEV;
+		ret = -EPROBE_DEFER;
+		dev_err_probe(lt->dev, ret, "%s: Failed to get hdmi port\n", __func__);
		goto err_free_host_node;
	}
 
@@ -7824,6 +7824,7 @@ intel_crtc_prepare_cleared_state(struct intel_atomic_state *state,
	 * only fields that are know to not cause problems are preserved. */
 
	saved_state->uapi = crtc_state->uapi;
+	saved_state->inherited = crtc_state->inherited;
	saved_state->scaler_state = crtc_state->scaler_state;
	saved_state->shared_dpll = crtc_state->shared_dpll;
	saved_state->dpll_hw_state = crtc_state->dpll_hw_state;
@@ -709,12 +709,12 @@ int intel_gt_init(struct intel_gt *gt)
	if (err)
		goto err_gt;
 
-	intel_uc_init_late(&gt->uc);
-
	err = i915_inject_probe_error(gt->i915, -EIO);
	if (err)
		goto err_gt;
 
+	intel_uc_init_late(&gt->uc);
+
	intel_migrate_init(&gt->migrate, gt);
 
	goto out_fw;
@@ -92,8 +92,7 @@ static void debug_active_init(struct i915_active *ref)
 static void debug_active_activate(struct i915_active *ref)
 {
	lockdep_assert_held(&ref->tree_lock);
-	if (!atomic_read(&ref->count)) /* before the first inc */
-		debug_object_activate(ref, &active_debug_desc);
+	debug_object_activate(ref, &active_debug_desc);
 }
 
 static void debug_active_deactivate(struct i915_active *ref)
@@ -324,23 +324,23 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
 
	ret = meson_encoder_hdmi_init(priv);
	if (ret)
-		goto exit_afbcd;
+		goto unbind_all;
 
	ret = meson_plane_create(priv);
	if (ret)
-		goto exit_afbcd;
+		goto unbind_all;
 
	ret = meson_overlay_create(priv);
	if (ret)
-		goto exit_afbcd;
+		goto unbind_all;
 
	ret = meson_crtc_create(priv);
	if (ret)
-		goto exit_afbcd;
+		goto unbind_all;
 
	ret = request_irq(priv->vsync_irq, meson_irq, 0, drm->driver->name, drm);
	if (ret)
-		goto exit_afbcd;
+		goto unbind_all;
 
	drm_mode_config_reset(drm);
 
@@ -358,6 +358,9 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
 
 uninstall_irq:
	free_irq(priv->vsync_irq, drm);
+unbind_all:
+	if (has_components)
+		component_unbind_all(drm->dev, drm);
 exit_afbcd:
	if (priv->afbcd.ops)
		priv->afbcd.ops->exit(priv);
@@ -450,7 +450,7 @@ static void cirrus_pipe_update(struct drm_simple_display_pipe *pipe,
	if (state->fb && cirrus->cpp != cirrus_cpp(state->fb))
		cirrus_mode_set(cirrus, &crtc->mode, state->fb);
 
-	if (drm_atomic_helper_damage_merged(old_state, state, &rect))
+	if (state->fb && drm_atomic_helper_damage_merged(old_state, state, &rect))
		cirrus_fb_blit_rect(state->fb, &shadow_plane_state->data[0], &rect);
 }
@@ -1352,6 +1352,7 @@ static int cp2112_probe(struct hid_device *hdev, const struct hid_device_id *id)
	girq->parents = NULL;
	girq->default_type = IRQ_TYPE_NONE;
	girq->handler = handle_simple_irq;
+	girq->threaded = true;
 
	ret = gpiochip_add_data(&dev->gc, dev);
	if (ret < 0) {
@@ -5,6 +5,7 @@
  * Copyright (c) 2014-2016, Intel Corporation.
  */
 
+#include <linux/devm-helpers.h>
 #include <linux/sched.h>
 #include <linux/spinlock.h>
 #include <linux/delay.h>
@@ -621,7 +622,6 @@ static void recv_ipc(struct ishtp_device *dev, uint32_t doorbell_val)
	case MNG_RESET_NOTIFY:
		if (!ishtp_dev) {
			ishtp_dev = dev;
-			INIT_WORK(&fw_reset_work, fw_reset_work_fn);
		}
		schedule_work(&fw_reset_work);
		break;
@@ -936,6 +936,7 @@ struct ishtp_device *ish_dev_init(struct pci_dev *pdev)
 {
	struct ishtp_device *dev;
	int	i;
+	int	ret;
 
	dev = devm_kzalloc(&pdev->dev,
			   sizeof(struct ishtp_device) + sizeof(struct ish_hw),
@@ -971,6 +972,12 @@ struct ishtp_device *ish_dev_init(struct pci_dev *pdev)
		list_add_tail(&tx_buf->link, &dev->wr_free_list);
	}
 
+	ret = devm_work_autocancel(&pdev->dev, &fw_reset_work, fw_reset_work_fn);
+	if (ret) {
+		dev_err(dev->devc, "Failed to initialise FW reset work\n");
+		return NULL;
+	}
+
	dev->ops = &ish_hw_ops;
	dev->devc = &pdev->dev;
	dev->mtu = IPC_PAYLOAD_SIZE - sizeof(struct ishtp_msg_hdr);
@@ -736,6 +736,7 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
 {
 	struct hwmon_device *hwdev;
 	struct device *hdev;
+	struct device *tdev = dev;
 	int i, err, id;
 
 	/* Complain about invalid characters in hwmon name attribute */

@@ -793,7 +794,9 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
 	hwdev->name = name;
 	hdev->class = &hwmon_class;
 	hdev->parent = dev;
-	hdev->of_node = dev ? dev->of_node : NULL;
+	while (tdev && !tdev->of_node)
+		tdev = tdev->parent;
+	hdev->of_node = tdev ? tdev->of_node : NULL;
 	hwdev->chip = chip;
 	dev_set_drvdata(hdev, drvdata);
 	dev_set_name(hdev, HWMON_ID_FORMAT, id);

@@ -805,7 +808,7 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
 
 	INIT_LIST_HEAD(&hwdev->tzdata);
 
-	if (dev && dev->of_node && chip && chip->ops->read &&
+	if (hdev->of_node && chip && chip->ops->read &&
 	    chip->info[0]->type == hwmon_chip &&
 	    (chip->info[0]->config[0] & HWMON_C_REGISTER_TZ)) {
 		err = hwmon_thermal_register_sensors(hdev);
@@ -486,6 +486,8 @@ static const struct it87_devices it87_devices[] = {
 #define has_pwm_freq2(data)	((data)->features & FEAT_PWM_FREQ2)
 #define has_six_temp(data)	((data)->features & FEAT_SIX_TEMP)
 #define has_vin3_5v(data)	((data)->features & FEAT_VIN3_5V)
+#define has_scaling(data)	((data)->features & (FEAT_12MV_ADC | \
+						     FEAT_10_9MV_ADC))
 
 struct it87_sio_data {
 	int sioaddr;

@@ -3098,7 +3100,7 @@ static int it87_probe(struct platform_device *pdev)
 			 "Detected broken BIOS defaults, disabling PWM interface\n");
 
 	/* Starting with IT8721F, we handle scaling of internal voltages */
-	if (has_12mv_adc(data)) {
+	if (has_scaling(data)) {
 		if (sio_data->internal & BIT(0))
 			data->in_scaled |= BIT(3);	/* in3 is AVCC */
 		if (sio_data->internal & BIT(1))
@@ -340,7 +340,11 @@ static irqreturn_t hisi_i2c_irq(int irq, void *context)
 		hisi_i2c_read_rx_fifo(ctlr);
 
 out:
-	if (int_stat & HISI_I2C_INT_TRANS_CPLT || ctlr->xfer_err) {
+	/*
+	 * Only use TRANS_CPLT to indicate the completion. On error cases we'll
+	 * get two interrupts, INT_ERR first then TRANS_CPLT.
+	 */
+	if (int_stat & HISI_I2C_INT_TRANS_CPLT) {
 		hisi_i2c_disable_int(ctlr, HISI_I2C_INT_ALL);
 		hisi_i2c_clear_int(ctlr, HISI_I2C_INT_ALL);
 		complete(ctlr->completion);
@@ -502,10 +502,14 @@ disable:
 static irqreturn_t lpi2c_imx_isr(int irq, void *dev_id)
 {
 	struct lpi2c_imx_struct *lpi2c_imx = dev_id;
+	unsigned int enabled;
 	unsigned int temp;
 
+	enabled = readl(lpi2c_imx->base + LPI2C_MIER);
+
 	lpi2c_imx_intctrl(lpi2c_imx, 0);
 	temp = readl(lpi2c_imx->base + LPI2C_MSR);
+	temp &= enabled;
 
 	if (temp & MSR_RDF)
 		lpi2c_imx_read_rxfifo(lpi2c_imx);
@@ -307,6 +307,9 @@ static int slimpro_i2c_blkwr(struct slimpro_i2c_dev *ctx, u32 chip,
 	u32 msg[3];
 	int rc;
 
+	if (writelen > I2C_SMBUS_BLOCK_MAX)
+		return -EINVAL;
+
 	memcpy(ctx->dma_buffer, data, writelen);
 	paddr = dma_map_single(ctx->dev, ctx->dma_buffer, writelen,
 			       DMA_TO_DEVICE);
@@ -275,7 +275,7 @@ static int qcom_osm_l3_probe(struct platform_device *pdev)
 	qnodes = desc->nodes;
 	num_nodes = desc->num_nodes;
 
-	data = devm_kcalloc(&pdev->dev, num_nodes, sizeof(*node), GFP_KERNEL);
+	data = devm_kzalloc(&pdev->dev, struct_size(data, nodes, num_nodes), GFP_KERNEL);
 	if (!data)
 		return -ENOMEM;
 
@@ -68,7 +68,9 @@ struct dm_crypt_io {
 	struct crypt_config *cc;
 	struct bio *base_bio;
 	u8 *integrity_metadata;
-	bool integrity_metadata_from_pool;
+	bool integrity_metadata_from_pool:1;
+	bool in_tasklet:1;
+
 	struct work_struct work;
 	struct tasklet_struct tasklet;

@@ -1723,6 +1725,7 @@ static void crypt_io_init(struct dm_crypt_io *io, struct crypt_config *cc,
 	io->ctx.r.req = NULL;
 	io->integrity_metadata = NULL;
 	io->integrity_metadata_from_pool = false;
+	io->in_tasklet = false;
 	atomic_set(&io->io_pending, 0);
 }

@@ -1768,14 +1771,13 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
 	 * our tasklet. In this case we need to delay bio_endio()
 	 * execution to after the tasklet is done and dequeued.
 	 */
-	if (tasklet_trylock(&io->tasklet)) {
-		tasklet_unlock(&io->tasklet);
-		bio_endio(base_bio);
+	if (io->in_tasklet) {
+		INIT_WORK(&io->work, kcryptd_io_bio_endio);
+		queue_work(cc->io_queue, &io->work);
 		return;
 	}
 
-	INIT_WORK(&io->work, kcryptd_io_bio_endio);
-	queue_work(cc->io_queue, &io->work);
+	bio_endio(base_bio);
 }
 
 /*

@@ -1935,6 +1937,7 @@ pop_from_list:
 			io = crypt_io_from_node(rb_first(&write_tree));
 			rb_erase(&io->rb_node, &write_tree);
 			kcryptd_io_write(io);
+			cond_resched();
 		} while (!RB_EMPTY_ROOT(&write_tree));
 		blk_finish_plug(&plug);
 	}

@@ -2228,6 +2231,7 @@ static void kcryptd_queue_crypt(struct dm_crypt_io *io)
 	 * it is being executed with irqs disabled.
 	 */
 	if (in_hardirq() || irqs_disabled()) {
+		io->in_tasklet = true;
 		tasklet_init(&io->tasklet, kcryptd_crypt_tasklet, (unsigned long)&io->work);
 		tasklet_schedule(&io->tasklet);
 		return;
@@ -188,7 +188,7 @@ static int dm_stat_in_flight(struct dm_stat_shared *shared)
 	       atomic_read(&shared->in_flight[WRITE]);
 }
 
-void dm_stats_init(struct dm_stats *stats)
+int dm_stats_init(struct dm_stats *stats)
 {
 	int cpu;
 	struct dm_stats_last_position *last;

@@ -197,11 +197,16 @@ void dm_stats_init(struct dm_stats *stats)
 	INIT_LIST_HEAD(&stats->list);
 	stats->precise_timestamps = false;
 	stats->last = alloc_percpu(struct dm_stats_last_position);
+	if (!stats->last)
+		return -ENOMEM;
+
 	for_each_possible_cpu(cpu) {
 		last = per_cpu_ptr(stats->last, cpu);
 		last->last_sector = (sector_t)ULLONG_MAX;
 		last->last_rw = UINT_MAX;
 	}
+
+	return 0;
 }
 
 void dm_stats_cleanup(struct dm_stats *stats)
@@ -21,7 +21,7 @@ struct dm_stats_aux {
 	unsigned long long duration_ns;
 };
 
-void dm_stats_init(struct dm_stats *st);
+int dm_stats_init(struct dm_stats *st);
 void dm_stats_cleanup(struct dm_stats *st);
 
 struct mapped_device;
@@ -3383,6 +3383,7 @@ static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv)
 	pt->low_water_blocks = low_water_blocks;
 	pt->adjusted_pf = pt->requested_pf = pf;
 	ti->num_flush_bios = 1;
+	ti->limit_swap_bios = true;
 
 	/*
 	 * Only need to enable discards if the pool should pass

@@ -4263,6 +4264,7 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
 		goto bad;
 
 	ti->num_flush_bios = 1;
+	ti->limit_swap_bios = true;
 	ti->flush_supported = true;
 	ti->per_io_data_size = sizeof(struct dm_thin_endio_hook);
@@ -1818,7 +1818,9 @@ static struct mapped_device *alloc_dev(int minor)
 	if (!md->pending_io)
 		goto bad;
 
-	dm_stats_init(&md->stats);
+	r = dm_stats_init(&md->stats);
+	if (r < 0)
+		goto bad;
 
 	/* Populate the mapping, nobody knows we exist yet */
 	spin_lock(&_minor_lock);
@@ -263,7 +263,7 @@ static int b53_mmap_probe_of(struct platform_device *pdev,
 		if (of_property_read_u32(of_port, "reg", &reg))
 			continue;
 
-		if (reg < B53_CPU_PORT)
+		if (reg < B53_N_PORTS)
 			pdata->enabled_ports |= BIT(reg);
 	}
 
@@ -391,6 +391,9 @@ mt7530_fdb_write(struct mt7530_priv *priv, u16 vid,
 /* Set up switch core clock for MT7530 */
 static void mt7530_pll_setup(struct mt7530_priv *priv)
 {
+	/* Disable core clock */
+	core_clear(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
+
 	/* Disable PLL */
 	core_write(priv, CORE_GSWPLL_GRP1, 0);
 
@@ -404,14 +407,19 @@ static void mt7530_pll_setup(struct mt7530_priv *priv)
 		   RG_GSWPLL_EN_PRE |
 		   RG_GSWPLL_POSDIV_200M(2) |
 		   RG_GSWPLL_FBKDIV_200M(32));
+
+	udelay(20);
+
+	/* Enable core clock */
+	core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
 }
 
-/* Setup TX circuit including relevant PAD and driving */
+/* Setup port 6 interface mode and TRGMII TX circuit */
 static int
 mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
 {
 	struct mt7530_priv *priv = ds->priv;
-	u32 ncpo1, ssc_delta, trgint, i, xtal;
+	u32 ncpo1, ssc_delta, trgint, xtal;
 
 	xtal = mt7530_read(priv, MT7530_MHWTRAP) & HWTRAP_XTAL_MASK;

@@ -428,6 +436,10 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
 		break;
 	case PHY_INTERFACE_MODE_TRGMII:
 		trgint = 1;
+		if (xtal == HWTRAP_XTAL_25MHZ)
+			ssc_delta = 0x57;
+		else
+			ssc_delta = 0x87;
 		if (priv->id == ID_MT7621) {
 			/* PLL frequency: 150MHz: 1.2GBit */
 			if (xtal == HWTRAP_XTAL_40MHZ)

@@ -447,23 +459,12 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
 			return -EINVAL;
 		}
 
-		if (xtal == HWTRAP_XTAL_25MHZ)
-			ssc_delta = 0x57;
-		else
-			ssc_delta = 0x87;
-
 		mt7530_rmw(priv, MT7530_P6ECR, P6_INTF_MODE_MASK,
 			   P6_INTF_MODE(trgint));
 
 		if (trgint) {
-			/* Lower Tx Driving for TRGMII path */
-			for (i = 0 ; i < NUM_TRGMII_CTRL ; i++)
-				mt7530_write(priv, MT7530_TRGMII_TD_ODT(i),
-					     TD_DM_DRVP(8) | TD_DM_DRVN(8));
-
-			/* Disable MT7530 core and TRGMII Tx clocks */
-			core_clear(priv, CORE_TRGMII_GSW_CLK_CG,
-				   REG_GSWCK_EN | REG_TRGMIICK_EN);
+			/* Disable the MT7530 TRGMII clocks */
+			core_clear(priv, CORE_TRGMII_GSW_CLK_CG, REG_TRGMIICK_EN);
 
 			/* Setup the MT7530 TRGMII Tx Clock */
 			core_write(priv, CORE_PLL_GROUP5, RG_LCDDS_PCW_NCPO1(ncpo1));

@@ -480,13 +481,8 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
 			   RG_LCDDS_PCW_NCPO_CHG | RG_LCCDS_C(3) |
 			   RG_LCDDS_PWDB | RG_LCDDS_ISO_EN);
 
-		/* Enable MT7530 core and TRGMII Tx clocks */
-		core_set(priv, CORE_TRGMII_GSW_CLK_CG,
-			 REG_GSWCK_EN | REG_TRGMIICK_EN);
-	} else {
-		for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
-			mt7530_rmw(priv, MT7530_TRGMII_RD(i),
-				   RD_TAP_MASK, RD_TAP(16));
+		/* Enable the MT7530 TRGMII clocks */
+		core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_TRGMIICK_EN);
 	}
 
 	return 0;

@@ -2168,6 +2164,15 @@ mt7530_setup(struct dsa_switch *ds)
 
 	mt7530_pll_setup(priv);
 
+	/* Lower Tx driving for TRGMII path */
+	for (i = 0; i < NUM_TRGMII_CTRL; i++)
+		mt7530_write(priv, MT7530_TRGMII_TD_ODT(i),
+			     TD_DM_DRVP(8) | TD_DM_DRVN(8));
+
+	for (i = 0; i < NUM_TRGMII_CTRL; i++)
+		mt7530_rmw(priv, MT7530_TRGMII_RD(i),
+			   RD_TAP_MASK, RD_TAP(16));
+
 	/* Enable port 6 */
 	val = mt7530_read(priv, MT7530_MHWTRAP);
 	val &= ~MHWTRAP_P6_DIS & ~MHWTRAP_PHY_ACCESS;
@@ -526,7 +526,10 @@ static int gve_get_link_ksettings(struct net_device *netdev,
 				  struct ethtool_link_ksettings *cmd)
 {
 	struct gve_priv *priv = netdev_priv(netdev);
-	int err = gve_adminq_report_link_speed(priv);
+	int err = 0;
+
+	if (priv->link_speed == 0)
+		err = gve_adminq_report_link_speed(priv);
 
 	cmd->base.speed = priv->link_speed;
 	return err;
@@ -170,10 +170,10 @@ static char *i40e_create_dummy_packet(u8 *dummy_packet, bool ipv4, u8 l4proto,
 				      struct i40e_fdir_filter *data)
 {
 	bool is_vlan = !!data->vlan_tag;
-	struct vlan_hdr vlan;
-	struct ipv6hdr ipv6;
-	struct ethhdr eth;
-	struct iphdr ip;
+	struct vlan_hdr vlan = {};
+	struct ipv6hdr ipv6 = {};
+	struct ethhdr eth = {};
+	struct iphdr ip = {};
 	u8 *tmp;
 
 	if (ipv4) {
@@ -661,7 +661,7 @@ struct iavf_rx_ptype_decoded iavf_ptype_lookup[BIT(8)] = {
 	/* Non Tunneled IPv6 */
 	IAVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
 	IAVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
-	IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY3),
+	IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
 	IAVF_PTT_UNUSED_ENTRY(91),
 	IAVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
 	IAVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
@@ -4213,6 +4213,11 @@ static void iavf_remove(struct pci_dev *pdev)
 			mutex_unlock(&adapter->crit_lock);
 			break;
 		}
+		/* Simply return if we already went through iavf_shutdown */
+		if (adapter->state == __IAVF_REMOVE) {
+			mutex_unlock(&adapter->crit_lock);
+			return;
+		}
 
 		mutex_unlock(&adapter->crit_lock);
 		usleep_range(500, 1000);
@@ -1061,7 +1061,7 @@ static inline void iavf_rx_hash(struct iavf_ring *ring,
 		cpu_to_le64((u64)IAVF_RX_DESC_FLTSTAT_RSS_HASH <<
 			    IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT);
 
-	if (ring->netdev->features & NETIF_F_RXHASH)
+	if (!(ring->netdev->features & NETIF_F_RXHASH))
 		return;
 
 	if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) {
@@ -3820,9 +3820,7 @@ static void igb_remove(struct pci_dev *pdev)
 	igb_release_hw_control(adapter);
 
 #ifdef CONFIG_PCI_IOV
-	rtnl_lock();
 	igb_disable_sriov(pdev);
-	rtnl_unlock();
 #endif
 
 	unregister_netdev(netdev);
@@ -1074,7 +1074,7 @@ static int igbvf_request_msix(struct igbvf_adapter *adapter)
 			  igbvf_intr_msix_rx, 0, adapter->rx_ring->name,
 			  netdev);
 	if (err)
-		goto out;
+		goto free_irq_tx;
 
 	adapter->rx_ring->itr_register = E1000_EITR(vector);
 	adapter->rx_ring->itr_val = adapter->current_itr;

@@ -1083,10 +1083,14 @@ static int igbvf_request_msix(struct igbvf_adapter *adapter)
 	err = request_irq(adapter->msix_entries[vector].vector,
 			  igbvf_msix_other, 0, netdev->name, netdev);
 	if (err)
-		goto out;
+		goto free_irq_rx;
 
 	igbvf_configure_msix(adapter);
 	return 0;
+free_irq_rx:
+	free_irq(adapter->msix_entries[--vector].vector, netdev);
+free_irq_tx:
+	free_irq(adapter->msix_entries[--vector].vector, netdev);
 out:
 	return err;
 }
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright(c) 2009 - 2018 Intel Corporation. */
 
+#include <linux/etherdevice.h>
+
 #include "vf.h"
 
 static s32 e1000_check_for_link_vf(struct e1000_hw *hw);

@@ -131,11 +133,16 @@ static s32 e1000_reset_hw_vf(struct e1000_hw *hw)
 		/* set our "perm_addr" based on info provided by PF */
 		ret_val = mbx->ops.read_posted(hw, msgbuf, 3);
 		if (!ret_val) {
-			if (msgbuf[0] == (E1000_VF_RESET |
-					  E1000_VT_MSGTYPE_ACK))
+			switch (msgbuf[0]) {
+			case E1000_VF_RESET | E1000_VT_MSGTYPE_ACK:
 				memcpy(hw->mac.perm_addr, addr, ETH_ALEN);
-			else
+				break;
+			case E1000_VF_RESET | E1000_VT_MSGTYPE_NACK:
 				eth_zero_addr(hw->mac.perm_addr);
+				break;
+			default:
+				ret_val = -E1000_ERR_MAC_INIT;
+			}
 		}
 	}
@@ -5951,18 +5951,18 @@ static bool validate_schedule(struct igc_adapter *adapter,
 		if (e->command != TC_TAPRIO_CMD_SET_GATES)
 			return false;
 
-		for (i = 0; i < adapter->num_tx_queues; i++) {
-			if (e->gate_mask & BIT(i))
+		for (i = 0; i < adapter->num_tx_queues; i++)
+			if (e->gate_mask & BIT(i)) {
 				queue_uses[i]++;
 
-			/* There are limitations: A single queue cannot be
-			 * opened and closed multiple times per cycle unless the
-			 * gate stays open. Check for it.
-			 */
-			if (queue_uses[i] > 1 &&
-			    !(prev->gate_mask & BIT(i)))
-				return false;
-		}
+				/* There are limitations: A single queue cannot
+				 * be opened and closed multiple times per cycle
+				 * unless the gate stays open. Check for it.
+				 */
+				if (queue_uses[i] > 1 &&
+				    !(prev->gate_mask & BIT(i)))
+					return false;
+			}
 	}
 
 	return true;
@@ -704,6 +704,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 err_unreg_netdev:
 	unregister_netdev(netdev);
 err_detach_rsrc:
+	free_percpu(vf->hw.lmt_info);
 	if (test_bit(CN10K_LMTST, &vf->hw.cap_flag))
 		qmem_free(vf->dev, vf->dync_lmt);
 	otx2_detach_resources(&vf->mbox);

@@ -738,6 +739,7 @@ static void otx2vf_remove(struct pci_dev *pdev)
 		destroy_workqueue(vf->otx2_wq);
 	otx2vf_disable_mbox_intr(vf);
 	otx2_detach_resources(&vf->mbox);
+	free_percpu(vf->hw.lmt_info);
 	if (test_bit(CN10K_LMTST, &vf->hw.cap_flag))
 		qmem_free(vf->dev, vf->dync_lmt);
 	otx2vf_vfaf_mbox_destroy(vf);
@@ -117,12 +117,14 @@ static int mlx5e_dcbnl_ieee_getets(struct net_device *netdev,
 	if (!MLX5_CAP_GEN(priv->mdev, ets))
 		return -EOPNOTSUPP;
 
-	ets->ets_cap = mlx5_max_tc(priv->mdev) + 1;
-	for (i = 0; i < ets->ets_cap; i++) {
+	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
 		err = mlx5_query_port_prio_tc(mdev, i, &ets->prio_tc[i]);
 		if (err)
 			return err;
 	}
 
+	ets->ets_cap = mlx5_max_tc(priv->mdev) + 1;
+	for (i = 0; i < ets->ets_cap; i++) {
 		err = mlx5_query_port_tc_group(mdev, i, &tc_group[i]);
 		if (err)
 			return err;
@@ -3527,8 +3527,12 @@ static netdev_features_t mlx5e_fix_features(struct net_device *netdev,
 		netdev_warn(netdev, "Disabling rxhash, not supported when CQE compress is active\n");
 	}
 
-	if (mlx5e_is_uplink_rep(priv))
+	if (mlx5e_is_uplink_rep(priv)) {
 		features = mlx5e_fix_uplink_rep_features(netdev, features);
+		features |= NETIF_F_NETNS_LOCAL;
+	} else {
+		features &= ~NETIF_F_NETNS_LOCAL;
+	}
 
 	mutex_unlock(&priv->state_lock);
 
@@ -301,8 +301,7 @@ int mlx5_esw_acl_ingress_vport_bond_update(struct mlx5_eswitch *esw, u16 vport_n
 
 	if (WARN_ON_ONCE(IS_ERR(vport))) {
 		esw_warn(esw->dev, "vport(%d) invalid!\n", vport_num);
-		err = PTR_ERR(vport);
-		goto out;
+		return PTR_ERR(vport);
 	}
 
 	esw_acl_ingress_ofld_rules_destroy(esw, vport);
@@ -918,6 +918,7 @@ void mlx5_esw_vport_disable(struct mlx5_eswitch *esw, u16 vport_num)
 	 */
 	esw_vport_change_handle_locked(vport);
 	vport->enabled_events = 0;
+	esw_apply_vport_rx_mode(esw, vport, false, false);
 	esw_vport_cleanup(esw, vport);
 	esw->enabled_vports--;
@@ -292,7 +292,7 @@ static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev)
 	 */
 
 	laddr = dma_map_single(lp->device, skb->data, length, DMA_TO_DEVICE);
-	if (!laddr) {
+	if (dma_mapping_error(lp->device, laddr)) {
 		pr_err_ratelimited("%s: failed to map tx DMA buffer.\n", dev->name);
 		dev_kfree_skb_any(skb);
 		return NETDEV_TX_OK;

@@ -509,7 +509,7 @@ static bool sonic_alloc_rb(struct net_device *dev, struct sonic_local *lp,
 
 	*new_addr = dma_map_single(lp->device, skb_put(*new_skb, SONIC_RBSIZE),
 				   SONIC_RBSIZE, DMA_FROM_DEVICE);
-	if (!*new_addr) {
+	if (dma_mapping_error(lp->device, *new_addr)) {
 		dev_kfree_skb(*new_skb);
 		*new_skb = NULL;
 		return false;
@@ -4378,6 +4378,9 @@ qed_iov_configure_min_tx_rate(struct qed_dev *cdev, int vfid, u32 rate)
 	}
 
 	vf = qed_iov_get_vf_info(QED_LEADING_HWFN(cdev), (u16)vfid, true);
+	if (!vf)
+		return -EINVAL;
+
 	vport_id = vf->vport_id;
 
 	return qed_configure_vport_wfq(cdev, vport_id, rate);

@@ -5124,7 +5127,7 @@ static void qed_iov_handle_trust_change(struct qed_hwfn *hwfn)
 
 		/* Validate that the VF has a configured vport */
 		vf = qed_iov_get_vf_info(hwfn, i, true);
-		if (!vf->vport_instance)
+		if (!vf || !vf->vport_instance)
 			continue;
 
 		memset(&params, 0, sizeof(params));
@@ -728,9 +728,15 @@ static int emac_remove(struct platform_device *pdev)
 	struct net_device *netdev = dev_get_drvdata(&pdev->dev);
 	struct emac_adapter *adpt = netdev_priv(netdev);
 
+	netif_carrier_off(netdev);
+	netif_tx_disable(netdev);
+
 	unregister_netdev(netdev);
 	netif_napi_del(&adpt->rx_q.napi);
 
+	free_irq(adpt->irq.irq, &adpt->irq);
+	cancel_work_sync(&adpt->work_thread);
+
 	emac_clks_teardown(adpt);
 
 	put_device(&adpt->phydev->mdio.dev);
@@ -317,15 +317,17 @@ static int gelic_card_init_chain(struct gelic_card *card,
 
 	/* set up the hardware pointers in each descriptor */
 	for (i = 0; i < no; i++, descr++) {
-		gelic_descr_set_status(descr, GELIC_DESCR_DMA_NOT_IN_USE);
-		descr->bus_addr =
-			dma_map_single(ctodev(card), descr,
-				       GELIC_DESCR_SIZE,
-				       DMA_BIDIRECTIONAL);
+		dma_addr_t cpu_addr;
 
-		if (!descr->bus_addr)
+		gelic_descr_set_status(descr, GELIC_DESCR_DMA_NOT_IN_USE);
+
+		cpu_addr = dma_map_single(ctodev(card), descr,
+					  GELIC_DESCR_SIZE, DMA_BIDIRECTIONAL);
+
+		if (dma_mapping_error(ctodev(card), cpu_addr))
 			goto iommu_error;
 
+		descr->bus_addr = cpu_to_be32(cpu_addr);
 		descr->next = descr + 1;
 		descr->prev = descr - 1;
 	}

@@ -365,26 +367,28 @@ iommu_error:
  *
  * allocates a new rx skb, iommu-maps it and attaches it to the descriptor.
  * Activate the descriptor state-wise
+ *
+ * Gelic RX sk_buffs must be aligned to GELIC_NET_RXBUF_ALIGN and the length
+ * must be a multiple of GELIC_NET_RXBUF_ALIGN.
  */
 static int gelic_descr_prepare_rx(struct gelic_card *card,
 				  struct gelic_descr *descr)
 {
+	static const unsigned int rx_skb_size =
+		ALIGN(GELIC_NET_MAX_FRAME, GELIC_NET_RXBUF_ALIGN) +
+		GELIC_NET_RXBUF_ALIGN - 1;
+	dma_addr_t cpu_addr;
 	int offset;
-	unsigned int bufsize;
 
 	if (gelic_descr_get_status(descr) != GELIC_DESCR_DMA_NOT_IN_USE)
 		dev_info(ctodev(card), "%s: ERROR status\n", __func__);
-	/* we need to round up the buffer size to a multiple of 128 */
-	bufsize = ALIGN(GELIC_NET_MAX_MTU, GELIC_NET_RXBUF_ALIGN);
 
-	/* and we need to have it 128 byte aligned, therefore we allocate a
-	 * bit more */
-	descr->skb = dev_alloc_skb(bufsize + GELIC_NET_RXBUF_ALIGN - 1);
+	descr->skb = netdev_alloc_skb(*card->netdev, rx_skb_size);
 	if (!descr->skb) {
 		descr->buf_addr = 0; /* tell DMAC don't touch memory */
 		return -ENOMEM;
 	}
-	descr->buf_size = cpu_to_be32(bufsize);
+	descr->buf_size = cpu_to_be32(rx_skb_size);
 	descr->dmac_cmd_status = 0;
 	descr->result_size = 0;
 	descr->valid_size = 0;

@@ -395,11 +399,10 @@ static int gelic_descr_prepare_rx(struct gelic_card *card,
 	if (offset)
 		skb_reserve(descr->skb, GELIC_NET_RXBUF_ALIGN - offset);
 	/* io-mmu-map the skb */
-	descr->buf_addr = cpu_to_be32(dma_map_single(ctodev(card),
-						     descr->skb->data,
-						     GELIC_NET_MAX_MTU,
-						     DMA_FROM_DEVICE));
-	if (!descr->buf_addr) {
+	cpu_addr = dma_map_single(ctodev(card), descr->skb->data,
+				  GELIC_NET_MAX_FRAME, DMA_FROM_DEVICE);
+	descr->buf_addr = cpu_to_be32(cpu_addr);
+	if (dma_mapping_error(ctodev(card), cpu_addr)) {
 		dev_kfree_skb_any(descr->skb);
 		descr->skb = NULL;
 		dev_info(ctodev(card),

@@ -779,7 +782,7 @@ static int gelic_descr_prepare_tx(struct gelic_card *card,
 
 	buf = dma_map_single(ctodev(card), skb->data, skb->len, DMA_TO_DEVICE);
 
-	if (!buf) {
+	if (dma_mapping_error(ctodev(card), buf)) {
 		dev_err(ctodev(card),
 			"dma map 2 failed (%p, %i). Dropping packet\n",
 			skb->data, skb->len);

@@ -915,7 +918,7 @@ static void gelic_net_pass_skb_up(struct gelic_descr *descr,
 	data_error = be32_to_cpu(descr->data_error);
 	/* unmap skb buffer */
 	dma_unmap_single(ctodev(card), be32_to_cpu(descr->buf_addr),
-			 GELIC_NET_MAX_MTU,
+			 GELIC_NET_MAX_FRAME,
 			 DMA_FROM_DEVICE);
 
 	skb_put(skb, be32_to_cpu(descr->valid_size)?

@@ -19,8 +19,9 @@
 #define GELIC_NET_RX_DESCRIPTORS        128 /* num of descriptors */
 #define GELIC_NET_TX_DESCRIPTORS        128 /* num of descriptors */
 
-#define GELIC_NET_MAX_MTU               VLAN_ETH_FRAME_LEN
-#define GELIC_NET_MIN_MTU               VLAN_ETH_ZLEN
+#define GELIC_NET_MAX_FRAME             2312
+#define GELIC_NET_MAX_MTU               2294
+#define GELIC_NET_MIN_MTU               64
 #define GELIC_NET_RXBUF_ALIGN           128
 #define GELIC_CARD_RX_CSUM_DEFAULT      1 /* hw chksum */
 #define GELIC_NET_WATCHDOG_TIMEOUT      5*HZ
@@ -503,6 +503,11 @@ static void
 xirc2ps_detach(struct pcmcia_device *link)
 {
     struct net_device *dev = link->priv;
+    struct local_info *local = netdev_priv(dev);
+
+    netif_carrier_off(dev);
+    netif_tx_disable(dev);
+    cancel_work_sync(&local->tx_timeout_task);
 
     dev_dbg(&link->dev, "detach\n");
 
@@ -1956,6 +1956,8 @@ static int ca8210_skb_tx(
 	 * packet
 	 */
 	mac_len = ieee802154_hdr_peek_addrs(skb, &header);
+	if (mac_len < 0)
+		return mac_len;
 
 	secspec.security_level = header.sec.level;
 	secspec.key_id_mode = header.sec.key_id_mode;
@@ -18,16 +18,18 @@ MODULE_AUTHOR("Calvin Johnson <calvin.johnson@oss.nxp.com>");
 MODULE_LICENSE("GPL");
 
 /**
- * acpi_mdiobus_register - Register mii_bus and create PHYs from the ACPI ASL.
+ * __acpi_mdiobus_register - Register mii_bus and create PHYs from the ACPI ASL.
  * @mdio: pointer to mii_bus structure
  * @fwnode: pointer to fwnode of MDIO bus. This fwnode is expected to represent
+ * @owner: module owning this @mdio object.
  * an ACPI device object corresponding to the MDIO bus and its children are
  * expected to correspond to the PHY devices on that bus.
  *
  * This function registers the mii_bus structure and registers a phy_device
  * for each child node of @fwnode.
  */
-int acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *fwnode)
+int __acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *fwnode,
+			    struct module *owner)
 {
 	struct fwnode_handle *child;
 	u32 addr;

@@ -35,7 +37,7 @@ int acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *fwnode)
 
 	/* Mask out all PHYs from auto probing. */
 	mdio->phy_mask = GENMASK(31, 0);
-	ret = mdiobus_register(mdio);
+	ret = __mdiobus_register(mdio, owner);
 	if (ret)
 		return ret;
 
@@ -55,4 +57,4 @@ int acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *fwnode)
 	}
 	return 0;
 }
-EXPORT_SYMBOL(acpi_mdiobus_register);
+EXPORT_SYMBOL(__acpi_mdiobus_register);
@@ -104,6 +104,7 @@ static int thunder_mdiobus_pci_probe(struct pci_dev *pdev,
 		if (i >= ARRAY_SIZE(nexus->buses))
 			break;
 	}
+	fwnode_handle_put(fwn);
 	return 0;
 
 err_release_regions:
@@ -139,21 +139,23 @@ bool of_mdiobus_child_is_phy(struct device_node *child)
 EXPORT_SYMBOL(of_mdiobus_child_is_phy);
 
 /**
- * of_mdiobus_register - Register mii_bus and create PHYs from the device tree
+ * __of_mdiobus_register - Register mii_bus and create PHYs from the device tree
  * @mdio: pointer to mii_bus structure
  * @np: pointer to device_node of MDIO bus.
+ * @owner: module owning the @mdio object.
  *
 * This function registers the mii_bus structure and registers a phy_device
 * for each child node of @np.
  */
-int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
+int __of_mdiobus_register(struct mii_bus *mdio, struct device_node *np,
+			  struct module *owner)
 {
 	struct device_node *child;
 	bool scanphys = false;
 	int addr, rc;
 
 	if (!np)
-		return mdiobus_register(mdio);
+		return __mdiobus_register(mdio, owner);
 
 	/* Do not continue if the node is disabled */
 	if (!of_device_is_available(np))

@@ -172,7 +174,7 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
 	of_property_read_u32(np, "reset-post-delay-us", &mdio->reset_post_delay_us);
 
 	/* Register the MDIO bus */
-	rc = mdiobus_register(mdio);
+	rc = __mdiobus_register(mdio, owner);
 	if (rc)
 		return rc;
 
@@ -236,7 +238,7 @@ unregister:
 	mdiobus_unregister(mdio);
 	return rc;
 }
-EXPORT_SYMBOL(of_mdiobus_register);
+EXPORT_SYMBOL(__of_mdiobus_register);
 
 /**
 * of_mdio_find_device - Given a device tree node, find the mdio_device
@@ -98,13 +98,14 @@ EXPORT_SYMBOL(__devm_mdiobus_register);
 
 #if IS_ENABLED(CONFIG_OF_MDIO)
 /**
- * devm_of_mdiobus_register - Resource managed variant of of_mdiobus_register()
+ * __devm_of_mdiobus_register - Resource managed variant of of_mdiobus_register()
  * @dev:	Device to register mii_bus for
  * @mdio:	MII bus structure to register
  * @np:	Device node to parse
+ * @owner:	Owning module
  */
-int devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
-			     struct device_node *np)
+int __devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
+			       struct device_node *np, struct module *owner)
 {
 	struct mdiobus_devres *dr;
 	int ret;

@@ -117,7 +118,7 @@ int devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
 	if (!dr)
 		return -ENOMEM;
 
-	ret = of_mdiobus_register(mdio, np);
+	ret = __of_mdiobus_register(mdio, np, owner);
 	if (ret) {
 		devres_free(dr);
 		return ret;

@@ -127,7 +128,7 @@ int devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
 	devres_add(dev, dr);
 	return 0;
 }
-EXPORT_SYMBOL(devm_of_mdiobus_register);
+EXPORT_SYMBOL(__devm_of_mdiobus_register);
 #endif /* CONFIG_OF_MDIO */
 
 MODULE_LICENSE("GPL");
@@ -56,6 +56,18 @@ static const char *phy_state_to_str(enum phy_state st)
	return NULL;
}

static void phy_process_state_change(struct phy_device *phydev,
				     enum phy_state old_state)
{
	if (old_state != phydev->state) {
		phydev_dbg(phydev, "PHY state change %s -> %s\n",
			   phy_state_to_str(old_state),
			   phy_state_to_str(phydev->state));
		if (phydev->drv && phydev->drv->link_change_notify)
			phydev->drv->link_change_notify(phydev);
	}
}

static void phy_link_up(struct phy_device *phydev)
{
	phydev->phy_link_change(phydev, true);
@@ -1038,6 +1050,7 @@ EXPORT_SYMBOL(phy_free_interrupt);
void phy_stop(struct phy_device *phydev)
{
	struct net_device *dev = phydev->attached_dev;
	enum phy_state old_state;

	if (!phy_is_started(phydev) && phydev->state != PHY_DOWN) {
		WARN(1, "called from state %s\n",
@@ -1046,6 +1059,7 @@ void phy_stop(struct phy_device *phydev)
	}

	mutex_lock(&phydev->lock);
	old_state = phydev->state;

	if (phydev->state == PHY_CABLETEST) {
		phy_abort_cable_test(phydev);
@@ -1056,6 +1070,7 @@ void phy_stop(struct phy_device *phydev)
		sfp_upstream_stop(phydev->sfp_bus);

	phydev->state = PHY_HALTED;
	phy_process_state_change(phydev, old_state);

	mutex_unlock(&phydev->lock);

@@ -1173,13 +1188,7 @@ void phy_state_machine(struct work_struct *work)
	if (err < 0)
		phy_error(phydev);

	if (old_state != phydev->state) {
		phydev_dbg(phydev, "PHY state change %s -> %s\n",
			   phy_state_to_str(old_state),
			   phy_state_to_str(phydev->state));
		if (phydev->drv && phydev->drv->link_change_notify)
			phydev->drv->link_change_notify(phydev);
	}
	phy_process_state_change(phydev, old_state);

	/* Only re-schedule a PHY state machine change if we are polling the
	 * PHY, if PHY_MAC_INTERRUPT is set, then we will be moving
@@ -664,6 +664,11 @@ static const struct usb_device_id mbim_devs[] = {
	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
	},

	/* Telit FE990 */
	{ USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1081, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
	  .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
	},

	/* default entry */
	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
	  .driver_info = (unsigned long)&cdc_mbim_info_zlp,

@@ -1358,6 +1358,7 @@ static const struct usb_device_id products[] = {
	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */
	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)}, /* Telit LN920 */
	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)}, /* Telit FN990 */
	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1080, 2)}, /* Telit FE990 */
	{QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */
	{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */
	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */

@@ -1808,6 +1808,12 @@ static int smsc95xx_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
		size = (u16)((header & RX_STS_FL_) >> 16);
		align_count = (4 - ((size + NET_IP_ALIGN) % 4)) % 4;

		if (unlikely(size > skb->len)) {
			netif_dbg(dev, rx_err, dev->net,
				  "size err header=0x%08x\n", header);
			return 0;
		}

		if (unlikely(header & RX_STS_ES_)) {
			netif_dbg(dev, rx_err, dev->net,
				  "Error header=0x%08x\n", header);
@@ -284,7 +284,7 @@ static long cros_ec_chardev_ioctl_xcmd(struct cros_ec_dev *ec, void __user *arg)
	    u_cmd.insize > EC_MAX_MSG_BYTES)
		return -EINVAL;

	s_cmd = kmalloc(sizeof(*s_cmd) + max(u_cmd.outsize, u_cmd.insize),
	s_cmd = kzalloc(sizeof(*s_cmd) + max(u_cmd.outsize, u_cmd.insize),
			GFP_KERNEL);
	if (!s_cmd)
		return -ENOMEM;
@@ -446,11 +446,9 @@ static ssize_t bq24190_sysfs_show(struct device *dev,
	if (!info)
		return -EINVAL;

	ret = pm_runtime_get_sync(bdi->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(bdi->dev);
	ret = pm_runtime_resume_and_get(bdi->dev);
	if (ret < 0)
		return ret;
	}

	ret = bq24190_read_mask(bdi, info->reg, info->mask, info->shift, &v);
	if (ret)
@@ -481,11 +479,9 @@ static ssize_t bq24190_sysfs_store(struct device *dev,
	if (ret < 0)
		return ret;

	ret = pm_runtime_get_sync(bdi->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(bdi->dev);
	ret = pm_runtime_resume_and_get(bdi->dev);
	if (ret < 0)
		return ret;
	}

	ret = bq24190_write_mask(bdi, info->reg, info->mask, info->shift, v);
	if (ret)
@@ -504,10 +500,9 @@ static int bq24190_set_charge_mode(struct regulator_dev *dev, u8 val)
	struct bq24190_dev_info *bdi = rdev_get_drvdata(dev);
	int ret;

	ret = pm_runtime_get_sync(bdi->dev);
	ret = pm_runtime_resume_and_get(bdi->dev);
	if (ret < 0) {
		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", ret);
		pm_runtime_put_noidle(bdi->dev);
		return ret;
	}

@@ -537,10 +532,9 @@ static int bq24190_vbus_is_enabled(struct regulator_dev *dev)
	int ret;
	u8 val;

	ret = pm_runtime_get_sync(bdi->dev);
	ret = pm_runtime_resume_and_get(bdi->dev);
	if (ret < 0) {
		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", ret);
		pm_runtime_put_noidle(bdi->dev);
		return ret;
	}

@@ -1081,11 +1075,9 @@ static int bq24190_charger_get_property(struct power_supply *psy,

	dev_dbg(bdi->dev, "prop: %d\n", psp);

	ret = pm_runtime_get_sync(bdi->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(bdi->dev);
	ret = pm_runtime_resume_and_get(bdi->dev);
	if (ret < 0)
		return ret;
	}

	switch (psp) {
	case POWER_SUPPLY_PROP_CHARGE_TYPE:
@@ -1155,11 +1147,9 @@ static int bq24190_charger_set_property(struct power_supply *psy,

	dev_dbg(bdi->dev, "prop: %d\n", psp);

	ret = pm_runtime_get_sync(bdi->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(bdi->dev);
	ret = pm_runtime_resume_and_get(bdi->dev);
	if (ret < 0)
		return ret;
	}

	switch (psp) {
	case POWER_SUPPLY_PROP_ONLINE:
@@ -1418,11 +1408,9 @@ static int bq24190_battery_get_property(struct power_supply *psy,
	dev_warn(bdi->dev, "warning: /sys/class/power_supply/bq24190-battery is deprecated\n");
	dev_dbg(bdi->dev, "prop: %d\n", psp);

	ret = pm_runtime_get_sync(bdi->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(bdi->dev);
	ret = pm_runtime_resume_and_get(bdi->dev);
	if (ret < 0)
		return ret;
	}

	switch (psp) {
	case POWER_SUPPLY_PROP_STATUS:
@@ -1466,11 +1454,9 @@ static int bq24190_battery_set_property(struct power_supply *psy,
	dev_warn(bdi->dev, "warning: /sys/class/power_supply/bq24190-battery is deprecated\n");
	dev_dbg(bdi->dev, "prop: %d\n", psp);

	ret = pm_runtime_get_sync(bdi->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(bdi->dev);
	ret = pm_runtime_resume_and_get(bdi->dev);
	if (ret < 0)
		return ret;
	}

	switch (psp) {
	case POWER_SUPPLY_PROP_ONLINE:
@@ -1624,10 +1610,9 @@ static irqreturn_t bq24190_irq_handler_thread(int irq, void *data)
	int error;

	bdi->irq_event = true;
	error = pm_runtime_get_sync(bdi->dev);
	error = pm_runtime_resume_and_get(bdi->dev);
	if (error < 0) {
		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
		pm_runtime_put_noidle(bdi->dev);
		return IRQ_NONE;
	}
	bq24190_check_status(bdi);
@@ -1847,11 +1832,10 @@ static int bq24190_remove(struct i2c_client *client)
	struct bq24190_dev_info *bdi = i2c_get_clientdata(client);
	int error;

	error = pm_runtime_get_sync(bdi->dev);
	if (error < 0) {
	cancel_delayed_work_sync(&bdi->input_current_limit_work);
	error = pm_runtime_resume_and_get(bdi->dev);
	if (error < 0)
		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
		pm_runtime_put_noidle(bdi->dev);
	}

	bq24190_register_reset(bdi);
	if (bdi->battery)
@@ -1900,11 +1884,9 @@ static __maybe_unused int bq24190_pm_suspend(struct device *dev)
	struct bq24190_dev_info *bdi = i2c_get_clientdata(client);
	int error;

	error = pm_runtime_get_sync(bdi->dev);
	if (error < 0) {
	error = pm_runtime_resume_and_get(bdi->dev);
	if (error < 0)
		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
		pm_runtime_put_noidle(bdi->dev);
	}

	bq24190_register_reset(bdi);

@@ -1925,11 +1907,9 @@ static __maybe_unused int bq24190_pm_resume(struct device *dev)
	bdi->f_reg = 0;
	bdi->ss_reg = BQ24190_REG_SS_VBUS_STAT_MASK; /* impossible state */

	error = pm_runtime_get_sync(bdi->dev);
	if (error < 0) {
	error = pm_runtime_resume_and_get(bdi->dev);
	if (error < 0)
		dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
		pm_runtime_put_noidle(bdi->dev);
	}

	bq24190_register_reset(bdi);
	bq24190_set_config(bdi);
@@ -662,6 +662,7 @@ static int da9150_charger_remove(struct platform_device *pdev)

	if (!IS_ERR_OR_NULL(charger->usb_phy))
		usb_unregister_notifier(charger->usb_phy, &charger->otg_nb);
	cancel_work_sync(&charger->otg_work);

	power_supply_unregister(charger->battery);
	power_supply_unregister(charger->usb);
@@ -1117,10 +1117,12 @@ static int alua_activate(struct scsi_device *sdev,
	rcu_read_unlock();
	mutex_unlock(&h->init_mutex);

	if (alua_rtpg_queue(pg, sdev, qdata, true))
	if (alua_rtpg_queue(pg, sdev, qdata, true)) {
		fn = NULL;
	else
	} else {
		kfree(qdata);
		err = SCSI_DH_DEV_OFFLINED;
	}
	kref_put(&pg->kref, release_port_group);
out:
	if (fn)

@@ -2424,8 +2424,7 @@ static int interrupt_preinit_v3_hw(struct hisi_hba *hisi_hba)
	hisi_hba->cq_nvecs = vectors - BASE_VECTORS_V3_HW;
	shost->nr_hw_queues = hisi_hba->cq_nvecs;

	devm_add_action(&pdev->dev, hisi_sas_v3_free_vectors, pdev);
	return 0;
	return devm_add_action(&pdev->dev, hisi_sas_v3_free_vectors, pdev);
}

static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
@@ -7056,6 +7056,8 @@ lpfc_sli4_cgn_params_read(struct lpfc_hba *phba)
	/* Find out if the FW has a new set of congestion parameters. */
	len = sizeof(struct lpfc_cgn_param);
	pdata = kzalloc(len, GFP_KERNEL);
	if (!pdata)
		return -ENOMEM;
	ret = lpfc_read_object(phba, (char *)LPFC_PORT_CFG_NAME,
			       pdata, len);

@@ -22166,20 +22166,20 @@ lpfc_get_io_buf_from_private_pool(struct lpfc_hba *phba,
static struct lpfc_io_buf *
lpfc_get_io_buf_from_expedite_pool(struct lpfc_hba *phba)
{
	struct lpfc_io_buf *lpfc_ncmd;
	struct lpfc_io_buf *lpfc_ncmd = NULL, *iter;
	struct lpfc_io_buf *lpfc_ncmd_next;
	unsigned long iflag;
	struct lpfc_epd_pool *epd_pool;

	epd_pool = &phba->epd_pool;
	lpfc_ncmd = NULL;

	spin_lock_irqsave(&epd_pool->lock, iflag);
	if (epd_pool->count > 0) {
		list_for_each_entry_safe(lpfc_ncmd, lpfc_ncmd_next,
		list_for_each_entry_safe(iter, lpfc_ncmd_next,
					 &epd_pool->list, list) {
			list_del(&lpfc_ncmd->list);
			list_del(&iter->list);
			epd_pool->count--;
			lpfc_ncmd = iter;
			break;
		}
	}
@@ -22376,10 +22376,6 @@ lpfc_read_object(struct lpfc_hba *phba, char *rdobject, uint32_t *datap,
	struct lpfc_dmabuf *pcmd;
	u32 rd_object_name[LPFC_MBX_OBJECT_NAME_LEN_DW] = {0};

	/* sanity check on queue memory */
	if (!datap)
		return -ENODEV;

	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
	if (!mbox)
		return -ENOMEM;
@@ -1897,6 +1897,8 @@ qla2x00_get_sp_from_handle(scsi_qla_host_t *vha, const char *func,
	}

	req->outstanding_cmds[index] = NULL;

	qla_put_fw_resources(sp->qpair, &sp->iores);
	return sp;
}

@@ -3099,7 +3101,6 @@ qla25xx_process_bidir_status_iocb(scsi_qla_host_t *vha, void *pkt,
	}
	bsg_reply->reply_payload_rcv_len = 0;

	qla_put_fw_resources(sp->qpair, &sp->iores);
done:
	/* Return the vendor specific reply to API */
	bsg_reply->reply_data.vendor_reply.vendor_rsp[0] = rval;

@@ -1845,6 +1845,17 @@ __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res)
	for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
		sp = req->outstanding_cmds[cnt];
		if (sp) {
			/*
			 * perform lockless completion during driver unload
			 */
			if (qla2x00_chip_is_down(vha)) {
				req->outstanding_cmds[cnt] = NULL;
				spin_unlock_irqrestore(qp->qp_lock_ptr, flags);
				sp->done(sp, res);
				spin_lock_irqsave(qp->qp_lock_ptr, flags);
				continue;
			}

			switch (sp->cmd_type) {
			case TYPE_SRB:
				qla2x00_abort_srb(qp, sp, res, &flags);
@@ -233,6 +233,7 @@ static struct {
	{"SGI", "RAID5", "*", BLIST_SPARSELUN},
	{"SGI", "TP9100", "*", BLIST_REPORTLUN2},
	{"SGI", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
	{"SKhynix", "H28U74301AMR", NULL, BLIST_SKIP_VPD_PAGES},
	{"IBM", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
	{"SUN", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
	{"DELL", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},

@@ -1050,6 +1050,22 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
			goto do_work;
		}

		/*
		 * Check for "Operating parameters have changed"
		 * due to Hyper-V changing the VHD/VHDX BlockSize
		 * when adding/removing a differencing disk. This
		 * causes discard_granularity to change, so do a
		 * rescan to pick up the new granularity. We don't
		 * want scsi_report_sense() to output a message
		 * that a sysadmin wouldn't know what to do with.
		 */
		if ((asc == 0x3f) && (ascq != 0x03) &&
		    (ascq != 0x0e)) {
			process_err_fn = storvsc_device_scan;
			set_host_byte(scmnd, DID_REQUEUE);
			goto do_work;
		}

		/*
		 * Otherwise, let upper layer deal with the
		 * error when sense message is present

@@ -9986,5 +9986,6 @@ module_exit(ufshcd_core_exit);
MODULE_AUTHOR("Santosh Yaragnavi <santosh.sy@samsung.com>");
MODULE_AUTHOR("Vinayak Holikatti <h.vinayak@samsung.com>");
MODULE_DESCRIPTION("Generic UFS host controller driver Core");
MODULE_SOFTDEP("pre: governor_simpleondemand");
MODULE_LICENSE("GPL");
MODULE_VERSION(UFSHCD_DRIVER_VERSION);
@@ -1262,18 +1262,20 @@ static struct iscsi_param *iscsi_check_key(
		return param;

	if (!(param->phase & phase)) {
		pr_err("Key \"%s\" may not be negotiated during ",
		       param->name);
		char *phase_name;

		switch (phase) {
		case PHASE_SECURITY:
			pr_debug("Security phase.\n");
			phase_name = "Security";
			break;
		case PHASE_OPERATIONAL:
			pr_debug("Operational phase.\n");
			phase_name = "Operational";
			break;
		default:
			pr_debug("Unknown phase.\n");
			phase_name = "Unknown";
		}
		pr_err("Key \"%s\" may not be negotiated during %s phase.\n",
		       param->name, phase_name);
		return NULL;
	}
@@ -267,35 +267,34 @@ int amdtee_open_session(struct tee_context *ctx,
		goto out;
	}

	/* Open session with loaded TA */
	handle_open_session(arg, &session_info, param);
	if (arg->ret != TEEC_SUCCESS) {
		pr_err("open_session failed %d\n", arg->ret);
		handle_unload_ta(ta_handle);
		kref_put(&sess->refcount, destroy_session);
		goto out;
	}

	/* Find an empty session index for the given TA */
	spin_lock(&sess->lock);
	i = find_first_zero_bit(sess->sess_mask, TEE_NUM_SESSIONS);
	if (i < TEE_NUM_SESSIONS)
	if (i < TEE_NUM_SESSIONS) {
		sess->session_info[i] = session_info;
		set_session_id(ta_handle, i, &arg->session);
		set_bit(i, sess->sess_mask);
	}
	spin_unlock(&sess->lock);

	if (i >= TEE_NUM_SESSIONS) {
		pr_err("reached maximum session count %d\n", TEE_NUM_SESSIONS);
		handle_close_session(ta_handle, session_info);
		handle_unload_ta(ta_handle);
		kref_put(&sess->refcount, destroy_session);
		rc = -ENOMEM;
		goto out;
	}

	/* Open session with loaded TA */
	handle_open_session(arg, &session_info, param);
	if (arg->ret != TEEC_SUCCESS) {
		pr_err("open_session failed %d\n", arg->ret);
		spin_lock(&sess->lock);
		clear_bit(i, sess->sess_mask);
		spin_unlock(&sess->lock);
		handle_unload_ta(ta_handle);
		kref_put(&sess->refcount, destroy_session);
		goto out;
	}

	sess->session_info[i] = session_info;
	set_session_id(ta_handle, i, &arg->session);
out:
	free_pages((u64)ta, get_order(ta_size));
	return rc;
@@ -43,7 +43,7 @@
#define QUIRK_AUTO_CLEAR_INT BIT(0)
#define QUIRK_E2E BIT(1)

static int ring_interrupt_index(struct tb_ring *ring)
static int ring_interrupt_index(const struct tb_ring *ring)
{
	int bit = ring->hop;
	if (!ring->is_tx)
@@ -60,13 +60,14 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
{
	int reg = REG_RING_INTERRUPT_BASE +
		  ring_interrupt_index(ring) / 32 * 4;
	int bit = ring_interrupt_index(ring) & 31;
	int mask = 1 << bit;
	int interrupt_bit = ring_interrupt_index(ring) & 31;
	int mask = 1 << interrupt_bit;
	u32 old, new;

	if (ring->irq > 0) {
		u32 step, shift, ivr, misc;
		void __iomem *ivr_base;
		int auto_clear_bit;
		int index;

		if (ring->is_tx)
@@ -74,18 +75,25 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
		else
			index = ring->hop + ring->nhi->hop_count;

		if (ring->nhi->quirks & QUIRK_AUTO_CLEAR_INT) {
			/*
			 * Ask the hardware to clear interrupt status
			 * bits automatically since we already know
			 * which interrupt was triggered.
			 */
			misc = ioread32(ring->nhi->iobase + REG_DMA_MISC);
			if (!(misc & REG_DMA_MISC_INT_AUTO_CLEAR)) {
				misc |= REG_DMA_MISC_INT_AUTO_CLEAR;
				iowrite32(misc, ring->nhi->iobase + REG_DMA_MISC);
			}
		}
		/*
		 * Intel routers support a bit that isn't part of
		 * the USB4 spec to ask the hardware to clear
		 * interrupt status bits automatically since
		 * we already know which interrupt was triggered.
		 *
		 * Other routers explicitly disable auto-clear
		 * to prevent conditions that may occur where two
		 * MSIX interrupts are simultaneously active and
		 * reading the register clears both of them.
		 */
		misc = ioread32(ring->nhi->iobase + REG_DMA_MISC);
		if (ring->nhi->quirks & QUIRK_AUTO_CLEAR_INT)
			auto_clear_bit = REG_DMA_MISC_INT_AUTO_CLEAR;
		else
			auto_clear_bit = REG_DMA_MISC_DISABLE_AUTO_CLEAR;
		if (!(misc & auto_clear_bit))
			iowrite32(misc | auto_clear_bit,
				  ring->nhi->iobase + REG_DMA_MISC);

		ivr_base = ring->nhi->iobase + REG_INT_VEC_ALLOC_BASE;
		step = index / REG_INT_VEC_ALLOC_REGS * REG_INT_VEC_ALLOC_BITS;
@@ -105,7 +113,7 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)

	dev_dbg(&ring->nhi->pdev->dev,
		"%s interrupt at register %#x bit %d (%#x -> %#x)\n",
		active ? "enabling" : "disabling", reg, bit, old, new);
		active ? "enabling" : "disabling", reg, interrupt_bit, old, new);

	if (new == old)
		dev_WARN(&ring->nhi->pdev->dev,
@@ -390,14 +398,17 @@ EXPORT_SYMBOL_GPL(tb_ring_poll_complete);

static void ring_clear_msix(const struct tb_ring *ring)
{
	int bit;

	if (ring->nhi->quirks & QUIRK_AUTO_CLEAR_INT)
		return;

	bit = ring_interrupt_index(ring) & 31;
	if (ring->is_tx)
		ioread32(ring->nhi->iobase + REG_RING_NOTIFY_BASE);
		iowrite32(BIT(bit), ring->nhi->iobase + REG_RING_INT_CLEAR);
	else
		ioread32(ring->nhi->iobase + REG_RING_NOTIFY_BASE +
			 4 * (ring->nhi->hop_count / 32));
		iowrite32(BIT(bit), ring->nhi->iobase + REG_RING_INT_CLEAR +
			  4 * (ring->nhi->hop_count / 32));
}

static irqreturn_t ring_msix(int irq, void *data)

@@ -77,12 +77,13 @@ struct ring_desc {

/*
 * three bitfields: tx, rx, rx overflow
 * Every bitfield contains one bit for every hop (REG_HOP_COUNT). Registers are
 * cleared on read. New interrupts are fired only after ALL registers have been
 * Every bitfield contains one bit for every hop (REG_HOP_COUNT).
 * New interrupts are fired only after ALL registers have been
 * read (even those containing only disabled rings).
 */
#define REG_RING_NOTIFY_BASE 0x37800
#define RING_NOTIFY_REG_COUNT(nhi) ((31 + 3 * nhi->hop_count) / 32)
#define REG_RING_INT_CLEAR 0x37808

/*
 * two bitfields: rx, tx
@@ -105,6 +106,7 @@ struct ring_desc {

#define REG_DMA_MISC 0x39864
#define REG_DMA_MISC_INT_AUTO_CLEAR BIT(2)
#define REG_DMA_MISC_DISABLE_AUTO_CLEAR BIT(17)

#define REG_INMAIL_DATA 0x39900
@@ -208,6 +208,22 @@ static ssize_t nvm_authenticate_show(struct device *dev,
	return ret;
}

static void tb_retimer_set_inbound_sbtx(struct tb_port *port)
{
	int i;

	for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++)
		usb4_port_retimer_set_inbound_sbtx(port, i);
}

static void tb_retimer_unset_inbound_sbtx(struct tb_port *port)
{
	int i;

	for (i = TB_MAX_RETIMER_INDEX; i >= 1; i--)
		usb4_port_retimer_unset_inbound_sbtx(port, i);
}

static ssize_t nvm_authenticate_store(struct device *dev,
	struct device_attribute *attr, const char *buf, size_t count)
{
@@ -234,6 +250,7 @@ static ssize_t nvm_authenticate_store(struct device *dev,
	rt->auth_status = 0;

	if (val) {
		tb_retimer_set_inbound_sbtx(rt->port);
		if (val == AUTHENTICATE_ONLY) {
			ret = tb_retimer_nvm_authenticate(rt, true);
		} else {
@@ -253,6 +270,7 @@ static ssize_t nvm_authenticate_store(struct device *dev,
	}

exit_unlock:
	tb_retimer_unset_inbound_sbtx(rt->port);
	mutex_unlock(&rt->tb->lock);
exit_rpm:
	pm_runtime_mark_last_busy(&rt->dev);
@@ -466,8 +484,7 @@ int tb_retimer_scan(struct tb_port *port, bool add)
	 * Enable sideband channel for each retimer. We can do this
	 * regardless whether there is device connected or not.
	 */
	for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++)
		usb4_port_retimer_set_inbound_sbtx(port, i);
	tb_retimer_set_inbound_sbtx(port);

	/*
	 * Before doing anything else, read the authentication status.
@@ -490,6 +507,8 @@ int tb_retimer_scan(struct tb_port *port, bool add)
			break;
	}

	tb_retimer_unset_inbound_sbtx(port);

	if (!last_idx)
		return 0;

@@ -20,6 +20,7 @@ enum usb4_sb_opcode {
	USB4_SB_OPCODE_ROUTER_OFFLINE = 0x4e45534c,		/* "LSEN" */
	USB4_SB_OPCODE_ENUMERATE_RETIMERS = 0x4d554e45,		/* "ENUM" */
	USB4_SB_OPCODE_SET_INBOUND_SBTX = 0x5055534c,		/* "LSUP" */
	USB4_SB_OPCODE_UNSET_INBOUND_SBTX = 0x50555355,		/* "USUP" */
	USB4_SB_OPCODE_QUERY_LAST_RETIMER = 0x5453414c,		/* "LAST" */
	USB4_SB_OPCODE_GET_NVM_SECTOR_SIZE = 0x53534e47,	/* "GNSS" */
	USB4_SB_OPCODE_NVM_SET_OFFSET = 0x53504f42,		/* "BOPS" */

@@ -2750,8 +2750,6 @@ int tb_switch_add(struct tb_switch *sw)
		}
		tb_sw_dbg(sw, "uid: %#llx\n", sw->uid);

		tb_check_quirks(sw);

		ret = tb_switch_set_uuid(sw);
		if (ret) {
			dev_err(&sw->dev, "failed to set UUID\n");
@@ -2770,6 +2768,8 @@ int tb_switch_add(struct tb_switch *sw)
		}
	}

	tb_check_quirks(sw);

	tb_switch_default_link_ports(sw);

	ret = tb_switch_update_link_attributes(sw);

@@ -1080,6 +1080,7 @@ int usb4_port_router_online(struct tb_port *port);
int usb4_port_enumerate_retimers(struct tb_port *port);

int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index);
int usb4_port_retimer_unset_inbound_sbtx(struct tb_port *port, u8 index);
int usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,
			   u8 size);
int usb4_port_retimer_write(struct tb_port *port, u8 index, u8 reg,

@@ -1441,6 +1441,20 @@ int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index)
				    500);
}

/**
 * usb4_port_retimer_unset_inbound_sbtx() - Disable sideband channel transactions
 * @port: USB4 port
 * @index: Retimer index
 *
 * Disables sideband channel transactions on SBTX. The reverse of
 * usb4_port_retimer_set_inbound_sbtx().
 */
int usb4_port_retimer_unset_inbound_sbtx(struct tb_port *port, u8 index)
{
	return usb4_port_retimer_op(port, index,
				    USB4_SB_OPCODE_UNSET_INBOUND_SBTX, 500);
}

/**
 * usb4_port_retimer_read() - Read from retimer sideband registers
 * @port: USB4 port
@@ -1930,18 +1944,30 @@ static int usb4_usb3_port_write_allocated_bandwidth(struct tb_port *port,
						    int downstream_bw)
{
	u32 val, ubw, dbw, scale;
	int ret;
	int ret, max_bw;

	/* Read the used scale, hardware default is 0 */
	ret = tb_port_read(port, &scale, TB_CFG_PORT,
			   port->cap_adap + ADP_USB3_CS_3, 1);
	/* Figure out suitable scale */
	scale = 0;
	max_bw = max(upstream_bw, downstream_bw);
	while (scale < 64) {
		if (mbps_to_usb3_bw(max_bw, scale) < 4096)
			break;
		scale++;
	}

	if (WARN_ON(scale >= 64))
		return -EINVAL;

	ret = tb_port_write(port, &scale, TB_CFG_PORT,
			    port->cap_adap + ADP_USB3_CS_3, 1);
	if (ret)
		return ret;

	scale &= ADP_USB3_CS_3_SCALE_MASK;
	ubw = mbps_to_usb3_bw(upstream_bw, scale);
	dbw = mbps_to_usb3_bw(downstream_bw, scale);

	tb_port_dbg(port, "scaled bandwidth %u/%u, scale %u\n", ubw, dbw, scale);

	ret = tb_port_read(port, &val, TB_CFG_PORT,
			   port->cap_adap + ADP_USB3_CS_2, 1);
	if (ret)
@@ -43,6 +43,7 @@ struct xencons_info {
	int irq;
	int vtermno;
	grant_ref_t gntref;
	spinlock_t ring_lock;
};

static LIST_HEAD(xenconsoles);
@@ -89,12 +90,15 @@ static int __write_console(struct xencons_info *xencons,
	XENCONS_RING_IDX cons, prod;
	struct xencons_interface *intf = xencons->intf;
	int sent = 0;
	unsigned long flags;

	spin_lock_irqsave(&xencons->ring_lock, flags);
	cons = intf->out_cons;
	prod = intf->out_prod;
	mb(); /* update queue values before going on */

	if ((prod - cons) > sizeof(intf->out)) {
		spin_unlock_irqrestore(&xencons->ring_lock, flags);
		pr_err_once("xencons: Illegal ring page indices");
		return -EINVAL;
	}
@@ -104,6 +108,7 @@ static int __write_console(struct xencons_info *xencons,

	wmb(); /* write ring before updating pointer */
	intf->out_prod = prod;
	spin_unlock_irqrestore(&xencons->ring_lock, flags);

	if (sent)
		notify_daemon(xencons);
@@ -146,16 +151,19 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
	int recv = 0;
	struct xencons_info *xencons = vtermno_to_xencons(vtermno);
	unsigned int eoiflag = 0;
	unsigned long flags;

	if (xencons == NULL)
		return -EINVAL;
	intf = xencons->intf;

	spin_lock_irqsave(&xencons->ring_lock, flags);
	cons = intf->in_cons;
	prod = intf->in_prod;
	mb(); /* get pointers before reading ring */

	if ((prod - cons) > sizeof(intf->in)) {
		spin_unlock_irqrestore(&xencons->ring_lock, flags);
		pr_err_once("xencons: Illegal ring page indices");
		return -EINVAL;
	}
@@ -179,10 +187,13 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
		xencons->out_cons = intf->out_cons;
		xencons->out_cons_same = 0;
	}
	if (!recv && xencons->out_cons_same++ > 1) {
		eoiflag = XEN_EOI_FLAG_SPURIOUS;
	}
	spin_unlock_irqrestore(&xencons->ring_lock, flags);

	if (recv) {
		notify_daemon(xencons);
	} else if (xencons->out_cons_same++ > 1) {
		eoiflag = XEN_EOI_FLAG_SPURIOUS;
	}

	xen_irq_lateeoi(xencons->irq, eoiflag);
@@ -239,6 +250,7 @@ static int xen_hvm_console_init(void)
		info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
		if (!info)
			return -ENOMEM;
		spin_lock_init(&info->ring_lock);
	} else if (info->intf != NULL) {
		/* already configured */
		return 0;
@@ -275,6 +287,7 @@ err:

static int xencons_info_pv_init(struct xencons_info *info, int vtermno)
{
	spin_lock_init(&info->ring_lock);
	info->evtchn = xen_start_info->console.domU.evtchn;
	/* GFN == MFN for PV guest */
	info->intf = gfn_to_virt(xen_start_info->console.domU.mfn);
@@ -325,6 +338,7 @@ static int xen_initial_domain_console_init(void)
		info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
		if (!info)
			return -ENOMEM;
		spin_lock_init(&info->ring_lock);
	}

	info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0, false);
@@ -482,6 +496,7 @@ static int xencons_probe(struct xenbus_device *dev,
	info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
	if (!info)
		return -ENOMEM;
	spin_lock_init(&info->ring_lock);
	dev_set_drvdata(&dev->dev, info);
	info->xbdev = dev;
	info->vtermno = xenbus_devid_to_vtermno(devid);
@@ -253,7 +253,9 @@ config SERIAL_8250_ASPEED_VUART
	tristate "Aspeed Virtual UART"
	depends on SERIAL_8250
	depends on OF
	depends on REGMAP && MFD_SYSCON
	depends on MFD_SYSCON
	depends on ARCH_ASPEED || COMPILE_TEST
	select REGMAP
	help
	  If you want to use the virtual UART (VUART) device on Aspeed
	  BMC platforms, enable this option. This enables the 16550A-
Some files were not shown because too many files have changed in this diff.