Merge 4.14.72 into android-4.14-p
Changes in 4.14.72
	be2net: Fix memory leak in be_cmd_get_profile_config()
	net/mlx5: Fix use-after-free in self-healing flow
	net: qca_spi: Fix race condition in spi transfers
	rds: fix two RCU related problems
	net/mlx5: Check for error in mlx5_attach_interface
	net/mlx5: Fix debugfs cleanup in the device init/remove flow
	net/mlx5: E-Switch, Fix memory leak when creating switchdev mode FDB tables
	net/tls: Set count of SG entries if sk_alloc_sg returns -ENOSPC
	erspan: fix error handling for erspan tunnel
	erspan: return PACKET_REJECT when the appropriate tunnel is not found
	tcp: really ignore MSG_ZEROCOPY if no SO_ZEROCOPY
	hv/netvsc: Fix NULL dereference at single queue mode fallback
	usb: dwc3: change stream event enable bit back to 13
	iommu/arm-smmu-v3: sync the OVACKFLG to PRIQ consumer register
	iommu/io-pgtable-arm-v7s: Abort allocation when table address overflows the PTE
	ALSA: msnd: Fix the default sample sizes
	ALSA: usb-audio: Fix multiple definitions in AU0828_DEVICE() macro
	xfrm: fix 'passing zero to ERR_PTR()' warning
	amd-xgbe: use dma_mapping_error to check map errors
	gfs2: Special-case rindex for gfs2_grow
	clk: imx6ul: fix missing of_node_put()
	clk: core: Potentially free connection id
	clk: clk-fixed-factor: Clear OF_POPULATED flag in case of failure
	kbuild: add .DELETE_ON_ERROR special target
	media: tw686x: Fix oops on buffer alloc failure
	dmaengine: pl330: fix irq race with terminate_all
	MIPS: ath79: fix system restart
	media: videobuf2-core: check for q->error in vb2_core_qbuf()
	IB/rxe: Drop QP0 silently
	block: allow max_discard_segments to be stacked
	IB/ipoib: Fix error return code in ipoib_dev_init()
	mtd/maps: fix solutionengine.c printk format warnings
	media: ov5645: Supported external clock is 24MHz
	perf test: Fix subtest number when showing results
	gfs2: Don't reject a supposedly full bitmap if we have blocks reserved
	perf tools: Synthesize GROUP_DESC feature in pipe mode
	fbdev: omapfb: off by one in omapfb_register_client()
	perf tools: Fix struct comm_str removal crash
	video: goldfishfb: fix memory leak on driver remove
	fbdev/via: fix defined but not used warning
	perf powerpc: Fix callchain ip filtering when return address is in a register
	video: fbdev: pxafb: clear allocated memory for video modes
	fbdev: Distinguish between interlaced and progressive modes
	ARM: exynos: Clear global variable on init error path
	perf powerpc: Fix callchain ip filtering
	nvme-rdma: unquiesce queues when deleting the controller
	KVM: arm/arm64: vgic: Fix possible spectre-v1 write in vgic_mmio_write_apr()
	powerpc/powernv: opal_put_chars partial write fix
	staging: bcm2835-camera: fix timeout handling in wait_for_completion_timeout
	staging: bcm2835-camera: handle wait_for_completion_timeout return properly
	ASoC: rt5514: Fix the issue of the delay volume applied
	MIPS: jz4740: Bump zload address
	mac80211: restrict delayed tailroom needed decrement
	Smack: Fix handling of IPv4 traffic received by PF_INET6 sockets
	wan/fsl_ucc_hdlc: use IS_ERR_VALUE() to check return value of qe_muram_alloc
	arm64: fix possible spectre-v1 write in ptrace_hbp_set_event()
	reset: imx7: Fix always writing bits as 0
	efi/arm: preserve early mapping of UEFI memory map longer for BGRT
	nfp: avoid buffer leak when FW communication fails
	xen-netfront: fix queue name setting
	arm64: dts: qcom: db410c: Fix Bluetooth LED trigger
	ARM: dts: qcom: msm8974-hammerhead: increase load on l20 for sdhci
	s390/qeth: fix race in used-buffer accounting
	s390/qeth: reset layer2 attribute on layer switch
	platform/x86: toshiba_acpi: Fix defined but not used build warnings
	KVM: arm/arm64: Fix vgic init race
	drivers/base: stop new probing during shutdown
	i2c: aspeed: Fix initial values of master and slave state
	dmaengine: mv_xor_v2: kill the tasklets upon exit
	crypto: sharah - Unregister correct algorithms for SAHARA 3
	x86/pti: Check the return value of pti_user_pagetable_walk_p4d()
	x86/pti: Check the return value of pti_user_pagetable_walk_pmd()
	x86/mm/pti: Add an overflow check to pti_clone_pmds()
	xen-netfront: fix warn message as irq device name has '/'
	RDMA/cma: Protect cma dev list with lock
	pstore: Fix incorrect persistent ram buffer mapping
	xen/netfront: fix waiting for xenbus state change
	IB/ipoib: Avoid a race condition between start_xmit and cm_rep_handler
	s390/crypto: Fix return code checking in cbc_paes_crypt()
	mmc: omap_hsmmc: fix wakeirq handling on removal
	ipmi: Fix I2C client removal in the SSIF driver
	Tools: hv: Fix a bug in the key delete code
	misc: hmc6352: fix potential Spectre v1
	xhci: Fix use after free for URB cancellation on a reallocated endpoint
	usb: Don't die twice if PCI xhci host is not responding in resume
	mei: ignore not found client in the enumeration
	mei: bus: need to unlink client before freeing
	USB: Add quirk to support DJI CineSSD
	usb: uas: add support for more quirk flags
	usb: Avoid use-after-free by flushing endpoints early in usb_set_interface()
	usb: host: u132-hcd: Fix a sleep-in-atomic-context bug in u132_get_frame()
	USB: add quirk for WORLDE Controller KS49 or Prodipe MIDI 49C USB controller
	usb: gadget: udc: renesas_usb3: fix maxpacket size of ep0
	USB: net2280: Fix erroneous synchronization change
	USB: serial: io_ti: fix array underflow in completion handler
	usb: misc: uss720: Fix two sleep-in-atomic-context bugs
	USB: serial: ti_usb_3410_5052: fix array underflow in completion handler
	USB: yurex: Fix buffer over-read in yurex_write()
	usb: cdc-wdm: Fix a sleep-in-atomic-context bug in service_outstanding_interrupt()
	Revert "cdc-acm: implement put_char() and flush_chars()"
	cifs: prevent integer overflow in nxt_dir_entry()
	CIFS: fix wrapping bugs in num_entries()
	xtensa: ISS: don't allocate memory in platform_setup
	perf/core: Force USER_DS when recording user stack data
	x86/EISA: Don't probe EISA bus for Xen PV guests
	NFSv4.1 fix infinite loop on I/O.
	binfmt_elf: Respect error return from `regset->active'
	net/mlx5: Add missing SET_DRIVER_VERSION command translation
	arm64: dts: uniphier: Add missing cooling device properties for CPUs
	audit: fix use-after-free in audit_add_watch
	mtdchar: fix overflows in adjustment of `count`
	vfs: fix freeze protection in mnt_want_write_file() for overlayfs
	Bluetooth: Use lock_sock_nested in bt_accept_enqueue
	evm: Don't deadlock if a crypto algorithm is unavailable
	KVM: PPC: Book3S HV: Add of_node_put() in success path
	security: check for kstrdup() failure in lsm_append()
	MIPS: loongson64: cs5536: Fix PCI_OHCI_INT_REG reads
	configfs: fix registered group removal
	pinctrl: rza1: Fix selector use for groups and functions
	sched/core: Use smp_mb() in wake_woken_function()
	efi/esrt: Only call efi_mem_reserve() for boot services memory
	ARM: hisi: handle of_iomap and fix missing of_node_put
	ARM: hisi: fix error handling and missing of_node_put
	ARM: hisi: check of_iomap and fix missing of_node_put
	liquidio: fix hang when re-binding VF host drv after running DPDK VF driver
	gpu: ipu-v3: csi: pass back mbus_code_to_bus_cfg error codes
	tty: fix termios input-speed encoding when using BOTHER
	tty: fix termios input-speed encoding
	mmc: sdhci-of-esdhc: set proper dma mask for ls104x chips
	mmc: tegra: prevent HS200 on Tegra 3
	mmc: sdhci: do not try to use 3.3V signaling if not supported
	drm/nouveau: Fix runtime PM leak in drm_open()
	drm/nouveau/debugfs: Wake up GPU before doing any reclocking
	drm/nouveau: tegra: Detach from ARM DMA/IOMMU mapping
	parport: sunbpp: fix error return code
	sched/fair: Fix util_avg of new tasks for asymmetric systems
	coresight: Handle errors in finding input/output ports
	coresight: tpiu: Fix disabling timeouts
	coresight: ETM: Add support for Arm Cortex-A73 and Cortex-A35
	staging: bcm2835-audio: Don't leak workqueue if open fails
	gpio: pxa: Fix potential NULL dereference
	gpiolib: Mark gpio_suffixes array with __maybe_unused
	mfd: 88pm860x-i2c: switch to i2c_lock_bus(..., I2C_LOCK_SEGMENT)
	input: rohm_bu21023: switch to i2c_lock_bus(..., I2C_LOCK_SEGMENT)
	drm/amdkfd: Fix error codes in kfd_get_process
	rtc: bq4802: add error handling for devm_ioremap
	ALSA: pcm: Fix snd_interval_refine first/last with open min/max
	scsi: libfc: fixup 'sleeping function called from invalid context'
	selftest: timers: Tweak raw_skew to SKIP when ADJ_OFFSET/other clock adjustments are in progress
	drm/panel: type promotion bug in s6e8aa0_read_mtp_id()
	blk-mq: only attempt to merge bio if there is rq in sw queue
	blk-mq: avoid to synchronize rcu inside blk_cleanup_queue()
	pinctrl: msm: Fix msm_config_group_get() to be compliant
	pinctrl: qcom: spmi-gpio: Fix pmic_gpio_config_get() to be compliant
	clk: tegra: bpmp: Don't crash when a clock fails to register
	mei: bus: type promotion bug in mei_nfc_if_version()
	earlycon: Initialize port->uartclk based on clock-frequency property
	earlycon: Remove hardcoded port->uartclk initialization in of_setup_earlycon
	ASoC: samsung: i2s: Fix error handling path in i2s_set_sysclk()
	ASoC: samsung: Fix invalid argument when devm_gpiod_get is called
	drm/i915: Apply the GTT write flush for all !llc machines
	net/ipv6: prevent use after free in ip6_route_mpath_notify
	e1000e: Remove Other from EIAC
	Partial revert "e1000e: Avoid receiver overrun interrupt bursts"
	e1000e: Fix queue interrupt re-raising in Other interrupt
	e1000e: Avoid missed interrupts following ICR read
	Revert "e1000e: Separate signaling for link check/link up"
	e1000e: Fix link check race condition
	e1000e: Fix check_for_link return value with autoneg off
	Linux 4.14.72

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
This commit is contained in:

 Makefile | 2 +-
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 14
-SUBLEVEL = 71
+SUBLEVEL = 72
 EXTRAVERSION =
 NAME = Petit Gorille
@@ -189,6 +189,8 @@
 				regulator-max-microvolt = <2950000>;
 
 				regulator-boot-on;
+				regulator-system-load = <200000>;
+				regulator-allow-set-load;
 			};
 
 			l21 {
@@ -209,6 +209,7 @@ static int __init exynos_pmu_irq_init(struct device_node *node,
 					  NULL);
 	if (!domain) {
 		iounmap(pmu_base_addr);
+		pmu_base_addr = NULL;
 		return -ENOMEM;
 	}
 
@@ -148,13 +148,20 @@ static int hi3xxx_hotplug_init(void)
 	struct device_node *node;
 
 	node = of_find_compatible_node(NULL, NULL, "hisilicon,sysctrl");
-	if (node) {
-		ctrl_base = of_iomap(node, 0);
-		id = HI3620_CTRL;
-		return 0;
+	if (!node) {
+		id = ERROR_CTRL;
+		return -ENOENT;
 	}
-	id = ERROR_CTRL;
-	return -ENOENT;
+
+	ctrl_base = of_iomap(node, 0);
+	of_node_put(node);
+	if (!ctrl_base) {
+		id = ERROR_CTRL;
+		return -ENOMEM;
+	}
+
+	id = HI3620_CTRL;
+	return 0;
 }
 
 void hi3xxx_set_cpu(int cpu, bool enable)
@@ -173,11 +180,15 @@ static bool hix5hd2_hotplug_init(void)
 	struct device_node *np;
 
 	np = of_find_compatible_node(NULL, NULL, "hisilicon,cpuctrl");
-	if (np) {
-		ctrl_base = of_iomap(np, 0);
-		return true;
-	}
-	return false;
+	if (!np)
+		return false;
+
+	ctrl_base = of_iomap(np, 0);
+	of_node_put(np);
+	if (!ctrl_base)
+		return false;
+
+	return true;
 }
 
 void hix5hd2_set_cpu(int cpu, bool enable)
@@ -219,10 +230,10 @@ void hip01_set_cpu(int cpu, bool enable)
 
 	if (!ctrl_base) {
 		np = of_find_compatible_node(NULL, NULL, "hisilicon,hip01-sysctrl");
-		if (np)
-			ctrl_base = of_iomap(np, 0);
-		else
-			BUG();
+		BUG_ON(!np);
+		ctrl_base = of_iomap(np, 0);
+		of_node_put(np);
+		BUG_ON(!ctrl_base);
 	}
 
 	if (enable) {
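The three hisi hunks above all restore the same contract: a lookup that returns a counted device-tree node reference must be balanced by `of_node_put()` on every exit path, including error paths. A userspace toy model of that contract (names here are illustrative, not the kernel API):

```c
#include <assert.h>

/* Toy stand-in for a refcounted DT node. */
struct toy_node {
	int refcount;
};

/* Lookup returns a counted reference, like of_find_compatible_node(). */
static struct toy_node *toy_find_node(struct toy_node *n)
{
	n->refcount++;
	return n;
}

static void toy_node_put(struct toy_node *n)
{
	n->refcount--;
}

/* Mirrors the fixed hi3xxx_hotplug_init() flow: drop the reference as
 * soon as the mapping (here just a flag) has been taken from the node,
 * so both the success and the failure path leave the count balanced. */
static int toy_init(struct toy_node *sysctrl, int iomap_ok)
{
	struct toy_node *node = toy_find_node(sysctrl);
	int mapped = iomap_ok;	/* stands in for of_iomap() */

	toy_node_put(node);	/* reference no longer needed either way */
	if (!mapped)
		return -1;
	return 0;
}
```

The pre-fix code returned without the put on the success path (and never took the node at all on some error paths), which is exactly the leak the series closes.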
@@ -187,7 +187,7 @@
 		led@6 {
 			label = "apq8016-sbc:blue:bt";
 			gpios = <&pm8916_mpps 3 GPIO_ACTIVE_HIGH>;
-			linux,default-trigger = "bt";
+			linux,default-trigger = "bluetooth-power";
 			default-state = "off";
 		};
 	};
@@ -55,6 +55,7 @@
 			clocks = <&sys_clk 32>;
 			enable-method = "psci";
 			operating-points-v2 = <&cluster0_opp>;
+			#cooling-cells = <2>;
 		};
 
 		cpu2: cpu@100 {
@@ -73,6 +74,7 @@
 			clocks = <&sys_clk 33>;
 			enable-method = "psci";
 			operating-points-v2 = <&cluster1_opp>;
+			#cooling-cells = <2>;
 		};
 	};
@@ -274,19 +274,22 @@ static int ptrace_hbp_set_event(unsigned int note_type,
 
 	switch (note_type) {
 	case NT_ARM_HW_BREAK:
-		if (idx < ARM_MAX_BRP) {
-			tsk->thread.debug.hbp_break[idx] = bp;
-			err = 0;
-		}
+		if (idx >= ARM_MAX_BRP)
+			goto out;
+		idx = array_index_nospec(idx, ARM_MAX_BRP);
+		tsk->thread.debug.hbp_break[idx] = bp;
+		err = 0;
 		break;
 	case NT_ARM_HW_WATCH:
-		if (idx < ARM_MAX_WRP) {
-			tsk->thread.debug.hbp_watch[idx] = bp;
-			err = 0;
-		}
+		if (idx >= ARM_MAX_WRP)
+			goto out;
+		idx = array_index_nospec(idx, ARM_MAX_WRP);
+		tsk->thread.debug.hbp_watch[idx] = bp;
+		err = 0;
 		break;
 	}
 
+out:
 	return err;
 }
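The mitigation pattern above, an early bounds check followed by `array_index_nospec()`, clamps the index with a branchless mask so a mispredicted bounds check cannot steer a speculative out-of-bounds store (Spectre v1). A userspace sketch of the masking step, modeled on the kernel's generic `array_index_mask_nospec()`; real architectures substitute barrier- or CSEL-based versions, so treat this as an illustration only:

```c
#include <assert.h>
#include <stddef.h>

/* All-ones when idx < size, zero otherwise, computed without a branch.
 * Relies on arithmetic right shift of a negative long, which is
 * implementation-defined in ISO C but is what mainstream compilers
 * (and the kernel itself) assume. */
static size_t index_mask(size_t idx, size_t size)
{
	long x = (long)(idx | (size - 1UL - idx));

	/* idx < size  -> top bit of x clear -> ~x >> 63 == all ones
	 * idx >= size -> size-1-idx wraps, top bit set -> mask == 0 */
	return (size_t)(~x >> (sizeof(long) * 8 - 1));
}

/* Clamp idx to [0, size): in-range values pass through, out-of-range
 * values become 0 even under branch misprediction. */
static size_t index_nospec(size_t idx, size_t size)
{
	return idx & index_mask(idx, size);
}
```

Note the kernel still performs the architectural bounds check first (`goto out` above); the mask only neutralizes the speculative path.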
@@ -40,6 +40,7 @@ static char ath79_sys_type[ATH79_SYS_TYPE_LEN];
 
 static void ath79_restart(char *command)
 {
+	local_irq_disable();
 	ath79_device_reset_set(AR71XX_RESET_FULL_CHIP);
 	for (;;)
 		if (cpu_wait)
@@ -134,6 +134,7 @@ static inline u32 ath79_pll_rr(unsigned reg)
 static inline void ath79_reset_wr(unsigned reg, u32 val)
 {
 	__raw_writel(val, ath79_reset_base + reg);
+	(void) __raw_readl(ath79_reset_base + reg); /* flush */
 }
 
 static inline u32 ath79_reset_rr(unsigned reg)
@@ -1,4 +1,4 @@
 platform-$(CONFIG_MACH_INGENIC) += jz4740/
 cflags-$(CONFIG_MACH_INGENIC) += -I$(srctree)/arch/mips/include/asm/mach-jz4740
 load-$(CONFIG_MACH_INGENIC) += 0xffffffff80010000
-zload-$(CONFIG_MACH_INGENIC) += 0xffffffff80600000
+zload-$(CONFIG_MACH_INGENIC) += 0xffffffff81000000
@@ -138,7 +138,7 @@ u32 pci_ohci_read_reg(int reg)
 		break;
 	case PCI_OHCI_INT_REG:
 		_rdmsr(DIVIL_MSR_REG(PIC_YSEL_LOW), &hi, &lo);
-		if ((lo & 0x00000f00) == CS5536_USB_INTR)
+		if (((lo >> PIC_YSEL_LOW_USB_SHIFT) & 0xf) == CS5536_USB_INTR)
 			conf_data = 1;
 		break;
 	default:
@@ -4356,6 +4356,8 @@ static int kvmppc_book3s_init_hv(void)
 			pr_err("KVM-HV: Cannot determine method for accessing XICS\n");
 			return -ENODEV;
 		}
+		/* presence of intc confirmed - node can be dropped again */
+		of_node_put(np);
 	}
 #endif
@@ -388,7 +388,7 @@ int opal_put_chars(uint32_t vtermno, const char *data, int total_len)
 		/* Closed or other error drop */
 		if (rc != OPAL_SUCCESS && rc != OPAL_BUSY &&
 		    rc != OPAL_BUSY_EVENT) {
-			written = total_len;
+			written += total_len;
 			break;
 		}
 		if (rc == OPAL_SUCCESS) {
@@ -212,7 +212,7 @@ static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
 					     walk->dst.virt.addr, walk->src.virt.addr, n);
 		if (k)
 			ret = blkcipher_walk_done(desc, walk, nbytes - k);
-		if (n < k) {
+		if (k < n) {
 			if (__cbc_paes_set_key(ctx) != 0)
 				return blkcipher_walk_done(desc, walk, -EIO);
 			memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
@@ -7,11 +7,17 @@
 #include <linux/eisa.h>
 #include <linux/io.h>
 
+#include <xen/xen.h>
+
 static __init int eisa_bus_probe(void)
 {
-	void __iomem *p = ioremap(0x0FFFD9, 4);
+	void __iomem *p;
 
-	if (readl(p) == 'E' + ('I'<<8) + ('S'<<16) + ('A'<<24))
+	if (xen_pv_domain() && !xen_initial_domain())
+		return 0;
+
+	p = ioremap(0x0FFFD9, 4);
+	if (p && readl(p) == 'E' + ('I' << 8) + ('S' << 16) + ('A' << 24))
 		EISA_bus = 1;
 	iounmap(p);
 	return 0;
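The signature test in that hunk relies on `readl()` returning the four bytes "EISA" assembled little-endian; the summed character constant is simply that 32-bit value spelled out. A standalone check of the arithmetic (plain C, no kernel headers):

```c
#include <assert.h>
#include <stdint.h>

/* Assemble the constant the same way the kernel check does:
 * 'E' in the low byte up through 'A' in the high byte. */
static uint32_t eisa_sig(void)
{
	return 'E' + ('I' << 8) + ('S' << 16) + ('A' << 24);
}

/* Interpret the byte string "EISA" as a little-endian 32-bit load,
 * which is what readl() yields from memory on a little-endian CPU. */
static uint32_t load_le32(const char *p)
{
	return (uint32_t)(uint8_t)p[0] |
	       ((uint32_t)(uint8_t)p[1] << 8) |
	       ((uint32_t)(uint8_t)p[2] << 16) |
	       ((uint32_t)(uint8_t)p[3] << 24);
}
```

With ASCII values 'E'=0x45, 'I'=0x49, 'S'=0x53, 'A'=0x41, both expressions evaluate to 0x41534945.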
@@ -162,7 +162,7 @@ static __init p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
 
 	if (pgd_none(*pgd)) {
 		unsigned long new_p4d_page = __get_free_page(gfp);
-		if (!new_p4d_page)
+		if (WARN_ON_ONCE(!new_p4d_page))
 			return NULL;
 
 		set_pgd(pgd, __pgd(_KERNPG_TABLE | __pa(new_p4d_page)));
@@ -181,13 +181,17 @@ static __init p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
 static __init pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
 {
 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
-	p4d_t *p4d = pti_user_pagetable_walk_p4d(address);
+	p4d_t *p4d;
 	pud_t *pud;
 
+	p4d = pti_user_pagetable_walk_p4d(address);
+	if (!p4d)
+		return NULL;
+
 	BUILD_BUG_ON(p4d_large(*p4d) != 0);
 	if (p4d_none(*p4d)) {
 		unsigned long new_pud_page = __get_free_page(gfp);
-		if (!new_pud_page)
+		if (WARN_ON_ONCE(!new_pud_page))
 			return NULL;
 
 		set_p4d(p4d, __p4d(_KERNPG_TABLE | __pa(new_pud_page)));
@@ -201,7 +205,7 @@ static __init pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
 	}
 	if (pud_none(*pud)) {
 		unsigned long new_pmd_page = __get_free_page(gfp);
-		if (!new_pmd_page)
+		if (WARN_ON_ONCE(!new_pmd_page))
 			return NULL;
 
 		set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
@@ -223,9 +227,13 @@ static __init pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
 static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address)
 {
 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
-	pmd_t *pmd = pti_user_pagetable_walk_pmd(address);
+	pmd_t *pmd;
 	pte_t *pte;
 
+	pmd = pti_user_pagetable_walk_pmd(address);
+	if (!pmd)
+		return NULL;
+
 	/* We can't do anything sensible if we hit a large mapping. */
 	if (pmd_large(*pmd)) {
 		WARN_ON(1);
@@ -283,6 +291,10 @@ pti_clone_pmds(unsigned long start, unsigned long end, pmdval_t clear)
 		p4d_t *p4d;
 		pud_t *pud;
 
+		/* Overflow check */
+		if (addr < start)
+			break;
+
 		pgd = pgd_offset_k(addr);
 		if (WARN_ON(pgd_none(*pgd)))
 			return;
@@ -319,6 +331,9 @@ static void __init pti_clone_p4d(unsigned long addr)
 	pgd_t *kernel_pgd;
 
 	user_p4d = pti_user_pagetable_walk_p4d(addr);
+	if (!user_p4d)
+		return;
+
 	kernel_pgd = pgd_offset_k(addr);
 	kernel_p4d = p4d_offset(kernel_pgd, addr);
 	*user_p4d = *kernel_p4d;
@@ -78,23 +78,28 @@ static struct notifier_block iss_panic_block = {
 
 void __init platform_setup(char **p_cmdline)
 {
+	static void *argv[COMMAND_LINE_SIZE / sizeof(void *)] __initdata;
+	static char cmdline[COMMAND_LINE_SIZE] __initdata;
 	int argc = simc_argc();
 	int argv_size = simc_argv_size();
 
 	if (argc > 1) {
-		void **argv = alloc_bootmem(argv_size);
-		char *cmdline = alloc_bootmem(argv_size);
-		int i;
+		if (argv_size > sizeof(argv)) {
+			pr_err("%s: command line too long: argv_size = %d\n",
+			       __func__, argv_size);
+		} else {
+			int i;
 
-		cmdline[0] = 0;
-		simc_argv((void *)argv);
+			cmdline[0] = 0;
+			simc_argv((void *)argv);
 
-		for (i = 1; i < argc; ++i) {
-			if (i > 1)
-				strcat(cmdline, " ");
-			strcat(cmdline, argv[i]);
+			for (i = 1; i < argc; ++i) {
+				if (i > 1)
+					strcat(cmdline, " ");
+				strcat(cmdline, argv[i]);
+			}
+			*p_cmdline = cmdline;
 		}
-		*p_cmdline = cmdline;
 	}
 
 	atomic_notifier_chain_register(&panic_notifier_list, &iss_panic_block);
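The inner loop of that hunk is a classic argv-join: concatenate `argv[1..argc-1]` into one buffer, inserting a space before every argument except the first. A standalone version of just that loop (the fixed-size-buffer policy is the caller's responsibility here, as it is in the kernel code):

```c
#include <string.h>

/* Join argv[1..argc-1] into cmdline, space-separated. The separator is
 * skipped before the first copied argument, mirroring the `if (i > 1)`
 * guard in platform_setup() above. cmdline must be large enough. */
static void join_args(char *cmdline, int argc, const char *const argv[])
{
	int i;

	cmdline[0] = 0;
	for (i = 1; i < argc; ++i) {
		if (i > 1)
			strcat(cmdline, " ");
		strcat(cmdline, argv[i]);
	}
}
```

The actual fix in the hunk is not the loop but the allocation: static `__initdata` buffers replace `alloc_bootmem()`, which must not be called that early on this platform, and an explicit size check replaces the implicit assumption that the buffer fits.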
@@ -669,9 +669,13 @@ void blk_cleanup_queue(struct request_queue *q)
 	 * make sure all in-progress dispatch are completed because
 	 * blk_freeze_queue() can only complete all requests, and
 	 * dispatch may still be in-progress since we dispatch requests
-	 * from more than one contexts
+	 * from more than one contexts.
+	 *
+	 * No need to quiesce queue if it isn't initialized yet since
+	 * blk_freeze_queue() should be enough for cases of passthrough
+	 * request.
 	 */
-	if (q->mq_ops)
+	if (q->mq_ops && blk_queue_init_done(q))
 		blk_mq_quiesce_queue(q);
 
 	/* for synchronous bio-based driver finish in-flight integrity i/o */
@@ -236,7 +236,8 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
 		return e->type->ops.mq.bio_merge(hctx, bio);
 	}
 
-	if (hctx->flags & BLK_MQ_F_SHOULD_MERGE) {
+	if ((hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
+	    !list_empty_careful(&ctx->rq_list)) {
 		/* default per sw-queue merge */
 		spin_lock(&ctx->lock);
 		ret = blk_mq_attempt_merge(q, ctx, bio);
@@ -128,7 +128,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 
 	/* Inherit limits from component devices */
 	lim->max_segments = USHRT_MAX;
-	lim->max_discard_segments = 1;
+	lim->max_discard_segments = USHRT_MAX;
 	lim->max_hw_sectors = UINT_MAX;
 	lim->max_segment_size = UINT_MAX;
 	lim->max_sectors = UINT_MAX;
@@ -216,7 +216,7 @@ struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, u32 mask)
 	mask &= ~(CRYPTO_ALG_LARVAL | CRYPTO_ALG_DEAD);
 
 	alg = crypto_alg_lookup(name, type, mask);
-	if (!alg) {
+	if (!alg && !(mask & CRYPTO_NOLOAD)) {
 		request_module("crypto-%s", name);
 
 		if (!((type ^ CRYPTO_ALG_NEED_FALLBACK) & mask &
@@ -2783,6 +2783,9 @@ void device_shutdown(void)
 {
 	struct device *dev, *parent;
 
+	wait_for_device_probe();
+	device_block_probing();
+
 	spin_lock(&devices_kset->list_lock);
 	/*
 	 * Walk the devices list backward, shutting down each in turn.
@@ -184,6 +184,8 @@ struct ssif_addr_info {
 	struct device *dev;
 	struct i2c_client *client;
 
+	struct i2c_client *added_client;
+
 	struct mutex clients_mutex;
 	struct list_head clients;
@@ -1710,15 +1712,7 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
 
  out:
 	if (rv) {
-		/*
-		 * Note that if addr_info->client is assigned, we
-		 * leave it.  The i2c client hangs around even if we
-		 * return a failure here, and the failure here is not
-		 * propagated back to the i2c code.  This seems to be
-		 * design intent, strange as it may be.  But if we
-		 * don't leave it, ssif_platform_remove will not remove
-		 * the client like it should.
-		 */
 		addr_info->client = NULL;
 		dev_err(&client->dev, "Unable to start IPMI SSIF: %d\n", rv);
 		kfree(ssif_info);
 	}
@@ -1737,7 +1731,8 @@ static int ssif_adapter_handler(struct device *adev, void *opaque)
 	if (adev->type != &i2c_adapter_type)
 		return 0;
 
-	i2c_new_device(to_i2c_adapter(adev), &addr_info->binfo);
+	addr_info->added_client = i2c_new_device(to_i2c_adapter(adev),
+						 &addr_info->binfo);
 
 	if (!addr_info->adapter_name)
 		return 1; /* Only try the first I2C adapter by default. */
@@ -2018,8 +2013,8 @@ static int ssif_platform_remove(struct platform_device *dev)
 		return 0;
 
 	mutex_lock(&ssif_infos_mutex);
-	if (addr_info->client)
-		i2c_unregister_device(addr_info->client);
+	if (addr_info->added_client)
+		i2c_unregister_device(addr_info->added_client);
 
 	list_del(&addr_info->link);
 	kfree(addr_info);
@@ -177,8 +177,15 @@ static struct clk *_of_fixed_factor_clk_setup(struct device_node *node)
 
 	clk = clk_register_fixed_factor(NULL, clk_name, parent_name, flags,
 					mult, div);
-	if (IS_ERR(clk))
+	if (IS_ERR(clk)) {
+		/*
+		 * If parent clock is not registered, registration would fail.
+		 * Clear OF_POPULATED flag so that clock registration can be
+		 * attempted again from probe function.
+		 */
+		of_node_clear_flag(node, OF_POPULATED);
 		return clk;
+	}
 
 	ret = of_clk_add_provider(node, of_clk_src_simple_get, clk);
 	if (ret) {
@@ -2557,6 +2557,7 @@ struct clk *__clk_create_clk(struct clk_hw *hw, const char *dev_id,
 	return clk;
 }
 
+/* keep in sync with __clk_put */
 void __clk_free_clk(struct clk *clk)
 {
 	clk_prepare_lock();
@@ -2922,6 +2923,7 @@ int __clk_get(struct clk *clk)
 	return 1;
 }
 
+/* keep in sync with __clk_free_clk */
 void __clk_put(struct clk *clk)
 {
 	struct module *owner;
@@ -2943,6 +2945,7 @@ void __clk_put(struct clk *clk)
 
 	module_put(owner);
 
+	kfree_const(clk->con_id);
 	kfree(clk);
 }
@@ -135,6 +135,7 @@ static void __init imx6ul_clocks_init(struct device_node *ccm_node)
 
 	np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-anatop");
 	base = of_iomap(np, 0);
+	of_node_put(np);
 	WARN_ON(!base);
 
 	clks[IMX6UL_PLL1_BYPASS_SRC] = imx_clk_mux("pll1_bypass_src", base + 0x00, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
@@ -581,9 +581,15 @@ static struct clk_hw *tegra_bpmp_clk_of_xlate(struct of_phandle_args *clkspec,
 	unsigned int id = clkspec->args[0], i;
 	struct tegra_bpmp *bpmp = data;
 
-	for (i = 0; i < bpmp->num_clocks; i++)
-		if (bpmp->clocks[i]->id == id)
-			return &bpmp->clocks[i]->hw;
+	for (i = 0; i < bpmp->num_clocks; i++) {
+		struct tegra_bpmp_clk *clk = bpmp->clocks[i];
+
+		if (!clk)
+			continue;
+
+		if (clk->id == id)
+			return &clk->hw;
+	}
 
 	return NULL;
 }
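The crash fixed above comes from dereferencing array slots that were left NULL when a clock failed to register. The repaired loop simply skips those slots before matching on the id. A minimal standalone sketch of the same lookup shape (toy types, not the Tegra BPMP API):

```c
#include <assert.h>
#include <stddef.h>

struct toy_clk {
	int id;
};

/* Scan an array of pointers where failed registrations leave NULL
 * holes; skip the holes, match on id, return NULL if nothing matches.
 * Dereferencing before the NULL check is exactly the pre-fix bug. */
static struct toy_clk *find_clk(struct toy_clk **clks, size_t n, int id)
{
	size_t i;

	for (i = 0; i < n; i++) {
		struct toy_clk *clk = clks[i];

		if (!clk)
			continue;	/* this clock failed to register */
		if (clk->id == id)
			return clk;
	}
	return NULL;
}
```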
@@ -1351,7 +1351,7 @@ err_sha_v4_algs:
 
 err_sha_v3_algs:
 	for (j = 0; j < k; j++)
-		crypto_unregister_ahash(&sha_v4_algs[j]);
+		crypto_unregister_ahash(&sha_v3_algs[j]);
 
 err_aes_algs:
 	for (j = 0; j < i; j++)
@@ -1367,7 +1367,7 @@ static void sahara_unregister_algs(struct sahara_dev *dev)
 	for (i = 0; i < ARRAY_SIZE(aes_algs); i++)
 		crypto_unregister_alg(&aes_algs[i]);
 
-	for (i = 0; i < ARRAY_SIZE(sha_v4_algs); i++)
+	for (i = 0; i < ARRAY_SIZE(sha_v3_algs); i++)
 		crypto_unregister_ahash(&sha_v3_algs[i]);
 
 	if (dev->version > SAHARA_VERSION_3)
@@ -898,6 +898,8 @@ static int mv_xor_v2_remove(struct platform_device *pdev)
 
 	platform_msi_domain_free_irqs(&pdev->dev);
 
+	tasklet_kill(&xor_dev->irq_tasklet);
+
 	clk_disable_unprepare(xor_dev->clk);
 
 	return 0;
@@ -2142,13 +2142,14 @@ static int pl330_terminate_all(struct dma_chan *chan)
 
 	pm_runtime_get_sync(pl330->ddma.dev);
 	spin_lock_irqsave(&pch->lock, flags);
+
 	spin_lock(&pl330->lock);
 	_stop(pch->thread);
-	spin_unlock(&pl330->lock);
-
 	pch->thread->req[0].desc = NULL;
 	pch->thread->req[1].desc = NULL;
 	pch->thread->req_running = -1;
+	spin_unlock(&pl330->lock);
 
 	power_down = pch->active;
 	pch->active = false;
@@ -259,7 +259,6 @@ void __init efi_init(void)
 
 	reserve_regions();
 	efi_esrt_init();
-	efi_memmap_unmap();
 
 	memblock_reserve(params.mmap & PAGE_MASK,
 			 PAGE_ALIGN(params.mmap_size +
@@ -122,11 +122,13 @@ static int __init arm_enable_runtime_services(void)
 {
 	u64 mapsize;
 
-	if (!efi_enabled(EFI_BOOT)) {
+	if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_MEMMAP)) {
 		pr_info("EFI services will not be available.\n");
 		return 0;
 	}
 
+	efi_memmap_unmap();
+
 	if (efi_runtime_disabled()) {
 		pr_info("EFI runtime services will be disabled.\n");
 		return 0;
@@ -333,7 +333,8 @@ void __init efi_esrt_init(void)
 
 	end = esrt_data + size;
 	pr_info("Reserving ESRT space from %pa to %pa.\n", &esrt_data, &end);
-	efi_mem_reserve(esrt_data, esrt_data_size);
+	if (md.type == EFI_BOOT_SERVICES_DATA)
+		efi_mem_reserve(esrt_data, esrt_data_size);
 
 	pr_debug("esrt-init: loaded.\n");
 err_memunmap:
@@ -662,6 +662,8 @@ static int pxa_gpio_probe(struct platform_device *pdev)
 	pchip->irq0 = irq0;
 	pchip->irq1 = irq1;
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res)
+		return -EINVAL;
 	gpio_reg_base = devm_ioremap(&pdev->dev, res->start,
 				     resource_size(res));
 	if (!gpio_reg_base)
@@ -88,7 +88,7 @@ struct acpi_gpio_info {
 };
 
 /* gpio suffixes used for ACPI and device tree lookup */
-static const char * const gpio_suffixes[] = { "gpios", "gpio" };
+static __maybe_unused const char * const gpio_suffixes[] = { "gpios", "gpio" };
 
 #ifdef CONFIG_OF_GPIO
 struct gpio_desc *of_find_gpio(struct device *dev,
@@ -123,6 +123,8 @@ struct kfd_process *kfd_get_process(const struct task_struct *thread)
 		return ERR_PTR(-EINVAL);
 
 	process = find_process(thread);
+	if (!process)
+		return ERR_PTR(-EINVAL);
 
 	return process;
 }
@@ -687,10 +687,10 @@ flush_write_domain(struct drm_i915_gem_object *obj, unsigned int flush_domains)
 
 	switch (obj->base.write_domain) {
 	case I915_GEM_DOMAIN_GTT:
-		if (INTEL_GEN(dev_priv) >= 6 && !HAS_LLC(dev_priv)) {
+		if (!HAS_LLC(dev_priv)) {
 			intel_runtime_pm_get(dev_priv);
 			spin_lock_irq(&dev_priv->uncore.lock);
-			POSTING_READ_FW(RING_ACTHD(dev_priv->engine[RCS]->mmio_base));
+			POSTING_READ_FW(RING_HEAD(dev_priv->engine[RCS]->mmio_base));
 			spin_unlock_irq(&dev_priv->uncore.lock);
 			intel_runtime_pm_put(dev_priv);
 		}
@@ -160,7 +160,11 @@ nouveau_debugfs_pstate_set(struct file *file, const char __user *ubuf,
 		args.ustate = value;
 	}
 
+	ret = pm_runtime_get_sync(drm->dev);
+	if (IS_ERR_VALUE(ret) && ret != -EACCES)
+		return ret;
 	ret = nvif_mthd(ctrl, NVIF_CONTROL_PSTATE_USER, &args, sizeof(args));
+	pm_runtime_put_autosuspend(drm->dev);
 	if (ret < 0)
 		return ret;
@@ -848,8 +848,10 @@ nouveau_drm_open(struct drm_device *dev, struct drm_file *fpriv)
 	get_task_comm(tmpname, current);
 	snprintf(name, sizeof(name), "%s[%d]", tmpname, pid_nr(fpriv->pid));
 
-	if (!(cli = kzalloc(sizeof(*cli), GFP_KERNEL)))
-		return ret;
+	if (!(cli = kzalloc(sizeof(*cli), GFP_KERNEL))) {
+		ret = -ENOMEM;
+		goto done;
+	}
 
 	ret = nouveau_cli_init(drm, name, cli);
 	if (ret)
@@ -23,6 +23,10 @@
 #ifdef CONFIG_NOUVEAU_PLATFORM_DRIVER
 #include "priv.h"
 
+#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
+#include <asm/dma-iommu.h>
+#endif
+
 static int
 nvkm_device_tegra_power_up(struct nvkm_device_tegra *tdev)
 {
@@ -105,6 +109,15 @@ nvkm_device_tegra_probe_iommu(struct nvkm_device_tegra *tdev)
 	unsigned long pgsize_bitmap;
 	int ret;
 
+#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
+	if (dev->archdata.mapping) {
+		struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
+
+		arm_iommu_detach_device(dev);
+		arm_iommu_release_mapping(mapping);
+	}
+#endif
+
 	if (!tdev->func->iommu_bit)
 		return;
@@ -823,7 +823,7 @@ static void s6e8aa0_read_mtp_id(struct s6e8aa0 *ctx)
int ret, i;

ret = s6e8aa0_dcs_read(ctx, 0xd1, id, ARRAY_SIZE(id));
if (ret < ARRAY_SIZE(id) || id[0] == 0x00) {
if (ret < 0 || ret < ARRAY_SIZE(id) || id[0] == 0x00) {
dev_err(ctx->dev, "read id failed\n");
ctx->error = -EIO;
return;

@@ -316,13 +316,17 @@ static int mbus_code_to_bus_cfg(struct ipu_csi_bus_config *cfg, u32 mbus_code)
/*
 * Fill a CSI bus config struct from mbus_config and mbus_framefmt.
 */
static void fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
static int fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
struct v4l2_mbus_config *mbus_cfg,
struct v4l2_mbus_framefmt *mbus_fmt)
{
int ret;

memset(csicfg, 0, sizeof(*csicfg));

mbus_code_to_bus_cfg(csicfg, mbus_fmt->code);
ret = mbus_code_to_bus_cfg(csicfg, mbus_fmt->code);
if (ret < 0)
return ret;

switch (mbus_cfg->type) {
case V4L2_MBUS_PARALLEL:
@@ -353,6 +357,8 @@ static void fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
/* will never get here, keep compiler quiet */
break;
}

return 0;
}

int ipu_csi_init_interface(struct ipu_csi *csi,
@@ -362,8 +368,11 @@ int ipu_csi_init_interface(struct ipu_csi *csi,
struct ipu_csi_bus_config cfg;
unsigned long flags;
u32 width, height, data = 0;
int ret;

fill_csi_bus_cfg(&cfg, mbus_cfg, mbus_fmt);
ret = fill_csi_bus_cfg(&cfg, mbus_cfg, mbus_fmt);
if (ret < 0)
return ret;

/* set default sensor frame width and height */
width = mbus_fmt->width;
@@ -584,11 +593,14 @@ int ipu_csi_set_mipi_datatype(struct ipu_csi *csi, u32 vc,
struct ipu_csi_bus_config cfg;
unsigned long flags;
u32 temp;
int ret;

if (vc > 3)
return -EINVAL;

mbus_code_to_bus_cfg(&cfg, mbus_fmt->code);
ret = mbus_code_to_bus_cfg(&cfg, mbus_fmt->code);
if (ret < 0)
return ret;

spin_lock_irqsave(&csi->lock, flags);


@@ -1034,7 +1034,8 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
}

pm_runtime_put(&adev->dev);
dev_info(dev, "%s initialized\n", (char *)id->data);
dev_info(dev, "CPU%d: ETM v%d.%d initialized\n",
drvdata->cpu, drvdata->arch >> 4, drvdata->arch & 0xf);

if (boot_enable) {
coresight_enable(drvdata->csdev);
@@ -1052,23 +1053,19 @@ err_arch_supported:
return ret;
}

#define ETM4x_AMBA_ID(pid) \
{ \
.id = pid, \
.mask = 0x000fffff, \
}

static const struct amba_id etm4_ids[] = {
{ /* ETM 4.0 - Cortex-A53 */
.id = 0x000bb95d,
.mask = 0x000fffff,
.data = "ETM 4.0",
},
{ /* ETM 4.0 - Cortex-A57 */
.id = 0x000bb95e,
.mask = 0x000fffff,
.data = "ETM 4.0",
},
{ /* ETM 4.0 - A72, Maia, HiSilicon */
.id = 0x000bb95a,
.mask = 0x000fffff,
.data = "ETM 4.0",
},
{ 0, 0},
ETM4x_AMBA_ID(0x000bb95d), /* Cortex-A53 */
ETM4x_AMBA_ID(0x000bb95e), /* Cortex-A57 */
ETM4x_AMBA_ID(0x000bb95a), /* Cortex-A72 */
ETM4x_AMBA_ID(0x000bb959), /* Cortex-A73 */
ETM4x_AMBA_ID(0x000bb9da), /* Cortex-A35 */
{},
};

static struct amba_driver etm4x_driver = {

@@ -47,8 +47,9 @@

/** register definition **/
/* FFSR - 0x300 */
#define FFSR_FT_STOPPED BIT(1)
#define FFSR_FT_STOPPED_BIT 1
/* FFCR - 0x304 */
#define FFCR_FON_MAN_BIT 6
#define FFCR_FON_MAN BIT(6)
#define FFCR_STOP_FI BIT(12)

@@ -93,9 +94,9 @@ static void tpiu_disable_hw(struct tpiu_drvdata *drvdata)
/* Generate manual flush */
writel_relaxed(FFCR_STOP_FI | FFCR_FON_MAN, drvdata->base + TPIU_FFCR);
/* Wait for flush to complete */
coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN, 0);
coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN_BIT, 0);
/* Wait for formatter to stop */
coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED, 1);
coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED_BIT, 1);

CS_LOCK(drvdata->base);
}

@@ -115,7 +115,7 @@ static int coresight_find_link_inport(struct coresight_device *csdev,
dev_err(&csdev->dev, "couldn't find inport, parent: %s, child: %s\n",
dev_name(&parent->dev), dev_name(&csdev->dev));

return 0;
return -ENODEV;
}

static int coresight_find_link_outport(struct coresight_device *csdev,
@@ -133,7 +133,7 @@ static int coresight_find_link_outport(struct coresight_device *csdev,
dev_err(&csdev->dev, "couldn't find outport, parent: %s, child: %s\n",
dev_name(&csdev->dev), dev_name(&child->dev));

return 0;
return -ENODEV;
}

static int coresight_enable_sink(struct coresight_device *csdev, u32 mode)
@@ -186,6 +186,9 @@ static int coresight_enable_link(struct coresight_device *csdev,
else
refport = 0;

if (refport < 0)
return refport;

if (atomic_inc_return(&csdev->refcnt[refport]) == 1) {
if (link_ops(csdev)->enable) {
ret = link_ops(csdev)->enable(csdev, inport, outport);

@@ -110,22 +110,22 @@
#define ASPEED_I2CD_DEV_ADDR_MASK GENMASK(6, 0)

enum aspeed_i2c_master_state {
ASPEED_I2C_MASTER_INACTIVE,
ASPEED_I2C_MASTER_START,
ASPEED_I2C_MASTER_TX_FIRST,
ASPEED_I2C_MASTER_TX,
ASPEED_I2C_MASTER_RX_FIRST,
ASPEED_I2C_MASTER_RX,
ASPEED_I2C_MASTER_STOP,
ASPEED_I2C_MASTER_INACTIVE,
};

enum aspeed_i2c_slave_state {
ASPEED_I2C_SLAVE_STOP,
ASPEED_I2C_SLAVE_START,
ASPEED_I2C_SLAVE_READ_REQUESTED,
ASPEED_I2C_SLAVE_READ_PROCESSED,
ASPEED_I2C_SLAVE_WRITE_REQUESTED,
ASPEED_I2C_SLAVE_WRITE_RECEIVED,
ASPEED_I2C_SLAVE_STOP,
};

struct aspeed_i2c_bus {

@@ -730,6 +730,7 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
dgid = (union ib_gid *) &addr->sib_addr;
pkey = ntohs(addr->sib_pkey);

mutex_lock(&lock);
list_for_each_entry(cur_dev, &dev_list, list) {
for (p = 1; p <= cur_dev->device->phys_port_cnt; ++p) {
if (!rdma_cap_af_ib(cur_dev->device, p))
@@ -756,18 +757,19 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
cma_dev = cur_dev;
sgid = gid;
id_priv->id.port_num = p;
goto found;
}
}
}
}

if (!cma_dev)
return -ENODEV;
mutex_unlock(&lock);
return -ENODEV;

found:
cma_attach_to_dev(id_priv, cma_dev);
addr = (struct sockaddr_ib *) cma_src_addr(id_priv);
memcpy(&addr->sib_addr, &sgid, sizeof sgid);
mutex_unlock(&lock);
addr = (struct sockaddr_ib *)cma_src_addr(id_priv);
memcpy(&addr->sib_addr, &sgid, sizeof(sgid));
cma_translate_ib(addr, &id_priv->id.route.addr.dev_addr);
return 0;
}

@@ -225,9 +225,14 @@ static int hdr_check(struct rxe_pkt_info *pkt)
goto err1;
}

if (unlikely(qpn == 0)) {
pr_warn_once("QP 0 not supported");
goto err1;
}

if (qpn != IB_MULTICAST_QPN) {
index = (qpn == 0) ? port->qp_smi_index :
((qpn == 1) ? port->qp_gsi_index : qpn);
index = (qpn == 1) ? port->qp_gsi_index : qpn;

qp = rxe_pool_get_index(&rxe->qp_pool, index);
if (unlikely(!qp)) {
pr_warn_ratelimited("no qp matches qpn 0x%x\n", qpn);

@@ -1018,12 +1018,14 @@ static int ipoib_cm_rep_handler(struct ib_cm_id *cm_id, struct ib_cm_event *even

skb_queue_head_init(&skqueue);

netif_tx_lock_bh(p->dev);
spin_lock_irq(&priv->lock);
set_bit(IPOIB_FLAG_OPER_UP, &p->flags);
if (p->neigh)
while ((skb = __skb_dequeue(&p->neigh->queue)))
__skb_queue_tail(&skqueue, skb);
spin_unlock_irq(&priv->lock);
netif_tx_unlock_bh(p->dev);

while ((skb = __skb_dequeue(&skqueue))) {
skb->dev = p->dev;

@@ -1752,7 +1752,8 @@ int ipoib_dev_init(struct net_device *dev, struct ib_device *ca, int port)
goto out_free_pd;
}

if (ipoib_neigh_hash_init(priv) < 0) {
ret = ipoib_neigh_hash_init(priv);
if (ret) {
pr_warn("%s failed to init neigh hash\n", dev->name);
goto out_dev_uninit;
}

@@ -304,7 +304,7 @@ static int rohm_i2c_burst_read(struct i2c_client *client, u8 start, void *buf,
msg[1].len = len;
msg[1].buf = buf;

i2c_lock_adapter(adap);
i2c_lock_bus(adap, I2C_LOCK_SEGMENT);

for (i = 0; i < 2; i++) {
if (__i2c_transfer(adap, &msg[i], 1) < 0) {
@@ -313,7 +313,7 @@ static int rohm_i2c_burst_read(struct i2c_client *client, u8 start, void *buf,
}
}

i2c_unlock_adapter(adap);
i2c_unlock_bus(adap, I2C_LOCK_SEGMENT);

return ret;
}

@@ -1272,6 +1272,7 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)

/* Sync our overflow flag, as we believe we're up to speed */
q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
writel(q->cons, q->cons_reg);
return IRQ_HANDLED;
}


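The SMMU hunk above (the "sync the OVACKFLG to PRIQ consumer register" fix from the changelog) can be modeled in plain C. This is an illustrative sketch with made-up field widths — the real Q_OVF/Q_WRP/Q_IDX macros depend on each queue's log2 size — showing how the overflow flag is taken from the producer word while the wrap flag and index are kept from the consumer word before the result is written back:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical queue geometry, for illustration only. */
#define Q_IDX_BITS   8
#define Q_IDX_MASK   ((1u << Q_IDX_BITS) - 1)
#define Q_WRP_BIT    (1u << Q_IDX_BITS)        /* wrap flag above the index */
#define Q_OVF_BIT    (1u << (Q_IDX_BITS + 1))  /* overflow flag above that */

/* Mirror of the patched line: acknowledge overflow by copying the flag
 * from prod, while preserving the consumer's own wrap state and index. */
static uint32_t sync_cons(uint32_t prod, uint32_t cons)
{
    return (prod & Q_OVF_BIT) | (cons & Q_WRP_BIT) | (cons & Q_IDX_MASK);
}
```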
@@ -192,6 +192,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
{
struct io_pgtable_cfg *cfg = &data->iop.cfg;
struct device *dev = cfg->iommu_dev;
phys_addr_t phys;
dma_addr_t dma;
size_t size = ARM_V7S_TABLE_SIZE(lvl);
void *table = NULL;
@@ -200,6 +201,10 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
else if (lvl == 2)
table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
phys = virt_to_phys(table);
if (phys != (arm_v7s_iopte)phys)
/* Doesn't fit in PTE */
goto out_free;
if (table && !(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA)) {
dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
if (dma_mapping_error(dev, dma))
@@ -209,7 +214,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
 * address directly, so if the DMA layer suggests otherwise by
 * translating or truncating them, that bodes very badly...
 */
if (dma != virt_to_phys(table))
if (dma != phys)
goto out_unmap;
}
kmemleak_ignore(table);

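The io-pgtable-arm-v7s hunk above aborts the allocation when the table's physical address overflows the 32-bit short-descriptor PTE. The core check is a round-trip cast, sketched here as a standalone userspace model (types and names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t arm_v7s_iopte;   /* short-descriptor PTEs are 32-bit */

/* Mirror of the patched check: a (possibly 64-bit) physical address must
 * survive the cast to the narrower PTE type; if it does not, the table
 * cannot be referenced from a PTE and must be freed instead of used. */
static int fits_in_pte(uint64_t phys)
{
    return phys == (arm_v7s_iopte)phys;
}
```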
@@ -510,8 +510,8 @@ static const struct reg_value ov5645_setting_full[] = {
};

static const s64 link_freq[] = {
222880000,
334320000
224000000,
336000000
};

static const struct ov5645_mode_info ov5645_mode_info_data[] = {
@@ -520,7 +520,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
.height = 960,
.data = ov5645_setting_sxga,
.data_size = ARRAY_SIZE(ov5645_setting_sxga),
.pixel_clock = 111440000,
.pixel_clock = 112000000,
.link_freq = 0 /* an index in link_freq[] */
},
{
@@ -528,7 +528,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
.height = 1080,
.data = ov5645_setting_1080p,
.data_size = ARRAY_SIZE(ov5645_setting_1080p),
.pixel_clock = 167160000,
.pixel_clock = 168000000,
.link_freq = 1 /* an index in link_freq[] */
},
{
@@ -536,7 +536,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
.height = 1944,
.data = ov5645_setting_full,
.data_size = ARRAY_SIZE(ov5645_setting_full),
.pixel_clock = 167160000,
.pixel_clock = 168000000,
.link_freq = 1 /* an index in link_freq[] */
},
};
@@ -1157,7 +1157,8 @@ static int ov5645_probe(struct i2c_client *client,
return ret;
}

if (xclk_freq != 23880000) {
/* external clock must be 24MHz, allow 1% tolerance */
if (xclk_freq < 23760000 || xclk_freq > 24240000) {
dev_err(dev, "external clock frequency %u is not supported\n",
xclk_freq);
return -EINVAL;

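The ov5645 probe hunk above replaces an exact 23.88 MHz comparison with a 24 MHz ± 1% acceptance window (23760000–24240000 Hz), matching the "Supported external clock is 24MHz" entry in the changelog. A minimal model of the new bound check, for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the patched check: accept any external clock rate within
 * 1% of the 24 MHz reference instead of a single exact value. */
static int xclk_supported(uint32_t hz)
{
    return hz >= 23760000u && hz <= 24240000u;
}
```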
@@ -1190,6 +1190,14 @@ int tw686x_video_init(struct tw686x_dev *dev)
return err;
}

/* Initialize vc->dev and vc->ch for the error path */
for (ch = 0; ch < max_channels(dev); ch++) {
struct tw686x_video_channel *vc = &dev->video_channels[ch];

vc->dev = dev;
vc->ch = ch;
}

for (ch = 0; ch < max_channels(dev); ch++) {
struct tw686x_video_channel *vc = &dev->video_channels[ch];
struct video_device *vdev;
@@ -1198,9 +1206,6 @@ int tw686x_video_init(struct tw686x_dev *dev)
spin_lock_init(&vc->qlock);
INIT_LIST_HEAD(&vc->vidq_queued);

vc->dev = dev;
vc->ch = ch;

/* default settings */
err = tw686x_set_standard(vc, V4L2_STD_NTSC);
if (err)

@@ -1373,6 +1373,11 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb)
struct vb2_buffer *vb;
int ret;

if (q->error) {
dprintk(1, "fatal error occurred on queue\n");
return -EIO;
}

vb = q->bufs[index];

switch (vb->state) {

@@ -146,14 +146,14 @@ int pm860x_page_reg_write(struct i2c_client *i2c, int reg,
unsigned char zero;
int ret;

i2c_lock_adapter(i2c->adapter);
i2c_lock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
read_device(i2c, 0xFA, 0, &zero);
read_device(i2c, 0xFB, 0, &zero);
read_device(i2c, 0xFF, 0, &zero);
ret = write_device(i2c, reg, 1, &data);
read_device(i2c, 0xFE, 0, &zero);
read_device(i2c, 0xFC, 0, &zero);
i2c_unlock_adapter(i2c->adapter);
i2c_unlock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
return ret;
}
EXPORT_SYMBOL(pm860x_page_reg_write);
@@ -164,14 +164,14 @@ int pm860x_page_bulk_read(struct i2c_client *i2c, int reg,
unsigned char zero = 0;
int ret;

i2c_lock_adapter(i2c->adapter);
i2c_lock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
read_device(i2c, 0xfa, 0, &zero);
read_device(i2c, 0xfb, 0, &zero);
read_device(i2c, 0xff, 0, &zero);
ret = read_device(i2c, reg, count, buf);
read_device(i2c, 0xFE, 0, &zero);
read_device(i2c, 0xFC, 0, &zero);
i2c_unlock_adapter(i2c->adapter);
i2c_unlock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
return ret;
}
EXPORT_SYMBOL(pm860x_page_bulk_read);

@@ -27,6 +27,7 @@
#include <linux/err.h>
#include <linux/delay.h>
#include <linux/sysfs.h>
#include <linux/nospec.h>

static DEFINE_MUTEX(compass_mutex);

@@ -50,6 +51,7 @@ static int compass_store(struct device *dev, const char *buf, size_t count,
return ret;
if (val >= strlen(map))
return -EINVAL;
val = array_index_nospec(val, strlen(map));
mutex_lock(&compass_mutex);
ret = compass_command(c, map[val]);
mutex_unlock(&compass_mutex);

@@ -267,7 +267,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,

ret = 0;
bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, 0);
if (bytes_recv < if_version_length) {
if (bytes_recv < 0 || bytes_recv < if_version_length) {
dev_err(bus->dev, "Could not read IF version\n");
ret = -EIO;
goto err;

@@ -465,17 +465,15 @@ int mei_cldev_enable(struct mei_cl_device *cldev)

cl = cldev->cl;

mutex_lock(&bus->device_lock);
if (cl->state == MEI_FILE_UNINITIALIZED) {
mutex_lock(&bus->device_lock);
ret = mei_cl_link(cl);
mutex_unlock(&bus->device_lock);
if (ret)
return ret;
goto out;
/* update pointers */
cl->cldev = cldev;
}

mutex_lock(&bus->device_lock);
if (mei_cl_is_connected(cl)) {
ret = 0;
goto out;
@@ -841,12 +839,13 @@ static void mei_cl_bus_dev_release(struct device *dev)

mei_me_cl_put(cldev->me_cl);
mei_dev_bus_put(cldev->bus);
mei_cl_unlink(cldev->cl);
kfree(cldev->cl);
kfree(cldev);
}

static const struct device_type mei_cl_device_type = {
.release = mei_cl_bus_dev_release,
.release = mei_cl_bus_dev_release,
};

/**

@@ -1140,15 +1140,18 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)

props_res = (struct hbm_props_response *)mei_msg;

if (props_res->status) {
if (props_res->status == MEI_HBMS_CLIENT_NOT_FOUND) {
dev_dbg(dev->dev, "hbm: properties response: %d CLIENT_NOT_FOUND\n",
props_res->me_addr);
} else if (props_res->status) {
dev_err(dev->dev, "hbm: properties response: wrong status = %d %s\n",
props_res->status,
mei_hbm_status_str(props_res->status));
return -EPROTO;
} else {
mei_hbm_me_cl_add(dev, props_res);
}

mei_hbm_me_cl_add(dev, props_res);

/* request property for the next client */
if (mei_hbm_prop_req(dev, props_res->me_addr + 1))
return -EIO;

@@ -2194,6 +2194,7 @@ static int omap_hsmmc_remove(struct platform_device *pdev)
dma_release_channel(host->tx_chan);
dma_release_channel(host->rx_chan);

dev_pm_clear_wake_irq(host->dev);
pm_runtime_dont_use_autosuspend(host->dev);
pm_runtime_put_sync(host->dev);
pm_runtime_disable(host->dev);

@@ -22,6 +22,7 @@
#include <linux/sys_soc.h>
#include <linux/clk.h>
#include <linux/ktime.h>
#include <linux/dma-mapping.h>
#include <linux/mmc/host.h>
#include "sdhci-pltfm.h"
#include "sdhci-esdhc.h"
@@ -427,6 +428,11 @@ static void esdhc_of_adma_workaround(struct sdhci_host *host, u32 intmask)
static int esdhc_of_enable_dma(struct sdhci_host *host)
{
u32 value;
struct device *dev = mmc_dev(host->mmc);

if (of_device_is_compatible(dev->of_node, "fsl,ls1043a-esdhc") ||
of_device_is_compatible(dev->of_node, "fsl,ls1046a-esdhc"))
dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));

value = sdhci_readl(host, ESDHC_DMA_SYSCTL);
value |= ESDHC_DMA_SNOOP;

@@ -334,7 +334,8 @@ static const struct sdhci_pltfm_data sdhci_tegra30_pdata = {
SDHCI_QUIRK_NO_HISPD_BIT |
SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
SDHCI_QUIRK2_BROKEN_HS200,
.ops = &tegra_sdhci_ops,
};


@@ -3631,14 +3631,21 @@ int sdhci_setup_host(struct sdhci_host *host)
mmc_gpio_get_cd(host->mmc) < 0)
mmc->caps |= MMC_CAP_NEEDS_POLL;

/* If vqmmc regulator and no 1.8V signalling, then there's no UHS */
if (!IS_ERR(mmc->supply.vqmmc)) {
ret = regulator_enable(mmc->supply.vqmmc);

/* If vqmmc provides no 1.8V signalling, then there's no UHS */
if (!regulator_is_supported_voltage(mmc->supply.vqmmc, 1700000,
1950000))
host->caps1 &= ~(SDHCI_SUPPORT_SDR104 |
SDHCI_SUPPORT_SDR50 |
SDHCI_SUPPORT_DDR50);

/* In eMMC case vqmmc might be a fixed 1.8V regulator */
if (!regulator_is_supported_voltage(mmc->supply.vqmmc, 2700000,
3600000))
host->flags &= ~SDHCI_SIGNALING_330;

if (ret) {
pr_warn("%s: Failed to enable vqmmc regulator: %d\n",
mmc_hostname(mmc), ret);

@@ -59,9 +59,9 @@ static int __init init_soleng_maps(void)
return -ENXIO;
}
}
printk(KERN_NOTICE "Solution Engine: Flash at 0x%08lx, EPROM at 0x%08lx\n",
soleng_flash_map.phys & 0x1fffffff,
soleng_eprom_map.phys & 0x1fffffff);
printk(KERN_NOTICE "Solution Engine: Flash at 0x%pap, EPROM at 0x%pap\n",
&soleng_flash_map.phys,
&soleng_eprom_map.phys);
flash_mtd->owner = THIS_MODULE;

eprom_mtd = do_map_probe("map_rom", &soleng_eprom_map);

@@ -160,8 +160,12 @@ static ssize_t mtdchar_read(struct file *file, char __user *buf, size_t count,

pr_debug("MTD_read\n");

if (*ppos + count > mtd->size)
count = mtd->size - *ppos;
if (*ppos + count > mtd->size) {
if (*ppos < mtd->size)
count = mtd->size - *ppos;
else
count = 0;
}

if (!count)
return 0;
@@ -246,7 +250,7 @@ static ssize_t mtdchar_write(struct file *file, const char __user *buf, size_t c

pr_debug("MTD_write\n");

if (*ppos == mtd->size)
if (*ppos >= mtd->size)
return -ENOSPC;

if (*ppos + count > mtd->size)

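The mtdchar hunks above guard the read/write paths against an offset at or beyond the device size, where the old unsigned "size - pos" arithmetic would wrap to a huge count. A minimal userspace model of the patched clamp (names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the patched clamp: requests past the end are trimmed, and a
 * position already at or past the device size yields a zero count rather
 * than a wrapped-around subtraction. */
static uint64_t clamp_count(uint64_t pos, uint64_t count, uint64_t size)
{
    if (pos + count > size) {
        if (pos < size)
            count = size - pos;
        else
            count = 0;
    }
    return count;
}
```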
@@ -289,7 +289,7 @@ static int xgbe_alloc_pages(struct xgbe_prv_data *pdata,
struct page *pages = NULL;
dma_addr_t pages_dma;
gfp_t gfp;
int order, ret;
int order;

again:
order = alloc_order;
@@ -316,10 +316,9 @@ again:
/* Map the pages */
pages_dma = dma_map_page(pdata->dev, pages, 0,
PAGE_SIZE << order, DMA_FROM_DEVICE);
ret = dma_mapping_error(pdata->dev, pages_dma);
if (ret) {
if (dma_mapping_error(pdata->dev, pages_dma)) {
put_page(pages);
return ret;
return -ENOMEM;
}

pa->pages = pages;

@@ -493,6 +493,9 @@ static void cn23xx_pf_setup_global_output_regs(struct octeon_device *oct)
for (q_no = srn; q_no < ern; q_no++) {
reg_val = octeon_read_csr(oct, CN23XX_SLI_OQ_PKT_CONTROL(q_no));

/* clear IPTR */
reg_val &= ~CN23XX_PKT_OUTPUT_CTL_IPTR;

/* set DPTR */
reg_val |= CN23XX_PKT_OUTPUT_CTL_DPTR;


@@ -165,6 +165,9 @@ static void cn23xx_vf_setup_global_output_regs(struct octeon_device *oct)
reg_val =
octeon_read_csr(oct, CN23XX_VF_SLI_OQ_PKT_CONTROL(q_no));

/* clear IPTR */
reg_val &= ~CN23XX_PKT_OUTPUT_CTL_IPTR;

/* set DPTR */
reg_val |= CN23XX_PKT_OUTPUT_CTL_DPTR;


@@ -4500,7 +4500,7 @@ int be_cmd_get_profile_config(struct be_adapter *adapter,
port_res->max_vfs += le16_to_cpu(pcie->num_vfs);
}
}
return status;
goto err;
}

pcie = be_get_pcie_desc(resp->func_param, desc_count,

@@ -400,6 +400,10 @@
#define E1000_ICR_RXDMT0 0x00000010 /* Rx desc min. threshold (0) */
#define E1000_ICR_RXO 0x00000040 /* Receiver Overrun */
#define E1000_ICR_RXT0 0x00000080 /* Rx timer intr (ring 0) */
#define E1000_ICR_MDAC 0x00000200 /* MDIO Access Complete */
#define E1000_ICR_SRPD 0x00010000 /* Small Receive Packet Detected */
#define E1000_ICR_ACK 0x00020000 /* Receive ACK Frame Detected */
#define E1000_ICR_MNG 0x00040000 /* Manageability Event Detected */
#define E1000_ICR_ECCER 0x00400000 /* Uncorrectable ECC Error */
/* If this bit asserted, the driver should claim the interrupt */
#define E1000_ICR_INT_ASSERTED 0x80000000
@@ -407,7 +411,7 @@
#define E1000_ICR_RXQ1 0x00200000 /* Rx Queue 1 Interrupt */
#define E1000_ICR_TXQ0 0x00400000 /* Tx Queue 0 Interrupt */
#define E1000_ICR_TXQ1 0x00800000 /* Tx Queue 1 Interrupt */
#define E1000_ICR_OTHER 0x01000000 /* Other Interrupts */
#define E1000_ICR_OTHER 0x01000000 /* Other Interrupt */

/* PBA ECC Register */
#define E1000_PBA_ECC_COUNTER_MASK 0xFFF00000 /* ECC counter mask */
@@ -431,12 +435,27 @@
E1000_IMS_RXSEQ | \
E1000_IMS_LSC)

/* These are all of the events related to the OTHER interrupt.
 */
#define IMS_OTHER_MASK ( \
E1000_IMS_LSC | \
E1000_IMS_RXO | \
E1000_IMS_MDAC | \
E1000_IMS_SRPD | \
E1000_IMS_ACK | \
E1000_IMS_MNG)

/* Interrupt Mask Set */
#define E1000_IMS_TXDW E1000_ICR_TXDW /* Transmit desc written back */
#define E1000_IMS_LSC E1000_ICR_LSC /* Link Status Change */
#define E1000_IMS_RXSEQ E1000_ICR_RXSEQ /* Rx sequence error */
#define E1000_IMS_RXDMT0 E1000_ICR_RXDMT0 /* Rx desc min. threshold */
#define E1000_IMS_RXO E1000_ICR_RXO /* Receiver Overrun */
#define E1000_IMS_RXT0 E1000_ICR_RXT0 /* Rx timer intr */
#define E1000_IMS_MDAC E1000_ICR_MDAC /* MDIO Access Complete */
#define E1000_IMS_SRPD E1000_ICR_SRPD /* Small Receive Packet */
#define E1000_IMS_ACK E1000_ICR_ACK /* Receive ACK Frame Detected */
#define E1000_IMS_MNG E1000_ICR_MNG /* Manageability Event */
#define E1000_IMS_ECCER E1000_ICR_ECCER /* Uncorrectable ECC Error */
#define E1000_IMS_RXQ0 E1000_ICR_RXQ0 /* Rx Queue 0 Interrupt */
#define E1000_IMS_RXQ1 E1000_ICR_RXQ1 /* Rx Queue 1 Interrupt */

@@ -1367,9 +1367,6 @@ out:
 * Checks to see of the link status of the hardware has changed. If a
 * change in link status has been detected, then we read the PHY registers
 * to get the current speed/duplex if link exists.
 *
 * Returns a negative error code (-E1000_ERR_*) or 0 (link down) or 1 (link
 * up).
 **/
static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
{
@@ -1385,7 +1382,8 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
 * Change or Rx Sequence Error interrupt.
 */
if (!mac->get_link_status)
return 1;
return 0;
mac->get_link_status = false;

/* First we want to see if the MII Status Register reports
 * link. If so, then we want to get the current speed/duplex
@@ -1393,12 +1391,12 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
 */
ret_val = e1000e_phy_has_link_generic(hw, 1, 0, &link);
if (ret_val)
return ret_val;
goto out;

if (hw->mac.type == e1000_pchlan) {
ret_val = e1000_k1_gig_workaround_hv(hw, link);
if (ret_val)
return ret_val;
goto out;
}

/* When connected at 10Mbps half-duplex, some parts are excessively
@@ -1431,7 +1429,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)

ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
goto out;

if (hw->mac.type == e1000_pch2lan)
emi_addr = I82579_RX_CONFIG;
@@ -1453,7 +1451,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
hw->phy.ops.release(hw);

if (ret_val)
return ret_val;
goto out;

if (hw->mac.type >= e1000_pch_spt) {
u16 data;
@@ -1462,14 +1460,14 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
if (speed == SPEED_1000) {
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
goto out;

ret_val = e1e_rphy_locked(hw,
PHY_REG(776, 20),
&data);
if (ret_val) {
hw->phy.ops.release(hw);
return ret_val;
goto out;
}

ptr_gap = (data & (0x3FF << 2)) >> 2;
@@ -1483,18 +1481,18 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
}
hw->phy.ops.release(hw);
if (ret_val)
return ret_val;
goto out;
} else {
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
goto out;

ret_val = e1e_wphy_locked(hw,
PHY_REG(776, 20),
0xC023);
hw->phy.ops.release(hw);
if (ret_val)
return ret_val;
goto out;

}
}
@@ -1521,7 +1519,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
(hw->adapter->pdev->device == E1000_DEV_ID_PCH_I218_V3)) {
ret_val = e1000_k1_workaround_lpt_lp(hw, link);
if (ret_val)
return ret_val;
goto out;
}
if (hw->mac.type >= e1000_pch_lpt) {
/* Set platform power management values for
@@ -1529,7 +1527,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
 */
ret_val = e1000_platform_pm_pch_lpt(hw, link);
if (ret_val)
return ret_val;
goto out;
}

/* Clear link partner's EEE ability */
@@ -1552,9 +1550,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
}

if (!link)
return 0; /* No link detected */

mac->get_link_status = false;
goto out;

switch (hw->mac.type) {
case e1000_pch2lan:
@@ -1616,12 +1612,14 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
 * different link partner.
 */
ret_val = e1000e_config_fc_after_link_up(hw);
if (ret_val) {
if (ret_val)
e_dbg("Error configuring flow control\n");
return ret_val;
}

return 1;
return ret_val;

out:
mac->get_link_status = true;
return ret_val;
}

static s32 e1000_get_variants_ich8lan(struct e1000_adapter *adapter)

@@ -410,9 +410,6 @@ void e1000e_clear_hw_cntrs_base(struct e1000_hw *hw)
|
||||
* Checks to see of the link status of the hardware has changed. If a
|
||||
* change in link status has been detected, then we read the PHY registers
|
||||
* to get the current speed/duplex if link exists.
|
||||
*
|
||||
* Returns a negative error code (-E1000_ERR_*) or 0 (link down) or 1 (link
|
||||
* up).
|
||||
**/
|
||||
s32 e1000e_check_for_copper_link(struct e1000_hw *hw)
|
||||
{
|
||||
@@ -426,20 +423,16 @@ s32 e1000e_check_for_copper_link(struct e1000_hw *hw)
|
||||
* Change or Rx Sequence Error interrupt.
|
||||
*/
|
||||
if (!mac->get_link_status)
|
||||
return 1;
|
||||
return 0;
|
||||
mac->get_link_status = false;
|
||||
|
||||
/* First we want to see if the MII Status Register reports
|
||||
* link. If so, then we want to get the current speed/duplex
|
||||
* of the PHY.
|
||||
*/
|
||||
ret_val = e1000e_phy_has_link_generic(hw, 1, 0, &link);
|
||||
if (ret_val)
|
||||
return ret_val;
|
||||
|
||||
if (!link)
|
||||
return 0; /* No link detected */
|
||||
|
||||
mac->get_link_status = false;
|
||||
if (ret_val || !link)
|
||||
goto out;
|
||||
|
||||
/* Check if there was DownShift, must be checked
|
||||
* immediately after link-up
|
||||
@@ -464,12 +457,14 @@ s32 e1000e_check_for_copper_link(struct e1000_hw *hw)
|
||||
* different link partner.
|
||||
*/
|
||||
ret_val = e1000e_config_fc_after_link_up(hw);
|
||||
if (ret_val) {
|
||||
if (ret_val)
|
||||
e_dbg("Error configuring flow control\n");
|
||||
return ret_val;
|
||||
}
|
||||
|
||||
return 1;
|
||||
return ret_val;
|
||||
|
||||
out:
|
||||
mac->get_link_status = true;
|
||||
return ret_val;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@@ -1910,30 +1910,20 @@ static irqreturn_t e1000_msix_other(int __always_unused irq, void *data)
 	struct net_device *netdev = data;
 	struct e1000_adapter *adapter = netdev_priv(netdev);
 	struct e1000_hw *hw = &adapter->hw;
-	u32 icr;
-	bool enable = true;
+	u32 icr = er32(ICR);
 
-	icr = er32(ICR);
-	if (icr & E1000_ICR_RXO) {
-		ew32(ICR, E1000_ICR_RXO);
-		enable = false;
-		/* napi poll will re-enable Other, make sure it runs */
-		if (napi_schedule_prep(&adapter->napi)) {
-			adapter->total_rx_bytes = 0;
-			adapter->total_rx_packets = 0;
-			__napi_schedule(&adapter->napi);
-		}
-	}
+	if (icr & adapter->eiac_mask)
+		ew32(ICS, (icr & adapter->eiac_mask));
+
 	if (icr & E1000_ICR_LSC) {
 		ew32(ICR, E1000_ICR_LSC);
 		hw->mac.get_link_status = true;
 		/* guard against interrupt when we're going down */
 		if (!test_bit(__E1000_DOWN, &adapter->state))
 			mod_timer(&adapter->watchdog_timer, jiffies + 1);
 	}
 
-	if (enable && !test_bit(__E1000_DOWN, &adapter->state))
-		ew32(IMS, E1000_IMS_OTHER);
+	if (!test_bit(__E1000_DOWN, &adapter->state))
+		ew32(IMS, E1000_IMS_OTHER | IMS_OTHER_MASK);
 
 	return IRQ_HANDLED;
 }
@@ -2036,7 +2026,6 @@ static void e1000_configure_msix(struct e1000_adapter *adapter)
 		       hw->hw_addr + E1000_EITR_82574(vector));
 	else
 		writel(1, hw->hw_addr + E1000_EITR_82574(vector));
-	adapter->eiac_mask |= E1000_IMS_OTHER;
 
 	/* Cause Tx interrupts on every write back */
 	ivar |= BIT(31);
@@ -2261,7 +2250,8 @@ static void e1000_irq_enable(struct e1000_adapter *adapter)
 
 	if (adapter->msix_entries) {
 		ew32(EIAC_82574, adapter->eiac_mask & E1000_EIAC_MASK_82574);
-		ew32(IMS, adapter->eiac_mask | E1000_IMS_LSC);
+		ew32(IMS, adapter->eiac_mask | E1000_IMS_OTHER |
+		     IMS_OTHER_MASK);
 	} else if (hw->mac.type >= e1000_pch_lpt) {
 		ew32(IMS, IMS_ENABLE_MASK | E1000_IMS_ECCER);
 	} else {
@@ -2703,8 +2693,7 @@ static int e1000e_poll(struct napi_struct *napi, int weight)
 		napi_complete_done(napi, work_done);
 		if (!test_bit(__E1000_DOWN, &adapter->state)) {
 			if (adapter->msix_entries)
-				ew32(IMS, adapter->rx_ring->ims_val |
-				     E1000_IMS_OTHER);
+				ew32(IMS, adapter->rx_ring->ims_val);
 			else
 				e1000_irq_enable(adapter);
 		}
@@ -5100,7 +5089,7 @@ static bool e1000e_has_link(struct e1000_adapter *adapter)
 	case e1000_media_type_copper:
 		if (hw->mac.get_link_status) {
 			ret_val = hw->mac.ops.check_for_link(hw);
-			link_active = ret_val > 0;
+			link_active = !hw->mac.get_link_status;
 		} else {
 			link_active = true;
 		}
@@ -449,6 +449,7 @@ const char *mlx5_command_str(int command)
 	MLX5_COMMAND_STR_CASE(SET_HCA_CAP);
 	MLX5_COMMAND_STR_CASE(QUERY_ISSI);
 	MLX5_COMMAND_STR_CASE(SET_ISSI);
+	MLX5_COMMAND_STR_CASE(SET_DRIVER_VERSION);
 	MLX5_COMMAND_STR_CASE(CREATE_MKEY);
 	MLX5_COMMAND_STR_CASE(QUERY_MKEY);
 	MLX5_COMMAND_STR_CASE(DESTROY_MKEY);
@@ -132,11 +132,11 @@ void mlx5_add_device(struct mlx5_interface *intf, struct mlx5_priv *priv)
 	delayed_event_start(priv);
 
 	dev_ctx->context = intf->add(dev);
-	set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
-	if (intf->attach)
-		set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
-
 	if (dev_ctx->context) {
+		set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
+		if (intf->attach)
+			set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
+
 		spin_lock_irq(&priv->ctx_lock);
 		list_add_tail(&dev_ctx->list, &priv->ctx_list);
@@ -211,12 +211,17 @@ static void mlx5_attach_interface(struct mlx5_interface *intf, struct mlx5_priv
 	if (intf->attach) {
 		if (test_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state))
 			goto out;
-		intf->attach(dev, dev_ctx->context);
+		if (intf->attach(dev, dev_ctx->context))
+			goto out;
+
 		set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
 	} else {
 		if (test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state))
 			goto out;
 		dev_ctx->context = intf->add(dev);
+		if (!dev_ctx->context)
+			goto out;
+
 		set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
 	}
@@ -557,6 +557,7 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports)
 	if (err)
 		goto miss_rule_err;
 
+	kvfree(flow_group_in);
 	return 0;
 
 miss_rule_err:
@@ -333,9 +333,17 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev)
 	add_timer(&health->timer);
 }
 
-void mlx5_stop_health_poll(struct mlx5_core_dev *dev)
+void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health)
 {
 	struct mlx5_core_health *health = &dev->priv.health;
+	unsigned long flags;
+
+	if (disable_health) {
+		spin_lock_irqsave(&health->wq_lock, flags);
+		set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
+		set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
+		spin_unlock_irqrestore(&health->wq_lock, flags);
+	}
 
 	del_timer_sync(&health->timer);
 }
@@ -857,8 +857,10 @@ static int mlx5_pci_init(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
 	priv->numa_node = dev_to_node(&dev->pdev->dev);
 
 	priv->dbg_root = debugfs_create_dir(dev_name(&pdev->dev), mlx5_debugfs_root);
-	if (!priv->dbg_root)
+	if (!priv->dbg_root) {
+		dev_err(&pdev->dev, "Cannot create debugfs dir, aborting\n");
 		return -ENOMEM;
+	}
 
 	err = mlx5_pci_enable_device(dev);
 	if (err) {
@@ -907,7 +909,7 @@ static void mlx5_pci_close(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
 	pci_clear_master(dev->pdev);
 	release_bar(dev->pdev);
 	mlx5_pci_disable_device(dev);
-	debugfs_remove(priv->dbg_root);
+	debugfs_remove_recursive(priv->dbg_root);
 }
 
 static int mlx5_init_once(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
@@ -1227,7 +1229,7 @@ err_cleanup_once:
 	mlx5_cleanup_once(dev);
 
 err_stop_poll:
-	mlx5_stop_health_poll(dev);
+	mlx5_stop_health_poll(dev, boot);
 	if (mlx5_cmd_teardown_hca(dev)) {
 		dev_err(&dev->pdev->dev, "tear_down_hca failed, skip cleanup\n");
 		goto out_err;
@@ -1286,7 +1288,7 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
 	mlx5_free_irq_vectors(dev);
 	if (cleanup)
 		mlx5_cleanup_once(dev);
-	mlx5_stop_health_poll(dev);
+	mlx5_stop_health_poll(dev, cleanup);
 	err = mlx5_cmd_teardown_hca(dev);
 	if (err) {
 		dev_err(&dev->pdev->dev, "tear_down_hca failed, skip cleanup\n");
@@ -1548,7 +1550,7 @@ static int mlx5_try_fast_unload(struct mlx5_core_dev *dev)
 	 * with the HCA, so the health polll is no longer needed.
 	 */
 	mlx5_drain_health_wq(dev);
-	mlx5_stop_health_poll(dev);
+	mlx5_stop_health_poll(dev, false);
 
 	ret = mlx5_cmd_force_teardown_hca(dev);
 	if (ret) {
@@ -1087,7 +1087,7 @@ static bool nfp_net_xdp_complete(struct nfp_net_tx_ring *tx_ring)
  * @dp:		NFP Net data path struct
  * @tx_ring:	TX ring structure
  *
- * Assumes that the device is stopped
+ * Assumes that the device is stopped, must be idempotent.
  */
 static void
 nfp_net_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring)
@@ -1289,13 +1289,18 @@ static void nfp_net_rx_give_one(const struct nfp_net_dp *dp,
  * nfp_net_rx_ring_reset() - Reflect in SW state of freelist after disable
  * @rx_ring:	RX ring structure
  *
- * Warning: Do *not* call if ring buffers were never put on the FW freelist
- *	    (i.e. device was not enabled)!
+ * Assumes that the device is stopped, must be idempotent.
  */
 static void nfp_net_rx_ring_reset(struct nfp_net_rx_ring *rx_ring)
 {
 	unsigned int wr_idx, last_idx;
 
+	/* wr_p == rd_p means ring was never fed FL bufs.  RX rings are always
+	 * kept at cnt - 1 FL bufs.
+	 */
+	if (rx_ring->wr_p == 0 && rx_ring->rd_p == 0)
+		return;
+
 	/* Move the empty entry to the end of the list */
 	wr_idx = D_IDX(rx_ring, rx_ring->wr_p);
 	last_idx = rx_ring->cnt - 1;
@@ -2505,6 +2510,8 @@ static void nfp_net_vec_clear_ring_data(struct nfp_net *nn, unsigned int idx)
 /**
  * nfp_net_clear_config_and_disable() - Clear control BAR and disable NFP
  * @nn:      NFP Net device to reconfigure
+ *
+ * Warning: must be fully idempotent.
  */
 static void nfp_net_clear_config_and_disable(struct nfp_net *nn)
 {
@@ -45,34 +45,33 @@ qcaspi_read_register(struct qcaspi *qca, u16 reg, u16 *result)
 {
 	__be16 rx_data;
 	__be16 tx_data;
-	struct spi_transfer *transfer;
-	struct spi_message *msg;
+	struct spi_transfer transfer[2];
+	struct spi_message msg;
 	int ret;
 
+	memset(transfer, 0, sizeof(transfer));
+
+	spi_message_init(&msg);
+
 	tx_data = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_INTERNAL | reg);
+	*result = 0;
+
+	transfer[0].tx_buf = &tx_data;
+	transfer[0].len = QCASPI_CMD_LEN;
+	transfer[1].rx_buf = &rx_data;
+	transfer[1].len = QCASPI_CMD_LEN;
+
+	spi_message_add_tail(&transfer[0], &msg);
 
 	if (qca->legacy_mode) {
-		msg = &qca->spi_msg1;
-		transfer = &qca->spi_xfer1;
-		transfer->tx_buf = &tx_data;
-		transfer->rx_buf = NULL;
-		transfer->len = QCASPI_CMD_LEN;
-		spi_sync(qca->spi_dev, msg);
-	} else {
-		msg = &qca->spi_msg2;
-		transfer = &qca->spi_xfer2[0];
-		transfer->tx_buf = &tx_data;
-		transfer->rx_buf = NULL;
-		transfer->len = QCASPI_CMD_LEN;
-		transfer = &qca->spi_xfer2[1];
+		spi_sync(qca->spi_dev, &msg);
+		spi_message_init(&msg);
 	}
-	transfer->tx_buf = NULL;
-	transfer->rx_buf = &rx_data;
-	transfer->len = QCASPI_CMD_LEN;
-	ret = spi_sync(qca->spi_dev, msg);
+	spi_message_add_tail(&transfer[1], &msg);
+	ret = spi_sync(qca->spi_dev, &msg);
 
 	if (!ret)
-		ret = msg->status;
+		ret = msg.status;
 
 	if (ret)
 		qcaspi_spi_error(qca);
@@ -86,35 +85,32 @@ int
 qcaspi_write_register(struct qcaspi *qca, u16 reg, u16 value)
 {
 	__be16 tx_data[2];
-	struct spi_transfer *transfer;
-	struct spi_message *msg;
+	struct spi_transfer transfer[2];
+	struct spi_message msg;
 	int ret;
 
+	memset(&transfer, 0, sizeof(transfer));
+
+	spi_message_init(&msg);
+
 	tx_data[0] = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_INTERNAL | reg);
 	tx_data[1] = cpu_to_be16(value);
 
+	transfer[0].tx_buf = &tx_data[0];
+	transfer[0].len = QCASPI_CMD_LEN;
+	transfer[1].tx_buf = &tx_data[1];
+	transfer[1].len = QCASPI_CMD_LEN;
+
+	spi_message_add_tail(&transfer[0], &msg);
 	if (qca->legacy_mode) {
-		msg = &qca->spi_msg1;
-		transfer = &qca->spi_xfer1;
-		transfer->tx_buf = &tx_data[0];
-		transfer->rx_buf = NULL;
-		transfer->len = QCASPI_CMD_LEN;
-		spi_sync(qca->spi_dev, msg);
-	} else {
-		msg = &qca->spi_msg2;
-		transfer = &qca->spi_xfer2[0];
-		transfer->tx_buf = &tx_data[0];
-		transfer->rx_buf = NULL;
-		transfer->len = QCASPI_CMD_LEN;
-		transfer = &qca->spi_xfer2[1];
+		spi_sync(qca->spi_dev, &msg);
+		spi_message_init(&msg);
 	}
-	transfer->tx_buf = &tx_data[1];
-	transfer->rx_buf = NULL;
-	transfer->len = QCASPI_CMD_LEN;
-	ret = spi_sync(qca->spi_dev, msg);
+	spi_message_add_tail(&transfer[1], &msg);
+	ret = spi_sync(qca->spi_dev, &msg);
 
 	if (!ret)
-		ret = msg->status;
+		ret = msg.status;
 
 	if (ret)
 		qcaspi_spi_error(qca);
 
@@ -99,22 +99,24 @@ static u32
 qcaspi_write_burst(struct qcaspi *qca, u8 *src, u32 len)
 {
 	__be16 cmd;
-	struct spi_message *msg = &qca->spi_msg2;
-	struct spi_transfer *transfer = &qca->spi_xfer2[0];
+	struct spi_message msg;
+	struct spi_transfer transfer[2];
 	int ret;
 
+	memset(&transfer, 0, sizeof(transfer));
+	spi_message_init(&msg);
+
 	cmd = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_EXTERNAL);
-	transfer->tx_buf = &cmd;
-	transfer->rx_buf = NULL;
-	transfer->len = QCASPI_CMD_LEN;
-	transfer = &qca->spi_xfer2[1];
-	transfer->tx_buf = src;
-	transfer->rx_buf = NULL;
-	transfer->len = len;
+	transfer[0].tx_buf = &cmd;
+	transfer[0].len = QCASPI_CMD_LEN;
+	transfer[1].tx_buf = src;
+	transfer[1].len = len;
 
-	ret = spi_sync(qca->spi_dev, msg);
+	spi_message_add_tail(&transfer[0], &msg);
+	spi_message_add_tail(&transfer[1], &msg);
+	ret = spi_sync(qca->spi_dev, &msg);
 
-	if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
+	if (ret || (msg.actual_length != QCASPI_CMD_LEN + len)) {
 		qcaspi_spi_error(qca);
 		return 0;
 	}
@@ -125,17 +127,20 @@ qcaspi_write_burst(struct qcaspi *qca, u8 *src, u32 len)
 static u32
 qcaspi_write_legacy(struct qcaspi *qca, u8 *src, u32 len)
 {
-	struct spi_message *msg = &qca->spi_msg1;
-	struct spi_transfer *transfer = &qca->spi_xfer1;
+	struct spi_message msg;
+	struct spi_transfer transfer;
 	int ret;
 
-	transfer->tx_buf = src;
-	transfer->rx_buf = NULL;
-	transfer->len = len;
+	memset(&transfer, 0, sizeof(transfer));
+	spi_message_init(&msg);
 
-	ret = spi_sync(qca->spi_dev, msg);
+	transfer.tx_buf = src;
+	transfer.len = len;
 
-	if (ret || (msg->actual_length != len)) {
+	spi_message_add_tail(&transfer, &msg);
+	ret = spi_sync(qca->spi_dev, &msg);
+
+	if (ret || (msg.actual_length != len)) {
 		qcaspi_spi_error(qca);
 		return 0;
 	}
@@ -146,23 +151,25 @@ qcaspi_write_legacy(struct qcaspi *qca, u8 *src, u32 len)
 static u32
 qcaspi_read_burst(struct qcaspi *qca, u8 *dst, u32 len)
 {
-	struct spi_message *msg = &qca->spi_msg2;
+	struct spi_message msg;
 	__be16 cmd;
-	struct spi_transfer *transfer = &qca->spi_xfer2[0];
+	struct spi_transfer transfer[2];
 	int ret;
 
+	memset(&transfer, 0, sizeof(transfer));
+	spi_message_init(&msg);
+
 	cmd = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_EXTERNAL);
-	transfer->tx_buf = &cmd;
-	transfer->rx_buf = NULL;
-	transfer->len = QCASPI_CMD_LEN;
-	transfer = &qca->spi_xfer2[1];
-	transfer->tx_buf = NULL;
-	transfer->rx_buf = dst;
-	transfer->len = len;
+	transfer[0].tx_buf = &cmd;
+	transfer[0].len = QCASPI_CMD_LEN;
+	transfer[1].rx_buf = dst;
+	transfer[1].len = len;
 
-	ret = spi_sync(qca->spi_dev, msg);
+	spi_message_add_tail(&transfer[0], &msg);
+	spi_message_add_tail(&transfer[1], &msg);
+	ret = spi_sync(qca->spi_dev, &msg);
 
-	if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
+	if (ret || (msg.actual_length != QCASPI_CMD_LEN + len)) {
 		qcaspi_spi_error(qca);
 		return 0;
 	}
@@ -173,17 +180,20 @@ qcaspi_read_burst(struct qcaspi *qca, u8 *dst, u32 len)
 static u32
 qcaspi_read_legacy(struct qcaspi *qca, u8 *dst, u32 len)
 {
-	struct spi_message *msg = &qca->spi_msg1;
-	struct spi_transfer *transfer = &qca->spi_xfer1;
+	struct spi_message msg;
+	struct spi_transfer transfer;
 	int ret;
 
-	transfer->tx_buf = NULL;
-	transfer->rx_buf = dst;
-	transfer->len = len;
+	memset(&transfer, 0, sizeof(transfer));
+	spi_message_init(&msg);
 
-	ret = spi_sync(qca->spi_dev, msg);
+	transfer.rx_buf = dst;
+	transfer.len = len;
 
-	if (ret || (msg->actual_length != len)) {
+	spi_message_add_tail(&transfer, &msg);
+	ret = spi_sync(qca->spi_dev, &msg);
+
+	if (ret || (msg.actual_length != len)) {
 		qcaspi_spi_error(qca);
 		return 0;
 	}
@@ -195,19 +205,23 @@ static int
 qcaspi_tx_cmd(struct qcaspi *qca, u16 cmd)
 {
 	__be16 tx_data;
-	struct spi_message *msg = &qca->spi_msg1;
-	struct spi_transfer *transfer = &qca->spi_xfer1;
+	struct spi_message msg;
+	struct spi_transfer transfer;
 	int ret;
 
-	tx_data = cpu_to_be16(cmd);
-	transfer->len = sizeof(tx_data);
-	transfer->tx_buf = &tx_data;
-	transfer->rx_buf = NULL;
+	memset(&transfer, 0, sizeof(transfer));
+
+	spi_message_init(&msg);
 
-	ret = spi_sync(qca->spi_dev, msg);
+	tx_data = cpu_to_be16(cmd);
+	transfer.len = sizeof(cmd);
+	transfer.tx_buf = &tx_data;
+	spi_message_add_tail(&transfer, &msg);
+
+	ret = spi_sync(qca->spi_dev, &msg);
 
 	if (!ret)
-		ret = msg->status;
+		ret = msg.status;
 
 	if (ret)
 		qcaspi_spi_error(qca);
@@ -836,16 +850,6 @@ qcaspi_netdev_setup(struct net_device *dev)
 	qca = netdev_priv(dev);
 	memset(qca, 0, sizeof(struct qcaspi));
 
-	memset(&qca->spi_xfer1, 0, sizeof(struct spi_transfer));
-	memset(&qca->spi_xfer2, 0, sizeof(struct spi_transfer) * 2);
-
-	spi_message_init(&qca->spi_msg1);
-	spi_message_add_tail(&qca->spi_xfer1, &qca->spi_msg1);
-
-	spi_message_init(&qca->spi_msg2);
-	spi_message_add_tail(&qca->spi_xfer2[0], &qca->spi_msg2);
-	spi_message_add_tail(&qca->spi_xfer2[1], &qca->spi_msg2);
-
 	memset(&qca->txr, 0, sizeof(qca->txr));
 	qca->txr.count = TX_RING_MAX_LEN;
 }
@@ -83,11 +83,6 @@ struct qcaspi {
 	struct tx_ring txr;
 	struct qcaspi_stats stats;
 
-	struct spi_message spi_msg1;
-	struct spi_message spi_msg2;
-	struct spi_transfer spi_xfer1;
-	struct spi_transfer spi_xfer2[2];
-
 	u8 *rx_buffer;
 	u32 buffer_size;
 	u8 sync;
@@ -1299,7 +1299,7 @@ out:
 	/* setting up multiple channels failed */
 	net_device->max_chn = 1;
 	net_device->num_chn = 1;
-	return 0;
+	return net_device;
 
 err_dev_remv:
 	rndis_filter_device_remove(dev, net_device);
@@ -192,7 +192,7 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
 	priv->ucc_pram_offset = qe_muram_alloc(sizeof(struct ucc_hdlc_param),
 				ALIGNMENT_OF_UCC_HDLC_PRAM);
 
-	if (priv->ucc_pram_offset < 0) {
+	if (IS_ERR_VALUE(priv->ucc_pram_offset)) {
 		dev_err(priv->dev, "Can not allocate MURAM for hdlc parameter.\n");
 		ret = -ENOMEM;
 		goto free_tx_bd;
@@ -228,14 +228,14 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
 
 	/* Alloc riptr, tiptr */
 	riptr = qe_muram_alloc(32, 32);
-	if (riptr < 0) {
+	if (IS_ERR_VALUE(riptr)) {
 		dev_err(priv->dev, "Cannot allocate MURAM mem for Receive internal temp data pointer\n");
 		ret = -ENOMEM;
 		goto free_tx_skbuff;
 	}
 
 	tiptr = qe_muram_alloc(32, 32);
-	if (tiptr < 0) {
+	if (IS_ERR_VALUE(tiptr)) {
 		dev_err(priv->dev, "Cannot allocate MURAM mem for Transmit internal temp data pointer\n");
 		ret = -ENOMEM;
 		goto free_riptr;
@@ -87,8 +87,7 @@ struct netfront_cb {
 /* IRQ name is queue name with "-tx" or "-rx" appended */
 #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
 
-static DECLARE_WAIT_QUEUE_HEAD(module_load_q);
-static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
+static DECLARE_WAIT_QUEUE_HEAD(module_wq);
 
 struct netfront_stats {
 	u64 packets;
@@ -1331,11 +1330,11 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	netif_carrier_off(netdev);
 
 	xenbus_switch_state(dev, XenbusStateInitialising);
-	wait_event(module_load_q,
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateClosed &&
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateUnknown);
+	wait_event(module_wq,
+		   xenbus_read_driver_state(dev->otherend) !=
+		   XenbusStateClosed &&
+		   xenbus_read_driver_state(dev->otherend) !=
+		   XenbusStateUnknown);
 	return netdev;
 
 exit:
@@ -1603,6 +1602,7 @@ static int xennet_init_queue(struct netfront_queue *queue)
 {
 	unsigned short i;
 	int err = 0;
+	char *devid;
 
 	spin_lock_init(&queue->tx_lock);
 	spin_lock_init(&queue->rx_lock);
@@ -1610,8 +1610,9 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	setup_timer(&queue->rx_refill_timer, rx_refill_timeout,
 		    (unsigned long)queue);
 
-	snprintf(queue->name, sizeof(queue->name), "%s-q%u",
-		 queue->info->netdev->name, queue->id);
+	devid = strrchr(queue->info->xbdev->nodename, '/') + 1;
+	snprintf(queue->name, sizeof(queue->name), "vif%s-q%u",
+		 devid, queue->id);
 
 	/* Initialise tx_skbs as a free chain containing every entry. */
 	queue->tx_skb_freelist = 0;
@@ -2007,15 +2008,14 @@ static void netback_changed(struct xenbus_device *dev,
 
 	dev_dbg(&dev->dev, "%s\n", xenbus_strstate(backend_state));
 
+	wake_up_all(&module_wq);
+
 	switch (backend_state) {
 	case XenbusStateInitialising:
 	case XenbusStateInitialised:
 	case XenbusStateReconfiguring:
 	case XenbusStateReconfigured:
+	case XenbusStateUnknown:
 		break;
 
-	case XenbusStateUnknown:
-		wake_up_all(&module_unload_q);
-		break;
-
 	case XenbusStateInitWait:
@@ -2031,12 +2031,10 @@ static void netback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		wake_up_all(&module_unload_q);
 		if (dev->state == XenbusStateClosed)
 			break;
 		/* Missed the backend's CLOSING state -- fallthrough */
 	case XenbusStateClosing:
-		wake_up_all(&module_unload_q);
 		xenbus_frontend_closed(dev);
 		break;
 	}
@@ -2144,14 +2142,14 @@ static int xennet_remove(struct xenbus_device *dev)
 
 	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
 		xenbus_switch_state(dev, XenbusStateClosing);
-		wait_event(module_unload_q,
+		wait_event(module_wq,
			   xenbus_read_driver_state(dev->otherend) ==
			   XenbusStateClosing ||
			   xenbus_read_driver_state(dev->otherend) ==
			   XenbusStateUnknown);
 
 		xenbus_switch_state(dev, XenbusStateClosed);
-		wait_event(module_unload_q,
+		wait_event(module_wq,
			   xenbus_read_driver_state(dev->otherend) ==
			   XenbusStateClosed ||
			   xenbus_read_driver_state(dev->otherend) ==
@@ -1728,6 +1728,8 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
 		nvme_stop_queues(&ctrl->ctrl);
 		blk_mq_tagset_busy_iter(&ctrl->tag_set,
 					nvme_cancel_request, &ctrl->ctrl);
+		if (shutdown)
+			nvme_start_queues(&ctrl->ctrl);
 		nvme_rdma_destroy_io_queues(ctrl, shutdown);
 	}
@@ -286,12 +286,16 @@ static int bpp_probe(struct platform_device *op)
 
 	ops = kmemdup(&parport_sunbpp_ops, sizeof(struct parport_operations),
 		      GFP_KERNEL);
-	if (!ops)
+	if (!ops) {
+		err = -ENOMEM;
 		goto out_unmap;
+	}
 
 	dprintk(("register_port\n"));
-	if (!(p = parport_register_port((unsigned long)base, irq, dma, ops)))
+	if (!(p = parport_register_port((unsigned long)base, irq, dma, ops))) {
+		err = -ENOMEM;
 		goto out_free_ops;
+	}
 
 	p->size = size;
 	p->dev = &op->dev;
@@ -878,6 +878,7 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
 	const char *grpname;
 	const char **fngrps;
 	int ret, npins;
+	int gsel, fsel;
 
 	npins = rza1_dt_node_pin_count(np);
 	if (npins < 0) {
@@ -927,18 +928,19 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
 	fngrps[0] = grpname;
 
 	mutex_lock(&rza1_pctl->mutex);
-	ret = pinctrl_generic_add_group(pctldev, grpname, grpins, npins,
-					NULL);
-	if (ret) {
+	gsel = pinctrl_generic_add_group(pctldev, grpname, grpins, npins,
+					 NULL);
+	if (gsel < 0) {
 		mutex_unlock(&rza1_pctl->mutex);
-		return ret;
+		return gsel;
 	}
 
-	ret = pinmux_generic_add_function(pctldev, grpname, fngrps, 1,
-					  mux_confs);
-	if (ret)
+	fsel = pinmux_generic_add_function(pctldev, grpname, fngrps, 1,
+					   mux_confs);
+	if (fsel < 0) {
+		ret = fsel;
 		goto remove_group;
+	}
 	mutex_unlock(&rza1_pctl->mutex);
 
 	dev_info(rza1_pctl->dev, "Parsed function and group %s with %d pins\n",
 		 grpname, npins);
@@ -955,15 +957,15 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
 	(*map)->data.mux.group = np->name;
 	(*map)->data.mux.function = np->name;
 	*num_maps = 1;
 	mutex_unlock(&rza1_pctl->mutex);
 
 	return 0;
 
 remove_function:
 	mutex_lock(&rza1_pctl->mutex);
-	pinmux_generic_remove_last_function(pctldev);
+	pinmux_generic_remove_function(pctldev, fsel);
 
 remove_group:
-	pinctrl_generic_remove_last_group(pctldev);
+	pinctrl_generic_remove_group(pctldev, gsel);
 	mutex_unlock(&rza1_pctl->mutex);
 
 	dev_info(rza1_pctl->dev, "Unable to parse function and group %s\n",
@@ -238,22 +238,30 @@ static int msm_config_group_get(struct pinctrl_dev *pctldev,
 	/* Convert register value to pinconf value */
 	switch (param) {
 	case PIN_CONFIG_BIAS_DISABLE:
-		arg = arg == MSM_NO_PULL;
+		if (arg != MSM_NO_PULL)
+			return -EINVAL;
+		arg = 1;
 		break;
 	case PIN_CONFIG_BIAS_PULL_DOWN:
-		arg = arg == MSM_PULL_DOWN;
+		if (arg != MSM_PULL_DOWN)
+			return -EINVAL;
+		arg = 1;
 		break;
 	case PIN_CONFIG_BIAS_BUS_HOLD:
 		if (pctrl->soc->pull_no_keeper)
 			return -ENOTSUPP;
 
-		arg = arg == MSM_KEEPER;
+		if (arg != MSM_KEEPER)
+			return -EINVAL;
+		arg = 1;
 		break;
 	case PIN_CONFIG_BIAS_PULL_UP:
 		if (pctrl->soc->pull_no_keeper)
 			arg = arg == MSM_PULL_UP_NO_KEEPER;
 		else
 			arg = arg == MSM_PULL_UP;
+		if (!arg)
+			return -EINVAL;
 		break;
 	case PIN_CONFIG_DRIVE_STRENGTH:
 		arg = msm_regval_to_drive(arg);
@@ -390,31 +390,47 @@ static int pmic_gpio_config_get(struct pinctrl_dev *pctldev,
 
 	switch (param) {
 	case PIN_CONFIG_DRIVE_PUSH_PULL:
-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_CMOS;
+		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_CMOS)
+			return -EINVAL;
+		arg = 1;
 		break;
 	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_OPEN_DRAIN_NMOS;
+		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_OPEN_DRAIN_NMOS)
+			return -EINVAL;
+		arg = 1;
 		break;
 	case PIN_CONFIG_DRIVE_OPEN_SOURCE:
-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_OPEN_DRAIN_PMOS;
+		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_OPEN_DRAIN_PMOS)
+			return -EINVAL;
+		arg = 1;
 		break;
 	case PIN_CONFIG_BIAS_PULL_DOWN:
-		arg = pad->pullup == PMIC_GPIO_PULL_DOWN;
+		if (pad->pullup != PMIC_GPIO_PULL_DOWN)
+			return -EINVAL;
+		arg = 1;
 		break;
 	case PIN_CONFIG_BIAS_DISABLE:
-		arg = pad->pullup = PMIC_GPIO_PULL_DISABLE;
+		if (pad->pullup != PMIC_GPIO_PULL_DISABLE)
+			return -EINVAL;
+		arg = 1;
 		break;
 	case PIN_CONFIG_BIAS_PULL_UP:
-		arg = pad->pullup == PMIC_GPIO_PULL_UP_30;
+		if (pad->pullup != PMIC_GPIO_PULL_UP_30)
+			return -EINVAL;
+		arg = 1;
 		break;
 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
-		arg = !pad->is_enabled;
+		if (pad->is_enabled)
+			return -EINVAL;
+		arg = 1;
 		break;
 	case PIN_CONFIG_POWER_SOURCE:
 		arg = pad->power_source;
 		break;
 	case PIN_CONFIG_INPUT_ENABLE:
-		arg = pad->input_enabled;
+		if (!pad->input_enabled)
+			return -EINVAL;
+		arg = 1;
 		break;
 	case PIN_CONFIG_OUTPUT:
 		arg = pad->out_value;
||||
@@ -34,6 +34,7 @@
|
||||
#define TOSHIBA_ACPI_VERSION "0.24"
|
||||
#define PROC_INTERFACE_VERSION 1
|
||||
|
||||
#include <linux/compiler.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/moduleparam.h>
|
||||
@@ -1682,7 +1683,7 @@ static const struct file_operations keys_proc_fops = {
|
||||
.write = keys_proc_write,
|
||||
};
|
||||
|
||||
static int version_proc_show(struct seq_file *m, void *v)
|
||||
static int __maybe_unused version_proc_show(struct seq_file *m, void *v)
|
||||
{
|
||||
seq_printf(m, "driver: %s\n", TOSHIBA_ACPI_VERSION);
|
||||
seq_printf(m, "proc_interface: %d\n", PROC_INTERFACE_VERSION);
|
||||
|
||||
@@ -80,7 +80,7 @@ static int imx7_reset_set(struct reset_controller_dev *rcdev,
|
||||
{
|
||||
struct imx7_src *imx7src = to_imx7_src(rcdev);
|
||||
const struct imx7_src_signal *signal = &imx7_src_signals[id];
|
||||
unsigned int value = 0;
|
||||
unsigned int value = assert ? signal->bit : 0;
|
||||
|
||||
switch (id) {
|
||||
case IMX7_RESET_PCIEPHY:
|
||||
|
||||
@@ -164,6 +164,10 @@ static int bq4802_probe(struct platform_device *pdev)
|
||||
} else if (p->r->flags & IORESOURCE_MEM) {
|
||||
p->regs = devm_ioremap(&pdev->dev, p->r->start,
|
||||
resource_size(p->r));
|
||||
if (!p->regs){
|
||||
err = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
p->read = bq4802_read_mem;
|
||||
p->write = bq4802_write_mem;
|
||||
} else {
|
||||
|
||||
@@ -3507,13 +3507,14 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
|
||||
qdio_flags = QDIO_FLAG_SYNC_OUTPUT;
|
||||
if (atomic_read(&queue->set_pci_flags_count))
|
||||
qdio_flags |= QDIO_FLAG_PCI_OUT;
|
||||
atomic_add(count, &queue->used_buffers);
|
||||
|
||||
rc = do_QDIO(CARD_DDEV(queue->card), qdio_flags,
|
||||
queue->queue_no, index, count);
|
||||
if (queue->card->options.performance_stats)
|
||||
queue->card->perf_stats.outbound_do_qdio_time +=
|
||||
qeth_get_micros() -
|
||||
queue->card->perf_stats.outbound_do_qdio_start_time;
|
||||
atomic_add(count, &queue->used_buffers);
|
||||
if (rc) {
|
||||
queue->card->stats.tx_errors += count;
|
||||
/* ignore temporary SIGA errors without busy condition */
|
||||
|
||||
@@ -423,6 +423,7 @@ static ssize_t qeth_dev_layer2_store(struct device *dev,
|
||||
if (card->discipline) {
|
||||
card->discipline->remove(card->gdev);
|
||||
qeth_core_free_discipline(card);
|
||||
card->options.layer2 = -1;
|
||||
}
|
||||
|
||||
rc = qeth_core_load_discipline(card, newdis);
|
||||
|
||||
@@ -294,9 +294,11 @@ static void fc_disc_done(struct fc_disc *disc, enum fc_disc_event event)
 	 * discovery, reverify or log them in. Otherwise, log them out.
	 * Skip ports which were never discovered. These are the dNS port
	 * and ports which were created by PLOGI.
+	 *
+	 * We don't need to use the _rcu variant here as the rport list
+	 * is protected by the disc mutex which is already held on entry.
	 */
-	rcu_read_lock();
-	list_for_each_entry_rcu(rdata, &disc->rports, peers) {
+	list_for_each_entry(rdata, &disc->rports, peers) {
 		if (!kref_get_unless_zero(&rdata->kref))
 			continue;
 		if (rdata->disc_id) {
@@ -307,7 +309,6 @@ static void fc_disc_done(struct fc_disc *disc, enum fc_disc_event event)
 		}
 		kref_put(&rdata->kref, fc_rport_destroy);
 	}
-	rcu_read_unlock();
 	mutex_unlock(&disc->disc_mutex);
 	disc->disc_callback(lport, event);
 	mutex_lock(&disc->disc_mutex);

@@ -442,16 +442,16 @@ int bcm2835_audio_open(struct bcm2835_alsa_stream *alsa_stream)
 	my_workqueue_init(alsa_stream);
 
 	ret = bcm2835_audio_open_connection(alsa_stream);
-	if (ret) {
-		ret = -1;
-		goto exit;
-	}
+	if (ret)
+		goto free_wq;
+
 	instance = alsa_stream->instance;
 	LOG_DBG(" instance (%p)\n", instance);
 
 	if (mutex_lock_interruptible(&instance->vchi_mutex)) {
 		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n", instance->num_connections);
-		return -EINTR;
+		ret = -EINTR;
+		goto free_wq;
 	}
 	vchi_service_use(instance->vchi_handle[0]);
 
@@ -474,7 +474,11 @@ int bcm2835_audio_open(struct bcm2835_alsa_stream *alsa_stream)
 unlock:
 	vchi_service_release(instance->vchi_handle[0]);
 	mutex_unlock(&instance->vchi_mutex);
-exit:
+
+free_wq:
+	if (ret)
+		destroy_workqueue(alsa_stream->my_wq);
+
 	return ret;
 }

@@ -580,6 +580,7 @@ static int start_streaming(struct vb2_queue *vq, unsigned int count)
 static void stop_streaming(struct vb2_queue *vq)
 {
 	int ret;
+	unsigned long timeout;
 	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
 
 	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n",
@@ -605,10 +606,10 @@ static void stop_streaming(struct vb2_queue *vq)
 	       sizeof(dev->capture.frame_count));
 
 	/* wait for last frame to complete */
-	ret = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
-	if (ret <= 0)
+	timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
+	if (timeout == 0)
 		v4l2_err(&dev->v4l2_dev,
-			 "error %d waiting for frame completion\n", ret);
+			 "timed out waiting for frame completion\n");
 
 	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
 		 "disabling connection\n");
Some files were not shown because too many files have changed in this diff.