Merge 4.14.156 into android-4.14
Changes in 4.14.156
	spi: mediatek: use correct mata->xfer_len when in fifo transfer
	tee: optee: add missing of_node_put after of_device_is_available
	Revert "OPP: Protect dev_list with opp_table lock"
	net: cdc_ncm: Signedness bug in cdc_ncm_set_dgram_size()
	idr: Fix idr_get_next race with idr_remove
	mm/memory_hotplug: don't access uninitialized memmaps in shrink_pgdat_span()
	mm/memory_hotplug: fix updating the node span
	arm64: uaccess: Ensure PAN is re-enabled after unhandled uaccess fault
	fbdev: Ditch fb_edid_add_monspecs
	net: ovs: fix return type of ndo_start_xmit function
	net: xen-netback: fix return type of ndo_start_xmit function
	ARM: dts: dra7: Enable workaround for errata i870 in PCIe host mode
	ARM: dts: omap5: enable OTG role for DWC3 controller
	net: hns3: Fix for netdev not up problem when setting mtu
	f2fs: return correct errno in f2fs_gc
	ARM: dts: sun8i: h3-h5: ir register size should be the whole memory block
	SUNRPC: Fix priority queue fairness
	IB/hfi1: Ensure ucast_dlid access doesnt exceed bounds
	iommu/io-pgtable-arm: Fix race handling in split_blk_unmap()
	kvm: arm/arm64: Fix stage2_flush_memslot for 4 level page table
	arm64/numa: Report correct memblock range for the dummy node
	ath10k: fix vdev-start timeout on error
	ata: ahci_brcm: Allow using driver or DSL SoCs
	ath9k: fix reporting calculated new FFT upper max
	usb: gadget: udc: fotg210-udc: Fix a sleep-in-atomic-context bug in fotg210_get_status()
	usb: dwc3: gadget: Check ENBLSLPM before sending ep command
	nl80211: Fix a GET_KEY reply attribute
	irqchip/irq-mvebu-icu: Fix wrong private data retrieval
	watchdog: w83627hf_wdt: Support NCT6796D, NCT6797D, NCT6798D
	KVM: PPC: Inform the userspace about TCE update failures
	dmaengine: ep93xx: Return proper enum in ep93xx_dma_chan_direction
	dmaengine: timb_dma: Use proper enum in td_prep_slave_sg
	ext4: fix build error when DX_DEBUG is defined
	clk: keystone: Enable TISCI clocks if K3_ARCH
	sunrpc: Fix connect metrics
	mei: samples: fix a signedness bug in amt_host_if_call()
	cxgb4: Use proper enum in cxgb4_dcb_handle_fw_update
	cxgb4: Use proper enum in IEEE_FAUX_SYNC
	powerpc/pseries: Fix DTL buffer registration
	powerpc/pseries: Fix how we iterate over the DTL entries
	powerpc/xive: Move a dereference below a NULL test
	ARM: dts: at91: sama5d4_xplained: fix addressable nand flash size
	ARM: dts: at91: at91sam9x5cm: fix addressable nand flash size
	mtd: rawnand: sh_flctl: Use proper enum for flctl_dma_fifo0_transfer
	PM / hibernate: Check the success of generating md5 digest before hibernation
	tools: PCI: Fix compilation warnings
	clocksource/drivers/sh_cmt: Fixup for 64-bit machines
	clocksource/drivers/sh_cmt: Fix clocksource width for 32-bit machines
	md: allow metadata updates while suspending an array - fix
	ixgbe: Fix ixgbe TX hangs with XDP_TX beyond queue limit
	i40e: Use proper enum in i40e_ndo_set_vf_link_state
	ixgbe: Fix crash with VFs and flow director on interface flap
	IB/mthca: Fix error return code in __mthca_init_one()
	IB/mlx4: Avoid implicit enumerated type conversion
	ACPICA: Never run _REG on system_memory and system_IO
	powerpc/time: Use clockevents_register_device(), fixing an issue with large decrementer
	ata: ep93xx: Use proper enums for directions
	media: rc: ir-rc6-decoder: enable toggle bit for Kathrein RCU-676 remote
	media: pxa_camera: Fix check for pdev->dev.of_node
	media: i2c: adv748x: Support probing a single output
	ALSA: hda/sigmatel - Disable automute for Elo VuPoint
	KVM: PPC: Book3S PR: Exiting split hack mode needs to fixup both PC and LR
	USB: serial: cypress_m8: fix interrupt-out transfer length
	mtd: physmap_of: Release resources on error
	cpu/SMT: State SMT is disabled even with nosmt and without "=force"
	brcmfmac: reduce timeout for action frame scan
	brcmfmac: fix full timeout waiting for action frame on-channel tx
	qtnfmac: pass sgi rate info flag to wireless core
	qtnfmac: drop error reports for out-of-bounds key indexes
	clk: samsung: exynos5420: Define CLK_SECKEY gate clock only or Exynos5420
	clk: samsung: Use clk_hw API for calling clk framework from clk notifiers
	i2c: brcmstb: Allow enabling the driver on DSL SoCs
	NFSv4.x: fix lock recovery during delegation recall
	dmaengine: ioat: fix prototype of ioat_enumerate_channels
	media: cec-gpio: select correct Signal Free Time
	Input: st1232 - set INPUT_PROP_DIRECT property
	Input: silead - try firmware reload after unsuccessful resume
	remoteproc: Check for NULL firmwares in sysfs interface
	kexec: Allocate decrypted control pages for kdump if SME is enabled
	x86/olpc: Fix build error with CONFIG_MFD_CS5535=m
	dmaengine: rcar-dmac: set scatter/gather max segment size
	crypto: mxs-dcp - Fix SHA null hashes and output length
	crypto: mxs-dcp - Fix AES issues
	xfrm: use correct size to initialise sp->ovec
	ACPI / SBS: Fix rare oops when removing modules
	iwlwifi: mvm: don't send keys when entering D3
	x86/fsgsbase/64: Fix ptrace() to read the FS/GS base accurately
	mmc: tmio: Fix SCC error detection
	fbdev: sbuslib: use checked version of put_user()
	fbdev: sbuslib: integer overflow in sbusfb_ioctl_helper()
	reset: Fix potential use-after-free in __of_reset_control_get()
	bcache: recal cached_dev_sectors on detach
	media: dw9714: Fix error handling in probe function
	s390/kasan: avoid vdso instrumentation
	proc/vmcore: Fix i386 build error of missing copy_oldmem_page_encrypted()
	backlight: lm3639: Unconditionally call led_classdev_unregister
	mfd: ti_am335x_tscadc: Keep ADC interface on if child is wakeup capable
	printk: Give error on attempt to set log buffer length to over 2G
	media: isif: fix a NULL pointer dereference bug
	GFS2: Flush the GFS2 delete workqueue before stopping the kernel threads
	media: cx231xx: fix potential sign-extension overflow on large shift
	x86/kexec: Correct KEXEC_BACKUP_SRC_END off-by-one error
	gpio: syscon: Fix possible NULL ptr usage
	spi: fsl-lpspi: Prevent FIFO under/overrun by default
	pinctrl: gemini: Mask and set properly
	spi: spidev: Fix OF tree warning logic
	ARM: 8802/1: Call syscall_trace_exit even when system call skipped
	orangefs: rate limit the client not running info message
	pinctrl: gemini: Fix up TVC clock group
	hwmon: (pwm-fan) Silence error on probe deferral
	hwmon: (ina3221) Fix INA3221_CONFIG_MODE macros
	netfilter: nft_compat: do not dump private area
	misc: cxl: Fix possible null pointer dereference
	mac80211: minstrel: fix using short preamble CCK rates on HT clients
	mac80211: minstrel: fix CCK rate group streams value
	mac80211: minstrel: fix sampling/reporting of CCK rates in HT mode
	spi: rockchip: initialize dma_slave_config properly
	mlxsw: spectrum_switchdev: Check notification relevance based on upper device
	ARM: dts: omap5: Fix dual-role mode on Super-Speed port
	tools: PCI: Fix broken pcitest compilation
	powerpc/time: Fix clockevent_decrementer initalisation for PR KVM
	mmc: tmio: fix SCC error handling to avoid false positive CRC error
	Linux 4.14.156

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 14
-SUBLEVEL = 155
+SUBLEVEL = 156
 EXTRAVERSION =
 NAME = Petit Gorille
@@ -240,7 +240,7 @@

 			rootfs@800000 {
 				label = "rootfs";
-				reg = <0x800000 0x0f800000>;
+				reg = <0x800000 0x1f800000>;
 			};
 		};
 	};
@@ -88,7 +88,7 @@

 			rootfs@800000 {
 				label = "rootfs";
-				reg = <0x800000 0x1f800000>;
+				reg = <0x800000 0x0f800000>;
 			};
 		};
 	};
@@ -314,6 +314,7 @@
 					<0 0 0 2 &pcie1_intc 2>,
 					<0 0 0 3 &pcie1_intc 3>,
 					<0 0 0 4 &pcie1_intc 4>;
+				ti,syscon-unaligned-access = <&scm_conf1 0x14 1>;
 				status = "disabled";
 				pcie1_intc: interrupt-controller {
 					interrupt-controller;
@@ -367,6 +368,7 @@
 					<0 0 0 2 &pcie2_intc 2>,
 					<0 0 0 3 &pcie2_intc 3>,
 					<0 0 0 4 &pcie2_intc 4>;
+				ti,syscon-unaligned-access = <&scm_conf1 0x14 2>;
 				pcie2_intc: interrupt-controller {
 					interrupt-controller;
 					#address-cells = <0>;
@@ -694,6 +694,11 @@
 	vbus-supply = <&smps10_out1_reg>;
 };

+&dwc3 {
+	extcon = <&extcon_usb3>;
+	dr_mode = "otg";
+};
+
 &mcspi1 {

 };
@@ -594,7 +594,7 @@
 			clock-names = "apb", "ir";
 			resets = <&r_ccu RST_APB0_IR>;
 			interrupts = <GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>;
-			reg = <0x01f02000 0x40>;
+			reg = <0x01f02000 0x400>;
 			status = "disabled";
 		};

@@ -282,16 +282,15 @@ __sys_trace:
 	cmp	scno, #-1			@ skip the syscall?
 	bne	2b
 	add	sp, sp, #S_OFF			@ restore stack
-	b	ret_slow_syscall
-
-__sys_trace_return:
-	str	r0, [sp, #S_R0 + S_OFF]!	@ save returned r0
+
+__sys_trace_return_nosave:
+	enable_irq_notrace
 	mov	r0, sp
 	bl	syscall_trace_exit
 	b	ret_slow_syscall

-__sys_trace_return_nosave:
-	enable_irq_notrace
+__sys_trace_return:
+	str	r0, [sp, #S_R0 + S_OFF]!	@ save returned r0
 	mov	r0, sp
 	bl	syscall_trace_exit
 	b	ret_slow_syscall
@@ -57,5 +57,6 @@ ENDPROC(__arch_clear_user)
 	.section .fixup,"ax"
 	.align	2
 9:	mov	x0, x2			// return the original size
+	uaccess_disable_not_uao x2, x3
 	ret
 	.previous
@@ -75,5 +75,6 @@ ENDPROC(__arch_copy_from_user)
 	.section .fixup,"ax"
 	.align	2
9998:	sub	x0, end, dst			// bytes not copied
+	uaccess_disable_not_uao x3, x4
 	ret
 	.previous
@@ -77,5 +77,6 @@ ENDPROC(__arch_copy_in_user)
 	.section .fixup,"ax"
 	.align	2
9998:	sub	x0, end, dst			// bytes not copied
+	uaccess_disable_not_uao x3, x4
 	ret
 	.previous
@@ -74,5 +74,6 @@ ENDPROC(__arch_copy_to_user)
 	.section .fixup,"ax"
 	.align	2
9998:	sub	x0, end, dst			// bytes not copied
+	uaccess_disable_not_uao x3, x4
 	ret
 	.previous
@@ -419,7 +419,7 @@ static int __init dummy_numa_init(void)
 	if (numa_off)
 		pr_info("NUMA disabled\n"); /* Forced off on command line. */
 	pr_info("Faking a node at [mem %#018Lx-%#018Lx]\n",
-		0LLU, PFN_PHYS(max_pfn) - 1);
+		memblock_start_of_DRAM(), memblock_end_of_DRAM() - 1);

 	for_each_memblock(memory, mblk) {
 		ret = numa_add_memblk(0, mblk->base, mblk->base + mblk->size);
@@ -984,10 +984,14 @@ static void register_decrementer_clockevent(int cpu)
 	*dec = decrementer_clockevent;
 	dec->cpumask = cpumask_of(cpu);

+	clockevents_config_and_register(dec, ppc_tb_freq, 2, decrementer_max);
+
 	printk_once(KERN_DEBUG "clockevent: %s mult[%x] shift[%d] cpu[%d]\n",
 		    dec->name, dec->mult, dec->shift, cpu);

-	clockevents_register_device(dec);
+	/* Set values for KVM, see kvm_emulate_dec() */
+	decrementer_clockevent.mult = dec->mult;
+	decrementer_clockevent.shift = dec->shift;
 }

 static void enable_large_decrementer(void)
@@ -1035,18 +1039,7 @@ static void __init set_decrementer_max(void)

 static void __init init_decrementer_clockevent(void)
 {
-	int cpu = smp_processor_id();
-
-	clockevents_calc_mult_shift(&decrementer_clockevent, ppc_tb_freq, 4);
-
-	decrementer_clockevent.max_delta_ns =
-		clockevent_delta2ns(decrementer_max, &decrementer_clockevent);
-	decrementer_clockevent.max_delta_ticks = decrementer_max;
-	decrementer_clockevent.min_delta_ns =
-		clockevent_delta2ns(2, &decrementer_clockevent);
-	decrementer_clockevent.min_delta_ticks = 2;
-
-	register_decrementer_clockevent(cpu);
+	register_decrementer_clockevent(smp_processor_id());
 }

 void secondary_cpu_time_init(void)
@@ -79,8 +79,11 @@ void kvmppc_unfixup_split_real(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->arch.hflags & BOOK3S_HFLAG_SPLIT_HACK) {
 		ulong pc = kvmppc_get_pc(vcpu);
+		ulong lr = kvmppc_get_lr(vcpu);
 		if ((pc & SPLIT_HACK_MASK) == SPLIT_HACK_OFFS)
 			kvmppc_set_pc(vcpu, pc & ~SPLIT_HACK_MASK);
+		if ((lr & SPLIT_HACK_MASK) == SPLIT_HACK_OFFS)
+			kvmppc_set_lr(vcpu, lr & ~SPLIT_HACK_MASK);
 		vcpu->arch.hflags &= ~BOOK3S_HFLAG_SPLIT_HACK;
 	}
 }
@@ -404,7 +404,7 @@ static long kvmppc_tce_iommu_unmap(struct kvm *kvm,
 	long ret;

 	if (WARN_ON_ONCE(iommu_tce_xchg(tbl, entry, &hpa, &dir)))
-		return H_HARDWARE;
+		return H_TOO_HARD;

 	if (dir == DMA_NONE)
 		return H_SUCCESS;
@@ -434,15 +434,15 @@ long kvmppc_tce_iommu_map(struct kvm *kvm, struct iommu_table *tbl,
 		return H_TOO_HARD;

 	if (WARN_ON_ONCE(mm_iommu_ua_to_hpa(mem, ua, tbl->it_page_shift, &hpa)))
-		return H_HARDWARE;
+		return H_TOO_HARD;

 	if (mm_iommu_mapped_inc(mem))
-		return H_CLOSED;
+		return H_TOO_HARD;

 	ret = iommu_tce_xchg(tbl, entry, &hpa, &dir);
 	if (WARN_ON_ONCE(ret)) {
 		mm_iommu_mapped_dec(mem);
-		return H_HARDWARE;
+		return H_TOO_HARD;
 	}

 	if (dir != DMA_NONE)
@@ -264,14 +264,14 @@ static long kvmppc_rm_tce_iommu_map(struct kvm *kvm, struct iommu_table *tbl,

 	if (WARN_ON_ONCE_RM(mm_iommu_ua_to_hpa_rm(mem, ua, tbl->it_page_shift,
 			&hpa)))
-		return H_HARDWARE;
+		return H_TOO_HARD;

 	pua = (void *) vmalloc_to_phys(pua);
 	if (WARN_ON_ONCE_RM(!pua))
 		return H_HARDWARE;

 	if (WARN_ON_ONCE_RM(mm_iommu_mapped_inc(mem)))
-		return H_CLOSED;
+		return H_TOO_HARD;

 	ret = iommu_tce_xchg_rm(tbl, entry, &hpa, &dir);
 	if (ret) {
@@ -448,7 +448,7 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,

 	rmap = (void *) vmalloc_to_phys(rmap);
 	if (WARN_ON_ONCE_RM(!rmap))
-		return H_HARDWARE;
+		return H_TOO_HARD;

 	/*
 	 * Synchronize with the MMU notifier callbacks in
@@ -149,7 +149,7 @@ static int dtl_start(struct dtl *dtl)

 	/* Register our dtl buffer with the hypervisor. The HV expects the
 	 * buffer size to be passed in the second word of the buffer */
-	((u32 *)dtl->buf)[1] = DISPATCH_LOG_BYTES;
+	((u32 *)dtl->buf)[1] = cpu_to_be32(DISPATCH_LOG_BYTES);

 	hwcpu = get_hard_smp_processor_id(dtl->cpu);
 	addr = __pa(dtl->buf);
@@ -184,7 +184,7 @@ static void dtl_stop(struct dtl *dtl)

 static u64 dtl_current_index(struct dtl *dtl)
 {
-	return lppaca_of(dtl->cpu).dtl_idx;
+	return be64_to_cpu(lppaca_of(dtl->cpu).dtl_idx);
 }
 #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */

@@ -1008,12 +1008,13 @@ static void xive_ipi_eoi(struct irq_data *d)
 {
 	struct xive_cpu *xc = __this_cpu_read(xive_cpu);

-	DBG_VERBOSE("IPI eoi: irq=%d [0x%lx] (HW IRQ 0x%x) pending=%02x\n",
-		    d->irq, irqd_to_hwirq(d), xc->hw_ipi, xc->pending_prio);
-
 	/* Handle possible race with unplug and drop stale IPIs */
 	if (!xc)
 		return;
+
+	DBG_VERBOSE("IPI eoi: irq=%d [0x%lx] (HW IRQ 0x%x) pending=%02x\n",
+		    d->irq, irqd_to_hwirq(d), xc->hw_ipi, xc->pending_prio);

 	xive_do_source_eoi(xc->hw_ipi, &xc->ipi_data);
 	xive_do_queue_eoi(xc);
 }
@@ -25,9 +25,10 @@ obj-y += vdso32_wrapper.o
 extra-y += vdso32.lds
 CPPFLAGS_vdso32.lds += -P -C -U$(ARCH)

-# Disable gcov profiling and ubsan for VDSO code
+# Disable gcov profiling, ubsan and kasan for VDSO code
 GCOV_PROFILE := n
 UBSAN_SANITIZE := n
+KASAN_SANITIZE := n

 # Force dependency (incbin is bad)
 $(obj)/vdso32_wrapper.o : $(obj)/vdso32.so
@@ -25,9 +25,10 @@ obj-y += vdso64_wrapper.o
 extra-y += vdso64.lds
 CPPFLAGS_vdso64.lds += -P -C -U$(ARCH)

-# Disable gcov profiling and ubsan for VDSO code
+# Disable gcov profiling, ubsan and kasan for VDSO code
 GCOV_PROFILE := n
 UBSAN_SANITIZE := n
+KASAN_SANITIZE := n

 # Force dependency (incbin is bad)
 $(obj)/vdso64_wrapper.o : $(obj)/vdso64.so
@@ -2715,8 +2715,7 @@ config OLPC

 config OLPC_XO1_PM
 	bool "OLPC XO-1 Power Management"
-	depends on OLPC && MFD_CS5535 && PM_SLEEP
-	select MFD_CORE
+	depends on OLPC && MFD_CS5535=y && PM_SLEEP
 	---help---
 	  Add support for poweroff and suspend of the OLPC XO-1 laptop.

@@ -67,7 +67,7 @@ struct kimage;

 /* Memory to backup during crash kdump */
 #define KEXEC_BACKUP_SRC_START	(0UL)
-#define KEXEC_BACKUP_SRC_END	(640 * 1024UL)		/* 640K */
+#define KEXEC_BACKUP_SRC_END	(640 * 1024UL - 1)	/* 640K */

 /*
  * CPU does not save ss and sp on stack if execution is already
@@ -40,6 +40,7 @@
 #include <asm/hw_breakpoint.h>
 #include <asm/traps.h>
 #include <asm/syscall.h>
+#include <asm/mmu_context.h>

 #include "tls.h"
@@ -343,6 +344,49 @@ static int set_segment_reg(struct task_struct *task,
 	return 0;
 }

+static unsigned long task_seg_base(struct task_struct *task,
+				   unsigned short selector)
+{
+	unsigned short idx = selector >> 3;
+	unsigned long base;
+
+	if (likely((selector & SEGMENT_TI_MASK) == 0)) {
+		if (unlikely(idx >= GDT_ENTRIES))
+			return 0;
+
+		/*
+		 * There are no user segments in the GDT with nonzero bases
+		 * other than the TLS segments.
+		 */
+		if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
+			return 0;
+
+		idx -= GDT_ENTRY_TLS_MIN;
+		base = get_desc_base(&task->thread.tls_array[idx]);
+	} else {
+#ifdef CONFIG_MODIFY_LDT_SYSCALL
+		struct ldt_struct *ldt;
+
+		/*
+		 * If performance here mattered, we could protect the LDT
+		 * with RCU.  This is a slow path, though, so we can just
+		 * take the mutex.
+		 */
+		mutex_lock(&task->mm->context.lock);
+		ldt = task->mm->context.ldt;
+		if (unlikely(idx >= ldt->nr_entries))
+			base = 0;
+		else
+			base = get_desc_base(ldt->entries + idx);
+		mutex_unlock(&task->mm->context.lock);
+#else
+		base = 0;
+#endif
+	}
+
+	return base;
+}
+
 #endif /* CONFIG_X86_32 */

 static unsigned long get_flags(struct task_struct *task)
@@ -436,18 +480,16 @@ static unsigned long getreg(struct task_struct *task, unsigned long offset)

 #ifdef CONFIG_X86_64
 	case offsetof(struct user_regs_struct, fs_base): {
-		/*
-		 * XXX: This will not behave as expected if called on
-		 * current or if fsindex != 0.
-		 */
-		return task->thread.fsbase;
+		if (task->thread.fsindex == 0)
+			return task->thread.fsbase;
+		else
+			return task_seg_base(task, task->thread.fsindex);
 	}
 	case offsetof(struct user_regs_struct, gs_base): {
-		/*
-		 * XXX: This will not behave as expected if called on
-		 * current or if fsindex != 0.
-		 */
-		return task->thread.gsbase;
+		if (task->thread.gsindex == 0)
+			return task->thread.gsbase;
+		else
+			return task_seg_base(task, task->thread.gsindex);
 	}
 #endif
 	}
@@ -250,9 +250,9 @@ static int get_e820_md5(struct e820_table *table, void *buf)
 	return ret;
 }

-static void hibernation_e820_save(void *buf)
+static int hibernation_e820_save(void *buf)
 {
-	get_e820_md5(e820_table_firmware, buf);
+	return get_e820_md5(e820_table_firmware, buf);
 }

 static bool hibernation_e820_mismatch(void *buf)
@@ -272,8 +272,9 @@ static bool hibernation_e820_mismatch(void *buf)
 	return memcmp(result, buf, MD5_DIGEST_SIZE) ? true : false;
 }
 #else
-static void hibernation_e820_save(void *buf)
+static int hibernation_e820_save(void *buf)
 {
+	return 0;
 }

 static bool hibernation_e820_mismatch(void *buf)
@@ -318,9 +319,7 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)

 	rdr->magic = RESTORE_MAGIC;

-	hibernation_e820_save(rdr->e820_digest);
-
-	return 0;
+	return hibernation_e820_save(rdr->e820_digest);
 }

 /**
@@ -245,6 +245,8 @@ acpi_ev_default_region_setup(acpi_handle handle,

 acpi_status acpi_ev_initialize_region(union acpi_operand_object *region_obj);

+u8 acpi_ev_is_pci_root_bridge(struct acpi_namespace_node *node);
+
 /*
  * evsci - SCI (System Control Interrupt) handling/dispatch
  */
@@ -429,9 +429,9 @@ struct acpi_simple_repair_info {
 /* Info for running the _REG methods */

 struct acpi_reg_walk_info {
-	acpi_adr_space_type space_id;
 	u32 function;
 	u32 reg_run_count;
+	acpi_adr_space_type space_id;
 };

 /*****************************************************************************
@@ -677,6 +677,19 @@ acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,

 	ACPI_FUNCTION_TRACE(ev_execute_reg_methods);

+	/*
+	 * These address spaces do not need a call to _REG, since the ACPI
+	 * specification defines them as: "must always be accessible". Since
+	 * they never change state (never become unavailable), no need to ever
+	 * call _REG on them. Also, a data_table is not a "real" address space,
+	 * so do not call _REG. September 2018.
+	 */
+	if ((space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) ||
+	    (space_id == ACPI_ADR_SPACE_SYSTEM_IO) ||
+	    (space_id == ACPI_ADR_SPACE_DATA_TABLE)) {
+		return_VOID;
+	}
+
 	info.space_id = space_id;
 	info.function = function;
 	info.reg_run_count = 0;
@@ -738,8 +751,8 @@ acpi_ev_reg_run(acpi_handle obj_handle,
 	}

 	/*
-	 * We only care about regions.and objects that are allowed to have address
-	 * space handlers
+	 * We only care about regions and objects that are allowed to have
+	 * address space handlers
 	 */
 	if ((node->type != ACPI_TYPE_REGION) && (node != acpi_gbl_root_node)) {
 		return (AE_OK);
@@ -50,9 +50,6 @@
 #define _COMPONENT          ACPI_EVENTS
 ACPI_MODULE_NAME("evrgnini")

-/* Local prototypes */
-static u8 acpi_ev_is_pci_root_bridge(struct acpi_namespace_node *node);
-
 /*******************************************************************************
 *
 * FUNCTION:    acpi_ev_system_memory_region_setup
@@ -67,7 +64,6 @@
 * DESCRIPTION: Setup a system_memory operation region
 *
 ******************************************************************************/
-
 acpi_status
 acpi_ev_system_memory_region_setup(acpi_handle handle,
				    u32 function,
@@ -347,7 +343,7 @@ acpi_ev_pci_config_region_setup(acpi_handle handle,
 *
 ******************************************************************************/

-static u8 acpi_ev_is_pci_root_bridge(struct acpi_namespace_node *node)
+u8 acpi_ev_is_pci_root_bridge(struct acpi_namespace_node *node)
 {
 	acpi_status status;
 	struct acpi_pnp_device_id *hid;
@@ -227,7 +227,6 @@ acpi_remove_address_space_handler(acpi_handle device,
			 */
			region_obj =
			    handler_obj->address_space.region_list;
-
		}

		/* Remove this Handler object from the list */
@@ -1116,6 +1116,7 @@ void acpi_os_wait_events_complete(void)
	flush_workqueue(kacpid_wq);
	flush_workqueue(kacpi_notify_wq);
 }
+EXPORT_SYMBOL(acpi_os_wait_events_complete);

 struct acpi_hp_work {
	struct work_struct work;
@@ -196,6 +196,7 @@ int acpi_smbus_unregister_callback(struct acpi_smb_hc *hc)
	hc->callback = NULL;
	hc->context = NULL;
	mutex_unlock(&hc->lock);
+	acpi_os_wait_events_complete();
	return 0;
 }

@@ -292,6 +293,7 @@ static int acpi_smbus_hc_remove(struct acpi_device *device)

	hc = acpi_driver_data(device);
	acpi_ec_remove_query_handler(hc->ec, hc->query_bit);
+	acpi_os_wait_events_complete();
	kfree(hc);
	device->driver_data = NULL;
	return 0;
@@ -102,7 +102,8 @@ config SATA_AHCI_PLATFORM

 config AHCI_BRCM
	tristate "Broadcom AHCI SATA support"
-	depends on ARCH_BRCMSTB || BMIPS_GENERIC || ARCH_BCM_NSP
+	depends on ARCH_BRCMSTB || BMIPS_GENERIC || ARCH_BCM_NSP || \
+		   ARCH_BCM_63XX
	help
	  This option enables support for the AHCI SATA3 controller found on
	  Broadcom SoC's.
@@ -659,7 +659,7 @@ static void ep93xx_pata_dma_init(struct ep93xx_pata_data *drv_data)
	 * start of new transfer.
	 */
	drv_data->dma_rx_data.port = EP93XX_DMA_IDE;
-	drv_data->dma_rx_data.direction = DMA_FROM_DEVICE;
+	drv_data->dma_rx_data.direction = DMA_DEV_TO_MEM;
	drv_data->dma_rx_data.name = "ep93xx-pata-rx";
	drv_data->dma_rx_channel = dma_request_channel(mask,
		ep93xx_pata_dma_filter, &drv_data->dma_rx_data);
@@ -667,7 +667,7 @@ static void ep93xx_pata_dma_init(struct ep93xx_pata_data *drv_data)
		return;

	drv_data->dma_tx_data.port = EP93XX_DMA_IDE;
-	drv_data->dma_tx_data.direction = DMA_TO_DEVICE;
+	drv_data->dma_tx_data.direction = DMA_MEM_TO_DEV;
	drv_data->dma_tx_data.name = "ep93xx-pata-tx";
	drv_data->dma_tx_channel = dma_request_channel(mask,
		ep93xx_pata_dma_filter, &drv_data->dma_tx_data);
@@ -678,7 +678,7 @@ static void ep93xx_pata_dma_init(struct ep93xx_pata_data *drv_data)

	/* Configure receive channel direction and source address */
	memset(&conf, 0, sizeof(conf));
-	conf.direction = DMA_FROM_DEVICE;
+	conf.direction = DMA_DEV_TO_MEM;
	conf.src_addr = drv_data->udma_in_phys;
	conf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	if (dmaengine_slave_config(drv_data->dma_rx_channel, &conf)) {
@@ -689,7 +689,7 @@ static void ep93xx_pata_dma_init(struct ep93xx_pata_data *drv_data)

	/* Configure transmit channel direction and destination address */
	memset(&conf, 0, sizeof(conf));
-	conf.direction = DMA_TO_DEVICE;
+	conf.direction = DMA_MEM_TO_DEV;
	conf.dst_addr = drv_data->udma_out_phys;
	conf.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	if (dmaengine_slave_config(drv_data->dma_tx_channel, &conf)) {
@@ -49,14 +49,9 @@ static struct opp_device *_find_opp_dev(const struct device *dev,
 static struct opp_table *_find_opp_table_unlocked(struct device *dev)
 {
	struct opp_table *opp_table;
-	bool found;

	list_for_each_entry(opp_table, &opp_tables, node) {
-		mutex_lock(&opp_table->lock);
-		found = !!_find_opp_dev(dev, opp_table);
-		mutex_unlock(&opp_table->lock);
-
-		if (found) {
+		if (_find_opp_dev(dev, opp_table)) {
			_get_opp_table_kref(opp_table);

			return opp_table;
@@ -716,8 +711,6 @@ struct opp_device *_add_opp_dev(const struct device *dev,

	/* Initialize opp-dev */
	opp_dev->dev = dev;
-
-	mutex_lock(&opp_table->lock);
	list_add(&opp_dev->node, &opp_table->dev_list);

	/* Create debugfs entries for the opp_table */
@@ -725,7 +718,6 @@ struct opp_device *_add_opp_dev(const struct device *dev,
	if (ret)
		dev_err(dev, "%s: Failed to register opp debugfs (%d)\n",
			__func__, ret);
-	mutex_unlock(&opp_table->lock);

	return opp_dev;
 }
@@ -744,7 +736,6 @@ static struct opp_table *_allocate_opp_table(struct device *dev)
	if (!opp_table)
		return NULL;

-	mutex_init(&opp_table->lock);
	INIT_LIST_HEAD(&opp_table->dev_list);

	opp_dev = _add_opp_dev(dev, opp_table);
@@ -766,6 +757,7 @@ static struct opp_table *_allocate_opp_table(struct device *dev)

	BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head);
	INIT_LIST_HEAD(&opp_table->opp_list);
+	mutex_init(&opp_table->lock);
	kref_init(&opp_table->kref);

	/* Secure the device table modification */
@@ -807,10 +799,6 @@ static void _opp_table_kref_release(struct kref *kref)
	if (!IS_ERR(opp_table->clk))
		clk_put(opp_table->clk);

-	/*
-	 * No need to take opp_table->lock here as we are guaranteed that no
-	 * references to the OPP table are taken at this point.
-	 */
	opp_dev = list_first_entry(&opp_table->dev_list, struct opp_device,
				   node);

@@ -1714,9 +1702,6 @@ void _dev_pm_opp_remove_table(struct opp_table *opp_table, struct device *dev,
 {
	struct dev_pm_opp *opp, *tmp;

-	/* Protect dev_list */
-	mutex_lock(&opp_table->lock);
-
	/* Find if opp_table manages a single device */
	if (list_is_singular(&opp_table->dev_list)) {
		/* Free static OPPs */
@@ -1727,8 +1712,6 @@ void _dev_pm_opp_remove_table(struct opp_table *opp_table, struct device *dev,
	} else {
		_remove_opp_dev(_find_opp_dev(dev, opp_table), opp_table);
	}
-
-	mutex_unlock(&opp_table->lock);
 }

 void _dev_pm_opp_find_and_remove_table(struct device *dev, bool remove_all)
@@ -222,10 +222,8 @@ int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
	cpumask_clear(cpumask);

	if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED) {
-		mutex_lock(&opp_table->lock);
		list_for_each_entry(opp_dev, &opp_table->dev_list, node)
			cpumask_set_cpu(opp_dev->dev->id, cpumask);
-		mutex_unlock(&opp_table->lock);
	} else {
		cpumask_set_cpu(cpu_dev->id, cpumask);
	}
@@ -124,7 +124,7 @@ enum opp_table_access {
 * @dev_list:	list of devices that share these OPPs
 * @opp_list:	table of opps
 * @kref:	for reference count of the table.
-* @lock:	mutex protecting the opp_list and dev_list.
+* @lock:	mutex protecting the opp_list.
 * @np:		struct device_node pointer for opp's DT node.
 * @clock_latency_ns_max: Max clock latency in nanoseconds.
 * @shared_opp: OPP is shared between multiple devices.
@@ -65,6 +65,7 @@ obj-$(CONFIG_ARCH_HISI)		+= hisilicon/
 obj-y					+= imgtec/
 obj-$(CONFIG_ARCH_MXC)			+= imx/
 obj-$(CONFIG_MACH_INGENIC)		+= ingenic/
+obj-$(CONFIG_ARCH_K3)			+= keystone/
 obj-$(CONFIG_ARCH_KEYSTONE)		+= keystone/
 obj-$(CONFIG_MACH_LOONGSON32)		+= loongson1/
 obj-$(CONFIG_ARCH_MEDIATEK)		+= mediatek/
@@ -7,7 +7,7 @@ config COMMON_CLK_KEYSTONE

 config TI_SCI_CLK
	tristate "TI System Control Interface clock drivers"
-	depends on (ARCH_KEYSTONE || COMPILE_TEST) && OF
+	depends on (ARCH_KEYSTONE || ARCH_K3 || COMPILE_TEST) && OF
	depends on TI_SCI_PROTOCOL
	default ARCH_KEYSTONE
	---help---
@@ -152,7 +152,7 @@ static int exynos_cpuclk_pre_rate_change(struct clk_notifier_data *ndata,
		struct exynos_cpuclk *cpuclk, void __iomem *base)
 {
	const struct exynos_cpuclk_cfg_data *cfg_data = cpuclk->cfg;
-	unsigned long alt_prate = clk_get_rate(cpuclk->alt_parent);
+	unsigned long alt_prate = clk_hw_get_rate(cpuclk->alt_parent);
	unsigned long alt_div = 0, alt_div_mask = DIV_MASK;
	unsigned long div0, div1 = 0, mux_reg;
	unsigned long flags;
@@ -280,7 +280,7 @@ static int exynos5433_cpuclk_pre_rate_change(struct clk_notifier_data *ndata,
		struct exynos_cpuclk *cpuclk, void __iomem *base)
 {
	const struct exynos_cpuclk_cfg_data *cfg_data = cpuclk->cfg;
-	unsigned long alt_prate = clk_get_rate(cpuclk->alt_parent);
+	unsigned long alt_prate = clk_hw_get_rate(cpuclk->alt_parent);
	unsigned long alt_div = 0, alt_div_mask = DIV_MASK;
	unsigned long div0, div1 = 0, mux_reg;
	unsigned long flags;
@@ -432,7 +432,7 @@ int __init exynos_register_cpu_clock(struct samsung_clk_provider *ctx,
|
||||
else
|
||||
cpuclk->clk_nb.notifier_call = exynos_cpuclk_notifier_cb;
|
||||
|
||||
cpuclk->alt_parent = __clk_lookup(alt_parent);
|
||||
cpuclk->alt_parent = __clk_get_hw(__clk_lookup(alt_parent));
|
||||
if (!cpuclk->alt_parent) {
|
||||
pr_err("%s: could not lookup alternate parent %s\n",
|
||||
__func__, alt_parent);
|
||||
|
||||
@@ -49,7 +49,7 @@ struct exynos_cpuclk_cfg_data {
|
||||
*/
|
||||
struct exynos_cpuclk {
|
||||
struct clk_hw hw;
|
||||
struct clk *alt_parent;
|
||||
struct clk_hw *alt_parent;
|
||||
void __iomem *ctrl_base;
|
||||
spinlock_t *lock;
|
||||
const struct exynos_cpuclk_cfg_data *cfg;
|
||||
|
||||
@@ -633,6 +633,7 @@ static const struct samsung_div_clock exynos5420_div_clks[] __initconst = {
|
||||
};
|
||||
|
||||
static const struct samsung_gate_clock exynos5420_gate_clks[] __initconst = {
|
||||
GATE(CLK_SECKEY, "seckey", "aclk66_psgen", GATE_BUS_PERIS1, 1, 0, 0),
|
||||
GATE(CLK_MAU_EPLL, "mau_epll", "mout_mau_epll_clk",
|
||||
SRC_MASK_TOP7, 20, CLK_SET_RATE_PARENT, 0),
|
||||
};
|
||||
@@ -1167,8 +1168,6 @@ static const struct samsung_gate_clock exynos5x_gate_clks[] __initconst = {
|
||||
GATE(CLK_TMU, "tmu", "aclk66_psgen", GATE_IP_PERIS, 21, 0, 0),
|
||||
GATE(CLK_TMU_GPU, "tmu_gpu", "aclk66_psgen", GATE_IP_PERIS, 22, 0, 0),
|
||||
|
||||
GATE(CLK_SECKEY, "seckey", "aclk66_psgen", GATE_BUS_PERIS1, 1, 0, 0),
|
||||
|
||||
/* GEN Block */
|
||||
GATE(CLK_ROTATOR, "rotator", "mout_user_aclk266", GATE_IP_GEN, 1, 0, 0),
|
||||
GATE(CLK_JPEG, "jpeg", "aclk300_jpeg", GATE_IP_GEN, 2, 0, 0),
|
||||
|
||||
@@ -75,18 +75,17 @@ struct sh_cmt_info {
enum sh_cmt_model model;

unsigned long width; /* 16 or 32 bit version of hardware block */
unsigned long overflow_bit;
unsigned long clear_bits;
u32 overflow_bit;
u32 clear_bits;

/* callbacks for CMSTR and CMCSR access */
unsigned long (*read_control)(void __iomem *base, unsigned long offs);
u32 (*read_control)(void __iomem *base, unsigned long offs);
void (*write_control)(void __iomem *base, unsigned long offs,
unsigned long value);
u32 value);

/* callbacks for CMCNT and CMCOR access */
unsigned long (*read_count)(void __iomem *base, unsigned long offs);
void (*write_count)(void __iomem *base, unsigned long offs,
unsigned long value);
u32 (*read_count)(void __iomem *base, unsigned long offs);
void (*write_count)(void __iomem *base, unsigned long offs, u32 value);
};

struct sh_cmt_channel {
@@ -100,13 +99,13 @@ struct sh_cmt_channel {

unsigned int timer_bit;
unsigned long flags;
unsigned long match_value;
unsigned long next_match_value;
unsigned long max_match_value;
u32 match_value;
u32 next_match_value;
u32 max_match_value;
raw_spinlock_t lock;
struct clock_event_device ced;
struct clocksource cs;
unsigned long total_cycles;
u64 total_cycles;
bool cs_enabled;
};

@@ -157,24 +156,22 @@ struct sh_cmt_device {
#define SH_CMT32_CMCSR_CKS_RCLK1 (7 << 0)
#define SH_CMT32_CMCSR_CKS_MASK (7 << 0)

static unsigned long sh_cmt_read16(void __iomem *base, unsigned long offs)
static u32 sh_cmt_read16(void __iomem *base, unsigned long offs)
{
return ioread16(base + (offs << 1));
}

static unsigned long sh_cmt_read32(void __iomem *base, unsigned long offs)
static u32 sh_cmt_read32(void __iomem *base, unsigned long offs)
{
return ioread32(base + (offs << 2));
}

static void sh_cmt_write16(void __iomem *base, unsigned long offs,
unsigned long value)
static void sh_cmt_write16(void __iomem *base, unsigned long offs, u32 value)
{
iowrite16(value, base + (offs << 1));
}

static void sh_cmt_write32(void __iomem *base, unsigned long offs,
unsigned long value)
static void sh_cmt_write32(void __iomem *base, unsigned long offs, u32 value)
{
iowrite32(value, base + (offs << 2));
}
@@ -236,7 +233,7 @@ static const struct sh_cmt_info sh_cmt_info[] = {
#define CMCNT 1 /* channel register */
#define CMCOR 2 /* channel register */

static inline unsigned long sh_cmt_read_cmstr(struct sh_cmt_channel *ch)
static inline u32 sh_cmt_read_cmstr(struct sh_cmt_channel *ch)
{
if (ch->iostart)
return ch->cmt->info->read_control(ch->iostart, 0);
@@ -244,8 +241,7 @@ static inline unsigned long sh_cmt_read_cmstr(struct sh_cmt_channel *ch)
return ch->cmt->info->read_control(ch->cmt->mapbase, 0);
}

static inline void sh_cmt_write_cmstr(struct sh_cmt_channel *ch,
unsigned long value)
static inline void sh_cmt_write_cmstr(struct sh_cmt_channel *ch, u32 value)
{
if (ch->iostart)
ch->cmt->info->write_control(ch->iostart, 0, value);
@@ -253,39 +249,35 @@ static inline void sh_cmt_write_cmstr(struct sh_cmt_channel *ch,
ch->cmt->info->write_control(ch->cmt->mapbase, 0, value);
}

static inline unsigned long sh_cmt_read_cmcsr(struct sh_cmt_channel *ch)
static inline u32 sh_cmt_read_cmcsr(struct sh_cmt_channel *ch)
{
return ch->cmt->info->read_control(ch->ioctrl, CMCSR);
}

static inline void sh_cmt_write_cmcsr(struct sh_cmt_channel *ch,
unsigned long value)
static inline void sh_cmt_write_cmcsr(struct sh_cmt_channel *ch, u32 value)
{
ch->cmt->info->write_control(ch->ioctrl, CMCSR, value);
}

static inline unsigned long sh_cmt_read_cmcnt(struct sh_cmt_channel *ch)
static inline u32 sh_cmt_read_cmcnt(struct sh_cmt_channel *ch)
{
return ch->cmt->info->read_count(ch->ioctrl, CMCNT);
}

static inline void sh_cmt_write_cmcnt(struct sh_cmt_channel *ch,
unsigned long value)
static inline void sh_cmt_write_cmcnt(struct sh_cmt_channel *ch, u32 value)
{
ch->cmt->info->write_count(ch->ioctrl, CMCNT, value);
}

static inline void sh_cmt_write_cmcor(struct sh_cmt_channel *ch,
unsigned long value)
static inline void sh_cmt_write_cmcor(struct sh_cmt_channel *ch, u32 value)
{
ch->cmt->info->write_count(ch->ioctrl, CMCOR, value);
}
static unsigned long sh_cmt_get_counter(struct sh_cmt_channel *ch,
int *has_wrapped)
static u32 sh_cmt_get_counter(struct sh_cmt_channel *ch, u32 *has_wrapped)
{
unsigned long v1, v2, v3;
int o1, o2;
u32 v1, v2, v3;
u32 o1, o2;

o1 = sh_cmt_read_cmcsr(ch) & ch->cmt->info->overflow_bit;

@@ -305,7 +297,8 @@ static unsigned long sh_cmt_get_counter(struct sh_cmt_channel *ch,

static void sh_cmt_start_stop_ch(struct sh_cmt_channel *ch, int start)
{
unsigned long flags, value;
unsigned long flags;
u32 value;

/* start stop register shared by multiple timer channels */
raw_spin_lock_irqsave(&ch->cmt->lock, flags);
@@ -412,11 +405,11 @@ static void sh_cmt_disable(struct sh_cmt_channel *ch)
static void sh_cmt_clock_event_program_verify(struct sh_cmt_channel *ch,
int absolute)
{
unsigned long new_match;
unsigned long value = ch->next_match_value;
unsigned long delay = 0;
unsigned long now = 0;
int has_wrapped;
u32 value = ch->next_match_value;
u32 new_match;
u32 delay = 0;
u32 now = 0;
u32 has_wrapped;

now = sh_cmt_get_counter(ch, &has_wrapped);
ch->flags |= FLAG_REPROGRAM; /* force reprogram */
@@ -613,9 +606,10 @@ static struct sh_cmt_channel *cs_to_sh_cmt(struct clocksource *cs)
static u64 sh_cmt_clocksource_read(struct clocksource *cs)
{
struct sh_cmt_channel *ch = cs_to_sh_cmt(cs);
unsigned long flags, raw;
unsigned long value;
int has_wrapped;
unsigned long flags;
u32 has_wrapped;
u64 value;
u32 raw;

raw_spin_lock_irqsave(&ch->lock, flags);
value = ch->total_cycles;
@@ -688,7 +682,7 @@ static int sh_cmt_register_clocksource(struct sh_cmt_channel *ch,
cs->disable = sh_cmt_clocksource_disable;
cs->suspend = sh_cmt_clocksource_suspend;
cs->resume = sh_cmt_clocksource_resume;
cs->mask = CLOCKSOURCE_MASK(sizeof(unsigned long) * 8);
cs->mask = CLOCKSOURCE_MASK(sizeof(u64) * 8);
cs->flags = CLOCK_SOURCE_IS_CONTINUOUS;

dev_info(&ch->cmt->pdev->dev, "ch%u: used as clock source\n",
@@ -28,9 +28,24 @@

#define DCP_MAX_CHANS 4
#define DCP_BUF_SZ PAGE_SIZE
#define DCP_SHA_PAY_SZ 64

#define DCP_ALIGNMENT 64

/*
 * Null hashes to align with hw behavior on imx6sl and ull
 * these are flipped for consistency with hw output
 */
const uint8_t sha1_null_hash[] =
"\x09\x07\xd8\xaf\x90\x18\x60\x95\xef\xbf"
"\x55\x32\x0d\x4b\x6b\x5e\xee\xa3\x39\xda";

const uint8_t sha256_null_hash[] =
"\x55\xb8\x52\x78\x1b\x99\x95\xa4"
"\x4c\x93\x9b\x64\xe4\x41\xae\x27"
"\x24\xb9\x6f\x99\xc8\xf4\xfb\x9a"
"\x14\x1c\xfc\x98\x42\xc4\xb0\xe3";

/* DCP DMA descriptor. */
struct dcp_dma_desc {
uint32_t next_cmd_addr;
@@ -48,6 +63,7 @@ struct dcp_coherent_block {
uint8_t aes_in_buf[DCP_BUF_SZ];
uint8_t aes_out_buf[DCP_BUF_SZ];
uint8_t sha_in_buf[DCP_BUF_SZ];
uint8_t sha_out_buf[DCP_SHA_PAY_SZ];

uint8_t aes_key[2 * AES_KEYSIZE_128];

@@ -209,6 +225,12 @@ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
dma_addr_t dst_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_out_buf,
DCP_BUF_SZ, DMA_FROM_DEVICE);

if (actx->fill % AES_BLOCK_SIZE) {
dev_err(sdcp->dev, "Invalid block size!\n");
ret = -EINVAL;
goto aes_done_run;
}

/* Fill in the DMA descriptor. */
desc->control0 = MXS_DCP_CONTROL0_DECR_SEMAPHORE |
MXS_DCP_CONTROL0_INTERRUPT |
@@ -238,6 +260,7 @@ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,

ret = mxs_dcp_start_dma(actx);

aes_done_run:
dma_unmap_single(sdcp->dev, key_phys, 2 * AES_KEYSIZE_128,
DMA_TO_DEVICE);
dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
@@ -264,13 +287,15 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)

uint8_t *out_tmp, *src_buf, *dst_buf = NULL;
uint32_t dst_off = 0;
uint32_t last_out_len = 0;

uint8_t *key = sdcp->coh->aes_key;

int ret = 0;
int split = 0;
unsigned int i, len, clen, rem = 0;
unsigned int i, len, clen, rem = 0, tlen = 0;
int init = 0;
bool limit_hit = false;

actx->fill = 0;

@@ -289,6 +314,11 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
for_each_sg(req->src, src, nents, i) {
src_buf = sg_virt(src);
len = sg_dma_len(src);
tlen += len;
limit_hit = tlen > req->nbytes;

if (limit_hit)
len = req->nbytes - (tlen - len);

do {
if (actx->fill + len > out_off)
@@ -305,13 +335,15 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
* If we filled the buffer or this is the last SG,
* submit the buffer.
*/
if (actx->fill == out_off || sg_is_last(src)) {
if (actx->fill == out_off || sg_is_last(src) ||
limit_hit) {
ret = mxs_dcp_run_aes(actx, req, init);
if (ret)
return ret;
init = 0;

out_tmp = out_buf;
last_out_len = actx->fill;
while (dst && actx->fill) {
if (!split) {
dst_buf = sg_virt(dst);
@@ -334,6 +366,19 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
}
}
} while (len);

if (limit_hit)
break;
}

/* Copy the IV for CBC for chaining */
if (!rctx->ecb) {
if (rctx->enc)
memcpy(req->info, out_buf+(last_out_len-AES_BLOCK_SIZE),
AES_BLOCK_SIZE);
else
memcpy(req->info, in_buf+(last_out_len-AES_BLOCK_SIZE),
AES_BLOCK_SIZE);
}

return ret;
@@ -513,8 +558,6 @@ static int mxs_dcp_run_sha(struct ahash_request *req)
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct dcp_async_ctx *actx = crypto_ahash_ctx(tfm);
struct dcp_sha_req_ctx *rctx = ahash_request_ctx(req);
struct hash_alg_common *halg = crypto_hash_alg_common(tfm);

struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];

dma_addr_t digest_phys = 0;
@@ -536,10 +579,23 @@ static int mxs_dcp_run_sha(struct ahash_request *req)
desc->payload = 0;
desc->status = 0;

/*
 * Align driver with hw behavior when generating null hashes
 */
if (rctx->init && rctx->fini && desc->size == 0) {
struct hash_alg_common *halg = crypto_hash_alg_common(tfm);
const uint8_t *sha_buf =
(actx->alg == MXS_DCP_CONTROL1_HASH_SELECT_SHA1) ?
sha1_null_hash : sha256_null_hash;
memcpy(sdcp->coh->sha_out_buf, sha_buf, halg->digestsize);
ret = 0;
goto done_run;
}

/* Set HASH_TERM bit for last transfer block. */
if (rctx->fini) {
digest_phys = dma_map_single(sdcp->dev, req->result,
halg->digestsize, DMA_FROM_DEVICE);
digest_phys = dma_map_single(sdcp->dev, sdcp->coh->sha_out_buf,
DCP_SHA_PAY_SZ, DMA_FROM_DEVICE);
desc->control0 |= MXS_DCP_CONTROL0_HASH_TERM;
desc->payload = digest_phys;
}
@@ -547,9 +603,10 @@ static int mxs_dcp_run_sha(struct ahash_request *req)
ret = mxs_dcp_start_dma(actx);

if (rctx->fini)
dma_unmap_single(sdcp->dev, digest_phys, halg->digestsize,
dma_unmap_single(sdcp->dev, digest_phys, DCP_SHA_PAY_SZ,
DMA_FROM_DEVICE);

done_run:
dma_unmap_single(sdcp->dev, buf_phys, DCP_BUF_SZ, DMA_TO_DEVICE);

return ret;
@@ -567,6 +624,7 @@ static int dcp_sha_req_to_buf(struct crypto_async_request *arq)
const int nents = sg_nents(req->src);

uint8_t *in_buf = sdcp->coh->sha_in_buf;
uint8_t *out_buf = sdcp->coh->sha_out_buf;

uint8_t *src_buf;

@@ -621,11 +679,9 @@ static int dcp_sha_req_to_buf(struct crypto_async_request *arq)

actx->fill = 0;

/* For some reason, the result is flipped. */
for (i = 0; i < halg->digestsize / 2; i++) {
swap(req->result[i],
req->result[halg->digestsize - i - 1]);
}
/* For some reason the result is flipped */
for (i = 0; i < halg->digestsize; i++)
req->result[i] = out_buf[halg->digestsize - i - 1];
}

return 0;

@@ -129,7 +129,7 @@ static void
ioat_init_channel(struct ioatdma_device *ioat_dma,
struct ioatdma_chan *ioat_chan, int idx);
static void ioat_intr_quirk(struct ioatdma_device *ioat_dma);
static int ioat_enumerate_channels(struct ioatdma_device *ioat_dma);
static void ioat_enumerate_channels(struct ioatdma_device *ioat_dma);
static int ioat3_dma_self_test(struct ioatdma_device *ioat_dma);

static int ioat_dca_enabled = 1;
@@ -575,7 +575,7 @@ static void ioat_dma_remove(struct ioatdma_device *ioat_dma)
* ioat_enumerate_channels - find and initialize the device's channels
* @ioat_dma: the ioat dma device to be enumerated
*/
static int ioat_enumerate_channels(struct ioatdma_device *ioat_dma)
static void ioat_enumerate_channels(struct ioatdma_device *ioat_dma)
{
struct ioatdma_chan *ioat_chan;
struct device *dev = &ioat_dma->pdev->dev;
@@ -594,7 +594,7 @@ static int ioat_enumerate_channels(struct ioatdma_device *ioat_dma)
xfercap_log = readb(ioat_dma->reg_base + IOAT_XFERCAP_OFFSET);
xfercap_log &= 0x1f; /* bits [4:0] valid */
if (xfercap_log == 0)
return 0;
return;
dev_dbg(dev, "%s: xfercap = %d\n", __func__, 1 << xfercap_log);

for (i = 0; i < dma->chancnt; i++) {
@@ -611,7 +611,6 @@ static int ioat_enumerate_channels(struct ioatdma_device *ioat_dma)
}
}
dma->chancnt = i;
return i;
}

/**
@@ -200,6 +200,7 @@ struct rcar_dmac {
struct dma_device engine;
struct device *dev;
void __iomem *iomem;
struct device_dma_parameters parms;

unsigned int n_channels;
struct rcar_dmac_chan *channels;
@@ -1764,6 +1765,8 @@ static int rcar_dmac_probe(struct platform_device *pdev)

dmac->dev = &pdev->dev;
platform_set_drvdata(pdev, dmac);
dmac->dev->dma_parms = &dmac->parms;
dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));

ret = rcar_dmac_parse_of(&pdev->dev, dmac);

@@ -545,7 +545,7 @@ static struct dma_async_tx_descriptor *td_prep_slave_sg(struct dma_chan *chan,
}

dma_sync_single_for_device(chan2dmadev(chan), td_desc->txd.phys,
td_desc->desc_list_len, DMA_MEM_TO_DEV);
td_desc->desc_list_len, DMA_TO_DEVICE);

return &td_desc->txd;
}

@@ -122,7 +122,7 @@ static int syscon_gpio_dir_out(struct gpio_chip *chip, unsigned offset, int val)
BIT(offs % SYSCON_REG_BITS));
}

priv->data->set(chip, offset, val);
chip->set(chip, offset, val);

return 0;
}

@@ -38,9 +38,9 @@
#define INA3221_WARN3 0x0c
#define INA3221_MASK_ENABLE 0x0f

#define INA3221_CONFIG_MODE_SHUNT BIT(1)
#define INA3221_CONFIG_MODE_BUS BIT(2)
#define INA3221_CONFIG_MODE_CONTINUOUS BIT(3)
#define INA3221_CONFIG_MODE_SHUNT BIT(0)
#define INA3221_CONFIG_MODE_BUS BIT(1)
#define INA3221_CONFIG_MODE_CONTINUOUS BIT(2)

#define INA3221_RSHUNT_DEFAULT 10000

@@ -221,8 +221,12 @@ static int pwm_fan_probe(struct platform_device *pdev)

ctx->pwm = devm_of_pwm_get(&pdev->dev, pdev->dev.of_node, NULL);
if (IS_ERR(ctx->pwm)) {
dev_err(&pdev->dev, "Could not get PWM\n");
return PTR_ERR(ctx->pwm);
ret = PTR_ERR(ctx->pwm);

if (ret != -EPROBE_DEFER)
dev_err(&pdev->dev, "Could not get PWM: %d\n", ret);

return ret;
}

platform_set_drvdata(pdev, ctx);

@@ -429,12 +429,13 @@ config I2C_BCM_KONA
If you do not need KONA I2C interface, say N.

config I2C_BRCMSTB
tristate "BRCM Settop I2C controller"
depends on ARCH_BRCMSTB || BMIPS_GENERIC || COMPILE_TEST
tristate "BRCM Settop/DSL I2C controller"
depends on ARCH_BRCMSTB || BMIPS_GENERIC || ARCH_BCM_63XX || \
COMPILE_TEST
default y
help
If you say yes to this option, support will be included for the
I2C interface on the Broadcom Settop SoCs.
I2C interface on the Broadcom Settop/DSL SoCs.

If you do not need I2C interface, say N.

@@ -986,7 +986,8 @@ static int __mthca_init_one(struct pci_dev *pdev, int hca_type)
goto err_free_dev;
}

if (mthca_cmd_init(mdev)) {
err = mthca_cmd_init(mdev);
if (err) {
mthca_err(mdev, "Failed to init command interface, aborting.\n");
goto err_free_dev;
}

@@ -351,7 +351,8 @@ static uint32_t opa_vnic_get_dlid(struct opa_vnic_adapter *adapter,
if (unlikely(!dlid))
v_warn("Null dlid in MAC address\n");
} else if (def_port != OPA_VNIC_INVALID_PORT) {
dlid = info->vesw.u_ucast_dlid[def_port];
if (def_port < OPA_VESW_MAX_NUM_DEF_PORT)
dlid = info->vesw.u_ucast_dlid[def_port];
}
}
@@ -534,20 +534,33 @@ static int __maybe_unused silead_ts_suspend(struct device *dev)
static int __maybe_unused silead_ts_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
bool second_try = false;
int error, status;

silead_ts_set_power(client, SILEAD_POWER_ON);

retry:
error = silead_ts_reset(client);
if (error)
return error;

if (second_try) {
error = silead_ts_load_fw(client);
if (error)
return error;
}

error = silead_ts_startup(client);
if (error)
return error;

status = silead_ts_get_status(client);
if (status != SILEAD_STATUS_OK) {
if (!second_try) {
second_try = true;
dev_dbg(dev, "Reloading firmware after unsuccessful resume\n");
goto retry;
}
dev_err(dev, "Resume error, status: 0x%02x\n", status);
return -ENODEV;
}

@@ -203,6 +203,7 @@ static int st1232_ts_probe(struct i2c_client *client,
input_dev->id.bustype = BUS_I2C;
input_dev->dev.parent = &client->dev;

__set_bit(INPUT_PROP_DIRECT, input_dev->propbit);
__set_bit(EV_SYN, input_dev->evbit);
__set_bit(EV_KEY, input_dev->evbit);
__set_bit(EV_ABS, input_dev->evbit);

@@ -551,13 +551,12 @@ static int arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
return 0;

tablep = iopte_deref(pte, data);
} else if (unmap_idx >= 0) {
io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true);
return size;
}

if (unmap_idx < 0)
return __arm_lpae_unmap(data, iova, size, lvl, tablep);

io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true);
return size;
return __arm_lpae_unmap(data, iova, size, lvl, tablep);
}

static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,

@@ -92,7 +92,7 @@ static int
mvebu_icu_irq_domain_translate(struct irq_domain *d, struct irq_fwspec *fwspec,
unsigned long *hwirq, unsigned int *type)
{
struct mvebu_icu *icu = d->host_data;
struct mvebu_icu *icu = platform_msi_get_host_data(d);
unsigned int icu_group;

/* Check the count of the parameters in dt */

@@ -905,6 +905,7 @@ static void cached_dev_detach_finish(struct work_struct *w)
bch_write_bdev_super(dc, &cl);
closure_sync(&cl);

calc_cached_dev_sectors(dc->disk.c);
bcache_device_detach(&dc->disk);
list_move(&dc->list, &uncached_devices);

@@ -8736,6 +8736,18 @@ static void md_start_sync(struct work_struct *ws)
*/
void md_check_recovery(struct mddev *mddev)
{
if (test_bit(MD_ALLOW_SB_UPDATE, &mddev->flags) && mddev->sb_flags) {
/* Write superblock - thread that called mddev_suspend()
 * holds reconfig_mutex for us.
 */
set_bit(MD_UPDATING_SB, &mddev->flags);
smp_mb__after_atomic();
if (test_bit(MD_ALLOW_SB_UPDATE, &mddev->flags))
md_update_sb(mddev, 0);
clear_bit_unlock(MD_UPDATING_SB, &mddev->flags);
wake_up(&mddev->sb_wait);
}

if (mddev->suspended)
return;

@@ -8896,16 +8908,6 @@ void md_check_recovery(struct mddev *mddev)
unlock:
wake_up(&mddev->sb_wait);
mddev_unlock(mddev);
} else if (test_bit(MD_ALLOW_SB_UPDATE, &mddev->flags) && mddev->sb_flags) {
/* Write superblock - thread that called mddev_suspend()
 * holds reconfig_mutex for us.
 */
set_bit(MD_UPDATING_SB, &mddev->flags);
smp_mb__after_atomic();
if (test_bit(MD_ALLOW_SB_UPDATE, &mddev->flags))
md_update_sb(mddev, 0);
clear_bit_unlock(MD_UPDATING_SB, &mddev->flags);
wake_up(&mddev->sb_wait);
}
}
EXPORT_SYMBOL(md_check_recovery);
@@ -529,6 +529,17 @@ static enum hrtimer_restart cec_pin_timer(struct hrtimer *timer)
/* Start bit, switch to receive state */
pin->ts = ts;
pin->state = CEC_ST_RX_START_BIT_LOW;
/*
 * If a transmit is pending, then that transmit should
 * use a signal free time of no more than
 * CEC_SIGNAL_FREE_TIME_NEW_INITIATOR since it will
 * have a new initiator due to the receive that is now
 * starting.
 */
if (pin->tx_msg.len && pin->tx_signal_free_time >
CEC_SIGNAL_FREE_TIME_NEW_INITIATOR)
pin->tx_signal_free_time =
CEC_SIGNAL_FREE_TIME_NEW_INITIATOR;
break;
}
if (pin->ts == 0)
@@ -690,6 +701,15 @@ static int cec_pin_adap_transmit(struct cec_adapter *adap, u8 attempts,
{
struct cec_pin *pin = adap->pin;

/*
 * If a receive is in progress, then this transmit should use
 * a signal free time of max CEC_SIGNAL_FREE_TIME_NEW_INITIATOR
 * since when it starts transmitting it will have a new initiator.
 */
if (pin->state != CEC_ST_IDLE &&
signal_free_time > CEC_SIGNAL_FREE_TIME_NEW_INITIATOR)
signal_free_time = CEC_SIGNAL_FREE_TIME_NEW_INITIATOR;

pin->tx_signal_free_time = signal_free_time;
pin->tx_msg = *msg;
pin->work_tx_status = 0;

@@ -642,7 +642,8 @@ static int adv748x_parse_dt(struct adv748x_state *state)
{
struct device_node *ep_np = NULL;
struct of_endpoint ep;
bool found = false;
bool out_found = false;
bool in_found = false;

for_each_endpoint_of_node(state->dev->of_node, ep_np) {
of_graph_parse_endpoint(ep_np, &ep);
@@ -667,10 +668,17 @@ static int adv748x_parse_dt(struct adv748x_state *state)
of_node_get(ep_np);
state->endpoints[ep.port] = ep_np;

found = true;
/*
 * At least one input endpoint and one output endpoint shall
 * be defined.
 */
if (ep.port < ADV748X_PORT_TXA)
in_found = true;
else
out_found = true;
}

return found ? 0 : -ENODEV;
return in_found && out_found ? 0 : -ENODEV;
}

static void adv748x_dt_cleanup(struct adv748x_state *state)
@@ -702,6 +710,17 @@ static int adv748x_probe(struct i2c_client *client,
state->i2c_clients[ADV748X_PAGE_IO] = client;
i2c_set_clientdata(client, state);

/*
 * We can not use container_of to get back to the state with two TXs;
 * Initialize the TXs's fields unconditionally on the endpoint
 * presence to access them later.
 */
state->txa.state = state->txb.state = state;
state->txa.page = ADV748X_PAGE_TXA;
state->txb.page = ADV748X_PAGE_TXB;
state->txa.port = ADV748X_PORT_TXA;
state->txb.port = ADV748X_PORT_TXB;

/* Discover and process ports declared by the Device tree endpoints */
ret = adv748x_parse_dt(state);
if (ret) {

@@ -265,19 +265,10 @@ static int adv748x_csi2_init_controls(struct adv748x_csi2 *tx)

int adv748x_csi2_init(struct adv748x_state *state, struct adv748x_csi2 *tx)
{
struct device_node *ep;
int ret;

/* We can not use container_of to get back to the state with two TXs */
tx->state = state;
tx->page = is_txa(tx) ? ADV748X_PAGE_TXA : ADV748X_PAGE_TXB;

ep = state->endpoints[is_txa(tx) ? ADV748X_PORT_TXA : ADV748X_PORT_TXB];
if (!ep) {
adv_err(state, "No endpoint found for %s\n",
is_txa(tx) ? "txa" : "txb");
return -ENODEV;
}
if (!is_tx_enabled(tx))
return 0;

/* Initialise the virtual channel */
adv748x_csi2_set_virtual_channel(tx, 0);
@@ -287,7 +278,7 @@ int adv748x_csi2_init(struct adv748x_state *state, struct adv748x_csi2 *tx)
is_txa(tx) ? "txa" : "txb");

/* Ensure that matching is based upon the endpoint fwnodes */
tx->sd.fwnode = of_fwnode_handle(ep);
tx->sd.fwnode = of_fwnode_handle(state->endpoints[tx->port]);

/* Register internal ops for incremental subdev registration */
tx->sd.internal_ops = &adv748x_csi2_internal_ops;
@@ -320,6 +311,9 @@ err_free_media:

void adv748x_csi2_cleanup(struct adv748x_csi2 *tx)
{
if (!is_tx_enabled(tx))
return;

v4l2_async_unregister_subdev(&tx->sd);
media_entity_cleanup(&tx->sd.entity);
v4l2_ctrl_handler_free(&tx->ctrl_hdl);

@@ -94,6 +94,7 @@ struct adv748x_csi2 {
struct adv748x_state *state;
struct v4l2_mbus_framefmt format;
unsigned int page;
unsigned int port;

struct media_pad pads[ADV748X_CSI2_NR_PADS];
struct v4l2_ctrl_handler ctrl_hdl;
@@ -102,6 +103,7 @@ struct adv748x_csi2 {

#define notifier_to_csi2(n) container_of(n, struct adv748x_csi2, notifier)
#define adv748x_sd_to_csi2(sd) container_of(sd, struct adv748x_csi2, sd)
#define is_tx_enabled(_tx) ((_tx)->state->endpoints[(_tx)->port] != NULL)

enum adv748x_hdmi_pads {
ADV748X_HDMI_SINK,
@@ -182,7 +182,8 @@ static int dw9714_probe(struct i2c_client *client)
return 0;

err_cleanup:
dw9714_subdev_cleanup(dw9714_dev);
v4l2_ctrl_handler_free(&dw9714_dev->ctrls_vcm);
media_entity_cleanup(&dw9714_dev->sd.entity);
dev_err(&client->dev, "Probe failed: %d\n", rval);
return rval;
}

@@ -1102,7 +1102,8 @@ fail_nobase_res:

while (i >= 0) {
res = platform_get_resource(pdev, IORESOURCE_MEM, i);
release_mem_region(res->start, resource_size(res));
if (res)
release_mem_region(res->start, resource_size(res));
i--;
}
vpfe_unregister_ccdc_device(&isif_hw_dev);

@@ -2374,7 +2374,7 @@ static int pxa_camera_probe(struct platform_device *pdev)
pcdev->res = res;

pcdev->pdata = pdev->dev.platform_data;
if (&pdev->dev.of_node && !pcdev->pdata) {
if (pdev->dev.of_node && !pcdev->pdata) {
err = pxa_camera_pdata_from_dt(&pdev->dev, pcdev, &pcdev->asd);
} else {
pcdev->platform_flags = pcdev->pdata->flags;

@@ -40,6 +40,7 @@
#define RC6_6A_MCE_TOGGLE_MASK 0x8000 /* for the body bits */
#define RC6_6A_LCC_MASK 0xffff0000 /* RC6-6A-32 long customer code mask */
#define RC6_6A_MCE_CC 0x800f0000 /* MCE customer code */
#define RC6_6A_KATHREIN_CC 0x80460000 /* Kathrein RCU-676 customer code */
#ifndef CHAR_BIT
#define CHAR_BIT 8 /* Normally in <limits.h> */
#endif
@@ -252,13 +253,17 @@ again:
toggle = 0;
break;
case 32:
if ((scancode & RC6_6A_LCC_MASK) == RC6_6A_MCE_CC) {
switch (scancode & RC6_6A_LCC_MASK) {
case RC6_6A_MCE_CC:
case RC6_6A_KATHREIN_CC:
protocol = RC_PROTO_RC6_MCE;
toggle = !!(scancode & RC6_6A_MCE_TOGGLE_MASK);
scancode &= ~RC6_6A_MCE_TOGGLE_MASK;
} else {
break;
default:
protocol = RC_PROTO_RC6_6A_32;
toggle = 0;
break;
}
break;
default:

@@ -1389,7 +1389,7 @@ int cx231xx_g_register(struct file *file, void *priv,
ret = cx231xx_read_ctrl_reg(dev, VRT_GET_REGISTER,
(u16)reg->reg, value, 4);
reg->val = value[0] | value[1] << 8 |
value[2] << 16 | value[3] << 24;
value[2] << 16 | (u32)value[3] << 24;
reg->size = 4;
break;
case 1: /* AFE - read byte */

@@ -296,11 +296,24 @@ static int ti_tscadc_remove(struct platform_device *pdev)
return 0;
}

static int __maybe_unused ti_tscadc_can_wakeup(struct device *dev, void *data)
{
return device_may_wakeup(dev);
}

static int __maybe_unused tscadc_suspend(struct device *dev)
{
struct ti_tscadc_dev *tscadc = dev_get_drvdata(dev);

regmap_write(tscadc->regmap, REG_SE, 0x00);
if (device_for_each_child(dev, NULL, ti_tscadc_can_wakeup)) {
u32 ctrl;

regmap_read(tscadc->regmap, REG_CTRL, &ctrl);
ctrl &= ~(CNTRLREG_POWERDOWN);
ctrl |= CNTRLREG_TSCSSENB;
regmap_write(tscadc->regmap, REG_CTRL, ctrl);
}
pm_runtime_put_sync(dev);
|
||||
|
||||
return 0;
|
||||
|
||||
@@ -1028,8 +1028,6 @@ err1:
|
||||
|
||||
void cxl_guest_remove_afu(struct cxl_afu *afu)
|
||||
{
|
||||
pr_devel("in %s - AFU(%d)\n", __func__, afu->slice);
|
||||
|
||||
if (!afu)
|
||||
return;
|
||||
|
||||
|
||||
@@ -914,8 +914,9 @@ static void tmio_mmc_finish_request(struct tmio_mmc_host *host)
|
||||
if (mrq->cmd->error || (mrq->data && mrq->data->error))
|
||||
tmio_mmc_abort_dma(host);
|
||||
|
||||
if (host->check_scc_error)
|
||||
host->check_scc_error(host);
|
||||
/* SCC error means retune, but executed command was still successful */
|
||||
if (host->check_scc_error && host->check_scc_error(host))
|
||||
mmc_retune_needed(host->mmc);
|
||||
|
||||
/* If SET_BLOCK_COUNT, continue with main command */
|
||||
if (host->mrq && !mrq->cmd->error) {
|
||||
|
||||
@@ -30,7 +30,6 @@
|
||||
struct of_flash_list {
|
||||
struct mtd_info *mtd;
|
||||
struct map_info map;
|
||||
struct resource *res;
|
||||
};
|
||||
|
||||
struct of_flash {
|
||||
@@ -55,18 +54,10 @@ static int of_flash_remove(struct platform_device *dev)
|
||||
mtd_concat_destroy(info->cmtd);
|
||||
}
|
||||
|
||||
for (i = 0; i < info->list_size; i++) {
|
||||
for (i = 0; i < info->list_size; i++)
|
||||
if (info->list[i].mtd)
|
||||
map_destroy(info->list[i].mtd);
|
||||
|
||||
if (info->list[i].map.virt)
|
||||
iounmap(info->list[i].map.virt);
|
||||
|
||||
if (info->list[i].res) {
|
||||
release_resource(info->list[i].res);
|
||||
kfree(info->list[i].res);
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -214,10 +205,11 @@ static int of_flash_probe(struct platform_device *dev)
|
||||
|
||||
err = -EBUSY;
|
||||
res_size = resource_size(&res);
|
||||
info->list[i].res = request_mem_region(res.start, res_size,
|
||||
dev_name(&dev->dev));
|
||||
if (!info->list[i].res)
|
||||
info->list[i].map.virt = devm_ioremap_resource(&dev->dev, &res);
|
||||
if (IS_ERR(info->list[i].map.virt)) {
|
||||
err = PTR_ERR(info->list[i].map.virt);
|
||||
goto err_out;
|
||||
}
|
||||
|
||||
err = -ENXIO;
|
||||
width = of_get_property(dp, "bank-width", NULL);
|
||||
@@ -240,15 +232,6 @@ static int of_flash_probe(struct platform_device *dev)
|
||||
if (err)
|
||||
goto err_out;
|
||||
|
||||
err = -ENOMEM;
|
||||
info->list[i].map.virt = ioremap(info->list[i].map.phys,
|
||||
info->list[i].map.size);
|
||||
if (!info->list[i].map.virt) {
|
||||
dev_err(&dev->dev, "Failed to ioremap() flash"
|
||||
" region\n");
|
||||
goto err_out;
|
||||
}
|
||||
|
||||
simple_map_init(&info->list[i].map);
|
||||
|
||||
/*
|
||||
|
||||
@@ -480,7 +480,7 @@ static void read_fiforeg(struct sh_flctl *flctl, int rlen, int offset)
|
||||
|
||||
/* initiate DMA transfer */
|
||||
if (flctl->chan_fifo0_rx && rlen >= 32 &&
|
||||
flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_DEV_TO_MEM) > 0)
|
||||
flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_FROM_DEVICE) > 0)
|
||||
goto convert; /* DMA success */
|
||||
|
||||
/* do polling transfer */
|
||||
@@ -539,7 +539,7 @@ static void write_ec_fiforeg(struct sh_flctl *flctl, int rlen,
|
||||
|
||||
/* initiate DMA transfer */
|
||||
if (flctl->chan_fifo0_tx && rlen >= 32 &&
|
||||
flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_MEM_TO_DEV) > 0)
|
||||
flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_TO_DEVICE) > 0)
|
||||
return; /* DMA success */
|
||||
|
||||
/* do polling transfer */
|
||||
|
||||
@@ -266,8 +266,8 @@ void cxgb4_dcb_handle_fw_update(struct adapter *adap,
enum cxgb4_dcb_state_input input =
((pcmd->u.dcb.control.all_syncd_pkd &
FW_PORT_CMD_ALL_SYNCD_F)
? CXGB4_DCB_STATE_FW_ALLSYNCED
: CXGB4_DCB_STATE_FW_INCOMPLETE);
? CXGB4_DCB_INPUT_FW_ALLSYNCED
: CXGB4_DCB_INPUT_FW_INCOMPLETE);

if (dcb->dcb_version != FW_PORT_DCB_VER_UNKNOWN) {
dcb_running_version = FW_PORT_CMD_DCB_VERSION_G(

@@ -67,7 +67,7 @@
do { \
if ((__dcb)->dcb_version == FW_PORT_DCB_VER_IEEE) \
cxgb4_dcb_state_fsm((__dev), \
CXGB4_DCB_STATE_FW_ALLSYNCED); \
CXGB4_DCB_INPUT_FW_ALLSYNCED); \
} while (0)

/* States we can be in for a port's Data Center Bridging.

@@ -1307,13 +1307,11 @@ static int hns3_nic_change_mtu(struct net_device *netdev, int new_mtu)
}

ret = h->ae_algo->ops->set_mtu(h, new_mtu);
if (ret) {
if (ret)
netdev_err(netdev, "failed to change MTU in hardware %d\n",
ret);
return ret;
}

netdev->mtu = new_mtu;
else
netdev->mtu = new_mtu;

/* if the netdev was running earlier, bring it up again */
if (if_running && hns3_nic_net_open(netdev))

@@ -3201,7 +3201,7 @@ int i40e_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link)
vf->link_forced = true;
vf->link_up = true;
pfe.event_data.link_event.link_status = true;
pfe.event_data.link_event.link_speed = I40E_LINK_SPEED_40GB;
pfe.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
break;
case IFLA_VF_LINK_STATE_DISABLE:
vf->link_forced = true;

@@ -3490,12 +3490,18 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter *adapter)
else
mtqc |= IXGBE_MTQC_64VF;
} else {
if (tcs > 4)
if (tcs > 4) {
mtqc = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
else if (tcs > 1)
} else if (tcs > 1) {
mtqc = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
else
mtqc = IXGBE_MTQC_64Q_1PB;
} else {
u8 max_txq = adapter->num_tx_queues +
adapter->num_xdp_queues;
if (max_txq > 63)
mtqc = IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
else
mtqc = IXGBE_MTQC_64Q_1PB;
}
}

IXGBE_WRITE_REG(hw, IXGBE_MTQC, mtqc);
@@ -5123,6 +5129,7 @@ static void ixgbe_fdir_filter_restore(struct ixgbe_adapter *adapter)
struct ixgbe_hw *hw = &adapter->hw;
struct hlist_node *node2;
struct ixgbe_fdir_filter *filter;
u64 action;

spin_lock(&adapter->fdir_perfect_lock);

@@ -5131,12 +5138,17 @@ static void ixgbe_fdir_filter_restore(struct ixgbe_adapter *adapter)

hlist_for_each_entry_safe(filter, node2,
&adapter->fdir_filter_list, fdir_node) {
action = filter->action;
if (action != IXGBE_FDIR_DROP_QUEUE && action != 0)
action =
(action >> ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF) - 1;

ixgbe_fdir_write_perfect_filter_82599(hw,
&filter->filter,
filter->sw_idx,
(filter->action == IXGBE_FDIR_DROP_QUEUE) ?
(action == IXGBE_FDIR_DROP_QUEUE) ?
IXGBE_FDIR_DROP_QUEUE :
adapter->rx_ring[filter->action]->reg_idx);
adapter->rx_ring[action]->reg_idx);
}

spin_unlock(&adapter->fdir_perfect_lock);

@@ -1939,8 +1939,15 @@ static int mlxsw_sp_switchdev_event(struct notifier_block *unused,
struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
struct mlxsw_sp_switchdev_event_work *switchdev_work;
struct switchdev_notifier_fdb_info *fdb_info = ptr;
struct net_device *br_dev;

if (!mlxsw_sp_port_dev_lower_find_rcu(dev))
/* Tunnel devices are not our uppers, so check their master instead */
br_dev = netdev_master_upper_dev_get_rcu(dev);
if (!br_dev)
return NOTIFY_DONE;
if (!netif_is_bridge_master(br_dev))
return NOTIFY_DONE;
if (!mlxsw_sp_port_dev_lower_find_rcu(br_dev))
return NOTIFY_DONE;

switchdev_work = kzalloc(sizeof(*switchdev_work), GFP_ATOMIC);

@@ -579,7 +579,7 @@ static void cdc_ncm_set_dgram_size(struct usbnet *dev, int new_size)
err = usbnet_read_cmd(dev, USB_CDC_GET_MAX_DATAGRAM_SIZE,
USB_TYPE_CLASS | USB_DIR_IN | USB_RECIP_INTERFACE,
0, iface_no, &max_datagram_size, sizeof(max_datagram_size));
if (err < sizeof(max_datagram_size)) {
if (err != sizeof(max_datagram_size)) {
dev_dbg(&dev->intf->dev, "GET_MAX_DATAGRAM_SIZE failed\n");
goto out;
}

@@ -881,6 +881,7 @@ struct ath10k {

struct completion install_key_done;

int last_wmi_vdev_start_status;
struct completion vdev_setup_done;

struct workqueue_struct *workqueue;

@@ -955,7 +955,7 @@ static inline int ath10k_vdev_setup_sync(struct ath10k *ar)
if (time_left == 0)
return -ETIMEDOUT;

return 0;
return ar->last_wmi_vdev_start_status;
}

static int ath10k_monitor_vdev_start(struct ath10k *ar, int vdev_id)

@@ -3133,18 +3133,31 @@ void ath10k_wmi_event_vdev_start_resp(struct ath10k *ar, struct sk_buff *skb)
{
struct wmi_vdev_start_ev_arg arg = {};
int ret;
u32 status;

ath10k_dbg(ar, ATH10K_DBG_WMI, "WMI_VDEV_START_RESP_EVENTID\n");

ar->last_wmi_vdev_start_status = 0;

ret = ath10k_wmi_pull_vdev_start(ar, skb, &arg);
if (ret) {
ath10k_warn(ar, "failed to parse vdev start event: %d\n", ret);
return;
ar->last_wmi_vdev_start_status = ret;
goto out;
}

if (WARN_ON(__le32_to_cpu(arg.status)))
return;
status = __le32_to_cpu(arg.status);
if (WARN_ON_ONCE(status)) {
ath10k_warn(ar, "vdev-start-response reports status error: %d (%s)\n",
status, (status == WMI_VDEV_START_CHAN_INVALID) ?
"chan-invalid" : "unknown");
/* Setup is done one way or another though, so we should still
* do the completion, so don't return here.
*/
ar->last_wmi_vdev_start_status = -EINVAL;
}

out:
complete(&ar->vdev_setup_done);
}

@@ -6480,11 +6480,17 @@ struct wmi_ch_info_ev_arg {
__le32 rx_frame_count;
};

/* From 10.4 firmware, not sure all have the same values. */
enum wmi_vdev_start_status {
WMI_VDEV_START_OK = 0,
WMI_VDEV_START_CHAN_INVALID,
};

struct wmi_vdev_start_ev_arg {
__le32 vdev_id;
__le32 req_id;
__le32 resp_type; /* %WMI_VDEV_RESP_ */
__le32 status;
__le32 status; /* See wmi_vdev_start_status enum above */
};

struct wmi_peer_kick_ev_arg {


@@ -411,7 +411,7 @@ ath_cmn_process_ht20_40_fft(struct ath_rx_status *rs,

ath_dbg(common, SPECTRAL_SCAN,
"Calculated new upper max 0x%X at %i\n",
tmp_mag, i);
tmp_mag, fft_sample_40.upper_max_index);
} else
for (i = dc_pos; i < SPECTRAL_HT20_40_NUM_BINS; i++) {
if (fft_sample_40.data[i] == (upper_mag >> max_exp))

@@ -74,7 +74,7 @@
#define P2P_AF_MAX_WAIT_TIME msecs_to_jiffies(2000)
#define P2P_INVALID_CHANNEL -1
#define P2P_CHANNEL_SYNC_RETRY 5
#define P2P_AF_FRM_SCAN_MAX_WAIT msecs_to_jiffies(1500)
#define P2P_AF_FRM_SCAN_MAX_WAIT msecs_to_jiffies(450)
#define P2P_DEFAULT_SLEEP_TIME_VSDB 200

/* WiFi P2P Public Action Frame OUI Subtypes */
@@ -1139,7 +1139,6 @@ static s32 brcmf_p2p_af_searching_channel(struct brcmf_p2p_info *p2p)
{
struct afx_hdl *afx_hdl = &p2p->afx_hdl;
struct brcmf_cfg80211_vif *pri_vif;
unsigned long duration;
s32 retry;

brcmf_dbg(TRACE, "Enter\n");
@@ -1155,7 +1154,6 @@ static s32 brcmf_p2p_af_searching_channel(struct brcmf_p2p_info *p2p)
* pending action frame tx is cancelled.
*/
retry = 0;
duration = msecs_to_jiffies(P2P_AF_FRM_SCAN_MAX_WAIT);
while ((retry < P2P_CHANNEL_SYNC_RETRY) &&
(afx_hdl->peer_chan == P2P_INVALID_CHANNEL)) {
afx_hdl->is_listen = false;
@@ -1163,7 +1161,8 @@ static s32 brcmf_p2p_af_searching_channel(struct brcmf_p2p_info *p2p)
retry);
/* search peer on peer's listen channel */
schedule_work(&afx_hdl->afx_work);
wait_for_completion_timeout(&afx_hdl->act_frm_scan, duration);
wait_for_completion_timeout(&afx_hdl->act_frm_scan,
P2P_AF_FRM_SCAN_MAX_WAIT);
if ((afx_hdl->peer_chan != P2P_INVALID_CHANNEL) ||
(!test_bit(BRCMF_P2P_STATUS_FINDING_COMMON_CHANNEL,
&p2p->status)))
@@ -1176,7 +1175,7 @@ static s32 brcmf_p2p_af_searching_channel(struct brcmf_p2p_info *p2p)
afx_hdl->is_listen = true;
schedule_work(&afx_hdl->afx_work);
wait_for_completion_timeout(&afx_hdl->act_frm_scan,
duration);
P2P_AF_FRM_SCAN_MAX_WAIT);
}
if ((afx_hdl->peer_chan != P2P_INVALID_CHANNEL) ||
(!test_bit(BRCMF_P2P_STATUS_FINDING_COMMON_CHANNEL,
@@ -1463,10 +1462,12 @@ int brcmf_p2p_notify_action_tx_complete(struct brcmf_if *ifp,
return 0;

if (e->event_code == BRCMF_E_ACTION_FRAME_COMPLETE) {
if (e->status == BRCMF_E_STATUS_SUCCESS)
if (e->status == BRCMF_E_STATUS_SUCCESS) {
set_bit(BRCMF_P2P_STATUS_ACTION_TX_COMPLETED,
&p2p->status);
else {
if (!p2p->wait_for_offchan_complete)
complete(&p2p->send_af_done);
} else {
set_bit(BRCMF_P2P_STATUS_ACTION_TX_NOACK, &p2p->status);
/* If there is no ack, we don't need to wait for
* WLC_E_ACTION_FRAME_OFFCHAN_COMPLETE event
@@ -1517,6 +1518,17 @@ static s32 brcmf_p2p_tx_action_frame(struct brcmf_p2p_info *p2p,
p2p->af_sent_channel = le32_to_cpu(af_params->channel);
p2p->af_tx_sent_jiffies = jiffies;

if (test_bit(BRCMF_P2P_STATUS_DISCOVER_LISTEN, &p2p->status) &&
p2p->af_sent_channel ==
ieee80211_frequency_to_channel(p2p->remain_on_channel.center_freq))
p2p->wait_for_offchan_complete = false;
else
p2p->wait_for_offchan_complete = true;

brcmf_dbg(TRACE, "Waiting for %s tx completion event\n",
(p2p->wait_for_offchan_complete) ?
"off-channel" : "on-channel");

timeout = wait_for_completion_timeout(&p2p->send_af_done,
P2P_AF_MAX_WAIT_TIME);


@@ -124,6 +124,7 @@ struct afx_hdl {
* @gon_req_action: about to send go negotiation requets frame.
* @block_gon_req_tx: drop tx go negotiation requets frame.
* @p2pdev_dynamically: is p2p device if created by module param or supplicant.
* @wait_for_offchan_complete: wait for off-channel tx completion event.
*/
struct brcmf_p2p_info {
struct brcmf_cfg80211_info *cfg;
@@ -144,6 +145,7 @@ struct brcmf_p2p_info {
bool gon_req_action;
bool block_gon_req_tx;
bool p2pdev_dynamically;
bool wait_for_offchan_complete;
};

s32 brcmf_p2p_attach(struct brcmf_cfg80211_info *cfg, bool p2pdev_forced);

@@ -947,8 +947,10 @@ int iwl_mvm_wowlan_config_key_params(struct iwl_mvm *mvm,
{
struct iwl_wowlan_kek_kck_material_cmd kek_kck_cmd = {};
struct iwl_wowlan_tkip_params_cmd tkip_cmd = {};
bool unified = fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_CNSLDTD_D3_D0_IMG);
struct wowlan_key_data key_data = {
.configure_keys = !d0i3,
.configure_keys = !d0i3 && !unified,
.use_rsc_tsc = false,
.tkip = &tkip_cmd,
.use_tkip = false,

@@ -509,9 +509,16 @@ static int qtnf_del_key(struct wiphy *wiphy, struct net_device *dev,
int ret;

ret = qtnf_cmd_send_del_key(vif, key_index, pairwise, mac_addr);
if (ret)
pr_err("VIF%u.%u: failed to delete key: idx=%u pw=%u\n",
vif->mac->macid, vif->vifid, key_index, pairwise);
if (ret) {
if (ret == -ENOENT) {
pr_debug("VIF%u.%u: key index %d out of bounds\n",
vif->mac->macid, vif->vifid, key_index);
} else {
pr_err("VIF%u.%u: failed to delete key: idx=%u pw=%u\n",
vif->mac->macid, vif->vifid,
key_index, pairwise);
}
}

return ret;
}

@@ -485,6 +485,9 @@ qtnf_sta_info_parse_rate(struct rate_info *rate_dst,
rate_dst->flags |= RATE_INFO_FLAGS_MCS;
else if (rate_src->flags & QLINK_STA_INFO_RATE_FLAG_VHT_MCS)
rate_dst->flags |= RATE_INFO_FLAGS_VHT_MCS;

if (rate_src->flags & QLINK_STA_INFO_RATE_FLAG_SHORT_GI)
rate_dst->flags |= RATE_INFO_FLAGS_SHORT_GI;
}

static void

@@ -172,7 +172,8 @@ static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
return vif->hash.mapping[skb_get_hash_raw(skb) % size];
}

static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
static netdev_tx_t
xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct xenvif *vif = netdev_priv(dev);
struct xenvif_queue *queue = NULL;

@@ -551,13 +551,16 @@ static const unsigned int tvc_3512_pins[] = {
319, /* TVC_DATA[1] */
301, /* TVC_DATA[2] */
283, /* TVC_DATA[3] */
265, /* TVC_CLK */
320, /* TVC_DATA[4] */
302, /* TVC_DATA[5] */
284, /* TVC_DATA[6] */
266, /* TVC_DATA[7] */
};

static const unsigned int tvc_clk_3512_pins[] = {
265, /* TVC_CLK */
};

/* NAND flash pins */
static const unsigned int nflash_3512_pins[] = {
199, 200, 201, 202, 216, 217, 218, 219, 220, 234, 235, 236, 237, 252,
@@ -589,7 +592,7 @@ static const unsigned int pflash_3512_pins_extended[] = {
/* Serial flash pins CE0, CE1, DI, DO, CK */
static const unsigned int sflash_3512_pins[] = { 230, 231, 232, 233, 211 };

/* The GPIO0A (0) pin overlap with TVC and extended parallel flash */
/* The GPIO0A (0) pin overlap with TVC CLK and extended parallel flash */
static const unsigned int gpio0a_3512_pins[] = { 265 };

/* The GPIO0B (1-4) pins overlap with TVC and ICE */
@@ -772,7 +775,13 @@ static const struct gemini_pin_group gemini_3512_pin_groups[] = {
.num_pins = ARRAY_SIZE(tvc_3512_pins),
/* Conflict with character LCD and ICE */
.mask = LCD_PADS_ENABLE,
.value = TVC_PADS_ENABLE | TVC_CLK_PAD_ENABLE,
.value = TVC_PADS_ENABLE,
},
{
.name = "tvcclkgrp",
.pins = tvc_clk_3512_pins,
.num_pins = ARRAY_SIZE(tvc_clk_3512_pins),
.value = TVC_CLK_PAD_ENABLE,
},
/*
* The construction is done such that it is possible to use a serial
@@ -809,8 +818,8 @@ static const struct gemini_pin_group gemini_3512_pin_groups[] = {
.name = "gpio0agrp",
.pins = gpio0a_3512_pins,
.num_pins = ARRAY_SIZE(gpio0a_3512_pins),
/* Conflict with TVC */
.mask = TVC_PADS_ENABLE,
/* Conflict with TVC CLK */
.mask = TVC_CLK_PAD_ENABLE,
},
{
.name = "gpio0bgrp",
@@ -1476,13 +1485,16 @@ static const unsigned int tvc_3516_pins[] = {
311, /* TVC_DATA[1] */
394, /* TVC_DATA[2] */
374, /* TVC_DATA[3] */
333, /* TVC_CLK */
354, /* TVC_DATA[4] */
395, /* TVC_DATA[5] */
312, /* TVC_DATA[6] */
334, /* TVC_DATA[7] */
};

static const unsigned int tvc_clk_3516_pins[] = {
333, /* TVC_CLK */
};

/* NAND flash pins */
static const unsigned int nflash_3516_pins[] = {
243, 260, 261, 224, 280, 262, 281, 264, 300, 263, 282, 301, 320, 283,
@@ -1515,7 +1527,7 @@ static const unsigned int pflash_3516_pins_extended[] = {
static const unsigned int sflash_3516_pins[] = { 296, 338, 295, 359, 339 };

/* The GPIO0A (0-4) pins overlap with TVC and extended parallel flash */
static const unsigned int gpio0a_3516_pins[] = { 333, 354, 395, 312, 334 };
static const unsigned int gpio0a_3516_pins[] = { 354, 395, 312, 334 };

/* The GPIO0B (5-7) pins overlap with ICE */
static const unsigned int gpio0b_3516_pins[] = { 375, 396, 376 };
@@ -1547,6 +1559,9 @@ static const unsigned int gpio0j_3516_pins[] = { 359, 339 };
/* The GPIO0K (30,31) pins overlap with NAND flash */
static const unsigned int gpio0k_3516_pins[] = { 275, 298 };

/* The GPIO0L (0) pins overlap with TVC_CLK */
static const unsigned int gpio0l_3516_pins[] = { 333 };

/* The GPIO1A (0-4) pins that overlap with IDE and parallel flash */
static const unsigned int gpio1a_3516_pins[] = { 221, 200, 222, 201, 220 };

@@ -1693,7 +1708,13 @@ static const struct gemini_pin_group gemini_3516_pin_groups[] = {
.num_pins = ARRAY_SIZE(tvc_3516_pins),
/* Conflict with character LCD */
.mask = LCD_PADS_ENABLE,
.value = TVC_PADS_ENABLE | TVC_CLK_PAD_ENABLE,
.value = TVC_PADS_ENABLE,
},
{
.name = "tvcclkgrp",
.pins = tvc_clk_3516_pins,
.num_pins = ARRAY_SIZE(tvc_clk_3516_pins),
.value = TVC_CLK_PAD_ENABLE,
},
/*
* The construction is done such that it is possible to use a serial
@@ -1804,6 +1825,13 @@ static const struct gemini_pin_group gemini_3516_pin_groups[] = {
/* Conflict with parallel and NAND flash */
.value = PFLASH_PADS_DISABLE | NAND_PADS_DISABLE,
},
{
.name = "gpio0lgrp",
.pins = gpio0l_3516_pins,
.num_pins = ARRAY_SIZE(gpio0l_3516_pins),
/* Conflict with TVE CLK */
.mask = TVC_CLK_PAD_ENABLE,
},
{
.name = "gpio1agrp",
.pins = gpio1a_3516_pins,
@@ -2164,7 +2192,8 @@ static int gemini_pmx_set_mux(struct pinctrl_dev *pctldev,
func->name, grp->name);

regmap_read(pmx->map, GLOBAL_MISC_CTRL, &before);
regmap_update_bits(pmx->map, GLOBAL_MISC_CTRL, grp->mask,
regmap_update_bits(pmx->map, GLOBAL_MISC_CTRL,
grp->mask | grp->value,
grp->value);
regmap_read(pmx->map, GLOBAL_MISC_CTRL, &after);


@@ -48,6 +48,11 @@ static ssize_t firmware_store(struct device *dev,
}

len = strcspn(buf, "\n");
if (!len) {
dev_err(dev, "can't provide a NULL firmware\n");
err = -EINVAL;
goto out;
}

p = kstrndup(buf, len, GFP_KERNEL);
if (!p) {

@@ -466,28 +466,29 @@ struct reset_control *__of_reset_control_get(struct device_node *node,
break;
}
}
of_node_put(args.np);

if (!rcdev) {
mutex_unlock(&reset_list_mutex);
return ERR_PTR(-EPROBE_DEFER);
rstc = ERR_PTR(-EPROBE_DEFER);
goto out;
}

if (WARN_ON(args.args_count != rcdev->of_reset_n_cells)) {
mutex_unlock(&reset_list_mutex);
return ERR_PTR(-EINVAL);
rstc = ERR_PTR(-EINVAL);
goto out;
}

rstc_id = rcdev->of_xlate(rcdev, &args);
if (rstc_id < 0) {
mutex_unlock(&reset_list_mutex);
return ERR_PTR(rstc_id);
rstc = ERR_PTR(rstc_id);
goto out;
}

/* reset_list_mutex also protects the rcdev's reset_control list */
rstc = __reset_control_get_internal(rcdev, rstc_id, shared);

out:
mutex_unlock(&reset_list_mutex);
of_node_put(args.np);

return rstc;
}

@@ -287,7 +287,7 @@ static int fsl_lpspi_config(struct fsl_lpspi_data *fsl_lpspi)

fsl_lpspi_set_watermark(fsl_lpspi);

temp = CFGR1_PCSCFG | CFGR1_MASTER | CFGR1_NOSTALL;
temp = CFGR1_PCSCFG | CFGR1_MASTER;
if (fsl_lpspi->config.mode & SPI_CS_HIGH)
temp |= CFGR1_PCSPOL;
writel(temp, fsl_lpspi->base + IMX7ULP_CFGR1);

@@ -522,11 +522,11 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
mdata->xfer_len = min(MTK_SPI_MAX_FIFO_SIZE, len);
mtk_spi_setup_packet(master);

cnt = len / 4;
cnt = mdata->xfer_len / 4;
iowrite32_rep(mdata->base + SPI_TX_DATA_REG,
trans->tx_buf + mdata->num_xfered, cnt);

remainder = len % 4;
remainder = mdata->xfer_len % 4;
if (remainder > 0) {
reg_val = 0;
memcpy(&reg_val,

@@ -445,6 +445,9 @@ static int rockchip_spi_prepare_dma(struct rockchip_spi *rs)
struct dma_slave_config rxconf, txconf;
struct dma_async_tx_descriptor *rxdesc, *txdesc;

memset(&rxconf, 0, sizeof(rxconf));
memset(&txconf, 0, sizeof(txconf));

spin_lock_irqsave(&rs->lock, flags);
rs->state &= ~RXBUSY;
rs->state &= ~TXBUSY;

@@ -724,11 +724,9 @@ static int spidev_probe(struct spi_device *spi)
* compatible string, it is a Linux implementation thing
* rather than a description of the hardware.
*/
if (spi->dev.of_node && !of_match_device(spidev_dt_ids, &spi->dev)) {
dev_err(&spi->dev, "buggy DT: spidev listed directly in DT\n");
WARN_ON(spi->dev.of_node &&
!of_match_device(spidev_dt_ids, &spi->dev));
}
WARN(spi->dev.of_node &&
of_device_is_compatible(spi->dev.of_node, "spidev"),
"%pOF: buggy DT: spidev listed directly in DT\n", spi->dev.of_node);

spidev_probe_acpi(spi);


@@ -590,8 +590,10 @@ static int __init optee_driver_init(void)
return -ENODEV;

np = of_find_matching_node(fw_np, optee_match);
if (!np || !of_device_is_available(np))
if (!np || !of_device_is_available(np)) {
of_node_put(np);
return -ENODEV;
}

optee = optee_probe(np);
of_node_put(np);

@@ -277,27 +277,36 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned cmd,
const struct usb_endpoint_descriptor *desc = dep->endpoint.desc;
struct dwc3 *dwc = dep->dwc;
u32 timeout = 1000;
u32 saved_config = 0;
u32 reg;

int cmd_status = 0;
int susphy = false;
int ret = -EINVAL;

/*
* Synopsys Databook 2.60a states, on section 6.3.2.5.[1-8], that if
* we're issuing an endpoint command, we must check if
* GUSB2PHYCFG.SUSPHY bit is set. If it is, then we need to clear it.
* When operating in USB 2.0 speeds (HS/FS), if GUSB2PHYCFG.ENBLSLPM or
* GUSB2PHYCFG.SUSPHY is set, it must be cleared before issuing an
* endpoint command.
*
* We will also set SUSPHY bit to what it was before returning as stated
* by the same section on Synopsys databook.
* Save and clear both GUSB2PHYCFG.ENBLSLPM and GUSB2PHYCFG.SUSPHY
* settings. Restore them after the command is completed.
*
* DWC_usb3 3.30a and DWC_usb31 1.90a programming guide section 3.2.2
*/
if (dwc->gadget.speed <= USB_SPEED_HIGH) {
reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
if (unlikely(reg & DWC3_GUSB2PHYCFG_SUSPHY)) {
susphy = true;
saved_config |= DWC3_GUSB2PHYCFG_SUSPHY;
reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
}

if (reg & DWC3_GUSB2PHYCFG_ENBLSLPM) {
saved_config |= DWC3_GUSB2PHYCFG_ENBLSLPM;
reg &= ~DWC3_GUSB2PHYCFG_ENBLSLPM;
}

if (saved_config)
dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
}

if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_STARTTRANSFER) {
@@ -395,9 +404,9 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned cmd,
}
}

if (unlikely(susphy)) {
if (saved_config) {
reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
reg |= DWC3_GUSB2PHYCFG_SUSPHY;
reg |= saved_config;
dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
}


@@ -744,7 +744,7 @@ static void fotg210_get_status(struct fotg210_udc *fotg210,
fotg210->ep0_req->length = 2;

spin_unlock(&fotg210->lock);
fotg210_ep_queue(fotg210->gadget.ep0, fotg210->ep0_req, GFP_KERNEL);
fotg210_ep_queue(fotg210->gadget.ep0, fotg210->ep0_req, GFP_ATOMIC);
spin_lock(&fotg210->lock);
}