Merge remote-tracking branch '4.9/tmp-9ae2c67' into 4.9
* 4.9/tmp-9ae2c67:
Linux 4.9.40
alarmtimer: don't rate limit one-shot timers
tracing: Fix kmemleak in instance_rmdir
PM / Domains: defer dev_pm_domain_set() until genpd->attach_dev succeeds if present
reiserfs: Don't clear SGID when inheriting ACLs
spmi: Include OF based modalias in device uevent
of: device: Export of_device_{get_modalias, uvent_modalias} to modules
acpi/nfit: Fix memory corruption/Unregister mce decoder on failure
ovl: fix random return value on mount
hfsplus: Don't clear SGID when inheriting ACLs
mlx5: Avoid that mlx5_ib_sg_to_klms() overflows the klms[] array
drm/mst: Avoid processing partially received up/down message transactions
drm/mst: Avoid dereferencing a NULL mstb in drm_dp_mst_handle_up_req()
drm/mst: Fix error handling during MST sideband message reception
RDMA/core: Initialize port_num in qp_attr
ceph: fix race in concurrent readdir
staging: lustre: ko2iblnd: check copy_from_iter/copy_to_iter return code
staging: sm750fb: avoid conflicting vesafb
staging: comedi: ni_mio_common: fix AO timer off-by-one regression
staging: rtl8188eu: add TL-WN722N v2 support
Revert "perf/core: Drop kernel samples even though :u is specified"
perf annotate: Fix broken arrow at row 0 connecting jmp instruction to its target
iser-target: Avoid isert_conn->cm_id dereference in isert_login_recv_done
target: Fix COMPARE_AND_WRITE caw_sem leak during se_cmd quiesce
udf: Fix deadlock between writeback and udf_setsize()
NFS: only invalidate dentrys that are clearly invalid.
sunrpc: use constant time memory comparison for mac
IB/core: Namespace is mandatory input for address resolution
IB/iser: Fix connection teardown race condition
Input: i8042 - fix crash at boot time
MIPS: Fix a typo: s/preset/present/ in r2-to-r6 emulation error message
MIPS: Send SIGILL for R6 branches in `__compute_return_epc_for_insn'
MIPS: Send SIGILL for linked branches in `__compute_return_epc_for_insn'
MIPS: Rename `sigill_r6' to `sigill_r2r6' in `__compute_return_epc_for_insn'
MIPS: Send SIGILL for BPOSGE32 in `__compute_return_epc_for_insn'
MIPS: math-emu: Prevent wrong ISA mode instruction emulation
MIPS: Fix unaligned PC interpretation in `compute_return_epc'
MIPS: Actually decode JALX in `__compute_return_epc_for_insn'
MIPS: Save static registers before sysmips
MIPS: Fix MIPS I ISA /proc/cpuinfo reporting
x86/ioapic: Pass the correct data to unmask_ioapic_irq()
x86/acpi: Prevent out of bound access caused by broken ACPI tables
Revert "ACPI / EC: Enable event freeze mode..." to fix a regression
ACPI / EC: Drop EC noirq hooks to fix a regression
ubifs: Don't leak kernel memory to the MTD
MIPS: Negate error syscall return in trace
MIPS: Fix mips_atomic_set() with EVA
MIPS: Fix mips_atomic_set() retry condition
ftrace: Fix uninitialized variable in match_records()
nvme-rdma: remove race conditions from IB signalling
vfio: New external user group/file match
vfio: Fix group release deadlock
ovl: drop CAP_SYS_RESOURCE from saved mounter's credentials
drm/ttm: Fix use-after-free in ttm_bo_clean_mm
f2fs: Don't clear SGID when inheriting ACLs
f2fs: sanity check size of nat and sit cache
xfs: Don't clear SGID when inheriting ACLs
ipmi:ssif: Add missing unlock in error branch
ipmi: use rcu lock around call to intf->handlers->sender()
drm/radeon: Fix eDP for single-display iMac10,1 (v2)
drm/radeon/ci: disable mclk switching for high refresh rates (v2)
drm/amd/amdgpu: Return error if initiating read out of range on vram
s390/syscalls: Fix out of bounds arguments access
Raid5 should update rdev->sectors after reshape
ext2: Don't clear SGID when inheriting ACLs
libnvdimm: fix badblock range handling of ARS range
libnvdimm, btt: fix btt_rw_page not returning errors
cx88: Fix regression in initial video standard setting
x86/xen: allow userspace access during hypercalls
md: don't use flush_signals in userspace processes
usb: renesas_usbhs: gadget: disable all eps when the driver stops
usb: renesas_usbhs: fix usbhsc_resume() for !USBHSF_RUNTIME_PWCTRL
USB: cdc-acm: add device-id for quirky printer
usb: storage: return on error to avoid a null pointer dereference
mxl111sf: Fix driver to use heap allocate buffers for USB messages
xhci: Bad Ethernet performance plugged in ASM1042A host
xhci: Fix NULL pointer dereference when cleaning up streams for removed host
xhci: fix 20000ms port resume timeout
ipvs: SNAT packet replies only for NATed connections
PCI/PM: Restore the status of PCI devices across hibernation
PCI: rockchip: Use normal register bank for config accessors
PCI: Work around poweroff & suspend-to-RAM issue on Macbook Pro 11
af_key: Fix sadb_x_ipsecrequest parsing
powerpc/mm/radix: Properly clear process table entry
powerpc/asm: Mark cr0 as clobbered in mftb()
powerpc: Fix emulation of mfocrf in emulate_step()
powerpc: Fix emulation of mcrf in emulate_step()
powerpc/64: Fix atomic64_inc_not_zero() to return an int
powerpc/pseries: Fix passing of pp0 in updatepp() and updateboltedpp()
xen/scsiback: Fix a TMR related use-after-free
iscsi-target: Add login_keys_workaround attribute for non RFC initiators
scsi: Add STARGET_CREATED_REMOVE state to scsi_target_state
scsi: ses: do not add a device to an enclosure if enclosure_add_links() fails.
PM / Domains: Fix unsafe iteration over modified list of domains
PM / Domains: Fix unsafe iteration over modified list of domain providers
PM / Domains: Fix unsafe iteration over modified list of device links
ASoC: compress: Derive substream from stream based on direction
igb: Explicitly select page 0 at initialization
btrfs: Don't clear SGID when inheriting ACLs
wlcore: fix 64K page support
Bluetooth: use constant time memory comparison for secret values
perf intel-pt: Clear FUP flag on error
perf intel-pt: Use FUP always when scanning for an IP
perf intel-pt: Ensure never to set 'last_ip' when packet 'count' is zero
perf intel-pt: Fix last_ip usage
perf intel-pt: Ensure IP is zero when state is INTEL_PT_STATE_NO_IP
perf intel-pt: Fix missing stack clear
perf intel-pt: Improve sample timestamp
perf intel-pt: Move decoder error setting into one condition
NFC: Add sockaddr length checks before accessing sa_family in bind handlers
nfc: Fix the sockaddr length sanitization in llcp_sock_connect
nfc: Ensure presence of required attributes in the activate_target handler
NFC: nfcmrvl: fix firmware-management initialisation
NFC: nfcmrvl: use nfc-device for firmware download
NFC: nfcmrvl: do not use device-managed resources
NFC: nfcmrvl_uart: add missing tty-device sanity check
NFC: fix broken device allocation
ath9k: fix an invalid pointer dereference in ath9k_rng_stop()
ath9k: fix tx99 bus error
ath9k: fix tx99 use after free
thermal: cpu_cooling: Avoid accessing potentially freed structures
thermal: max77620: fix device-node reference imbalance
s5p-jpeg: don't return a random width/height
dm mpath: cleanup -Wbool-operation warning in choose_pgpath()
ir-core: fix gcc-7 warning on bool arithmetic
disable new gcc-7.1.1 warnings for now
Use %zu to print resid (size_t).
ANDROID: keychord: Fix a slab out-of-bounds read.
UPSTREAM: af_key: Fix sadb_x_ipsecrequest parsing
ANDROID: lowmemorykiller: Add tgid to kill message
Revert "ANDROID: proc: smaps: Allow smaps access for CAP_SYS_RESOURCE"
4.9.39
kvm: vmx: allow host to access guest MSR_IA32_BNDCFGS
kvm: vmx: Check value written to IA32_BNDCFGS
kvm: x86: Guest BNDCFGS requires guest MPX support
kvm: vmx: Do not disable intercepts for BNDCFGS
tracing: Use SOFTIRQ_OFFSET for softirq dectection for more accurate results
PM / QoS: return -EINVAL for bogus strings
PM / wakeirq: Convert to SRCU
sched/topology: Fix overlapping sched_group_mask
sched/topology: Optimize build_group_mask()
sched/topology: Fix building of overlapping sched-groups
sched/fair, cpumask: Export for_each_cpu_wrap()
Revert "sched/core: Optimize SCHED_SMT"
crypto: caam - fix signals handling
crypto: caam - properly set IV after {en,de}crypt
crypto: sha1-ssse3 - Disable avx2
crypto: atmel - only treat EBUSY as transient if backlog
crypto: talitos - Extend max key length for SHA384/512-HMAC and AEAD
mm: fix overflow check in expand_upwards()
selftests/capabilities: Fix the test_execve test
mnt: Make propagate_umount less slow for overlapping mount propagation trees
mnt: In propgate_umount handle visiting mounts in any order
mnt: In umount propagation reparent in a separate pass
nvmem: core: fix leaks on registration errors
rcu: Add memory barriers for NOCB leader wakeup
vt: fix unchecked __put_user() in tioclinux ioctls
ARM64: dts: marvell: armada37xx: Fix timer interrupt specifiers
exec: Limit arg stack to at most 75% of _STK_LIM
s390: reduce ELF_ET_DYN_BASE
powerpc: move ELF_ET_DYN_BASE to 4GB / 4MB
arm64: move ELF_ET_DYN_BASE to 4GB / 4MB
arm: move ELF_ET_DYN_BASE to 4MB
binfmt_elf: use ELF_ET_DYN_BASE only for PIE
checkpatch: silence perl 5.26.0 unescaped left brace warnings
fs/dcache.c: fix spin lockup issue on nlru->lock
mm/list_lru.c: fix list_lru_count_node() to be race free
kernel/extable.c: mark core_kernel_text notrace
thp, mm: fix crash due race in MADV_FREE handling
tools/lib/lockdep: Reduce MAX_LOCK_DEPTH to avoid overflowing lock_chain/: Depth
parisc/mm: Ensure IRQs are off in switch_mm()
parisc: DMA API: return error instead of BUG_ON for dma ops on non dma devs
parisc: use compat_sys_keyctl()
parisc: Report SIGSEGV instead of SIGBUS when running out of stack
irqchip/gic-v3: Fix out-of-bound access in gic_set_affinity
cfg80211: Check if NAN service ID is of expected size
cfg80211: Check if PMKID attribute is of expected size
cfg80211: Validate frequencies nested in NL80211_ATTR_SCAN_FREQUENCIES
cfg80211: Define nla_policy for NL80211_ATTR_LOCAL_MESH_POWER_MODE
sfc: don't read beyond unicast address list
brcmfmac: Fix glom_skb leak in brcmf_sdiod_recv_chain
brcmfmac: Fix a memory leak in error handling path in 'brcmf_cfg80211_attach'
brcmfmac: fix possible buffer overflow in brcmf_cfg80211_mgmt_tx()
rds: tcp: use sock_create_lite() to create the accept socket
vrf: fix bug_on triggered by rx when destroying a vrf
net: ipv6: Compare lwstate in detecting duplicate nexthops
net: core: Fix slab-out-of-bounds in netdev_stats_to_stats64
vxlan: fix hlist corruption
ipv6: dad: don't remove dynamic addresses if link is down
net/mlx5e: Fix TX carrier errors report in get stats ndo
liquidio: fix bug in soft reset failure detection
net/mlx5: Cancel delayed recovery work when unloading the driver
net: handle NAPI_GRO_FREE_STOLEN_HEAD case also in napi_frags_finish()
bpf: prevent leaking pointer via xadd on unpriviledged
rocker: move dereference before free
bridge: mdb: fix leak on complete_info ptr on fail path
net: prevent sign extension in dev_get_stats()
tcp: reset sk_rx_dst in tcp_disconnect()
net: dp83640: Avoid NULL pointer dereference.
ipv6: avoid unregistering inet6_dev for loopback
net/phy: micrel: configure intterupts after autoneg workaround
net: sched: Fix one possible panic when no destroy callback
net_sched: fix error recovery at qdisc creation
xen-netfront: Rework the fix for Rx stall during OOM and network stress
ANDROID: android-verity: mark dev as rw for linear target
ANDROID: sdcardfs: Remove unnecessary lock
ANDROID: binder: don't check prio permissions on restore.
Add BINDER_GET_NODE_DEBUG_INFO ioctl
ANDROID: binder: add RT inheritance flag to node.
ANDROID: binder: improve priority inheritance.
ANDROID: binder: add min sched_policy to node.
ANDROID: binder: add support for RT prio inheritance.
ANDROID: binder: push new transactions to waiting threads.
ANDROID: binder: remove proc waitqueue
Conflicts:
drivers/staging/android/lowmemorykiller.c
Change-Id: I2954e47d7e4fc74cf9bb5033fc151537958b78af
Signed-off-by: Kyle Yan <kyan@codeaurora.org>
@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 9
-SUBLEVEL = 38
+SUBLEVEL = 40
 EXTRAVERSION =
 NAME = Roaring Lionus
 
@@ -633,6 +633,9 @@ include arch/$(SRCARCH)/Makefile
 
 KBUILD_CFLAGS += $(call cc-option,-fno-delete-null-pointer-checks,)
 KBUILD_CFLAGS += $(call cc-disable-warning,frame-address,)
+KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation)
+KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow)
+KBUILD_CFLAGS += $(call cc-disable-warning, int-in-bool-context)
 
 ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
 KBUILD_CFLAGS += $(call cc-option,-ffunction-sections,)
@@ -112,12 +112,8 @@ int dump_task_regs(struct task_struct *t, elf_gregset_t *elfregs);
 #define CORE_DUMP_USE_REGSET
 #define ELF_EXEC_PAGESIZE	4096
 
-/* This is the location that an ET_DYN program is loaded if exec'ed.  Typical
-   use of this is to invoke "./ld.so someprog" to test out a new version of
-   the loader.  We need to make sure that it is out of the way of the program
-   that it will "exec", and that there is sufficient room for the brk.  */
-
-#define ELF_ET_DYN_BASE	(TASK_SIZE / 3 * 2)
+/* This is the base location for PIE (ET_DYN with INTERP) loads. */
+#define ELF_ET_DYN_BASE		0x400000UL
 
 /* When the program starts, a1 contains a pointer to a function to be
    registered with atexit, as per the SVR4 ABI.  A value of 0 means we
@@ -75,14 +75,10 @@
 
 	timer {
 		compatible = "arm,armv8-timer";
-		interrupts = <GIC_PPI 13
-			(GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>,
-			<GIC_PPI 14
-			(GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>,
-			<GIC_PPI 11
-			(GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>,
-			<GIC_PPI 10
-			(GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>;
+		interrupts = <GIC_PPI 13 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_PPI 14 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_PPI 11 IRQ_TYPE_LEVEL_HIGH>,
+			     <GIC_PPI 10 IRQ_TYPE_LEVEL_HIGH>;
 	};
 
 	soc {
@@ -113,12 +113,11 @@
 #define ELF_EXEC_PAGESIZE	PAGE_SIZE
 
 /*
- * This is the location that an ET_DYN program is loaded if exec'ed.  Typical
- * use of this is to invoke "./ld.so someprog" to test out a new version of
- * the loader.  We need to make sure that it is out of the way of the program
- * that it will "exec", and that there is sufficient room for the brk.
+ * This is the base location for PIE (ET_DYN with INTERP) loads. On
+ * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * space open for things that want to use the area for 32-bit pointers.
  */
-#define ELF_ET_DYN_BASE	(2 * TASK_SIZE_64 / 3)
+#define ELF_ET_DYN_BASE		0x100000000UL
 
 #ifndef __ASSEMBLY__
 
@@ -169,7 +168,8 @@ extern int arch_setup_additional_pages(struct linux_binprm *bprm,
 
 #ifdef CONFIG_COMPAT
 
-#define COMPAT_ELF_ET_DYN_BASE		(2 * TASK_SIZE_32 / 3)
+/* PIE load location for compat arm. Must match ARM ELF_ET_DYN_BASE. */
+#define COMPAT_ELF_ET_DYN_BASE		0x000400000UL
 
 /* AArch32 registers. */
 #define COMPAT_ELF_NGREG		18
@@ -74,10 +74,7 @@ static inline int compute_return_epc(struct pt_regs *regs)
 			return __microMIPS_compute_return_epc(regs);
 		if (cpu_has_mips16)
 			return __MIPS16e_compute_return_epc(regs);
-		return regs->cp0_epc;
-	}
-
-	if (!delay_slot(regs)) {
+	} else if (!delay_slot(regs)) {
 		regs->cp0_epc += 4;
 		return 0;
 	}
@@ -399,7 +399,7 @@ int __MIPS16e_compute_return_epc(struct pt_regs *regs)
  *
  * @regs:	Pointer to pt_regs
  * @insn:	branch instruction to decode
- * @returns:	-EFAULT on error and forces SIGBUS, and on success
+ * @returns:	-EFAULT on error and forces SIGILL, and on success
  *		returns 0 or BRANCH_LIKELY_TAKEN as appropriate after
  *		evaluating the branch.
  *
@@ -431,7 +431,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 		/* Fall through */
 	case jr_op:
 		if (NO_R6EMU && insn.r_format.func == jr_op)
-			goto sigill_r6;
+			goto sigill_r2r6;
 		regs->cp0_epc = regs->regs[insn.r_format.rs];
 		break;
 	}
@@ -446,7 +446,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 		switch (insn.i_format.rt) {
 		case bltzl_op:
 			if (NO_R6EMU)
-				goto sigill_r6;
+				goto sigill_r2r6;
 		case bltz_op:
 			if ((long)regs->regs[insn.i_format.rs] < 0) {
 				epc = epc + 4 + (insn.i_format.simmediate << 2);
@@ -459,7 +459,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 
 		case bgezl_op:
 			if (NO_R6EMU)
-				goto sigill_r6;
+				goto sigill_r2r6;
 		case bgez_op:
 			if ((long)regs->regs[insn.i_format.rs] >= 0) {
 				epc = epc + 4 + (insn.i_format.simmediate << 2);
@@ -473,10 +473,8 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 		case bltzal_op:
 		case bltzall_op:
 			if (NO_R6EMU && (insn.i_format.rs ||
-			    insn.i_format.rt == bltzall_op)) {
-				ret = -SIGILL;
-				break;
-			}
+			    insn.i_format.rt == bltzall_op))
+				goto sigill_r2r6;
 			regs->regs[31] = epc + 8;
 			/*
 			 * OK we are here either because we hit a NAL
@@ -507,10 +505,8 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 		case bgezal_op:
 		case bgezall_op:
 			if (NO_R6EMU && (insn.i_format.rs ||
-			    insn.i_format.rt == bgezall_op)) {
-				ret = -SIGILL;
-				break;
-			}
+			    insn.i_format.rt == bgezall_op))
+				goto sigill_r2r6;
 			regs->regs[31] = epc + 8;
 			/*
 			 * OK we are here either because we hit a BAL
@@ -556,6 +552,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 	/*
 	 * These are unconditional and in j_format.
 	 */
+	case jalx_op:
 	case jal_op:
 		regs->regs[31] = regs->cp0_epc + 8;
 	case j_op:
@@ -573,7 +570,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 	 */
 	case beql_op:
 		if (NO_R6EMU)
-			goto sigill_r6;
+			goto sigill_r2r6;
 	case beq_op:
 		if (regs->regs[insn.i_format.rs] ==
 				regs->regs[insn.i_format.rt]) {
@@ -587,7 +584,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 
 	case bnel_op:
 		if (NO_R6EMU)
-			goto sigill_r6;
+			goto sigill_r2r6;
 	case bne_op:
 		if (regs->regs[insn.i_format.rs] !=
 				regs->regs[insn.i_format.rt]) {
@@ -601,7 +598,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 
 	case blezl_op: /* not really i_format */
 		if (!insn.i_format.rt && NO_R6EMU)
-			goto sigill_r6;
+			goto sigill_r2r6;
 	case blez_op:
 		/*
 		 * Compact branches for R6 for the
@@ -636,7 +633,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 
 	case bgtzl_op:
 		if (!insn.i_format.rt && NO_R6EMU)
-			goto sigill_r6;
+			goto sigill_r2r6;
 	case bgtz_op:
 		/*
 		 * Compact branches for R6 for the
@@ -774,35 +771,27 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 #else
 	case bc6_op:
 		/* Only valid for MIPS R6 */
-		if (!cpu_has_mips_r6) {
-			ret = -SIGILL;
-			break;
-		}
+		if (!cpu_has_mips_r6)
+			goto sigill_r6;
 		regs->cp0_epc += 8;
 		break;
 	case balc6_op:
-		if (!cpu_has_mips_r6) {
-			ret = -SIGILL;
-			break;
-		}
+		if (!cpu_has_mips_r6)
+			goto sigill_r6;
 		/* Compact branch: BALC */
 		regs->regs[31] = epc + 4;
 		epc += 4 + (insn.i_format.simmediate << 2);
 		regs->cp0_epc = epc;
 		break;
 	case pop66_op:
-		if (!cpu_has_mips_r6) {
-			ret = -SIGILL;
-			break;
-		}
+		if (!cpu_has_mips_r6)
+			goto sigill_r6;
 		/* Compact branch: BEQZC || JIC */
 		regs->cp0_epc += 8;
 		break;
 	case pop76_op:
-		if (!cpu_has_mips_r6) {
-			ret = -SIGILL;
-			break;
-		}
+		if (!cpu_has_mips_r6)
+			goto sigill_r6;
 		/* Compact branch: BNEZC || JIALC */
 		if (!insn.i_format.rs) {
 			/* JIALC: set $31/ra */
@@ -814,10 +803,8 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 	case pop10_op:
 	case pop30_op:
 		/* Only valid for MIPS R6 */
-		if (!cpu_has_mips_r6) {
-			ret = -SIGILL;
-			break;
-		}
+		if (!cpu_has_mips_r6)
+			goto sigill_r6;
 		/*
 		 * Compact branches:
 		 * bovc, beqc, beqzalc, bnvc, bnec, bnezlac
@@ -831,11 +818,17 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 	return ret;
 
 sigill_dsp:
-	printk("%s: DSP branch but not DSP ASE - sending SIGBUS.\n", current->comm);
-	force_sig(SIGBUS, current);
+	pr_info("%s: DSP branch but not DSP ASE - sending SIGILL.\n",
+		current->comm);
+	force_sig(SIGILL, current);
+	return -EFAULT;
+sigill_r2r6:
+	pr_info("%s: R2 branch but r2-to-r6 emulator is not present - sending SIGILL.\n",
+		current->comm);
+	force_sig(SIGILL, current);
 	return -EFAULT;
 sigill_r6:
-	pr_info("%s: R2 branch but r2-to-r6 emulator is not preset - sending SIGILL.\n",
+	pr_info("%s: R6 branch but no MIPSr6 ISA support - sending SIGILL.\n",
 		current->comm);
 	force_sig(SIGILL, current);
 	return -EFAULT;
@@ -83,7 +83,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 	}
 
 	seq_printf(m, "isa\t\t\t:");
-	if (cpu_has_mips_r1)
+	if (cpu_has_mips_1)
 		seq_printf(m, " mips1");
 	if (cpu_has_mips_2)
 		seq_printf(m, "%s", " mips2");
@@ -924,7 +924,7 @@ asmlinkage void syscall_trace_leave(struct pt_regs *regs)
 	audit_syscall_exit(regs);
 
 	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
-		trace_sys_exit(regs, regs->regs[2]);
+		trace_sys_exit(regs, regs_return_value(regs));
 
 	if (test_thread_flag(TIF_SYSCALL_TRACE))
 		tracehook_report_syscall_exit(regs, 0);
@@ -371,7 +371,7 @@ EXPORT(sys_call_table)
 	PTR	sys_writev
 	PTR	sys_cacheflush
 	PTR	sys_cachectl
-	PTR	sys_sysmips
+	PTR	__sys_sysmips
 	PTR	sys_ni_syscall			/* 4150 */
 	PTR	sys_getsid
 	PTR	sys_fdatasync
@@ -311,7 +311,7 @@ EXPORT(sys_call_table)
 	PTR	sys_sched_getaffinity
 	PTR	sys_cacheflush
 	PTR	sys_cachectl
-	PTR	sys_sysmips
+	PTR	__sys_sysmips
 	PTR	sys_io_setup			/* 5200 */
 	PTR	sys_io_destroy
 	PTR	sys_io_getevents
@@ -302,7 +302,7 @@ EXPORT(sysn32_call_table)
 	PTR	compat_sys_sched_getaffinity
 	PTR	sys_cacheflush
 	PTR	sys_cachectl
-	PTR	sys_sysmips
+	PTR	__sys_sysmips
 	PTR	compat_sys_io_setup		/* 6200 */
 	PTR	sys_io_destroy
 	PTR	compat_sys_io_getevents
@@ -371,7 +371,7 @@ EXPORT(sys32_call_table)
 	PTR	compat_sys_writev
 	PTR	sys_cacheflush
 	PTR	sys_cachectl
-	PTR	sys_sysmips
+	PTR	__sys_sysmips
 	PTR	sys_ni_syscall			/* 4150 */
 	PTR	sys_getsid
 	PTR	sys_fdatasync
@@ -28,6 +28,7 @@
 #include <linux/elf.h>
 
 #include <asm/asm.h>
+#include <asm/asm-eva.h>
 #include <asm/branch.h>
 #include <asm/cachectl.h>
 #include <asm/cacheflush.h>
@@ -138,10 +139,12 @@ static inline int mips_atomic_set(unsigned long addr, unsigned long new)
 		__asm__ __volatile__ (
 		"	.set	"MIPS_ISA_ARCH_LEVEL"		\n"
 		"	li	%[err], 0			\n"
-		"1:	ll	%[old], (%[addr])		\n"
+		"1:						\n"
+		user_ll("%[old]", "(%[addr])")
 		"	move	%[tmp], %[new]			\n"
-		"2:	sc	%[tmp], (%[addr])		\n"
-		"	bnez	%[tmp], 4f			\n"
+		"2:						\n"
+		user_sc("%[tmp]", "(%[addr])")
+		"	beqz	%[tmp], 4f			\n"
 		"3:						\n"
 		"	.insn					\n"
 		"	.subsection 2				\n"
@@ -199,6 +202,12 @@ static inline int mips_atomic_set(unsigned long addr, unsigned long new)
 	unreachable();
 }
 
+/*
+ * mips_atomic_set() normally returns directly via syscall_exit potentially
+ * clobbering static registers, so be sure to preserve them.
+ */
+save_static_function(sys_sysmips);
+
 SYSCALL_DEFINE3(sysmips, long, cmd, long, arg1, long, arg2)
 {
 	switch (cmd) {
@@ -2522,6 +2522,35 @@ dcopuop:
 	return 0;
 }
 
+/*
+ * Emulate FPU instructions.
+ *
+ * If we use FPU hardware, then we have been typically called to handle
+ * an unimplemented operation, such as where an operand is a NaN or
+ * denormalized.  In that case exit the emulation loop after a single
+ * iteration so as to let hardware execute any subsequent instructions.
+ *
+ * If we have no FPU hardware or it has been disabled, then continue
+ * emulating floating-point instructions until one of these conditions
+ * has occurred:
+ *
+ * - a non-FPU instruction has been encountered,
+ *
+ * - an attempt to emulate has ended with a signal,
+ *
+ * - the ISA mode has been switched.
+ *
+ * We need to terminate the emulation loop if we got switched to the
+ * MIPS16 mode, whether supported or not, so that we do not attempt
+ * to emulate a MIPS16 instruction as a regular MIPS FPU instruction.
+ * Similarly if we got switched to the microMIPS mode and only the
+ * regular MIPS mode is supported, so that we do not attempt to emulate
+ * a microMIPS instruction as a regular MIPS FPU instruction.  Or if
+ * we got switched to the regular MIPS mode and only the microMIPS mode
+ * is supported, so that we do not attempt to emulate a regular MIPS
+ * instruction that should cause an Address Error exception instead.
+ * For simplicity we always terminate upon an ISA mode switch.
+ */
 int fpu_emulator_cop1Handler(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
 	int has_fpu, void *__user *fault_addr)
 {
@@ -2607,6 +2636,15 @@ int fpu_emulator_cop1Handler(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
 				break;
 			if (sig)
 				break;
+			/*
+			 * We have to check for the ISA bit explicitly here,
+			 * because `get_isa16_mode' may return 0 if support
+			 * for code compression has been globally disabled,
+			 * or otherwise we may produce the wrong signal or
+			 * even proceed successfully where we must not.
+			 */
+			if ((xcp->cp0_epc ^ prevepc) & 0x1)
+				break;
 
 			cond_resched();
 		} while (xcp->cp0_epc > prevepc);
@@ -20,6 +20,8 @@
 **      flush/purge and allocate "regular" cacheable pages for everything.
 */
 
+#define DMA_ERROR_CODE	(~(dma_addr_t)0)
+
 #ifdef CONFIG_PA11
 extern struct dma_map_ops pcxl_dma_ops;
 extern struct dma_map_ops pcx_dma_ops;
@@ -54,12 +56,13 @@ parisc_walk_tree(struct device *dev)
 			break;
 		}
 	}
-	BUG_ON(!dev->platform_data);
 	return dev->platform_data;
 }
 
-#define GET_IOC(dev) (HBA_DATA(parisc_walk_tree(dev))->iommu)
+#define GET_IOC(dev) ({					\
+	void *__pdata = parisc_walk_tree(dev);		\
+	__pdata ? HBA_DATA(__pdata)->iommu : NULL;	\
+})
 
 #ifdef CONFIG_IOMMU_CCIO
 struct parisc_device;
@@ -49,15 +49,26 @@ static inline void load_context(mm_context_t context)
 	mtctl(__space_to_prot(context), 8);
 }
 
-static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, struct task_struct *tsk)
+static inline void switch_mm_irqs_off(struct mm_struct *prev,
+		struct mm_struct *next, struct task_struct *tsk)
 {
-
 	if (prev != next) {
 		mtctl(__pa(next->pgd), 25);
 		load_context(next->context);
 	}
 }
 
+static inline void switch_mm(struct mm_struct *prev,
+		struct mm_struct *next, struct task_struct *tsk)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	switch_mm_irqs_off(prev, next, tsk);
+	local_irq_restore(flags);
+}
+#define switch_mm_irqs_off switch_mm_irqs_off
+
 #define deactivate_mm(tsk,mm)	do { } while (0)
 
 static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
@@ -361,7 +361,7 @@
 	ENTRY_SAME(ni_syscall)	/* 263: reserved for vserver */
 	ENTRY_SAME(add_key)
 	ENTRY_SAME(request_key)		/* 265 */
-	ENTRY_SAME(keyctl)
+	ENTRY_COMP(keyctl)
 	ENTRY_SAME(ioprio_set)
 	ENTRY_SAME(ioprio_get)
 	ENTRY_SAME(inotify_init)
@@ -366,7 +366,7 @@ bad_area:
 		case 15:	/* Data TLB miss fault/Data page fault */
 			/* send SIGSEGV when outside of vma */
 			if (!vma ||
-			    address < vma->vm_start || address > vma->vm_end) {
+			    address < vma->vm_start || address >= vma->vm_end) {
 				si.si_signo = SIGSEGV;
 				si.si_code = SEGV_MAPERR;
 				break;
@@ -560,7 +560,7 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
  * Atomically increments @v by 1, so long as @v is non-zero.
  * Returns non-zero if @v was non-zero, and zero otherwise.
  */
-static __inline__ long atomic64_inc_not_zero(atomic64_t *v)
+static __inline__ int atomic64_inc_not_zero(atomic64_t *v)
 {
 	long t1, t2;
 
@@ -579,7 +579,7 @@ static __inline__ long atomic64_inc_not_zero(atomic64_t *v)
 	: "r" (&v->counter)
 	: "cc", "xer", "memory");
 
-	return t1;
+	return t1 != 0;
 }
 
 #endif /* __powerpc64__ */
@@ -23,12 +23,13 @@
 #define CORE_DUMP_USE_REGSET
 #define ELF_EXEC_PAGESIZE	PAGE_SIZE
 
-/* This is the location that an ET_DYN program is loaded if exec'ed.  Typical
-   use of this is to invoke "./ld.so someprog" to test out a new version of
-   the loader.  We need to make sure that it is out of the way of the program
-   that it will "exec", and that there is sufficient room for the brk.  */
-
-#define ELF_ET_DYN_BASE	0x20000000
+/*
+ * This is the base location for PIE (ET_DYN with INTERP) loads. On
+ * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * space open for things that want to use the area for 32-bit pointers.
+ */
+#define ELF_ET_DYN_BASE	(is_32bit_task() ? 0x000400000UL : \
+				   0x100000000UL)
 
 #define ELF_CORE_EFLAGS	(is_elf2_task() ? 2 : 0)
 
@@ -1283,7 +1283,7 @@ static inline void msr_check_and_clear(unsigned long bits)
 			"	.llong 0\n"		\
 			".previous"			\
 			: "=r" (rval) \
-			: "i" (CPU_FTR_CELL_TB_BUG), "i" (SPRN_TBRL));	\
+			: "i" (CPU_FTR_CELL_TB_BUG), "i" (SPRN_TBRL) : "cr0"); \
 			rval;})
 #else
 #define mftb()		({unsigned long rval;	\
@@ -687,8 +687,10 @@ int __kprobes analyse_instr(struct instruction_op *op, struct pt_regs *regs,
 	case 19:
 		switch ((instr >> 1) & 0x3ff) {
 		case 0:		/* mcrf */
-			rd = (instr >> 21) & 0x1c;
-			ra = (instr >> 16) & 0x1c;
+			rd = 7 - ((instr >> 23) & 0x7);
+			ra = 7 - ((instr >> 18) & 0x7);
+			rd *= 4;
+			ra *= 4;
 			val = (regs->ccr >> ra) & 0xf;
 			regs->ccr = (regs->ccr & ~(0xfUL << rd)) | (val << rd);
 			goto instr_done;
@@ -968,6 +970,19 @@ int __kprobes analyse_instr(struct instruction_op *op, struct pt_regs *regs,
 #endif
 
 		case 19:	/* mfcr */
+			if ((instr >> 20) & 1) {
+				imm = 0xf0000000UL;
+				for (sh = 0; sh < 8; ++sh) {
+					if (instr & (0x80000 >> sh)) {
+						regs->gpr[rd] = regs->ccr & imm;
+						break;
+					}
+					imm >>= 4;
+				}
+
+				goto instr_done;
+			}
 			regs->gpr[rd] = regs->ccr;
+			regs->gpr[rd] &= 0xffffffffUL;
 			goto instr_done;
@@ -167,9 +167,15 @@ void destroy_context(struct mm_struct *mm)
 	mm->context.cop_lockp = NULL;
 #endif /* CONFIG_PPC_ICSWX */

-	if (radix_enabled())
-		process_tb[mm->context.id].prtb1 = 0;
-	else
+	if (radix_enabled()) {
+		/*
+		 * Radix doesn't have a valid bit in the process table
+		 * entries. However we know that at least P9 implementation
+		 * will avoid caching an entry with an invalid RTS field,
+		 * and 0 is invalid. So this will do.
+		 */
+		process_tb[mm->context.id].prtb0 = 0;
+	} else
 		subpage_prot_free(mm);
 	destroy_pagetable_page(mm);
 	__destroy_context(mm->context.id);
@@ -279,7 +279,7 @@ static long pSeries_lpar_hpte_updatepp(unsigned long slot,
 				       int ssize, unsigned long inv_flags)
 {
 	unsigned long lpar_rc;
-	unsigned long flags = (newpp & 7) | H_AVPN;
+	unsigned long flags;
 	unsigned long want_v;

 	want_v = hpte_encode_avpn(vpn, psize, ssize);
@@ -287,6 +287,11 @@ static long pSeries_lpar_hpte_updatepp(unsigned long slot,
 	pr_devel("    update: avpnv=%016lx, hash=%016lx, f=%lx, psize: %d ...",
 		 want_v, slot, flags, psize);

+	flags = (newpp & 7) | H_AVPN;
+	if (mmu_has_feature(MMU_FTR_KERNEL_RO))
+		/* Move pp0 into bit 8 (IBM 55) */
+		flags |= (newpp & HPTE_R_PP0) >> 55;
+
 	lpar_rc = plpar_pte_protect(flags, slot, want_v);

 	if (lpar_rc == H_NOT_FOUND) {
@@ -358,6 +363,10 @@ static void pSeries_lpar_hpte_updateboltedpp(unsigned long newpp,
 	BUG_ON(slot == -1);

 	flags = newpp & 7;
+	if (mmu_has_feature(MMU_FTR_KERNEL_RO))
+		/* Move pp0 into bit 8 (IBM 55) */
+		flags |= (newpp & HPTE_R_PP0) >> 55;
+
 	lpar_rc = plpar_pte_protect(flags, slot, 0);

 	BUG_ON(lpar_rc != H_SUCCESS);
@@ -158,14 +158,13 @@ extern unsigned int vdso_enabled;
 #define CORE_DUMP_USE_REGSET
 #define ELF_EXEC_PAGESIZE	4096

-/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
-   use of this is to invoke "./ld.so someprog" to test out a new version of
-   the loader. We need to make sure that it is out of the way of the program
-   that it will "exec", and that there is sufficient room for the brk. 64-bit
-   tasks are aligned to 4GB. */
-#define ELF_ET_DYN_BASE		(is_compat_task() ? \
-				(STACK_TOP / 3 * 2) : \
-				(STACK_TOP / 3 * 2) & ~((1UL << 32) - 1))
+/*
+ * This is the base location for PIE (ET_DYN with INTERP) loads. On
+ * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * space open for things that want to use the area for 32-bit pointers.
+ */
+#define ELF_ET_DYN_BASE		(is_compat_task() ? 0x000400000UL : \
+					0x100000000UL)

 /* This yields a mask that user programs can use to figure out what
    instruction set this CPU supports. */
@@ -64,6 +64,12 @@ static inline void syscall_get_arguments(struct task_struct *task,
 {
 	unsigned long mask = -1UL;

+	/*
+	 * No arguments for this syscall, there's nothing to do.
+	 */
+	if (!n)
+		return;
+
 	BUG_ON(i + n > 6);
 #ifdef CONFIG_COMPAT
 	if (test_tsk_thread_flag(task, TIF_31BIT))
@@ -201,7 +201,7 @@ asmlinkage void sha1_transform_avx2(u32 *digest, const char *data,

 static bool avx2_usable(void)
 {
-	if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2)
+	if (false && avx_usable() && boot_cpu_has(X86_FEATURE_AVX2)
 		&& boot_cpu_has(X86_FEATURE_BMI1)
 		&& boot_cpu_has(X86_FEATURE_BMI2))
 		return true;
@@ -245,12 +245,13 @@ extern int force_personality32;
 #define CORE_DUMP_USE_REGSET
 #define ELF_EXEC_PAGESIZE	4096

-/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
-   use of this is to invoke "./ld.so someprog" to test out a new version of
-   the loader. We need to make sure that it is out of the way of the program
-   that it will "exec", and that there is sufficient room for the brk. */
-
-#define ELF_ET_DYN_BASE	(TASK_SIZE / 3 * 2)
+/*
+ * This is the base location for PIE (ET_DYN with INTERP) loads. On
+ * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * space open for things that want to use the area for 32-bit pointers.
+ */
+#define ELF_ET_DYN_BASE		(mmap_is_ia32() ? 0x000400000UL : \
+						  0x100000000UL)

 /* This yields a mask that user programs can use to figure out what
    instruction set this CPU supports.  This could be done in user space,
@@ -405,6 +405,8 @@
 #define MSR_IA32_TSC_ADJUST		0x0000003b
 #define MSR_IA32_BNDCFGS		0x00000d90

+#define MSR_IA32_BNDCFGS_RSVD		0x00000ffc
+
 #define MSR_IA32_XSS			0x00000da0

 #define FEATURE_CONTROL_LOCKED		(1<<0)
@@ -43,6 +43,7 @@

 #include <asm/page.h>
 #include <asm/pgtable.h>
+#include <asm/smap.h>

 #include <xen/interface/xen.h>
 #include <xen/interface/sched.h>
@@ -214,10 +215,12 @@ privcmd_call(unsigned call,
 	__HYPERCALL_DECLS;
 	__HYPERCALL_5ARG(a1, a2, a3, a4, a5);

+	stac();
 	asm volatile("call *%[call]"
 		     : __HYPERCALL_5PARAM
 		     : [call] "a" (&hypercall_page[call])
 		     : __HYPERCALL_CLOBBER5);
+	clac();

 	return (long)__res;
 }
@@ -337,6 +337,14 @@ static void __init mp_override_legacy_irq(u8 bus_irq, u8 polarity, u8 trigger,
 	int pin;
 	struct mpc_intsrc mp_irq;

+	/*
+	 * Check bus_irq boundary.
+	 */
+	if (bus_irq >= NR_IRQS_LEGACY) {
+		pr_warn("Invalid bus_irq %u for legacy override\n", bus_irq);
+		return;
+	}
+
 	/*
 	 * Convert 'gsi' to 'ioapic.pin'.
 	 */
@@ -2116,7 +2116,7 @@ static inline void __init check_timer(void)
 			int idx;
 			idx = find_irq_entry(apic1, pin1, mp_INT);
 			if (idx != -1 && irq_trigger(idx))
-				unmask_ioapic_irq(irq_get_chip_data(0));
+				unmask_ioapic_irq(irq_get_irq_data(0));
 		}
 		irq_domain_deactivate_irq(irq_data);
 		irq_domain_activate_irq(irq_data);
@@ -144,6 +144,14 @@ static inline bool guest_cpuid_has_rtm(struct kvm_vcpu *vcpu)
 	return best && (best->ebx & bit(X86_FEATURE_RTM));
 }

+static inline bool guest_cpuid_has_mpx(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpuid_entry2 *best;
+
+	best = kvm_find_cpuid_entry(vcpu, 7, 0);
+	return best && (best->ebx & bit(X86_FEATURE_MPX));
+}
+
 static inline bool guest_cpuid_has_rdtscp(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpuid_entry2 *best;
@@ -2987,7 +2987,8 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		msr_info->data = vmcs_readl(GUEST_SYSENTER_ESP);
 		break;
 	case MSR_IA32_BNDCFGS:
-		if (!kvm_mpx_supported())
+		if (!kvm_mpx_supported() ||
+		    (!msr_info->host_initiated && !guest_cpuid_has_mpx(vcpu)))
 			return 1;
 		msr_info->data = vmcs_read64(GUEST_BNDCFGS);
 		break;
@@ -3069,7 +3070,11 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		vmcs_writel(GUEST_SYSENTER_ESP, data);
 		break;
 	case MSR_IA32_BNDCFGS:
-		if (!kvm_mpx_supported())
+		if (!kvm_mpx_supported() ||
+		    (!msr_info->host_initiated && !guest_cpuid_has_mpx(vcpu)))
 			return 1;
+		if (is_noncanonical_address(data & PAGE_MASK) ||
+		    (data & MSR_IA32_BNDCFGS_RSVD))
+			return 1;
 		vmcs_write64(GUEST_BNDCFGS, data);
 		break;
@@ -6474,7 +6479,6 @@ static __init int hardware_setup(void)
 	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_CS, false);
 	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_ESP, false);
 	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_EIP, false);
-	vmx_disable_intercept_for_msr(MSR_IA32_BNDCFGS, true);

 	memcpy(vmx_msr_bitmap_legacy_x2apic,
 			vmx_msr_bitmap_legacy, PAGE_SIZE);
@@ -571,3 +571,35 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2fc0, pci_invalid_bar);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6f60, pci_invalid_bar);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fa0, pci_invalid_bar);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fc0, pci_invalid_bar);
+
+/*
+ * Apple MacBook Pro: Avoid [mem 0x7fa00000-0x7fbfffff]
+ *
+ * Using the [mem 0x7fa00000-0x7fbfffff] region, e.g., by assigning it to
+ * the 00:1c.0 Root Port, causes a conflict with [io 0x1804], which is used
+ * for soft poweroff and suspend-to-RAM.
+ *
+ * As far as we know, this is related to the address space, not to the Root
+ * Port itself.  Attaching the quirk to the Root Port is a convenience, but
+ * it could probably also be a standalone DMI quirk.
+ *
+ * https://bugzilla.kernel.org/show_bug.cgi?id=103211
+ */
+static void quirk_apple_mbp_poweroff(struct pci_dev *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct resource *res;
+
+	if ((!dmi_match(DMI_PRODUCT_NAME, "MacBookPro11,4") &&
+	     !dmi_match(DMI_PRODUCT_NAME, "MacBookPro11,5")) ||
+	    pdev->bus->number != 0 || pdev->devfn != PCI_DEVFN(0x1c, 0))
+		return;
+
+	res = request_mem_region(0x7fa00000, 0x200000,
+				 "MacBook Pro poweroff workaround");
+	if (res)
+		dev_info(dev, "claimed %s %pR\n", res->name, res);
+	else
+		dev_info(dev, "can't work around MacBook Pro poweroff issue\n");
+}
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8c10, quirk_apple_mbp_poweroff);
@@ -147,7 +147,7 @@ static unsigned int ec_storm_threshold  __read_mostly = 8;
 module_param(ec_storm_threshold, uint, 0644);
 MODULE_PARM_DESC(ec_storm_threshold, "Maxim false GPE numbers not considered as GPE storm");

-static bool ec_freeze_events __read_mostly = true;
+static bool ec_freeze_events __read_mostly = false;
 module_param(ec_freeze_events, bool, 0644);
 MODULE_PARM_DESC(ec_freeze_events, "Disabling event handling during suspend/resume");
@@ -1865,24 +1865,6 @@ error:
 }

 #ifdef CONFIG_PM_SLEEP
-static int acpi_ec_suspend_noirq(struct device *dev)
-{
-	struct acpi_ec *ec =
-		acpi_driver_data(to_acpi_device(dev));
-
-	acpi_ec_enter_noirq(ec);
-	return 0;
-}
-
-static int acpi_ec_resume_noirq(struct device *dev)
-{
-	struct acpi_ec *ec =
-		acpi_driver_data(to_acpi_device(dev));
-
-	acpi_ec_leave_noirq(ec);
-	return 0;
-}
-
 static int acpi_ec_suspend(struct device *dev)
 {
 	struct acpi_ec *ec =
@@ -1904,7 +1886,6 @@ static int acpi_ec_resume(struct device *dev)
 #endif

 static const struct dev_pm_ops acpi_ec_pm = {
-	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(acpi_ec_suspend_noirq, acpi_ec_resume_noirq)
 	SET_SYSTEM_SLEEP_PM_OPS(acpi_ec_suspend, acpi_ec_resume)
 };
@@ -2945,6 +2945,8 @@ static struct acpi_driver acpi_nfit_driver = {

 static __init int nfit_init(void)
 {
+	int ret;
+
 	BUILD_BUG_ON(sizeof(struct acpi_table_nfit) != 40);
 	BUILD_BUG_ON(sizeof(struct acpi_nfit_system_address) != 56);
 	BUILD_BUG_ON(sizeof(struct acpi_nfit_memory_map) != 48);
@@ -2972,8 +2974,14 @@ static __init int nfit_init(void)
 		return -ENOMEM;

 	nfit_mce_register();
-	return acpi_bus_register_driver(&acpi_nfit_driver);
+	ret = acpi_bus_register_driver(&acpi_nfit_driver);
+	if (ret) {
+		nfit_mce_unregister();
+		destroy_workqueue(nfit_wq);
+	}
+
+	return ret;
 }

 static __exit void nfit_exit(void)
@@ -1029,8 +1029,6 @@ static struct generic_pm_domain_data *genpd_alloc_dev_data(struct device *dev,

 	spin_unlock_irq(&dev->power.lock);

-	dev_pm_domain_set(dev, &genpd->domain);
-
 	return gpd_data;

 err_free:
@@ -1044,8 +1042,6 @@ static struct generic_pm_domain_data *genpd_alloc_dev_data(struct device *dev,
 static void genpd_free_dev_data(struct device *dev,
 				struct generic_pm_domain_data *gpd_data)
 {
-	dev_pm_domain_set(dev, NULL);
-
 	spin_lock_irq(&dev->power.lock);

 	dev->power.subsys_data->domain_data = NULL;
@@ -1082,6 +1078,8 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (ret)
 		goto out;

+	dev_pm_domain_set(dev, &genpd->domain);
+
 	genpd->device_count++;
 	genpd->max_off_time_changed = true;

@@ -1143,6 +1141,8 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
 	if (genpd->detach_dev)
 		genpd->detach_dev(genpd, dev);

+	dev_pm_domain_set(dev, NULL);
+
 	list_del_init(&pdd->list_node);

 	mutex_unlock(&genpd->lock);
@@ -1244,7 +1244,7 @@ EXPORT_SYMBOL_GPL(pm_genpd_add_subdomain);
 int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 			      struct generic_pm_domain *subdomain)
 {
-	struct gpd_link *link;
+	struct gpd_link *l, *link;
 	int ret = -EINVAL;

 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain))
@@ -1260,7 +1260,7 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 		goto out;
 	}

-	list_for_each_entry(link, &genpd->master_links, master_node) {
+	list_for_each_entry_safe(link, l, &genpd->master_links, master_node) {
 		if (link->slave != subdomain)
 			continue;

@@ -1607,12 +1607,12 @@ EXPORT_SYMBOL_GPL(of_genpd_add_provider_onecell);
 */
 void of_genpd_del_provider(struct device_node *np)
 {
-	struct of_genpd_provider *cp;
+	struct of_genpd_provider *cp, *tmp;
 	struct generic_pm_domain *gpd;

 	mutex_lock(&gpd_list_lock);
 	mutex_lock(&of_genpd_mutex);
-	list_for_each_entry(cp, &of_genpd_providers, link) {
+	list_for_each_entry_safe(cp, tmp, &of_genpd_providers, link) {
 		if (cp->node == np) {
 			/*
 			 * For each PM domain associated with the
@@ -1752,14 +1752,14 @@ EXPORT_SYMBOL_GPL(of_genpd_add_subdomain);
 */
 struct generic_pm_domain *of_genpd_remove_last(struct device_node *np)
 {
-	struct generic_pm_domain *gpd, *genpd = ERR_PTR(-ENOENT);
+	struct generic_pm_domain *gpd, *tmp, *genpd = ERR_PTR(-ENOENT);
 	int ret;

 	if (IS_ERR_OR_NULL(np))
 		return ERR_PTR(-EINVAL);

 	mutex_lock(&gpd_list_lock);
-	list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
+	list_for_each_entry_safe(gpd, tmp, &gpd_list, gpd_list_node) {
 		if (gpd->provider == &np->fwnode) {
 			ret = genpd_remove(gpd);
 			genpd = ret ? ERR_PTR(ret) : gpd;
@@ -268,6 +268,8 @@ static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
 			value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
 		else if (!strcmp(buf, "any") || !strcmp(buf, "any\n"))
 			value = PM_QOS_LATENCY_ANY;
+		else
+			return -EINVAL;
 	}
 	ret = dev_pm_qos_update_user_latency_tolerance(dev, value);
 	return ret < 0 ? ret : n;
@@ -61,6 +61,8 @@ static LIST_HEAD(wakeup_sources);

 static DECLARE_WAIT_QUEUE_HEAD(wakeup_count_wait_queue);

+DEFINE_STATIC_SRCU(wakeup_srcu);
+
 static struct wakeup_source deleted_ws = {
 	.name = "deleted",
 	.lock =  __SPIN_LOCK_UNLOCKED(deleted_ws.lock),
@@ -199,7 +201,7 @@ void wakeup_source_remove(struct wakeup_source *ws)
 	spin_lock_irqsave(&events_lock, flags);
 	list_del_rcu(&ws->entry);
 	spin_unlock_irqrestore(&events_lock, flags);
-	synchronize_rcu();
+	synchronize_srcu(&wakeup_srcu);
 }
 EXPORT_SYMBOL_GPL(wakeup_source_remove);

@@ -333,12 +335,12 @@ void device_wakeup_detach_irq(struct device *dev)
 void device_wakeup_arm_wake_irqs(void)
 {
 	struct wakeup_source *ws;
+	int srcuidx;

-	rcu_read_lock();
+	srcuidx = srcu_read_lock(&wakeup_srcu);
 	list_for_each_entry_rcu(ws, &wakeup_sources, entry)
 		dev_pm_arm_wake_irq(ws->wakeirq);
-	rcu_read_unlock();
+	srcu_read_unlock(&wakeup_srcu, srcuidx);
 }

 /**
@@ -349,12 +351,12 @@ void device_wakeup_arm_wake_irqs(void)
 void device_wakeup_disarm_wake_irqs(void)
 {
 	struct wakeup_source *ws;
+	int srcuidx;

-	rcu_read_lock();
+	srcuidx = srcu_read_lock(&wakeup_srcu);
 	list_for_each_entry_rcu(ws, &wakeup_sources, entry)
 		dev_pm_disarm_wake_irq(ws->wakeirq);
-	rcu_read_unlock();
+	srcu_read_unlock(&wakeup_srcu, srcuidx);
 }

 /**
@@ -837,10 +839,10 @@ EXPORT_SYMBOL_GPL(pm_get_active_wakeup_sources);
 void pm_print_active_wakeup_sources(void)
 {
 	struct wakeup_source *ws;
-	int active = 0;
+	int srcuidx, active = 0;
 	struct wakeup_source *last_activity_ws = NULL;

-	rcu_read_lock();
+	srcuidx = srcu_read_lock(&wakeup_srcu);
 	list_for_each_entry_rcu(ws, &wakeup_sources, entry) {
 		if (ws->active) {
 			pr_info("active wakeup source: %s\n", ws->name);
@@ -856,7 +858,7 @@ void pm_print_active_wakeup_sources(void)
 	if (!active && last_activity_ws)
 		pr_info("last active wakeup source: %s\n",
 			last_activity_ws->name);
-	rcu_read_unlock();
+	srcu_read_unlock(&wakeup_srcu, srcuidx);
 }
 EXPORT_SYMBOL_GPL(pm_print_active_wakeup_sources);

@@ -983,8 +985,9 @@ void pm_wakep_autosleep_enabled(bool set)
 {
 	struct wakeup_source *ws;
 	ktime_t now = ktime_get();
+	int srcuidx;

-	rcu_read_lock();
+	srcuidx = srcu_read_lock(&wakeup_srcu);
 	list_for_each_entry_rcu(ws, &wakeup_sources, entry) {
 		spin_lock_irq(&ws->lock);
 		if (ws->autosleep_enabled != set) {
@@ -998,7 +1001,7 @@ void pm_wakep_autosleep_enabled(bool set)
 		}
 		spin_unlock_irq(&ws->lock);
 	}
-	rcu_read_unlock();
+	srcu_read_unlock(&wakeup_srcu, srcuidx);
 }
 #endif /* CONFIG_PM_AUTOSLEEP */

@@ -1059,15 +1062,16 @@ static int print_wakeup_source_stats(struct seq_file *m,
 static int wakeup_sources_stats_show(struct seq_file *m, void *unused)
 {
 	struct wakeup_source *ws;
+	int srcuidx;

 	seq_puts(m, "name\t\t\t\t\tactive_count\tevent_count\twakeup_count\t"
 		"expire_count\tactive_since\ttotal_time\tmax_time\t"
 		"last_change\tprevent_suspend_time\n");

-	rcu_read_lock();
+	srcuidx = srcu_read_lock(&wakeup_srcu);
 	list_for_each_entry_rcu(ws, &wakeup_sources, entry)
 		print_wakeup_source_stats(m, ws);
-	rcu_read_unlock();
+	srcu_read_unlock(&wakeup_srcu, srcuidx);

 	print_wakeup_source_stats(m, &deleted_ws);
@@ -3877,6 +3877,9 @@ static void smi_recv_tasklet(unsigned long val)
 	 * because the lower layer is allowed to hold locks while calling
 	 * message delivery.
 	 */
+
+	rcu_read_lock();
+
 	if (!run_to_completion)
 		spin_lock_irqsave(&intf->xmit_msgs_lock, flags);
 	if (intf->curr_msg == NULL && !intf->in_shutdown) {
@@ -3899,6 +3902,8 @@ static void smi_recv_tasklet(unsigned long val)
 	if (newmsg)
 		intf->handlers->sender(intf->send_info, newmsg);

+	rcu_read_unlock();
+
 	handle_new_recv_msgs(intf);
 }
@@ -762,6 +762,11 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
 			 result, len, data[2]);
 	} else if (data[0] != (IPMI_NETFN_APP_REQUEST | 1) << 2
 		   || data[1] != IPMI_GET_MSG_FLAGS_CMD) {
+		/*
+		 * Don't abort here, maybe it was a queued
+		 * response to a previous command.
+		 */
+		ipmi_ssif_unlock_cond(ssif_info, flags);
 		pr_warn(PFX "Invalid response getting flags: %x %x\n",
 			data[0], data[1]);
 	} else {
@@ -1000,7 +1000,9 @@ static int atmel_sha_finup(struct ahash_request *req)
 	ctx->flags |= SHA_FLAGS_FINUP;

 	err1 = atmel_sha_update(req);
-	if (err1 == -EINPROGRESS || err1 == -EBUSY)
+	if (err1 == -EINPROGRESS ||
+	    (err1 == -EBUSY && (ahash_request_flags(req) &
+				CRYPTO_TFM_REQ_MAY_BACKLOG)))
 		return err1;

 	/*
@@ -2014,10 +2014,10 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 {
 	struct ablkcipher_request *req = context;
 	struct ablkcipher_edesc *edesc;
-#ifdef DEBUG
 	struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
 	int ivsize = crypto_ablkcipher_ivsize(ablkcipher);

+#ifdef DEBUG
 	dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 #endif

@@ -2037,6 +2037,14 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 #endif

 	ablkcipher_unmap(jrdev, edesc, req);
+
+	/*
+	 * The crypto API expects us to set the IV (req->info) to the last
+	 * ciphertext block. This is used e.g. by the CTS mode.
+	 */
+	scatterwalk_map_and_copy(req->info, req->dst, req->nbytes - ivsize,
+				 ivsize, 0);
+
 	kfree(edesc);

 	ablkcipher_request_complete(req, err);
@@ -2047,10 +2055,10 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
 {
 	struct ablkcipher_request *req = context;
 	struct ablkcipher_edesc *edesc;
-#ifdef DEBUG
 	struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
 	int ivsize = crypto_ablkcipher_ivsize(ablkcipher);

+#ifdef DEBUG
 	dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 #endif

@@ -2069,6 +2077,14 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
 #endif

 	ablkcipher_unmap(jrdev, edesc, req);
+
+	/*
+	 * The crypto API expects us to set the IV (req->info) to the last
+	 * ciphertext block.
+	 */
+	scatterwalk_map_and_copy(req->info, req->src, req->nbytes - ivsize,
+				 ivsize, 0);
+
 	kfree(edesc);

 	ablkcipher_request_complete(req, err);
@@ -491,7 +491,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
 	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
 	if (!ret) {
 		/* in progress */
-		wait_for_completion_interruptible(&result.completion);
+		wait_for_completion(&result.completion);
 		ret = result.err;
 #ifdef DEBUG
 		print_hex_dump(KERN_ERR,
@@ -103,7 +103,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_len,
 	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
 	if (!ret) {
 		/* in progress */
-		wait_for_completion_interruptible(&result.completion);
+		wait_for_completion(&result.completion);
 		ret = result.err;
 #ifdef DEBUG
 		print_hex_dump(KERN_ERR, "ctx.key@"__stringify(__LINE__)": ",
@@ -816,7 +816,7 @@ static void talitos_unregister_rng(struct device *dev)
 * HMAC_SNOOP_NO_AFEA (HSNA) instead of type IPSEC_ESP
 */
 #define TALITOS_CRA_PRIORITY_AEAD_HSNA	(TALITOS_CRA_PRIORITY - 1)
-#define TALITOS_MAX_KEY_SIZE		96
+#define TALITOS_MAX_KEY_SIZE		(AES_MAX_KEY_SIZE + SHA512_BLOCK_SIZE)
 #define TALITOS_MAX_IV_LENGTH		16 /* max of AES_BLOCK_SIZE, DES3_EDE_BLOCK_SIZE */

 struct talitos_ctx {
@@ -1495,6 +1495,11 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *cipher,
 {
 	struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher);

+	if (keylen > TALITOS_MAX_KEY_SIZE) {
+		crypto_ablkcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		return -EINVAL;
+	}
+
 	memcpy(&ctx->key, key, keylen);
 	ctx->keylen = keylen;
@@ -1419,6 +1419,9 @@ static ssize_t amdgpu_ttm_vram_read(struct file *f, char __user *buf,
 	if (size & 0x3 || *pos & 0x3)
 		return -EINVAL;

+	if (*pos >= adev->mc.mc_vram_size)
+		return -ENXIO;
+
 	while (size) {
 		unsigned long flags;
 		uint32_t value;
@@ -330,6 +330,13 @@ static bool drm_dp_sideband_msg_build(struct drm_dp_sideband_msg_rx *msg,
 			return false;
 		}

+		/*
+		 * ignore out-of-order messages or messages that are part of a
+		 * failed transaction
+		 */
+		if (!recv_hdr.somt && !msg->have_somt)
+			return false;
+
 		/* get length contained in this portion */
 		msg->curchunk_len = recv_hdr.msg_len;
 		msg->curchunk_hdrlen = hdrlen;
@@ -2168,7 +2175,7 @@ out_unlock:
 }
 EXPORT_SYMBOL(drm_dp_mst_topology_mgr_resume);

-static void drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up)
+static bool drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up)
 {
 	int len;
 	u8 replyblock[32];
@@ -2183,12 +2190,12 @@ static void drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up)
 			       replyblock, len);
 	if (ret != len) {
 		DRM_DEBUG_KMS("failed to read DPCD down rep %d %d\n", len, ret);
-		return;
+		return false;
 	}
 	ret = drm_dp_sideband_msg_build(msg, replyblock, len, true);
 	if (!ret) {
 		DRM_DEBUG_KMS("sideband msg build failed %d\n", replyblock[0]);
-		return;
+		return false;
 	}
 	replylen = msg->curchunk_len + msg->curchunk_hdrlen;

@@ -2200,21 +2207,32 @@ static void drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up)
 		ret = drm_dp_dpcd_read(mgr->aux, basereg + curreply,
 				    replyblock, len);
 		if (ret != len) {
-			DRM_DEBUG_KMS("failed to read a chunk\n");
+			DRM_DEBUG_KMS("failed to read a chunk (len %d, ret %d)\n",
+				      len, ret);
+			return false;
 		}
+
 		ret = drm_dp_sideband_msg_build(msg, replyblock, len, false);
-		if (ret == false)
+		if (!ret) {
 			DRM_DEBUG_KMS("failed to build sideband msg\n");
+			return false;
+		}
+
 		curreply += len;
 		replylen -= len;
 	}
+	return true;
 }

 static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
 {
 	int ret = 0;

-	drm_dp_get_one_sb_msg(mgr, false);
+	if (!drm_dp_get_one_sb_msg(mgr, false)) {
+		memset(&mgr->down_rep_recv, 0,
+		       sizeof(struct drm_dp_sideband_msg_rx));
+		return 0;
+	}

 	if (mgr->down_rep_recv.have_eomt) {
 		struct drm_dp_sideband_msg_tx *txmsg;
@@ -2270,7 +2288,12 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
 static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
 {
 	int ret = 0;
-	drm_dp_get_one_sb_msg(mgr, true);
+
+	if (!drm_dp_get_one_sb_msg(mgr, true)) {
+		memset(&mgr->up_req_recv, 0,
+		       sizeof(struct drm_dp_sideband_msg_rx));
+		return 0;
+	}

 	if (mgr->up_req_recv.have_eomt) {
 		struct drm_dp_sideband_msg_req_body msg;
@@ -2322,7 +2345,9 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
 			DRM_DEBUG_KMS("Got RSN: pn: %d avail_pbn %d\n", msg.u.resource_stat.port_number, msg.u.resource_stat.available_pbn);
 		}

-		drm_dp_put_mst_branch_device(mstb);
+		if (mstb)
+			drm_dp_put_mst_branch_device(mstb);

 		memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
 	}
 	return ret;
@@ -30,6 +30,7 @@
 #include "radeon_audio.h"
 #include "atom.h"
 #include <linux/backlight.h>
+#include <linux/dmi.h>

 extern int atom_debug;

@@ -2183,9 +2184,17 @@ int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx)
 			goto assigned;
 	}

-	/* on DCE32 and encoder can driver any block so just crtc id */
+	/*
+	 * On DCE32 any encoder can drive any block so usually just use crtc id,
+	 * but Apple thinks different at least on iMac10,1, so there use linkb,
+	 * otherwise the internal eDP panel will stay dark.
+	 */
 	if (ASIC_IS_DCE32(rdev)) {
-		enc_idx = radeon_crtc->crtc_id;
+		if (dmi_match(DMI_PRODUCT_NAME, "iMac10,1"))
+			enc_idx = (dig->linkb) ? 1 : 0;
+		else
+			enc_idx = radeon_crtc->crtc_id;
+
 		goto assigned;
 	}
@@ -776,6 +776,12 @@ bool ci_dpm_vblank_too_short(struct radeon_device *rdev)
 	u32 vblank_time = r600_dpm_get_vblank_time(rdev);
 	u32 switch_limit = pi->mem_gddr5 ? 450 : 300;

+	/* disable mclk switching if the refresh is >120Hz, even if the
+	 * blanking period would allow it
+	 */
+	if (r600_dpm_get_vrefresh(rdev) > 120)
+		return true;
+
 	/* disable mclk switching if the refresh is >120Hz, even if the
 	 * blanking period would allow it
 	 */
@@ -1343,7 +1343,6 @@ int ttm_bo_clean_mm(struct ttm_bo_device *bdev, unsigned mem_type)
 			   mem_type);
 		return ret;
 	}
-	fence_put(man->move);

 	man->use_type = false;
 	man->has_type = false;
@@ -1355,6 +1354,9 @@ int ttm_bo_clean_mm(struct ttm_bo_device *bdev, unsigned mem_type)
 		ret = (*man->func->takedown)(man);
 	}

+	fence_put(man->move);
+	man->move = NULL;
+
 	return ret;
 }
 EXPORT_SYMBOL(ttm_bo_clean_mm);
@@ -518,6 +518,11 @@ static int addr_resolve(struct sockaddr *src_in,
 	struct dst_entry *dst;
 	int ret;

+	if (!addr->net) {
+		pr_warn_ratelimited("%s: missing namespace\n", __func__);
+		return -EINVAL;
+	}
+
 	if (src_in->sa_family == AF_INET) {
 		struct rtable *rt = NULL;
 		const struct sockaddr_in *dst_in4 =
@@ -555,7 +560,6 @@ static int addr_resolve(struct sockaddr *src_in,
 	}

 	addr->bound_dev_if = ndev->ifindex;
-	addr->net = dev_net(ndev);
 	dev_put(ndev);

 	return ret;
@@ -976,6 +976,8 @@ int rdma_init_qp_attr(struct rdma_cm_id *id, struct ib_qp_attr *qp_attr,
 		} else
 			ret = iw_cm_init_qp_attr(id_priv->cm_id.iw, qp_attr,
 						 qp_attr_mask);
+		qp_attr->port_num = id_priv->id.port_num;
+		*qp_attr_mask |= IB_QP_PORT;
 	} else
 		ret = -ENOSYS;

@@ -1823,7 +1823,7 @@ mlx5_ib_sg_to_klms(struct mlx5_ib_mr *mr,
 	mr->ndescs = sg_nents;

 	for_each_sg(sgl, sg, sg_nents, i) {
-		if (unlikely(i > mr->max_descs))
+		if (unlikely(i >= mr->max_descs))
 			break;
 		klms[i].va = cpu_to_be64(sg_dma_address(sg) + sg_offset);
 		klms[i].bcount = cpu_to_be32(sg_dma_len(sg) - sg_offset);
@@ -83,6 +83,7 @@ static struct scsi_host_template iscsi_iser_sht;
 static struct iscsi_transport iscsi_iser_transport;
 static struct scsi_transport_template *iscsi_iser_scsi_transport;
 static struct workqueue_struct *release_wq;
+static DEFINE_MUTEX(unbind_iser_conn_mutex);
 struct iser_global ig;
 
 int iser_debug_level = 0;
@@ -550,12 +551,14 @@ iscsi_iser_conn_stop(struct iscsi_cls_conn *cls_conn, int flag)
 	 */
 	if (iser_conn) {
 		mutex_lock(&iser_conn->state_mutex);
+		mutex_lock(&unbind_iser_conn_mutex);
 		iser_conn_terminate(iser_conn);
 		iscsi_conn_stop(cls_conn, flag);
 
 		/* unbind */
 		iser_conn->iscsi_conn = NULL;
 		conn->dd_data = NULL;
+		mutex_unlock(&unbind_iser_conn_mutex);
 
 		complete(&iser_conn->stop_completion);
 		mutex_unlock(&iser_conn->state_mutex);
@@ -973,13 +976,21 @@ static int iscsi_iser_slave_alloc(struct scsi_device *sdev)
 	struct iser_conn *iser_conn;
 	struct ib_device *ib_dev;
 
+	mutex_lock(&unbind_iser_conn_mutex);
+
 	session = starget_to_session(scsi_target(sdev))->dd_data;
 	iser_conn = session->leadconn->dd_data;
+	if (!iser_conn) {
+		mutex_unlock(&unbind_iser_conn_mutex);
+		return -ENOTCONN;
+	}
 	ib_dev = iser_conn->ib_conn.device->ib_device;
 
 	if (!(ib_dev->attrs.device_cap_flags & IB_DEVICE_SG_GAPS_REG))
 		blk_queue_virt_boundary(sdev->request_queue, ~MASK_4K);
 
+	mutex_unlock(&unbind_iser_conn_mutex);
+
 	return 0;
 }
@@ -1447,7 +1447,7 @@ static void
 isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 {
 	struct isert_conn *isert_conn = wc->qp->qp_context;
-	struct ib_device *ib_dev = isert_conn->cm_id->device;
+	struct ib_device *ib_dev = isert_conn->device->ib_device;
 
 	if (unlikely(wc->status != IB_WC_SUCCESS)) {
 		isert_print_wc(wc, "login recv");
@@ -232,9 +232,11 @@ static ssize_t keychord_write(struct file *file, const char __user *buffer,
 {
 	struct keychord_device *kdev = file->private_data;
 	struct input_keychord *keychords = 0;
-	struct input_keychord *keychord, *next, *end;
+	struct input_keychord *keychord;
 	int ret, i, key;
 	unsigned long flags;
+	size_t resid = count;
+	size_t key_bytes;
 
 	if (count < sizeof(struct input_keychord))
 		return -EINVAL;
@@ -265,15 +267,29 @@ static ssize_t keychord_write(struct file *file, const char __user *buffer,
 	kdev->head = kdev->tail = 0;
 
 	keychord = keychords;
-	end = (struct input_keychord *)((char *)keychord + count);
 
-	while (keychord < end) {
-		next = NEXT_KEYCHORD(keychord);
-		if (keychord->count <= 0 || next > end) {
+	while (resid > 0) {
+		/* Is the entire keychord entry header present ? */
+		if (resid < sizeof(struct input_keychord)) {
+			pr_err("keychord: Insufficient bytes present for header %zu\n",
+			       resid);
+			goto err_unlock_return;
+		}
+		resid -= sizeof(struct input_keychord);
+		if (keychord->count <= 0) {
 			pr_err("keychord: invalid keycode count %d\n",
 				keychord->count);
 			goto err_unlock_return;
 		}
+		key_bytes = keychord->count * sizeof(keychord->keycodes[0]);
+		/* Do we have all the expected keycodes ? */
+		if (resid < key_bytes) {
+			pr_err("keychord: Insufficient bytes present for keycount %zu\n",
+			       resid);
+			goto err_unlock_return;
+		}
+		resid -= key_bytes;
 
 		if (keychord->version != KEYCHORD_VERSION) {
 			pr_err("keychord: unsupported version %d\n",
 				keychord->version);
@@ -292,7 +308,7 @@ static ssize_t keychord_write(struct file *file, const char __user *buffer,
 		}
 
 		kdev->keychord_count++;
-		keychord = next;
+		keychord = NEXT_KEYCHORD(keychord);
 	}
 
 	kdev->keychords = keychords;
@@ -434,8 +434,10 @@ static int i8042_start(struct serio *serio)
 {
 	struct i8042_port *port = serio->port_data;
 
+	spin_lock_irq(&i8042_lock);
 	port->exists = true;
-	mb();
+	spin_unlock_irq(&i8042_lock);
+
 	return 0;
 }
@@ -448,16 +450,20 @@ static void i8042_stop(struct serio *serio)
 {
 	struct i8042_port *port = serio->port_data;
 
+	spin_lock_irq(&i8042_lock);
 	port->exists = false;
+	port->serio = NULL;
+	spin_unlock_irq(&i8042_lock);
 
 	/*
	 * We need to make sure that interrupt handler finishes using
	 * our serio port before we return from this function.
	 * We synchronize with both AUX and KBD IRQs because there is
	 * a (very unlikely) chance that AUX IRQ is raised for KBD port
	 * and vice versa.
	 */
 	synchronize_irq(I8042_AUX_IRQ);
 	synchronize_irq(I8042_KBD_IRQ);
-	port->serio = NULL;
 }
 
 /*
@@ -574,7 +580,7 @@ static irqreturn_t i8042_interrupt(int irq, void *dev_id)
 
 	spin_unlock_irqrestore(&i8042_lock, flags);
 
-	if (likely(port->exists && !filtered))
+	if (likely(serio && !filtered))
 		serio_interrupt(serio, data, dfl);
 
  out:
@@ -647,6 +647,9 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
 	int enabled;
 	u64 val;
 
+	if (cpu >= nr_cpu_ids)
+		return -EINVAL;
+
 	if (gic_irq_in_rdist(d))
 		return -EINVAL;
 
@@ -646,6 +646,8 @@ static int add_as_linear_device(struct dm_target *ti, char *dev)
 	android_verity_target.direct_access = dm_linear_direct_access,
 	android_verity_target.io_hints = NULL;
 
+	set_disk_ro(dm_disk(dm_table_get_md(ti->table)), 0);
+
 	err = dm_linear_ctr(ti, DM_LINEAR_ARGS, linear_table_args);
 
 	if (!err) {
@@ -431,7 +431,7 @@ static struct pgpath *choose_pgpath(struct multipath *m, size_t nr_bytes)
 	unsigned long flags;
 	struct priority_group *pg;
 	struct pgpath *pgpath;
-	bool bypassed = true;
+	unsigned bypassed = 1;
 
 	if (!atomic_read(&m->nr_valid_paths)) {
 		clear_bit(MPATHF_QUEUE_IO, &m->flags);
@@ -470,7 +470,7 @@ check_current_pg:
 	 */
 	do {
 		list_for_each_entry(pg, &m->priority_groups, list) {
-			if (pg->bypassed == bypassed)
+			if (pg->bypassed == !!bypassed)
 				continue;
 			pgpath = choose_path_in_pg(m, pg, nr_bytes);
 			if (!IS_ERR_OR_NULL(pgpath)) {
@@ -1073,7 +1073,7 @@ static void raid1_make_request(struct mddev *mddev, struct bio * bio)
 		 */
 		DEFINE_WAIT(w);
 		for (;;) {
-			flush_signals(current);
+			sigset_t full, old;
 			prepare_to_wait(&conf->wait_barrier,
 					&w, TASK_INTERRUPTIBLE);
 			if (bio_end_sector(bio) <= mddev->suspend_lo ||
@@ -1082,7 +1082,10 @@ static void raid1_make_request(struct mddev *mddev, struct bio * bio)
 			     !md_cluster_ops->area_resyncing(mddev, WRITE,
 				      bio->bi_iter.bi_sector, bio_end_sector(bio))))
 				break;
+			sigfillset(&full);
+			sigprocmask(SIG_BLOCK, &full, &old);
 			schedule();
+			sigprocmask(SIG_SETMASK, &old, NULL);
 		}
 		finish_wait(&conf->wait_barrier, &w);
 	}
 
@@ -5300,12 +5300,15 @@ static void raid5_make_request(struct mddev *mddev, struct bio * bi)
				 * userspace, we want an interruptible
				 * wait.
				 */
-				flush_signals(current);
 				prepare_to_wait(&conf->wait_for_overlap,
					&w, TASK_INTERRUPTIBLE);
 				if (logical_sector >= mddev->suspend_lo &&
				    logical_sector < mddev->suspend_hi) {
+					sigset_t full, old;
+					sigfillset(&full);
+					sigprocmask(SIG_BLOCK, &full, &old);
 					schedule();
+					sigprocmask(SIG_SETMASK, &old, NULL);
 					do_prepare = true;
 				}
 				goto retry;
@@ -7557,12 +7560,10 @@ static void end_reshape(struct r5conf *conf)
 {
 
 	if (!test_bit(MD_RECOVERY_INTR, &conf->mddev->recovery)) {
-		struct md_rdev *rdev;
-
 		spin_lock_irq(&conf->device_lock);
 		conf->previous_raid_disks = conf->raid_disks;
-		rdev_for_each(rdev, conf->mddev)
-			rdev->data_offset = rdev->new_data_offset;
+		md_finish_reshape(conf->mddev);
 		smp_wmb();
 		conf->reshape_progress = MaxSector;
+		conf->mddev->reshape_position = MaxSector;
 
@@ -3691,7 +3691,14 @@ struct cx88_core *cx88_core_create(struct pci_dev *pci, int nr)
 	core->nr = nr;
 	sprintf(core->name, "cx88[%d]", core->nr);
 
-	core->tvnorm = V4L2_STD_NTSC_M;
+	/*
+	 * Note: Setting initial standard here would cause first call to
+	 * cx88_set_tvnorm() to return without programming any registers. Leave
+	 * it blank for at this point and it will get set later in
+	 * cx8800_initdev()
+	 */
+	core->tvnorm = 0;
 
 	core->width = 320;
 	core->height = 240;
 	core->field = V4L2_FIELD_INTERLACED;
 
@@ -1422,7 +1422,7 @@ static int cx8800_initdev(struct pci_dev *pci_dev,
 
 	/* initial device configuration */
 	mutex_lock(&core->lock);
-	cx88_set_tvnorm(core, core->tvnorm);
+	cx88_set_tvnorm(core, V4L2_STD_NTSC_M);
 	v4l2_ctrl_handler_setup(&core->video_hdl);
 	v4l2_ctrl_handler_setup(&core->audio_hdl);
 	cx88_video_mux(core, 0);
@@ -1099,10 +1099,10 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
				   struct s5p_jpeg_ctx *ctx)
 {
 	int c, components = 0, notfound, n_dht = 0, n_dqt = 0;
-	unsigned int height, width, word, subsampling = 0, sos = 0, sof = 0,
-		     sof_len = 0;
-	unsigned int dht[S5P_JPEG_MAX_MARKER], dht_len[S5P_JPEG_MAX_MARKER],
-		     dqt[S5P_JPEG_MAX_MARKER], dqt_len[S5P_JPEG_MAX_MARKER];
+	unsigned int height = 0, width = 0, word, subsampling = 0;
+	unsigned int sos = 0, sof = 0, sof_len = 0;
+	unsigned int dht[S5P_JPEG_MAX_MARKER], dht_len[S5P_JPEG_MAX_MARKER];
+	unsigned int dqt[S5P_JPEG_MAX_MARKER], dqt_len[S5P_JPEG_MAX_MARKER];
 	long length;
 	struct s5p_jpeg_buffer jpeg_buffer;
 
@@ -1629,7 +1629,7 @@ static void imon_incoming_packet(struct imon_context *ictx,
 		if (kc == KEY_KEYBOARD && !ictx->release_code) {
 			ictx->last_keycode = kc;
 			if (!nomouse) {
-				ictx->pad_mouse = ~(ictx->pad_mouse) & 0x1;
+				ictx->pad_mouse = !ictx->pad_mouse;
 				dev_dbg(dev, "toggling to %s mode\n",
					ictx->pad_mouse ? "mouse" : "keyboard");
 				spin_unlock_irqrestore(&ictx->kc_lock, flags);
@@ -320,7 +320,7 @@ fail:
 static int mxl111sf_i2c_send_data(struct mxl111sf_state *state,
				  u8 index, u8 *wdata)
 {
-	int ret = mxl111sf_ctrl_msg(state->d, wdata[0],
+	int ret = mxl111sf_ctrl_msg(state, wdata[0],
				    &wdata[1], 25, NULL, 0);
 	mxl_fail(ret);
 
@@ -330,7 +330,7 @@ static int mxl111sf_i2c_send_data(struct mxl111sf_state *state,
 static int mxl111sf_i2c_get_data(struct mxl111sf_state *state,
				  u8 index, u8 *wdata, u8 *rdata)
 {
-	int ret = mxl111sf_ctrl_msg(state->d, wdata[0],
+	int ret = mxl111sf_ctrl_msg(state, wdata[0],
				    &wdata[1], 25, rdata, 24);
 	mxl_fail(ret);
 
@@ -24,9 +24,6 @@
 #include "lgdt3305.h"
 #include "lg2160.h"
 
-/* Max transfer size done by I2C transfer functions */
-#define MAX_XFER_SIZE 64
-
 int dvb_usb_mxl111sf_debug;
 module_param_named(debug, dvb_usb_mxl111sf_debug, int, 0644);
 MODULE_PARM_DESC(debug, "set debugging level "
@@ -56,27 +53,34 @@ MODULE_PARM_DESC(rfswitch, "force rf switch position (0=auto, 1=ext, 2=int).");
 
 DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
 
-int mxl111sf_ctrl_msg(struct dvb_usb_device *d,
+int mxl111sf_ctrl_msg(struct mxl111sf_state *state,
		      u8 cmd, u8 *wbuf, int wlen, u8 *rbuf, int rlen)
 {
+	struct dvb_usb_device *d = state->d;
 	int wo = (rbuf == NULL || rlen == 0); /* write-only */
 	int ret;
-	u8 sndbuf[MAX_XFER_SIZE];
 
-	if (1 + wlen > sizeof(sndbuf)) {
+	if (1 + wlen > MXL_MAX_XFER_SIZE) {
 		pr_warn("%s: len=%d is too big!\n", __func__, wlen);
 		return -EOPNOTSUPP;
 	}
 
 	pr_debug("%s(wlen = %d, rlen = %d)\n", __func__, wlen, rlen);
 
-	memset(sndbuf, 0, 1+wlen);
+	mutex_lock(&state->msg_lock);
+	memset(state->sndbuf, 0, 1+wlen);
+	memset(state->rcvbuf, 0, rlen);
 
-	sndbuf[0] = cmd;
-	memcpy(&sndbuf[1], wbuf, wlen);
+	state->sndbuf[0] = cmd;
+	memcpy(&state->sndbuf[1], wbuf, wlen);
 
+	ret = (wo) ? dvb_usbv2_generic_write(d, state->sndbuf, 1+wlen) :
+		dvb_usbv2_generic_rw(d, state->sndbuf, 1+wlen, state->rcvbuf,
				     rlen);
+
+	memcpy(rbuf, state->rcvbuf, rlen);
+	mutex_unlock(&state->msg_lock);
+
-	ret = (wo) ? dvb_usbv2_generic_write(d, sndbuf, 1+wlen) :
-		dvb_usbv2_generic_rw(d, sndbuf, 1+wlen, rbuf, rlen);
 	mxl_fail(ret);
 
 	return ret;
@@ -92,7 +96,7 @@ int mxl111sf_read_reg(struct mxl111sf_state *state, u8 addr, u8 *data)
 	u8 buf[2];
 	int ret;
 
-	ret = mxl111sf_ctrl_msg(state->d, MXL_CMD_REG_READ, &addr, 1, buf, 2);
+	ret = mxl111sf_ctrl_msg(state, MXL_CMD_REG_READ, &addr, 1, buf, 2);
 	if (mxl_fail(ret)) {
 		mxl_debug("error reading reg: 0x%02x", addr);
 		goto fail;
@@ -118,7 +122,7 @@ int mxl111sf_write_reg(struct mxl111sf_state *state, u8 addr, u8 data)
 
 	pr_debug("W: (0x%02x, 0x%02x)\n", addr, data);
 
-	ret = mxl111sf_ctrl_msg(state->d, MXL_CMD_REG_WRITE, buf, 2, NULL, 0);
+	ret = mxl111sf_ctrl_msg(state, MXL_CMD_REG_WRITE, buf, 2, NULL, 0);
 	if (mxl_fail(ret))
 		pr_err("error writing reg: 0x%02x, val: 0x%02x", addr, data);
 	return ret;
@@ -922,6 +926,8 @@ static int mxl111sf_init(struct dvb_usb_device *d)
 	static u8 eeprom[256];
 	struct i2c_client c;
 
+	mutex_init(&state->msg_lock);
+
 	ret = get_chip_info(state);
 	if (mxl_fail(ret))
 		pr_err("failed to get chip info during probe");
@@ -19,6 +19,9 @@
 #include <media/tveeprom.h>
 #include <media/media-entity.h>
 
+/* Max transfer size done by I2C transfer functions */
+#define MXL_MAX_XFER_SIZE 64
+
 #define MXL_EP1_REG_READ 1
 #define MXL_EP2_REG_WRITE 2
 #define MXL_EP3_INTERRUPT 3
@@ -86,6 +89,9 @@ struct mxl111sf_state {
 	struct mutex fe_lock;
 	u8 num_frontends;
 	struct mxl111sf_adap_state adap_state[3];
+	u8 sndbuf[MXL_MAX_XFER_SIZE];
+	u8 rcvbuf[MXL_MAX_XFER_SIZE];
+	struct mutex msg_lock;
 #ifdef CONFIG_MEDIA_CONTROLLER_DVB
 	struct media_entity tuner;
 	struct media_pad tuner_pads[2];
@@ -108,7 +114,7 @@ int mxl111sf_ctrl_program_regs(struct mxl111sf_state *state,
 
 /* needed for hardware i2c functions in mxl111sf-i2c.c:
  * mxl111sf_i2c_send_data / mxl111sf_i2c_get_data */
-int mxl111sf_ctrl_msg(struct dvb_usb_device *d,
+int mxl111sf_ctrl_msg(struct mxl111sf_state *state,
		      u8 cmd, u8 *wbuf, int wlen, u8 *rbuf, int rlen);
 
 #define mxl_printk(kern, fmt, arg...) \
@@ -375,6 +375,7 @@ int enclosure_add_device(struct enclosure_device *edev, int component,
			 struct device *dev)
 {
 	struct enclosure_component *cdev;
+	int err;
 
 	if (!edev || component >= edev->components)
 		return -EINVAL;
@@ -384,12 +385,17 @@ int enclosure_add_device(struct enclosure_device *edev, int component,
 	if (cdev->dev == dev)
 		return -EEXIST;
 
-	if (cdev->dev)
+	if (cdev->dev) {
 		enclosure_remove_links(cdev);
-
-	put_device(cdev->dev);
+		put_device(cdev->dev);
+	}
 	cdev->dev = get_device(dev);
-	return enclosure_add_links(cdev);
+	err = enclosure_add_links(cdev);
+	if (err) {
+		put_device(cdev->dev);
+		cdev->dev = NULL;
+	}
+	return err;
 }
 EXPORT_SYMBOL_GPL(enclosure_add_device);
 
@@ -230,7 +230,7 @@ static int cn23xx_pf_soft_reset(struct octeon_device *oct)
 	/* Wait for 100ms as Octeon resets. */
 	mdelay(100);
 
-	if (octeon_read_csr64(oct, CN23XX_SLI_SCRATCH1) == 0x1234ULL) {
+	if (octeon_read_csr64(oct, CN23XX_SLI_SCRATCH1)) {
 		dev_err(&oct->pci_dev->dev, "OCTEON[%d]: Soft reset failed\n",
			oct->octeon_id);
 		return 1;
@@ -48,7 +48,7 @@ int lio_cn6xxx_soft_reset(struct octeon_device *oct)
 	/* Wait for 10ms as Octeon resets. */
 	mdelay(100);
 
-	if (octeon_read_csr64(oct, CN6XXX_SLI_SCRATCH1) == 0x1234ULL) {
+	if (octeon_read_csr64(oct, CN6XXX_SLI_SCRATCH1)) {
 		dev_err(&oct->pci_dev->dev, "Soft reset failed\n");
 		return 1;
 	}
@@ -246,6 +246,7 @@ static s32 igb_init_phy_params_82575(struct e1000_hw *hw)
					E1000_STATUS_FUNC_SHIFT;
 
 	/* Set phy->phy_addr and phy->id. */
+	igb_write_phy_reg_82580(hw, I347AT4_PAGE_SELECT, 0);
 	ret_val = igb_get_phy_id_82575(hw);
 	if (ret_val)
 		return ret_val;
@@ -2671,8 +2671,6 @@ mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
		PPORT_802_3_GET(pstats, a_frame_check_sequence_errors);
 	stats->rx_frame_errors = PPORT_802_3_GET(pstats, a_alignment_errors);
 	stats->tx_aborted_errors = PPORT_2863_GET(pstats, if_out_discards);
-	stats->tx_carrier_errors =
-		PPORT_802_3_GET(pstats, a_symbol_error_during_carrier);
 	stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors +
			   stats->rx_frame_errors;
 	stats->tx_errors = stats->tx_aborted_errors + stats->tx_carrier_errors;
@@ -67,6 +67,7 @@ enum {
 
 enum {
 	MLX5_DROP_NEW_HEALTH_WORK,
+	MLX5_DROP_NEW_RECOVERY_WORK,
 };
 
 static u8 get_nic_state(struct mlx5_core_dev *dev)
@@ -193,7 +194,7 @@ static void health_care(struct work_struct *work)
 	mlx5_handle_bad_state(dev);
 
 	spin_lock(&health->wq_lock);
-	if (!test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags))
+	if (!test_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags))
 		schedule_delayed_work(&health->recover_work, recover_delay);
 	else
 		dev_err(&dev->pdev->dev,
|
||||
init_timer(&health->timer);
|
||||
health->sick = 0;
|
||||
clear_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
|
||||
clear_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
|
||||
health->health = &dev->iseg->health;
|
||||
health->health_counter = &dev->iseg->health_counter;
|
||||
|
||||
@@ -350,11 +352,22 @@ void mlx5_drain_health_wq(struct mlx5_core_dev *dev)
 
 	spin_lock(&health->wq_lock);
 	set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
+	set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
 	spin_unlock(&health->wq_lock);
 	cancel_delayed_work_sync(&health->recover_work);
 	cancel_work_sync(&health->work);
 }
 
+void mlx5_drain_health_recovery(struct mlx5_core_dev *dev)
+{
+	struct mlx5_core_health *health = &dev->priv.health;
+
+	spin_lock(&health->wq_lock);
+	set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
+	spin_unlock(&health->wq_lock);
+	cancel_delayed_work_sync(&dev->priv.health.recover_work);
+}
+
 void mlx5_health_cleanup(struct mlx5_core_dev *dev)
 {
 	struct mlx5_core_health *health = &dev->priv.health;
 
@@ -1169,7 +1169,7 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
 	int err = 0;
 
 	if (cleanup)
-		mlx5_drain_health_wq(dev);
+		mlx5_drain_health_recovery(dev);
 
 	mutex_lock(&dev->intf_state_mutex);
 	if (test_bit(MLX5_INTERFACE_STATE_DOWN, &dev->intf_state)) {
@@ -1505,8 +1505,8 @@ static int ofdpa_port_ipv4_nh(struct ofdpa_port *ofdpa_port,
 		*index = entry->index;
 		resolved = false;
 	} else if (removing) {
-		ofdpa_neigh_del(trans, found);
 		*index = found->index;
+		ofdpa_neigh_del(trans, found);
 	} else if (updating) {
 		ofdpa_neigh_update(found, trans, NULL, false);
 		resolved = !is_zero_ether_addr(found->eth_dst);
@@ -4399,12 +4399,9 @@ static void efx_ef10_filter_uc_addr_list(struct efx_nic *efx)
 	struct efx_ef10_filter_table *table = efx->filter_state;
 	struct net_device *net_dev = efx->net_dev;
 	struct netdev_hw_addr *uc;
-	int addr_count;
 	unsigned int i;
 
-	addr_count = netdev_uc_count(net_dev);
 	table->uc_promisc = !!(net_dev->flags & IFF_PROMISC);
-	table->dev_uc_count = 1 + addr_count;
 	ether_addr_copy(table->dev_uc_list[0].addr, net_dev->dev_addr);
 	i = 1;
 	netdev_for_each_uc_addr(uc, net_dev) {
@@ -4415,6 +4412,8 @@ static void efx_ef10_filter_uc_addr_list(struct efx_nic *efx)
 		ether_addr_copy(table->dev_uc_list[i].addr, uc->addr);
 		i++;
 	}
+
+	table->dev_uc_count = i;
 }
 
 static void efx_ef10_filter_mc_addr_list(struct efx_nic *efx)
@@ -4422,11 +4421,10 @@ static void efx_ef10_filter_mc_addr_list(struct efx_nic *efx)
 	struct efx_ef10_filter_table *table = efx->filter_state;
 	struct net_device *net_dev = efx->net_dev;
 	struct netdev_hw_addr *mc;
-	unsigned int i, addr_count;
+	unsigned int i;
 
 	table->mc_promisc = !!(net_dev->flags & (IFF_PROMISC | IFF_ALLMULTI));
 
-	addr_count = netdev_mc_count(net_dev);
 	i = 0;
 	netdev_for_each_mc_addr(mc, net_dev) {
 		if (i >= EFX_EF10_FILTER_DEV_MC_MAX) {
@@ -908,7 +908,7 @@ static void decode_txts(struct dp83640_private *dp83640,
 	if (overflow) {
 		pr_debug("tx timestamp queue overflow, count %d\n", overflow);
 		while (skb) {
-			skb_complete_tx_timestamp(skb, NULL);
+			kfree_skb(skb);
 			skb = skb_dequeue(&dp83640->tx_queue);
 		}
 		return;
@@ -622,6 +622,8 @@ static int ksz9031_read_status(struct phy_device *phydev)
 	if ((regval & 0xFF) == 0xFF) {
 		phy_init_hw(phydev);
 		phydev->link = 0;
+		if (phydev->drv->config_intr && phy_interrupt_is_valid(phydev))
+			phydev->drv->config_intr(phydev);
 	}
 
 	return 0;
@@ -787,15 +787,10 @@ static int vrf_del_slave(struct net_device *dev, struct net_device *port_dev)
 static void vrf_dev_uninit(struct net_device *dev)
 {
 	struct net_vrf *vrf = netdev_priv(dev);
-	struct net_device *port_dev;
-	struct list_head *iter;
 
 	vrf_rtable_release(dev, vrf);
 	vrf_rt6_release(dev, vrf);
 
-	netdev_for_each_lower_dev(dev, port_dev, iter)
-		vrf_del_slave(dev, port_dev);
-
 	free_percpu(dev->dstats);
 	dev->dstats = NULL;
 }
@@ -1232,6 +1227,12 @@ static int vrf_validate(struct nlattr *tb[], struct nlattr *data[])
 
 static void vrf_dellink(struct net_device *dev, struct list_head *head)
 {
+	struct net_device *port_dev;
+	struct list_head *iter;
+
+	netdev_for_each_lower_dev(dev, port_dev, iter)
+		vrf_del_slave(dev, port_dev);
+
 	unregister_netdevice_queue(dev, head);
 }
 
@@ -227,15 +227,15 @@ static struct vxlan_sock *vxlan_find_sock(struct net *net, sa_family_t family,
 
 static struct vxlan_dev *vxlan_vs_find_vni(struct vxlan_sock *vs, __be32 vni)
 {
-	struct vxlan_dev *vxlan;
+	struct vxlan_dev_node *node;
 
 	/* For flow based devices, map all packets to VNI 0 */
 	if (vs->flags & VXLAN_F_COLLECT_METADATA)
 		vni = 0;
 
-	hlist_for_each_entry_rcu(vxlan, vni_head(vs, vni), hlist) {
-		if (vxlan->default_dst.remote_vni == vni)
-			return vxlan;
+	hlist_for_each_entry_rcu(node, vni_head(vs, vni), hlist) {
+		if (node->vxlan->default_dst.remote_vni == vni)
+			return node->vxlan;
 	}
 
 	return NULL;
@@ -2309,17 +2309,22 @@ static void vxlan_vs_del_dev(struct vxlan_dev *vxlan)
 	struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
 
 	spin_lock(&vn->sock_lock);
-	hlist_del_init_rcu(&vxlan->hlist);
+	hlist_del_init_rcu(&vxlan->hlist4.hlist);
+#if IS_ENABLED(CONFIG_IPV6)
+	hlist_del_init_rcu(&vxlan->hlist6.hlist);
+#endif
 	spin_unlock(&vn->sock_lock);
 }
 
-static void vxlan_vs_add_dev(struct vxlan_sock *vs, struct vxlan_dev *vxlan)
+static void vxlan_vs_add_dev(struct vxlan_sock *vs, struct vxlan_dev *vxlan,
+			     struct vxlan_dev_node *node)
 {
 	struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
 	__be32 vni = vxlan->default_dst.remote_vni;
 
+	node->vxlan = vxlan;
 	spin_lock(&vn->sock_lock);
-	hlist_add_head_rcu(&vxlan->hlist, vni_head(vs, vni));
+	hlist_add_head_rcu(&node->hlist, vni_head(vs, vni));
 	spin_unlock(&vn->sock_lock);
 }
 
@@ -2778,6 +2783,7 @@ static int __vxlan_sock_add(struct vxlan_dev *vxlan, bool ipv6)
 {
 	struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
 	struct vxlan_sock *vs = NULL;
+	struct vxlan_dev_node *node;
 
 	if (!vxlan->cfg.no_share) {
 		spin_lock(&vn->sock_lock);
@@ -2795,12 +2801,16 @@ static int __vxlan_sock_add(struct vxlan_dev *vxlan, bool ipv6)
 	if (IS_ERR(vs))
 		return PTR_ERR(vs);
 #if IS_ENABLED(CONFIG_IPV6)
-	if (ipv6)
+	if (ipv6) {
 		rcu_assign_pointer(vxlan->vn6_sock, vs);
-	else
+		node = &vxlan->hlist6;
+	} else
 #endif
 	{
 		rcu_assign_pointer(vxlan->vn4_sock, vs);
-		vxlan_vs_add_dev(vs, vxlan);
+		node = &vxlan->hlist4;
 	}
+	vxlan_vs_add_dev(vs, vxlan, node);
 	return 0;
 }
 
@@ -1821,8 +1821,6 @@ static void ar9003_hw_spectral_scan_wait(struct ath_hw *ah)
 static void ar9003_hw_tx99_start(struct ath_hw *ah, u32 qnum)
 {
 	REG_SET_BIT(ah, AR_PHY_TEST, PHY_AGC_CLR);
-	REG_SET_BIT(ah, 0x9864, 0x7f000);
-	REG_SET_BIT(ah, 0x9924, 0x7f00fe);
 	REG_CLR_BIT(ah, AR_DIAG_SW, AR_DIAG_RX_DIS);
 	REG_WRITE(ah, AR_CR, AR_CR_RXD);
 	REG_WRITE(ah, AR_DLCL_IFS(qnum), 0);
@@ -120,6 +120,8 @@ void ath9k_rng_start(struct ath_softc *sc)
 
 void ath9k_rng_stop(struct ath_softc *sc)
 {
-	if (sc->rng_task)
+	if (sc->rng_task) {
 		kthread_stop(sc->rng_task);
+		sc->rng_task = NULL;
+	}
 }
 
@@ -189,22 +189,27 @@ static ssize_t write_file_tx99(struct file *file, const char __user *user_buf,
 	if (strtobool(buf, &start))
 		return -EINVAL;
 
+	mutex_lock(&sc->mutex);
+
 	if (start == sc->tx99_state) {
 		if (!start)
-			return count;
+			goto out;
 		ath_dbg(common, XMIT, "Resetting TX99\n");
 		ath9k_tx99_deinit(sc);
 	}
 
 	if (!start) {
 		ath9k_tx99_deinit(sc);
-		return count;
+		goto out;
 	}
 
 	r = ath9k_tx99_init(sc);
-	if (r)
+	if (r) {
+		mutex_unlock(&sc->mutex);
 		return r;
-
+	}
+out:
 	mutex_unlock(&sc->mutex);
 	return count;
 }
 
@@ -705,7 +705,7 @@ done:
 int brcmf_sdiod_recv_chain(struct brcmf_sdio_dev *sdiodev,
			   struct sk_buff_head *pktq, uint totlen)
 {
-	struct sk_buff *glom_skb;
+	struct sk_buff *glom_skb = NULL;
 	struct sk_buff *skb;
 	u32 addr = sdiodev->sbwad;
 	int err = 0;
@@ -726,10 +726,8 @@ int brcmf_sdiod_recv_chain(struct brcmf_sdio_dev *sdiodev,
			return -ENOMEM;
 		err = brcmf_sdiod_buffrw(sdiodev, SDIO_FUNC_2, false, addr,
					 glom_skb);
-		if (err) {
-			brcmu_pkt_buf_free_skb(glom_skb);
+		if (err)
			goto done;
-		}
 
 		skb_queue_walk(pktq, skb) {
			memcpy(skb->data, glom_skb->data, skb->len);
@@ -740,6 +738,7 @@ int brcmf_sdiod_recv_chain(struct brcmf_sdio_dev *sdiodev,
				 pktq);
 
 done:
+	brcmu_pkt_buf_free_skb(glom_skb);
 	return err;
 }
 
@@ -4930,6 +4930,11 @@ brcmf_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
 		cfg80211_mgmt_tx_status(wdev, *cookie, buf, len, true,
					GFP_KERNEL);
 	} else if (ieee80211_is_action(mgmt->frame_control)) {
+		if (len > BRCMF_FIL_ACTION_FRAME_SIZE + DOT11_MGMT_HDR_LEN) {
+			brcmf_err("invalid action frame length\n");
+			err = -EINVAL;
+			goto exit;
+		}
 		af_params = kzalloc(sizeof(*af_params), GFP_KERNEL);
 		if (af_params == NULL) {
			brcmf_err("unable to allocate frame\n");
@@ -6873,7 +6878,7 @@ struct brcmf_cfg80211_info *brcmf_cfg80211_attach(struct brcmf_pub *drvr,
 	wiphy = wiphy_new(ops, sizeof(struct brcmf_cfg80211_info));
 	if (!wiphy) {
 		brcmf_err("Could not allocate wiphy device\n");
-		return NULL;
+		goto ops_out;
 	}
 	memcpy(wiphy->perm_addr, drvr->mac, ETH_ALEN);
 	set_wiphy_dev(wiphy, busdev);
@@ -7007,6 +7012,7 @@ priv_out:
 	ifp->vif = NULL;
 wiphy_out:
 	brcmf_free_wiphy(wiphy);
+ops_out:
 	kfree(ops);
 	return NULL;
 }
 
@@ -70,10 +70,10 @@
 #define WSPI_MAX_CHUNK_SIZE	4092
 
 /*
- * wl18xx driver aggregation buffer size is (13 * PAGE_SIZE) compared to
- * (4 * PAGE_SIZE) for wl12xx, so use the larger buffer needed for wl18xx
+ * wl18xx driver aggregation buffer size is (13 * 4K) compared to
+ * (4 * 4K) for wl12xx, so use the larger buffer needed for wl18xx
 */
-#define SPI_AGGR_BUFFER_SIZE (13 * PAGE_SIZE)
+#define SPI_AGGR_BUFFER_SIZE (13 * SZ_4K)
 
 /* Maximum number of SPI write chunks */
 #define WSPI_MAX_NUM_OF_CHUNKS \
 
@@ -281,6 +281,7 @@ static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 {
 	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	int notify;
+	int err = 0;
 
 	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
@@ -295,8 +296,10 @@ static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 		struct xen_netif_rx_request *req;
 
 		skb = xennet_alloc_one_rx_buffer(queue);
-		if (!skb)
+		if (!skb) {
+			err = -ENOMEM;
 			break;
+		}
 
 		id = xennet_rxidx(req_prod);
 
@@ -320,8 +323,13 @@ static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 
 	queue->rx.req_prod_pvt = req_prod;
 
-	/* Not enough requests? Try again later. */
-	if (req_prod - queue->rx.sring->req_prod < NET_RX_SLOTS_MIN) {
+	/* Try again later if there are not enough requests or skb allocation
+	 * failed.
+	 * Enough requests is quantified as the sum of newly created slots and
+	 * the unconsumed slots at the backend.
+	 */
+	if (req_prod - queue->rx.rsp_cons < NET_RX_SLOTS_MIN ||
+	    unlikely(err)) {
 		mod_timer(&queue->rx_refill_timer, jiffies + (HZ/10));
 		return;
 	}
 
@@ -459,7 +459,7 @@ int nfcmrvl_fw_dnld_init(struct nfcmrvl_private *priv)
 
 	INIT_WORK(&priv->fw_dnld.rx_work, fw_dnld_rx_work);
 	snprintf(name, sizeof(name), "%s_nfcmrvl_fw_dnld_rx_wq",
-		 dev_name(priv->dev));
+		 dev_name(&priv->ndev->nfc_dev->dev));
 	priv->fw_dnld.rx_wq = create_singlethread_workqueue(name);
 	if (!priv->fw_dnld.rx_wq)
 		return -ENOMEM;
@@ -496,6 +496,7 @@ int nfcmrvl_fw_dnld_start(struct nci_dev *ndev, const char *firmware_name)
 {
 	struct nfcmrvl_private *priv = nci_get_drvdata(ndev);
 	struct nfcmrvl_fw_dnld *fw_dnld = &priv->fw_dnld;
+	int res;
 
 	if (!priv->support_fw_dnld)
 		return -ENOTSUPP;
@@ -511,7 +512,9 @@ int nfcmrvl_fw_dnld_start(struct nci_dev *ndev, const char *firmware_name)
 	 */
 
 	/* Retrieve FW binary */
-	if (request_firmware(&fw_dnld->fw, firmware_name, priv->dev) < 0) {
+	res = request_firmware(&fw_dnld->fw, firmware_name,
+			       &ndev->nfc_dev->dev);
+	if (res < 0) {
 		nfc_err(priv->dev, "failed to retrieve FW %s", firmware_name);
 		return -ENOENT;
 	}
 
@@ -124,12 +124,13 @@ struct nfcmrvl_private *nfcmrvl_nci_register_dev(enum nfcmrvl_phy phy,
 	memcpy(&priv->config, pdata, sizeof(*pdata));

 	if (priv->config.reset_n_io) {
-		rc = devm_gpio_request_one(dev,
-					   priv->config.reset_n_io,
-					   GPIOF_OUT_INIT_LOW,
-					   "nfcmrvl_reset_n");
-		if (rc < 0)
+		rc = gpio_request_one(priv->config.reset_n_io,
+				      GPIOF_OUT_INIT_LOW,
+				      "nfcmrvl_reset_n");
+		if (rc < 0) {
+			priv->config.reset_n_io = 0;
 			nfc_err(dev, "failed to request reset_n io\n");
+		}
 	}

 	if (phy == NFCMRVL_PHY_SPI) {
@@ -154,32 +155,36 @@ struct nfcmrvl_private *nfcmrvl_nci_register_dev(enum nfcmrvl_phy phy,
 	if (!priv->ndev) {
 		nfc_err(dev, "nci_allocate_device failed\n");
 		rc = -ENOMEM;
-		goto error;
+		goto error_free_gpio;
 	}

-	nci_set_drvdata(priv->ndev, priv);
-
-	rc = nci_register_device(priv->ndev);
-	if (rc) {
-		nfc_err(dev, "nci_register_device failed %d\n", rc);
-		goto error_free_dev;
-	}
-
-	/* Ensure that controller is powered off */
-	nfcmrvl_chip_halt(priv);
-
 	rc = nfcmrvl_fw_dnld_init(priv);
 	if (rc) {
 		nfc_err(dev, "failed to initialize FW download %d\n", rc);
 		goto error_free_dev;
 	}

+	nci_set_drvdata(priv->ndev, priv);
+
+	rc = nci_register_device(priv->ndev);
+	if (rc) {
+		nfc_err(dev, "nci_register_device failed %d\n", rc);
+		goto error_fw_dnld_deinit;
+	}
+
+	/* Ensure that controller is powered off */
+	nfcmrvl_chip_halt(priv);
+
 	nfc_info(dev, "registered with nci successfully\n");
 	return priv;

+error_fw_dnld_deinit:
+	nfcmrvl_fw_dnld_deinit(priv);
 error_free_dev:
 	nci_free_device(priv->ndev);
-error:
+error_free_gpio:
+	if (priv->config.reset_n_io)
+		gpio_free(priv->config.reset_n_io);
 	kfree(priv);
 	return ERR_PTR(rc);
 }
@@ -195,7 +200,7 @@ void nfcmrvl_nci_unregister_dev(struct nfcmrvl_private *priv)
 	nfcmrvl_fw_dnld_deinit(priv);

 	if (priv->config.reset_n_io)
-		devm_gpio_free(priv->dev, priv->config.reset_n_io);
+		gpio_free(priv->config.reset_n_io);

 	nci_unregister_device(ndev);
 	nci_free_device(ndev);
@@ -109,6 +109,7 @@ static int nfcmrvl_nci_uart_open(struct nci_uart *nu)
 	struct nfcmrvl_private *priv;
 	struct nfcmrvl_platform_data *pdata = NULL;
 	struct nfcmrvl_platform_data config;
+	struct device *dev = nu->tty->dev;

 	/*
 	 * Platform data cannot be used here since usually it is already used
@@ -116,9 +117,8 @@ static int nfcmrvl_nci_uart_open(struct nci_uart *nu)
 	 * and check if DT entries were added.
 	 */

-	if (nu->tty->dev->parent && nu->tty->dev->parent->of_node)
-		if (nfcmrvl_uart_parse_dt(nu->tty->dev->parent->of_node,
-					  &config) == 0)
+	if (dev && dev->parent && dev->parent->of_node)
+		if (nfcmrvl_uart_parse_dt(dev->parent->of_node, &config) == 0)
 			pdata = &config;

 	if (!pdata) {
@@ -131,7 +131,7 @@ static int nfcmrvl_nci_uart_open(struct nci_uart *nu)
 	}

 	priv = nfcmrvl_nci_register_dev(NFCMRVL_PHY_UART, nu, &uart_ops,
-					nu->tty->dev, pdata);
+					dev, pdata);
 	if (IS_ERR(priv))
 		return PTR_ERR(priv);
@@ -1203,10 +1203,13 @@ static int btt_rw_page(struct block_device *bdev, sector_t sector,
 		struct page *page, bool is_write)
 {
 	struct btt *btt = bdev->bd_disk->private_data;
+	int rc;

-	btt_do_bvec(btt, NULL, page, PAGE_SIZE, 0, is_write, sector);
-	page_endio(page, is_write, 0);
-	return 0;
+	rc = btt_do_bvec(btt, NULL, page, PAGE_SIZE, 0, is_write, sector);
+	if (rc == 0)
+		page_endio(page, is_write, 0);
+
+	return rc;
 }
@@ -450,14 +450,15 @@ static void set_badblock(struct badblocks *bb, sector_t s, int num)
 static void __add_badblock_range(struct badblocks *bb, u64 ns_offset, u64 len)
 {
 	const unsigned int sector_size = 512;
-	sector_t start_sector;
+	sector_t start_sector, end_sector;
 	u64 num_sectors;
 	u32 rem;

 	start_sector = div_u64(ns_offset, sector_size);
-	num_sectors = div_u64_rem(len, sector_size, &rem);
+	end_sector = div_u64_rem(ns_offset + len, sector_size, &rem);
 	if (rem)
-		num_sectors++;
+		end_sector++;
+	num_sectors = end_sector - start_sector;

 	if (unlikely(num_sectors > (u64)INT_MAX)) {
 		u64 remaining = num_sectors;
@@ -88,7 +88,7 @@ enum nvme_rdma_queue_flags {

 struct nvme_rdma_queue {
 	struct nvme_rdma_qe	*rsp_ring;
-	u8			sig_count;
+	atomic_t		sig_count;
 	int			queue_size;
 	size_t			cmnd_capsule_len;
 	struct nvme_rdma_ctrl	*ctrl;
@@ -555,6 +555,7 @@ static int nvme_rdma_init_queue(struct nvme_rdma_ctrl *ctrl,
 	queue->cmnd_capsule_len = sizeof(struct nvme_command);

 	queue->queue_size = queue_size;
+	atomic_set(&queue->sig_count, 0);

 	queue->cm_id = rdma_create_id(&init_net, nvme_rdma_cm_handler, queue,
 			RDMA_PS_TCP, IB_QPT_RC);
@@ -1011,17 +1012,16 @@ static void nvme_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
 		nvme_rdma_wr_error(cq, wc, "SEND");
 }

-static inline int nvme_rdma_queue_sig_limit(struct nvme_rdma_queue *queue)
+/*
+ * We want to signal completion at least every queue depth/2. This returns the
+ * largest power of two that is not above half of (queue size + 1) to optimize
+ * (avoid divisions).
+ */
+static inline bool nvme_rdma_queue_sig_limit(struct nvme_rdma_queue *queue)
 {
-	int sig_limit;
-
-	/*
-	 * We signal completion every queue depth/2 and also handle the
-	 * degenerated case of a device with queue_depth=1, where we
-	 * would need to signal every message.
-	 */
-	sig_limit = max(queue->queue_size / 2, 1);
-	return (++queue->sig_count % sig_limit) == 0;
+	int limit = 1 << ilog2((queue->queue_size + 1) / 2);
+
+	return (atomic_inc_return(&queue->sig_count) & (limit - 1)) == 0;
 }

 static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
@@ -488,21 +488,24 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)

 	rval = device_add(&nvmem->dev);
 	if (rval)
-		goto out;
+		goto err_put_device;

 	if (config->compat) {
 		rval = nvmem_setup_compat(nvmem, config);
 		if (rval)
-			goto out;
+			goto err_device_del;
 	}

 	if (config->cells)
 		nvmem_add_cells(nvmem, config);

 	return nvmem;
-out:
-	ida_simple_remove(&nvmem_ida, nvmem->id);
-	kfree(nvmem);
+
+err_device_del:
+	device_del(&nvmem->dev);
+err_put_device:
+	put_device(&nvmem->dev);
+
 	return ERR_PTR(rval);
 }
 EXPORT_SYMBOL_GPL(nvmem_register);
@@ -225,6 +225,7 @@ ssize_t of_device_get_modalias(struct device *dev, char *str, ssize_t len)

 	return tsize;
 }
+EXPORT_SYMBOL_GPL(of_device_get_modalias);

 /**
  * of_device_uevent - Display OF related uevent information
@@ -287,3 +288,4 @@ int of_device_uevent_modalias(struct device *dev, struct kobj_uevent_env *env)

 	return 0;
 }
+EXPORT_SYMBOL_GPL(of_device_uevent_modalias);