Merge 4.9.77 into android-4.9-o
Changes in 4.9.77
    dm bufio: fix shrinker scans when (nr_to_scan < retain_target)
    mac80211: Add RX flag to indicate ICV stripped
    ath10k: rebuild crypto header in rx data frames
    KVM: Fix stack-out-of-bounds read in write_mmio
    can: gs_usb: fix return value of the "set_bittiming" callback
    IB/srpt: Disable RDMA access by the initiator
    MIPS: Validate PR_SET_FP_MODE prctl(2) requests against the ABI of the task
    MIPS: Factor out NT_PRFPREG regset access helpers
    MIPS: Guard against any partial write attempt with PTRACE_SETREGSET
    MIPS: Consistently handle buffer counter with PTRACE_SETREGSET
    MIPS: Fix an FCSR access API regression with NT_PRFPREG and MSA
    MIPS: Also verify sizeof `elf_fpreg_t' with PTRACE_SETREGSET
    MIPS: Disallow outsized PTRACE_SETREGSET NT_PRFPREG regset accesses
    kvm: vmx: Scrub hardware GPRs at VM-exit
    platform/x86: wmi: Call acpi_wmi_init() later
    x86/acpi: Handle SCI interrupts above legacy space gracefully
    ALSA: pcm: Remove incorrect snd_BUG_ON() usages
    ALSA: pcm: Add missing error checks in OSS emulation plugin builder
    ALSA: pcm: Abort properly at pending signal in OSS read/write loops
    ALSA: pcm: Allow aborting mutex lock at OSS read/write loops
    ALSA: aloop: Release cable upon open error path
    ALSA: aloop: Fix inconsistent format due to incomplete rule
    ALSA: aloop: Fix racy hw constraints adjustment
    x86/acpi: Reduce code duplication in mp_override_legacy_irq()
    zswap: don't param_set_charp while holding spinlock
    lan78xx: use skb_cow_head() to deal with cloned skbs
    sr9700: use skb_cow_head() to deal with cloned skbs
    smsc75xx: use skb_cow_head() to deal with cloned skbs
    cx82310_eth: use skb_cow_head() to deal with cloned skbs
    xhci: Fix ring leak in failure path of xhci_alloc_virt_device()
    8021q: fix a memory leak for VLAN 0 device
    ip6_tunnel: disable dst caching if tunnel is dual-stack
    net: core: fix module type in sock_diag_bind
    RDS: Heap OOB write in rds_message_alloc_sgs()
    RDS: null pointer dereference in rds_atomic_free_op
    sh_eth: fix TSU resource handling
    sh_eth: fix SH7757 GEther initialization
    net: stmmac: enable EEE in MII, GMII or RGMII only
    ipv6: fix possible mem leaks in ipv6_make_skb()
    ethtool: do not print warning for applications using legacy API
    mlxsw: spectrum_router: Fix NULL pointer deref
    net/sched: Fix update of lastuse in act modules implementing stats_update
    crypto: algapi - fix NULL dereference in crypto_remove_spawns()
    rbd: set max_segments to USHRT_MAX
    x86/microcode/intel: Extend BDW late-loading with a revision check
    KVM: x86: Add memory barrier on vmcs field lookup
    drm/vmwgfx: Potential off by one in vmw_view_add()
    kaiser: Set _PAGE_NX only if supported
    iscsi-target: Make TASK_REASSIGN use proper se_cmd->cmd_kref
    target: Avoid early CMD_T_PRE_EXECUTE failures during ABORT_TASK
    bpf: move fixup_bpf_calls() function
    bpf: refactor fixup_bpf_calls()
    bpf: prevent out-of-bounds speculation
    bpf, array: fix overflow in max_entries and undefined behavior in index_mask
    USB: serial: cp210x: add IDs for LifeScan OneTouch Verio IQ
    USB: serial: cp210x: add new device ID ELV ALC 8xxx
    usb: misc: usb3503: make sure reset is low for at least 100us
    USB: fix usbmon BUG trigger
    usbip: remove kernel addresses from usb device and urb debug msgs
    usbip: fix vudc_rx: harden CMD_SUBMIT path to handle malicious input
    usbip: vudc_tx: fix v_send_ret_submit() vulnerability to null xfer buffer
    staging: android: ashmem: fix a race condition in ASHMEM_SET_SIZE ioctl
    Bluetooth: Prevent stack info leak from the EFS element.
    uas: ignore UAS for Norelsys NS1068(X) chips
    e1000e: Fix e1000_check_for_copper_link_ich8lan return value.
    x86/Documentation: Add PTI description
    x86/cpu: Factor out application of forced CPU caps
    x86/cpufeatures: Make CPU bugs sticky
    x86/cpufeatures: Add X86_BUG_CPU_INSECURE
    x86/pti: Rename BUG_CPU_INSECURE to BUG_CPU_MELTDOWN
    x86/cpufeatures: Add X86_BUG_SPECTRE_V[12]
    x86/cpu: Merge bugs.c and bugs_64.c
    sysfs/cpu: Add vulnerability folder
    x86/cpu: Implement CPU vulnerabilites sysfs functions
    x86/cpu/AMD: Make LFENCE a serializing instruction
    x86/cpu/AMD: Use LFENCE_RDTSC in preference to MFENCE_RDTSC
    sysfs/cpu: Fix typos in vulnerability documentation
    x86/alternatives: Fix optimize_nops() checking
    x86/alternatives: Add missing '\n' at end of ALTERNATIVE inline asm
    x86/mm/32: Move setup_clear_cpu_cap(X86_FEATURE_PCID) earlier
    objtool, modules: Discard objtool annotation sections for modules
    objtool: Detect jumps to retpoline thunks
    objtool: Allow alternatives to be ignored
    x86/asm: Use register variable to get stack pointer value
    x86/retpoline: Add initial retpoline support
    x86/spectre: Add boot time option to select Spectre v2 mitigation
    x86/retpoline/crypto: Convert crypto assembler indirect jumps
    x86/retpoline/entry: Convert entry assembler indirect jumps
    x86/retpoline/ftrace: Convert ftrace assembler indirect jumps
    x86/retpoline/hyperv: Convert assembler indirect jumps
    x86/retpoline/xen: Convert Xen hypercall indirect jumps
    x86/retpoline/checksum32: Convert assembler indirect jumps
    x86/retpoline/irq32: Convert assembler indirect jumps
    x86/retpoline: Fill return stack buffer on vmexit
    selftests/x86: Add test_vsyscall
    x86/retpoline: Remove compile time warning
    objtool: Fix retpoline support for pre-ORC objtool
    x86/pti/efi: broken conversion from efi to kernel page table
    Linux 4.9.77

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -350,3 +350,19 @@ Contact: Linux ARM Kernel Mailing list <linux-arm-kernel@lists.infradead.org>
 Description:    AArch64 CPU registers
                 'identification' directory exposes the CPU ID registers for
                 identifying model and revision of the CPU.

 What:           /sys/devices/system/cpu/vulnerabilities
                 /sys/devices/system/cpu/vulnerabilities/meltdown
                 /sys/devices/system/cpu/vulnerabilities/spectre_v1
                 /sys/devices/system/cpu/vulnerabilities/spectre_v2
 Date:           January 2018
 Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
 Description:    Information about CPU vulnerabilities

                 The files are named after the code names of CPU
                 vulnerabilities. The output of those files reflects the
                 state of the CPUs in the system. Possible output values:

                 "Not affected"    CPU is not affected by the vulnerability
                 "Vulnerable"      CPU is affected and no mitigation in effect
                 "Mitigation: $M"  CPU is affected and mitigation $M is in effect
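As a usage illustration (a minimal sketch, not part of the patch): the files above are ordinary sysfs attributes, so reading them from userspace is just a matter of opening and printing them. The program below assumes only the three paths documented in this ABI entry; on kernels older than this release the files simply do not exist and are skipped.

    /* Dump the CPU vulnerability status files documented above. */
    #include <stdio.h>

    int main(void)
    {
        static const char *files[] = {
            "/sys/devices/system/cpu/vulnerabilities/meltdown",
            "/sys/devices/system/cpu/vulnerabilities/spectre_v1",
            "/sys/devices/system/cpu/vulnerabilities/spectre_v2",
        };
        char line[256];

        for (unsigned int i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
            FILE *f = fopen(files[i], "r");

            if (!f) {
                perror(files[i]);   /* older kernel: attribute not exposed */
                continue;
            }
            if (fgets(line, sizeof(line), f))
                printf("%s: %s", files[i], line);
            fclose(f);
        }
        return 0;
    }

Each line printed is one of the three status strings listed above ("Not affected", "Vulnerable", or "Mitigation: $M").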
@@ -2697,6 +2697,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
    nosmt           [KNL,S390] Disable symmetric multithreading (SMT).
                    Equivalent to smt=1.

    nospectre_v2    [X86] Disable all mitigations for the Spectre variant 2
                    (indirect branch prediction) vulnerability. System may
                    allow data leaks with this option, which is equivalent
                    to spectre_v2=off.

    noxsave         [BUGS=X86] Disables x86 extended register state save
                    and restore using xsave. The kernel will fallback to
                    enabling legacy floating-point and sse state.
@@ -2769,8 +2774,6 @@ bytes respectively. Such letter suffixes can also be entirely omitted.

    nojitter        [IA-64] Disables jitter checking for ITC timers.

    nopti           [X86-64] Disable KAISER isolation of kernel from user.

    no-kvmclock     [X86,KVM] Disable paravirtualized KVM clock driver

    no-kvmapf       [X86,KVM] Disable paravirtualized asynchronous page
@@ -3333,11 +3336,20 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
    pt.             [PARIDE]
                    See Documentation/blockdev/paride.txt.

    pti=            [X86_64]
                    Control KAISER user/kernel address space isolation:
                    on - enable
                    off - disable
                    auto - default setting
    pti=            [X86_64] Control Page Table Isolation of user and
                    kernel address spaces.  Disabling this feature
                    removes hardening, but improves performance of
                    system calls and interrupts.

                    on   - unconditionally enable
                    off  - unconditionally disable
                    auto - kernel detects whether your CPU model is
                           vulnerable to issues that PTI mitigates

                    Not specifying this option is equivalent to pti=auto.

    nopti           [X86_64]
                    Equivalent to pti=off

    pty.legacy_count=
                    [KNL] Number of legacy pty's. Overwrites compiled-in
@@ -3943,6 +3955,29 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
    sonypi.*=       [HW] Sony Programmable I/O Control Device driver
                    See Documentation/laptops/sonypi.txt

    spectre_v2=     [X86] Control mitigation of Spectre variant 2
                    (indirect branch speculation) vulnerability.

                    on   - unconditionally enable
                    off  - unconditionally disable
                    auto - kernel detects whether your CPU model is
                           vulnerable

                    Selecting 'on' will, and 'auto' may, choose a
                    mitigation method at run time according to the
                    CPU, the available microcode, the setting of the
                    CONFIG_RETPOLINE configuration option, and the
                    compiler with which the kernel was built.

                    Specific mitigations can also be selected manually:

                    retpoline         - replace indirect branches
                    retpoline,generic - google's original retpoline
                    retpoline,amd     - AMD-specific minimal thunk

                    Not specifying this option is equivalent to
                    spectre_v2=auto.

    spia_io_base=   [HW,MTD]
    spia_fio_base=
    spia_pedr=
Documentation/x86/pti.txt (new file, 186 lines)
@@ -0,0 +1,186 @@
Overview
========

Page Table Isolation (pti, previously known as KAISER[1]) is a
countermeasure against attacks on the shared user/kernel address
space such as the "Meltdown" approach[2].

To mitigate this class of attacks, we create an independent set of
page tables for use only when running userspace applications.  When
the kernel is entered via syscalls, interrupts or exceptions, the
page tables are switched to the full "kernel" copy.  When the system
switches back to user mode, the user copy is used again.

The userspace page tables contain only a minimal amount of kernel
data: only what is needed to enter/exit the kernel such as the
entry/exit functions themselves and the interrupt descriptor table
(IDT).  There are a few strictly unnecessary things that get mapped
such as the first C function when entering an interrupt (see
comments in pti.c).

This approach helps to ensure that side-channel attacks leveraging
the paging structures do not function when PTI is enabled.  It can be
enabled by setting CONFIG_PAGE_TABLE_ISOLATION=y at compile time.
Once enabled at compile-time, it can be disabled at boot with the
'nopti' or 'pti=' kernel parameters (see kernel-parameters.txt).

Page Table Management
=====================

When PTI is enabled, the kernel manages two sets of page tables.
The first set is very similar to the single set which is present in
kernels without PTI.  This includes a complete mapping of userspace
that the kernel can use for things like copy_to_user().

Although _complete_, the user portion of the kernel page tables is
crippled by setting the NX bit in the top level.  This ensures
that any missed kernel->user CR3 switch will immediately crash
userspace upon executing its first instruction.

The userspace page tables map only the kernel data needed to enter
and exit the kernel.  This data is entirely contained in the 'struct
cpu_entry_area' structure which is placed in the fixmap which gives
each CPU's copy of the area a compile-time-fixed virtual address.

For new userspace mappings, the kernel makes the entries in its
page tables like normal.  The only difference is when the kernel
makes entries in the top (PGD) level.  In addition to setting the
entry in the main kernel PGD, a copy of the entry is made in the
userspace page tables' PGD.

This sharing at the PGD level also inherently shares all the lower
layers of the page tables.  This leaves a single, shared set of
userspace page tables to manage.  One PTE to lock, one set of
accessed bits, dirty bits, etc...
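To make the PGD mirroring concrete, here is a small user-space model of the trick (purely illustrative; the names and the toy entry value are not the kernel's). It relies only on facts stated elsewhere in this merge: the PGD pair is a single order-1, 8k-aligned allocation, the two 4k halves differ only in bit 12 of the pointer, and entries for userspace addresses are written into both halves.

    /*
     * Illustrative model of the PTI PGD pair, not kernel code.
     * One order-1 (8k, 8k-aligned) allocation holds both the kernel
     * half and the user half; they differ only in bit 12 of the
     * pointer, and user-address entries are mirrored into both.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef uint64_t pgd_t;   /* 512 x 8-byte entries fill one 4k half */

    static pgd_t *user_half(pgd_t *kernel_pgd)
    {
        /* flip bit 12: 0x1000 is the size of one 4k half */
        return (pgd_t *)((uintptr_t)kernel_pgd ^ 0x1000);
    }

    int main(void)
    {
        pgd_t *pgd = aligned_alloc(8192, 8192);  /* order-1: 8k, 8k-aligned */
        size_t idx = 17;                         /* some user-address PGD slot */

        if (!pgd)
            return 1;

        /* set the entry in the "kernel" copy and mirror it into the
         * "user" copy, as described for new userspace mappings above */
        pgd[idx] = 0xdeadbeef;
        user_half(pgd)[idx] = pgd[idx];

        printf("kernel half %p, user half %p, entry %#llx\n",
               (void *)pgd, (void *)user_half(pgd),
               (unsigned long long)user_half(pgd)[idx]);
        free(pgd);
        return 0;
    }

In the real kernel the same layout is set up via PGD_ALLOCATION_ORDER (see the pgalloc.h hunk later in this merge), and the actual switch between the two halves is the CR3 write at kernel entry/exit described in the Overhead section below.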
Overhead
========

Protection against side-channel attacks is important.  But,
this protection comes at a cost:

1. Increased Memory Use
  a. Each process now needs an order-1 PGD instead of order-0.
     (Consumes an additional 4k per process).
  b. The 'cpu_entry_area' structure must be 2MB in size and 2MB
     aligned so that it can be mapped by setting a single PMD
     entry.  This consumes nearly 2MB of RAM once the kernel
     is decompressed, but no space in the kernel image itself.

2. Runtime Cost
  a. CR3 manipulation to switch between the page table copies
     must be done at interrupt, syscall, and exception entry
     and exit (it can be skipped when the kernel is interrupted,
     though.)  Moves to CR3 are on the order of a hundred
     cycles, and are required at every entry and exit.
  b. A "trampoline" must be used for SYSCALL entry.  This
     trampoline depends on a smaller set of resources than the
     non-PTI SYSCALL entry code, so requires mapping fewer
     things into the userspace page tables.  The downside is
     that stacks must be switched at entry time.
  c. Global pages are disabled for all kernel structures not
     mapped into both kernel and userspace page tables.  This
     feature of the MMU allows different processes to share TLB
     entries mapping the kernel.  Losing the feature means more
     TLB misses after a context switch.  The actual loss of
     performance is very small, however, never exceeding 1%.
  d. Process Context IDentifiers (PCID) is a CPU feature that
     allows us to skip flushing the entire TLB when switching page
     tables by setting a special bit in CR3 when the page tables
     are changed.  This makes switching the page tables (at context
     switch, or kernel entry/exit) cheaper.  But, on systems with
     PCID support, the context switch code must flush both the user
     and kernel entries out of the TLB.  The user PCID TLB flush is
     deferred until the exit to userspace, minimizing the cost.
     See intel.com/sdm for the gory PCID/INVPCID details.
  e. The userspace page tables must be populated for each new
     process.  Even without PTI, the shared kernel mappings
     are created by copying top-level (PGD) entries into each
     new process.  But, with PTI, there are now *two* kernel
     mappings: one in the kernel page tables that maps everything
     and one for the entry/exit structures.  At fork(), we need to
     copy both.
  f. In addition to the fork()-time copying, there must also
     be an update to the userspace PGD any time a set_pgd() is done
     on a PGD used to map userspace.  This ensures that the kernel
     and userspace copies always map the same userspace
     memory.
  g. On systems without PCID support, each CR3 write flushes
     the entire TLB.  That means that each syscall, interrupt
     or exception flushes the TLB.
  h. INVPCID is a TLB-flushing instruction which allows flushing
     of TLB entries for non-current PCIDs.  Some systems support
     PCIDs, but do not support INVPCID.  On these systems, addresses
     can only be flushed from the TLB for the current PCID.  When
     flushing a kernel address, we need to flush all PCIDs, so a
     single kernel address flush will require a TLB-flushing CR3
     write upon the next use of every PCID.

Possible Future Work
====================
1. We can be more careful about not actually writing to CR3
   unless its value is actually changed.
2. Allow PTI to be enabled/disabled at runtime in addition to the
   boot-time switching.

Testing
========

To test stability of PTI, the following test procedure is recommended,
ideally doing all of these in parallel:

1. Set CONFIG_DEBUG_ENTRY=y
2. Run several copies of all of the tools/testing/selftests/x86/ tests
   (excluding MPX and protection_keys) in a loop on multiple CPUs for
   several minutes.  These tests frequently uncover corner cases in the
   kernel entry code.  In general, old kernels might cause these tests
   themselves to crash, but they should never crash the kernel.
3. Run the 'perf' tool in a mode (top or record) that generates many
   frequent performance monitoring non-maskable interrupts (see "NMI"
   in /proc/interrupts).  This exercises the NMI entry/exit code which
   is known to trigger bugs in code paths that did not expect to be
   interrupted, including nested NMIs.  Using "-c" boosts the rate of
   NMIs, and using two -c with separate counters encourages nested NMIs
   and less deterministic behavior.

       while true; do perf record -c 10000 -e instructions,cycles -a sleep 10; done

4. Launch a KVM virtual machine.
5. Run 32-bit binaries on systems supporting the SYSCALL instruction.
   This has been a lightly-tested code path and needs extra scrutiny.

Debugging
=========

Bugs in PTI cause a few different signatures of crashes
that are worth noting here.

 * Failures of the selftests/x86 code.  Usually a bug in one of the
   more obscure corners of entry_64.S
 * Crashes in early boot, especially around CPU bringup.  Bugs
   in the trampoline code or mappings cause these.
 * Crashes at the first interrupt.  Caused by bugs in entry_64.S,
   like screwing up a page table switch.  Also caused by
   incorrectly mapping the IRQ handler entry code.
 * Crashes at the first NMI.  The NMI code is separate from main
   interrupt handlers and can have bugs that do not affect
   normal interrupts.  Also caused by incorrectly mapping NMI
   code.  NMIs that interrupt the entry code must be very
   careful and can be the cause of crashes that show up when
   running perf.
 * Kernel crashes at the first exit to userspace.  entry_64.S
   bugs, or failing to map some of the exit code.
 * Crashes at first interrupt that interrupts userspace.  The paths
   in entry_64.S that return to userspace are sometimes separate
   from the ones that return to the kernel.
 * Double faults: overflowing the kernel stack because of page
   faults upon page faults.  Caused by touching non-pti-mapped
   data in the entry code, or forgetting to switch to kernel
   CR3 before calling into C functions which are not pti-mapped.
 * Userspace segfaults early in boot, sometimes manifesting
   as mount(8) failing to mount the rootfs.  These have
   tended to be TLB invalidation issues.  Usually invalidating
   the wrong PCID, or otherwise missing an invalidation.

1. https://gruss.cc/files/kaiser.pdf
2. https://meltdownattack.com/meltdown.pdf
Makefile

@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 9
 SUBLEVEL = 76
 SUBLEVEL = 77
 EXTRAVERSION =
 NAME = Roaring Lionus

@@ -112,7 +112,7 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||
}
|
||||
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_READ, len, run->mmio.phys_addr,
|
||||
data);
|
||||
&data);
|
||||
data = vcpu_data_host_to_guest(vcpu, data, len);
|
||||
vcpu_set_reg(vcpu, vcpu->arch.mmio_decode.rt, data);
|
||||
}
|
||||
@@ -182,14 +182,14 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||
data = vcpu_data_guest_to_host(vcpu, vcpu_get_reg(vcpu, rt),
|
||||
len);
|
||||
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, len, fault_ipa, data);
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, len, fault_ipa, &data);
|
||||
kvm_mmio_write_buf(data_buf, len, data);
|
||||
|
||||
ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, fault_ipa, len,
|
||||
data_buf);
|
||||
} else {
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, len,
|
||||
fault_ipa, 0);
|
||||
fault_ipa, NULL);
|
||||
|
||||
ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, fault_ipa, len,
|
||||
data_buf);
|
||||
|
||||
@@ -683,6 +683,18 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
|
||||
struct task_struct *t;
|
||||
int max_users;
|
||||
|
||||
/* If nothing to change, return right away, successfully. */
|
||||
if (value == mips_get_process_fp_mode(task))
|
||||
return 0;
|
||||
|
||||
/* Only accept a mode change if 64-bit FP enabled for o32. */
|
||||
if (!IS_ENABLED(CONFIG_MIPS_O32_FP64_SUPPORT))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
/* And only for o32 tasks. */
|
||||
if (IS_ENABLED(CONFIG_64BIT) && !test_thread_flag(TIF_32BIT_REGS))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
/* Check the value is valid */
|
||||
if (value & ~known_bits)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
@@ -439,25 +439,38 @@ static int gpr64_set(struct task_struct *target,
|
||||
|
||||
#endif /* CONFIG_64BIT */
|
||||
|
||||
static int fpr_get(struct task_struct *target,
|
||||
const struct user_regset *regset,
|
||||
unsigned int pos, unsigned int count,
|
||||
void *kbuf, void __user *ubuf)
|
||||
/*
|
||||
* Copy the floating-point context to the supplied NT_PRFPREG buffer,
|
||||
* !CONFIG_CPU_HAS_MSA variant. FP context's general register slots
|
||||
* correspond 1:1 to buffer slots. Only general registers are copied.
|
||||
*/
|
||||
static int fpr_get_fpa(struct task_struct *target,
|
||||
unsigned int *pos, unsigned int *count,
|
||||
void **kbuf, void __user **ubuf)
|
||||
{
|
||||
unsigned i;
|
||||
int err;
|
||||
return user_regset_copyout(pos, count, kbuf, ubuf,
|
||||
&target->thread.fpu,
|
||||
0, NUM_FPU_REGS * sizeof(elf_fpreg_t));
|
||||
}
|
||||
|
||||
/*
|
||||
* Copy the floating-point context to the supplied NT_PRFPREG buffer,
|
||||
* CONFIG_CPU_HAS_MSA variant. Only lower 64 bits of FP context's
|
||||
* general register slots are copied to buffer slots. Only general
|
||||
* registers are copied.
|
||||
*/
|
||||
static int fpr_get_msa(struct task_struct *target,
|
||||
unsigned int *pos, unsigned int *count,
|
||||
void **kbuf, void __user **ubuf)
|
||||
{
|
||||
unsigned int i;
|
||||
u64 fpr_val;
|
||||
int err;
|
||||
|
||||
/* XXX fcr31 */
|
||||
|
||||
if (sizeof(target->thread.fpu.fpr[i]) == sizeof(elf_fpreg_t))
|
||||
return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
|
||||
&target->thread.fpu,
|
||||
0, sizeof(elf_fpregset_t));
|
||||
|
||||
BUILD_BUG_ON(sizeof(fpr_val) != sizeof(elf_fpreg_t));
|
||||
for (i = 0; i < NUM_FPU_REGS; i++) {
|
||||
fpr_val = get_fpr64(&target->thread.fpu.fpr[i], 0);
|
||||
err = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
|
||||
err = user_regset_copyout(pos, count, kbuf, ubuf,
|
||||
&fpr_val, i * sizeof(elf_fpreg_t),
|
||||
(i + 1) * sizeof(elf_fpreg_t));
|
||||
if (err)
|
||||
@@ -467,27 +480,64 @@ static int fpr_get(struct task_struct *target,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int fpr_set(struct task_struct *target,
|
||||
/*
|
||||
* Copy the floating-point context to the supplied NT_PRFPREG buffer.
|
||||
* Choose the appropriate helper for general registers, and then copy
|
||||
* the FCSR register separately.
|
||||
*/
|
||||
static int fpr_get(struct task_struct *target,
|
||||
const struct user_regset *regset,
|
||||
unsigned int pos, unsigned int count,
|
||||
const void *kbuf, const void __user *ubuf)
|
||||
void *kbuf, void __user *ubuf)
|
||||
{
|
||||
unsigned i;
|
||||
const int fcr31_pos = NUM_FPU_REGS * sizeof(elf_fpreg_t);
|
||||
int err;
|
||||
|
||||
if (sizeof(target->thread.fpu.fpr[0]) == sizeof(elf_fpreg_t))
|
||||
err = fpr_get_fpa(target, &pos, &count, &kbuf, &ubuf);
|
||||
else
|
||||
err = fpr_get_msa(target, &pos, &count, &kbuf, &ubuf);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
err = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
|
||||
&target->thread.fpu.fcr31,
|
||||
fcr31_pos, fcr31_pos + sizeof(u32));
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
/*
|
||||
* Copy the supplied NT_PRFPREG buffer to the floating-point context,
|
||||
* !CONFIG_CPU_HAS_MSA variant. Buffer slots correspond 1:1 to FP
|
||||
* context's general register slots. Only general registers are copied.
|
||||
*/
|
||||
static int fpr_set_fpa(struct task_struct *target,
|
||||
unsigned int *pos, unsigned int *count,
|
||||
const void **kbuf, const void __user **ubuf)
|
||||
{
|
||||
return user_regset_copyin(pos, count, kbuf, ubuf,
|
||||
&target->thread.fpu,
|
||||
0, NUM_FPU_REGS * sizeof(elf_fpreg_t));
|
||||
}
|
||||
|
||||
/*
|
||||
* Copy the supplied NT_PRFPREG buffer to the floating-point context,
|
||||
* CONFIG_CPU_HAS_MSA variant. Buffer slots are copied to lower 64
|
||||
* bits only of FP context's general register slots. Only general
|
||||
* registers are copied.
|
||||
*/
|
||||
static int fpr_set_msa(struct task_struct *target,
|
||||
unsigned int *pos, unsigned int *count,
|
||||
const void **kbuf, const void __user **ubuf)
|
||||
{
|
||||
unsigned int i;
|
||||
u64 fpr_val;
|
||||
|
||||
/* XXX fcr31 */
|
||||
|
||||
init_fp_ctx(target);
|
||||
|
||||
if (sizeof(target->thread.fpu.fpr[i]) == sizeof(elf_fpreg_t))
|
||||
return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
|
||||
&target->thread.fpu,
|
||||
0, sizeof(elf_fpregset_t));
|
||||
int err;
|
||||
|
||||
BUILD_BUG_ON(sizeof(fpr_val) != sizeof(elf_fpreg_t));
|
||||
for (i = 0; i < NUM_FPU_REGS && count >= sizeof(elf_fpreg_t); i++) {
|
||||
err = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
|
||||
for (i = 0; i < NUM_FPU_REGS && *count > 0; i++) {
|
||||
err = user_regset_copyin(pos, count, kbuf, ubuf,
|
||||
&fpr_val, i * sizeof(elf_fpreg_t),
|
||||
(i + 1) * sizeof(elf_fpreg_t));
|
||||
if (err)
|
||||
@@ -498,6 +548,53 @@ static int fpr_set(struct task_struct *target,
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Copy the supplied NT_PRFPREG buffer to the floating-point context.
|
||||
* Choose the appropriate helper for general registers, and then copy
|
||||
* the FCSR register separately.
|
||||
*
|
||||
* We optimize for the case where `count % sizeof(elf_fpreg_t) == 0',
|
||||
* which is supposed to have been guaranteed by the kernel before
|
||||
* calling us, e.g. in `ptrace_regset'. We enforce that requirement,
|
||||
* so that we can safely avoid preinitializing temporaries for
|
||||
* partial register writes.
|
||||
*/
|
||||
static int fpr_set(struct task_struct *target,
|
||||
const struct user_regset *regset,
|
||||
unsigned int pos, unsigned int count,
|
||||
const void *kbuf, const void __user *ubuf)
|
||||
{
|
||||
const int fcr31_pos = NUM_FPU_REGS * sizeof(elf_fpreg_t);
|
||||
u32 fcr31;
|
||||
int err;
|
||||
|
||||
BUG_ON(count % sizeof(elf_fpreg_t));
|
||||
|
||||
if (pos + count > sizeof(elf_fpregset_t))
|
||||
return -EIO;
|
||||
|
||||
init_fp_ctx(target);
|
||||
|
||||
if (sizeof(target->thread.fpu.fpr[0]) == sizeof(elf_fpreg_t))
|
||||
err = fpr_set_fpa(target, &pos, &count, &kbuf, &ubuf);
|
||||
else
|
||||
err = fpr_set_msa(target, &pos, &count, &kbuf, &ubuf);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
if (count > 0) {
|
||||
err = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
|
||||
&fcr31,
|
||||
fcr31_pos, fcr31_pos + sizeof(u32));
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
ptrace_setfcr31(target, fcr31);
|
||||
}
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
enum mips_regset {
|
||||
REGSET_GPR,
|
||||
REGSET_FPR,
|
||||
|
||||
@@ -64,6 +64,7 @@ config X86
|
||||
select GENERIC_CLOCKEVENTS_MIN_ADJUST
|
||||
select GENERIC_CMOS_UPDATE
|
||||
select GENERIC_CPU_AUTOPROBE
|
||||
select GENERIC_CPU_VULNERABILITIES
|
||||
select GENERIC_EARLY_IOREMAP
|
||||
select GENERIC_FIND_FIRST_BIT
|
||||
select GENERIC_IOMAP
|
||||
@@ -407,6 +408,19 @@ config GOLDFISH
|
||||
def_bool y
|
||||
depends on X86_GOLDFISH
|
||||
|
||||
config RETPOLINE
|
||||
bool "Avoid speculative indirect branches in kernel"
|
||||
default y
|
||||
---help---
|
||||
Compile kernel with the retpoline compiler options to guard against
|
||||
kernel-to-user data leaks by avoiding speculative indirect
|
||||
branches. Requires a compiler with -mindirect-branch=thunk-extern
|
||||
support for full protection. The kernel may run slower.
|
||||
|
||||
Without compiler support, at least indirect branches in assembler
|
||||
code are eliminated. Since this includes the syscall entry path,
|
||||
it is not entirely pointless.
|
||||
|
||||
if X86_32
|
||||
config X86_EXTENDED_PLATFORM
|
||||
bool "Support for extended (non-PC) x86 platforms"
|
||||
|
||||
@@ -184,6 +184,14 @@ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
|
||||
KBUILD_CFLAGS += $(mflags-y)
|
||||
KBUILD_AFLAGS += $(mflags-y)
|
||||
|
||||
# Avoid indirect branches in kernel to deal with Spectre
|
||||
ifdef CONFIG_RETPOLINE
|
||||
RETPOLINE_CFLAGS += $(call cc-option,-mindirect-branch=thunk-extern -mindirect-branch-register)
|
||||
ifneq ($(RETPOLINE_CFLAGS),)
|
||||
KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) -DRETPOLINE
|
||||
endif
|
||||
endif
|
||||
|
||||
archscripts: scripts_basic
|
||||
$(Q)$(MAKE) $(build)=arch/x86/tools relocs
|
||||
|
||||
|
||||
@@ -32,6 +32,7 @@
|
||||
#include <linux/linkage.h>
|
||||
#include <asm/inst.h>
|
||||
#include <asm/frame.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
/*
|
||||
* The following macros are used to move an (un)aligned 16 byte value to/from
|
||||
@@ -2734,7 +2735,7 @@ ENTRY(aesni_xts_crypt8)
|
||||
pxor INC, STATE4
|
||||
movdqu IV, 0x30(OUTP)
|
||||
|
||||
call *%r11
|
||||
CALL_NOSPEC %r11
|
||||
|
||||
movdqu 0x00(OUTP), INC
|
||||
pxor INC, STATE1
|
||||
@@ -2779,7 +2780,7 @@ ENTRY(aesni_xts_crypt8)
|
||||
_aesni_gf128mul_x_ble()
|
||||
movups IV, (IVP)
|
||||
|
||||
call *%r11
|
||||
CALL_NOSPEC %r11
|
||||
|
||||
movdqu 0x40(OUTP), INC
|
||||
pxor INC, STATE1
|
||||
|
||||
@@ -17,6 +17,7 @@
|
||||
|
||||
#include <linux/linkage.h>
|
||||
#include <asm/frame.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
#define CAMELLIA_TABLE_BYTE_LEN 272
|
||||
|
||||
@@ -1224,7 +1225,7 @@ camellia_xts_crypt_16way:
|
||||
vpxor 14 * 16(%rax), %xmm15, %xmm14;
|
||||
vpxor 15 * 16(%rax), %xmm15, %xmm15;
|
||||
|
||||
call *%r9;
|
||||
CALL_NOSPEC %r9;
|
||||
|
||||
addq $(16 * 16), %rsp;
|
||||
|
||||
|
||||
@@ -12,6 +12,7 @@
|
||||
|
||||
#include <linux/linkage.h>
|
||||
#include <asm/frame.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
#define CAMELLIA_TABLE_BYTE_LEN 272
|
||||
|
||||
@@ -1337,7 +1338,7 @@ camellia_xts_crypt_32way:
|
||||
vpxor 14 * 32(%rax), %ymm15, %ymm14;
|
||||
vpxor 15 * 32(%rax), %ymm15, %ymm15;
|
||||
|
||||
call *%r9;
|
||||
CALL_NOSPEC %r9;
|
||||
|
||||
addq $(16 * 32), %rsp;
|
||||
|
||||
|
||||
@@ -45,6 +45,7 @@
|
||||
|
||||
#include <asm/inst.h>
|
||||
#include <linux/linkage.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
## ISCSI CRC 32 Implementation with crc32 and pclmulqdq Instruction
|
||||
|
||||
@@ -172,7 +173,7 @@ continue_block:
|
||||
movzxw (bufp, %rax, 2), len
|
||||
lea crc_array(%rip), bufp
|
||||
lea (bufp, len, 1), bufp
|
||||
jmp *bufp
|
||||
JMP_NOSPEC bufp
|
||||
|
||||
################################################################
|
||||
## 2a) PROCESS FULL BLOCKS:
|
||||
|
||||
@@ -45,6 +45,7 @@
|
||||
#include <asm/asm.h>
|
||||
#include <asm/smap.h>
|
||||
#include <asm/export.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
.section .entry.text, "ax"
|
||||
|
||||
@@ -260,7 +261,7 @@ ENTRY(ret_from_fork)
|
||||
|
||||
/* kernel thread */
|
||||
1: movl %edi, %eax
|
||||
call *%ebx
|
||||
CALL_NOSPEC %ebx
|
||||
/*
|
||||
* A kernel thread is allowed to return here after successfully
|
||||
* calling do_execve(). Exit to userspace to complete the execve()
|
||||
@@ -984,7 +985,8 @@ trace:
|
||||
movl 0x4(%ebp), %edx
|
||||
subl $MCOUNT_INSN_SIZE, %eax
|
||||
|
||||
call *ftrace_trace_function
|
||||
movl ftrace_trace_function, %ecx
|
||||
CALL_NOSPEC %ecx
|
||||
|
||||
popl %edx
|
||||
popl %ecx
|
||||
@@ -1020,7 +1022,7 @@ return_to_handler:
|
||||
movl %eax, %ecx
|
||||
popl %edx
|
||||
popl %eax
|
||||
jmp *%ecx
|
||||
JMP_NOSPEC %ecx
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_TRACING
|
||||
@@ -1062,7 +1064,7 @@ error_code:
|
||||
movl %ecx, %es
|
||||
TRACE_IRQS_OFF
|
||||
movl %esp, %eax # pt_regs pointer
|
||||
call *%edi
|
||||
CALL_NOSPEC %edi
|
||||
jmp ret_from_exception
|
||||
END(page_fault)
|
||||
|
||||
|
||||
@@ -37,6 +37,7 @@
|
||||
#include <asm/pgtable_types.h>
|
||||
#include <asm/export.h>
|
||||
#include <asm/kaiser.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
#include <linux/err.h>
|
||||
|
||||
/* Avoid __ASSEMBLER__'ifying <linux/audit.h> just for this. */
|
||||
@@ -208,7 +209,12 @@ entry_SYSCALL_64_fastpath:
|
||||
* It might end up jumping to the slow path. If it jumps, RAX
|
||||
* and all argument registers are clobbered.
|
||||
*/
|
||||
#ifdef CONFIG_RETPOLINE
|
||||
movq sys_call_table(, %rax, 8), %rax
|
||||
call __x86_indirect_thunk_rax
|
||||
#else
|
||||
call *sys_call_table(, %rax, 8)
|
||||
#endif
|
||||
.Lentry_SYSCALL_64_after_fastpath_call:
|
||||
|
||||
movq %rax, RAX(%rsp)
|
||||
@@ -380,7 +386,7 @@ ENTRY(stub_ptregs_64)
|
||||
jmp entry_SYSCALL64_slow_path
|
||||
|
||||
1:
|
||||
jmp *%rax /* Called from C */
|
||||
JMP_NOSPEC %rax /* Called from C */
|
||||
END(stub_ptregs_64)
|
||||
|
||||
.macro ptregs_stub func
|
||||
@@ -457,7 +463,7 @@ ENTRY(ret_from_fork)
|
||||
1:
|
||||
/* kernel thread */
|
||||
movq %r12, %rdi
|
||||
call *%rbx
|
||||
CALL_NOSPEC %rbx
|
||||
/*
|
||||
* A kernel thread is allowed to return here after successfully
|
||||
* calling do_execve(). Exit to userspace to complete the execve()
|
||||
|
||||
@@ -139,7 +139,7 @@ static inline int alternatives_text_reserved(void *start, void *end)
|
||||
".popsection\n" \
|
||||
".pushsection .altinstr_replacement, \"ax\"\n" \
|
||||
ALTINSTR_REPLACEMENT(newinstr, feature, 1) \
|
||||
".popsection"
|
||||
".popsection\n"
|
||||
|
||||
#define ALTERNATIVE_2(oldinstr, newinstr1, feature1, newinstr2, feature2)\
|
||||
OLDINSTR_2(oldinstr, 1, 2) \
|
||||
@@ -150,7 +150,7 @@ static inline int alternatives_text_reserved(void *start, void *end)
|
||||
".pushsection .altinstr_replacement, \"ax\"\n" \
|
||||
ALTINSTR_REPLACEMENT(newinstr1, feature1, 1) \
|
||||
ALTINSTR_REPLACEMENT(newinstr2, feature2, 2) \
|
||||
".popsection"
|
||||
".popsection\n"
|
||||
|
||||
/*
|
||||
* Alternative instructions for different CPU types or capabilities.
|
||||
|
||||
@@ -10,7 +10,32 @@
|
||||
#include <asm/pgtable.h>
|
||||
#include <asm/special_insns.h>
|
||||
#include <asm/preempt.h>
|
||||
#include <asm/asm.h>
|
||||
|
||||
#ifndef CONFIG_X86_CMPXCHG64
|
||||
extern void cmpxchg8b_emu(void);
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_RETPOLINE
|
||||
#ifdef CONFIG_X86_32
|
||||
#define INDIRECT_THUNK(reg) extern asmlinkage void __x86_indirect_thunk_e ## reg(void);
|
||||
#else
|
||||
#define INDIRECT_THUNK(reg) extern asmlinkage void __x86_indirect_thunk_r ## reg(void);
|
||||
INDIRECT_THUNK(8)
|
||||
INDIRECT_THUNK(9)
|
||||
INDIRECT_THUNK(10)
|
||||
INDIRECT_THUNK(11)
|
||||
INDIRECT_THUNK(12)
|
||||
INDIRECT_THUNK(13)
|
||||
INDIRECT_THUNK(14)
|
||||
INDIRECT_THUNK(15)
|
||||
#endif
|
||||
INDIRECT_THUNK(ax)
|
||||
INDIRECT_THUNK(bx)
|
||||
INDIRECT_THUNK(cx)
|
||||
INDIRECT_THUNK(dx)
|
||||
INDIRECT_THUNK(si)
|
||||
INDIRECT_THUNK(di)
|
||||
INDIRECT_THUNK(bp)
|
||||
INDIRECT_THUNK(sp)
|
||||
#endif /* CONFIG_RETPOLINE */
|
||||
|
||||
@@ -125,4 +125,15 @@
|
||||
/* For C file, we already have NOKPROBE_SYMBOL macro */
|
||||
#endif
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
/*
|
||||
* This output constraint should be used for any inline asm which has a "call"
|
||||
* instruction. Otherwise the asm may be inserted before the frame pointer
|
||||
* gets set up by the containing function. If you forget to do this, objtool
|
||||
* may print a "call without frame pointer save/setup" warning.
|
||||
*/
|
||||
register unsigned long current_stack_pointer asm(_ASM_SP);
|
||||
#define ASM_CALL_CONSTRAINT "+r" (current_stack_pointer)
|
||||
#endif
|
||||
|
||||
#endif /* _ASM_X86_ASM_H */
|
||||
|
||||
@@ -135,6 +135,8 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
|
||||
set_bit(bit, (unsigned long *)cpu_caps_set); \
|
||||
} while (0)
|
||||
|
||||
#define setup_force_cpu_bug(bit) setup_force_cpu_cap(bit)
|
||||
|
||||
#if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_X86_FAST_FEATURE_TESTS)
|
||||
/*
|
||||
* Static testing of CPU features. Used the same as boot_cpu_has().
|
||||
|
||||
@@ -194,6 +194,9 @@
|
||||
#define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */
|
||||
#define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */
|
||||
|
||||
#define X86_FEATURE_RETPOLINE ( 7*32+12) /* Generic Retpoline mitigation for Spectre variant 2 */
|
||||
#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* AMD Retpoline mitigation for Spectre variant 2 */
|
||||
|
||||
#define X86_FEATURE_INTEL_PT ( 7*32+15) /* Intel Processor Trace */
|
||||
#define X86_FEATURE_AVX512_4VNNIW (7*32+16) /* AVX-512 Neural Network Instructions */
|
||||
#define X86_FEATURE_AVX512_4FMAPS (7*32+17) /* AVX-512 Multiply Accumulation Single precision */
|
||||
@@ -316,5 +319,8 @@
|
||||
#define X86_BUG_SWAPGS_FENCE X86_BUG(11) /* SWAPGS without input dep on GS */
|
||||
#define X86_BUG_MONITOR X86_BUG(12) /* IPI required to wake up remote CPU */
|
||||
#define X86_BUG_AMD_E400 X86_BUG(13) /* CPU is among the affected by Erratum 400 */
|
||||
#define X86_BUG_CPU_MELTDOWN X86_BUG(14) /* CPU is affected by meltdown attack and needs kernel page table isolation */
|
||||
#define X86_BUG_SPECTRE_V1 X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
|
||||
#define X86_BUG_SPECTRE_V2 X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
|
||||
|
||||
#endif /* _ASM_X86_CPUFEATURES_H */
|
||||
|
||||
@@ -330,6 +330,9 @@
|
||||
#define FAM10H_MMIO_CONF_BASE_MASK 0xfffffffULL
|
||||
#define FAM10H_MMIO_CONF_BASE_SHIFT 20
|
||||
#define MSR_FAM10H_NODE_ID 0xc001100c
|
||||
#define MSR_F10H_DECFG 0xc0011029
|
||||
#define MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT 1
|
||||
#define MSR_F10H_DECFG_LFENCE_SERIALIZE BIT_ULL(MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT)
|
||||
|
||||
/* K8 MSRs */
|
||||
#define MSR_K8_TOP_MEM1 0xc001001a
|
||||
|
||||
arch/x86/include/asm/nospec-branch.h (new file, 214 lines)
@@ -0,0 +1,214 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
|
||||
#ifndef __NOSPEC_BRANCH_H__
|
||||
#define __NOSPEC_BRANCH_H__
|
||||
|
||||
#include <asm/alternative.h>
|
||||
#include <asm/alternative-asm.h>
|
||||
#include <asm/cpufeatures.h>
|
||||
|
||||
/*
|
||||
* Fill the CPU return stack buffer.
|
||||
*
|
||||
* Each entry in the RSB, if used for a speculative 'ret', contains an
|
||||
* infinite 'pause; jmp' loop to capture speculative execution.
|
||||
*
|
||||
* This is required in various cases for retpoline and IBRS-based
|
||||
* mitigations for the Spectre variant 2 vulnerability. Sometimes to
|
||||
* eliminate potentially bogus entries from the RSB, and sometimes
|
||||
* purely to ensure that it doesn't get empty, which on some CPUs would
|
||||
* allow predictions from other (unwanted!) sources to be used.
|
||||
*
|
||||
* We define a CPP macro such that it can be used from both .S files and
|
||||
* inline assembly. It's possible to do a .macro and then include that
|
||||
* from C via asm(".include <asm/nospec-branch.h>") but let's not go there.
|
||||
*/
|
||||
|
||||
#define RSB_CLEAR_LOOPS 32 /* To forcibly overwrite all entries */
|
||||
#define RSB_FILL_LOOPS 16 /* To avoid underflow */
|
||||
|
||||
/*
|
||||
* Google experimented with loop-unrolling and this turned out to be
|
||||
* the optimal version — two calls, each with their own speculation
|
||||
* trap should their return address end up getting used, in a loop.
|
||||
*/
|
||||
#define __FILL_RETURN_BUFFER(reg, nr, sp) \
|
||||
mov $(nr/2), reg; \
|
||||
771: \
|
||||
call 772f; \
|
||||
773: /* speculation trap */ \
|
||||
pause; \
|
||||
jmp 773b; \
|
||||
772: \
|
||||
call 774f; \
|
||||
775: /* speculation trap */ \
|
||||
pause; \
|
||||
jmp 775b; \
|
||||
774: \
|
||||
dec reg; \
|
||||
jnz 771b; \
|
||||
add $(BITS_PER_LONG/8) * nr, sp;
|
||||
|
||||
#ifdef __ASSEMBLY__
|
||||
|
||||
/*
|
||||
* This should be used immediately before a retpoline alternative. It tells
|
||||
* objtool where the retpolines are so that it can make sense of the control
|
||||
* flow by just reading the original instruction(s) and ignoring the
|
||||
* alternatives.
|
||||
*/
|
||||
.macro ANNOTATE_NOSPEC_ALTERNATIVE
|
||||
.Lannotate_\@:
|
||||
.pushsection .discard.nospec
|
||||
.long .Lannotate_\@ - .
|
||||
.popsection
|
||||
.endm
|
||||
|
||||
/*
|
||||
* These are the bare retpoline primitives for indirect jmp and call.
|
||||
* Do not use these directly; they only exist to make the ALTERNATIVE
|
||||
* invocation below less ugly.
|
||||
*/
|
||||
.macro RETPOLINE_JMP reg:req
|
||||
call .Ldo_rop_\@
|
||||
.Lspec_trap_\@:
|
||||
pause
|
||||
jmp .Lspec_trap_\@
|
||||
.Ldo_rop_\@:
|
||||
mov \reg, (%_ASM_SP)
|
||||
ret
|
||||
.endm
|
||||
|
||||
/*
|
||||
* This is a wrapper around RETPOLINE_JMP so the called function in reg
|
||||
* returns to the instruction after the macro.
|
||||
*/
|
||||
.macro RETPOLINE_CALL reg:req
|
||||
jmp .Ldo_call_\@
|
||||
.Ldo_retpoline_jmp_\@:
|
||||
RETPOLINE_JMP \reg
|
||||
.Ldo_call_\@:
|
||||
call .Ldo_retpoline_jmp_\@
|
||||
.endm
|
||||
|
||||
/*
|
||||
* JMP_NOSPEC and CALL_NOSPEC macros can be used instead of a simple
|
||||
* indirect jmp/call which may be susceptible to the Spectre variant 2
|
||||
* attack.
|
||||
*/
|
||||
.macro JMP_NOSPEC reg:req
|
||||
#ifdef CONFIG_RETPOLINE
|
||||
ANNOTATE_NOSPEC_ALTERNATIVE
|
||||
ALTERNATIVE_2 __stringify(jmp *\reg), \
|
||||
__stringify(RETPOLINE_JMP \reg), X86_FEATURE_RETPOLINE, \
|
||||
__stringify(lfence; jmp *\reg), X86_FEATURE_RETPOLINE_AMD
|
||||
#else
|
||||
jmp *\reg
|
||||
#endif
|
||||
.endm
|
||||
|
||||
.macro CALL_NOSPEC reg:req
|
||||
#ifdef CONFIG_RETPOLINE
|
||||
ANNOTATE_NOSPEC_ALTERNATIVE
|
||||
ALTERNATIVE_2 __stringify(call *\reg), \
|
||||
__stringify(RETPOLINE_CALL \reg), X86_FEATURE_RETPOLINE,\
|
||||
__stringify(lfence; call *\reg), X86_FEATURE_RETPOLINE_AMD
|
||||
#else
|
||||
call *\reg
|
||||
#endif
|
||||
.endm
|
||||
|
||||
/*
|
||||
* A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
|
||||
* monstrosity above, manually.
|
||||
*/
|
||||
.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
|
||||
#ifdef CONFIG_RETPOLINE
|
||||
ANNOTATE_NOSPEC_ALTERNATIVE
|
||||
ALTERNATIVE "jmp .Lskip_rsb_\@", \
|
||||
__stringify(__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)) \
|
||||
\ftr
|
||||
.Lskip_rsb_\@:
|
||||
#endif
|
||||
.endm
|
||||
|
||||
#else /* __ASSEMBLY__ */
|
||||
|
||||
#define ANNOTATE_NOSPEC_ALTERNATIVE \
|
||||
"999:\n\t" \
|
||||
".pushsection .discard.nospec\n\t" \
|
||||
".long 999b - .\n\t" \
|
||||
".popsection\n\t"
|
||||
|
||||
#if defined(CONFIG_X86_64) && defined(RETPOLINE)
|
||||
|
||||
/*
|
||||
* Since the inline asm uses the %V modifier which is only in newer GCC,
|
||||
* the 64-bit one is dependent on RETPOLINE not CONFIG_RETPOLINE.
|
||||
*/
|
||||
# define CALL_NOSPEC \
|
||||
ANNOTATE_NOSPEC_ALTERNATIVE \
|
||||
ALTERNATIVE( \
|
||||
"call *%[thunk_target]\n", \
|
||||
"call __x86_indirect_thunk_%V[thunk_target]\n", \
|
||||
X86_FEATURE_RETPOLINE)
|
||||
# define THUNK_TARGET(addr) [thunk_target] "r" (addr)
|
||||
|
||||
#elif defined(CONFIG_X86_32) && defined(CONFIG_RETPOLINE)
|
||||
/*
|
||||
* For i386 we use the original ret-equivalent retpoline, because
|
||||
* otherwise we'll run out of registers. We don't care about CET
|
||||
* here, anyway.
|
||||
*/
|
||||
# define CALL_NOSPEC ALTERNATIVE("call *%[thunk_target]\n", \
|
||||
" jmp 904f;\n" \
|
||||
" .align 16\n" \
|
||||
"901: call 903f;\n" \
|
||||
"902: pause;\n" \
|
||||
" jmp 902b;\n" \
|
||||
" .align 16\n" \
|
||||
"903: addl $4, %%esp;\n" \
|
||||
" pushl %[thunk_target];\n" \
|
||||
" ret;\n" \
|
||||
" .align 16\n" \
|
||||
"904: call 901b;\n", \
|
||||
X86_FEATURE_RETPOLINE)
|
||||
|
||||
# define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
|
||||
#else /* No retpoline for C / inline asm */
|
||||
# define CALL_NOSPEC "call *%[thunk_target]\n"
|
||||
# define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
|
||||
#endif
|
||||
|
||||
/* The Spectre V2 mitigation variants */
|
||||
enum spectre_v2_mitigation {
|
||||
SPECTRE_V2_NONE,
|
||||
SPECTRE_V2_RETPOLINE_MINIMAL,
|
||||
SPECTRE_V2_RETPOLINE_MINIMAL_AMD,
|
||||
SPECTRE_V2_RETPOLINE_GENERIC,
|
||||
SPECTRE_V2_RETPOLINE_AMD,
|
||||
SPECTRE_V2_IBRS,
|
||||
};
|
||||
|
||||
/*
|
||||
* On VMEXIT we must ensure that no RSB predictions learned in the guest
|
||||
* can be followed in the host, by overwriting the RSB completely. Both
|
||||
* retpoline and IBRS mitigations for Spectre v2 need this; only on future
|
||||
* CPUs with IBRS_ATT *might* it be avoided.
|
||||
*/
|
||||
static inline void vmexit_fill_RSB(void)
|
||||
{
|
||||
#ifdef CONFIG_RETPOLINE
|
||||
unsigned long loops = RSB_CLEAR_LOOPS / 2;
|
||||
|
||||
asm volatile (ANNOTATE_NOSPEC_ALTERNATIVE
|
||||
ALTERNATIVE("jmp 910f",
|
||||
__stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)),
|
||||
X86_FEATURE_RETPOLINE)
|
||||
"910:"
|
||||
: "=&r" (loops), ASM_CALL_CONSTRAINT
|
||||
: "r" (loops) : "memory" );
|
||||
#endif
|
||||
}
|
||||
#endif /* __ASSEMBLY__ */
|
||||
#endif /* __NOSPEC_BRANCH_H__ */
|
||||
@@ -27,6 +27,17 @@ static inline void paravirt_release_pud(unsigned long pfn) {}
|
||||
*/
|
||||
extern gfp_t __userpte_alloc_gfp;
|
||||
|
||||
#ifdef CONFIG_PAGE_TABLE_ISOLATION
|
||||
/*
|
||||
* Instead of one PGD, we acquire two PGDs. Being order-1, it is
|
||||
* both 8k in size and 8k-aligned. That lets us just flip bit 12
|
||||
* in a pointer to swap between the two 4k halves.
|
||||
*/
|
||||
#define PGD_ALLOCATION_ORDER 1
|
||||
#else
|
||||
#define PGD_ALLOCATION_ORDER 0
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Allocate and free page tables.
|
||||
*/
|
||||
|
||||
@@ -156,8 +156,8 @@ extern struct cpuinfo_x86 boot_cpu_data;
|
||||
extern struct cpuinfo_x86 new_cpu_data;
|
||||
|
||||
extern struct tss_struct doublefault_tss;
|
||||
extern __u32 cpu_caps_cleared[NCAPINTS];
|
||||
extern __u32 cpu_caps_set[NCAPINTS];
|
||||
extern __u32 cpu_caps_cleared[NCAPINTS + NBUGINTS];
|
||||
extern __u32 cpu_caps_set[NCAPINTS + NBUGINTS];
|
||||
|
||||
#ifdef CONFIG_SMP
|
||||
DECLARE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
|
||||
|
||||
@@ -152,17 +152,6 @@ struct thread_info {
|
||||
*/
|
||||
#ifndef __ASSEMBLY__
|
||||
|
||||
static inline unsigned long current_stack_pointer(void)
|
||||
{
|
||||
unsigned long sp;
|
||||
#ifdef CONFIG_X86_64
|
||||
asm("mov %%rsp,%0" : "=g" (sp));
|
||||
#else
|
||||
asm("mov %%esp,%0" : "=g" (sp));
|
||||
#endif
|
||||
return sp;
|
||||
}
|
||||
|
||||
/*
|
||||
* Walks up the stack frames to make sure that the specified object is
|
||||
* entirely contained by a single stack frame.
|
||||
|
||||
@@ -44,6 +44,7 @@
|
||||
#include <asm/page.h>
|
||||
#include <asm/pgtable.h>
|
||||
#include <asm/smap.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
#include <xen/interface/xen.h>
|
||||
#include <xen/interface/sched.h>
|
||||
@@ -216,9 +217,9 @@ privcmd_call(unsigned call,
|
||||
__HYPERCALL_5ARG(a1, a2, a3, a4, a5);
|
||||
|
||||
stac();
|
||||
asm volatile("call *%[call]"
|
||||
asm volatile(CALL_NOSPEC
|
||||
: __HYPERCALL_5PARAM
|
||||
: [call] "a" (&hypercall_page[call])
|
||||
: [thunk_target] "a" (&hypercall_page[call])
|
||||
: __HYPERCALL_CLOBBER5);
|
||||
clac();
|
||||
|
||||
|
||||
@@ -335,13 +335,12 @@ acpi_parse_lapic_nmi(struct acpi_subtable_header * header, const unsigned long e
|
||||
#ifdef CONFIG_X86_IO_APIC
|
||||
#define MP_ISA_BUS 0
|
||||
|
||||
static int __init mp_register_ioapic_irq(u8 bus_irq, u8 polarity,
|
||||
u8 trigger, u32 gsi);
|
||||
|
||||
static void __init mp_override_legacy_irq(u8 bus_irq, u8 polarity, u8 trigger,
|
||||
u32 gsi)
|
||||
{
|
||||
int ioapic;
|
||||
int pin;
|
||||
struct mpc_intsrc mp_irq;
|
||||
|
||||
/*
|
||||
* Check bus_irq boundary.
|
||||
*/
|
||||
@@ -350,14 +349,6 @@ static void __init mp_override_legacy_irq(u8 bus_irq, u8 polarity, u8 trigger,
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* Convert 'gsi' to 'ioapic.pin'.
|
||||
*/
|
||||
ioapic = mp_find_ioapic(gsi);
|
||||
if (ioapic < 0)
|
||||
return;
|
||||
pin = mp_find_ioapic_pin(ioapic, gsi);
|
||||
|
||||
/*
|
||||
* TBD: This check is for faulty timer entries, where the override
|
||||
* erroneously sets the trigger to level, resulting in a HUGE
|
||||
@@ -366,16 +357,8 @@ static void __init mp_override_legacy_irq(u8 bus_irq, u8 polarity, u8 trigger,
|
||||
if ((bus_irq == 0) && (trigger == 3))
|
||||
trigger = 1;
|
||||
|
||||
mp_irq.type = MP_INTSRC;
|
||||
mp_irq.irqtype = mp_INT;
|
||||
mp_irq.irqflag = (trigger << 2) | polarity;
|
||||
mp_irq.srcbus = MP_ISA_BUS;
|
||||
mp_irq.srcbusirq = bus_irq; /* IRQ */
|
||||
mp_irq.dstapic = mpc_ioapic_id(ioapic); /* APIC ID */
|
||||
mp_irq.dstirq = pin; /* INTIN# */
|
||||
|
||||
mp_save_irq(&mp_irq);
|
||||
|
||||
if (mp_register_ioapic_irq(bus_irq, polarity, trigger, gsi) < 0)
|
||||
return;
|
||||
/*
|
||||
* Reset default identity mapping if gsi is also an legacy IRQ,
|
||||
* otherwise there will be more than one entry with the same GSI
|
||||
@@ -422,6 +405,34 @@ static int mp_config_acpi_gsi(struct device *dev, u32 gsi, int trigger,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int __init mp_register_ioapic_irq(u8 bus_irq, u8 polarity,
|
||||
u8 trigger, u32 gsi)
|
||||
{
|
||||
struct mpc_intsrc mp_irq;
|
||||
int ioapic, pin;
|
||||
|
||||
/* Convert 'gsi' to 'ioapic.pin'(INTIN#) */
|
||||
ioapic = mp_find_ioapic(gsi);
|
||||
if (ioapic < 0) {
|
||||
pr_warn("Failed to find ioapic for gsi : %u\n", gsi);
|
||||
return ioapic;
|
||||
}
|
||||
|
||||
pin = mp_find_ioapic_pin(ioapic, gsi);
|
||||
|
||||
mp_irq.type = MP_INTSRC;
|
||||
mp_irq.irqtype = mp_INT;
|
||||
mp_irq.irqflag = (trigger << 2) | polarity;
|
||||
mp_irq.srcbus = MP_ISA_BUS;
|
||||
mp_irq.srcbusirq = bus_irq;
|
||||
mp_irq.dstapic = mpc_ioapic_id(ioapic);
|
||||
mp_irq.dstirq = pin;
|
||||
|
||||
mp_save_irq(&mp_irq);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int __init
|
||||
acpi_parse_ioapic(struct acpi_subtable_header * header, const unsigned long end)
|
||||
{
|
||||
@@ -466,7 +477,11 @@ static void __init acpi_sci_ioapic_setup(u8 bus_irq, u16 polarity, u16 trigger,
|
||||
if (acpi_sci_flags & ACPI_MADT_POLARITY_MASK)
|
||||
polarity = acpi_sci_flags & ACPI_MADT_POLARITY_MASK;
|
||||
|
||||
mp_override_legacy_irq(bus_irq, polarity, trigger, gsi);
|
||||
if (bus_irq < NR_IRQS_LEGACY)
|
||||
mp_override_legacy_irq(bus_irq, polarity, trigger, gsi);
|
||||
else
|
||||
mp_register_ioapic_irq(bus_irq, polarity, trigger, gsi);
|
||||
|
||||
acpi_penalize_sci_irq(bus_irq, trigger, polarity);
|
||||
|
||||
/*
|
||||
|
||||
@@ -340,9 +340,12 @@ done:
|
||||
static void __init_or_module optimize_nops(struct alt_instr *a, u8 *instr)
|
||||
{
|
||||
unsigned long flags;
|
||||
int i;
|
||||
|
||||
if (instr[0] != 0x90)
|
||||
return;
|
||||
for (i = 0; i < a->padlen; i++) {
|
||||
if (instr[i] != 0x90)
|
||||
return;
|
||||
}
|
||||
|
||||
local_irq_save(flags);
|
||||
add_nops(instr + (a->instrlen - a->padlen), a->padlen);
|
||||
|
||||
@@ -20,13 +20,11 @@ obj-y := intel_cacheinfo.o scattered.o topology.o
|
||||
obj-y += common.o
|
||||
obj-y += rdrand.o
|
||||
obj-y += match.o
|
||||
obj-y += bugs.o
|
||||
|
||||
obj-$(CONFIG_PROC_FS) += proc.o
|
||||
obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o
|
||||
|
||||
obj-$(CONFIG_X86_32) += bugs.o
|
||||
obj-$(CONFIG_X86_64) += bugs_64.o
|
||||
|
||||
obj-$(CONFIG_CPU_SUP_INTEL) += intel.o
|
||||
obj-$(CONFIG_CPU_SUP_AMD) += amd.o
|
||||
obj-$(CONFIG_CPU_SUP_CYRIX_32) += cyrix.o
|
||||
|
||||
@@ -782,8 +782,32 @@ static void init_amd(struct cpuinfo_x86 *c)
|
||||
set_cpu_cap(c, X86_FEATURE_K8);
|
||||
|
||||
if (cpu_has(c, X86_FEATURE_XMM2)) {
|
||||
/* MFENCE stops RDTSC speculation */
|
||||
set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC);
|
||||
unsigned long long val;
|
||||
int ret;
|
||||
|
||||
/*
|
||||
* A serializing LFENCE has less overhead than MFENCE, so
|
||||
* use it for execution serialization. On families which
|
||||
* don't have that MSR, LFENCE is already serializing.
|
||||
* msr_set_bit() uses the safe accessors, too, even if the MSR
|
||||
* is not present.
|
||||
*/
|
||||
msr_set_bit(MSR_F10H_DECFG,
|
||||
MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);
|
||||
|
||||
/*
|
||||
* Verify that the MSR write was successful (could be running
|
||||
* under a hypervisor) and only then assume that LFENCE is
|
||||
* serializing.
|
||||
*/
|
||||
ret = rdmsrl_safe(MSR_F10H_DECFG, &val);
|
||||
if (!ret && (val & MSR_F10H_DECFG_LFENCE_SERIALIZE)) {
|
||||
/* A serializing LFENCE stops RDTSC speculation */
|
||||
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
|
||||
} else {
|
||||
/* MFENCE stops RDTSC speculation */
|
||||
set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC);
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
|
||||
@@ -9,6 +9,10 @@
|
||||
*/
|
||||
#include <linux/init.h>
|
||||
#include <linux/utsname.h>
|
||||
#include <linux/cpu.h>
|
||||
|
||||
#include <asm/nospec-branch.h>
|
||||
#include <asm/cmdline.h>
|
||||
#include <asm/bugs.h>
|
||||
#include <asm/processor.h>
|
||||
#include <asm/processor-flags.h>
|
||||
@@ -16,23 +20,24 @@
|
||||
#include <asm/msr.h>
|
||||
#include <asm/paravirt.h>
|
||||
#include <asm/alternative.h>
|
||||
#include <asm/pgtable.h>
|
||||
#include <asm/cacheflush.h>
|
||||
|
||||
static void __init spectre_v2_select_mitigation(void);
|
||||
|
||||
void __init check_bugs(void)
|
||||
{
|
||||
#ifdef CONFIG_X86_32
|
||||
/*
|
||||
* Regardless of whether PCID is enumerated, the SDM says
|
||||
* that it can't be enabled in 32-bit mode.
|
||||
*/
|
||||
setup_clear_cpu_cap(X86_FEATURE_PCID);
|
||||
#endif
|
||||
|
||||
identify_boot_cpu();
|
||||
#ifndef CONFIG_SMP
|
||||
pr_info("CPU: ");
|
||||
print_cpu_info(&boot_cpu_data);
|
||||
#endif
|
||||
|
||||
if (!IS_ENABLED(CONFIG_SMP)) {
|
||||
pr_info("CPU: ");
|
||||
print_cpu_info(&boot_cpu_data);
|
||||
}
|
||||
|
||||
/* Select the proper spectre mitigation before patching alternatives */
|
||||
spectre_v2_select_mitigation();
|
||||
|
||||
#ifdef CONFIG_X86_32
|
||||
/*
|
||||
* Check whether we are able to run this kernel safely on SMP.
|
||||
*
|
||||
@@ -48,4 +53,194 @@ void __init check_bugs(void)
|
||||
alternative_instructions();
|
||||
|
||||
fpu__init_check_bugs();
|
||||
#else /* CONFIG_X86_64 */
|
||||
alternative_instructions();
|
||||
|
||||
/*
|
||||
* Make sure the first 2MB area is not mapped by huge pages
|
||||
* There are typically fixed size MTRRs in there and overlapping
|
||||
* MTRRs into large pages causes slow downs.
|
||||
*
|
||||
* Right now we don't do that with gbpages because there seems
|
||||
* very little benefit for that case.
|
||||
*/
|
||||
if (!direct_gbpages)
|
||||
set_memory_4k((unsigned long)__va(0), 1);
|
||||
#endif
|
||||
}
|
||||
|
||||
/* The kernel command line selection */
|
||||
enum spectre_v2_mitigation_cmd {
|
||||
SPECTRE_V2_CMD_NONE,
|
||||
SPECTRE_V2_CMD_AUTO,
|
||||
SPECTRE_V2_CMD_FORCE,
|
||||
SPECTRE_V2_CMD_RETPOLINE,
|
||||
SPECTRE_V2_CMD_RETPOLINE_GENERIC,
|
||||
SPECTRE_V2_CMD_RETPOLINE_AMD,
|
||||
};
|
||||
|
||||
static const char *spectre_v2_strings[] = {
|
||||
[SPECTRE_V2_NONE] = "Vulnerable",
|
||||
[SPECTRE_V2_RETPOLINE_MINIMAL] = "Vulnerable: Minimal generic ASM retpoline",
|
||||
[SPECTRE_V2_RETPOLINE_MINIMAL_AMD] = "Vulnerable: Minimal AMD ASM retpoline",
|
||||
[SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline",
|
||||
[SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline",
|
||||
};
|
||||
|
||||
#undef pr_fmt
|
||||
#define pr_fmt(fmt) "Spectre V2 mitigation: " fmt
|
||||
|
||||
static enum spectre_v2_mitigation spectre_v2_enabled = SPECTRE_V2_NONE;
|
||||
|
||||
static void __init spec2_print_if_insecure(const char *reason)
|
||||
{
|
||||
if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
|
||||
pr_info("%s\n", reason);
|
||||
}
|
||||
|
||||
static void __init spec2_print_if_secure(const char *reason)
|
||||
{
|
||||
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
|
||||
pr_info("%s\n", reason);
|
||||
}
|
||||
|
||||
static inline bool retp_compiler(void)
|
||||
{
|
||||
return __is_defined(RETPOLINE);
|
||||
}
|
||||
|
||||
static inline bool match_option(const char *arg, int arglen, const char *opt)
|
||||
{
|
||||
int len = strlen(opt);
|
||||
|
||||
return len == arglen && !strncmp(arg, opt, len);
|
||||
}
|
||||
|
||||
static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
|
||||
{
|
||||
char arg[20];
|
||||
int ret;
|
||||
|
||||
ret = cmdline_find_option(boot_command_line, "spectre_v2", arg,
|
||||
sizeof(arg));
|
||||
if (ret > 0) {
|
||||
if (match_option(arg, ret, "off")) {
|
||||
goto disable;
|
||||
} else if (match_option(arg, ret, "on")) {
|
||||
spec2_print_if_secure("force enabled on command line.");
|
||||
return SPECTRE_V2_CMD_FORCE;
|
||||
} else if (match_option(arg, ret, "retpoline")) {
|
||||
spec2_print_if_insecure("retpoline selected on command line.");
|
||||
return SPECTRE_V2_CMD_RETPOLINE;
|
||||
} else if (match_option(arg, ret, "retpoline,amd")) {
|
||||
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
|
||||
pr_err("retpoline,amd selected but CPU is not AMD. Switching to AUTO select\n");
|
||||
return SPECTRE_V2_CMD_AUTO;
|
||||
}
|
||||
spec2_print_if_insecure("AMD retpoline selected on command line.");
|
||||
return SPECTRE_V2_CMD_RETPOLINE_AMD;
|
||||
} else if (match_option(arg, ret, "retpoline,generic")) {
|
||||
spec2_print_if_insecure("generic retpoline selected on command line.");
|
||||
return SPECTRE_V2_CMD_RETPOLINE_GENERIC;
|
||||
} else if (match_option(arg, ret, "auto")) {
|
||||
return SPECTRE_V2_CMD_AUTO;
|
||||
}
|
||||
}
|
||||
|
||||
if (!cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
|
||||
return SPECTRE_V2_CMD_AUTO;
|
||||
disable:
|
||||
spec2_print_if_insecure("disabled on command line.");
|
||||
return SPECTRE_V2_CMD_NONE;
|
||||
}
|
||||
|
||||
static void __init spectre_v2_select_mitigation(void)
|
||||
{
|
||||
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
|
||||
enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
|
||||
|
||||
/*
|
||||
* If the CPU is not affected and the command line mode is NONE or AUTO
|
||||
* then nothing to do.
|
||||
*/
|
||||
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
|
||||
(cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
|
||||
return;
|
||||
|
||||
switch (cmd) {
|
||||
case SPECTRE_V2_CMD_NONE:
|
||||
return;
|
||||
|
||||
case SPECTRE_V2_CMD_FORCE:
|
||||
/* FALLTHRU */
|
||||
case SPECTRE_V2_CMD_AUTO:
|
||||
goto retpoline_auto;
|
||||
|
||||
case SPECTRE_V2_CMD_RETPOLINE_AMD:
|
||||
if (IS_ENABLED(CONFIG_RETPOLINE))
|
||||
goto retpoline_amd;
|
||||
break;
|
||||
case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
|
||||
if (IS_ENABLED(CONFIG_RETPOLINE))
|
||||
goto retpoline_generic;
|
||||
break;
|
||||
case SPECTRE_V2_CMD_RETPOLINE:
|
||||
if (IS_ENABLED(CONFIG_RETPOLINE))
|
||||
goto retpoline_auto;
|
||||
break;
|
||||
}
|
||||
pr_err("kernel not compiled with retpoline; no mitigation available!");
|
||||
return;
|
||||
|
||||
retpoline_auto:
|
||||
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
|
||||
retpoline_amd:
|
||||
if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
|
||||
pr_err("LFENCE not serializing. Switching to generic retpoline\n");
|
||||
goto retpoline_generic;
|
||||
}
|
||||
mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_AMD :
|
||||
SPECTRE_V2_RETPOLINE_MINIMAL_AMD;
|
||||
setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD);
|
||||
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
|
||||
} else {
|
||||
retpoline_generic:
|
||||
mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_GENERIC :
|
||||
SPECTRE_V2_RETPOLINE_MINIMAL;
|
||||
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
|
||||
}
|
||||
|
||||
spectre_v2_enabled = mode;
|
||||
pr_info("%s\n", spectre_v2_strings[mode]);
|
||||
}
|
||||
|
||||
#undef pr_fmt
|
||||
|
||||
#ifdef CONFIG_SYSFS
|
||||
ssize_t cpu_show_meltdown(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
if (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
|
||||
return sprintf(buf, "Not affected\n");
|
||||
if (boot_cpu_has(X86_FEATURE_KAISER))
|
||||
return sprintf(buf, "Mitigation: PTI\n");
|
||||
return sprintf(buf, "Vulnerable\n");
|
||||
}
|
||||
|
||||
ssize_t cpu_show_spectre_v1(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1))
|
||||
return sprintf(buf, "Not affected\n");
|
||||
return sprintf(buf, "Vulnerable\n");
|
||||
}
|
||||
|
||||
ssize_t cpu_show_spectre_v2(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
|
||||
return sprintf(buf, "Not affected\n");
|
||||
|
||||
return sprintf(buf, "%s\n", spectre_v2_strings[spectre_v2_enabled]);
|
||||
}
|
||||
#endif
|
||||
|
||||
@@ -1,33 +0,0 @@
|
||||
/*
|
||||
* Copyright (C) 1994 Linus Torvalds
|
||||
* Copyright (C) 2000 SuSE
|
||||
*/
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/init.h>
|
||||
#include <asm/alternative.h>
|
||||
#include <asm/bugs.h>
|
||||
#include <asm/processor.h>
|
||||
#include <asm/mtrr.h>
|
||||
#include <asm/cacheflush.h>
|
||||
|
||||
void __init check_bugs(void)
|
||||
{
|
||||
identify_boot_cpu();
|
||||
#if !defined(CONFIG_SMP)
|
||||
pr_info("CPU: ");
|
||||
print_cpu_info(&boot_cpu_data);
|
||||
#endif
|
||||
alternative_instructions();
|
||||
|
||||
/*
|
||||
* Make sure the first 2MB area is not mapped by huge pages
|
||||
* There are typically fixed size MTRRs in there and overlapping
|
||||
* MTRRs into large pages causes slow downs.
|
||||
*
|
||||
* Right now we don't do that with gbpages because there seems
|
||||
* very little benefit for that case.
|
||||
*/
|
||||
if (!direct_gbpages)
|
||||
set_memory_4k((unsigned long)__va(0), 1);
|
||||
}
|
||||
@@ -480,8 +480,8 @@ static const char *table_lookup_model(struct cpuinfo_x86 *c)
|
||||
return NULL; /* Not found */
|
||||
}
|
||||
|
||||
__u32 cpu_caps_cleared[NCAPINTS];
|
||||
__u32 cpu_caps_set[NCAPINTS];
|
||||
__u32 cpu_caps_cleared[NCAPINTS + NBUGINTS];
|
||||
__u32 cpu_caps_set[NCAPINTS + NBUGINTS];
|
||||
|
||||
void load_percpu_segment(int cpu)
|
||||
{
|
||||
@@ -706,6 +706,16 @@ void cpu_detect(struct cpuinfo_x86 *c)
|
||||
}
|
||||
}
|
||||
|
||||
static void apply_forced_caps(struct cpuinfo_x86 *c)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < NCAPINTS + NBUGINTS; i++) {
|
||||
c->x86_capability[i] &= ~cpu_caps_cleared[i];
|
||||
c->x86_capability[i] |= cpu_caps_set[i];
|
||||
}
|
||||
}
|
||||
|
||||
void get_cpu_cap(struct cpuinfo_x86 *c)
|
||||
{
|
||||
u32 eax, ebx, ecx, edx;
|
||||
@@ -872,7 +882,22 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
|
||||
}
|
||||
|
||||
setup_force_cpu_cap(X86_FEATURE_ALWAYS);
|
||||
|
||||
/* Assume for now that ALL x86 CPUs are insecure */
|
||||
setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
|
||||
|
||||
setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
|
||||
setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
|
||||
|
||||
fpu__init_system(c);
|
||||
|
||||
#ifdef CONFIG_X86_32
|
||||
/*
|
||||
* Regardless of whether PCID is enumerated, the SDM says
|
||||
* that it can't be enabled in 32-bit mode.
|
||||
*/
|
||||
setup_clear_cpu_cap(X86_FEATURE_PCID);
|
||||
#endif
|
||||
}
|
||||
|
||||
void __init early_cpu_init(void)
|
||||
@@ -1086,10 +1111,7 @@ static void identify_cpu(struct cpuinfo_x86 *c)
|
||||
this_cpu->c_identify(c);
|
||||
|
||||
/* Clear/Set all flags overridden by options, after probe */
|
||||
for (i = 0; i < NCAPINTS; i++) {
|
||||
c->x86_capability[i] &= ~cpu_caps_cleared[i];
|
||||
c->x86_capability[i] |= cpu_caps_set[i];
|
||||
}
|
||||
apply_forced_caps(c);
|
||||
|
||||
#ifdef CONFIG_X86_64
|
||||
c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
|
||||
@@ -1151,10 +1173,7 @@ static void identify_cpu(struct cpuinfo_x86 *c)
|
||||
* Clear/Set all flags overridden by options, need do it
|
||||
* before following smp all cpus cap AND.
|
||||
*/
|
||||
for (i = 0; i < NCAPINTS; i++) {
|
||||
c->x86_capability[i] &= ~cpu_caps_cleared[i];
|
||||
c->x86_capability[i] |= cpu_caps_set[i];
|
||||
}
|
||||
apply_forced_caps(c);
|
||||
|
||||
/*
|
||||
* On SMP, boot_cpu_data holds the common feature set between
|
||||
|
||||
@@ -1051,8 +1051,17 @@ static bool is_blacklisted(unsigned int cpu)
|
||||
{
|
||||
struct cpuinfo_x86 *c = &cpu_data(cpu);
|
||||
|
||||
if (c->x86 == 6 && c->x86_model == INTEL_FAM6_BROADWELL_X) {
|
||||
pr_err_once("late loading on model 79 is disabled.\n");
|
||||
/*
|
||||
* Late loading on model 79 with microcode revision less than 0x0b000021
|
||||
* may result in a system hang. This behavior is documented in item
|
||||
* BDF90, #334165 (Intel Xeon Processor E7-8800/4800 v4 Product Family).
|
||||
*/
|
||||
if (c->x86 == 6 &&
|
||||
c->x86_model == INTEL_FAM6_BROADWELL_X &&
|
||||
c->x86_mask == 0x01 &&
|
||||
c->microcode < 0x0b000021) {
|
||||
pr_err_once("Erratum BDF90: late loading with revision < 0x0b000021 (0x%x) disabled.\n", c->microcode);
|
||||
pr_err_once("Please consider either early loading through initrd/built-in or a potential BIOS update.\n");
|
||||
return true;
|
||||
}
|
||||
|
||||
|
||||
@@ -19,6 +19,7 @@
|
||||
#include <linux/mm.h>
|
||||
|
||||
#include <asm/apic.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
#ifdef CONFIG_DEBUG_STACKOVERFLOW
|
||||
|
||||
@@ -54,17 +55,17 @@ DEFINE_PER_CPU(struct irq_stack *, softirq_stack);
|
||||
static void call_on_stack(void *func, void *stack)
|
||||
{
|
||||
asm volatile("xchgl %%ebx,%%esp \n"
|
||||
"call *%%edi \n"
|
||||
CALL_NOSPEC
|
||||
"movl %%ebx,%%esp \n"
|
||||
: "=b" (stack)
|
||||
: "0" (stack),
|
||||
"D"(func)
|
||||
[thunk_target] "D"(func)
|
||||
: "memory", "cc", "edx", "ecx", "eax");
|
||||
}
|
||||
|
||||
static inline void *current_stack(void)
|
||||
{
|
||||
return (void *)(current_stack_pointer() & ~(THREAD_SIZE - 1));
|
||||
return (void *)(current_stack_pointer & ~(THREAD_SIZE - 1));
|
||||
}
|
||||
|
||||
static inline int execute_on_irq_stack(int overflow, struct irq_desc *desc)
|
||||
@@ -88,17 +89,17 @@ static inline int execute_on_irq_stack(int overflow, struct irq_desc *desc)
|
||||
|
||||
/* Save the next esp at the bottom of the stack */
|
||||
prev_esp = (u32 *)irqstk;
|
||||
*prev_esp = current_stack_pointer();
|
||||
*prev_esp = current_stack_pointer;
|
||||
|
||||
if (unlikely(overflow))
|
||||
call_on_stack(print_stack_overflow, isp);
|
||||
|
||||
asm volatile("xchgl %%ebx,%%esp \n"
|
||||
"call *%%edi \n"
|
||||
CALL_NOSPEC
|
||||
"movl %%ebx,%%esp \n"
|
||||
: "=a" (arg1), "=b" (isp)
|
||||
: "0" (desc), "1" (isp),
|
||||
"D" (desc->handle_irq)
|
||||
[thunk_target] "D" (desc->handle_irq)
|
||||
: "memory", "cc", "ecx");
|
||||
return 1;
|
||||
}
|
||||
@@ -139,7 +140,7 @@ void do_softirq_own_stack(void)
|
||||
|
||||
/* Push the previous esp onto the stack */
|
||||
prev_esp = (u32 *)irqstk;
|
||||
*prev_esp = current_stack_pointer();
|
||||
*prev_esp = current_stack_pointer;
|
||||
|
||||
call_on_stack(__do_softirq, isp);
|
||||
}
|
||||
|
||||
@@ -8,7 +8,7 @@
|
||||
#include <asm/ptrace.h>
|
||||
#include <asm/ftrace.h>
|
||||
#include <asm/export.h>
|
||||
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
.code64
|
||||
.section .entry.text, "ax"
|
||||
@@ -290,8 +290,9 @@ trace:
|
||||
* ip and parent ip are used and the list function is called when
|
||||
* function tracing is enabled.
|
||||
*/
|
||||
call *ftrace_trace_function
|
||||
|
||||
movq ftrace_trace_function, %r8
|
||||
CALL_NOSPEC %r8
|
||||
restore_mcount_regs
|
||||
|
||||
jmp fgraph_trace
|
||||
@@ -334,5 +335,5 @@ GLOBAL(return_to_handler)
|
||||
movq 8(%rsp), %rdx
|
||||
movq (%rsp), %rax
|
||||
addq $24, %rsp
|
||||
jmp *%rdi
|
||||
JMP_NOSPEC %rdi
|
||||
#endif
|
||||
|
||||
@@ -153,7 +153,7 @@ void ist_begin_non_atomic(struct pt_regs *regs)
|
||||
* from double_fault.
|
||||
*/
|
||||
BUG_ON((unsigned long)(current_top_of_stack() -
|
||||
current_stack_pointer()) >= THREAD_SIZE);
|
||||
current_stack_pointer) >= THREAD_SIZE);
|
||||
|
||||
preempt_enable_no_resched();
|
||||
}
|
||||
|
||||
@@ -44,6 +44,7 @@
|
||||
#include <asm/debugreg.h>
|
||||
#include <asm/kvm_para.h>
|
||||
#include <asm/irq_remapping.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
#include <asm/virtext.h>
|
||||
#include "trace.h"
|
||||
@@ -4868,6 +4869,25 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
|
||||
"mov %%r13, %c[r13](%[svm]) \n\t"
|
||||
"mov %%r14, %c[r14](%[svm]) \n\t"
|
||||
"mov %%r15, %c[r15](%[svm]) \n\t"
|
||||
#endif
|
||||
/*
|
||||
* Clear host registers marked as clobbered to prevent
|
||||
* speculative use.
|
||||
*/
|
||||
"xor %%" _ASM_BX ", %%" _ASM_BX " \n\t"
|
||||
"xor %%" _ASM_CX ", %%" _ASM_CX " \n\t"
|
||||
"xor %%" _ASM_DX ", %%" _ASM_DX " \n\t"
|
||||
"xor %%" _ASM_SI ", %%" _ASM_SI " \n\t"
|
||||
"xor %%" _ASM_DI ", %%" _ASM_DI " \n\t"
|
||||
#ifdef CONFIG_X86_64
|
||||
"xor %%r8, %%r8 \n\t"
|
||||
"xor %%r9, %%r9 \n\t"
|
||||
"xor %%r10, %%r10 \n\t"
|
||||
"xor %%r11, %%r11 \n\t"
|
||||
"xor %%r12, %%r12 \n\t"
|
||||
"xor %%r13, %%r13 \n\t"
|
||||
"xor %%r14, %%r14 \n\t"
|
||||
"xor %%r15, %%r15 \n\t"
|
||||
#endif
|
||||
"pop %%" _ASM_BP
|
||||
:
|
||||
@@ -4898,6 +4918,9 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
|
||||
#endif
|
||||
);
|
||||
|
||||
/* Eliminate branch target predictions from guest mode */
|
||||
vmexit_fill_RSB();
|
||||
|
||||
#ifdef CONFIG_X86_64
|
||||
wrmsrl(MSR_GS_BASE, svm->host.gs_base);
|
||||
#else
|
||||
|
||||
@@ -48,6 +48,7 @@
|
||||
#include <asm/kexec.h>
|
||||
#include <asm/apic.h>
|
||||
#include <asm/irq_remapping.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
#include "trace.h"
|
||||
#include "pmu.h"
|
||||
@@ -857,8 +858,16 @@ static inline short vmcs_field_to_offset(unsigned long field)
|
||||
{
|
||||
BUILD_BUG_ON(ARRAY_SIZE(vmcs_field_to_offset_table) > SHRT_MAX);
|
||||
|
||||
if (field >= ARRAY_SIZE(vmcs_field_to_offset_table) ||
|
||||
vmcs_field_to_offset_table[field] == 0)
|
||||
if (field >= ARRAY_SIZE(vmcs_field_to_offset_table))
|
||||
return -ENOENT;
|
||||
|
||||
/*
|
||||
* FIXME: Mitigation for CVE-2017-5753. To be replaced with a
|
||||
* generic mechanism.
|
||||
*/
|
||||
asm("lfence");
|
||||
|
||||
if (vmcs_field_to_offset_table[field] == 0)
|
||||
return -ENOENT;
|
||||
|
||||
return vmcs_field_to_offset_table[field];
|
||||
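The vmcs_field_to_offset() hunk above applies the bounds-check-then-serialize pattern used against Spectre v1 (CVE-2017-5753): validate the index, let the comparison retire behind LFENCE, and only then dereference the table. A minimal stand-alone sketch of the same pattern, for illustration only (not taken from the patch; the function and parameter names are invented):

#include <stddef.h>

/* Sketch: harden a table lookup the way vmcs_field_to_offset() is
 * hardened above -- bounds check, serialize, then load. */
static inline int lookup_hardened(const short *table, size_t nents, size_t idx)
{
	if (idx >= nents)
		return -1;

	/* The fence keeps a mispredicted bounds check from speculatively
	 * issuing the out-of-bounds load below. */
	asm volatile("lfence" ::: "memory");

	return table[idx];
}

As the FIXME in the hunk notes, this is a stopgap meant to be replaced by a generic mechanism.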
@@ -8948,6 +8957,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
|
||||
/* Save guest registers, load host registers, keep flags */
|
||||
"mov %0, %c[wordsize](%%" _ASM_SP ") \n\t"
|
||||
"pop %0 \n\t"
|
||||
"setbe %c[fail](%0)\n\t"
|
||||
"mov %%" _ASM_AX ", %c[rax](%0) \n\t"
|
||||
"mov %%" _ASM_BX ", %c[rbx](%0) \n\t"
|
||||
__ASM_SIZE(pop) " %c[rcx](%0) \n\t"
|
||||
@@ -8964,12 +8974,23 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
|
||||
"mov %%r13, %c[r13](%0) \n\t"
|
||||
"mov %%r14, %c[r14](%0) \n\t"
|
||||
"mov %%r15, %c[r15](%0) \n\t"
|
||||
"xor %%r8d, %%r8d \n\t"
|
||||
"xor %%r9d, %%r9d \n\t"
|
||||
"xor %%r10d, %%r10d \n\t"
|
||||
"xor %%r11d, %%r11d \n\t"
|
||||
"xor %%r12d, %%r12d \n\t"
|
||||
"xor %%r13d, %%r13d \n\t"
|
||||
"xor %%r14d, %%r14d \n\t"
|
||||
"xor %%r15d, %%r15d \n\t"
|
||||
#endif
|
||||
"mov %%cr2, %%" _ASM_AX " \n\t"
|
||||
"mov %%" _ASM_AX ", %c[cr2](%0) \n\t"
|
||||
|
||||
"xor %%eax, %%eax \n\t"
|
||||
"xor %%ebx, %%ebx \n\t"
|
||||
"xor %%esi, %%esi \n\t"
|
||||
"xor %%edi, %%edi \n\t"
|
||||
"pop %%" _ASM_BP "; pop %%" _ASM_DX " \n\t"
|
||||
"setbe %c[fail](%0) \n\t"
|
||||
".pushsection .rodata \n\t"
|
||||
".global vmx_return \n\t"
|
||||
"vmx_return: " _ASM_PTR " 2b \n\t"
|
||||
@@ -9006,6 +9027,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
|
||||
#endif
|
||||
);
|
||||
|
||||
/* Eliminate branch target predictions from guest mode */
|
||||
vmexit_fill_RSB();
|
||||
|
||||
/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
|
||||
if (debugctlmsr)
|
||||
update_debugctlmsr(debugctlmsr);
|
||||
|
||||
@@ -4264,7 +4264,7 @@ static int vcpu_mmio_read(struct kvm_vcpu *vcpu, gpa_t addr, int len, void *v)
|
||||
addr, n, v))
|
||||
&& kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, n, v))
|
||||
break;
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_READ, n, addr, *(u64 *)v);
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_READ, n, addr, v);
|
||||
handled += n;
|
||||
addr += n;
|
||||
len -= n;
|
||||
@@ -4517,7 +4517,7 @@ static int read_prepare(struct kvm_vcpu *vcpu, void *val, int bytes)
|
||||
{
|
||||
if (vcpu->mmio_read_completed) {
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_READ, bytes,
|
||||
vcpu->mmio_fragments[0].gpa, *(u64 *)val);
|
||||
vcpu->mmio_fragments[0].gpa, val);
|
||||
vcpu->mmio_read_completed = 0;
|
||||
return 1;
|
||||
}
|
||||
@@ -4539,14 +4539,14 @@ static int write_emulate(struct kvm_vcpu *vcpu, gpa_t gpa,
|
||||
|
||||
static int write_mmio(struct kvm_vcpu *vcpu, gpa_t gpa, int bytes, void *val)
|
||||
{
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, bytes, gpa, *(u64 *)val);
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, bytes, gpa, val);
|
||||
return vcpu_mmio_write(vcpu, gpa, bytes, val);
|
||||
}
|
||||
|
||||
static int read_exit_mmio(struct kvm_vcpu *vcpu, gpa_t gpa,
|
||||
void *val, int bytes)
|
||||
{
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, bytes, gpa, 0);
|
||||
trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, bytes, gpa, NULL);
|
||||
return X86EMUL_IO_NEEDED;
|
||||
}
|
||||
|
||||
|
||||
@@ -25,6 +25,7 @@ lib-y += memcpy_$(BITS).o
|
||||
lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o
|
||||
lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o
|
||||
lib-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
|
||||
lib-$(CONFIG_RETPOLINE) += retpoline.o
|
||||
|
||||
obj-y += msr.o msr-reg.o msr-reg-export.o hweight.o
|
||||
|
||||
|
||||
@@ -29,7 +29,8 @@
|
||||
#include <asm/errno.h>
|
||||
#include <asm/asm.h>
|
||||
#include <asm/export.h>
|
||||
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
/*
|
||||
* computes a partial checksum, e.g. for TCP/UDP fragments
|
||||
*/
|
||||
@@ -156,7 +157,7 @@ ENTRY(csum_partial)
|
||||
negl %ebx
|
||||
lea 45f(%ebx,%ebx,2), %ebx
|
||||
testl %esi, %esi
|
||||
jmp *%ebx
|
||||
JMP_NOSPEC %ebx
|
||||
|
||||
# Handle 2-byte-aligned regions
|
||||
20: addw (%esi), %ax
|
||||
@@ -439,7 +440,7 @@ ENTRY(csum_partial_copy_generic)
|
||||
andl $-32,%edx
|
||||
lea 3f(%ebx,%ebx), %ebx
|
||||
testl %esi, %esi
|
||||
jmp *%ebx
|
||||
JMP_NOSPEC %ebx
|
||||
1: addl $64,%esi
|
||||
addl $64,%edi
|
||||
SRC(movb -32(%edx),%bl) ; SRC(movb (%edx),%bl)
|
||||
|
||||
arch/x86/lib/retpoline.S (new file, 48 lines)
@@ -0,0 +1,48 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
|
||||
#include <linux/stringify.h>
|
||||
#include <linux/linkage.h>
|
||||
#include <asm/dwarf2.h>
|
||||
#include <asm/cpufeatures.h>
|
||||
#include <asm/alternative-asm.h>
|
||||
#include <asm/export.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
.macro THUNK reg
|
||||
.section .text.__x86.indirect_thunk.\reg
|
||||
|
||||
ENTRY(__x86_indirect_thunk_\reg)
|
||||
CFI_STARTPROC
|
||||
JMP_NOSPEC %\reg
|
||||
CFI_ENDPROC
|
||||
ENDPROC(__x86_indirect_thunk_\reg)
|
||||
.endm
|
||||
|
||||
/*
|
||||
* Despite being an assembler file we can't just use .irp here
|
||||
* because __KSYM_DEPS__ only uses the C preprocessor and would
|
||||
* only see one instance of "__x86_indirect_thunk_\reg" rather
|
||||
* than one per register with the correct names. So we do it
|
||||
* the simple and nasty way...
|
||||
*/
|
||||
#define EXPORT_THUNK(reg) EXPORT_SYMBOL(__x86_indirect_thunk_ ## reg)
|
||||
#define GENERATE_THUNK(reg) THUNK reg ; EXPORT_THUNK(reg)
|
||||
|
||||
GENERATE_THUNK(_ASM_AX)
|
||||
GENERATE_THUNK(_ASM_BX)
|
||||
GENERATE_THUNK(_ASM_CX)
|
||||
GENERATE_THUNK(_ASM_DX)
|
||||
GENERATE_THUNK(_ASM_SI)
|
||||
GENERATE_THUNK(_ASM_DI)
|
||||
GENERATE_THUNK(_ASM_BP)
|
||||
GENERATE_THUNK(_ASM_SP)
|
||||
#ifdef CONFIG_64BIT
|
||||
GENERATE_THUNK(r8)
|
||||
GENERATE_THUNK(r9)
|
||||
GENERATE_THUNK(r10)
|
||||
GENERATE_THUNK(r11)
|
||||
GENERATE_THUNK(r12)
|
||||
GENERATE_THUNK(r13)
|
||||
GENERATE_THUNK(r14)
|
||||
GENERATE_THUNK(r15)
|
||||
#endif
|
||||
@@ -197,6 +197,8 @@ static int kaiser_add_user_map(const void *__start_addr, unsigned long size,
|
||||
* requires that not to be #defined to 0): so mask it off here.
|
||||
*/
|
||||
flags &= ~_PAGE_GLOBAL;
|
||||
if (!(__supported_pte_mask & _PAGE_NX))
|
||||
flags &= ~_PAGE_NX;
|
||||
|
||||
for (; address < end_addr; address += PAGE_SIZE) {
|
||||
target_address = get_pa_from_mapping(address);
|
||||
|
||||
@@ -345,13 +345,6 @@ static inline void _pgd_free(pgd_t *pgd)
|
||||
}
|
||||
#else
|
||||
|
||||
/*
|
||||
* Instead of one pgd, Kaiser acquires two pgds. Being order-1, it is
|
||||
* both 8k in size and 8k-aligned. That lets us just flip bit 12
|
||||
* in a pointer to swap between the two 4k halves.
|
||||
*/
|
||||
#define PGD_ALLOCATION_ORDER kaiser_enabled
|
||||
|
||||
static inline pgd_t *_pgd_alloc(void)
|
||||
{
|
||||
return (pgd_t *)__get_free_pages(PGALLOC_GFP, PGD_ALLOCATION_ORDER);
|
||||
|
||||
@@ -110,7 +110,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
|
||||
* mapped in the new pgd, we'll double-fault. Forcibly
|
||||
* map it.
|
||||
*/
|
||||
unsigned int stack_pgd_index = pgd_index(current_stack_pointer());
|
||||
unsigned int stack_pgd_index = pgd_index(current_stack_pointer);
|
||||
|
||||
pgd_t *pgd = next->pgd + stack_pgd_index;
|
||||
|
||||
|
||||
@@ -142,7 +142,7 @@ int __init efi_alloc_page_tables(void)
|
||||
return 0;
|
||||
|
||||
gfp_mask = GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO;
|
||||
efi_pgd = (pgd_t *)__get_free_page(gfp_mask);
|
||||
efi_pgd = (pgd_t *)__get_free_pages(gfp_mask, PGD_ALLOCATION_ORDER);
|
||||
if (!efi_pgd)
|
||||
return -ENOMEM;
|
||||
|
||||
|
||||
@@ -167,6 +167,18 @@ void crypto_remove_spawns(struct crypto_alg *alg, struct list_head *list,
|
||||
|
||||
spawn->alg = NULL;
|
||||
spawns = &inst->alg.cra_users;
|
||||
|
||||
/*
|
||||
* We may encounter an unregistered instance here, since
|
||||
* an instance's spawns are set up prior to the instance
|
||||
* being registered. An unregistered instance will have
|
||||
* NULL ->cra_users.next, since ->cra_users isn't
|
||||
* properly initialized until registration. But an
|
||||
* unregistered instance cannot have any users, so treat
|
||||
* it the same as ->cra_users being empty.
|
||||
*/
|
||||
if (spawns->next == NULL)
|
||||
break;
|
||||
}
|
||||
} while ((spawns = crypto_more_spawns(alg, &stack, &top,
|
||||
&secondary_spawns)));
|
||||
|
||||
@@ -235,6 +235,9 @@ config GENERIC_CPU_DEVICES
|
||||
config GENERIC_CPU_AUTOPROBE
|
||||
bool
|
||||
|
||||
config GENERIC_CPU_VULNERABILITIES
|
||||
bool
|
||||
|
||||
config SOC_BUS
|
||||
bool
|
||||
|
||||
|
||||
@@ -499,10 +499,58 @@ static void __init cpu_dev_register_generic(void)
|
||||
#endif
|
||||
}
|
||||
|
||||
#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
|
||||
|
||||
ssize_t __weak cpu_show_meltdown(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
return sprintf(buf, "Not affected\n");
|
||||
}
|
||||
|
||||
ssize_t __weak cpu_show_spectre_v1(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
return sprintf(buf, "Not affected\n");
|
||||
}
|
||||
|
||||
ssize_t __weak cpu_show_spectre_v2(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
return sprintf(buf, "Not affected\n");
|
||||
}
|
||||
|
||||
static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
|
||||
static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
|
||||
static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
|
||||
|
||||
static struct attribute *cpu_root_vulnerabilities_attrs[] = {
|
||||
&dev_attr_meltdown.attr,
|
||||
&dev_attr_spectre_v1.attr,
|
||||
&dev_attr_spectre_v2.attr,
|
||||
NULL
|
||||
};
|
||||
|
||||
static const struct attribute_group cpu_root_vulnerabilities_group = {
|
||||
.name = "vulnerabilities",
|
||||
.attrs = cpu_root_vulnerabilities_attrs,
|
||||
};
|
||||
|
||||
static void __init cpu_register_vulnerabilities(void)
|
||||
{
|
||||
if (sysfs_create_group(&cpu_subsys.dev_root->kobj,
|
||||
&cpu_root_vulnerabilities_group))
|
||||
pr_err("Unable to register CPU vulnerabilities\n");
|
||||
}
|
||||
|
||||
#else
|
||||
static inline void cpu_register_vulnerabilities(void) { }
|
||||
#endif
|
||||
|
||||
void __init cpu_dev_init(void)
|
||||
{
|
||||
if (subsys_system_register(&cpu_subsys, cpu_root_attr_groups))
|
||||
panic("Failed to register CPU subsystem");
|
||||
|
||||
cpu_dev_register_generic();
|
||||
cpu_register_vulnerabilities();
|
||||
}
|
||||
|
||||
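With the vulnerabilities attribute group registered, the three files appear under /sys/devices/system/cpu/vulnerabilities/ and report either "Not affected", "Vulnerable", or a "Mitigation: ..." string, as the cpu_show_* handlers above produce. A small userspace reader, sketched here for illustration only (assumes the default sysfs mount point; not part of the patch):

#include <stdio.h>

int main(void)
{
	static const char *files[] = {
		"/sys/devices/system/cpu/vulnerabilities/meltdown",
		"/sys/devices/system/cpu/vulnerabilities/spectre_v1",
		"/sys/devices/system/cpu/vulnerabilities/spectre_v2",
	};
	char line[128];

	for (int i = 0; i < 3; i++) {
		FILE *f = fopen(files[i], "r");

		/* Each file is a single line ending in '\n'. */
		if (f && fgets(line, sizeof(line), f))
			printf("%s: %s", files[i], line);
		if (f)
			fclose(f);
	}
	return 0;
}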
@@ -4511,7 +4511,7 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
|
||||
segment_size = rbd_obj_bytes(&rbd_dev->header);
|
||||
blk_queue_max_hw_sectors(q, segment_size / SECTOR_SIZE);
|
||||
q->limits.max_sectors = queue_max_hw_sectors(q);
|
||||
blk_queue_max_segments(q, segment_size / SECTOR_SIZE);
|
||||
blk_queue_max_segments(q, USHRT_MAX);
|
||||
blk_queue_max_segment_size(q, segment_size);
|
||||
blk_queue_io_min(q, segment_size);
|
||||
blk_queue_io_opt(q, segment_size);
|
||||
|
||||
@@ -2729,6 +2729,8 @@ static int vmw_cmd_dx_view_define(struct vmw_private *dev_priv,
|
||||
}
|
||||
|
||||
view_type = vmw_view_cmd_to_type(header->id);
|
||||
if (view_type == vmw_view_max)
|
||||
return -EINVAL;
|
||||
cmd = container_of(header, typeof(*cmd), header);
|
||||
ret = vmw_cmd_res_check(dev_priv, sw_context, vmw_res_surface,
|
||||
user_surface_converter,
|
||||
|
||||
@@ -31,6 +31,7 @@
|
||||
#include <linux/clockchips.h>
|
||||
#include <asm/hyperv.h>
|
||||
#include <asm/mshyperv.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
#include "hyperv_vmbus.h"
|
||||
|
||||
/* The one and only */
|
||||
@@ -103,9 +104,10 @@ u64 hv_do_hypercall(u64 control, void *input, void *output)
|
||||
return (u64)ULLONG_MAX;
|
||||
|
||||
__asm__ __volatile__("mov %0, %%r8" : : "r" (output_address) : "r8");
|
||||
__asm__ __volatile__("call *%3" : "=a" (hv_status) :
|
||||
__asm__ __volatile__(CALL_NOSPEC :
|
||||
"=a" (hv_status) :
|
||||
"c" (control), "d" (input_address),
|
||||
"m" (hypercall_page));
|
||||
THUNK_TARGET(hypercall_page));
|
||||
|
||||
return hv_status;
|
||||
|
||||
@@ -123,11 +125,12 @@ u64 hv_do_hypercall(u64 control, void *input, void *output)
|
||||
if (!hypercall_page)
|
||||
return (u64)ULLONG_MAX;
|
||||
|
||||
__asm__ __volatile__ ("call *%8" : "=d"(hv_status_hi),
|
||||
__asm__ __volatile__ (CALL_NOSPEC : "=d"(hv_status_hi),
|
||||
"=a"(hv_status_lo) : "d" (control_hi),
|
||||
"a" (control_lo), "b" (input_address_hi),
|
||||
"c" (input_address_lo), "D"(output_address_hi),
|
||||
"S"(output_address_lo), "m" (hypercall_page));
|
||||
"S"(output_address_lo),
|
||||
THUNK_TARGET(hypercall_page));
|
||||
|
||||
return hv_status_lo | ((u64)hv_status_hi << 32);
|
||||
#endif /* !x86_64 */
|
||||
|
||||
@@ -992,8 +992,7 @@ static int srpt_init_ch_qp(struct srpt_rdma_ch *ch, struct ib_qp *qp)
|
||||
return -ENOMEM;
|
||||
|
||||
attr->qp_state = IB_QPS_INIT;
|
||||
attr->qp_access_flags = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ |
|
||||
IB_ACCESS_REMOTE_WRITE;
|
||||
attr->qp_access_flags = IB_ACCESS_LOCAL_WRITE;
|
||||
attr->port_num = ch->sport->port;
|
||||
attr->pkey_index = 0;
|
||||
|
||||
|
||||
@@ -1554,7 +1554,8 @@ static unsigned long __scan(struct dm_bufio_client *c, unsigned long nr_to_scan,
|
||||
int l;
|
||||
struct dm_buffer *b, *tmp;
|
||||
unsigned long freed = 0;
|
||||
unsigned long count = nr_to_scan;
|
||||
unsigned long count = c->n_buffers[LIST_CLEAN] +
|
||||
c->n_buffers[LIST_DIRTY];
|
||||
unsigned long retain_target = get_retain_buffers(c);
|
||||
|
||||
for (l = 0; l < LIST_SIZE; l++) {
|
||||
@@ -1591,6 +1592,7 @@ dm_bufio_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
|
||||
{
|
||||
struct dm_bufio_client *c;
|
||||
unsigned long count;
|
||||
unsigned long retain_target;
|
||||
|
||||
c = container_of(shrink, struct dm_bufio_client, shrinker);
|
||||
if (sc->gfp_mask & __GFP_FS)
|
||||
@@ -1599,8 +1601,9 @@ dm_bufio_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
|
||||
return 0;
|
||||
|
||||
count = c->n_buffers[LIST_CLEAN] + c->n_buffers[LIST_DIRTY];
|
||||
retain_target = get_retain_buffers(c);
|
||||
dm_bufio_unlock(c);
|
||||
return count;
|
||||
return (count < retain_target) ? 0 : (count - retain_target);
|
||||
}
|
||||
|
||||
/*
|
||||
|
||||
@@ -449,7 +449,7 @@ static int gs_usb_set_bittiming(struct net_device *netdev)
|
||||
dev_err(netdev->dev.parent, "Couldn't set bittimings (err=%d)",
|
||||
rc);
|
||||
|
||||
return rc;
|
||||
return (rc > 0) ? 0 : rc;
|
||||
}
|
||||
|
||||
static void gs_usb_xmit_callback(struct urb *urb)
|
||||
|
||||
@@ -1364,6 +1364,9 @@ out:
|
||||
* Checks to see if the link status of the hardware has changed. If a
|
||||
* change in link status has been detected, then we read the PHY registers
|
||||
* to get the current speed/duplex if link exists.
|
||||
*
|
||||
* Returns a negative error code (-E1000_ERR_*) or 0 (link down) or 1 (link
|
||||
* up).
|
||||
**/
|
||||
static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
|
||||
{
|
||||
@@ -1379,7 +1382,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
|
||||
* Change or Rx Sequence Error interrupt.
|
||||
*/
|
||||
if (!mac->get_link_status)
|
||||
return 0;
|
||||
return 1;
|
||||
|
||||
/* First we want to see if the MII Status Register reports
|
||||
* link. If so, then we want to get the current speed/duplex
|
||||
@@ -1611,10 +1614,12 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
|
||||
* different link partner.
|
||||
*/
|
||||
ret_val = e1000e_config_fc_after_link_up(hw);
|
||||
if (ret_val)
|
||||
if (ret_val) {
|
||||
e_dbg("Error configuring flow control\n");
|
||||
return ret_val;
|
||||
}
|
||||
|
||||
return ret_val;
|
||||
return 1;
|
||||
}
|
||||
|
||||
static s32 e1000_get_variants_ich8lan(struct e1000_adapter *adapter)
|
||||
|
||||
@@ -1328,9 +1328,9 @@ set_trap:
|
||||
static void __mlxsw_sp_nexthop_neigh_update(struct mlxsw_sp_nexthop *nh,
|
||||
bool removing)
|
||||
{
|
||||
if (!removing && !nh->should_offload)
|
||||
if (!removing)
|
||||
nh->should_offload = 1;
|
||||
else if (removing && nh->offloaded)
|
||||
else
|
||||
nh->should_offload = 0;
|
||||
nh->update = 1;
|
||||
}
|
||||
|
||||
@@ -3087,18 +3087,37 @@ static int sh_eth_drv_probe(struct platform_device *pdev)
|
||||
/* ioremap the TSU registers */
|
||||
if (mdp->cd->tsu) {
|
||||
struct resource *rtsu;
|
||||
|
||||
rtsu = platform_get_resource(pdev, IORESOURCE_MEM, 1);
|
||||
mdp->tsu_addr = devm_ioremap_resource(&pdev->dev, rtsu);
|
||||
if (IS_ERR(mdp->tsu_addr)) {
|
||||
ret = PTR_ERR(mdp->tsu_addr);
|
||||
if (!rtsu) {
|
||||
dev_err(&pdev->dev, "no TSU resource\n");
|
||||
ret = -ENODEV;
|
||||
goto out_release;
|
||||
}
|
||||
/* We can only request the TSU region for the first port
|
||||
* of the two sharing this TSU for the probe to succeed...
|
||||
*/
|
||||
if (devno % 2 == 0 &&
|
||||
!devm_request_mem_region(&pdev->dev, rtsu->start,
|
||||
resource_size(rtsu),
|
||||
dev_name(&pdev->dev))) {
|
||||
dev_err(&pdev->dev, "can't request TSU resource.\n");
|
||||
ret = -EBUSY;
|
||||
goto out_release;
|
||||
}
|
||||
mdp->tsu_addr = devm_ioremap(&pdev->dev, rtsu->start,
|
||||
resource_size(rtsu));
|
||||
if (!mdp->tsu_addr) {
|
||||
dev_err(&pdev->dev, "TSU region ioremap() failed.\n");
|
||||
ret = -ENOMEM;
|
||||
goto out_release;
|
||||
}
|
||||
mdp->port = devno % 2;
|
||||
ndev->features = NETIF_F_HW_VLAN_CTAG_FILTER;
|
||||
}
|
||||
|
||||
/* initialize first or needed device */
|
||||
if (!devno || pd->needs_init) {
|
||||
/* Need to init only the first port of the two sharing a TSU */
|
||||
if (devno % 2 == 0) {
|
||||
if (mdp->cd->chip_reset)
|
||||
mdp->cd->chip_reset(ndev);
|
||||
|
||||
|
||||
@@ -280,8 +280,14 @@ static void stmmac_eee_ctrl_timer(unsigned long arg)
|
||||
bool stmmac_eee_init(struct stmmac_priv *priv)
|
||||
{
|
||||
unsigned long flags;
|
||||
int interface = priv->plat->interface;
|
||||
bool ret = false;
|
||||
|
||||
if ((interface != PHY_INTERFACE_MODE_MII) &&
|
||||
(interface != PHY_INTERFACE_MODE_GMII) &&
|
||||
!phy_interface_mode_is_rgmii(interface))
|
||||
goto out;
|
||||
|
||||
/* Using PCS we cannot deal with the phy registers at this stage
|
||||
* so we do not support extra feature like EEE.
|
||||
*/
|
||||
|
||||
@@ -293,12 +293,9 @@ static struct sk_buff *cx82310_tx_fixup(struct usbnet *dev, struct sk_buff *skb,
|
||||
{
|
||||
int len = skb->len;
|
||||
|
||||
if (skb_headroom(skb) < 2) {
|
||||
struct sk_buff *skb2 = skb_copy_expand(skb, 2, 0, flags);
|
||||
if (skb_cow_head(skb, 2)) {
|
||||
dev_kfree_skb_any(skb);
|
||||
skb = skb2;
|
||||
if (!skb)
|
||||
return NULL;
|
||||
return NULL;
|
||||
}
|
||||
skb_push(skb, 2);
|
||||
|
||||
|
||||
@@ -2419,14 +2419,9 @@ static struct sk_buff *lan78xx_tx_prep(struct lan78xx_net *dev,
|
||||
{
|
||||
u32 tx_cmd_a, tx_cmd_b;
|
||||
|
||||
if (skb_headroom(skb) < TX_OVERHEAD) {
|
||||
struct sk_buff *skb2;
|
||||
|
||||
skb2 = skb_copy_expand(skb, TX_OVERHEAD, 0, flags);
|
||||
if (skb_cow_head(skb, TX_OVERHEAD)) {
|
||||
dev_kfree_skb_any(skb);
|
||||
skb = skb2;
|
||||
if (!skb)
|
||||
return NULL;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
if (lan78xx_linearize(skb) < 0)
|
||||
|
||||
@@ -2205,13 +2205,9 @@ static struct sk_buff *smsc75xx_tx_fixup(struct usbnet *dev,
|
||||
{
|
||||
u32 tx_cmd_a, tx_cmd_b;
|
||||
|
||||
if (skb_headroom(skb) < SMSC75XX_TX_OVERHEAD) {
|
||||
struct sk_buff *skb2 =
|
||||
skb_copy_expand(skb, SMSC75XX_TX_OVERHEAD, 0, flags);
|
||||
if (skb_cow_head(skb, SMSC75XX_TX_OVERHEAD)) {
|
||||
dev_kfree_skb_any(skb);
|
||||
skb = skb2;
|
||||
if (!skb)
|
||||
return NULL;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
tx_cmd_a = (u32)(skb->len & TX_CMD_A_LEN) | TX_CMD_A_FCS;
|
||||
|
||||
@@ -456,14 +456,9 @@ static struct sk_buff *sr9700_tx_fixup(struct usbnet *dev, struct sk_buff *skb,
|
||||
|
||||
len = skb->len;
|
||||
|
||||
if (skb_headroom(skb) < SR_TX_OVERHEAD) {
|
||||
struct sk_buff *skb2;
|
||||
|
||||
skb2 = skb_copy_expand(skb, SR_TX_OVERHEAD, 0, flags);
|
||||
if (skb_cow_head(skb, SR_TX_OVERHEAD)) {
|
||||
dev_kfree_skb_any(skb);
|
||||
skb = skb2;
|
||||
if (!skb)
|
||||
return NULL;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
__skb_push(skb, SR_TX_OVERHEAD);
|
||||
|
||||
@@ -548,6 +548,11 @@ static int ath10k_htt_rx_crypto_param_len(struct ath10k *ar,
|
||||
return IEEE80211_TKIP_IV_LEN;
|
||||
case HTT_RX_MPDU_ENCRYPT_AES_CCM_WPA2:
|
||||
return IEEE80211_CCMP_HDR_LEN;
|
||||
case HTT_RX_MPDU_ENCRYPT_AES_CCM256_WPA2:
|
||||
return IEEE80211_CCMP_256_HDR_LEN;
|
||||
case HTT_RX_MPDU_ENCRYPT_AES_GCMP_WPA2:
|
||||
case HTT_RX_MPDU_ENCRYPT_AES_GCMP256_WPA2:
|
||||
return IEEE80211_GCMP_HDR_LEN;
|
||||
case HTT_RX_MPDU_ENCRYPT_WEP128:
|
||||
case HTT_RX_MPDU_ENCRYPT_WAPI:
|
||||
break;
|
||||
@@ -573,6 +578,11 @@ static int ath10k_htt_rx_crypto_tail_len(struct ath10k *ar,
|
||||
return IEEE80211_TKIP_ICV_LEN;
|
||||
case HTT_RX_MPDU_ENCRYPT_AES_CCM_WPA2:
|
||||
return IEEE80211_CCMP_MIC_LEN;
|
||||
case HTT_RX_MPDU_ENCRYPT_AES_CCM256_WPA2:
|
||||
return IEEE80211_CCMP_256_MIC_LEN;
|
||||
case HTT_RX_MPDU_ENCRYPT_AES_GCMP_WPA2:
|
||||
case HTT_RX_MPDU_ENCRYPT_AES_GCMP256_WPA2:
|
||||
return IEEE80211_GCMP_MIC_LEN;
|
||||
case HTT_RX_MPDU_ENCRYPT_WEP128:
|
||||
case HTT_RX_MPDU_ENCRYPT_WAPI:
|
||||
break;
|
||||
@@ -1024,9 +1034,21 @@ static void ath10k_htt_rx_h_undecap_raw(struct ath10k *ar,
|
||||
hdr = (void *)msdu->data;
|
||||
|
||||
/* Tail */
|
||||
if (status->flag & RX_FLAG_IV_STRIPPED)
|
||||
if (status->flag & RX_FLAG_IV_STRIPPED) {
|
||||
skb_trim(msdu, msdu->len -
|
||||
ath10k_htt_rx_crypto_tail_len(ar, enctype));
|
||||
} else {
|
||||
/* MIC */
|
||||
if ((status->flag & RX_FLAG_MIC_STRIPPED) &&
|
||||
enctype == HTT_RX_MPDU_ENCRYPT_AES_CCM_WPA2)
|
||||
skb_trim(msdu, msdu->len - 8);
|
||||
|
||||
/* ICV */
|
||||
if (status->flag & RX_FLAG_ICV_STRIPPED &&
|
||||
enctype != HTT_RX_MPDU_ENCRYPT_AES_CCM_WPA2)
|
||||
skb_trim(msdu, msdu->len -
|
||||
ath10k_htt_rx_crypto_tail_len(ar, enctype));
|
||||
}
|
||||
|
||||
/* MMIC */
|
||||
if ((status->flag & RX_FLAG_MMIC_STRIPPED) &&
|
||||
@@ -1048,7 +1070,8 @@ static void ath10k_htt_rx_h_undecap_raw(struct ath10k *ar,
|
||||
static void ath10k_htt_rx_h_undecap_nwifi(struct ath10k *ar,
|
||||
struct sk_buff *msdu,
|
||||
struct ieee80211_rx_status *status,
|
||||
const u8 first_hdr[64])
|
||||
const u8 first_hdr[64],
|
||||
enum htt_rx_mpdu_encrypt_type enctype)
|
||||
{
|
||||
struct ieee80211_hdr *hdr;
|
||||
struct htt_rx_desc *rxd;
|
||||
@@ -1056,6 +1079,7 @@ static void ath10k_htt_rx_h_undecap_nwifi(struct ath10k *ar,
|
||||
u8 da[ETH_ALEN];
|
||||
u8 sa[ETH_ALEN];
|
||||
int l3_pad_bytes;
|
||||
int bytes_aligned = ar->hw_params.decap_align_bytes;
|
||||
|
||||
/* Delivered decapped frame:
|
||||
* [nwifi 802.11 header] <-- replaced with 802.11 hdr
|
||||
@@ -1084,6 +1108,14 @@ static void ath10k_htt_rx_h_undecap_nwifi(struct ath10k *ar,
|
||||
/* push original 802.11 header */
|
||||
hdr = (struct ieee80211_hdr *)first_hdr;
|
||||
hdr_len = ieee80211_hdrlen(hdr->frame_control);
|
||||
|
||||
if (!(status->flag & RX_FLAG_IV_STRIPPED)) {
|
||||
memcpy(skb_push(msdu,
|
||||
ath10k_htt_rx_crypto_param_len(ar, enctype)),
|
||||
(void *)hdr + round_up(hdr_len, bytes_aligned),
|
||||
ath10k_htt_rx_crypto_param_len(ar, enctype));
|
||||
}
|
||||
|
||||
memcpy(skb_push(msdu, hdr_len), hdr, hdr_len);
|
||||
|
||||
/* original 802.11 header has a different DA and in
|
||||
@@ -1144,6 +1176,7 @@ static void ath10k_htt_rx_h_undecap_eth(struct ath10k *ar,
|
||||
u8 sa[ETH_ALEN];
|
||||
int l3_pad_bytes;
|
||||
struct htt_rx_desc *rxd;
|
||||
int bytes_aligned = ar->hw_params.decap_align_bytes;
|
||||
|
||||
/* Delivered decapped frame:
|
||||
* [eth header] <-- replaced with 802.11 hdr & rfc1042/llc
|
||||
@@ -1172,6 +1205,14 @@ static void ath10k_htt_rx_h_undecap_eth(struct ath10k *ar,
|
||||
/* push original 802.11 header */
|
||||
hdr = (struct ieee80211_hdr *)first_hdr;
|
||||
hdr_len = ieee80211_hdrlen(hdr->frame_control);
|
||||
|
||||
if (!(status->flag & RX_FLAG_IV_STRIPPED)) {
|
||||
memcpy(skb_push(msdu,
|
||||
ath10k_htt_rx_crypto_param_len(ar, enctype)),
|
||||
(void *)hdr + round_up(hdr_len, bytes_aligned),
|
||||
ath10k_htt_rx_crypto_param_len(ar, enctype));
|
||||
}
|
||||
|
||||
memcpy(skb_push(msdu, hdr_len), hdr, hdr_len);
|
||||
|
||||
/* original 802.11 header has a different DA and in
|
||||
@@ -1185,12 +1226,14 @@ static void ath10k_htt_rx_h_undecap_eth(struct ath10k *ar,
|
||||
static void ath10k_htt_rx_h_undecap_snap(struct ath10k *ar,
|
||||
struct sk_buff *msdu,
|
||||
struct ieee80211_rx_status *status,
|
||||
const u8 first_hdr[64])
|
||||
const u8 first_hdr[64],
|
||||
enum htt_rx_mpdu_encrypt_type enctype)
|
||||
{
|
||||
struct ieee80211_hdr *hdr;
|
||||
size_t hdr_len;
|
||||
int l3_pad_bytes;
|
||||
struct htt_rx_desc *rxd;
|
||||
int bytes_aligned = ar->hw_params.decap_align_bytes;
|
||||
|
||||
/* Delivered decapped frame:
|
||||
* [amsdu header] <-- replaced with 802.11 hdr
|
||||
@@ -1206,6 +1249,14 @@ static void ath10k_htt_rx_h_undecap_snap(struct ath10k *ar,
|
||||
|
||||
hdr = (struct ieee80211_hdr *)first_hdr;
|
||||
hdr_len = ieee80211_hdrlen(hdr->frame_control);
|
||||
|
||||
if (!(status->flag & RX_FLAG_IV_STRIPPED)) {
|
||||
memcpy(skb_push(msdu,
|
||||
ath10k_htt_rx_crypto_param_len(ar, enctype)),
|
||||
(void *)hdr + round_up(hdr_len, bytes_aligned),
|
||||
ath10k_htt_rx_crypto_param_len(ar, enctype));
|
||||
}
|
||||
|
||||
memcpy(skb_push(msdu, hdr_len), hdr, hdr_len);
|
||||
}
|
||||
|
||||
@@ -1240,13 +1291,15 @@ static void ath10k_htt_rx_h_undecap(struct ath10k *ar,
|
||||
is_decrypted);
|
||||
break;
|
||||
case RX_MSDU_DECAP_NATIVE_WIFI:
|
||||
ath10k_htt_rx_h_undecap_nwifi(ar, msdu, status, first_hdr);
|
||||
ath10k_htt_rx_h_undecap_nwifi(ar, msdu, status, first_hdr,
|
||||
enctype);
|
||||
break;
|
||||
case RX_MSDU_DECAP_ETHERNET2_DIX:
|
||||
ath10k_htt_rx_h_undecap_eth(ar, msdu, status, first_hdr, enctype);
|
||||
break;
|
||||
case RX_MSDU_DECAP_8023_SNAP_LLC:
|
||||
ath10k_htt_rx_h_undecap_snap(ar, msdu, status, first_hdr);
|
||||
ath10k_htt_rx_h_undecap_snap(ar, msdu, status, first_hdr,
|
||||
enctype);
|
||||
break;
|
||||
}
|
||||
}
|
||||
@@ -1289,7 +1342,8 @@ static void ath10k_htt_rx_h_csum_offload(struct sk_buff *msdu)
|
||||
|
||||
static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
|
||||
struct sk_buff_head *amsdu,
|
||||
struct ieee80211_rx_status *status)
|
||||
struct ieee80211_rx_status *status,
|
||||
bool fill_crypt_header)
|
||||
{
|
||||
struct sk_buff *first;
|
||||
struct sk_buff *last;
|
||||
@@ -1299,7 +1353,6 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
|
||||
enum htt_rx_mpdu_encrypt_type enctype;
|
||||
u8 first_hdr[64];
|
||||
u8 *qos;
|
||||
size_t hdr_len;
|
||||
bool has_fcs_err;
|
||||
bool has_crypto_err;
|
||||
bool has_tkip_err;
|
||||
@@ -1324,15 +1377,17 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
|
||||
* decapped header. It'll be used for undecapping of each MSDU.
|
||||
*/
|
||||
hdr = (void *)rxd->rx_hdr_status;
|
||||
hdr_len = ieee80211_hdrlen(hdr->frame_control);
|
||||
memcpy(first_hdr, hdr, hdr_len);
|
||||
memcpy(first_hdr, hdr, RX_HTT_HDR_STATUS_LEN);
|
||||
|
||||
/* Each A-MSDU subframe will use the original header as the base and be
|
||||
* reported as a separate MSDU so strip the A-MSDU bit from QoS Ctl.
|
||||
*/
|
||||
hdr = (void *)first_hdr;
|
||||
qos = ieee80211_get_qos_ctl(hdr);
|
||||
qos[0] &= ~IEEE80211_QOS_CTL_A_MSDU_PRESENT;
|
||||
|
||||
if (ieee80211_is_data_qos(hdr->frame_control)) {
|
||||
qos = ieee80211_get_qos_ctl(hdr);
|
||||
qos[0] &= ~IEEE80211_QOS_CTL_A_MSDU_PRESENT;
|
||||
}
|
||||
|
||||
/* Some attention flags are valid only in the last MSDU. */
|
||||
last = skb_peek_tail(amsdu);
|
||||
@@ -1379,9 +1434,14 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
|
||||
status->flag |= RX_FLAG_DECRYPTED;
|
||||
|
||||
if (likely(!is_mgmt))
|
||||
status->flag |= RX_FLAG_IV_STRIPPED |
|
||||
RX_FLAG_MMIC_STRIPPED;
|
||||
}
|
||||
status->flag |= RX_FLAG_MMIC_STRIPPED;
|
||||
|
||||
if (fill_crypt_header)
|
||||
status->flag |= RX_FLAG_MIC_STRIPPED |
|
||||
RX_FLAG_ICV_STRIPPED;
|
||||
else
|
||||
status->flag |= RX_FLAG_IV_STRIPPED;
|
||||
}
|
||||
|
||||
skb_queue_walk(amsdu, msdu) {
|
||||
ath10k_htt_rx_h_csum_offload(msdu);
|
||||
@@ -1397,6 +1457,9 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
|
||||
if (is_mgmt)
|
||||
continue;
|
||||
|
||||
if (fill_crypt_header)
|
||||
continue;
|
||||
|
||||
hdr = (void *)msdu->data;
|
||||
hdr->frame_control &= ~__cpu_to_le16(IEEE80211_FCTL_PROTECTED);
|
||||
}
|
||||
@@ -1407,6 +1470,9 @@ static void ath10k_htt_rx_h_deliver(struct ath10k *ar,
|
||||
struct ieee80211_rx_status *status)
|
||||
{
|
||||
struct sk_buff *msdu;
|
||||
struct sk_buff *first_subframe;
|
||||
|
||||
first_subframe = skb_peek(amsdu);
|
||||
|
||||
while ((msdu = __skb_dequeue(amsdu))) {
|
||||
/* Setup per-MSDU flags */
|
||||
@@ -1415,6 +1481,13 @@ static void ath10k_htt_rx_h_deliver(struct ath10k *ar,
|
||||
else
|
||||
status->flag |= RX_FLAG_AMSDU_MORE;
|
||||
|
||||
if (msdu == first_subframe) {
|
||||
first_subframe = NULL;
|
||||
status->flag &= ~RX_FLAG_ALLOW_SAME_PN;
|
||||
} else {
|
||||
status->flag |= RX_FLAG_ALLOW_SAME_PN;
|
||||
}
|
||||
|
||||
ath10k_process_rx(ar, status, msdu);
|
||||
}
|
||||
}
|
||||
@@ -1557,7 +1630,7 @@ static int ath10k_htt_rx_handle_amsdu(struct ath10k_htt *htt)
|
||||
ath10k_htt_rx_h_ppdu(ar, &amsdu, rx_status, 0xffff);
|
||||
ath10k_htt_rx_h_unchain(ar, &amsdu, ret > 0);
|
||||
ath10k_htt_rx_h_filter(ar, &amsdu, rx_status);
|
||||
ath10k_htt_rx_h_mpdu(ar, &amsdu, rx_status);
|
||||
ath10k_htt_rx_h_mpdu(ar, &amsdu, rx_status, true);
|
||||
ath10k_htt_rx_h_deliver(ar, &amsdu, rx_status);
|
||||
|
||||
return num_msdus;
|
||||
@@ -1892,7 +1965,7 @@ static int ath10k_htt_rx_in_ord_ind(struct ath10k *ar, struct sk_buff *skb)
|
||||
num_msdus += skb_queue_len(&amsdu);
|
||||
ath10k_htt_rx_h_ppdu(ar, &amsdu, status, vdev_id);
|
||||
ath10k_htt_rx_h_filter(ar, &amsdu, status);
|
||||
ath10k_htt_rx_h_mpdu(ar, &amsdu, status);
|
||||
ath10k_htt_rx_h_mpdu(ar, &amsdu, status, false);
|
||||
ath10k_htt_rx_h_deliver(ar, &amsdu, status);
|
||||
break;
|
||||
case -EAGAIN:
|
||||
|
||||
@@ -239,6 +239,9 @@ enum htt_rx_mpdu_encrypt_type {
|
||||
HTT_RX_MPDU_ENCRYPT_WAPI = 5,
|
||||
HTT_RX_MPDU_ENCRYPT_AES_CCM_WPA2 = 6,
|
||||
HTT_RX_MPDU_ENCRYPT_NONE = 7,
|
||||
HTT_RX_MPDU_ENCRYPT_AES_CCM256_WPA2 = 8,
|
||||
HTT_RX_MPDU_ENCRYPT_AES_GCMP_WPA2 = 9,
|
||||
HTT_RX_MPDU_ENCRYPT_AES_GCMP256_WPA2 = 10,
|
||||
};
|
||||
|
||||
#define RX_MPDU_START_INFO0_PEER_IDX_MASK 0x000007ff
|
||||
|
||||
@@ -848,5 +848,5 @@ static void __exit acpi_wmi_exit(void)
|
||||
pr_info("Mapper unloaded\n");
|
||||
}
|
||||
|
||||
subsys_initcall(acpi_wmi_init);
|
||||
subsys_initcall_sync(acpi_wmi_init);
|
||||
module_exit(acpi_wmi_exit);
|
||||
|
||||
@@ -766,10 +766,12 @@ static long ashmem_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
|
||||
break;
|
||||
case ASHMEM_SET_SIZE:
|
||||
ret = -EINVAL;
|
||||
mutex_lock(&ashmem_mutex);
|
||||
if (!asma->file) {
|
||||
ret = 0;
|
||||
asma->size = (size_t)arg;
|
||||
}
|
||||
mutex_unlock(&ashmem_mutex);
|
||||
break;
|
||||
case ASHMEM_GET_SIZE:
|
||||
ret = asma->size;
|
||||
|
||||
@@ -1940,7 +1940,6 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
|
||||
struct iscsi_tmr_req *tmr_req;
|
||||
struct iscsi_tm *hdr;
|
||||
int out_of_order_cmdsn = 0, ret;
|
||||
bool sess_ref = false;
|
||||
u8 function, tcm_function = TMR_UNKNOWN;
|
||||
|
||||
hdr = (struct iscsi_tm *) buf;
|
||||
@@ -1982,18 +1981,17 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
|
||||
buf);
|
||||
}
|
||||
|
||||
transport_init_se_cmd(&cmd->se_cmd, &iscsi_ops,
|
||||
conn->sess->se_sess, 0, DMA_NONE,
|
||||
TCM_SIMPLE_TAG, cmd->sense_buffer + 2);
|
||||
|
||||
target_get_sess_cmd(&cmd->se_cmd, true);
|
||||
|
||||
/*
|
||||
* TASK_REASSIGN for ERL=2 / connection stays inside of
|
||||
* LIO-Target $FABRIC_MOD
|
||||
*/
|
||||
if (function != ISCSI_TM_FUNC_TASK_REASSIGN) {
|
||||
transport_init_se_cmd(&cmd->se_cmd, &iscsi_ops,
|
||||
conn->sess->se_sess, 0, DMA_NONE,
|
||||
TCM_SIMPLE_TAG, cmd->sense_buffer + 2);
|
||||
|
||||
target_get_sess_cmd(&cmd->se_cmd, true);
|
||||
sess_ref = true;
|
||||
|
||||
switch (function) {
|
||||
case ISCSI_TM_FUNC_ABORT_TASK:
|
||||
tcm_function = TMR_ABORT_TASK;
|
||||
@@ -2132,12 +2130,8 @@ attach:
|
||||
* For connection recovery, this is also the default action for
|
||||
* TMR TASK_REASSIGN.
|
||||
*/
|
||||
if (sess_ref) {
|
||||
pr_debug("Handle TMR, using sess_ref=true check\n");
|
||||
target_put_sess_cmd(&cmd->se_cmd);
|
||||
}
|
||||
|
||||
iscsit_add_cmd_to_response_queue(cmd, conn, cmd->i_state);
|
||||
target_put_sess_cmd(&cmd->se_cmd);
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL(iscsit_handle_task_mgt_cmd);
|
||||
|
||||
@@ -133,6 +133,15 @@ static bool __target_check_io_state(struct se_cmd *se_cmd,
|
||||
spin_unlock(&se_cmd->t_state_lock);
|
||||
return false;
|
||||
}
|
||||
if (se_cmd->transport_state & CMD_T_PRE_EXECUTE) {
|
||||
if (se_cmd->scsi_status) {
|
||||
pr_debug("Attempted to abort io tag: %llu early failure"
|
||||
" status: 0x%02x\n", se_cmd->tag,
|
||||
se_cmd->scsi_status);
|
||||
spin_unlock(&se_cmd->t_state_lock);
|
||||
return false;
|
||||
}
|
||||
}
|
||||
if (sess->sess_tearing_down || se_cmd->cmd_wait_set) {
|
||||
pr_debug("Attempted to abort io tag: %llu already shutdown,"
|
||||
" skipping\n", se_cmd->tag);
|
||||
|
||||
@@ -1939,6 +1939,7 @@ void target_execute_cmd(struct se_cmd *cmd)
|
||||
}
|
||||
|
||||
cmd->t_state = TRANSPORT_PROCESSING;
|
||||
cmd->transport_state &= ~CMD_T_PRE_EXECUTE;
|
||||
cmd->transport_state |= CMD_T_ACTIVE|CMD_T_BUSY|CMD_T_SENT;
|
||||
spin_unlock_irq(&cmd->t_state_lock);
|
||||
|
||||
@@ -2592,6 +2593,7 @@ int target_get_sess_cmd(struct se_cmd *se_cmd, bool ack_kref)
|
||||
ret = -ESHUTDOWN;
|
||||
goto out;
|
||||
}
|
||||
se_cmd->transport_state |= CMD_T_PRE_EXECUTE;
|
||||
list_add_tail(&se_cmd->se_cmd_list, &se_sess->sess_cmd_list);
|
||||
out:
|
||||
spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
|
||||
|
||||
@@ -1086,7 +1086,8 @@ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
|
||||
|
||||
return 1;
|
||||
fail:
|
||||
|
||||
if (dev->eps[0].ring)
|
||||
xhci_ring_free(xhci, dev->eps[0].ring);
|
||||
if (dev->in_ctx)
|
||||
xhci_free_container_ctx(xhci, dev->in_ctx);
|
||||
if (dev->out_ctx)
|
||||
|
||||
@@ -292,6 +292,8 @@ static int usb3503_probe(struct usb3503 *hub)
|
||||
if (gpio_is_valid(hub->gpio_reset)) {
|
||||
err = devm_gpio_request_one(dev, hub->gpio_reset,
|
||||
GPIOF_OUT_INIT_LOW, "usb3503 reset");
|
||||
/* Datasheet defines a hardware reset to be at least 100us */
|
||||
usleep_range(100, 10000);
|
||||
if (err) {
|
||||
dev_err(dev,
|
||||
"unable to request GPIO %d as reset pin (%d)\n",
|
||||
|
||||
@@ -1002,7 +1002,9 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg
|
||||
break;
|
||||
|
||||
case MON_IOCQ_RING_SIZE:
|
||||
mutex_lock(&rp->fetch_lock);
|
||||
ret = rp->b_size;
|
||||
mutex_unlock(&rp->fetch_lock);
|
||||
break;
|
||||
|
||||
case MON_IOCT_RING_SIZE:
|
||||
@@ -1229,12 +1231,16 @@ static int mon_bin_vma_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
|
||||
unsigned long offset, chunk_idx;
|
||||
struct page *pageptr;
|
||||
|
||||
mutex_lock(&rp->fetch_lock);
|
||||
offset = vmf->pgoff << PAGE_SHIFT;
|
||||
if (offset >= rp->b_size)
|
||||
if (offset >= rp->b_size) {
|
||||
mutex_unlock(&rp->fetch_lock);
|
||||
return VM_FAULT_SIGBUS;
|
||||
}
|
||||
chunk_idx = offset / CHUNK_SIZE;
|
||||
pageptr = rp->b_vec[chunk_idx].pg;
|
||||
get_page(pageptr);
|
||||
mutex_unlock(&rp->fetch_lock);
|
||||
vmf->page = pageptr;
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -121,6 +121,7 @@ static const struct usb_device_id id_table[] = {
|
||||
{ USB_DEVICE(0x10C4, 0x8470) }, /* Juniper Networks BX Series System Console */
|
||||
{ USB_DEVICE(0x10C4, 0x8477) }, /* Balluff RFID */
|
||||
{ USB_DEVICE(0x10C4, 0x84B6) }, /* Starizona Hyperion */
|
||||
{ USB_DEVICE(0x10C4, 0x85A7) }, /* LifeScan OneTouch Verio IQ */
|
||||
{ USB_DEVICE(0x10C4, 0x85EA) }, /* AC-Services IBUS-IF */
|
||||
{ USB_DEVICE(0x10C4, 0x85EB) }, /* AC-Services CIS-IBUS */
|
||||
{ USB_DEVICE(0x10C4, 0x85F8) }, /* Virtenio Preon32 */
|
||||
@@ -171,6 +172,7 @@ static const struct usb_device_id id_table[] = {
|
||||
{ USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
|
||||
{ USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
|
||||
{ USB_DEVICE(0x18EF, 0xE025) }, /* ELV Marble Sound Board 1 */
|
||||
{ USB_DEVICE(0x18EF, 0xE030) }, /* ELV ALC 8xxx Battery Charger */
|
||||
{ USB_DEVICE(0x18EF, 0xE032) }, /* ELV TFD500 Data Logger */
|
||||
{ USB_DEVICE(0x1901, 0x0190) }, /* GE B850 CP2105 Recorder interface */
|
||||
{ USB_DEVICE(0x1901, 0x0193) }, /* GE B650 CP2104 PMC interface */
|
||||
|
||||
@@ -156,6 +156,13 @@ UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
|
||||
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
|
||||
US_FL_NO_ATA_1X),
|
||||
|
||||
/* Reported-by: Icenowy Zheng <icenowy@aosc.io> */
|
||||
UNUSUAL_DEV(0x2537, 0x1068, 0x0000, 0x9999,
|
||||
"Norelsys",
|
||||
"NS1068X",
|
||||
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
|
||||
US_FL_IGNORE_UAS),
|
||||
|
||||
/* Reported-by: Takeo Nakayama <javhera@gmx.com> */
|
||||
UNUSUAL_DEV(0x357d, 0x7788, 0x0000, 0x9999,
|
||||
"JMicron",
|
||||
|
||||
@@ -105,7 +105,7 @@ static void usbip_dump_usb_device(struct usb_device *udev)
|
||||
dev_dbg(dev, " devnum(%d) devpath(%s) usb speed(%s)",
|
||||
udev->devnum, udev->devpath, usb_speed_string(udev->speed));
|
||||
|
||||
pr_debug("tt %p, ttport %d\n", udev->tt, udev->ttport);
|
||||
pr_debug("tt hub ttport %d\n", udev->ttport);
|
||||
|
||||
dev_dbg(dev, " ");
|
||||
for (i = 0; i < 16; i++)
|
||||
@@ -138,12 +138,8 @@ static void usbip_dump_usb_device(struct usb_device *udev)
|
||||
}
|
||||
pr_debug("\n");
|
||||
|
||||
dev_dbg(dev, "parent %p, bus %p\n", udev->parent, udev->bus);
|
||||
|
||||
dev_dbg(dev,
|
||||
"descriptor %p, config %p, actconfig %p, rawdescriptors %p\n",
|
||||
&udev->descriptor, udev->config,
|
||||
udev->actconfig, udev->rawdescriptors);
|
||||
dev_dbg(dev, "parent %s, bus %s\n", dev_name(&udev->parent->dev),
|
||||
udev->bus->bus_name);
|
||||
|
||||
dev_dbg(dev, "have_langid %d, string_langid %d\n",
|
||||
udev->have_langid, udev->string_langid);
|
||||
@@ -251,9 +247,6 @@ void usbip_dump_urb(struct urb *urb)
|
||||
|
||||
dev = &urb->dev->dev;
|
||||
|
||||
dev_dbg(dev, " urb :%p\n", urb);
|
||||
dev_dbg(dev, " dev :%p\n", urb->dev);
|
||||
|
||||
usbip_dump_usb_device(urb->dev);
|
||||
|
||||
dev_dbg(dev, " pipe :%08x ", urb->pipe);
|
||||
@@ -262,11 +255,9 @@ void usbip_dump_urb(struct urb *urb)
|
||||
|
||||
dev_dbg(dev, " status :%d\n", urb->status);
|
||||
dev_dbg(dev, " transfer_flags :%08X\n", urb->transfer_flags);
|
||||
dev_dbg(dev, " transfer_buffer :%p\n", urb->transfer_buffer);
|
||||
dev_dbg(dev, " transfer_buffer_length:%d\n",
|
||||
urb->transfer_buffer_length);
|
||||
dev_dbg(dev, " actual_length :%d\n", urb->actual_length);
|
||||
dev_dbg(dev, " setup_packet :%p\n", urb->setup_packet);
|
||||
|
||||
if (urb->setup_packet && usb_pipetype(urb->pipe) == PIPE_CONTROL)
|
||||
usbip_dump_usb_ctrlrequest(
|
||||
@@ -276,8 +267,6 @@ void usbip_dump_urb(struct urb *urb)
|
||||
dev_dbg(dev, " number_of_packets :%d\n", urb->number_of_packets);
|
||||
dev_dbg(dev, " interval :%d\n", urb->interval);
|
||||
dev_dbg(dev, " error_count :%d\n", urb->error_count);
|
||||
dev_dbg(dev, " context :%p\n", urb->context);
|
||||
dev_dbg(dev, " complete :%p\n", urb->complete);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(usbip_dump_urb);
|
||||
|
||||
|
||||
@@ -132,6 +132,25 @@ static int v_recv_cmd_submit(struct vudc *udc,
|
||||
urb_p->new = 1;
|
||||
urb_p->seqnum = pdu->base.seqnum;
|
||||
|
||||
if (urb_p->ep->type == USB_ENDPOINT_XFER_ISOC) {
|
||||
/* validate packet size and number of packets */
|
||||
unsigned int maxp, packets, bytes;
|
||||
|
||||
maxp = usb_endpoint_maxp(urb_p->ep->desc);
|
||||
maxp *= usb_endpoint_maxp_mult(urb_p->ep->desc);
|
||||
bytes = pdu->u.cmd_submit.transfer_buffer_length;
|
||||
packets = DIV_ROUND_UP(bytes, maxp);
|
||||
|
||||
if (pdu->u.cmd_submit.number_of_packets < 0 ||
|
||||
pdu->u.cmd_submit.number_of_packets > packets) {
|
||||
dev_err(&udc->gadget.dev,
|
||||
"CMD_SUBMIT: isoc invalid num packets %d\n",
|
||||
pdu->u.cmd_submit.number_of_packets);
|
||||
ret = -EMSGSIZE;
|
||||
goto free_urbp;
|
||||
}
|
||||
}
|
||||
|
||||
ret = alloc_urb_from_cmd(&urb_p->urb, pdu, urb_p->ep->type);
|
||||
if (ret) {
|
||||
usbip_event_add(&udc->ud, VUDC_EVENT_ERROR_MALLOC);
|
||||
|
||||
@@ -97,6 +97,13 @@ static int v_send_ret_submit(struct vudc *udc, struct urbp *urb_p)
|
||||
memset(&pdu_header, 0, sizeof(pdu_header));
|
||||
memset(&msg, 0, sizeof(msg));
|
||||
|
||||
if (urb->actual_length > 0 && !urb->transfer_buffer) {
|
||||
dev_err(&udc->gadget.dev,
|
||||
"urb: actual_length %d transfer_buffer null\n",
|
||||
urb->actual_length);
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (urb_p->type == USB_ENDPOINT_XFER_ISOC)
|
||||
iovnum = 2 + urb->number_of_packets;
|
||||
else
|
||||
@@ -112,8 +119,8 @@ static int v_send_ret_submit(struct vudc *udc, struct urbp *urb_p)
|
||||
|
||||
/* 1. setup usbip_header */
|
||||
setup_ret_submit_pdu(&pdu_header, urb_p);
|
||||
usbip_dbg_stub_tx("setup txdata seqnum: %d urb: %p\n",
|
||||
pdu_header.base.seqnum, urb);
|
||||
usbip_dbg_stub_tx("setup txdata seqnum: %d\n",
|
||||
pdu_header.base.seqnum);
|
||||
usbip_header_correct_endian(&pdu_header, 1);
|
||||
|
||||
iov[iovnum].iov_base = &pdu_header;
|
||||
|
||||
@@ -43,6 +43,7 @@ struct bpf_map {
u32 max_entries;
u32 map_flags;
u32 pages;
bool unpriv_array;
struct user_struct *user;
const struct bpf_map_ops *ops;
struct work_struct work;

@@ -189,6 +190,7 @@ struct bpf_prog_aux {
struct bpf_array {
struct bpf_map map;
u32 elem_size;
u32 index_mask;
/* 'ownership' of prog_array is claimed by the first program that
* is going to use this map or by the first program which FD is stored
* in the map to make sure that all callers and callees have the same

@@ -67,7 +67,10 @@ struct bpf_verifier_state_list {
};
struct bpf_insn_aux_data {
enum bpf_reg_type ptr_type; /* pointer type for load/store insns */
union {
enum bpf_reg_type ptr_type; /* pointer type for load/store insns */
struct bpf_map *map_ptr; /* pointer for call insn into lookup_elem */
};
bool seen; /* this insn was processed by the verifier */
};
@@ -44,6 +44,13 @@ extern void cpu_remove_dev_attr(struct device_attribute *attr);
extern int cpu_add_dev_attr_group(struct attribute_group *attrs);
extern void cpu_remove_dev_attr_group(struct attribute_group *attrs);
extern ssize_t cpu_show_meltdown(struct device *dev,
struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_spectre_v1(struct device *dev,
struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_spectre_v2(struct device *dev,
struct device_attribute *attr, char *buf);
extern __printf(4, 5)
struct device *cpu_device_create(struct device *parent, void *drvdata,
const struct attribute_group **groups,
@@ -11,7 +11,7 @@
* For more information, see tools/objtool/Documentation/stack-validation.txt.
*/
#define STACK_FRAME_NON_STANDARD(func) \
static void __used __section(__func_stack_frame_non_standard) \
static void __used __section(.discard.func_stack_frame_non_standard) \
*__func_stack_frame_non_standard_##func = func
#else /* !CONFIG_STACK_VALIDATION */
@@ -683,6 +683,17 @@ static inline bool phy_is_internal(struct phy_device *phydev)
return phydev->is_internal;
}
/**
* phy_interface_mode_is_rgmii - Convenience function for testing if a
* PHY interface mode is RGMII (all variants)
* @mode: the phy_interface_t enum
*/
static inline bool phy_interface_mode_is_rgmii(phy_interface_t mode)
{
return mode >= PHY_INTERFACE_MODE_RGMII &&
mode <= PHY_INTERFACE_MODE_RGMII_TXID;
};
/**
* phy_interface_is_rgmii - Convenience function for testing if a PHY interface
* is RGMII (all variants)
@@ -16,7 +16,6 @@ struct sh_eth_plat_data {
unsigned char mac_addr[ETH_ALEN];
unsigned no_ether_link:1;
unsigned ether_link_active_low:1;
unsigned needs_init:1;
};
#endif
@@ -1007,7 +1007,7 @@ ieee80211_tx_info_clear_status(struct ieee80211_tx_info *info)
* @RX_FLAG_DECRYPTED: This frame was decrypted in hardware.
* @RX_FLAG_MMIC_STRIPPED: the Michael MIC is stripped off this frame,
* verification has been done by the hardware.
* @RX_FLAG_IV_STRIPPED: The IV/ICV are stripped from this frame.
* @RX_FLAG_IV_STRIPPED: The IV and ICV are stripped from this frame.
* If this flag is set, the stack cannot do any replay detection
* hence the driver or hardware will have to do that.
* @RX_FLAG_PN_VALIDATED: Currently only valid for CCMP/GCMP frames, this

@@ -1078,6 +1078,8 @@ ieee80211_tx_info_clear_status(struct ieee80211_tx_info *info)
* @RX_FLAG_ALLOW_SAME_PN: Allow the same PN as same packet before.
* This is used for AMSDU subframes which can have the same PN as
* the first subframe.
* @RX_FLAG_ICV_STRIPPED: The ICV is stripped from this frame. CRC checking must
* be done in the hardware.
*/
enum mac80211_rx_flags {
RX_FLAG_MMIC_ERROR = BIT(0),

@@ -1113,6 +1115,7 @@ enum mac80211_rx_flags {
RX_FLAG_RADIOTAP_VENDOR_DATA = BIT(31),
RX_FLAG_MIC_STRIPPED = BIT_ULL(32),
RX_FLAG_ALLOW_SAME_PN = BIT_ULL(33),
RX_FLAG_ICV_STRIPPED = BIT_ULL(34),
};
#define RX_FLAG_STBC_SHIFT 26
@@ -493,6 +493,7 @@ struct se_cmd {
#define CMD_T_BUSY (1 << 9)
#define CMD_T_TAS (1 << 10)
#define CMD_T_FABRIC_STOP (1 << 11)
#define CMD_T_PRE_EXECUTE (1 << 12)
spinlock_t t_state_lock;
struct kref cmd_kref;
struct completion t_transport_stop_comp;
@@ -208,7 +208,7 @@ TRACE_EVENT(kvm_ack_irq,
{ KVM_TRACE_MMIO_WRITE, "write" }
TRACE_EVENT(kvm_mmio,
TP_PROTO(int type, int len, u64 gpa, u64 val),
TP_PROTO(int type, int len, u64 gpa, void *val),
TP_ARGS(type, len, gpa, val),
TP_STRUCT__entry(

@@ -222,7 +222,10 @@ TRACE_EVENT(kvm_mmio,
__entry->type = type;
__entry->len = len;
__entry->gpa = gpa;
__entry->val = val;
__entry->val = 0;
if (val)
memcpy(&__entry->val, val,
min_t(u32, sizeof(__entry->val), len));
),
TP_printk("mmio %s len %u gpa 0x%llx val 0x%llx",
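The kvm_mmio change above stops treating the traced value as a plain u64 handed in by the caller and instead copies at most min(sizeof(val), len) bytes out of the MMIO buffer, so short accesses cannot be over-read. A small standalone sketch of the same bounded-copy idiom (illustrative only, not the kernel tracepoint expansion):

#include <stdint.h>
#include <string.h>

/* Copy at most min(sizeof(out), len) bytes of the MMIO payload into a
 * fixed-size trace field, instead of dereferencing it as a full u64. */
static uint64_t bounded_trace_val(const void *val, uint32_t len)
{
	uint64_t out = 0;

	if (val)
		memcpy(&out, val, len < sizeof(out) ? len : sizeof(out));
	return out;
}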
@@ -46,9 +46,10 @@ static int bpf_array_alloc_percpu(struct bpf_array *array)
static struct bpf_map *array_map_alloc(union bpf_attr *attr)
{
bool percpu = attr->map_type == BPF_MAP_TYPE_PERCPU_ARRAY;
u32 elem_size, index_mask, max_entries;
bool unpriv = !capable(CAP_SYS_ADMIN);
struct bpf_array *array;
u64 array_size;
u32 elem_size;
u64 array_size, mask64;
/* check sanity of attributes */
if (attr->max_entries == 0 || attr->key_size != 4 ||

@@ -63,11 +64,32 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
elem_size = round_up(attr->value_size, 8);
max_entries = attr->max_entries;
/* On 32 bit archs roundup_pow_of_two() with max_entries that has
* upper most bit set in u32 space is undefined behavior due to
* resulting 1U << 32, so do it manually here in u64 space.
*/
mask64 = fls_long(max_entries - 1);
mask64 = 1ULL << mask64;
mask64 -= 1;
index_mask = mask64;
if (unpriv) {
/* round up array size to nearest power of 2,
* since cpu will speculate within index_mask limits
*/
max_entries = index_mask + 1;
/* Check for overflows. */
if (max_entries < attr->max_entries)
return ERR_PTR(-E2BIG);
}
array_size = sizeof(*array);
if (percpu)
array_size += (u64) attr->max_entries * sizeof(void *);
array_size += (u64) max_entries * sizeof(void *);
else
array_size += (u64) attr->max_entries * elem_size;
array_size += (u64) max_entries * elem_size;
/* make sure there is no u32 overflow later in round_up() */
if (array_size >= U32_MAX - PAGE_SIZE)

@@ -77,6 +99,8 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
array = bpf_map_area_alloc(array_size);
if (!array)
return ERR_PTR(-ENOMEM);
array->index_mask = index_mask;
array->map.unpriv_array = unpriv;
/* copy mandatory map attributes */
array->map.map_type = attr->map_type;

@@ -110,7 +134,7 @@ static void *array_map_lookup_elem(struct bpf_map *map, void *key)
if (unlikely(index >= array->map.max_entries))
return NULL;
return array->value + array->elem_size * index;
return array->value + array->elem_size * (index & array->index_mask);
}
/* Called from eBPF program */

@@ -122,7 +146,7 @@ static void *percpu_array_map_lookup_elem(struct bpf_map *map, void *key)
if (unlikely(index >= array->map.max_entries))
return NULL;
return this_cpu_ptr(array->pptrs[index]);
return this_cpu_ptr(array->pptrs[index & array->index_mask]);
}
int bpf_percpu_array_copy(struct bpf_map *map, void *key, void *value)

@@ -142,7 +166,7 @@ int bpf_percpu_array_copy(struct bpf_map *map, void *key, void *value)
*/
size = round_up(map->value_size, 8);
rcu_read_lock();
pptr = array->pptrs[index];
pptr = array->pptrs[index & array->index_mask];
for_each_possible_cpu(cpu) {
bpf_long_memcpy(value + off, per_cpu_ptr(pptr, cpu), size);
off += size;

@@ -190,10 +214,11 @@ static int array_map_update_elem(struct bpf_map *map, void *key, void *value,
return -EEXIST;
if (array->map.map_type == BPF_MAP_TYPE_PERCPU_ARRAY)
memcpy(this_cpu_ptr(array->pptrs[index]),
memcpy(this_cpu_ptr(array->pptrs[index & array->index_mask]),
value, map->value_size);
else
memcpy(array->value + array->elem_size * index,
memcpy(array->value +
array->elem_size * (index & array->index_mask),
value, map->value_size);
return 0;
}

@@ -227,7 +252,7 @@ int bpf_percpu_array_update(struct bpf_map *map, void *key, void *value,
*/
size = round_up(map->value_size, 8);
rcu_read_lock();
pptr = array->pptrs[index];
pptr = array->pptrs[index & array->index_mask];
for_each_possible_cpu(cpu) {
bpf_long_memcpy(per_cpu_ptr(pptr, cpu), value + off, size);
off += size;
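For unprivileged users the arraymap hunks above size the array up to the next power of two and AND every index with index_mask, so even a speculatively executed lookup cannot reach past the allocation. A compact sketch of the mask computation and the masked access under those assumptions (function names are illustrative, not the kernel code):

#include <stddef.h>
#include <stdint.h>

/* Power-of-two-minus-one mask covering max_entries, computed in 64-bit
 * space so a max_entries with bit 31 set cannot overflow the shift. */
static uint32_t index_mask_for(uint32_t max_entries)
{
	uint64_t mask64 = 1;

	while (mask64 < max_entries)
		mask64 <<= 1;
	return (uint32_t)(mask64 - 1);
}

/* Bounds check plus mask: even a mispredicted (speculative) pass of the
 * check stays inside the power-of-two sized allocation. */
static void *masked_lookup(void **slots, uint32_t nr_slots,
			   uint32_t mask, uint32_t index)
{
	if (index >= nr_slots)
		return NULL;
	return slots[index & mask];
}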
@@ -565,57 +565,6 @@ void bpf_register_prog_type(struct bpf_prog_type_list *tl)
list_add(&tl->list_node, &bpf_prog_types);
}
/* fixup insn->imm field of bpf_call instructions:
* if (insn->imm == BPF_FUNC_map_lookup_elem)
* insn->imm = bpf_map_lookup_elem - __bpf_call_base;
* else if (insn->imm == BPF_FUNC_map_update_elem)
* insn->imm = bpf_map_update_elem - __bpf_call_base;
* else ...
*
* this function is called after eBPF program passed verification
*/
static void fixup_bpf_calls(struct bpf_prog *prog)
{
const struct bpf_func_proto *fn;
int i;
for (i = 0; i < prog->len; i++) {
struct bpf_insn *insn = &prog->insnsi[i];
if (insn->code == (BPF_JMP | BPF_CALL)) {
/* we reach here when program has bpf_call instructions
* and it passed bpf_check(), means that
* ops->get_func_proto must have been supplied, check it
*/
BUG_ON(!prog->aux->ops->get_func_proto);
if (insn->imm == BPF_FUNC_get_route_realm)
prog->dst_needed = 1;
if (insn->imm == BPF_FUNC_get_prandom_u32)
bpf_user_rnd_init_once();
if (insn->imm == BPF_FUNC_tail_call) {
/* mark bpf_tail_call as different opcode
* to avoid conditional branch in
* interpeter for every normal call
* and to prevent accidental JITing by
* JIT compiler that doesn't support
* bpf_tail_call yet
*/
insn->imm = 0;
insn->code |= BPF_X;
continue;
}
fn = prog->aux->ops->get_func_proto(insn->imm);
/* all functions that have prototype and verifier allowed
* programs to call them, must be real in-kernel functions
*/
BUG_ON(!fn->func);
insn->imm = fn->func - __bpf_call_base;
}
}
}
/* drop refcnt on maps used by eBPF program and free auxilary data */
static void free_used_maps(struct bpf_prog_aux *aux)
{

@@ -810,9 +759,6 @@ static int bpf_prog_load(union bpf_attr *attr)
if (err < 0)
goto free_used_maps;
/* fixup BPF_CALL->imm field */
fixup_bpf_calls(prog);
/* eBPF program is ready to be JITed */
prog = bpf_prog_select_runtime(prog, &err);
if (err < 0)
@@ -1187,7 +1187,7 @@ static void clear_all_pkt_pointers(struct bpf_verifier_env *env)
}
}
static int check_call(struct bpf_verifier_env *env, int func_id)
static int check_call(struct bpf_verifier_env *env, int func_id, int insn_idx)
{
struct bpf_verifier_state *state = &env->cur_state;
const struct bpf_func_proto *fn = NULL;

@@ -1238,6 +1238,13 @@ static int check_call(struct bpf_verifier_env *env, int func_id)
err = check_func_arg(env, BPF_REG_2, fn->arg2_type, &meta);
if (err)
return err;
if (func_id == BPF_FUNC_tail_call) {
if (meta.map_ptr == NULL) {
verbose("verifier bug\n");
return -EINVAL;
}
env->insn_aux_data[insn_idx].map_ptr = meta.map_ptr;
}
err = check_func_arg(env, BPF_REG_3, fn->arg3_type, &meta);
if (err)
return err;

@@ -3019,7 +3026,7 @@ static int do_check(struct bpf_verifier_env *env)
return -EINVAL;
}
err = check_call(env, insn->imm);
err = check_call(env, insn->imm, insn_idx);
if (err)
return err;

@@ -3362,6 +3369,81 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
return 0;
}
/* fixup insn->imm field of bpf_call instructions
*
* this function is called after eBPF program passed verification
*/
static int fixup_bpf_calls(struct bpf_verifier_env *env)
{
struct bpf_prog *prog = env->prog;
struct bpf_insn *insn = prog->insnsi;
const struct bpf_func_proto *fn;
const int insn_cnt = prog->len;
struct bpf_insn insn_buf[16];
struct bpf_prog *new_prog;
struct bpf_map *map_ptr;
int i, cnt, delta = 0;
for (i = 0; i < insn_cnt; i++, insn++) {
if (insn->code != (BPF_JMP | BPF_CALL))
continue;
if (insn->imm == BPF_FUNC_get_route_realm)
prog->dst_needed = 1;
if (insn->imm == BPF_FUNC_get_prandom_u32)
bpf_user_rnd_init_once();
if (insn->imm == BPF_FUNC_tail_call) {
/* mark bpf_tail_call as different opcode to avoid
* conditional branch in the interpeter for every normal
* call and to prevent accidental JITing by JIT compiler
* that doesn't support bpf_tail_call yet
*/
insn->imm = 0;
insn->code |= BPF_X;
/* instead of changing every JIT dealing with tail_call
* emit two extra insns:
* if (index >= max_entries) goto out;
* index &= array->index_mask;
* to avoid out-of-bounds cpu speculation
*/
map_ptr = env->insn_aux_data[i + delta].map_ptr;
if (!map_ptr->unpriv_array)
continue;
insn_buf[0] = BPF_JMP_IMM(BPF_JGE, BPF_REG_3,
map_ptr->max_entries, 2);
insn_buf[1] = BPF_ALU32_IMM(BPF_AND, BPF_REG_3,
container_of(map_ptr,
struct bpf_array,
map)->index_mask);
insn_buf[2] = *insn;
cnt = 3;
new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
if (!new_prog)
return -ENOMEM;
delta += cnt - 1;
env->prog = prog = new_prog;
insn = new_prog->insnsi + i + delta;
continue;
}
fn = prog->aux->ops->get_func_proto(insn->imm);
/* all functions that have prototype and verifier allowed
* programs to call them, must be real in-kernel functions
*/
if (!fn->func) {
verbose("kernel subsystem misconfigured func %d\n",
insn->imm);
return -EFAULT;
}
insn->imm = fn->func - __bpf_call_base;
}
return 0;
}
static void free_states(struct bpf_verifier_env *env)
{
struct bpf_verifier_state_list *sl, *sln;

@@ -3463,6 +3545,9 @@ skip_full_check:
/* program is valid, convert *(u32*)(ctx + off) accesses */
ret = convert_ctx_accesses(env);
if (ret == 0)
ret = fixup_bpf_calls(env);
if (log_level && log_len >= log_size - 1) {
BUG_ON(log_len >= log_size);
/* verifier log exceeded user supplied buffer */
mm/zswap.c
@@ -752,18 +752,22 @@ static int __zswap_param_set(const char *val, const struct kernel_param *kp,
pool = zswap_pool_find_get(type, compressor);
if (pool) {
zswap_pool_debug("using existing", pool);
WARN_ON(pool == zswap_pool_current());
list_del_rcu(&pool->list);
} else {
spin_unlock(&zswap_pools_lock);
pool = zswap_pool_create(type, compressor);
spin_lock(&zswap_pools_lock);
}
spin_unlock(&zswap_pools_lock);
if (!pool)
pool = zswap_pool_create(type, compressor);
if (pool)
ret = param_set_charp(s, kp);
else
ret = -EINVAL;
spin_lock(&zswap_pools_lock);
if (!ret) {
put_pool = zswap_pool_current();
list_add_rcu(&pool->list, &zswap_pools);
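The zswap change above moves the calls that can sleep, zswap_pool_create() and param_set_charp(), outside zswap_pools_lock and only retakes the lock to update the pool list. A rough userspace sketch of that drop-the-lock-for-slow-work pattern, using a pthread mutex in place of the kernel spinlock (all names here are illustrative):

#include <pthread.h>

struct pool;

static pthread_mutex_t pools_lock = PTHREAD_MUTEX_INITIALIZER;

/* Do the work that may sleep or allocate with the lock dropped, and only
 * retake it to publish the result. */
static int set_active_pool(struct pool **active, struct pool *(*create)(void))
{
	struct pool *pool;

	pthread_mutex_lock(&pools_lock);
	pool = *active;			/* reuse an existing pool if present */
	pthread_mutex_unlock(&pools_lock);

	if (!pool)
		pool = create();	/* slow path, may allocate */
	if (!pool)
		return -1;

	pthread_mutex_lock(&pools_lock);
	*active = pool;			/* publish under the lock */
	pthread_mutex_unlock(&pools_lock);
	return 0;
}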
@@ -111,12 +111,7 @@ void unregister_vlan_dev(struct net_device *dev, struct list_head *head)
vlan_gvrp_uninit_applicant(real_dev);
}
/* Take it out of our own structures, but be sure to interlock with
* HW accelerating devices or SW vlan input packet processing if
* VLAN is not 0 (leave it there for 802.1p).
*/
if (vlan_id)
vlan_vid_del(real_dev, vlan->vlan_proto, vlan_id);
vlan_vid_del(real_dev, vlan->vlan_proto, vlan_id);
/* Get rid of the vlan's reference to real_dev */
dev_put(real_dev);
@@ -3353,9 +3353,10 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
break;
case L2CAP_CONF_EFS:
remote_efs = 1;
if (olen == sizeof(efs))
if (olen == sizeof(efs)) {
remote_efs = 1;
memcpy(&efs, (void *) val, olen);
}
break;
case L2CAP_CONF_EWS:

@@ -3574,16 +3575,17 @@ static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len,
break;
case L2CAP_CONF_EFS:
if (olen == sizeof(efs))
if (olen == sizeof(efs)) {
memcpy(&efs, (void *)val, olen);
if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
efs.stype != L2CAP_SERV_NOTRAFIC &&
efs.stype != chan->local_stype)
return -ECONNREFUSED;
if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
efs.stype != L2CAP_SERV_NOTRAFIC &&
efs.stype != chan->local_stype)
return -ECONNREFUSED;
l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
(unsigned long) &efs, endptr - ptr);
l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
(unsigned long) &efs, endptr - ptr);
}
break;
case L2CAP_CONF_FCS:
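The L2CAP hunks above only copy and act on an EFS option when its advertised length equals sizeof(efs), which keeps uninitialized stack bytes from being echoed back in the response. A small sketch of that validate-before-copy rule (the struct layout here is illustrative, not the real l2cap.h definition):

#include <stdint.h>
#include <string.h>

/* Illustrative layout only; the real EFS option lives in the Bluetooth
 * headers. */
struct efs_opt {
	uint8_t	 id;
	uint8_t	 stype;
	uint16_t msdu;
	uint32_t sdu_itime;
	uint32_t acc_lat;
	uint32_t flush_to;
};

/* Copy a remote option only when its advertised length matches exactly;
 * a short option must not leave part of the local buffer uninitialized
 * and later echoed back to the peer. */
static int parse_efs_opt(struct efs_opt *out, const void *val, size_t olen)
{
	if (olen != sizeof(*out))
		return -1;
	memcpy(out, val, sizeof(*out));
	return 0;
}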
@@ -742,15 +742,6 @@ static int ethtool_set_link_ksettings(struct net_device *dev,
return dev->ethtool_ops->set_link_ksettings(dev, &link_ksettings);
}
static void
warn_incomplete_ethtool_legacy_settings_conversion(const char *details)
{
char name[sizeof(current->comm)];
pr_info_once("warning: `%s' uses legacy ethtool link settings API, %s\n",
get_task_comm(name, current), details);
}
/* Query device for its ethtool_cmd settings.
*
* Backward compatibility note: for compatibility with legacy ethtool,

@@ -777,10 +768,8 @@ static int ethtool_get_settings(struct net_device *dev, void __user *useraddr)
&link_ksettings);
if (err < 0)
return err;
if (!convert_link_ksettings_to_legacy_settings(&cmd,
&link_ksettings))
warn_incomplete_ethtool_legacy_settings_conversion(
"link modes are only partially reported");
convert_link_ksettings_to_legacy_settings(&cmd,
&link_ksettings);
/* send a sensible cmd tag back to user */
cmd.cmd = ETHTOOL_GSET;
@@ -295,7 +295,7 @@ static int sock_diag_bind(struct net *net, int group)
case SKNLGRP_INET6_UDP_DESTROY:
if (!sock_diag_handlers[AF_INET6])
request_module("net-pf-%d-proto-%d-type-%d", PF_NETLINK,
NETLINK_SOCK_DIAG, AF_INET);
NETLINK_SOCK_DIAG, AF_INET6);
break;
}
return 0;
@@ -1808,9 +1808,10 @@ struct sk_buff *ip6_make_skb(struct sock *sk,
cork.base.opt = NULL;
v6_cork.opt = NULL;
err = ip6_setup_cork(sk, &cork, &v6_cork, ipc6, rt, fl6);
if (err)
if (err) {
ip6_cork_release(&cork, &v6_cork);
return ERR_PTR(err);
}
if (ipc6->dontfrag < 0)
ipc6->dontfrag = inet6_sk(sk)->dontfrag;
@@ -1080,10 +1080,11 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
memcpy(&fl6->daddr, addr6, sizeof(fl6->daddr));
neigh_release(neigh);
}
} else if (!(t->parms.flags &
(IP6_TNL_F_USE_ORIG_TCLASS | IP6_TNL_F_USE_ORIG_FWMARK))) {
/* enable the cache only only if the routing decision does
* not depend on the current inner header value
} else if (t->parms.proto != 0 && !(t->parms.flags &
(IP6_TNL_F_USE_ORIG_TCLASS |
IP6_TNL_F_USE_ORIG_FWMARK))) {
/* enable the cache only if neither the outer protocol nor the
* routing decision depends on the current inner header value
*/
use_cache = true;
}
@@ -293,7 +293,8 @@ ieee80211_crypto_wep_decrypt(struct ieee80211_rx_data *rx)
return RX_DROP_UNUSABLE;
ieee80211_wep_remove_iv(rx->local, rx->skb, rx->key);
/* remove ICV */
if (pskb_trim(rx->skb, rx->skb->len - IEEE80211_WEP_ICV_LEN))
if (!(status->flag & RX_FLAG_ICV_STRIPPED) &&
pskb_trim(rx->skb, rx->skb->len - IEEE80211_WEP_ICV_LEN))
return RX_DROP_UNUSABLE;
}

@@ -295,7 +295,8 @@ ieee80211_crypto_tkip_decrypt(struct ieee80211_rx_data *rx)
return RX_DROP_UNUSABLE;
/* Trim ICV */
skb_trim(skb, skb->len - IEEE80211_TKIP_ICV_LEN);
if (!(status->flag & RX_FLAG_ICV_STRIPPED))
skb_trim(skb, skb->len - IEEE80211_TKIP_ICV_LEN);
/* Remove IV */
memmove(skb->data + IEEE80211_TKIP_IV_LEN, skb->data, hdrlen);
Some files were not shown because too many files have changed in this diff.