Merge remote-tracking branch 'kernel-common/android-4.14-stable' into LE.UM.4.2.1.r1.3
* kernel-common/android-4.14-stable:
  ANDROID: kbuild: merge more sections with LTO
  Linux 4.14.184
  uprobes: ensure that uprobe->offset and ->ref_ctr_offset are properly aligned
  iio: vcnl4000: Fix i2c swapped word reading.
  x86/speculation: Add Ivy Bridge to affected list
  x86/speculation: Add SRBDS vulnerability and mitigation documentation
  x86/speculation: Add Special Register Buffer Data Sampling (SRBDS) mitigation
  x86/cpu: Add 'table' argument to cpu_matches()
  x86/cpu: Add a steppings field to struct x86_cpu_id
  nvmem: qfprom: remove incorrect write support
  CDC-ACM: heed quirk also in error handling
  staging: rtl8712: Fix IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK
  tty: hvc_console, fix crashes on parallel open/close
  vt: keyboard: avoid signed integer overflow in k_ascii
  usb: musb: Fix runtime PM imbalance on error
  usb: musb: start session in resume for host port
  USB: serial: option: add Telit LE910C1-EUX compositions
  USB: serial: usb_wwan: do not resubmit rx urb on fatal errors
  USB: serial: qcserial: add DW5816e QDL support
  l2tp: add sk_family checks to l2tp_validate_socket
  net: check untrusted gso_size at kernel entry
  vsock: fix timeout in vsock_accept()
  NFC: st21nfca: add missed kfree_skb() in an error path
  net: usb: qmi_wwan: add Telit LE910C1-EUX composition
  l2tp: do not use inet_hash()/inet_unhash()
  devinet: fix memleak in inetdev_init()
  airo: Fix read overflows sending packets
  scsi: ufs: Release clock if DMA map fails
  mmc: fix compilation of user API
  kernel/relay.c: handle alloc_percpu returning NULL in relay_open
  p54usb: add AirVasT USB stick device-id
  HID: i2c-hid: add Schneider SCL142ALM to descriptor override
  HID: sony: Fix for broken buttons on DS3 USB dongles
  mm: Fix mremap not considering huge pmd devmap
  net: smsc911x: Fix runtime PM imbalance on error
  net: ethernet: stmmac: Enable interface clocks on probe for IPQ806x
  net/ethernet/freescale: rework quiesce/activate for ucc_geth
  net: bmac: Fix read of MAC address from ROM
  x86/mmiotrace: Use cpumask_available() for cpumask_var_t variables
  i2c: altera: Fix race between xfer_msg and isr thread
  ARC: [plat-eznps]: Restrict to CONFIG_ISA_ARCOMPACT
  ARC: Fix ICCM & DCCM runtime size checks
  pppoe: only process PADT targeted at local interfaces
  s390/ftrace: save traced function caller
  spi: dw: use "smp_mb()" to avoid sending spi data error
  scsi: hisi_sas: Check sas_port before using it
  libnvdimm: Fix endian conversion issues
  scsi: scsi_devinfo: fixup string compare
  ANDROID: Incremental fs: Remove dependency on PKCS7_MESSAGE_PARSER
  ANDROID: dm-bow: Add block_size option
  ANDROID: Incremental fs: Cache successful hash calculations
  ANDROID: Incremental fs: Fix four error-path bugs
@@ -381,6 +381,7 @@ What: /sys/devices/system/cpu/vulnerabilities
		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
		/sys/devices/system/cpu/vulnerabilities/l1tf
		/sys/devices/system/cpu/vulnerabilities/mds
		/sys/devices/system/cpu/vulnerabilities/srbds
		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
Date:		January 2018
@@ -14,3 +14,4 @@ are configurable at compile, boot or run time.
   mds
   tsx_async_abort
   multihit.rst
   special-register-buffer-data-sampling.rst
@@ -0,0 +1,149 @@
.. SPDX-License-Identifier: GPL-2.0

SRBDS - Special Register Buffer Data Sampling
=============================================

SRBDS is a hardware vulnerability that allows MDS :doc:`mds` techniques to
infer values returned from special register accesses.  Special register
accesses are accesses to off core registers.  According to Intel's evaluation,
the special register reads that have a security expectation of privacy are
RDRAND, RDSEED and SGX EGETKEY.

When RDRAND, RDSEED and EGETKEY instructions are used, the data is moved
to the core through the special register mechanism that is susceptible
to MDS attacks.

Affected processors
-------------------
Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED may
be affected.

A processor is affected by SRBDS if its Family_Model and stepping is
in the following list, with the exception of the listed processors
exporting MDS_NO while Intel TSX is available yet not enabled. The
latter class of processors are only affected when Intel TSX is enabled
by software using TSX_CTRL_MSR, otherwise they are not affected.

============= ============ ========
common name   Family_Model Stepping
============= ============ ========
IvyBridge     06_3AH       All

Haswell       06_3CH       All
Haswell_L     06_45H       All
Haswell_G     06_46H       All

Broadwell_G   06_47H       All
Broadwell     06_3DH       All

Skylake_L     06_4EH       All
Skylake       06_5EH       All

Kabylake_L    06_8EH       <= 0xC
Kabylake      06_9EH       <= 0xD
============= ============ ========

Related CVEs
------------

The following CVE entry is related to this SRBDS issue:

==============  =====  =====================================
CVE-2020-0543   SRBDS  Special Register Buffer Data Sampling
==============  =====  =====================================

Attack scenarios
----------------
An unprivileged user can extract values returned from RDRAND and RDSEED
executed on another core or sibling thread using MDS techniques.

Mitigation mechanism
--------------------
Intel will release microcode updates that modify the RDRAND, RDSEED, and
EGETKEY instructions to overwrite secret special register data in the shared
staging buffer before the secret data can be accessed by another logical
processor.

During execution of the RDRAND, RDSEED, or EGETKEY instructions, off-core
accesses from other logical processors will be delayed until the special
register read is complete and the secret data in the shared staging buffer is
overwritten.

This has three effects on performance:

#. RDRAND, RDSEED, or EGETKEY instructions have higher latency.

#. Executing RDRAND at the same time on multiple logical processors will be
   serialized, resulting in an overall reduction in the maximum RDRAND
   bandwidth.

#. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other
   logical processors that miss their core caches, with an impact similar to
   legacy locked cache-line-split accesses.

The microcode updates provide an opt-out mechanism (RNGDS_MITG_DIS) to disable
the mitigation for RDRAND and RDSEED instructions executed outside of Intel
Software Guard Extensions (Intel SGX) enclaves. On logical processors that
disable the mitigation using this opt-out mechanism, RDRAND and RDSEED do not
take longer to execute and do not impact performance of sibling logical
processors' memory accesses. The opt-out mechanism does not affect Intel SGX
enclaves (including execution of RDRAND or RDSEED inside an enclave, as well
as EGETKEY execution).

IA32_MCU_OPT_CTRL MSR Definition
--------------------------------
Along with the mitigation for this issue, Intel added a new thread-scope
IA32_MCU_OPT_CTRL MSR (address 0x123). The presence of this MSR and
RNGDS_MITG_DIS (bit 0) is enumerated by CPUID.(EAX=07H,ECX=0).EDX[SRBDS_CTRL =
9]==1. This MSR is introduced through the microcode update.

Setting IA32_MCU_OPT_CTRL[0] (RNGDS_MITG_DIS) to 1 for a logical processor
disables the mitigation for RDRAND and RDSEED executed outside of an Intel SGX
enclave on that logical processor. Opting out of the mitigation for a
particular logical processor does not affect the RDRAND and RDSEED mitigations
for other logical processors.

Note that inside of an Intel SGX enclave, the mitigation is applied regardless
of the value of RNGDS_MITG_DIS.

Mitigation control on the kernel command line
---------------------------------------------
The kernel command line allows control over the SRBDS mitigation at boot time
with the option "srbds=".  The option for this is:

 ============= =============================================================
 off           This option disables SRBDS mitigation for RDRAND and RDSEED on
               affected platforms.
 ============= =============================================================

SRBDS System Information
------------------------
The Linux kernel provides vulnerability status information through sysfs.  For
SRBDS this can be accessed by the following sysfs file:
/sys/devices/system/cpu/vulnerabilities/srbds

The possible values contained in this file are:

 ============================== =============================================
 Not affected                   Processor not vulnerable
 Vulnerable                     Processor vulnerable and mitigation disabled
 Vulnerable: No microcode       Processor vulnerable and microcode is missing
                                mitigation
 Mitigation: Microcode          Processor is vulnerable and mitigation is in
                                effect.
 Mitigation: TSX disabled       Processor is only vulnerable when TSX is
                                enabled while this system was booted with TSX
                                disabled.
 Unknown: Dependent on
 hypervisor status              Running on virtual guest processor that is
                                affected but with no way to know if host
                                processor is mitigated or vulnerable.
 ============================== =============================================

SRBDS Default mitigation
------------------------
This new microcode serializes processor access during execution of RDRAND or
RDSEED, and ensures that the shared buffer is overwritten before it is
released for reuse.  Use the "srbds=off" kernel command line option to disable
the mitigation for RDRAND and RDSEED.
@@ -4271,6 +4271,26 @@
	spia_pedr=
	spia_peddr=

	srbds=		[X86,INTEL]
			Control the Special Register Buffer Data Sampling
			(SRBDS) mitigation.

			Certain CPUs are vulnerable to an MDS-like
			exploit which can leak bits from the random
			number generator.

			By default, this issue is mitigated by
			microcode.  However, the microcode fix can cause
			the RDRAND and RDSEED instructions to become
			much slower.  Among other effects, this will
			result in reduced throughput from /dev/urandom.

			The microcode mitigation can be disabled with
			the following option:

			off:    Disable mitigation and remove
				performance impact to RDRAND and RDSEED

	srcutree.counter_wrap_check [KNL]
			Specifies how frequently to check for
			grace-period sequence counter wrap for the
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 4
PATCHLEVEL = 14
-SUBLEVEL = 183
+SUBLEVEL = 184
EXTRAVERSION =
NAME = Petit Gorille
@@ -15,6 +15,7 @@
#include <linux/clocksource.h>
#include <linux/console.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <linux/cpu.h>
#include <linux/of_fdt.h>
#include <linux/of.h>
@@ -355,12 +356,12 @@ static void arc_chk_core_config(void)
	if ((unsigned int)__arc_dccm_base != cpu->dccm.base_addr)
		panic("Linux built with incorrect DCCM Base address\n");

-	if (CONFIG_ARC_DCCM_SZ != cpu->dccm.sz)
+	if (CONFIG_ARC_DCCM_SZ * SZ_1K != cpu->dccm.sz)
		panic("Linux built with incorrect DCCM Size\n");
#endif

#ifdef CONFIG_ARC_HAS_ICCM
-	if (CONFIG_ARC_ICCM_SZ != cpu->iccm.sz)
+	if (CONFIG_ARC_ICCM_SZ * SZ_1K != cpu->iccm.sz)
		panic("Linux built with incorrect ICCM Size\n");
#endif
@@ -6,6 +6,7 @@

menuconfig ARC_PLAT_EZNPS
	bool "\"EZchip\" ARC dev platform"
	depends on ISA_ARCOMPACT
	select CPU_BIG_ENDIAN
	select CLKSRC_NPS if !PHYS_ADDR_T_64BIT
	select EZNPS_GIC
@@ -40,6 +40,7 @@ EXPORT_SYMBOL(_mcount)
ENTRY(ftrace_caller)
	.globl	ftrace_regs_caller
	.set	ftrace_regs_caller,ftrace_caller
	stg	%r14,(__SF_GPRS+8*8)(%r15)	# save traced function caller
	lgr	%r1,%r15
#ifndef CC_USING_HOTPATCH
	aghi	%r0,MCOUNT_RETURN_FIXUP
@@ -9,6 +9,33 @@

#include <linux/mod_devicetable.h>

#define X86_STEPPINGS(mins, maxs)	GENMASK(maxs, mins)

/**
 * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
 * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
 *		The name is expanded to X86_VENDOR_@_vendor
 * @_family:	The family number or X86_FAMILY_ANY
 * @_model:	The model number, model constant or X86_MODEL_ANY
 * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
 * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
 * @_data:	Driver specific data or NULL. The internal storage
 *		format is unsigned long. The supplied value, pointer
 *		etc. is cast to unsigned long internally.
 *
 * Backport version to keep the SRBDS pile consistent. No shorter variants
 * required for this.
 */
#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
						    _steppings, _feature, _data) { \
	.vendor		= X86_VENDOR_##_vendor,				\
	.family		= _family,					\
	.model		= _model,					\
	.steppings	= _steppings,					\
	.feature	= _feature,					\
	.driver_data	= (unsigned long) _data				\
}

extern const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match);

#endif
@@ -346,6 +346,7 @@
/* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
#define X86_FEATURE_AVX512_4VNNIW	(18*32+ 2) /* AVX-512 Neural Network Instructions */
#define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
#define X86_FEATURE_SRBDS_CTRL		(18*32+ 9) /* "" SRBDS mitigation MSR available */
#define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
#define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
#define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
@@ -390,5 +391,6 @@
#define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
#define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
#define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
#define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */

#endif /* _ASM_X86_CPUFEATURES_H */
@@ -110,6 +110,10 @@
#define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
#define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */

/* SRBDS support */
#define MSR_IA32_MCU_OPT_CTRL		0x00000123
#define RNGDS_MITG_DIS			BIT(0)

#define MSR_IA32_SYSENTER_CS		0x00000174
#define MSR_IA32_SYSENTER_ESP		0x00000175
#define MSR_IA32_SYSENTER_EIP		0x00000176
@@ -234,6 +234,7 @@ static inline int pmd_large(pmd_t pte)
}

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/* NOTE: when predicate huge page, consider also pmd_devmap, or use pmd_large */
static inline int pmd_trans_huge(pmd_t pmd)
{
	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
@@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_print_mitigation(void);
static void __init taa_select_mitigation(void);
static void __init srbds_select_mitigation(void);

/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
u64 x86_spec_ctrl_base;
@@ -108,6 +109,7 @@ void __init check_bugs(void)
	l1tf_select_mitigation();
	mds_select_mitigation();
	taa_select_mitigation();
	srbds_select_mitigation();

	/*
	 * As MDS and TAA mitigations are inter-related, print MDS
@@ -390,6 +392,97 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
}
early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);

#undef pr_fmt
#define pr_fmt(fmt)	"SRBDS: " fmt

enum srbds_mitigations {
	SRBDS_MITIGATION_OFF,
	SRBDS_MITIGATION_UCODE_NEEDED,
	SRBDS_MITIGATION_FULL,
	SRBDS_MITIGATION_TSX_OFF,
	SRBDS_MITIGATION_HYPERVISOR,
};

static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;

static const char * const srbds_strings[] = {
	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
	[SRBDS_MITIGATION_FULL]		= "Mitigation: Microcode",
	[SRBDS_MITIGATION_TSX_OFF]	= "Mitigation: TSX disabled",
	[SRBDS_MITIGATION_HYPERVISOR]	= "Unknown: Dependent on hypervisor status",
};

static bool srbds_off;

void update_srbds_msr(void)
{
	u64 mcu_ctrl;

	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
		return;

	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
		return;

	if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
		return;

	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);

	switch (srbds_mitigation) {
	case SRBDS_MITIGATION_OFF:
	case SRBDS_MITIGATION_TSX_OFF:
		mcu_ctrl |= RNGDS_MITG_DIS;
		break;
	case SRBDS_MITIGATION_FULL:
		mcu_ctrl &= ~RNGDS_MITG_DIS;
		break;
	default:
		break;
	}

	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
}

static void __init srbds_select_mitigation(void)
{
	u64 ia32_cap;

	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
		return;

	/*
	 * Check to see if this is one of the MDS_NO systems supporting
	 * TSX that are only exposed to SRBDS when TSX is enabled.
	 */
	ia32_cap = x86_read_arch_cap_msr();
	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
		srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
	else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
	else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
	else if (cpu_mitigations_off() || srbds_off)
		srbds_mitigation = SRBDS_MITIGATION_OFF;

	update_srbds_msr();
	pr_info("%s\n", srbds_strings[srbds_mitigation]);
}

static int __init srbds_parse_cmdline(char *str)
{
	if (!str)
		return -EINVAL;

	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
		return 0;

	srbds_off = !strcmp(str, "off");
	return 0;
}
early_param("srbds", srbds_parse_cmdline);

#undef pr_fmt
#define pr_fmt(fmt)	"Spectre V1 : " fmt

@@ -1491,6 +1584,11 @@ static char *ibpb_state(void)
	return "";
}

static ssize_t srbds_show_state(char *buf)
{
	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
}

static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
			       char *buf, unsigned int bug)
{
@@ -1532,6 +1630,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
	case X86_BUG_ITLB_MULTIHIT:
		return itlb_multihit_show_state(buf);

	case X86_BUG_SRBDS:
		return srbds_show_state(buf);

	default:
		break;
	}
@@ -1578,4 +1679,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
{
	return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
}

ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
{
	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
}
#endif
@@ -964,9 +964,30 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
	{}
};

-static bool __init cpu_matches(unsigned long which)
#define VULNBL_INTEL_STEPPINGS(model, steppings, issues)		   \
	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,		   \
					    INTEL_FAM6_##model, steppings, \
					    X86_FEATURE_ANY, issues)

#define SRBDS	BIT(0)

static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
	VULNBL_INTEL_STEPPINGS(HASWELL_CORE,	X86_STEPPING_ANY,		SRBDS),
	VULNBL_INTEL_STEPPINGS(HASWELL_ULT,	X86_STEPPING_ANY,		SRBDS),
	VULNBL_INTEL_STEPPINGS(HASWELL_GT3E,	X86_STEPPING_ANY,		SRBDS),
	VULNBL_INTEL_STEPPINGS(BROADWELL_GT3E,	X86_STEPPING_ANY,		SRBDS),
	VULNBL_INTEL_STEPPINGS(BROADWELL_CORE,	X86_STEPPING_ANY,		SRBDS),
	VULNBL_INTEL_STEPPINGS(SKYLAKE_MOBILE,	X86_STEPPING_ANY,		SRBDS),
	VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP,	X86_STEPPING_ANY,		SRBDS),
	VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE,	X86_STEPPINGS(0x0, 0xC),	SRBDS),
	VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPINGS(0x0, 0xD),	SRBDS),
	{}
};

+static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which)
{
-	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
+	const struct x86_cpu_id *m = x86_match_cpu(table);

	return m && !!(m->driver_data & which);
}
@@ -986,29 +1007,32 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
	u64 ia32_cap = x86_read_arch_cap_msr();

	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
-	if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
+	if (!cpu_matches(cpu_vuln_whitelist, NO_ITLB_MULTIHIT) &&
+	    !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);

-	if (cpu_matches(NO_SPECULATION))
+	if (cpu_matches(cpu_vuln_whitelist, NO_SPECULATION))
		return;

	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
	setup_force_cpu_bug(X86_BUG_SPECTRE_V2);

-	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
+	if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
+	    !(ia32_cap & ARCH_CAP_SSB_NO) &&
	    !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);

	if (ia32_cap & ARCH_CAP_IBRS_ALL)
		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);

-	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
+	if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
+	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
		setup_force_cpu_bug(X86_BUG_MDS);
-		if (cpu_matches(MSBDS_ONLY))
+		if (cpu_matches(cpu_vuln_whitelist, MSBDS_ONLY))
			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
	}

-	if (!cpu_matches(NO_SWAPGS))
+	if (!cpu_matches(cpu_vuln_whitelist, NO_SWAPGS))
		setup_force_cpu_bug(X86_BUG_SWAPGS);

	/*
@@ -1026,7 +1050,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
		setup_force_cpu_bug(X86_BUG_TAA);

-	if (cpu_matches(NO_MELTDOWN))
	/*
	 * SRBDS affects CPUs which support RDRAND or RDSEED and are listed
	 * in the vulnerability blacklist.
	 */
	if ((cpu_has(c, X86_FEATURE_RDRAND) ||
	     cpu_has(c, X86_FEATURE_RDSEED)) &&
	    cpu_matches(cpu_vuln_blacklist, SRBDS))
		setup_force_cpu_bug(X86_BUG_SRBDS);

+	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
		return;

	/* Rogue Data Cache Load? No! */
@@ -1035,7 +1068,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)

	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);

-	if (cpu_matches(NO_L1TF))
+	if (cpu_matches(cpu_vuln_whitelist, NO_L1TF))
		return;

	setup_force_cpu_bug(X86_BUG_L1TF);
@@ -1451,6 +1484,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
	mtrr_ap_init();
	validate_apic_and_package_id(c);
	x86_spec_ctrl_setup_ap();
	update_srbds_msr();
}

static __init int setup_noclflush(char *arg)

@@ -69,6 +69,7 @@ extern int detect_ht_early(struct cpuinfo_x86 *c);
unsigned int aperfmperf_get_khz(int cpu);

extern void x86_spec_ctrl_setup_ap(void);
extern void update_srbds_msr(void);

extern u64 x86_read_arch_cap_msr(void);
@@ -34,13 +34,18 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
	const struct x86_cpu_id *m;
	struct cpuinfo_x86 *c = &boot_cpu_data;

-	for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
+	for (m = match;
+	     m->vendor | m->family | m->model | m->steppings | m->feature;
+	     m++) {
		if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
			continue;
		if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
			continue;
		if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
			continue;
		if (m->steppings != X86_STEPPING_ANY &&
		    !(BIT(c->x86_stepping) & m->steppings))
			continue;
		if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
			continue;
		return m;
@@ -385,7 +385,7 @@ static void enter_uniprocessor(void)
	int cpu;
	int err;

-	if (downed_cpus == NULL &&
+	if (!cpumask_available(downed_cpus) &&
	    !alloc_cpumask_var(&downed_cpus, GFP_KERNEL)) {
		pr_notice("Failed to allocate mask\n");
		goto out;
@@ -415,7 +415,7 @@ static void leave_uniprocessor(void)
	int cpu;
	int err;

-	if (downed_cpus == NULL || cpumask_weight(downed_cpus) == 0)
+	if (!cpumask_available(downed_cpus) || cpumask_weight(downed_cpus) == 0)
		return;
	pr_notice("Re-enabling CPUs...\n");
	for_each_cpu(cpu, downed_cpus) {
@@ -644,6 +644,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
	return sprintf(buf, "Not affected\n");
}

ssize_t __weak cpu_show_srbds(struct device *dev,
			      struct device_attribute *attr, char *buf)
{
	return sprintf(buf, "Not affected\n");
}

static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
@@ -652,6 +658,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);

static struct attribute *cpu_root_vulnerabilities_attrs[] = {
	&dev_attr_meltdown.attr,
@@ -662,6 +669,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
	&dev_attr_mds.attr,
	&dev_attr_tsx_async_abort.attr,
	&dev_attr_itlb_multihit.attr,
	&dev_attr_srbds.attr,
	NULL
};
@@ -837,6 +837,23 @@ static u8 *sony_report_fixup(struct hid_device *hdev, u8 *rdesc,
	if (sc->quirks & PS3REMOTE)
		return ps3remote_fixup(hdev, rdesc, rsize);

	/*
	 * Some knock-off USB dongles incorrectly report their button count
	 * as 13 instead of 16 causing three non-functional buttons.
	 */
	if ((sc->quirks & SIXAXIS_CONTROLLER_USB) && *rsize >= 45 &&
		/* Report Count (13) */
		rdesc[23] == 0x95 && rdesc[24] == 0x0D &&
		/* Usage Maximum (13) */
		rdesc[37] == 0x29 && rdesc[38] == 0x0D &&
		/* Report Count (3) */
		rdesc[43] == 0x95 && rdesc[44] == 0x03) {
		hid_info(hdev, "Fixing up USB dongle report descriptor\n");
		rdesc[24] = 0x10;
		rdesc[38] = 0x10;
		rdesc[44] = 0x00;
	}

	return rdesc;
}
@@ -381,6 +381,14 @@ static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
		},
		.driver_data = (void *)&sipodev_desc
	},
	{
		.ident = "Schneider SCL142ALM",
		.matches = {
			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SCHNEIDER"),
			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SCL142ALM"),
		},
		.driver_data = (void *)&sipodev_desc
	},
	{ }	/* Terminate list */
};
@@ -81,6 +81,7 @@
 * @isr_mask: cached copy of local ISR enables.
 * @isr_status: cached copy of local ISR status.
 * @lock: spinlock for IRQ synchronization.
 * @isr_mutex: mutex for IRQ thread.
 */
struct altr_i2c_dev {
	void __iomem *base;
@@ -97,6 +98,7 @@ struct altr_i2c_dev {
	u32 isr_mask;
	u32 isr_status;
	spinlock_t lock;	/* IRQ synchronization */
	struct mutex isr_mutex;
};

static void
@@ -256,10 +258,11 @@ static irqreturn_t altr_i2c_isr(int irq, void *_dev)
	struct altr_i2c_dev *idev = _dev;
	u32 status = idev->isr_status;

	mutex_lock(&idev->isr_mutex);
	if (!idev->msg) {
		dev_warn(idev->dev, "unexpected interrupt\n");
		altr_i2c_int_clear(idev, ALTR_I2C_ALL_IRQ);
-		return IRQ_HANDLED;
+		goto out;
	}
	read = (idev->msg->flags & I2C_M_RD) != 0;

@@ -312,6 +315,8 @@ static irqreturn_t altr_i2c_isr(int irq, void *_dev)
		complete(&idev->msg_complete);
		dev_dbg(idev->dev, "Message Complete\n");
	}
out:
	mutex_unlock(&idev->isr_mutex);

	return IRQ_HANDLED;
}
@@ -323,6 +328,7 @@ static int altr_i2c_xfer_msg(struct altr_i2c_dev *idev, struct i2c_msg *msg)
	u32 value;
	u8 addr = i2c_8bit_addr_from_msg(msg);

	mutex_lock(&idev->isr_mutex);
	idev->msg = msg;
	idev->msg_len = msg->len;
	idev->buf = msg->buf;
@@ -347,6 +353,7 @@ static int altr_i2c_xfer_msg(struct altr_i2c_dev *idev, struct i2c_msg *msg)
		altr_i2c_int_enable(idev, imask, true);
		altr_i2c_fill_tx_fifo(idev);
	}
	mutex_unlock(&idev->isr_mutex);

	time_left = wait_for_completion_timeout(&idev->msg_complete,
						ALTR_I2C_XFER_TIMEOUT);
@@ -420,6 +427,7 @@ static int altr_i2c_probe(struct platform_device *pdev)
	idev->dev = &pdev->dev;
	init_completion(&idev->msg_complete);
	spin_lock_init(&idev->lock);
	mutex_init(&idev->isr_mutex);

	ret = device_property_read_u32(idev->dev, "fifo-size",
				       &idev->fifo_size);
@@ -61,7 +61,6 @@ static int vcnl4000_measure(struct vcnl4000_data *data, u8 req_mask,
		u8 rdy_mask, u8 data_reg, int *val)
{
	int tries = 20;
-	__be16 buf;
	int ret;

	mutex_lock(&data->lock);
@@ -88,13 +87,12 @@ static int vcnl4000_measure(struct vcnl4000_data *data, u8 req_mask,
		goto fail;
	}

-	ret = i2c_smbus_read_i2c_block_data(data->client,
-			data_reg, sizeof(buf), (u8 *) &buf);
+	ret = i2c_smbus_read_word_swapped(data->client, data_reg);
	if (ret < 0)
		goto fail;

	mutex_unlock(&data->lock);
-	*val = be16_to_cpu(buf);
+	*val = ret;

	return 0;
@@ -622,6 +622,72 @@ static void dm_bow_dtr(struct dm_target *ti)
 	kfree(bc);
 }

+static void dm_bow_io_hints(struct dm_target *ti, struct queue_limits *limits)
+{
+	struct bow_context *bc = ti->private;
+	const unsigned int block_size = bc->block_size;
+
+	limits->logical_block_size =
+		max_t(unsigned short, limits->logical_block_size, block_size);
+	limits->physical_block_size =
+		max_t(unsigned int, limits->physical_block_size, block_size);
+	limits->io_min = max_t(unsigned int, limits->io_min, block_size);
+
+	if (limits->max_discard_sectors == 0) {
+		limits->discard_granularity = 1 << 12;
+		limits->max_hw_discard_sectors = 1 << 15;
+		limits->max_discard_sectors = 1 << 15;
+		bc->forward_trims = false;
+	} else {
+		limits->discard_granularity = 1 << 12;
+		bc->forward_trims = true;
+	}
+}
+
+static int dm_bow_ctr_optional(struct dm_target *ti, unsigned int argc,
+		char **argv)
+{
+	struct bow_context *bc = ti->private;
+	struct dm_arg_set as;
+	static const struct dm_arg _args[] = {
+		{0, 1, "Invalid number of feature args"},
+	};
+	unsigned int opt_params;
+	const char *opt_string;
+	int err;
+	char dummy;
+
+	as.argc = argc;
+	as.argv = argv;
+
+	err = dm_read_arg_group(_args, &as, &opt_params, &ti->error);
+	if (err)
+		return err;
+
+	while (opt_params--) {
+		opt_string = dm_shift_arg(&as);
+		if (!opt_string) {
+			ti->error = "Not enough feature arguments";
+			return -EINVAL;
+		}
+
+		if (sscanf(opt_string, "block_size:%u%c",
+					&bc->block_size, &dummy) == 1) {
+			if (bc->block_size < SECTOR_SIZE ||
+			    bc->block_size > 4096 ||
+			    !is_power_of_2(bc->block_size)) {
+				ti->error = "Invalid block_size";
+				return -EINVAL;
+			}
+		} else {
+			ti->error = "Invalid feature arguments";
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
 static int dm_bow_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 {
 	struct bow_context *bc;
@@ -629,7 +695,7 @@ static int dm_bow_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	int ret;
 	struct mapped_device *md = dm_table_get_md(ti->table);

-	if (argc != 1) {
+	if (argc < 1) {
 		ti->error = "Invalid argument count";
 		return -EINVAL;
 	}
@@ -652,17 +718,13 @@ static int dm_bow_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 		goto bad;
 	}

-	if (bc->dev->bdev->bd_queue->limits.max_discard_sectors == 0) {
-		bc->dev->bdev->bd_queue->limits.discard_granularity = 1 << 12;
-		bc->dev->bdev->bd_queue->limits.max_hw_discard_sectors = 1 << 15;
-		bc->dev->bdev->bd_queue->limits.max_discard_sectors = 1 << 15;
-		bc->forward_trims = false;
-	} else {
-		bc->dev->bdev->bd_queue->limits.discard_granularity = 1 << 12;
-		bc->forward_trims = true;
+	bc->block_size = bc->dev->bdev->bd_queue->limits.logical_block_size;
+	if (argc > 1) {
+		ret = dm_bow_ctr_optional(ti, argc - 1, &argv[1]);
+		if (ret)
+			goto bad;
 	}

-	bc->block_size = bc->dev->bdev->bd_queue->limits.logical_block_size;
 	bc->block_shift = ilog2(bc->block_size);
 	bc->log_sector = kzalloc(bc->block_size, GFP_KERNEL);
 	if (!bc->log_sector) {
@@ -1206,7 +1268,7 @@ static int dm_bow_iterate_devices(struct dm_target *ti,

 static struct target_type bow_target = {
 	.name = "bow",
-	.version = {1, 1, 1},
+	.version = {1, 2, 0},
 	.module = THIS_MODULE,
 	.ctr = dm_bow_ctr,
 	.dtr = dm_bow_dtr,
@@ -1214,6 +1276,7 @@ static struct target_type bow_target = {
 	.status = dm_bow_status,
 	.prepare_ioctl = dm_bow_prepare_ioctl,
 	.iterate_devices = dm_bow_iterate_devices,
+	.io_hints = dm_bow_io_hints,
 };

 int __init dm_bow_init(void)

@@ -1187,7 +1187,7 @@ bmac_get_station_address(struct net_device *dev, unsigned char *ea)
 	int i;
 	unsigned short data;

-	for (i = 0; i < 6; i++)
+	for (i = 0; i < 3; i++)
 	{
 		reset_and_select_srom(dev);
 		data = read_srom(dev, i + EnetAddressOffset/2, SROMAddressBits);

@@ -45,6 +45,7 @@
 #include <soc/fsl/qe/ucc.h>
 #include <soc/fsl/qe/ucc_fast.h>
 #include <asm/machdep.h>
+#include <net/sch_generic.h>

 #include "ucc_geth.h"

@@ -1551,11 +1552,8 @@ static int ugeth_disable(struct ucc_geth_private *ugeth, enum comm_dir mode)

 static void ugeth_quiesce(struct ucc_geth_private *ugeth)
 {
-	/* Prevent any further xmits, plus detach the device. */
-	netif_device_detach(ugeth->ndev);
-
-	/* Wait for any current xmits to finish. */
-	netif_tx_disable(ugeth->ndev);
+	/* Prevent any further xmits */
+	netif_tx_stop_all_queues(ugeth->ndev);

 	/* Disable the interrupt to avoid NAPI rescheduling. */
 	disable_irq(ugeth->ug_info->uf_info.irq);
@@ -1568,7 +1566,10 @@ static void ugeth_activate(struct ucc_geth_private *ugeth)
 {
 	napi_enable(&ugeth->napi);
 	enable_irq(ugeth->ug_info->uf_info.irq);
-	netif_device_attach(ugeth->ndev);
+
+	/* allow to xmit again */
+	netif_tx_wake_all_queues(ugeth->ndev);
+	__netdev_watchdog_up(ugeth->ndev);
 }

 /* Called every time the controller might need to be made

@@ -2515,20 +2515,20 @@ static int smsc911x_drv_probe(struct platform_device *pdev)

 	retval = smsc911x_init(dev);
 	if (retval < 0)
-		goto out_disable_resources;
+		goto out_init_fail;

 	netif_carrier_off(dev);

 	retval = smsc911x_mii_init(pdev, dev);
 	if (retval) {
 		SMSC_WARN(pdata, probe, "Error %i initialising mii", retval);
-		goto out_disable_resources;
+		goto out_init_fail;
 	}

 	retval = register_netdev(dev);
 	if (retval) {
 		SMSC_WARN(pdata, probe, "Error %i registering device", retval);
-		goto out_disable_resources;
+		goto out_init_fail;
 	} else {
 		SMSC_TRACE(pdata, probe,
 			   "Network interface: \"%s\"", dev->name);
@@ -2569,9 +2569,10 @@ static int smsc911x_drv_probe(struct platform_device *pdev)

 	return 0;

-out_disable_resources:
+out_init_fail:
 	pm_runtime_put(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
+out_disable_resources:
 	(void)smsc911x_disable_resources(pdev);
 out_enable_resources_fail:
 	smsc911x_free_resources(pdev);

@@ -330,6 +330,19 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
 	/* Enable PTP clock */
 	regmap_read(gmac->nss_common, NSS_COMMON_CLK_GATE, &val);
 	val |= NSS_COMMON_CLK_GATE_PTP_EN(gmac->id);
+	switch (gmac->phy_mode) {
+	case PHY_INTERFACE_MODE_RGMII:
+		val |= NSS_COMMON_CLK_GATE_RGMII_RX_EN(gmac->id) |
+			NSS_COMMON_CLK_GATE_RGMII_TX_EN(gmac->id);
+		break;
+	case PHY_INTERFACE_MODE_SGMII:
+		val |= NSS_COMMON_CLK_GATE_GMII_RX_EN(gmac->id) |
+			NSS_COMMON_CLK_GATE_GMII_TX_EN(gmac->id);
+		break;
+	default:
+		/* We don't get here; the switch above will have errored out */
+		unreachable();
+	}
 	regmap_write(gmac->nss_common, NSS_COMMON_CLK_GATE, val);

 	if (gmac->phy_mode == PHY_INTERFACE_MODE_SGMII) {

@@ -497,6 +497,9 @@ static int pppoe_disc_rcv(struct sk_buff *skb, struct net_device *dev,
 	if (!skb)
 		goto out;

+	if (skb->pkt_type != PACKET_HOST)
+		goto abort;
+
 	if (!pskb_may_pull(skb, sizeof(struct pppoe_hdr)))
 		goto abort;

@@ -1249,6 +1249,7 @@ static const struct usb_device_id products[] = {
 	{QMI_FIXED_INTF(0x1bbb, 0x0203, 2)},	/* Alcatel L800MA */
 	{QMI_FIXED_INTF(0x2357, 0x0201, 4)},	/* TP-LINK HSUPA Modem MA180 */
 	{QMI_FIXED_INTF(0x2357, 0x9000, 4)},	/* TP-LINK MA260 */
+	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)},	/* Telit LE910C1-EUX */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)},	/* Telit LE922A */
 	{QMI_FIXED_INTF(0x1bc7, 0x1100, 3)},	/* Telit ME910 */
 	{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)},	/* Telit ME910 dual modem */

@@ -1928,6 +1928,10 @@ static netdev_tx_t mpi_start_xmit(struct sk_buff *skb,
 		airo_print_err(dev->name, "%s: skb == NULL!",__func__);
 		return NETDEV_TX_OK;
 	}
+	if (skb_padto(skb, ETH_ZLEN)) {
+		dev->stats.tx_dropped++;
+		return NETDEV_TX_OK;
+	}
 	npacks = skb_queue_len (&ai->txq);

 	if (npacks >= MAXTXQ - 1) {
@@ -2130,6 +2134,10 @@ static netdev_tx_t airo_start_xmit(struct sk_buff *skb,
 		airo_print_err(dev->name, "%s: skb == NULL!", __func__);
 		return NETDEV_TX_OK;
 	}
+	if (skb_padto(skb, ETH_ZLEN)) {
+		dev->stats.tx_dropped++;
+		return NETDEV_TX_OK;
+	}

 	/* Find a vacant FID */
 	for( i = 0; i < MAX_FIDS / 2 && (fids[i] & 0xffff0000); i++ );
@@ -2204,6 +2212,10 @@ static netdev_tx_t airo_start_xmit11(struct sk_buff *skb,
 		airo_print_err(dev->name, "%s: skb == NULL!", __func__);
 		return NETDEV_TX_OK;
 	}
+	if (skb_padto(skb, ETH_ZLEN)) {
+		dev->stats.tx_dropped++;
+		return NETDEV_TX_OK;
+	}

 	/* Find a vacant FID */
 	for( i = MAX_FIDS / 2; i < MAX_FIDS && (fids[i] & 0xffff0000); i++ );

@@ -64,6 +64,7 @@ static const struct usb_device_id p54u_table[] = {
 	{USB_DEVICE(0x0db0, 0x6826)},	/* MSI UB54G (MS-6826) */
 	{USB_DEVICE(0x107b, 0x55f2)},	/* Gateway WGU-210 (Gemtek) */
 	{USB_DEVICE(0x124a, 0x4023)},	/* Shuttle PN15, Airvast WM168g, IOGear GWU513 */
+	{USB_DEVICE(0x124a, 0x4026)},	/* AirVasT USB wireless device */
 	{USB_DEVICE(0x1435, 0x0210)},	/* Inventel UR054G */
 	{USB_DEVICE(0x15a9, 0x0002)},	/* Gemtek WUBI-100GW 802.11g */
 	{USB_DEVICE(0x1630, 0x0005)},	/* 2Wire 802.11g USB (v1) / Z-Com */

@@ -184,8 +184,10 @@ static int st21nfca_tm_send_atr_res(struct nfc_hci_dev *hdev,
 		memcpy(atr_res->gbi, atr_req->gbi, gb_len);
 		r = nfc_set_remote_general_bytes(hdev->ndev, atr_res->gbi,
 						  gb_len);
-		if (r < 0)
+		if (r < 0) {
+			kfree_skb(skb);
 			return r;
+		}
 	}

 	info->dep_info.curr_nfc_dep_pni = 0;

@@ -400,9 +400,9 @@ static int btt_flog_write(struct arena_info *arena, u32 lane, u32 sub,
 	arena->freelist[lane].sub = 1 - arena->freelist[lane].sub;
 	if (++(arena->freelist[lane].seq) == 4)
 		arena->freelist[lane].seq = 1;
-	if (ent_e_flag(ent->old_map))
+	if (ent_e_flag(le32_to_cpu(ent->old_map)))
 		arena->freelist[lane].has_err = 1;
-	arena->freelist[lane].block = le32_to_cpu(ent_lba(ent->old_map));
+	arena->freelist[lane].block = ent_lba(le32_to_cpu(ent->old_map));

 	return ret;
 }
@@ -568,8 +568,8 @@ static int btt_freelist_init(struct arena_info *arena)
 		 * FIXME: if error clearing fails during init, we want to make
 		 * the BTT read-only
 		 */
-		if (ent_e_flag(log_new.old_map) &&
-		    !ent_normal(log_new.old_map)) {
+		if (ent_e_flag(le32_to_cpu(log_new.old_map)) &&
+		    !ent_normal(le32_to_cpu(log_new.old_map))) {
 			arena->freelist[i].has_err = 1;
 			ret = arena_clear_freelist_error(arena, i);
 			if (ret)

@@ -1978,7 +1978,7 @@ struct device *create_namespace_pmem(struct nd_region *nd_region,
 		nd_mapping = &nd_region->mapping[i];
 		label_ent = list_first_entry_or_null(&nd_mapping->labels,
 						     typeof(*label_ent), list);
-		label0 = label_ent ? label_ent->label : 0;
+		label0 = label_ent ? label_ent->label : NULL;

 		if (!label0) {
 			WARN_ON(1);
@@ -2315,8 +2315,9 @@ static struct device **scan_labels(struct nd_region *nd_region)
 			continue;

 		/* skip labels that describe extents outside of the region */
-		if (nd_label->dpa < nd_mapping->start || nd_label->dpa > map_end)
-			continue;
+		if (__le64_to_cpu(nd_label->dpa) < nd_mapping->start ||
+		    __le64_to_cpu(nd_label->dpa) > map_end)
+			continue;

 		i = add_namespace_resource(nd_region, nd_label, devs, count);
 		if (i < 0)

@@ -30,19 +30,6 @@ static int qfprom_reg_read(void *context,
 	return 0;
 }

-static int qfprom_reg_write(void *context,
-			 unsigned int reg, void *_val, size_t bytes)
-{
-	void __iomem *base = context;
-	u8 *val = _val;
-	int i = 0, words = bytes;
-
-	while (words--)
-		writeb(*val++, base + reg + i++);
-
-	return 0;
-}
-
 static int qfprom_remove(struct platform_device *pdev)
 {
 	struct nvmem_device *nvmem = platform_get_drvdata(pdev);
@@ -56,7 +43,6 @@ static struct nvmem_config econfig = {
 	.stride = 1,
 	.word_size = 1,
 	.reg_read = qfprom_reg_read,
-	.reg_write = qfprom_reg_write,
 };

 static int qfprom_probe(struct platform_device *pdev)

@@ -655,12 +655,13 @@ static void hisi_sas_port_notify_formed(struct asd_sas_phy *sas_phy)
 	struct hisi_hba *hisi_hba = sas_ha->lldd_ha;
 	struct hisi_sas_phy *phy = sas_phy->lldd_phy;
 	struct asd_sas_port *sas_port = sas_phy->port;
-	struct hisi_sas_port *port = to_hisi_sas_port(sas_port);
+	struct hisi_sas_port *port;
 	unsigned long flags;

 	if (!sas_port)
 		return;

+	port = to_hisi_sas_port(sas_port);
 	spin_lock_irqsave(&hisi_hba->lock, flags);
 	port->port_attached = 1;
 	port->id = phy->port_id;

@@ -392,8 +392,8 @@ EXPORT_SYMBOL(scsi_dev_info_list_add_keyed);

 /**
  * scsi_dev_info_list_find - find a matching dev_info list entry.
- * @vendor: vendor string
- * @model: model (product) string
+ * @vendor: full vendor string
+ * @model: full model (product) string
  * @key: specify list to use
  *
  * Description:
@@ -408,7 +408,7 @@ static struct scsi_dev_info_list *scsi_dev_info_list_find(const char *vendor,
 	struct scsi_dev_info_list *devinfo;
 	struct scsi_dev_info_list_table *devinfo_table =
 		scsi_devinfo_lookup_by_key(key);
-	size_t vmax, mmax;
+	size_t vmax, mmax, mlen;
 	const char *vskip, *mskip;

 	if (IS_ERR(devinfo_table))
@@ -447,15 +447,18 @@ static struct scsi_dev_info_list *scsi_dev_info_list_find(const char *vendor,
 			    dev_info_list) {
 		if (devinfo->compatible) {
 			/*
 			 * Behave like the older version of get_device_flags.
+			 * vendor strings must be an exact match
 			 */
-			if (memcmp(devinfo->vendor, vskip, vmax) ||
-			    (vmax < sizeof(devinfo->vendor) &&
-			     devinfo->vendor[vmax]))
+			if (vmax != strlen(devinfo->vendor) ||
+			    memcmp(devinfo->vendor, vskip, vmax))
 				continue;
-			if (memcmp(devinfo->model, mskip, mmax) ||
-			    (mmax < sizeof(devinfo->model) &&
-			     devinfo->model[mmax]))
+
+			/*
+			 * @model specifies the full string, and
+			 * must be larger or equal to devinfo->model
+			 */
+			mlen = strlen(devinfo->model);
+			if (mmax < mlen || memcmp(devinfo->model, mskip, mlen))
 				continue;
 			return devinfo;
 		} else {

@@ -3871,6 +3871,7 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)

 	err = ufshcd_map_sg(hba, lrbp);
 	if (err) {
-		ufshcd_release(hba);
 		lrbp->cmd = NULL;
 		clear_bit_unlock(tag, &hba->lrb_in_use);
+		ufshcd_release_all(hba);

@@ -305,6 +305,9 @@ static int dw_spi_transfer_one(struct spi_master *master,
 	dws->len = transfer->len;
 	spin_unlock_irqrestore(&dws->buf_lock, flags);

+	/* Ensure dw->rx and dw->rx_end are visible */
+	smp_mb();
+
 	spi_enable_chip(dws, 0);

 	/* Handle per transfer options for bpw and speed */

@@ -468,7 +468,7 @@ static inline unsigned char *get_hdr_bssid(unsigned char *pframe)
 /* block-ack parameters */
 #define IEEE80211_ADDBA_PARAM_POLICY_MASK 0x0002
 #define IEEE80211_ADDBA_PARAM_TID_MASK 0x003C
-#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFA0
+#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFC0
 #define IEEE80211_DELBA_PARAM_TID_MASK 0xF000
 #define IEEE80211_DELBA_PARAM_INITIATOR_MASK 0x0800

@@ -562,13 +562,6 @@ struct ieee80211_ht_addt_info {
 #define IEEE80211_HT_IE_NON_GF_STA_PRSNT 0x0004
 #define IEEE80211_HT_IE_NON_HT_STA_PRSNT 0x0010

-/* block-ack parameters */
-#define IEEE80211_ADDBA_PARAM_POLICY_MASK 0x0002
-#define IEEE80211_ADDBA_PARAM_TID_MASK 0x003C
-#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFA0
-#define IEEE80211_DELBA_PARAM_TID_MASK 0xF000
-#define IEEE80211_DELBA_PARAM_INITIATOR_MASK 0x0800
-
 /*
  * A-PMDU buffer sizes
  * According to IEEE802.11n spec size varies from 8K to 64K (in powers of 2)

@@ -357,15 +357,14 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
 	 * tty fields and return the kref reference.
 	 */
 	if (rc) {
-		tty_port_tty_set(&hp->port, NULL);
-		tty->driver_data = NULL;
-		tty_port_put(&hp->port);
 		printk(KERN_ERR "hvc_open: request_irq failed with rc %d.\n", rc);
-	} else
+	} else {
 		/* We are ready... raise DTR/RTS */
 		if (C_BAUD(tty))
 			if (hp->ops->dtr_rts)
 				hp->ops->dtr_rts(hp, 1);
+		tty_port_set_initialized(&hp->port, true);
+	}

 	/* Force wakeup of the polling thread */
 	hvc_kick();
@@ -375,22 +374,12 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)

 static void hvc_close(struct tty_struct *tty, struct file * filp)
 {
-	struct hvc_struct *hp;
+	struct hvc_struct *hp = tty->driver_data;
 	unsigned long flags;

 	if (tty_hung_up_p(filp))
 		return;

-	/*
-	 * No driver_data means that this close was issued after a failed
-	 * hvc_open by the tty layer's release_dev() function and we can just
-	 * exit cleanly because the kref reference wasn't made.
-	 */
-	if (!tty->driver_data)
-		return;
-
-	hp = tty->driver_data;
-
 	spin_lock_irqsave(&hp->port.lock, flags);

 	if (--hp->port.count == 0) {
@@ -398,6 +387,9 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
 		/* We are done with the tty pointer now. */
 		tty_port_tty_set(&hp->port, NULL);

+		if (!tty_port_initialized(&hp->port))
+			return;
+
 		if (C_HUPCL(tty))
 			if (hp->ops->dtr_rts)
 				hp->ops->dtr_rts(hp, 0);
@@ -414,6 +406,7 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
 		 * waking periodically to check chars_in_buffer().
 		 */
 		tty_wait_until_sent(tty, HVC_CLOSE_WAIT);
+		tty_port_set_initialized(&hp->port, false);
 	} else {
 		if (hp->port.count < 0)
 			printk(KERN_ERR "hvc_close %X: oops, count is %d\n",

@@ -126,7 +126,11 @@ static DEFINE_SPINLOCK(func_buf_lock); /* guard 'func_buf' and friends */
 static unsigned long key_down[BITS_TO_LONGS(KEY_CNT)];	/* keyboard key bitmap */
 static unsigned char shift_down[NR_SHIFT];		/* shift state counters.. */
 static bool dead_key_next;
-static int npadch = -1;					/* -1 or number assembled on pad */
+
+/* Handles a number being assembled on the number pad */
+static bool npadch_active;
+static unsigned int npadch_value;
+
 static unsigned int diacr;
 static char rep;					/* flag telling character repeat */

@@ -816,12 +820,12 @@ static void k_shift(struct vc_data *vc, unsigned char value, char up_flag)
 		shift_state &= ~(1 << value);

 	/* kludge */
-	if (up_flag && shift_state != old_state && npadch != -1) {
+	if (up_flag && shift_state != old_state && npadch_active) {
 		if (kbd->kbdmode == VC_UNICODE)
-			to_utf8(vc, npadch);
+			to_utf8(vc, npadch_value);
 		else
-			put_queue(vc, npadch & 0xff);
-		npadch = -1;
+			put_queue(vc, npadch_value & 0xff);
+		npadch_active = false;
 	}
 }

@@ -839,7 +843,7 @@ static void k_meta(struct vc_data *vc, unsigned char value, char up_flag)

 static void k_ascii(struct vc_data *vc, unsigned char value, char up_flag)
 {
-	int base;
+	unsigned int base;

 	if (up_flag)
 		return;
@@ -853,10 +857,12 @@ static void k_ascii(struct vc_data *vc, unsigned char value, char up_flag)
 		base = 16;
 	}

-	if (npadch == -1)
-		npadch = value;
-	else
-		npadch = npadch * base + value;
+	if (!npadch_active) {
+		npadch_value = 0;
+		npadch_active = true;
+	}
+
+	npadch_value = npadch_value * base + value;
 }

 static void k_lock(struct vc_data *vc, unsigned char value, char up_flag)

@@ -602,7 +602,7 @@ static void acm_softint(struct work_struct *work)
 	}

 	if (test_and_clear_bit(ACM_ERROR_DELAY, &acm->flags)) {
-		for (i = 0; i < ACM_NR; i++)
+		for (i = 0; i < acm->rx_buflimit; i++)
 			if (test_and_clear_bit(i, &acm->urbs_in_error_delay))
 				acm_submit_read_urb(acm, i, GFP_NOIO);
 	}

@@ -2749,6 +2749,13 @@ static int musb_resume(struct device *dev)
 	musb_enable_interrupts(musb);
 	musb_platform_enable(musb);

+	/* session might be disabled in suspend */
+	if (musb->port_mode == MUSB_HOST &&
+	    !(musb->ops->quirks & MUSB_PRESERVE_SESSION)) {
+		devctl |= MUSB_DEVCTL_SESSION;
+		musb_writeb(musb->mregs, MUSB_DEVCTL, devctl);
+	}
+
 	spin_lock_irqsave(&musb->lock, flags);
 	error = musb_run_resume_work(musb);
 	if (error)

@@ -206,6 +206,11 @@ static ssize_t musb_test_mode_write(struct file *file,
 	u8 test;
 	char buf[24];

+	memset(buf, 0x00, sizeof(buf));
+
+	if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
+		return -EFAULT;
+
 	pm_runtime_get_sync(musb->controller);
 	test = musb_readb(musb->mregs, MUSB_TESTMODE);
 	if (test) {
@@ -214,11 +219,6 @@ static ssize_t musb_test_mode_write(struct file *file,
 		goto ret;
 	}

-	memset(buf, 0x00, sizeof(buf));
-
-	if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
-		return -EFAULT;
-
 	if (strstarts(buf, "force host full-speed"))
 		test = MUSB_TEST_FORCE_HOST | MUSB_TEST_FORCE_FS;

@@ -1160,6 +1160,10 @@ static const struct usb_device_id option_ids[] = {
 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) },
 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) },
 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) },
+	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1031, 0xff),	/* Telit LE910C1-EUX */
+	  .driver_info = NCTRL(0) | RSVD(3) },
+	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1033, 0xff),	/* Telit LE910C1-EUX (ECM) */
+	  .driver_info = NCTRL(0) },
 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
 	  .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),

@@ -177,6 +177,7 @@ static const struct usb_device_id id_table[] = {
 	{DEVICE_SWI(0x413c, 0x81b3)},	/* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
 	{DEVICE_SWI(0x413c, 0x81b5)},	/* Dell Wireless 5811e QDL */
 	{DEVICE_SWI(0x413c, 0x81b6)},	/* Dell Wireless 5811e QDL */
+	{DEVICE_SWI(0x413c, 0x81cb)},	/* Dell Wireless 5816e QDL */
 	{DEVICE_SWI(0x413c, 0x81cc)},	/* Dell Wireless 5816e */
 	{DEVICE_SWI(0x413c, 0x81cf)},	/* Dell Wireless 5819 */
 	{DEVICE_SWI(0x413c, 0x81d0)},	/* Dell Wireless 5819 */

@@ -302,6 +302,10 @@ static void usb_wwan_indat_callback(struct urb *urb)
 	if (status) {
 		dev_dbg(dev, "%s: nonzero status: %d on endpoint %02x.\n",
 			__func__, status, endpoint);
+
+		/* don't resubmit on fatal errors */
+		if (status == -ESHUTDOWN || status == -ENOENT)
+			return;
 	} else {
 		if (urb->actual_length) {
 			tty_insert_flip_string(&port->port, data,

@@ -9,7 +9,6 @@ config INCREMENTAL_FS
 	select X509_CERTIFICATE_PARSER
 	select ASYMMETRIC_KEY_TYPE
 	select ASYMMETRIC_PUBLIC_KEY_SUBTYPE
-	select PKCS7_MESSAGE_PARSER
 	help
 	  Incremental FS is a read-only virtual file system that facilitates execution
 	  of programs while their binaries are still being lazily downloaded over the

@@ -2,15 +2,16 @@
|
||||
/*
|
||||
* Copyright 2019 Google LLC
|
||||
*/
|
||||
#include <linux/gfp.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/file.h>
|
||||
#include <linux/ktime.h>
|
||||
#include <linux/mm.h>
|
||||
#include <linux/workqueue.h>
|
||||
#include <linux/lz4.h>
|
||||
#include <linux/crc32.h>
|
||||
#include <linux/file.h>
|
||||
#include <linux/gfp.h>
|
||||
#include <linux/ktime.h>
|
||||
#include <linux/lz4.h>
|
||||
#include <linux/mm.h>
|
||||
#include <linux/pagemap.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/workqueue.h>
|
||||
|
||||
#include "data_mgmt.h"
|
||||
#include "format.h"
|
||||
@@ -179,7 +180,8 @@ struct data_file *incfs_open_data_file(struct mount_info *mi, struct file *bf)
|
||||
out:
|
||||
if (error) {
|
||||
incfs_free_bfc(bfc);
|
||||
df->df_backing_file_context = NULL;
|
||||
if (df)
|
||||
df->df_backing_file_context = NULL;
|
||||
incfs_free_data_file(df);
|
||||
return ERR_PTR(error);
|
||||
}
|
||||
@@ -382,24 +384,25 @@ static void log_block_read(struct mount_info *mi, incfs_uuid_t *id,
|
||||
++head->current_record_no;
|
||||
|
||||
spin_unlock(&log->rl_lock);
|
||||
if (schedule_delayed_work(&log->ml_wakeup_work, msecs_to_jiffies(16)))
|
||||
pr_debug("incfs: scheduled a log pollers wakeup");
|
||||
schedule_delayed_work(&log->ml_wakeup_work, msecs_to_jiffies(16));
|
||||
}
|
||||
|
||||
static int validate_hash_tree(struct file *bf, struct data_file *df,
|
||||
int block_index, struct mem_range data, u8 *buf)
|
||||
static int validate_hash_tree(struct file *bf, struct file *f, int block_index,
|
||||
struct mem_range data, u8 *buf)
|
||||
{
|
||||
u8 digest[INCFS_MAX_HASH_SIZE] = {};
|
||||
struct data_file *df = get_incfs_data_file(f);
|
||||
u8 stored_digest[INCFS_MAX_HASH_SIZE] = {};
|
||||
u8 calculated_digest[INCFS_MAX_HASH_SIZE] = {};
|
||||
struct mtree *tree = NULL;
|
||||
struct incfs_df_signature *sig = NULL;
|
||||
struct mem_range calc_digest_rng;
|
||||
struct mem_range saved_digest_rng;
|
||||
struct mem_range root_hash_rng;
|
||||
int digest_size;
|
||||
int hash_block_index = block_index;
|
||||
int hash_per_block;
|
||||
int lvl = 0;
|
||||
int lvl;
|
||||
int res;
|
||||
loff_t hash_block_offset[INCFS_MAX_MTREE_LEVELS];
|
||||
size_t hash_offset_in_block[INCFS_MAX_MTREE_LEVELS];
|
||||
int hash_per_block;
|
||||
pgoff_t file_pages;
|
||||
|
||||
tree = df->df_hash_tree;
|
||||
sig = df->df_signature;
|
||||
@@ -408,38 +411,60 @@ static int validate_hash_tree(struct file *bf, struct data_file *df,
|
||||
|
||||
digest_size = tree->alg->digest_size;
|
||||
hash_per_block = INCFS_DATA_FILE_BLOCK_SIZE / digest_size;
|
||||
calc_digest_rng = range(digest, digest_size);
|
||||
res = incfs_calc_digest(tree->alg, data, calc_digest_rng);
|
||||
if (res)
|
||||
return res;
|
||||
|
||||
for (lvl = 0; lvl < tree->depth; lvl++) {
|
||||
loff_t lvl_off =
|
||||
tree->hash_level_suboffset[lvl] + sig->hash_offset;
|
||||
loff_t hash_block_off = lvl_off +
|
||||
round_down(hash_block_index * digest_size,
|
||||
INCFS_DATA_FILE_BLOCK_SIZE);
|
||||
size_t hash_off_in_block = hash_block_index * digest_size
|
||||
% INCFS_DATA_FILE_BLOCK_SIZE;
|
||||
struct mem_range buf_range = range(buf,
|
||||
INCFS_DATA_FILE_BLOCK_SIZE);
|
||||
ssize_t read_res = incfs_kread(bf, buf,
|
||||
INCFS_DATA_FILE_BLOCK_SIZE, hash_block_off);
|
||||
loff_t lvl_off = tree->hash_level_suboffset[lvl];
|
||||
|
||||
if (read_res < 0)
|
||||
return read_res;
|
||||
if (read_res != INCFS_DATA_FILE_BLOCK_SIZE)
|
||||
hash_block_offset[lvl] =
|
||||
lvl_off + round_down(hash_block_index * digest_size,
|
||||
INCFS_DATA_FILE_BLOCK_SIZE);
|
||||
hash_offset_in_block[lvl] = hash_block_index * digest_size %
|
||||
INCFS_DATA_FILE_BLOCK_SIZE;
|
||||
hash_block_index /= hash_per_block;
|
||||
}
|
||||
|
||||
memcpy(stored_digest, tree->root_hash, digest_size);
|
||||
|
||||
+	file_pages = DIV_ROUND_UP(df->df_size, INCFS_DATA_FILE_BLOCK_SIZE);
+	for (lvl = tree->depth - 1; lvl >= 0; lvl--) {
+		pgoff_t hash_page =
+			file_pages +
+			hash_block_offset[lvl] / INCFS_DATA_FILE_BLOCK_SIZE;
+		struct page *page = find_get_page_flags(
+			f->f_inode->i_mapping, hash_page, FGP_ACCESSED);
+
+		if (page && PageChecked(page)) {
+			u8 *addr = kmap_atomic(page);
+
+			memcpy(stored_digest, addr + hash_offset_in_block[lvl],
+			       digest_size);
+			kunmap_atomic(addr);
+			put_page(page);
+			continue;
+		}
+
+		if (page)
+			put_page(page);
+
 		res = incfs_kread(bf, buf, INCFS_DATA_FILE_BLOCK_SIZE,
 				  hash_block_offset[lvl] + sig->hash_offset);
 		if (res < 0)
 			return res;
 		if (res != INCFS_DATA_FILE_BLOCK_SIZE)
 			return -EIO;
+		res = incfs_calc_digest(tree->alg,
+					range(buf, INCFS_DATA_FILE_BLOCK_SIZE),
+					range(calculated_digest, digest_size));
+		if (res)
+			return res;
 
-		saved_digest_rng = range(buf + hash_off_in_block, digest_size);
-		if (!incfs_equal_ranges(calc_digest_rng, saved_digest_rng)) {
+		if (memcmp(stored_digest, calculated_digest, digest_size)) {
 			int i;
 			bool zero = true;
 
 			pr_debug("incfs: Hash mismatch lvl:%d blk:%d\n",
 				 lvl, block_index);
-			for (i = 0; i < saved_digest_rng.len; ++i)
-				if (saved_digest_rng.data[i]) {
+			for (i = 0; i < digest_size; i++)
+				if (stored_digest[i]) {
 					zero = false;
 					break;
 				}
@@ -449,17 +474,31 @@ static int validate_hash_tree(struct file *bf, struct data_file *df,
 			return -EBADMSG;
 		}
 
-		res = incfs_calc_digest(tree->alg, buf_range, calc_digest_rng);
-		if (res)
-			return res;
-		hash_block_index /= hash_per_block;
+		memcpy(stored_digest, buf + hash_offset_in_block[lvl],
+		       digest_size);
+
+		page = grab_cache_page(f->f_inode->i_mapping, hash_page);
+		if (page) {
+			u8 *addr = kmap_atomic(page);
+
+			memcpy(addr, buf, INCFS_DATA_FILE_BLOCK_SIZE);
+			kunmap_atomic(addr);
+			SetPageChecked(page);
+			unlock_page(page);
+			put_page(page);
+		}
 	}
 
-	root_hash_rng = range(tree->root_hash, digest_size);
-	if (!incfs_equal_ranges(calc_digest_rng, root_hash_rng)) {
-		pr_debug("incfs: Root hash mismatch blk:%d\n", block_index);
+	res = incfs_calc_digest(tree->alg, data,
+				range(calculated_digest, digest_size));
+	if (res)
+		return res;
+
+	if (memcmp(stored_digest, calculated_digest, digest_size)) {
+		pr_debug("incfs: Leaf hash mismatch blk:%d\n", block_index);
 		return -EBADMSG;
 	}
 
 	return 0;
 }
@@ -871,7 +910,7 @@ static int wait_for_data_block(struct data_file *df, int block_index,
 	return error;
 }
 
-ssize_t incfs_read_data_file_block(struct mem_range dst, struct data_file *df,
+ssize_t incfs_read_data_file_block(struct mem_range dst, struct file *f,
 				   int index, int timeout_ms,
 				   struct mem_range tmp)
 {
@@ -881,6 +920,7 @@ ssize_t incfs_read_data_file_block(struct mem_range dst, struct data_file *df,
 	struct mount_info *mi = NULL;
 	struct file *bf = NULL;
 	struct data_file_block block = {};
+	struct data_file *df = get_incfs_data_file(f);
 
 	if (!dst.data || !df)
 		return -EFAULT;
@@ -923,7 +963,7 @@ ssize_t incfs_read_data_file_block(struct mem_range dst, struct data_file *df,
 	}
 
 	if (result > 0) {
-		int err = validate_hash_tree(bf, df, index, dst, tmp.data);
+		int err = validate_hash_tree(bf, f, index, dst, tmp.data);
 
 		if (err < 0)
 			result = err;
@@ -1055,11 +1095,12 @@ int incfs_process_new_hash_block(struct data_file *df,
 	}
 
 	error = mutex_lock_interruptible(&bfc->bc_mutex);
-	if (!error)
+	if (!error) {
 		error = incfs_write_hash_block_to_backing_file(
 			bfc, range(data, block->data_len), block->block_index,
 			hash_area_base, df->df_blockmap_off, df->df_size);
-	mutex_unlock(&bfc->bc_mutex);
+		mutex_unlock(&bfc->bc_mutex);
+	}
 	return error;
 }
 
@@ -1112,6 +1153,9 @@ static int process_file_signature_md(struct incfs_file_signature *sg,
 	void *buf = NULL;
 	ssize_t read;
 
+	if (!signature)
+		return -ENOMEM;
+
 	if (!df || !df->df_backing_file_context ||
 	    !df->df_backing_file_context->bc_file) {
 		error = -ENOENT;
@@ -276,7 +276,7 @@ int incfs_scan_metadata_chain(struct data_file *df);
 struct dir_file *incfs_open_dir_file(struct mount_info *mi, struct file *bf);
 void incfs_free_dir_file(struct dir_file *dir);
 
-ssize_t incfs_read_data_file_block(struct mem_range dst, struct data_file *df,
+ssize_t incfs_read_data_file_block(struct mem_range dst, struct file *f,
 				   int index, int timeout_ms,
 				   struct mem_range tmp);
 
@@ -315,7 +315,7 @@ static inline struct inode_info *get_incfs_node(struct inode *inode)
 	if (!inode)
 		return NULL;
 
-	if (inode->i_sb->s_magic != INCFS_MAGIC_NUMBER) {
+	if (inode->i_sb->s_magic != (long) INCFS_MAGIC_NUMBER) {
 		/* This inode doesn't belong to us. */
 		pr_warn_once("incfs: %s on an alien inode.", __func__);
 		return NULL;
@@ -6,7 +6,6 @@
|
||||
#include <crypto/hash.h>
|
||||
#include <linux/err.h>
|
||||
#include <linux/version.h>
|
||||
#include <crypto/pkcs7.h>
|
||||
|
||||
#include "integrity.h"
|
||||
|
||||
|
||||
@@ -818,7 +818,7 @@ static int read_single_page(struct file *f, struct page *page)
 	tmp.data = (u8 *)__get_free_pages(GFP_NOFS, get_order(tmp.len));
 	bytes_to_read = min_t(loff_t, size - offset, PAGE_SIZE);
 	read_result = incfs_read_data_file_block(
-		range(page_start, bytes_to_read), df, block_index,
+		range(page_start, bytes_to_read), f, block_index,
 		timeout_ms, tmp);
 
 	free_pages((unsigned long)tmp.data, get_order(tmp.len));
@@ -607,6 +607,10 @@ struct mips_cdmm_device_id {
 /*
  * MODULE_DEVICE_TABLE expects this struct to be called x86cpu_device_id.
  * Although gcc seems to ignore this error, clang fails without this define.
+ *
+ * Note: The ordering of the struct is different from upstream because the
+ * static initializers in kernels < 5.7 still use C89 style while upstream
+ * has been converted to proper C99 initializers.
  */
 #define x86cpu_device_id x86_cpu_id
 struct x86_cpu_id {
@@ -615,6 +619,7 @@ struct x86_cpu_id {
 	__u16 model;
 	__u16 feature;	/* bit index */
 	kernel_ulong_t driver_data;
+	__u16 steppings;
 };
 
 #define X86_FEATURE_MATCH(x) \
@@ -623,6 +628,7 @@ struct x86_cpu_id {
 #define X86_VENDOR_ANY 0xffff
 #define X86_FAMILY_ANY 0
 #define X86_MODEL_ANY  0
+#define X86_STEPPING_ANY 0
 #define X86_FEATURE_ANY 0	/* Same as FPU, you can't test for that */
 
 /*
@@ -31,6 +31,7 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
 {
 	unsigned int gso_type = 0;
 	unsigned int thlen = 0;
+	unsigned int p_off = 0;
 	unsigned int ip_proto;
 
 	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
@@ -68,7 +69,8 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
 		if (!skb_partial_csum_set(skb, start, off))
 			return -EINVAL;
 
-		if (skb_transport_offset(skb) + thlen > skb_headlen(skb))
+		p_off = skb_transport_offset(skb) + thlen;
+		if (p_off > skb_headlen(skb))
 			return -EINVAL;
 	} else {
 		/* gso packets without NEEDS_CSUM do not set transport_offset.
@@ -90,17 +92,25 @@ retry:
 				return -EINVAL;
 			}
 
-			if (keys.control.thoff + thlen > skb_headlen(skb) ||
+			p_off = keys.control.thoff + thlen;
+			if (p_off > skb_headlen(skb) ||
 			    keys.basic.ip_proto != ip_proto)
 				return -EINVAL;
 
 			skb_set_transport_header(skb, keys.control.thoff);
+		} else if (gso_type) {
+			p_off = thlen;
+			if (p_off > skb_headlen(skb))
+				return -EINVAL;
 		}
 	}
 
 	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
 		u16 gso_size = __virtio16_to_cpu(little_endian, hdr->gso_size);
 
+		if (skb->len - p_off <= gso_size)
+			return -EINVAL;
+
 		skb_shinfo(skb)->gso_size = gso_size;
 		skb_shinfo(skb)->gso_type = gso_type;
 
@@ -3,6 +3,7 @@
 #define LINUX_MMC_IOCTL_H
 
 #include <linux/types.h>
+#include <linux/major.h>
 
 struct mmc_ioc_cmd {
 	/* Implies direction of data.  true = write, false = read */
@@ -612,10 +612,6 @@ static int prepare_uprobe(struct uprobe *uprobe, struct file *file,
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
/* uprobe_write_opcode() assumes we don't cross page boundary */
|
||||
BUG_ON((uprobe->offset & ~PAGE_MASK) +
|
||||
UPROBE_SWBP_INSN_SIZE > PAGE_SIZE);
|
||||
|
||||
smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */
|
||||
set_bit(UPROBE_COPY_INSN, &uprobe->flags);
|
||||
|
||||
@@ -894,6 +890,13 @@ int uprobe_register(struct inode *inode, loff_t offset, struct uprobe_consumer *
|
||||
if (offset > i_size_read(inode))
|
||||
return -EINVAL;
|
||||
|
||||
/*
|
||||
* This ensures that copy_from_page() and copy_to_page()
|
||||
* can't cross page boundary.
|
||||
*/
|
||||
if (!IS_ALIGNED(offset, UPROBE_SWBP_INSN_SIZE))
|
||||
return -EINVAL;
|
||||
|
||||
retry:
|
||||
uprobe = alloc_uprobe(inode, offset);
|
||||
if (!uprobe)
|
||||
@@ -1704,6 +1707,9 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
|
||||
uprobe_opcode_t opcode;
|
||||
int result;
|
||||
|
||||
if (WARN_ON_ONCE(!IS_ALIGNED(vaddr, UPROBE_SWBP_INSN_SIZE)))
|
||||
return -EINVAL;
|
||||
|
||||
pagefault_disable();
|
||||
result = __get_user(opcode, (uprobe_opcode_t __user *)vaddr);
|
||||
pagefault_enable();
|
||||
|
||||
@@ -580,6 +580,11 @@ struct rchan *relay_open(const char *base_filename,
 		return NULL;
 
 	chan->buf = alloc_percpu(struct rchan_buf *);
+	if (!chan->buf) {
+		kfree(chan);
+		return NULL;
+	}
+
 	chan->version = RELAYFS_CHANNEL_VERSION;
 	chan->n_subbufs = n_subbufs;
 	chan->subbuf_size = subbuf_size;
@@ -223,7 +223,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		new_pmd = alloc_new_pmd(vma->vm_mm, vma, new_addr);
 		if (!new_pmd)
 			break;
-		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd)) {
+		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) || pmd_devmap(*old_pmd)) {
 			if (extent == HPAGE_PMD_SIZE) {
 				bool moved;
 				/* See comment in move_ptes() */
@@ -262,6 +262,7 @@ static struct in_device *inetdev_init(struct net_device *dev)
 	err = devinet_sysctl_register(in_dev);
 	if (err) {
 		in_dev->dead = 1;
+		neigh_parms_release(&arp_tbl, in_dev->arp_parms);
 		in_dev_put(in_dev);
 		in_dev = NULL;
 		goto out;
@@ -1581,6 +1581,8 @@ int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32
 			   tunnel_id, fd);
 		goto err;
 	}
+	if (sk->sk_family != PF_INET && sk->sk_family != PF_INET6)
+		goto err;
 	switch (encap) {
 	case L2TP_ENCAPTYPE_UDP:
 		if (sk->sk_protocol != IPPROTO_UDP) {
@@ -24,7 +24,6 @@
|
||||
#include <net/icmp.h>
|
||||
#include <net/udp.h>
|
||||
#include <net/inet_common.h>
|
||||
#include <net/inet_hashtables.h>
|
||||
#include <net/tcp_states.h>
|
||||
#include <net/protocol.h>
|
||||
#include <net/xfrm.h>
|
||||
@@ -215,15 +214,31 @@ discard:
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int l2tp_ip_hash(struct sock *sk)
|
||||
{
|
||||
if (sk_unhashed(sk)) {
|
||||
write_lock_bh(&l2tp_ip_lock);
|
||||
sk_add_node(sk, &l2tp_ip_table);
|
||||
write_unlock_bh(&l2tp_ip_lock);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void l2tp_ip_unhash(struct sock *sk)
|
||||
{
|
||||
if (sk_unhashed(sk))
|
||||
return;
|
||||
write_lock_bh(&l2tp_ip_lock);
|
||||
sk_del_node_init(sk);
|
||||
write_unlock_bh(&l2tp_ip_lock);
|
||||
}
|
||||
|
||||
static int l2tp_ip_open(struct sock *sk)
|
||||
{
|
||||
/* Prevent autobind. We don't have ports. */
|
||||
inet_sk(sk)->inet_num = IPPROTO_L2TP;
|
||||
|
||||
write_lock_bh(&l2tp_ip_lock);
|
||||
sk_add_node(sk, &l2tp_ip_table);
|
||||
write_unlock_bh(&l2tp_ip_lock);
|
||||
|
||||
l2tp_ip_hash(sk);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -605,8 +620,8 @@ static struct proto l2tp_ip_prot = {
|
||||
.sendmsg = l2tp_ip_sendmsg,
|
||||
.recvmsg = l2tp_ip_recvmsg,
|
||||
.backlog_rcv = l2tp_ip_backlog_recv,
|
||||
.hash = inet_hash,
|
||||
.unhash = inet_unhash,
|
||||
.hash = l2tp_ip_hash,
|
||||
.unhash = l2tp_ip_unhash,
|
||||
.obj_size = sizeof(struct l2tp_ip_sock),
|
||||
#ifdef CONFIG_COMPAT
|
||||
.compat_setsockopt = compat_ip_setsockopt,
|
||||
|
||||
@@ -24,8 +24,6 @@
|
||||
#include <net/icmp.h>
|
||||
#include <net/udp.h>
|
||||
#include <net/inet_common.h>
|
||||
#include <net/inet_hashtables.h>
|
||||
#include <net/inet6_hashtables.h>
|
||||
#include <net/tcp_states.h>
|
||||
#include <net/protocol.h>
|
||||
#include <net/xfrm.h>
|
||||
@@ -228,15 +226,31 @@ discard:
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int l2tp_ip6_hash(struct sock *sk)
|
||||
{
|
||||
if (sk_unhashed(sk)) {
|
||||
write_lock_bh(&l2tp_ip6_lock);
|
||||
sk_add_node(sk, &l2tp_ip6_table);
|
||||
write_unlock_bh(&l2tp_ip6_lock);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void l2tp_ip6_unhash(struct sock *sk)
|
||||
{
|
||||
if (sk_unhashed(sk))
|
||||
return;
|
||||
write_lock_bh(&l2tp_ip6_lock);
|
||||
sk_del_node_init(sk);
|
||||
write_unlock_bh(&l2tp_ip6_lock);
|
||||
}
|
||||
|
||||
static int l2tp_ip6_open(struct sock *sk)
|
||||
{
|
||||
/* Prevent autobind. We don't have ports. */
|
||||
inet_sk(sk)->inet_num = IPPROTO_L2TP;
|
||||
|
||||
write_lock_bh(&l2tp_ip6_lock);
|
||||
sk_add_node(sk, &l2tp_ip6_table);
|
||||
write_unlock_bh(&l2tp_ip6_lock);
|
||||
|
||||
l2tp_ip6_hash(sk);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -741,8 +755,8 @@ static struct proto l2tp_ip6_prot = {
|
||||
.sendmsg = l2tp_ip6_sendmsg,
|
||||
.recvmsg = l2tp_ip6_recvmsg,
|
||||
.backlog_rcv = l2tp_ip6_backlog_recv,
|
||||
.hash = inet6_hash,
|
||||
.unhash = inet_unhash,
|
||||
.hash = l2tp_ip6_hash,
|
||||
.unhash = l2tp_ip6_unhash,
|
||||
.obj_size = sizeof(struct l2tp_ip6_sock),
|
||||
#ifdef CONFIG_COMPAT
|
||||
.compat_setsockopt = compat_ipv6_setsockopt,
|
||||
|
||||
@@ -1290,7 +1290,7 @@ static int vsock_accept(struct socket *sock, struct socket *newsock, int flags,
 	/* Wait for children sockets to appear; these are the new sockets
 	 * created upon connection establishment.
 	 */
-	timeout = sock_sndtimeo(listener, flags & O_NONBLOCK);
+	timeout = sock_rcvtimeo(listener, flags & O_NONBLOCK);
 	prepare_to_wait(sk_sleep(listener), &wait, TASK_INTERRUPTIBLE);
 
 	while ((connected = vsock_dequeue_accept(listener)) == NULL &&
@@ -15,12 +15,20 @@ SECTIONS {
 		*(.eh_frame)
 	}
 
-	.bss : { *(.bss .bss.[0-9a-zA-Z_]*) }
-	.data : { *(.data .data.[0-9a-zA-Z_]*) }
+	.bss : {
+		*(.bss .bss.[0-9a-zA-Z_]*)
+		*(.bss..L* .bss..compoundliteral*)
+	}
+
+	.data : {
+		*(.data .data.[0-9a-zA-Z_]*)
+		*(.data..L* .data..compoundliteral*)
+	}
+
 	.rela.data : { *(.rela.data .rela.data.[0-9a-zA-Z_]*) }
 	.rela.rodata : { *(.rela.rodata .rela.rodata.[0-9a-zA-Z_]*) }
 	.rela.text : { *(.rela.text .rela.text.[0-9a-zA-Z_]*) }
-	.rodata : { *(.rodata .rodata.[0-9a-zA-Z_]*) }
+	.rodata : {
+		*(.rodata .rodata.[0-9a-zA-Z_]*)
+		*(.rodata..L* .rodata..compoundliteral*)
+	}
 
 	/*
 	 * With CFI_CLANG, ensure __cfi_check is at the beginning of the
@@ -30,5 +38,4 @@ SECTIONS {
 	*(.text.__cfi_check)
 	*(.text .text.[0-9a-zA-Z_]* .text..L.cfi*)
 	}
-
 }