Merge branch 'android-4.14-stable' of https://android.googlesource.com/kernel/common into HEAD
Change-Id: I361a6992f3dd33708fc0b32ee7b92a403a75bec1
@@ -59,17 +59,18 @@ class
dma_mode
	Transfer modes supported by the device when in DMA mode.
	DMA transfer mode used by the device.
	Mostly used by PATA device.
pio_mode
	Transfer modes supported by the device when in PIO mode.
	PIO transfer mode used by the device.
	Mostly used by PATA device.
xfer_mode
	Current transfer mode.
	Mostly used by PATA device.
id
@@ -384,6 +384,7 @@ What: /sys/devices/system/cpu/vulnerabilities
		/sys/devices/system/cpu/vulnerabilities/srbds
		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
		/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
Date:		January 2018
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Information about CPU vulnerabilities
@@ -15,3 +15,4 @@ are configurable at compile, boot or run time.
   tsx_async_abort
   multihit.rst
   special-register-buffer-data-sampling.rst
   processor_mmio_stale_data.rst
246  Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst  Normal file
@@ -0,0 +1,246 @@
=========================================
Processor MMIO Stale Data Vulnerabilities
=========================================

Processor MMIO Stale Data Vulnerabilities are a class of memory-mapped I/O
(MMIO) vulnerabilities that can expose data. The sequences of operations for
exposing data range from simple to very complex. Because most of the
vulnerabilities require the attacker to have access to MMIO, many environments
are not affected. System environments using virtualization where MMIO access is
provided to untrusted guests may need mitigation. These vulnerabilities are
not transient execution attacks. However, these vulnerabilities may propagate
stale data into core fill buffers where the data can subsequently be inferred
by an unmitigated transient execution attack. Mitigation for these
vulnerabilities includes a combination of microcode update and software
changes, depending on the platform and usage model. Some of these mitigations
are similar to those used to mitigate Microarchitectural Data Sampling (MDS) or
those used to mitigate Special Register Buffer Data Sampling (SRBDS).

Data Propagators
================
Propagators are operations that result in stale data being copied or moved from
one microarchitectural buffer or register to another. Processor MMIO Stale Data
Vulnerabilities are operations that may result in stale data being directly
read into an architectural, software-visible state or sampled from a buffer or
register.

Fill Buffer Stale Data Propagator (FBSDP)
-----------------------------------------
Stale data may propagate from fill buffers (FB) into the non-coherent portion
of the uncore on some non-coherent writes. Fill buffer propagation by itself
does not make stale data architecturally visible. Stale data must be propagated
to a location where it is subject to reading or sampling.

Sideband Stale Data Propagator (SSDP)
-------------------------------------
The sideband stale data propagator (SSDP) is limited to the client (including
Intel Xeon server E3) uncore implementation. The sideband response buffer is
shared by all client cores. For non-coherent reads that go to sideband
destinations, the uncore logic returns 64 bytes of data to the core, including
both requested data and unrequested stale data, from a transaction buffer and
the sideband response buffer. As a result, stale data from the sideband
response and transaction buffers may now reside in a core fill buffer.

Primary Stale Data Propagator (PSDP)
------------------------------------
The primary stale data propagator (PSDP) is limited to the client (including
Intel Xeon server E3) uncore implementation. Similar to the sideband response
buffer, the primary response buffer is shared by all client cores. For some
processors, MMIO primary reads will return 64 bytes of data to the core fill
buffer including both requested data and unrequested stale data. This is
similar to the sideband stale data propagator.

Vulnerabilities
===============
Device Register Partial Write (DRPW) (CVE-2022-21166)
-----------------------------------------------------
Some endpoint MMIO registers incorrectly handle writes that are smaller than
the register size. Instead of aborting the write or only copying the correct
subset of bytes (for example, 2 bytes for a 2-byte write), more bytes than
specified by the write transaction may be written to the register. On
processors affected by FBSDP, this may expose stale data from the fill buffers
of the core that created the write transaction.

Shared Buffers Data Sampling (SBDS) (CVE-2022-21125)
----------------------------------------------------
After propagators may have moved data around the uncore and copied stale data
into client core fill buffers, processors affected by MFBDS can leak data from
the fill buffer. It is limited to the client (including Intel Xeon server E3)
uncore implementation.

Shared Buffers Data Read (SBDR) (CVE-2022-21123)
------------------------------------------------
It is similar to Shared Buffer Data Sampling (SBDS) except that the data is
directly read into the architectural software-visible state. It is limited to
the client (including Intel Xeon server E3) uncore implementation.

Affected Processors
===================
Not all the CPUs are affected by all the variants. For instance, most
processors for the server market (excluding Intel Xeon E3 processors) are
impacted by only Device Register Partial Write (DRPW).

Below is the list of affected Intel processors [#f1]_:

   ===================  ============  =========
   Common name          Family_Model  Steppings
   ===================  ============  =========
   HASWELL_X            06_3FH        2,4
   SKYLAKE_L            06_4EH        3
   BROADWELL_X          06_4FH        All
   SKYLAKE_X            06_55H        3,4,6,7,11
   BROADWELL_D          06_56H        3,4,5
   SKYLAKE              06_5EH        3
   ICELAKE_X            06_6AH        4,5,6
   ICELAKE_D            06_6CH        1
   ICELAKE_L            06_7EH        5
   ATOM_TREMONT_D       06_86H        All
   LAKEFIELD            06_8AH        1
   KABYLAKE_L           06_8EH        9 to 12
   ATOM_TREMONT         06_96H        1
   ATOM_TREMONT_L       06_9CH        0
   KABYLAKE             06_9EH        9 to 13
   COMETLAKE            06_A5H        2,3,5
   COMETLAKE_L          06_A6H        0,1
   ROCKETLAKE           06_A7H        1
   ===================  ============  =========

If a CPU is in the affected processor list, but not affected by a variant, it
is indicated by new bits in MSR IA32_ARCH_CAPABILITIES. As described in a later
section, mitigation largely remains the same for all the variants, i.e. to
clear the CPU fill buffers via VERW instruction.

New bits in MSRs
================
Newer processors and microcode update on existing affected processors added new
bits to IA32_ARCH_CAPABILITIES MSR. These bits can be used to enumerate
specific variants of Processor MMIO Stale Data vulnerabilities and mitigation
capability.

MSR IA32_ARCH_CAPABILITIES
--------------------------
Bit 13 - SBDR_SSDP_NO - When set, processor is not affected by either the
         Shared Buffers Data Read (SBDR) vulnerability or the sideband stale
         data propagator (SSDP).
Bit 14 - FBSDP_NO - When set, processor is not affected by the Fill Buffer
         Stale Data Propagator (FBSDP).
Bit 15 - PSDP_NO - When set, processor is not affected by Primary Stale Data
         Propagator (PSDP).
Bit 17 - FB_CLEAR - When set, VERW instruction will overwrite CPU fill buffer
         values as part of MD_CLEAR operations. Processors that do not
         enumerate MDS_NO (meaning they are affected by MDS) but that do
         enumerate support for both L1D_FLUSH and MD_CLEAR implicitly enumerate
         FB_CLEAR as part of their MD_CLEAR support.
Bit 18 - FB_CLEAR_CTRL - Processor supports read and write to MSR
         IA32_MCU_OPT_CTRL[FB_CLEAR_DIS]. On such processors, the FB_CLEAR_DIS
         bit can be set to cause the VERW instruction to not perform the
         FB_CLEAR action. Not all processors that support FB_CLEAR will support
         FB_CLEAR_CTRL.

MSR IA32_MCU_OPT_CTRL
---------------------
Bit 3 - FB_CLEAR_DIS - When set, VERW instruction does not perform the FB_CLEAR
action. This may be useful to reduce the performance impact of FB_CLEAR in
cases where system software deems it warranted (for example, when performance
is more critical, or the untrusted software has no MMIO access). Note that
FB_CLEAR_DIS has no impact on enumeration (for example, it does not change
FB_CLEAR or MD_CLEAR enumeration) and it may not be supported on all processors
that enumerate FB_CLEAR.

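The bit layout above can be decoded mechanically. The sketch below is a hedged illustration: `decode_arch_caps` is an invented helper name, not a kernel or msr-tools interface; on a real system the raw value would come from reading MSR 0x10a (IA32_ARCH_CAPABILITIES), e.g. via `rdmsr 0x10a` from msr-tools with the msr module loaded.

```shell
# Hypothetical helper (not part of the kernel or msr-tools): decode the
# Processor-MMIO-relevant bits of a raw IA32_ARCH_CAPABILITIES value.
decode_arch_caps() {
    local caps=$(( $1 ))
    (( caps & (1 << 13) )) && echo "SBDR_SSDP_NO: not affected by SBDR or SSDP"
    (( caps & (1 << 14) )) && echo "FBSDP_NO: not affected by FBSDP"
    (( caps & (1 << 15) )) && echo "PSDP_NO: not affected by PSDP"
    (( caps & (1 << 17) )) && echo "FB_CLEAR: VERW overwrites fill buffers"
    (( caps & (1 << 18) )) && echo "FB_CLEAR_CTRL: FB_CLEAR_DIS is supported"
    return 0
}

decode_arch_caps 0x26000   # bits 13, 14 and 17 set
```

A value of 0x26000 reports SBDR_SSDP_NO, FBSDP_NO and FB_CLEAR; an all-zero value prints nothing, meaning none of these enumeration bits are set.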
Mitigation
==========
Like MDS, all variants of Processor MMIO Stale Data vulnerabilities have the
same mitigation strategy to force the CPU to clear the affected buffers before
an attacker can extract the secrets.

This is achieved by using the otherwise unused and obsolete VERW instruction in
combination with a microcode update. The microcode clears the affected CPU
buffers when the VERW instruction is executed.

Kernel reuses the MDS function to invoke the buffer clearing:

	mds_clear_cpu_buffers()

On MDS affected CPUs, the kernel already invokes CPU buffer clear on
kernel/userspace, hypervisor/guest and C-state (idle) transitions. No
additional mitigation is needed on such CPUs.

For CPUs not affected by MDS or TAA, mitigation is needed only for the attacker
with MMIO capability. Therefore, VERW is not required for kernel/userspace. For
virtualization case, VERW is only needed at VMENTER for a guest with MMIO
capability.

Mitigation points
-----------------
Return to user space
^^^^^^^^^^^^^^^^^^^^
Same mitigation as MDS when affected by MDS/TAA, otherwise no mitigation
needed.

C-State transition
^^^^^^^^^^^^^^^^^^
Control register writes by CPU during C-state transition can propagate data
from fill buffer to uncore buffers. Execute VERW before C-state transition to
clear CPU fill buffers.

Guest entry point
^^^^^^^^^^^^^^^^^
Same mitigation as MDS when processor is also affected by MDS/TAA, otherwise
execute VERW at VMENTER only for MMIO capable guests. On CPUs not affected by
MDS/TAA, guest without MMIO access cannot extract secrets using Processor MMIO
Stale Data vulnerabilities, so there is no need to execute VERW for such guests.

Mitigation control on the kernel command line
---------------------------------------------
The kernel command line allows to control the Processor MMIO Stale Data
mitigations at boot time with the option "mmio_stale_data=". The valid
arguments for this option are:

  ==========  =================================================================
  full        If the CPU is vulnerable, enable mitigation; CPU buffer clearing
              on exit to userspace and when entering a VM. Idle transitions are
              protected as well. It does not automatically disable SMT.
  full,nosmt  Same as full, with SMT disabled on vulnerable CPUs. This is the
              complete mitigation.
  off         Disables mitigation completely.
  ==========  =================================================================

If the CPU is affected and mmio_stale_data=off is not supplied on the kernel
command line, then the kernel selects the appropriate mitigation.

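As a sketch of how these arguments resolve, the hedged helper below (`cmdline_mmio_mode` is an invented name, not a kernel interface) extracts the effective mode from a kernel command line string such as the contents of /proc/cmdline, falling back to the documented default when the option is absent:

```shell
# Hypothetical helper: report the effective mmio_stale_data= mode for a
# given kernel command line string. When the option appears more than
# once, the last occurrence wins, mirroring usual cmdline semantics.
cmdline_mmio_mode() {
    case " $1 " in
        *" mmio_stale_data="*)
            echo "$1" | tr ' ' '\n' | sed -n 's/^mmio_stale_data=//p' | tail -n1
            ;;
        *)
            echo "full"   # kernel default when the option is absent
            ;;
    esac
}

cmdline_mmio_mode "ro quiet mmio_stale_data=full,nosmt"   # -> full,nosmt
cmdline_mmio_mode "ro quiet"                              # -> full
```

This mirrors the note above: omitting the option is equivalent to `mmio_stale_data=full`.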
Mitigation status information
-----------------------------
The Linux kernel provides a sysfs interface to enumerate the current
vulnerability status of the system: whether the system is vulnerable, and
which mitigations are active. The relevant sysfs file is:

	/sys/devices/system/cpu/vulnerabilities/mmio_stale_data

The possible values in this file are:

  .. list-table::

     * - 'Not affected'
       - The processor is not vulnerable
     * - 'Vulnerable'
       - The processor is vulnerable, but no mitigation enabled
     * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
       - The processor is vulnerable, but microcode is not updated. The
         mitigation is enabled on a best effort basis.
     * - 'Mitigation: Clear CPU buffers'
       - The processor is vulnerable and the CPU buffer clearing mitigation is
         enabled.

If the processor is vulnerable then the following information is appended to
the above information:

  ========================  ===========================================
  'SMT vulnerable'          SMT is enabled
  'SMT disabled'            SMT is disabled
  'SMT Host state unknown'  Kernel runs in a VM, Host SMT state unknown
  ========================  ===========================================
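A monitoring or provisioning script might bucket these strings before acting on them. The sketch below is a hedged illustration: `classify_mmio_status` is a hypothetical helper name, not a kernel interface; only the status strings themselves come from the table above.

```shell
# Hypothetical helper: map the mmio_stale_data sysfs string to a coarse
# verdict. Order matters: the "attempted, no microcode" string must be
# matched before the generic "Vulnerable" prefix.
classify_mmio_status() {
    case "$1" in
        "Not affected"|"Mitigation: Clear CPU buffers"*) echo ok ;;
        "Vulnerable: Clear CPU buffers attempted, no microcode"*) echo partial ;;
        "Vulnerable"*) echo bad ;;
        *) echo unknown ;;
    esac
}

# On a real system:
#   classify_mmio_status "$(cat /sys/devices/system/cpu/vulnerabilities/mmio_stale_data)"
classify_mmio_status "Mitigation: Clear CPU buffers; SMT vulnerable"   # -> ok
```

The trailing `*` in the patterns absorbs the appended SMT state described above.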

References
----------
.. [#f1] Affected Processors
         https://www.intel.com/content/www/us/en/developer/topic-technology/software-security-guidance/processors-affected-consolidated-product-cpu-model.html
@@ -2466,6 +2466,7 @@
			       kvm.nx_huge_pages=off [X86]
			       no_entry_flush [PPC]
			       no_uaccess_flush [PPC]
			       mmio_stale_data=off [X86]

			Exceptions:
			This does not have any effect on
@@ -2487,6 +2488,7 @@
			Equivalent to: l1tf=flush,nosmt [X86]
				       mds=full,nosmt [X86]
				       tsx_async_abort=full,nosmt [X86]
				       mmio_stale_data=full,nosmt [X86]

	mminit_loglevel=
			[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
@@ -2496,6 +2498,40 @@
			log everything. Information is printed at KERN_DEBUG
			so loglevel=8 may also need to be specified.

	mmio_stale_data=
			[X86,INTEL] Control mitigation for the Processor
			MMIO Stale Data vulnerabilities.

			Processor MMIO Stale Data is a class of
			vulnerabilities that may expose data after an MMIO
			operation. Exposed data could originate or end in
			the same CPU buffers as affected by MDS and TAA.
			Therefore, similar to MDS and TAA, the mitigation
			is to clear the affected CPU buffers.

			This parameter controls the mitigation. The
			options are:

			full       - Enable mitigation on vulnerable CPUs

			full,nosmt - Enable mitigation and disable SMT on
				     vulnerable CPUs.

			off        - Unconditionally disable mitigation

			On MDS or TAA affected machines,
			mmio_stale_data=off can be prevented by an active
			MDS or TAA mitigation as these vulnerabilities are
			mitigated with the same mechanism so in order to
			disable this mitigation, you need to specify
			mds=off and tsx_async_abort=off too.

			Not specifying this option is equivalent to
			mmio_stale_data=full.

			For details see:
			Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst

	module.sig_enforce
			[KNL] When CONFIG_MODULE_SIG is set, this means that
			modules without (valid) signatures will fail to load.
@@ -96,7 +96,7 @@ finally:
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
language = 'en'

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
@@ -9,8 +9,9 @@ Required properties:
- The second cell is reserved and is currently unused.
- gpio-controller : Marks the device node as a GPIO controller.
- interrupt-controller: Mark the device node as an interrupt controller
- #interrupt-cells : Should be 1. The interrupt type is fixed in the hardware.
- #interrupt-cells : Should be 2. The interrupt type is fixed in the hardware.
  - The first cell is the GPIO offset number within the GPIO controller.
  - The second cell is the interrupt trigger type and level flags.
- interrupts: Specify the interrupt.
- altr,interrupt-type: Specifies the interrupt trigger type the GPIO
  hardware is synthesized. This field is required if the Altera GPIO controller
@@ -38,6 +39,6 @@ gpio_altr: gpio@0xff200000 {
	altr,interrupt-type = <IRQ_TYPE_EDGE_RISING>;
	#gpio-cells = <2>;
	gpio-controller;
	#interrupt-cells = <1>;
	#interrupt-cells = <2>;
	interrupt-controller;
};

@@ -133,7 +133,7 @@ as you intend it to.

The maintainer will thank you if you write your patch description in a
form which can be easily pulled into Linux's source code management
system, ``git``, as a "commit log". See :ref:`explicit_in_reply_to`.
system, ``git``, as a "commit log". See :ref:`the_canonical_patch_format`.

Solve only one problem per patch. If your description starts to get
long, that's a sign that you probably need to split up your patch.
2  Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 4
PATCHLEVEL = 14
SUBLEVEL = 277
SUBLEVEL = 284
EXTRAVERSION =
NAME = Petit Gorille

@@ -48,18 +48,17 @@
	"GPIO18",
	"NC", /* GPIO19 */
	"NC", /* GPIO20 */
	"GPIO21",
	"CAM_GPIO0",
	"GPIO22",
	"GPIO23",
	"GPIO24",
	"GPIO25",
	"NC", /* GPIO26 */
	"CAM_GPIO0",
	/* Binary number representing build/revision */
	"CONFIG0",
	"CONFIG1",
	"CONFIG2",
	"CONFIG3",
	"GPIO27",
	"GPIO28",
	"GPIO29",
	"GPIO30",
	"GPIO31",
	"NC", /* GPIO32 */
	"NC", /* GPIO33 */
	"NC", /* GPIO34 */

@@ -77,16 +77,18 @@
	"GPIO27",
	"SDA0",
	"SCL0",
	"NC", /* GPIO30 */
	"NC", /* GPIO31 */
	"NC", /* GPIO32 */
	"NC", /* GPIO33 */
	"NC", /* GPIO34 */
	"NC", /* GPIO35 */
	"NC", /* GPIO36 */
	"NC", /* GPIO37 */
	"NC", /* GPIO38 */
	"NC", /* GPIO39 */
	/* Used by BT module */
	"CTS0",
	"RTS0",
	"TXD0",
	"RXD0",
	/* Used by Wifi */
	"SD1_CLK",
	"SD1_CMD",
	"SD1_DATA0",
	"SD1_DATA1",
	"SD1_DATA2",
	"SD1_DATA3",
	"CAM_GPIO1", /* GPIO40 */
	"WL_ON", /* GPIO41 */
	"NC", /* GPIO42 */

@@ -128,7 +128,7 @@
	samsung,i2c-max-bus-freq = <20000>;

	eeprom@50 {
		compatible = "samsung,s524ad0xd1";
		compatible = "samsung,s524ad0xd1", "atmel,24c128";
		reg = <0x50>;
	};

@@ -287,7 +287,7 @@
	samsung,i2c-max-bus-freq = <20000>;

	eeprom@51 {
		compatible = "samsung,s524ad0xd1";
		compatible = "samsung,s524ad0xd1", "atmel,24c128";
		reg = <0x51>;
	};

@@ -316,6 +316,8 @@
	codec: sgtl5000@0a {
		compatible = "fsl,sgtl5000";
		reg = <0x0a>;
		pinctrl-names = "default";
		pinctrl-0 = <&pinctrl_sgtl5000>;
		clocks = <&clks IMX6QDL_CLK_CKO>;
		VDDA-supply = <&reg_2p5v>;
		VDDIO-supply = <&reg_3p3v>;
@@ -543,8 +545,6 @@
		MX6QDL_PAD_DISP0_DAT21__AUD4_TXD	0x130b0
		MX6QDL_PAD_DISP0_DAT22__AUD4_TXFS	0x130b0
		MX6QDL_PAD_DISP0_DAT23__AUD4_RXD	0x130b0
		/* SGTL5000 sys_mclk */
		MX6QDL_PAD_GPIO_5__CCM_CLKO1		0x130b0
	>;
};

@@ -810,6 +810,12 @@
	>;
};

	pinctrl_sgtl5000: sgtl5000grp {
		fsl,pins = <
			MX6QDL_PAD_GPIO_5__CCM_CLKO1	0x130b0
		>;
	};

	pinctrl_spdif: spdifgrp {
		fsl,pins = <
			MX6QDL_PAD_GPIO_16__SPDIF_IN	0x1b0b0

@@ -29,6 +29,8 @@
	aliases {
		display0 = &lcd;
		display1 = &tv0;
		/delete-property/ mmc2;
		/delete-property/ mmc3;
	};

	gpio-keys {

@@ -286,7 +286,7 @@
		clocks = <&armclk>;
	};

	gic: gic@1000 {
	gic: interrupt-controller@1000 {
		compatible = "arm,arm11mp-gic";
		interrupt-controller;
		#interrupt-cells = <3>;

@@ -1071,7 +1071,7 @@ vector_bhb_loop8_\name:

	@ bhb workaround
	mov	r0, #8
3:	b	. + 4
3:	W(b)	. + 4
	subs	r0, r0, #1
	bne	3b
	dsb

@@ -51,17 +51,17 @@ int notrace unwind_frame(struct stackframe *frame)
		return -EINVAL;

	frame->sp = frame->fp;
	frame->fp = *(unsigned long *)(fp);
	frame->pc = *(unsigned long *)(fp + 4);
	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 4));
#else
	/* check current frame pointer is within bounds */
	if (fp < low + 12 || fp > high - 4)
		return -EINVAL;

	/* restore the registers from the stack frame */
	frame->fp = *(unsigned long *)(fp - 12);
	frame->sp = *(unsigned long *)(fp - 8);
	frame->pc = *(unsigned long *)(fp - 4);
	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 12));
	frame->sp = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 8));
	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 4));
#endif

	if (ALIGN(frame->fp, THREAD_SIZE) != ALIGN(fp, THREAD_SIZE))

@@ -70,14 +70,17 @@ static void __init hi3xxx_smp_prepare_cpus(unsigned int max_cpus)
		}
		ctrl_base = of_iomap(np, 0);
		if (!ctrl_base) {
			of_node_put(np);
			pr_err("failed to map address\n");
			return;
		}
		if (of_property_read_u32(np, "smp-offset", &offset) < 0) {
			of_node_put(np);
			pr_err("failed to find smp-offset property\n");
			return;
		}
		ctrl_base += offset;
		of_node_put(np);
	}
}

@@ -163,6 +166,7 @@ static int hip01_boot_secondary(unsigned int cpu, struct task_struct *idle)
	if (WARN_ON(!node))
		return -1;
	ctrl_base = of_iomap(node, 0);
	of_node_put(node);

	/* set the secondary core boot from DDR */
	remap_reg_value = readl_relaxed(ctrl_base + REG_SC_CTRL);

@@ -44,7 +44,7 @@ static DEFINE_SPINLOCK(clockfw_lock);
unsigned long omap1_uart_recalc(struct clk *clk)
{
	unsigned int val = __raw_readl(clk->enable_reg);
	return val & clk->enable_bit ? 48000000 : 12000000;
	return val & 1 << clk->enable_bit ? 48000000 : 12000000;
}

unsigned long omap1_sossi_recalc(struct clk *clk)

@@ -342,10 +342,12 @@ void __init omap_gic_of_init(void)

	np = of_find_compatible_node(NULL, NULL, "arm,cortex-a9-gic");
	gic_dist_base_addr = of_iomap(np, 0);
	of_node_put(np);
	WARN_ON(!gic_dist_base_addr);

	np = of_find_compatible_node(NULL, NULL, "arm,cortex-a9-twd-timer");
	twd_base = of_iomap(np, 0);
	of_node_put(np);
	WARN_ON(!twd_base);

skip_errata_init:

@@ -146,6 +146,7 @@ static int __init dcscb_init(void)
	if (!node)
		return -ENODEV;
	dcscb_base = of_iomap(node, 0);
	of_node_put(node);
	if (!dcscb_base)
		return -EADDRNOTAVAIL;
	cfg = readl_relaxed(dcscb_base + DCS_CFG_R);

@@ -297,6 +297,7 @@ void cpu_v7_ca15_ibe(void)
{
	if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)))
		cpu_v7_spectre_v2_init();
	cpu_v7_spectre_bhb_init();
}

void cpu_v7_bugs_init(void)

@@ -181,7 +181,7 @@
	clocks {
		sleep_clk: sleep_clk {
			compatible = "fixed-clock";
			clock-frequency = <32000>;
			clock-frequency = <32768>;
			#clock-cells = <0>;
		};

@@ -308,7 +308,7 @@ comment "Processor Specific Options"

config M68KFPU_EMU
	bool "Math emulation support"
	depends on MMU
	depends on M68KCLASSIC && FPU
	help
	  At some point in the future, this will cause floating-point math
	  instructions to be emulated by the kernel on machines that lack a

@@ -309,6 +309,7 @@ comment "Machine Options"

config UBOOT
	bool "Support for U-Boot command line parameters"
	depends on COLDFIRE
	help
	  If you say Y here kernel will try to collect command
	  line parameters from the initial u-boot stack.

@@ -42,7 +42,8 @@ extern void paging_init(void);
 * ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas etc..
 */
#define ZERO_PAGE(vaddr)	(virt_to_page(0))
extern void *empty_zero_page;
#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))

/*
 * No page table caches to initialise.

@@ -174,7 +174,7 @@ void __init plat_mem_setup(void)
		dtb = phys_to_virt(fw_arg2);
	else if (fw_passed_dtb) /* UHI interface */
		dtb = (void *)fw_passed_dtb;
	else if (__dtb_start != __dtb_end)
	else if (&__dtb_start != &__dtb_end)
		dtb = (void *)__dtb_start;
	else
		panic("no dtb found");

@@ -28,7 +28,6 @@
#define cpu_has_6k_cache	0
#define cpu_has_8k_cache	0
#define cpu_has_tx39_cache	0
#define cpu_has_fpu		1
#define cpu_has_nofpuex		0
#define cpu_has_32fpr		1
#define cpu_has_counter		1

@@ -40,9 +40,9 @@
typedef unsigned int cycles_t;

/*
 * On R4000/R4400 before version 5.0 an erratum exists such that if the
 * cycle counter is read in the exact moment that it is matching the
 * compare register, no interrupt will be generated.
 * On R4000/R4400 an erratum exists such that if the cycle counter is
 * read in the exact moment that it is matching the compare register,
 * no interrupt will be generated.
 *
 * There is a suggested workaround and also the erratum can't strike if
 * the compare interrupt isn't being used as the clock source device.
@@ -63,7 +63,7 @@ static inline int can_use_mips_counter(unsigned int prid)
	if (!__builtin_constant_p(cpu_has_counter))
		asm volatile("" : "=m" (cpu_data[0].options));
	if (likely(cpu_has_counter &&
		   prid >= (PRID_IMP_R4000 | PRID_REV_ENCODE_44(5, 0))))
		   prid > (PRID_IMP_R4000 | PRID_REV_ENCODE_44(15, 15))))
		return 1;
	else
		return 0;

@@ -31,6 +31,7 @@ phys_addr_t __weak mips_cpc_default_phys_base(void)
	cpc_node = of_find_compatible_node(of_root, NULL, "mti,mips-cpc");
	if (cpc_node) {
		err = of_address_to_resource(cpc_node, 0, &res);
		of_node_put(cpc_node);
		if (!err)
			return res.start;
	}

@@ -155,15 +155,10 @@ static __init int cpu_has_mfc0_count_bug(void)
	case CPU_R4400MC:
		/*
		 * The published errata for the R4400 up to 3.0 say the CPU
		 * has the mfc0 from count bug.
		 * has the mfc0 from count bug. This seems the last version
		 * produced.
		 */
		if ((current_cpu_data.processor_id & 0xff) <= 0x30)
			return 1;

		/*
		 * we assume newer revisions are ok
		 */
		return 0;
		return 1;
	}

	return 0;

@@ -169,6 +169,8 @@ static inline void clkdev_add_sys(const char *dev, unsigned int module,
{
	struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);

	if (!clk)
		return;
	clk->cl.dev_id = dev;
	clk->cl.con_id = NULL;
	clk->cl.clk = clk;

@@ -82,7 +82,7 @@ void __init plat_mem_setup(void)

	if (fw_passed_dtb) /* UHI interface */
		dtb = (void *)fw_passed_dtb;
	else if (__dtb_start != __dtb_end)
	else if (&__dtb_start != &__dtb_end)
		dtb = (void *)__dtb_start;
	else
		panic("no dtb found");

@@ -124,6 +124,8 @@ static inline void clkdev_add_gptu(struct device *dev, const char *con,
{
	struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);

	if (!clk)
		return;
	clk->cl.dev_id = dev_name(dev);
	clk->cl.con_id = con;
	clk->cl.clk = clk;

@@ -313,6 +313,8 @@ static void clkdev_add_pmu(const char *dev, const char *con, bool deactivate,
{
	struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);

	if (!clk)
		return;
	clk->cl.dev_id = dev;
	clk->cl.con_id = con;
	clk->cl.clk = clk;
@@ -336,6 +338,8 @@ static void clkdev_add_cgu(const char *dev, const char *con,
{
	struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);

	if (!clk)
		return;
	clk->cl.dev_id = dev;
	clk->cl.con_id = con;
	clk->cl.clk = clk;
@@ -354,24 +358,28 @@ static void clkdev_add_pci(void)
	struct clk *clk_ext = kzalloc(sizeof(struct clk), GFP_KERNEL);

	/* main pci clock */
	clk->cl.dev_id = "17000000.pci";
	clk->cl.con_id = NULL;
	clk->cl.clk = clk;
	clk->rate = CLOCK_33M;
	clk->rates = valid_pci_rates;
	clk->enable = pci_enable;
	clk->disable = pmu_disable;
	clk->module = 0;
	clk->bits = PMU_PCI;
	clkdev_add(&clk->cl);
	if (clk) {
		clk->cl.dev_id = "17000000.pci";
		clk->cl.con_id = NULL;
		clk->cl.clk = clk;
		clk->rate = CLOCK_33M;
		clk->rates = valid_pci_rates;
		clk->enable = pci_enable;
		clk->disable = pmu_disable;
		clk->module = 0;
		clk->bits = PMU_PCI;
		clkdev_add(&clk->cl);
	}

	/* use internal/external bus clock */
	clk_ext->cl.dev_id = "17000000.pci";
	clk_ext->cl.con_id = "external";
	clk_ext->cl.clk = clk_ext;
	clk_ext->enable = pci_ext_enable;
	clk_ext->disable = pci_ext_disable;
	clkdev_add(&clk_ext->cl);
	if (clk_ext) {
		clk_ext->cl.dev_id = "17000000.pci";
		clk_ext->cl.con_id = "external";
		clk_ext->cl.clk = clk_ext;
		clk_ext->enable = pci_ext_enable;
		clk_ext->disable = pci_ext_disable;
		clkdev_add(&clk_ext->cl);
	}
}

/* xway socs can generate clocks on gpio pins */
@@ -391,9 +399,15 @@ static void clkdev_add_clkout(void)
		char *name;

		name = kzalloc(sizeof("clkout0"), GFP_KERNEL);
		if (!name)
			continue;
		sprintf(name, "clkout%d", i);

		clk = kzalloc(sizeof(struct clk), GFP_KERNEL);
		if (!clk) {
			kfree(name);
			continue;
		}
		clk->cl.dev_id = "1f103000.cgu";
		clk->cl.con_id = name;
		clk->cl.clk = clk;

@@ -36,7 +36,7 @@ static ulong get_fdtaddr(void)
|
||||
if (fw_passed_dtb && !fw_arg2 && !fw_arg3)
|
||||
return (ulong)fw_passed_dtb;
|
||||
|
||||
if (__dtb_start < __dtb_end)
|
||||
if (&__dtb_start < &__dtb_end)
|
||||
ftaddr = (ulong)__dtb_start;
|
||||
|
||||
return ftaddr;
|
||||
|
||||
@@ -79,7 +79,7 @@ void __init plat_mem_setup(void)
|
||||
*/
|
||||
if (fw_passed_dtb)
|
||||
dtb = (void *)fw_passed_dtb;
|
||||
else if (__dtb_start != __dtb_end)
|
||||
else if (&__dtb_start != &__dtb_end)
|
||||
dtb = (void *)__dtb_start;
|
||||
|
||||
__dt_setup_arch(dtb);
|
||||
|
||||
@@ -27,6 +27,7 @@ static inline cycles_t get_cycles(void)
|
||||
{
|
||||
return mfspr(SPR_TTCR);
|
||||
}
|
||||
#define get_cycles get_cycles
|
||||
|
||||
/* This isn't really used any more */
|
||||
#define CLOCK_TICK_RATE 1000
|
||||
|
||||
@@ -459,6 +459,15 @@ _start:
|
||||
l.ori r3,r0,0x1
|
||||
l.mtspr r0,r3,SPR_SR
|
||||
|
||||
/*
|
||||
* Start the TTCR as early as possible, so that the RNG can make use of
|
||||
* measurements of boot time from the earliest opportunity. Especially
|
||||
* important is that the TTCR does not return zero by the time we reach
|
||||
* rand_initialize().
|
||||
*/
|
||||
l.movhi r3,hi(SPR_TTMR_CR)
|
||||
l.mtspr r0,r3,SPR_TTMR
|
||||
|
||||
CLEAR_GPR(r1)
|
||||
CLEAR_GPR(r2)
|
||||
CLEAR_GPR(r3)
|
||||
|
||||
@@ -408,8 +408,7 @@ show_cpuinfo (struct seq_file *m, void *v)
|
||||
}
|
||||
seq_printf(m, " (0x%02lx)\n", boot_cpu_data.pdc.capabilities);
|
||||
|
||||
seq_printf(m, "model\t\t: %s\n"
|
||||
"model name\t: %s\n",
|
||||
seq_printf(m, "model\t\t: %s - %s\n",
|
||||
boot_cpu_data.pdc.sys_model_name,
|
||||
cpuinfo->dev ?
|
||||
cpuinfo->dev->name : "Unknown");
|
||||
|
||||
@@ -41,7 +41,7 @@ static int __init powersave_off(char *arg)
|
||||
{
|
||||
ppc_md.power_save = NULL;
|
||||
cpuidle_disable = IDLE_POWERSAVE_OFF;
|
||||
return 0;
|
||||
return 1;
|
||||
}
|
||||
__setup("powersave=off", powersave_off);
|
||||
|
||||
|
||||
@@ -2920,8 +2920,13 @@ long arch_ptrace(struct task_struct *child, long request,
 
 		flush_fp_to_thread(child);
 		if (fpidx < (PT_FPSCR - PT_FPR0))
-			memcpy(&tmp, &child->thread.TS_FPR(fpidx),
-			       sizeof(long));
+			if (IS_ENABLED(CONFIG_PPC32)) {
+				// On 32-bit the index we are passed refers to 32-bit words
+				tmp = ((u32 *)child->thread.fp_state.fpr)[fpidx];
+			} else {
+				memcpy(&tmp, &child->thread.TS_FPR(fpidx),
+				       sizeof(long));
+			}
 		else
 			tmp = child->thread.fp_state.fpscr;
 	}

@@ -2953,8 +2958,13 @@ long arch_ptrace(struct task_struct *child, long request,
 
 		flush_fp_to_thread(child);
 		if (fpidx < (PT_FPSCR - PT_FPR0))
-			memcpy(&child->thread.TS_FPR(fpidx), &data,
-			       sizeof(long));
+			if (IS_ENABLED(CONFIG_PPC32)) {
+				// On 32-bit the index we are passed refers to 32-bit words
+				((u32 *)child->thread.fp_state.fpr)[fpidx] = data;
+			} else {
+				memcpy(&child->thread.TS_FPR(fpidx), &data,
+				       sizeof(long));
+			}
 		else
 			child->thread.fp_state.fpscr = data;
 		ret = 0;
@@ -324,7 +324,8 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
 		if (event_is_threshold(event) && is_thresh_cmp_valid(event)) {
 			mask  |= CNST_THRESH_MASK;
 			value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT);
-		}
+		} else if (event_is_threshold(event))
+			return -1;
 	} else {
 		/*
 		 * Special case for PM_MRK_FAB_RSP_MATCH and PM_MRK_FAB_RSP_MATCH_CYC,

@@ -341,6 +341,6 @@ late_initcall(cpm_init);
 static int __init cpm_powersave_off(char *arg)
 {
 	cpm.powersave_off = 1;
-	return 0;
+	return 1;
 }
 __setup("powersave=off", cpm_powersave_off);

@@ -291,6 +291,7 @@ cpm_setbrg(uint brg, uint rate)
 		out_be32(bp, (((BRG_UART_CLK_DIV16 / rate) - 1) << 1) |
 			      CPM_BRG_EN | CPM_BRG_DIV16);
 }
+EXPORT_SYMBOL(cpm_setbrg);
 
 struct cpm_ioport16 {
 	__be16 dir, par, odr_sor, dat, intr;

@@ -509,8 +509,10 @@ int fsl_rio_setup(struct platform_device *dev)
 	if (rc) {
 		dev_err(&dev->dev, "Can't get %pOF property 'reg'\n",
 				rmu_node);
+		of_node_put(rmu_node);
 		goto err_rmu;
 	}
+	of_node_put(rmu_node);
 	rmu_regs_win = ioremap(rmu_regs.start, resource_size(&rmu_regs));
 	if (!rmu_regs_win) {
 		dev_err(&dev->dev, "Unable to map rmu register window\n");

@@ -199,6 +199,7 @@ int icp_opal_init(void)
 
 	printk("XICS: Using OPAL ICP fallbacks\n");
 
+	of_node_put(np);
 	return 0;
 }

@@ -50,10 +50,17 @@ static inline bool test_preempt_need_resched(void)
 
 static inline void __preempt_count_add(int val)
 {
-	if (__builtin_constant_p(val) && (val >= -128) && (val <= 127))
-		__atomic_add_const(val, &S390_lowcore.preempt_count);
-	else
-		__atomic_add(val, &S390_lowcore.preempt_count);
+	/*
+	 * With some obscure config options and CONFIG_PROFILE_ALL_BRANCHES
+	 * enabled, gcc 12 fails to handle __builtin_constant_p().
+	 */
+	if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES)) {
+		if (__builtin_constant_p(val) && (val >= -128) && (val <= 127)) {
+			__atomic_add_const(val, &S390_lowcore.preempt_count);
+			return;
+		}
+	}
+	__atomic_add(val, &S390_lowcore.preempt_count);
 }
 
 static inline void __preempt_count_sub(int val)
@@ -220,7 +220,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
 			unsigned long *stack_out)
 {
 	struct winch_data data;
-	int fds[2], n, err;
+	int fds[2], n, err, pid;
 	char c;
 
 	err = os_pipe(fds, 1, 1);

@@ -238,8 +238,9 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
 	 * problem with /dev/net/tun, which if held open by this
 	 * thread, prevents the TUN/TAP device from being reused.
 	 */
-	err = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out);
-	if (err < 0) {
+	pid = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out);
+	if (pid < 0) {
+		err = pid;
 		printk(UM_KERN_ERR "fork of winch_thread failed - errno = %d\n",
 		       -err);
 		goto out_close;

@@ -263,7 +264,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
 		goto out_close;
 	}
 
-	return err;
+	return pid;
 
 out_close:
 	close(fds[1]);

@@ -328,7 +328,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 static __init int vdso_setup(char *s)
 {
 	vdso64_enabled = simple_strtoul(s, NULL, 0);
-	return 0;
+	return 1;
 }
 __setup("vdso=", vdso_setup);
 #endif

@@ -16,7 +16,19 @@
 
 /* Asm macros */
 
-#define ACPI_FLUSH_CPU_CACHE()	wbinvd()
+/*
+ * ACPI_FLUSH_CPU_CACHE() flushes caches on entering sleep states.
+ * It is required to prevent data loss.
+ *
+ * While running inside virtual machine, the kernel can bypass cache flushing.
+ * Changing sleep state in a virtual machine doesn't affect the host system
+ * sleep state and cannot lead to data loss.
+ */
+#define ACPI_FLUSH_CPU_CACHE()					\
+do {								\
+	if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR))	\
+		wbinvd();					\
+} while (0)
 
 int __acpi_acquire_global_lock(unsigned int *lock);
 int __acpi_release_global_lock(unsigned int *lock);
@@ -393,5 +393,6 @@
 #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
 #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
 #define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
+#define X86_BUG_MMIO_STALE_DATA		X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
 
 #endif /* _ASM_X86_CPUFEATURES_H */

@@ -10,6 +10,10 @@
 *
 * Things ending in "2" are usually because we have no better
 * name for them.  There's no processor called "SILVERMONT2".
+ *
+ * While adding a new CPUID for a new microarchitecture, add a new
+ * group to keep logically sorted out in chronological order. Within
+ * that group keep the CPUID for the variants sorted by model number.
 */
 
 #define INTEL_FAM6_CORE_YONAH		0x0E

@@ -49,6 +53,24 @@
 #define INTEL_FAM6_KABYLAKE_MOBILE	0x8E
 #define INTEL_FAM6_KABYLAKE_DESKTOP	0x9E
 
+#define INTEL_FAM6_CANNONLAKE_MOBILE	0x66
+
+#define INTEL_FAM6_ICELAKE_X		0x6A
+#define INTEL_FAM6_ICELAKE_XEON_D	0x6C
+#define INTEL_FAM6_ICELAKE_DESKTOP	0x7D
+#define INTEL_FAM6_ICELAKE_MOBILE	0x7E
+
+#define INTEL_FAM6_COMETLAKE		0xA5
+#define INTEL_FAM6_COMETLAKE_L		0xA6
+
+#define INTEL_FAM6_ROCKETLAKE		0xA7
+
+/* Hybrid Core/Atom Processors */
+
+#define INTEL_FAM6_LAKEFIELD		0x8A
+#define INTEL_FAM6_ALDERLAKE		0x97
+#define INTEL_FAM6_ALDERLAKE_L		0x9A
+
 /* "Small Core" Processors (Atom) */
 
 #define INTEL_FAM6_ATOM_BONNELL		0x1C /* Diamondville, Pineview */

@@ -68,7 +90,10 @@
 #define INTEL_FAM6_ATOM_GOLDMONT	0x5C /* Apollo Lake */
 #define INTEL_FAM6_ATOM_GOLDMONT_X	0x5F /* Denverton */
 #define INTEL_FAM6_ATOM_GOLDMONT_PLUS	0x7A /* Gemini Lake */
+
 #define INTEL_FAM6_ATOM_TREMONT_X	0x86 /* Jacobsville */
+#define INTEL_FAM6_ATOM_TREMONT		0x96 /* Elkhart Lake */
+#define INTEL_FAM6_ATOM_TREMONT_L	0x9C /* Jasper Lake */
 
 /* Xeon Phi */

@@ -147,11 +147,13 @@ extern void load_ucode_ap(void);
 void reload_early_microcode(void);
 extern bool get_builtin_firmware(struct cpio_data *cd, const char *name);
 extern bool initrd_gone;
+void microcode_bsp_resume(void);
 #else
 static inline int __init microcode_init(void)			{ return 0; };
 static inline void __init load_ucode_bsp(void)			{ }
 static inline void load_ucode_ap(void)				{ }
 static inline void reload_early_microcode(void)			{ }
+static inline void microcode_bsp_resume(void)			{ }
 static inline bool
 get_builtin_firmware(struct cpio_data *cd, const char *name)	{ return false; }
 #endif

@@ -96,6 +96,30 @@
						  * Not susceptible to
						  * TSX Async Abort (TAA) vulnerabilities.
						  */
+#define ARCH_CAP_SBDR_SSDP_NO		BIT(13)	/*
+						 * Not susceptible to SBDR and SSDP
+						 * variants of Processor MMIO stale data
+						 * vulnerabilities.
+						 */
+#define ARCH_CAP_FBSDP_NO		BIT(14)	/*
+						 * Not susceptible to FBSDP variant of
+						 * Processor MMIO stale data
+						 * vulnerabilities.
+						 */
+#define ARCH_CAP_PSDP_NO		BIT(15)	/*
+						 * Not susceptible to PSDP variant of
+						 * Processor MMIO stale data
+						 * vulnerabilities.
+						 */
+#define ARCH_CAP_FB_CLEAR		BIT(17)	/*
+						 * VERW clears CPU fill buffer
+						 * even on MDS_NO CPUs.
+						 */
+#define ARCH_CAP_FB_CLEAR_CTRL		BIT(18)	/*
+						 * MSR_IA32_MCU_OPT_CTRL[FB_CLEAR_DIS]
+						 * bit available to control VERW
+						 * behavior.
+						 */
 
 #define MSR_IA32_FLUSH_CMD		0x0000010b
 #define L1D_FLUSH			BIT(0)	/*

@@ -113,6 +137,7 @@
 /* SRBDS support */
 #define MSR_IA32_MCU_OPT_CTRL		0x00000123
 #define RNGDS_MITG_DIS			BIT(0)
+#define FB_CLEAR_DIS			BIT(3)	/* CPU Fill buffer clear disable */
 
 #define MSR_IA32_SYSENTER_CS		0x00000174
 #define MSR_IA32_SYSENTER_ESP		0x00000175
@@ -323,6 +323,8 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 DECLARE_STATIC_KEY_FALSE(mds_user_clear);
 DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 
+DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear);
+
 #include <asm/segment.h>
 
 /**

@@ -21,7 +21,6 @@ struct saved_context {
 #endif
 	unsigned long cr0, cr2, cr3, cr4;
 	u64 misc_enable;
-	bool misc_enable_saved;
 	struct saved_msrs saved_msrs;
 	struct desc_ptr gdt_desc;
 	struct desc_ptr idt;
@@ -30,6 +29,7 @@ struct saved_context {
 	unsigned long tr;
 	unsigned long safety;
 	unsigned long return_address;
+	bool misc_enable_saved;
 } __attribute__((packed));
 
 #endif /* _ASM_X86_SUSPEND_32_H */

@@ -14,9 +14,13 @@
 * Image of the saved processor state, used by the low level ACPI suspend to
 * RAM code and by the low level hibernation code.
 *
- * If you modify it, fix arch/x86/kernel/acpi/wakeup_64.S and make sure that
- * __save/__restore_processor_state(), defined in arch/x86/kernel/suspend_64.c,
- * still work as required.
+ * If you modify it, check how it is used in arch/x86/kernel/acpi/wakeup_64.S
+ * and make sure that __save/__restore_processor_state(), defined in
+ * arch/x86/power/cpu.c, still work as required.
+ *
+ * Because the structure is packed, make sure to avoid unaligned members. For
+ * optimisation purposes but also because tools like kmemleak only search for
+ * pointers that are aligned.
 */
 struct saved_context {
 	struct pt_regs regs;
@@ -36,7 +40,6 @@ struct saved_context {
 
 	unsigned long cr0, cr2, cr3, cr4, cr8;
 	u64 misc_enable;
-	bool misc_enable_saved;
 	struct saved_msrs saved_msrs;
 	unsigned long efer;
 	u16 gdt_pad;			/* Unused */
@@ -48,6 +51,7 @@ struct saved_context {
 	unsigned long tr;
 	unsigned long safety;
 	unsigned long return_address;
+	bool misc_enable_saved;
 } __attribute__((packed));
 
 #define loaddebug(thread,register) \

@@ -167,7 +167,7 @@ static __init int setup_apicpmtimer(char *s)
 {
 	apic_calibrate_pmtmr = 1;
 	notsc_setup(NULL);
-	return 0;
+	return 1;
 }
 __setup("apicpmtimer", setup_apicpmtimer);
 #endif

@@ -40,8 +40,10 @@ static void __init spectre_v2_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
-static void __init mds_print_mitigation(void);
+static void __init md_clear_update_mitigation(void);
+static void __init md_clear_select_mitigation(void);
 static void __init taa_select_mitigation(void);
+static void __init mmio_select_mitigation(void);
 static void __init srbds_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR that always has to be preserved. */

@@ -76,6 +78,10 @@ EXPORT_SYMBOL_GPL(mds_user_clear);
 DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
 EXPORT_SYMBOL_GPL(mds_idle_clear);
 
+/* Controls CPU Fill buffer clear before KVM guest MMIO accesses */
+DEFINE_STATIC_KEY_FALSE(mmio_stale_data_clear);
+EXPORT_SYMBOL_GPL(mmio_stale_data_clear);
+
 void __init check_bugs(void)
 {
 	identify_boot_cpu();

@@ -108,16 +114,9 @@ void __init check_bugs(void)
 	spectre_v2_select_mitigation();
 	ssb_select_mitigation();
 	l1tf_select_mitigation();
-	mds_select_mitigation();
-	taa_select_mitigation();
+	md_clear_select_mitigation();
 	srbds_select_mitigation();
 
-	/*
-	 * As MDS and TAA mitigations are inter-related, print MDS
-	 * mitigation until after TAA mitigation selection is done.
-	 */
-	mds_print_mitigation();
-
 	arch_smt_update();
 
 #ifdef CONFIG_X86_32

@@ -257,14 +256,6 @@ static void __init mds_select_mitigation(void)
 	}
 }
 
-static void __init mds_print_mitigation(void)
-{
-	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
-		return;
-
-	pr_info("%s\n", mds_strings[mds_mitigation]);
-}
-
 static int __init mds_cmdline(char *str)
 {
 	if (!boot_cpu_has_bug(X86_BUG_MDS))

@@ -312,7 +303,7 @@ static void __init taa_select_mitigation(void)
 	/* TSX previously disabled by tsx=off */
 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
 		taa_mitigation = TAA_MITIGATION_TSX_DISABLED;
-		goto out;
+		return;
 	}
 
 	if (cpu_mitigations_off()) {

@@ -326,7 +317,7 @@ static void __init taa_select_mitigation(void)
 	 */
 	if (taa_mitigation == TAA_MITIGATION_OFF &&
 	    mds_mitigation == MDS_MITIGATION_OFF)
-		goto out;
+		return;
 
 	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
 		taa_mitigation = TAA_MITIGATION_VERW;

@@ -358,18 +349,6 @@ static void __init taa_select_mitigation(void)
 
 	if (taa_nosmt || cpu_mitigations_auto_nosmt())
 		cpu_smt_disable(false);
-
-	/*
-	 * Update MDS mitigation, if necessary, as the mds_user_clear is
-	 * now enabled for TAA mitigation.
-	 */
-	if (mds_mitigation == MDS_MITIGATION_OFF &&
-	    boot_cpu_has_bug(X86_BUG_MDS)) {
-		mds_mitigation = MDS_MITIGATION_FULL;
-		mds_select_mitigation();
-	}
-out:
-	pr_info("%s\n", taa_strings[taa_mitigation]);
 }
 
 static int __init tsx_async_abort_parse_cmdline(char *str)

@@ -393,6 +372,151 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
 }
 early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"MMIO Stale Data: " fmt
+
+enum mmio_mitigations {
+	MMIO_MITIGATION_OFF,
+	MMIO_MITIGATION_UCODE_NEEDED,
+	MMIO_MITIGATION_VERW,
+};
+
+/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
+static enum mmio_mitigations mmio_mitigation __ro_after_init = MMIO_MITIGATION_VERW;
+static bool mmio_nosmt __ro_after_init = false;
+
+static const char * const mmio_strings[] = {
+	[MMIO_MITIGATION_OFF]		= "Vulnerable",
+	[MMIO_MITIGATION_UCODE_NEEDED]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
+	[MMIO_MITIGATION_VERW]		= "Mitigation: Clear CPU buffers",
+};
+
+static void __init mmio_select_mitigation(void)
+{
+	u64 ia32_cap;
+
+	if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) ||
+	    cpu_mitigations_off()) {
+		mmio_mitigation = MMIO_MITIGATION_OFF;
+		return;
+	}
+
+	if (mmio_mitigation == MMIO_MITIGATION_OFF)
+		return;
+
+	ia32_cap = x86_read_arch_cap_msr();
+
+	/*
+	 * Enable CPU buffer clear mitigation for host and VMM, if also affected
+	 * by MDS or TAA. Otherwise, enable mitigation for VMM only.
+	 */
+	if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
+					      boot_cpu_has(X86_FEATURE_RTM)))
+		static_branch_enable(&mds_user_clear);
+	else
+		static_branch_enable(&mmio_stale_data_clear);
+
+	/*
+	 * If Processor-MMIO-Stale-Data bug is present and Fill Buffer data can
+	 * be propagated to uncore buffers, clearing the Fill buffers on idle
+	 * is required irrespective of SMT state.
+	 */
+	if (!(ia32_cap & ARCH_CAP_FBSDP_NO))
+		static_branch_enable(&mds_idle_clear);
+
+	/*
+	 * Check if the system has the right microcode.
+	 *
+	 * CPU Fill buffer clear mitigation is enumerated by either an explicit
+	 * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
+	 * affected systems.
+	 */
+	if ((ia32_cap & ARCH_CAP_FB_CLEAR) ||
+	    (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
+	     boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
+	     !(ia32_cap & ARCH_CAP_MDS_NO)))
+		mmio_mitigation = MMIO_MITIGATION_VERW;
+	else
+		mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
+
+	if (mmio_nosmt || cpu_mitigations_auto_nosmt())
+		cpu_smt_disable(false);
+}
+
+static int __init mmio_stale_data_parse_cmdline(char *str)
+{
+	if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
+		return 0;
+
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off")) {
+		mmio_mitigation = MMIO_MITIGATION_OFF;
+	} else if (!strcmp(str, "full")) {
+		mmio_mitigation = MMIO_MITIGATION_VERW;
+	} else if (!strcmp(str, "full,nosmt")) {
+		mmio_mitigation = MMIO_MITIGATION_VERW;
+		mmio_nosmt = true;
+	}
+
+	return 0;
+}
+early_param("mmio_stale_data", mmio_stale_data_parse_cmdline);
+
+#undef pr_fmt
+#define pr_fmt(fmt)	"" fmt
+
+static void __init md_clear_update_mitigation(void)
+{
+	if (cpu_mitigations_off())
+		return;
+
+	if (!static_key_enabled(&mds_user_clear))
+		goto out;
+
+	/*
+	 * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data
+	 * mitigation, if necessary.
+	 */
+	if (mds_mitigation == MDS_MITIGATION_OFF &&
+	    boot_cpu_has_bug(X86_BUG_MDS)) {
+		mds_mitigation = MDS_MITIGATION_FULL;
+		mds_select_mitigation();
+	}
+	if (taa_mitigation == TAA_MITIGATION_OFF &&
+	    boot_cpu_has_bug(X86_BUG_TAA)) {
+		taa_mitigation = TAA_MITIGATION_VERW;
+		taa_select_mitigation();
+	}
+	if (mmio_mitigation == MMIO_MITIGATION_OFF &&
+	    boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
+		mmio_mitigation = MMIO_MITIGATION_VERW;
+		mmio_select_mitigation();
+	}
+out:
+	if (boot_cpu_has_bug(X86_BUG_MDS))
+		pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
+	if (boot_cpu_has_bug(X86_BUG_TAA))
+		pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
+	if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
+		pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
+}
+
+static void __init md_clear_select_mitigation(void)
+{
+	mds_select_mitigation();
+	taa_select_mitigation();
+	mmio_select_mitigation();
+
+	/*
+	 * As MDS, TAA and MMIO Stale Data mitigations are inter-related, update
+	 * and print their mitigation after MDS, TAA and MMIO Stale Data
+	 * mitigation selection is done.
+	 */
+	md_clear_update_mitigation();
+}
+
 #undef pr_fmt
 #define pr_fmt(fmt)	"SRBDS: " fmt
 

@@ -454,11 +578,13 @@ static void __init srbds_select_mitigation(void)
 		return;
 
 	/*
-	 * Check to see if this is one of the MDS_NO systems supporting
-	 * TSX that are only exposed to SRBDS when TSX is enabled.
+	 * Check to see if this is one of the MDS_NO systems supporting TSX that
+	 * are only exposed to SRBDS when TSX is enabled or when CPU is affected
+	 * by Processor MMIO Stale Data vulnerability.
 	 */
 	ia32_cap = x86_read_arch_cap_msr();
-	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
+	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM) &&
+	    !boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
 		srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
 	else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
 		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;

@@ -1066,6 +1192,8 @@ static void update_indir_branch_cond(void)
 /* Update the static key controlling the MDS CPU buffer clear in idle */
 static void update_mds_branch_idle(void)
 {
+	u64 ia32_cap = x86_read_arch_cap_msr();
+
 	/*
 	 * Enable the idle clearing if SMT is active on CPUs which are
 	 * affected only by MSBDS and not any other MDS variant.

@@ -1077,14 +1205,17 @@ static void update_mds_branch_idle(void)
 	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
 		return;
 
-	if (sched_smt_active())
+	if (sched_smt_active()) {
 		static_branch_enable(&mds_idle_clear);
-	else
+	} else if (mmio_mitigation == MMIO_MITIGATION_OFF ||
+		   (ia32_cap & ARCH_CAP_FBSDP_NO)) {
 		static_branch_disable(&mds_idle_clear);
+	}
 }
 
 #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
 #define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
+#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
 
 void arch_smt_update(void)
 {

@@ -1129,6 +1260,16 @@ void arch_smt_update(void)
 		break;
 	}
 
+	switch (mmio_mitigation) {
+	case MMIO_MITIGATION_VERW:
+	case MMIO_MITIGATION_UCODE_NEEDED:
+		if (sched_smt_active())
+			pr_warn_once(MMIO_MSG_SMT);
+		break;
+	case MMIO_MITIGATION_OFF:
+		break;
+	}
+
 	mutex_unlock(&spec_ctrl_mutex);
 }

@@ -1680,6 +1821,20 @@ static ssize_t tsx_async_abort_show_state(char *buf)
 			  sched_smt_active() ? "vulnerable" : "disabled");
 }
 
+static ssize_t mmio_stale_data_show_state(char *buf)
+{
+	if (mmio_mitigation == MMIO_MITIGATION_OFF)
+		return sysfs_emit(buf, "%s\n", mmio_strings[mmio_mitigation]);
+
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+		return sysfs_emit(buf, "%s; SMT Host state unknown\n",
+				  mmio_strings[mmio_mitigation]);
+	}
+
+	return sysfs_emit(buf, "%s; SMT %s\n", mmio_strings[mmio_mitigation],
+			  sched_smt_active() ? "vulnerable" : "disabled");
+}
+
 static char *stibp_state(void)
 {
 	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))

@@ -1777,6 +1932,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 	case X86_BUG_SRBDS:
 		return srbds_show_state(buf);
 
+	case X86_BUG_MMIO_STALE_DATA:
+		return mmio_stale_data_show_state(buf);
+
 	default:
 		break;
 	}

@@ -1828,4 +1986,9 @@ ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
 }
+
+ssize_t cpu_show_mmio_stale_data(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_STALE_DATA);
+}
 #endif
|
||||
X86_FEATURE_ANY, issues)
|
||||
|
||||
#define SRBDS BIT(0)
|
||||
/* CPU is affected by X86_BUG_MMIO_STALE_DATA */
|
||||
#define MMIO BIT(1)
|
||||
/* CPU is affected by Shared Buffers Data Sampling (SBDS), a variant of X86_BUG_MMIO_STALE_DATA */
|
||||
#define MMIO_SBDS BIT(2)
|
||||
|
||||
static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
|
||||
VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(HASWELL_CORE, X86_STEPPING_ANY, SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(HASWELL_ULT, X86_STEPPING_ANY, SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(HASWELL_GT3E, X86_STEPPING_ANY, SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(HASWELL_X, BIT(2) | BIT(4), MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(BROADWELL_XEON_D,X86_STEPPINGS(0x3, 0x5), MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(BROADWELL_GT3E, X86_STEPPING_ANY, SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(BROADWELL_X, X86_STEPPING_ANY, MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(BROADWELL_CORE, X86_STEPPING_ANY, SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(SKYLAKE_MOBILE, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(SKYLAKE_MOBILE, X86_STEPPING_ANY, SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(SKYLAKE_X, BIT(3) | BIT(4) | BIT(6) |
|
||||
BIT(7) | BIT(0xB), MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP, X86_STEPPING_ANY, SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE, X86_STEPPINGS(0x0, 0xC), SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPINGS(0x0, 0xD), SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE, X86_STEPPINGS(0x9, 0xC), SRBDS | MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE, X86_STEPPINGS(0x0, 0x8), SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPINGS(0x0, 0x8), SRBDS),
|
||||
VULNBL_INTEL_STEPPINGS(ICELAKE_MOBILE, X86_STEPPINGS(0x5, 0x5), MMIO | MMIO_SBDS),
|
||||
VULNBL_INTEL_STEPPINGS(ICELAKE_XEON_D, X86_STEPPINGS(0x1, 0x1), MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPINGS(0x4, 0x6), MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO | MMIO_SBDS),
|
||||
VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS),
|
||||
VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS),
|
||||
VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS),
|
||||
VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_X, X86_STEPPING_ANY, MMIO),
|
||||
VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPINGS(0x0, 0x0), MMIO | MMIO_SBDS),
|
||||
{}
|
||||
};
|
||||
|
||||
@@ -1002,6 +1026,13 @@ u64 x86_read_arch_cap_msr(void)
|
||||
return ia32_cap;
|
||||
}
|
||||
|
||||
static bool arch_cap_mmio_immune(u64 ia32_cap)
|
||||
{
|
||||
return (ia32_cap & ARCH_CAP_FBSDP_NO &&
|
||||
ia32_cap & ARCH_CAP_PSDP_NO &&
|
||||
ia32_cap & ARCH_CAP_SBDR_SSDP_NO);
|
||||
}
|
||||
|
||||
static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
|
||||
{
|
||||
u64 ia32_cap = x86_read_arch_cap_msr();
|
||||
@@ -1053,12 +1084,27 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
|
||||
/*
|
||||
* SRBDS affects CPUs which support RDRAND or RDSEED and are listed
|
||||
* in the vulnerability blacklist.
|
||||
*
|
||||
* Some of the implications and mitigation of Shared Buffers Data
|
||||
* Sampling (SBDS) are similar to SRBDS. Give SBDS same treatment as
|
||||
* SRBDS.
|
||||
*/
|
||||
if ((cpu_has(c, X86_FEATURE_RDRAND) ||
|
||||
cpu_has(c, X86_FEATURE_RDSEED)) &&
|
||||
cpu_matches(cpu_vuln_blacklist, SRBDS))
|
||||
cpu_matches(cpu_vuln_blacklist, SRBDS | MMIO_SBDS))
|
||||
setup_force_cpu_bug(X86_BUG_SRBDS);
|
||||
|
||||
/*
|
||||
* Processor MMIO Stale Data bug enumeration
|
||||
*
|
||||
* Affected CPU list is generally enough to enumerate the vulnerability,
|
||||
* but for virtualization case check for ARCH_CAP MSR bits also, VMM may
|
||||
* not want the guest to enumerate the bug.
|
||||
*/
|
||||
if (cpu_matches(cpu_vuln_blacklist, MMIO) &&
|
||||
!arch_cap_mmio_immune(ia32_cap))
|
||||
setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA);
|
||||
|
||||
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
|
||||
return;
|
||||
|
||||
|
||||
@@ -71,7 +71,7 @@ static bool ring3mwait_disabled __read_mostly;
|
||||
static int __init ring3mwait_disable(char *__unused)
|
||||
{
|
||||
ring3mwait_disabled = true;
|
||||
return 0;
|
||||
return 1;
|
||||
}
|
||||
__setup("ring3mwait=disable", ring3mwait_disable);
|
||||
|
||||
|
||||
@@ -773,9 +773,9 @@ static struct subsys_interface mc_cpu_interface = {
};

/**
 * mc_bp_resume - Update boot CPU microcode during resume.
 * microcode_bsp_resume - Update boot CPU microcode during resume.
 */
static void mc_bp_resume(void)
void microcode_bsp_resume(void)
{
	int cpu = smp_processor_id();
	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
@@ -787,7 +787,7 @@ static void mc_bp_resume(void)
}

static struct syscore_ops mc_syscore_ops = {
	.resume = mc_bp_resume,
	.resume = microcode_bsp_resume,
};

static int mc_cpu_starting(unsigned int cpu)

@@ -175,8 +175,7 @@ void set_task_blockstep(struct task_struct *task, bool on)
	 *
	 * NOTE: this means that set/clear TIF_BLOCKSTEP is only safe if
	 * task is current or it can't be running, otherwise we can race
	 * with __switch_to_xtra(). We rely on ptrace_freeze_traced() but
	 * PTRACE_KILL is not safe.
	 * with __switch_to_xtra(). We rely on ptrace_freeze_traced().
	 */
	local_irq_disable();
	debugctl = get_debugctlmsr();

@@ -70,9 +70,6 @@ static int __init control_va_addr_alignment(char *str)
	if (*str == 0)
		return 1;

	if (*str == '=')
		str++;

	if (!strcmp(str, "32"))
		va_align.flags = ALIGN_VA_32;
	else if (!strcmp(str, "64"))
@@ -82,11 +79,11 @@ static int __init control_va_addr_alignment(char *str)
	else if (!strcmp(str, "on"))
		va_align.flags = ALIGN_VA_32 | ALIGN_VA_64;
	else
		return 0;
		pr_warn("invalid option value: 'align_va_addr=%s'\n", str);

	return 1;
}
__setup("align_va_addr", control_va_addr_alignment);
__setup("align_va_addr=", control_va_addr_alignment);

SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
		unsigned long, prot, unsigned long, flags,

@@ -517,6 +517,11 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
		union cpuid10_eax eax;
		union cpuid10_edx edx;

		if (!static_cpu_has(X86_FEATURE_ARCH_PERFMON)) {
			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
			break;
		}

		perf_get_x86_pmu_capability(&cap);

		/*

@@ -214,6 +214,9 @@ static const struct {
#define L1D_CACHE_ORDER 4
static void *vmx_l1d_flush_pages;

/* Control for disabling CPU Fill buffer clear */
static bool __read_mostly vmx_fb_clear_ctrl_available;

static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
{
	struct page *page;
@@ -820,6 +823,8 @@ struct vcpu_vmx {
	 */
	u64 msr_ia32_feature_control;
	u64 msr_ia32_feature_control_valid_bits;
	u64 msr_ia32_mcu_opt_ctrl;
	bool disable_fb_clear;
};

enum segment_cache_field {
@@ -1628,6 +1633,60 @@ static inline void __invept(unsigned long ext, u64 eptp, gpa_t gpa)
		: : "a" (&operand), "c" (ext) : "cc", "memory");
}

static void vmx_setup_fb_clear_ctrl(void)
{
	u64 msr;

	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES) &&
	    !boot_cpu_has_bug(X86_BUG_MDS) &&
	    !boot_cpu_has_bug(X86_BUG_TAA)) {
		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, msr);
		if (msr & ARCH_CAP_FB_CLEAR_CTRL)
			vmx_fb_clear_ctrl_available = true;
	}
}

static __always_inline void vmx_disable_fb_clear(struct vcpu_vmx *vmx)
{
	u64 msr;

	if (!vmx->disable_fb_clear)
		return;

	rdmsrl(MSR_IA32_MCU_OPT_CTRL, msr);
	msr |= FB_CLEAR_DIS;
	wrmsrl(MSR_IA32_MCU_OPT_CTRL, msr);
	/* Cache the MSR value to avoid reading it later */
	vmx->msr_ia32_mcu_opt_ctrl = msr;
}

static __always_inline void vmx_enable_fb_clear(struct vcpu_vmx *vmx)
{
	if (!vmx->disable_fb_clear)
		return;

	vmx->msr_ia32_mcu_opt_ctrl &= ~FB_CLEAR_DIS;
	wrmsrl(MSR_IA32_MCU_OPT_CTRL, vmx->msr_ia32_mcu_opt_ctrl);
}

static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
{
	vmx->disable_fb_clear = vmx_fb_clear_ctrl_available;

	/*
	 * If guest will not execute VERW, there is no need to set FB_CLEAR_DIS
	 * at VMEntry. Skip the MSR read/write when a guest has no use case to
	 * execute VERW.
	 */
	if ((vcpu->arch.arch_capabilities & ARCH_CAP_FB_CLEAR) ||
	    ((vcpu->arch.arch_capabilities & ARCH_CAP_MDS_NO) &&
	     (vcpu->arch.arch_capabilities & ARCH_CAP_TAA_NO) &&
	     (vcpu->arch.arch_capabilities & ARCH_CAP_PSDP_NO) &&
	     (vcpu->arch.arch_capabilities & ARCH_CAP_FBSDP_NO) &&
	     (vcpu->arch.arch_capabilities & ARCH_CAP_SBDR_SSDP_NO)))
		vmx->disable_fb_clear = false;
}

static struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr)
{
	int i;
@@ -3700,9 +3759,13 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
		}
			break;
	}
		ret = kvm_set_msr_common(vcpu, msr_info);
		ret = kvm_set_msr_common(vcpu, msr_info);
	}

	/* FB_CLEAR may have changed, also update the FB_CLEAR_DIS behavior */
	if (msr_index == MSR_IA32_ARCH_CAPABILITIES)
		vmx_update_fb_clear_dis(vcpu, vmx);

	return ret;
}

@@ -6008,6 +6071,8 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
	update_exception_bitmap(vcpu);

	vpid_sync_context(vmx->vpid);

	vmx_update_fb_clear_dis(vcpu, vmx);
}

/*
@@ -9779,6 +9844,11 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
		vmx_l1d_flush(vcpu);
	else if (static_branch_unlikely(&mds_user_clear))
		mds_clear_cpu_buffers();
	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
		 kvm_arch_has_assigned_device(vcpu->kvm))
		mds_clear_cpu_buffers();

	vmx_disable_fb_clear(vmx);

	asm(
		/* Store host registers */
@@ -9897,6 +9967,8 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
#endif
	      );

	vmx_enable_fb_clear(vmx);

	/*
	 * We do not use IBRS in the kernel. If this vCPU has used the
	 * SPEC_CTRL MSR it may have left it on; save the value and
@@ -12921,8 +12993,11 @@ static int __init vmx_init(void)
		}
	}

	vmx_setup_fb_clear_ctrl();

	for_each_possible_cpu(cpu) {
		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));

		INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
		spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
	}

@@ -1127,6 +1127,10 @@ u64 kvm_get_arch_capabilities(void)

	/* KVM does not emulate MSR_IA32_TSX_CTRL. */
	data &= ~ARCH_CAP_TSX_CTRL_MSR;

	/* Guests don't need to know "Fill buffer clear control" exists */
	data &= ~ARCH_CAP_FB_CLEAR_CTRL;

	return data;
}

@@ -43,8 +43,8 @@ static void delay_loop(unsigned long loops)
		"	jnz 2b		\n"
		"3:	dec %0		\n"

		: /* we don't need output */
		:"a" (loops)
		: "+a" (loops)
		:
	);
}

@@ -140,7 +140,7 @@ void memcpy_flushcache(void *_dst, const void *_src, size_t size)

	/* cache copy and flush to align dest */
	if (!IS_ALIGNED(dest, 8)) {
		unsigned len = min_t(unsigned, size, ALIGN(dest, 8) - dest);
		size_t len = min_t(size_t, size, ALIGN(dest, 8) - dest);

		memcpy((void *) dest, (void *) source, len);
		clean_cache_range((void *) dest, len);

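The `memcpy_flushcache` hunk above swaps `min_t(unsigned, ...)` for `min_t(size_t, ...)`: evaluating the minimum in a 32-bit type silently truncates a `size_t` length above 4 GiB before the comparison. A sketch of the difference, using local helpers rather than the kernel's `min_t` macro (and assuming a 64-bit `size_t`):

```c
#include <stddef.h>
#include <stdint.h>

/* min_u32 models min_t(unsigned, ...): both operands are first
 * converted to 32 bits, so a >4 GiB size wraps before comparison.
 * min_size models the fixed min_t(size_t, ...). Local helpers only. */
static uint32_t min_u32(uint32_t a, uint32_t b) { return a < b ? a : b; }
static size_t   min_size(size_t a, size_t b)    { return a < b ? a : b; }
```

For `size = 2^32 + 8` and an alignment gap of 16 bytes, the `size_t` version correctly picks 16, while the 32-bit version sees `size` as 8 and copies too little.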
@@ -74,7 +74,7 @@ int pat_debug_enable;
static int __init pat_debug_setup(char *str)
{
	pat_debug_enable = 1;
	return 0;
	return 1;
}
__setup("debugpat", pat_debug_setup);

@@ -442,6 +442,11 @@ void __init xen_msi_init(void)

	x86_msi.setup_msi_irqs = xen_hvm_setup_msi_irqs;
	x86_msi.teardown_msi_irq = xen_teardown_msi_irq;
	/*
	 * With XEN PIRQ/Eventchannels in use PCI/MSI[-X] masking is solely
	 * controlled by the hypervisor.
	 */
	pci_msi_ignore_mask = 1;
}
#endif

@@ -26,6 +26,7 @@
#include <asm/cpu.h>
#include <asm/mmu_context.h>
#include <asm/cpu_device_id.h>
#include <asm/microcode.h>

#ifdef CONFIG_X86_32
__visible unsigned long saved_context_ebx;
@@ -268,6 +269,13 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
	x86_platform.restore_sched_clock_state();
	mtrr_bp_restore();
	perf_restore_debug_store();

	microcode_bsp_resume();

	/*
	 * This needs to happen after the microcode has been updated upon resume
	 * because some of the MSRs are "emulated" in microcode.
	 */
	msr_restore_context(ctxt);
}

@@ -23,9 +23,11 @@ static long write_ldt_entry(struct mm_id *mm_idp, int func,
{
	long res;
	void *stub_addr;

	BUILD_BUG_ON(sizeof(*desc) % sizeof(long));

	res = syscall_stub_data(mm_idp, (unsigned long *)desc,
				(sizeof(*desc) + sizeof(long) - 1) &
				~(sizeof(long) - 1),
				sizeof(*desc) / sizeof(long),
				addr, &stub_addr);
	if (!res) {
		unsigned long args[] = { func,

@@ -10,13 +10,12 @@
#include <linux/msg.h>
#include <linux/shm.h>

typedef long syscall_handler_t(void);
typedef long syscall_handler_t(long, long, long, long, long, long);

extern syscall_handler_t *sys_call_table[];

#define EXECUTE_SYSCALL(syscall, regs) \
	(((long (*)(long, long, long, long, long, long)) \
	  (*sys_call_table[syscall]))(UPT_SYSCALL_ARG1(&regs->regs), \
	(((*sys_call_table[syscall]))(UPT_SYSCALL_ARG1(&regs->regs), \
				      UPT_SYSCALL_ARG2(&regs->regs), \
				      UPT_SYSCALL_ARG3(&regs->regs), \
				      UPT_SYSCALL_ARG4(&regs->regs), \

@@ -35,12 +35,12 @@

void user_enable_single_step(struct task_struct *child)
{
	child->ptrace |= PT_SINGLESTEP;
	set_tsk_thread_flag(child, TIF_SINGLESTEP);
}

void user_disable_single_step(struct task_struct *child)
{
	child->ptrace &= ~PT_SINGLESTEP;
	clear_tsk_thread_flag(child, TIF_SINGLESTEP);
}

/*

@@ -459,7 +459,7 @@ static void do_signal(struct pt_regs *regs)
		/* Set up the stack frame */
		ret = setup_frame(&ksig, sigmask_to_save(), regs);
		signal_setup_done(ret, &ksig, 0);
		if (current->ptrace & PT_SINGLESTEP)
		if (test_thread_flag(TIF_SINGLESTEP))
			task_pt_regs(current)->icountlevel = 1;

		return;
@@ -485,7 +485,7 @@ static void do_signal(struct pt_regs *regs)
	/* If there's no signal to deliver, we just restore the saved mask. */
	restore_saved_sigmask();

	if (current->ptrace & PT_SINGLESTEP)
	if (test_thread_flag(TIF_SINGLESTEP))
		task_pt_regs(current)->icountlevel = 1;
	return;
}

@@ -1680,7 +1680,7 @@ struct bio *bio_copy_kern(struct request_queue *q, void *data, unsigned int len,
		if (bytes > len)
			bytes = len;

		page = alloc_page(q->bounce_gfp | gfp_mask);
		page = alloc_page(q->bounce_gfp | __GFP_ZERO | gfp_mask);
		if (!page)
			goto cleanup;

@@ -438,18 +438,29 @@ static ssize_t acpi_data_show(struct file *filp, struct kobject *kobj,
{
	struct acpi_data_attr *data_attr;
	void __iomem *base;
	ssize_t rc;
	ssize_t size;

	data_attr = container_of(bin_attr, struct acpi_data_attr, attr);
	size = data_attr->attr.size;

	base = acpi_os_map_memory(data_attr->addr, data_attr->attr.size);
	if (offset < 0)
		return -EINVAL;

	if (offset >= size)
		return 0;

	if (count > size - offset)
		count = size - offset;

	base = acpi_os_map_iomem(data_attr->addr, size);
	if (!base)
		return -ENOMEM;
	rc = memory_read_from_buffer(buf, count, &offset, base,
				     data_attr->attr.size);
	acpi_os_unmap_memory(base, data_attr->attr.size);

	return rc;
	memcpy_fromio(buf, base + offset, count);

	acpi_os_unmap_iomem(base, size);

	return count;
}

static int acpi_bert_data_init(void *th, struct acpi_data_attr *data_attr)

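The new `acpi_data_show` above validates the read window itself before touching the mapping: reject negative offsets, return 0 at or past end-of-file, and clamp `count` to the bytes remaining. The same arithmetic can be sketched in isolation (`clamp_read` is a local illustration helper, not the kernel function):

```c
#include <stddef.h>

/* Bounds handling for a sysfs binary read, mirroring the hunk above.
 * Returns the number of bytes to copy, 0 at/after EOF, or -1 for an
 * invalid (negative) offset, which the driver maps to -EINVAL. */
static long clamp_read(long offset, size_t count, size_t size)
{
	if (offset < 0)
		return -1;			/* invalid offset */
	if ((size_t)offset >= size)
		return 0;			/* read past EOF: nothing to copy */
	if (count > size - (size_t)offset)
		count = size - (size_t)offset;	/* clamp to remaining bytes */
	return (long)count;
}
```

With the clamp in place, `memcpy_fromio(buf, base + offset, count)` can never run off the end of the mapped region, which the old `memory_read_from_buffer` path left to the callee.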
@@ -196,7 +196,7 @@ static struct {
	{ XFER_PIO_0,			"XFER_PIO_0" },
	{ XFER_PIO_SLOW,		"XFER_PIO_SLOW" }
};
ata_bitfield_name_match(xfer,ata_xfer_names)
ata_bitfield_name_search(xfer, ata_xfer_names)

/*
 * ATA Port attributes

@@ -888,12 +888,14 @@ static int octeon_cf_probe(struct platform_device *pdev)
				int i;
				res_dma = platform_get_resource(dma_dev, IORESOURCE_MEM, 0);
				if (!res_dma) {
					put_device(&dma_dev->dev);
					of_node_put(dma_node);
					return -EINVAL;
				}
				cf_port->dma_base = (u64)devm_ioremap_nocache(&pdev->dev, res_dma->start,
									      resource_size(res_dma));
				if (!cf_port->dma_base) {
					put_device(&dma_dev->dev);
					of_node_put(dma_node);
					return -EINVAL;
				}
@@ -903,6 +905,7 @@ static int octeon_cf_probe(struct platform_device *pdev)
					irq = i;
					irq_handler = octeon_cf_interrupt;
				}
				put_device(&dma_dev->dev);
			}
			of_node_put(dma_node);
		}

@@ -561,6 +561,12 @@ ssize_t __weak cpu_show_srbds(struct device *dev,
	return sprintf(buf, "Not affected\n");
}

ssize_t __weak cpu_show_mmio_stale_data(struct device *dev,
					struct device_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "Not affected\n");
}

static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
@@ -570,6 +576,7 @@ static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL);

static struct attribute *cpu_root_vulnerabilities_attrs[] = {
	&dev_attr_meltdown.attr,
@@ -581,6 +588,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
	&dev_attr_tsx_async_abort.attr,
	&dev_attr_itlb_multihit.attr,
	&dev_attr_srbds.attr,
	&dev_attr_mmio_stale_data.attr,
	NULL
};

@@ -348,6 +348,7 @@ static int register_node(struct node *node, int num)
 */
void unregister_node(struct node *node)
{
	compaction_unregister_node(node);
	hugetlb_unregister_node(node);		/* no-op, if memoryless node */

	device_unregister(&node->dev);

@@ -35,6 +35,22 @@ config BLK_DEV_FD
	  To compile this driver as a module, choose M here: the
	  module will be called floppy.

config BLK_DEV_FD_RAWCMD
	bool "Support for raw floppy disk commands (DEPRECATED)"
	depends on BLK_DEV_FD
	help
	  If you want to use actual physical floppies and expect to do
	  special low-level hardware accesses to them (access and use
	  non-standard formats, for example), then enable this.

	  Note that the code enabled by this option is rarely used and
	  might be unstable or insecure, and distros should not enable it.

	  Note: FDRAWCMD is deprecated and will be removed from the kernel
	  in the near future.

	  If unsure, say N.

config AMIGA_FLOPPY
	tristate "Amiga floppy support"
	depends on AMIGA

@@ -195,7 +195,7 @@ void tl_release(struct drbd_connection *connection, unsigned int barrier_nr,
		unsigned int set_size)
{
	struct drbd_request *r;
	struct drbd_request *req = NULL;
	struct drbd_request *req = NULL, *tmp = NULL;
	int expect_epoch = 0;
	int expect_size = 0;

@@ -249,8 +249,11 @@ void tl_release(struct drbd_connection *connection, unsigned int barrier_nr,
	 * to catch requests being barrier-acked "unexpectedly".
	 * It usually should find the same req again, or some READ preceding it. */
	list_for_each_entry(req, &connection->transfer_log, tl_requests)
		if (req->epoch == expect_epoch)
		if (req->epoch == expect_epoch) {
			tmp = req;
			break;
		}
	req = list_prepare_entry(tmp, &connection->transfer_log, tl_requests);
	list_for_each_entry_safe_from(req, r, &connection->transfer_log, tl_requests) {
		if (req->epoch != expect_epoch)
			break;

@@ -774,9 +774,11 @@ int drbd_adm_set_role(struct sk_buff *skb, struct genl_info *info)
	mutex_lock(&adm_ctx.resource->adm_mutex);

	if (info->genlhdr->cmd == DRBD_ADM_PRIMARY)
		retcode = drbd_set_role(adm_ctx.device, R_PRIMARY, parms.assume_uptodate);
		retcode = (enum drbd_ret_code)drbd_set_role(adm_ctx.device,
						R_PRIMARY, parms.assume_uptodate);
	else
		retcode = drbd_set_role(adm_ctx.device, R_SECONDARY, 0);
		retcode = (enum drbd_ret_code)drbd_set_role(adm_ctx.device,
						R_SECONDARY, 0);

	mutex_unlock(&adm_ctx.resource->adm_mutex);
	genl_lock();
@@ -1941,7 +1943,7 @@ int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
	drbd_flush_workqueue(&connection->sender_work);

	rv = _drbd_request_state(device, NS(disk, D_ATTACHING), CS_VERBOSE);
	retcode = rv;  /* FIXME: Type mismatch. */
	retcode = (enum drbd_ret_code)rv;
	drbd_resume_io(device);
	if (rv < SS_SUCCESS)
		goto fail;
@@ -2671,7 +2673,8 @@ int drbd_adm_connect(struct sk_buff *skb, struct genl_info *info)
	}
	rcu_read_unlock();

	retcode = conn_request_state(connection, NS(conn, C_UNCONNECTED), CS_VERBOSE);
	retcode = (enum drbd_ret_code)conn_request_state(connection,
					NS(conn, C_UNCONNECTED), CS_VERBOSE);

	conn_reconfig_done(connection);
	mutex_unlock(&adm_ctx.resource->adm_mutex);
@@ -2777,7 +2780,7 @@ int drbd_adm_disconnect(struct sk_buff *skb, struct genl_info *info)
	mutex_lock(&adm_ctx.resource->adm_mutex);
	rv = conn_try_disconnect(connection, parms.force_disconnect);
	if (rv < SS_SUCCESS)
		retcode = rv;  /* FIXME: Type mismatch. */
		retcode = (enum drbd_ret_code)rv;
	else
		retcode = NO_ERROR;
	mutex_unlock(&adm_ctx.resource->adm_mutex);

@@ -516,8 +516,8 @@ static unsigned long fdc_busy;
static DECLARE_WAIT_QUEUE_HEAD(fdc_wait);
static DECLARE_WAIT_QUEUE_HEAD(command_done);

/* Errors during formatting are counted here. */
static int format_errors;
/* errors encountered on the current (or last) request */
static int floppy_errors;

/* Format request descriptor. */
static struct format_descr format_req;
@@ -537,7 +537,6 @@ static struct format_descr format_req;
static char *floppy_track_buffer;
static int max_buffer_sectors;

static int *errors;
typedef void (*done_f)(int);
static const struct cont_t {
	void (*interrupt)(void);
@@ -1426,7 +1425,7 @@ static int interpret_errors(void)
		if (DP->flags & FTD_MSG)
			DPRINT("Over/Underrun - retrying\n");
		bad = 0;
	} else if (*errors >= DP->max_errors.reporting) {
	} else if (floppy_errors >= DP->max_errors.reporting) {
		print_errors();
	}
	if (ST2 & ST2_WC || ST2 & ST2_BC)
@@ -2049,7 +2048,7 @@ static void bad_flp_intr(void)
		if (!next_valid_format())
			return;
	}
	err_count = ++(*errors);
	err_count = ++floppy_errors;
	INFBOUND(DRWE->badness, err_count);
	if (err_count > DP->max_errors.abort)
		cont->done(0);
@@ -2194,9 +2193,8 @@ static int do_format(int drive, struct format_descr *tmp_format_req)
		return -EINVAL;
	}
	format_req = *tmp_format_req;
	format_errors = 0;
	cont = &format_cont;
	errors = &format_errors;
	floppy_errors = 0;
	ret = wait_til_done(redo_format, true);
	if (ret == -EINTR)
		return -EINTR;
@@ -2679,7 +2677,7 @@ static int make_raw_rw_request(void)
	 */
	if (!direct ||
	    (indirect * 2 > direct * 3 &&
	     *errors < DP->max_errors.read_track &&
	     floppy_errors < DP->max_errors.read_track &&
	     ((!probing ||
	       (DP->read_track & (1 << DRS->probed_format)))))) {
		max_size = blk_rq_sectors(current_req);
@@ -2813,7 +2811,7 @@ static int set_next_request(void)
		if (q) {
			current_req = blk_fetch_request(q);
			if (current_req) {
				current_req->error_count = 0;
				floppy_errors = 0;
				break;
			}
		}
@@ -2875,7 +2873,6 @@ do_request:
		_floppy = floppy_type + DP->autodetect[DRS->probed_format];
	} else
		probing = 0;
	errors = &(current_req->error_count);
	tmp = make_raw_rw_request();
	if (tmp < 2) {
		request_done(tmp);
@@ -3018,6 +3015,8 @@ static const char *drive_name(int type, int drive)
		return "(null)";
}

#ifdef CONFIG_BLK_DEV_FD_RAWCMD

/* raw commands */
static void raw_cmd_done(int flag)
{
@@ -3229,6 +3228,35 @@ static int raw_cmd_ioctl(int cmd, void __user *param)
	return ret;
}

static int floppy_raw_cmd_ioctl(int type, int drive, int cmd,
				void __user *param)
{
	int ret;

	pr_warn_once("Note: FDRAWCMD is deprecated and will be removed from the kernel in the near future.\n");

	if (type)
		return -EINVAL;
	if (lock_fdc(drive))
		return -EINTR;
	set_floppy(drive);
	ret = raw_cmd_ioctl(cmd, param);
	if (ret == -EINTR)
		return -EINTR;
	process_fd_request();
	return ret;
}

#else /* CONFIG_BLK_DEV_FD_RAWCMD */

static int floppy_raw_cmd_ioctl(int type, int drive, int cmd,
				void __user *param)
{
	return -EOPNOTSUPP;
}

#endif

static int invalidate_drive(struct block_device *bdev)
{
	/* invalidate the buffer track to force a reread */
@@ -3416,7 +3444,6 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
{
	int drive = (long)bdev->bd_disk->private_data;
	int type = ITYPE(UDRS->fd_device);
	int i;
	int ret;
	int size;
	union inparam {
@@ -3567,16 +3594,7 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
		outparam = UDRWE;
		break;
	case FDRAWCMD:
		if (type)
			return -EINVAL;
		if (lock_fdc(drive))
			return -EINTR;
		set_floppy(drive);
		i = raw_cmd_ioctl(cmd, (void __user *)param);
		if (i == -EINTR)
			return -EINTR;
		process_fd_request();
		return i;
		return floppy_raw_cmd_ioctl(type, drive, cmd, (void __user *)param);
	case FDTWADDLE:
		if (lock_fdc(drive))
			return -EINTR;

@@ -1275,7 +1275,7 @@ static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *b
static void nbd_clear_sock_ioctl(struct nbd_device *nbd,
				 struct block_device *bdev)
{
	sock_shutdown(nbd);
	nbd_clear_sock(nbd);
	__invalidate_device(bdev, true);
	nbd_bdev_reset(bdev);
	if (test_and_clear_bit(NBD_HAS_CONFIG_REF,
@@ -1382,15 +1382,20 @@ static struct nbd_config *nbd_alloc_config(void)
{
	struct nbd_config *config;

	if (!try_module_get(THIS_MODULE))
		return ERR_PTR(-ENODEV);

	config = kzalloc(sizeof(struct nbd_config), GFP_NOFS);
	if (!config)
		return NULL;
	if (!config) {
		module_put(THIS_MODULE);
		return ERR_PTR(-ENOMEM);
	}

	atomic_set(&config->recv_threads, 0);
	init_waitqueue_head(&config->recv_wq);
	init_waitqueue_head(&config->conn_wait);
	config->blksize = NBD_DEF_BLKSIZE;
	atomic_set(&config->live_connections, 0);
	try_module_get(THIS_MODULE);
	return config;
}

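The `nbd_alloc_config` hunk above moves `try_module_get` to the front and pairs it with a `module_put` on the allocation failure path, returning `ERR_PTR` codes instead of bare NULL. The invariant is that every successful return holds exactly one module reference and every failure holds none. A userspace sketch of that pairing (the globals and the `_sketch`/`ERR_PTR_` names are illustrative stand-ins, not kernel API):

```c
#include <stddef.h>
#include <stdlib.h>

/* ERR_PTR_ mimics the kernel's negative-errno-as-pointer encoding. */
#define ENODEV_ 19
#define ENOMEM_ 12
#define ERR_PTR_(e) ((void *)(long)(-(e)))

static int module_refs;		/* stands in for the module refcount */
static int module_alive = 1;
static int alloc_fails;		/* test knob: force allocation failure */

static void *nbd_alloc_config_sketch(void)
{
	void *config;

	if (!module_alive)
		return ERR_PTR_(ENODEV_);	/* module going away */
	module_refs++;				/* try_module_get() first */

	config = alloc_fails ? NULL : calloc(1, 64);
	if (!config) {
		module_refs--;			/* module_put(): undo the get */
		return ERR_PTR_(ENOMEM_);
	}
	return config;				/* success: holds one reference */
}
```

The old code took the reference only after a successful allocation, so an allocation failure returned NULL with no way for callers to distinguish "out of memory" from "module unloading", and a later refactor could easily leak the get.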
@@ -1417,12 +1422,13 @@ static int nbd_open(struct block_device *bdev, fmode_t mode)
		mutex_unlock(&nbd->config_lock);
		goto out;
	}
	config = nbd->config = nbd_alloc_config();
	if (!config) {
		ret = -ENOMEM;
	config = nbd_alloc_config();
	if (IS_ERR(config)) {
		ret = PTR_ERR(config);
		mutex_unlock(&nbd->config_lock);
		goto out;
	}
	nbd->config = config;
	refcount_set(&nbd->config_refs, 1);
	refcount_inc(&nbd->refs);
	mutex_unlock(&nbd->config_lock);
@@ -1803,13 +1809,14 @@ again:
		nbd_put(nbd);
		return -EINVAL;
	}
	config = nbd->config = nbd_alloc_config();
	if (!nbd->config) {
	config = nbd_alloc_config();
	if (IS_ERR(config)) {
		mutex_unlock(&nbd->config_lock);
		nbd_put(nbd);
		printk(KERN_ERR "nbd: couldn't allocate config\n");
		return -ENOMEM;
		return PTR_ERR(config);
	}
	nbd->config = config;
	refcount_set(&nbd->config_refs, 1);
	set_bit(NBD_BOUND, &config->runtime_flags);

@@ -2319,6 +2326,12 @@ static void __exit nbd_cleanup(void)
	struct nbd_device *nbd;
	LIST_HEAD(del_list);

	/*
	 * Unregister netlink interface prior to waiting
	 * for the completion of netlink commands.
	 */
	genl_unregister_family(&nbd_genl_family);

	nbd_dbg_close();

	mutex_lock(&nbd_index_mutex);
@@ -2328,13 +2341,15 @@ static void __exit nbd_cleanup(void)
	while (!list_empty(&del_list)) {
		nbd = list_first_entry(&del_list, struct nbd_device, list);
		list_del_init(&nbd->list);
		if (refcount_read(&nbd->config_refs))
			printk(KERN_ERR "nbd: possibly leaking nbd_config (ref %d)\n",
					refcount_read(&nbd->config_refs));
		if (refcount_read(&nbd->refs) != 1)
			printk(KERN_ERR "nbd: possibly leaking a device\n");
		nbd_put(nbd);
	}

	idr_destroy(&nbd_index_idr);
	genl_unregister_family(&nbd_genl_family);
	unregister_blkdev(NBD_MAJOR, "nbd");
}

@@ -224,6 +224,8 @@ static struct sunxi_rsb_device *sunxi_rsb_device_create(struct sunxi_rsb *rsb,

	dev_dbg(&rdev->dev, "device %s registered\n", dev_name(&rdev->dev));

	return rdev;

err_device_add:
	put_device(&rdev->dev);

@@ -816,6 +816,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
		break;

	case SSIF_GETTING_EVENTS:
		if (!msg) {
			/* Should never happen, but just in case. */
			dev_warn(&ssif_info->client->dev,
				 "No message set while getting events\n");
			ipmi_ssif_unlock_cond(ssif_info, flags);
			break;
		}

		if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) {
			/* Error getting event, probably done. */
			msg->done(msg);
@@ -839,6 +847,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
		break;

	case SSIF_GETTING_MESSAGES:
		if (!msg) {
			/* Should never happen, but just in case. */
			dev_warn(&ssif_info->client->dev,
				 "No message set while getting messages\n");
			ipmi_ssif_unlock_cond(ssif_info, flags);
			break;
		}

		if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) {
			/* Error getting event, probably done. */
			msg->done(msg);
@@ -861,6 +877,13 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
			deliver_recv_msg(ssif_info, msg);
		}
		break;

	default:
		/* Should never happen, but just in case. */
		dev_warn(&ssif_info->client->dev,
			 "Invalid state in message done handling: %d\n",
			 ssif_info->ssif_state);
		ipmi_ssif_unlock_cond(ssif_info, flags);
	}

	flags = ipmi_ssif_lock_cond(ssif_info, &oflags);

@@ -692,6 +692,7 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
	if (!wait_event_timeout(ibmvtpm->crq_queue.wq,
				ibmvtpm->rtce_buf != NULL,
				HZ)) {
		rc = -ENODEV;
		dev_err(dev, "CRQ response timed out\n");
		goto init_irq_cleanup;
	}

@@ -119,6 +119,10 @@ static void clk_generated_best_diff(struct clk_rate_request *req,
		tmp_rate = parent_rate;
	else
		tmp_rate = parent_rate / div;

	if (tmp_rate < req->min_rate || tmp_rate > req->max_rate)
		return;

	tmp_diff = abs(req->rate - tmp_rate);

	if (*best_diff < 0 || *best_diff > tmp_diff) {

@@ -117,6 +117,8 @@ static int sun9i_a80_mmc_config_clk_probe(struct platform_device *pdev)
	spin_lock_init(&data->lock);

	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!r)
		return -EINVAL;
	/* one clock/reset pair per word */
	count = DIV_ROUND_UP((resource_size(r)), SUN9I_MMC_WIDTH);
	data->membase = devm_ioremap_resource(&pdev->dev, r);

@@ -247,7 +247,7 @@ static int __init oxnas_rps_timer_init(struct device_node *np)
	}

	rps->irq = irq_of_parse_and_map(np, 0);
	if (rps->irq < 0) {
	if (!rps->irq) {
		ret = -EINVAL;
		goto err_iomap;
	}

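The oxnas hunk above fixes a check that could never fire: `irq_of_parse_and_map()` returns an unsigned virq number with 0 meaning "no mapping", so testing it with `< 0` is dead code. The corrected predicate, reduced to a local illustration helper:

```c
/* irq_of_parse_and_map() yields an unsigned virq; 0 is the failure
 * value, so `virq < 0` is always false. irq_valid is a local helper
 * showing the check the driver now performs. */
static int irq_valid(unsigned int virq)
{
	return virq != 0;	/* 0 means "no mapping", never negative */
}
```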
@@ -227,6 +227,11 @@ static int __init sp804_of_init(struct device_node *np)
	struct clk *clk1, *clk2;
	const char *name = of_get_property(np, "compatible", NULL);

	if (initialized) {
		pr_debug("%pOF: skipping further SP804 timer device\n", np);
		return 0;
	}

	base = of_iomap(np, 0);
	if (!base)
		return -ENXIO;
@@ -235,11 +240,6 @@ static int __init sp804_of_init(struct device_node *np)
	writel(0, base + TIMER_CTRL);
	writel(0, base + TIMER_2_BASE + TIMER_CTRL);

	if (initialized || !of_device_is_available(np)) {
		ret = -EINVAL;
		goto err;
	}

	clk1 = of_clk_get(np, 0);
	if (IS_ERR(clk1))
		clk1 = NULL;

@@ -1241,19 +1241,14 @@ int extcon_dev_register(struct extcon_dev *edev)
		edev->dev.type = &edev->extcon_dev_type;
	}

	ret = device_register(&edev->dev);
	if (ret) {
		put_device(&edev->dev);
		goto err_dev;
	}

	spin_lock_init(&edev->lock);
	edev->nh = devm_kcalloc(&edev->dev, edev->max_supported,
				sizeof(*edev->nh), GFP_KERNEL);
	if (!edev->nh) {
		ret = -ENOMEM;
		device_unregister(&edev->dev);
		goto err_dev;
	if (edev->max_supported) {
		edev->nh = kcalloc(edev->max_supported, sizeof(*edev->nh),
				   GFP_KERNEL);
		if (!edev->nh) {
			ret = -ENOMEM;
			goto err_alloc_nh;
		}
	}

	for (index = 0; index < edev->max_supported; index++)
@@ -1264,6 +1259,12 @@ int extcon_dev_register(struct extcon_dev *edev)
	dev_set_drvdata(&edev->dev, edev);
	edev->state = 0;

	ret = device_register(&edev->dev);
	if (ret) {
		put_device(&edev->dev);
		goto err_dev;
	}

	mutex_lock(&extcon_dev_list_lock);
	list_add(&edev->entry, &extcon_dev_list);
	mutex_unlock(&extcon_dev_list_lock);
@@ -1271,6 +1272,9 @@ int extcon_dev_register(struct extcon_dev *edev)
	return 0;

err_dev:
	if (edev->max_supported)
		kfree(edev->nh);
err_alloc_nh:
	if (edev->max_supported)
		kfree(edev->extcon_dev_type.groups);
err_alloc_groups:
@@ -1331,6 +1335,7 @@ void extcon_dev_unregister(struct extcon_dev *edev)
	if (edev->max_supported) {
		kfree(edev->extcon_dev_type.groups);
		kfree(edev->cables);
		kfree(edev->nh);
	}

	put_device(&edev->dev);

@@ -681,6 +681,7 @@ EXPORT_SYMBOL_GPL(fw_card_release);
 void fw_core_remove_card(struct fw_card *card)
 {
 	struct fw_card_driver dummy_driver = dummy_driver_template;
+	unsigned long flags;
 
 	card->driver->update_phy_reg(card, 4,
 				     PHY_LINK_ACTIVE | PHY_CONTENDER, 0);
@@ -695,7 +696,9 @@ void fw_core_remove_card(struct fw_card *card)
 	dummy_driver.stop_iso = card->driver->stop_iso;
 	card->driver = &dummy_driver;
 
+	spin_lock_irqsave(&card->lock, flags);
 	fw_destroy_nodes(card);
+	spin_unlock_irqrestore(&card->lock, flags);
 
 	/* Wait for all users, especially device workqueue jobs, to finish. */
 	fw_card_put(card);
@@ -1495,6 +1495,7 @@ static void outbound_phy_packet_callback(struct fw_packet *packet,
 {
 	struct outbound_phy_packet_event *e =
 		container_of(packet, struct outbound_phy_packet_event, p);
+	struct client *e_client;
 
 	switch (status) {
 	/* expected: */
@@ -1511,9 +1512,10 @@ static void outbound_phy_packet_callback(struct fw_packet *packet,
 	}
 	e->phy_packet.data[0] = packet->timestamp;
 
+	e_client = e->client;
 	queue_event(e->client, &e->event, &e->phy_packet,
 		    sizeof(e->phy_packet) + e->phy_packet.length, NULL, 0);
-	client_put(e->client);
+	client_put(e_client);
 }
 
 static int ioctl_send_phy_packet(struct client *client, union ioctl_arg *arg)
@@ -387,16 +387,13 @@ static void report_found_node(struct fw_card *card,
 	card->bm_retries = 0;
 }
 
+/* Must be called with card->lock held */
 void fw_destroy_nodes(struct fw_card *card)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(&card->lock, flags);
 	card->color++;
 	if (card->local_node != NULL)
 		for_each_fw_node(card, card->local_node, report_lost_node);
 	card->local_node = NULL;
-	spin_unlock_irqrestore(&card->lock, flags);
 }
 
 static void move_tree(struct fw_node *node0, struct fw_node *node1, int port)
@@ -522,6 +519,8 @@ void fw_core_handle_bus_reset(struct fw_card *card, int node_id, int generation,
 	struct fw_node *local_node;
 	unsigned long flags;
 
+	spin_lock_irqsave(&card->lock, flags);
+
 	/*
 	 * If the selfID buffer is not the immediate successor of the
 	 * previously processed one, we cannot reliably compare the
@@ -533,8 +532,6 @@ void fw_core_handle_bus_reset(struct fw_card *card, int node_id, int generation,
 		card->bm_retries = 0;
 	}
 
-	spin_lock_irqsave(&card->lock, flags);
-
 	card->broadcast_channel_allocated = card->broadcast_channel_auto_allocated;
 	card->node_id = node_id;
 	/*
||||
@@ -86,24 +86,25 @@ static int try_cancel_split_timeout(struct fw_transaction *t)
 static int close_transaction(struct fw_transaction *transaction,
 			     struct fw_card *card, int rcode)
 {
-	struct fw_transaction *t;
+	struct fw_transaction *t = NULL, *iter;
 	unsigned long flags;
 
 	spin_lock_irqsave(&card->lock, flags);
-	list_for_each_entry(t, &card->transaction_list, link) {
-		if (t == transaction) {
-			if (!try_cancel_split_timeout(t)) {
+	list_for_each_entry(iter, &card->transaction_list, link) {
+		if (iter == transaction) {
+			if (!try_cancel_split_timeout(iter)) {
 				spin_unlock_irqrestore(&card->lock, flags);
 				goto timed_out;
 			}
-			list_del_init(&t->link);
-			card->tlabel_mask &= ~(1ULL << t->tlabel);
+			list_del_init(&iter->link);
+			card->tlabel_mask &= ~(1ULL << iter->tlabel);
+			t = iter;
 			break;
 		}
 	}
 	spin_unlock_irqrestore(&card->lock, flags);
 
-	if (&t->link != &card->transaction_list) {
+	if (t) {
 		t->callback(card, rcode, NULL, 0, t->callback_data);
 		return 0;
 	}
@@ -938,7 +939,7 @@ EXPORT_SYMBOL(fw_core_handle_request);
 
 void fw_core_handle_response(struct fw_card *card, struct fw_packet *p)
 {
-	struct fw_transaction *t;
+	struct fw_transaction *t = NULL, *iter;
 	unsigned long flags;
 	u32 *data;
 	size_t data_length;
@@ -950,20 +951,21 @@ void fw_core_handle_response(struct fw_card *card, struct fw_packet *p)
 	rcode = HEADER_GET_RCODE(p->header[1]);
 
 	spin_lock_irqsave(&card->lock, flags);
-	list_for_each_entry(t, &card->transaction_list, link) {
-		if (t->node_id == source && t->tlabel == tlabel) {
-			if (!try_cancel_split_timeout(t)) {
+	list_for_each_entry(iter, &card->transaction_list, link) {
+		if (iter->node_id == source && iter->tlabel == tlabel) {
+			if (!try_cancel_split_timeout(iter)) {
 				spin_unlock_irqrestore(&card->lock, flags);
 				goto timed_out;
 			}
-			list_del_init(&t->link);
-			card->tlabel_mask &= ~(1ULL << t->tlabel);
+			list_del_init(&iter->link);
+			card->tlabel_mask &= ~(1ULL << iter->tlabel);
+			t = iter;
 			break;
 		}
 	}
 	spin_unlock_irqrestore(&card->lock, flags);
 
-	if (&t->link == &card->transaction_list) {
+	if (!t) {
  timed_out:
 		fw_notice(card, "unsolicited response (source %x, tlabel %x)\n",
 			  source, tlabel);
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user