Merge 4.14.204 into android-4.14-stable
Changes in 4.14.204
scripts/setlocalversion: make git describe output more reliable
arm64: link with -z norelro regardless of CONFIG_RELOCATABLE
efivarfs: Replace invalid slashes with exclamation marks in dentries.
gtp: fix an use-before-init in gtp_newlink()
ravb: Fix bit fields checking in ravb_hwtstamp_get()
tipc: fix memory leak caused by tipc_buf_append()
arch/x86/amd/ibs: Fix re-arming IBS Fetch
x86/xen: disable Firmware First mode for correctable memory errors
fuse: fix page dereference after free
p54: avoid accessing the data mapped to streaming DMA
mtd: lpddr: Fix bad logic in print_drs_error
ata: sata_rcar: Fix DMA boundary mask
fscrypt: return -EXDEV for incompatible rename or link into encrypted dir
x86/unwind/orc: Fix inactive tasks with stack pointer in %sp on GCC 10 compiled kernels
mlxsw: core: Fix use-after-free in mlxsw_emad_trans_finish()
futex: Fix incorrect should_fail_futex() handling
powerpc/powernv/smp: Fix spurious DBG() warning
powerpc: select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
sparc64: remove mm_cpumask clearing to fix kthread_use_mm race
f2fs: add trace exit in exception path
f2fs: fix to check segment boundary during SIT page readahead
um: change sigio_spinlock to a mutex
ARM: 8997/2: hw_breakpoint: Handle inexact watchpoint addresses
xfs: fix realtime bitmap/summary file truncation when growing rt volume
video: fbdev: pvr2fb: initialize variables
ath10k: start recovery process when payload length exceeds max htc length for sdio
ath10k: fix VHT NSS calculation when STBC is enabled
drm/brige/megachips: Add checking if ge_b850v3_lvds_init() is working correctly
media: videodev2.h: RGB BT2020 and HSV are always full range
media: platform: Improve queue set up flow for bug fixing
usb: typec: tcpm: During PR_SWAP, source caps should be sent only after tSwapSourceStart
media: tw5864: check status of tw5864_frameinterval_get
mmc: via-sdmmc: Fix data race bug
drm/bridge/synopsys: dsi: add support for non-continuous HS clock
printk: reduce LOG_BUF_SHIFT range for H8300
kgdb: Make "kgdbcon" work properly with "kgdb_earlycon"
cpufreq: sti-cpufreq: add stih418 support
USB: adutux: fix debugging
uio: free uio id after uio file node is freed
arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE
ACPI: Add out of bounds and numa_off protections to pxm_to_node()
drivers/net/wan/hdlc_fr: Correctly handle special skb->protocol values
bus/fsl_mc: Do not rely on caller to provide non NULL mc_io
power: supply: test_power: add missing newlines when printing parameters by sysfs
md/bitmap: md_bitmap_get_counter returns wrong blocks
bnxt_en: Log unknown link speed appropriately.
clk: ti: clockdomain: fix static checker warning
net: 9p: initialize sun_server.sun_path to have addr's value only when addr is valid
drivers: watchdog: rdc321x_wdt: Fix race condition bugs
ext4: Detect already used quota file early
gfs2: add validation checks for size of superblock
arm64: dts: renesas: ulcb: add full-pwr-cycle-in-suspend into eMMC nodes
memory: emif: Remove bogus debugfs error handling
ARM: dts: s5pv210: remove DMA controller bus node name to fix dtschema warnings
ARM: dts: s5pv210: move PMU node out of clock controller
ARM: dts: s5pv210: remove dedicated 'audio-subsystem' node
nbd: make the config put is called before the notifying the waiter
sgl_alloc_order: fix memory leak
nvme-rdma: fix crash when connect rejected
md/raid5: fix oops during stripe resizing
perf/x86/amd/ibs: Don't include randomized bits in get_ibs_op_count()
perf/x86/amd/ibs: Fix raw sample data accumulation
leds: bcm6328, bcm6358: use devres LED registering function
fs: Don't invalidate page buffers in block_write_full_page()
NFS: fix nfs_path in case of a rename retry
ACPI / extlog: Check for RDMSR failure
ACPI: video: use ACPI backlight for HP 635 Notebook
ACPI: debug: don't allow debugging when ACPI is disabled
acpi-cpufreq: Honor _PSD table setting on new AMD CPUs
w1: mxc_w1: Fix timeout resolution problem leading to bus error
scsi: mptfusion: Fix null pointer dereferences in mptscsih_remove()
btrfs: reschedule if necessary when logging directory items
btrfs: send, recompute reference path after orphanization of a directory
btrfs: use kvzalloc() to allocate clone_roots in btrfs_ioctl_send()
btrfs: cleanup cow block on error
btrfs: fix use-after-free on readahead extent after failure to create it
usb: dwc3: ep0: Fix ZLP for OUT ep0 requests
usb: dwc3: core: add phy cleanup for probe error handling
usb: dwc3: core: don't trigger runtime pm when remove driver
usb: cdc-acm: fix cooldown mechanism
usb: host: fsl-mph-dr-of: check return of dma_set_mask()
drm/i915: Force VT'd workarounds when running as a guest OS
vt: keyboard, simplify vt_kdgkbsent
vt: keyboard, extend func_buf_lock to readers
dmaengine: dma-jz4780: Fix race in jz4780_dma_tx_status
iio:light:si1145: Fix timestamp alignment and prevent data leak.
iio:adc:ti-adc0832 Fix alignment issue with timestamp
iio:adc:ti-adc12138 Fix alignment issue with timestamp
iio:gyro:itg3200: Fix timestamp alignment and prevent data leak.
s390/stp: add locking to sysfs functions
powerpc/rtas: Restrict RTAS requests from userspace
powerpc: Warn about use of smt_snooze_delay
powerpc/powernv/elog: Fix race while processing OPAL error log event.
NFSv4.2: support EXCHGID4_FLAG_SUPP_FENCE_OPS 4.2 EXCHANGE_ID flag
NFSD: Add missing NFSv2 .pc_func methods
ubifs: dent: Fix some potential memory leaks while iterating entries
perf python scripting: Fix printable strings in python3 scripts
ubi: check kthread_should_stop() after the setting of task state
ia64: fix build error with !COREDUMP
drm/amdgpu: don't map BO in reserved region
ceph: promote to unsigned long long before shifting
libceph: clear con->out_msg on Policy::stateful_server faults
9P: Cast to loff_t before multiplying
ring-buffer: Return 0 on success from ring_buffer_resize()
vringh: fix __vringh_iov() when riov and wiov are different
ext4: fix leaking sysfs kobject after failed mount
ext4: fix error handling code in add_new_gdb
ext4: fix invalid inode checksum
drm/ttm: fix eviction valuable range check.
rtc: rx8010: don't modify the global rtc ops
tty: make FONTX ioctl use the tty pointer they were actually passed
arm64: berlin: Select DW_APB_TIMER_OF
cachefiles: Handle readpage error correctly
hil/parisc: Disable HIL driver when it gets stuck
arm: dts: mt7623: add missing pause for switchport
ARM: samsung: fix PM debug build with DEBUG_LL but !MMU
ARM: s3c24xx: fix missing system reset
device property: Keep secondary firmware node secondary by type
device property: Don't clear secondary pointer for shared primary firmware node
KVM: arm64: Fix AArch32 handling of DBGD{CCINT,SCRext} and DBGVCR
staging: comedi: cb_pcidas: Allow 2-channel commands for AO subdevice
staging: octeon: repair "fixed-link" support
staging: octeon: Drop on uncorrectable alignment or FCS error
Linux 4.14.204
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ibed153216ddb983a9ef0640ae9c82781f51880fe
@@ -29,8 +29,7 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
 :c:type:`v4l2_hsv_encoding` specifies which encoding is used.
 
 .. note:: The default R'G'B' quantization is full range for all
-   colorspaces except for BT.2020 which uses limited range R'G'B'
-   quantization.
+   colorspaces. HSV formats are always full range.
 
 .. tabularcolumns:: |p{6.0cm}|p{11.5cm}|
 
@@ -162,8 +161,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
       - Details
     * - ``V4L2_QUANTIZATION_DEFAULT``
       - Use the default quantization encoding as defined by the
-        colorspace. This is always full range for R'G'B' (except for the
-        BT.2020 colorspace) and HSV. It is usually limited range for Y'CbCr.
+        colorspace. This is always full range for R'G'B' and HSV.
+        It is usually limited range for Y'CbCr.
     * - ``V4L2_QUANTIZATION_FULL_RANGE``
       - Use the full range quantization encoding. I.e. the range [0…1] is
         mapped to [0…255] (with possible clipping to [1…254] to avoid the
@@ -173,4 +172,4 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
     * - ``V4L2_QUANTIZATION_LIM_RANGE``
       - Use the limited range quantization encoding. I.e. the range [0…1]
         is mapped to [16…235]. Cb and Cr are mapped from [-0.5…0.5] to
-        [16…240].
+        [16…240]. Limited Range cannot be used with HSV.
@@ -370,9 +370,8 @@ Colorspace BT.2020 (V4L2_COLORSPACE_BT2020)
 The :ref:`itu2020` standard defines the colorspace used by Ultra-high
 definition television (UHDTV). The default transfer function is
 ``V4L2_XFER_FUNC_709``. The default Y'CbCr encoding is
-``V4L2_YCBCR_ENC_BT2020``. The default R'G'B' quantization is limited
-range (!), and so is the default Y'CbCr quantization. The chromaticities
-of the primary colors and the white reference are:
+``V4L2_YCBCR_ENC_BT2020``. The default Y'CbCr quantization is limited range.
+The chromaticities of the primary colors and the white reference are:
 
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 14
-SUBLEVEL = 203
+SUBLEVEL = 204
 EXTRAVERSION =
 NAME = Petit Gorille
 
@@ -602,7 +602,9 @@ config ARCH_S3C24XX
	select HAVE_S3C_RTC if RTC_CLASS
	select MULTI_IRQ_HANDLER
	select NEED_MACH_IO_H
+	select S3C2410_WATCHDOG
	select SAMSUNG_ATAGS
+	select WATCHDOG
	help
	  Samsung S3C2410, S3C2412, S3C2413, S3C2416, S3C2440, S3C2442, S3C2443
	  and S3C2450 SoCs based systems, such as the Simtec Electronics BAST
@@ -183,6 +183,7 @@
			fixed-link {
				speed = <1000>;
				full-duplex;
+				pause;
			};
		};
	};
@@ -101,19 +101,16 @@
	};

	clocks: clock-controller@e0100000 {
-		compatible = "samsung,s5pv210-clock", "simple-bus";
+		compatible = "samsung,s5pv210-clock";
		reg = <0xe0100000 0x10000>;
		clock-names = "xxti", "xusbxti";
		clocks = <&xxti>, <&xusbxti>;
		#clock-cells = <1>;
-		#address-cells = <1>;
-		#size-cells = <1>;
-		ranges;
-
-		pmu_syscon: syscon@e0108000 {
-			compatible = "samsung-s5pv210-pmu", "syscon";
-			reg = <0xe0108000 0x8000>;
-		};
+	};
+
+	pmu_syscon: syscon@e0108000 {
+		compatible = "samsung-s5pv210-pmu", "syscon";
+		reg = <0xe0108000 0x8000>;
	};

	pinctrl0: pinctrl@e0200000 {
@@ -129,35 +126,28 @@
		};
	};

-	amba {
-		#address-cells = <1>;
-		#size-cells = <1>;
-		compatible = "simple-bus";
-		ranges;
-
-		pdma0: dma@e0900000 {
-			compatible = "arm,pl330", "arm,primecell";
-			reg = <0xe0900000 0x1000>;
-			interrupt-parent = <&vic0>;
-			interrupts = <19>;
-			clocks = <&clocks CLK_PDMA0>;
-			clock-names = "apb_pclk";
-			#dma-cells = <1>;
-			#dma-channels = <8>;
-			#dma-requests = <32>;
-		};
+	pdma0: dma@e0900000 {
+		compatible = "arm,pl330", "arm,primecell";
+		reg = <0xe0900000 0x1000>;
+		interrupt-parent = <&vic0>;
+		interrupts = <19>;
+		clocks = <&clocks CLK_PDMA0>;
+		clock-names = "apb_pclk";
+		#dma-cells = <1>;
+		#dma-channels = <8>;
+		#dma-requests = <32>;
+	};

-		pdma1: dma@e0a00000 {
-			compatible = "arm,pl330", "arm,primecell";
-			reg = <0xe0a00000 0x1000>;
-			interrupt-parent = <&vic0>;
-			interrupts = <20>;
-			clocks = <&clocks CLK_PDMA1>;
-			clock-names = "apb_pclk";
-			#dma-cells = <1>;
-			#dma-channels = <8>;
-			#dma-requests = <32>;
-		};
-	};
+	pdma1: dma@e0a00000 {
+		compatible = "arm,pl330", "arm,primecell";
+		reg = <0xe0a00000 0x1000>;
+		interrupt-parent = <&vic0>;
+		interrupts = <20>;
+		clocks = <&clocks CLK_PDMA1>;
+		clock-names = "apb_pclk";
+		#dma-cells = <1>;
+		#dma-channels = <8>;
+		#dma-requests = <32>;
+	};

	spi0: spi@e1300000 {
@@ -230,43 +220,36 @@
		status = "disabled";
	};

-	audio-subsystem {
-		compatible = "samsung,s5pv210-audss", "simple-bus";
-		#address-cells = <1>;
-		#size-cells = <1>;
-		ranges;
-
-		clk_audss: clock-controller@eee10000 {
-			compatible = "samsung,s5pv210-audss-clock";
-			reg = <0xeee10000 0x1000>;
-			clock-names = "hclk", "xxti",
-				      "fout_epll",
-				      "sclk_audio0";
-			clocks = <&clocks DOUT_HCLKP>, <&xxti>,
-				 <&clocks FOUT_EPLL>,
-				 <&clocks SCLK_AUDIO0>;
-			#clock-cells = <1>;
-		};
+	clk_audss: clock-controller@eee10000 {
+		compatible = "samsung,s5pv210-audss-clock";
+		reg = <0xeee10000 0x1000>;
+		clock-names = "hclk", "xxti",
+			      "fout_epll",
+			      "sclk_audio0";
+		clocks = <&clocks DOUT_HCLKP>, <&xxti>,
+			 <&clocks FOUT_EPLL>,
+			 <&clocks SCLK_AUDIO0>;
+		#clock-cells = <1>;
+	};

-		i2s0: i2s@eee30000 {
-			compatible = "samsung,s5pv210-i2s";
-			reg = <0xeee30000 0x1000>;
-			interrupt-parent = <&vic2>;
-			interrupts = <16>;
-			dma-names = "rx", "tx", "tx-sec";
-			dmas = <&pdma1 9>, <&pdma1 10>, <&pdma1 11>;
-			clock-names = "iis",
-				      "i2s_opclk0",
-				      "i2s_opclk1";
-			clocks = <&clk_audss CLK_I2S>,
-				 <&clk_audss CLK_I2S>,
-				 <&clk_audss CLK_DOUT_AUD_BUS>;
-			samsung,idma-addr = <0xc0010000>;
-			pinctrl-names = "default";
-			pinctrl-0 = <&i2s0_bus>;
-			#sound-dai-cells = <0>;
-			status = "disabled";
-		};
-	};
+	i2s0: i2s@eee30000 {
+		compatible = "samsung,s5pv210-i2s";
+		reg = <0xeee30000 0x1000>;
+		interrupt-parent = <&vic2>;
+		interrupts = <16>;
+		dma-names = "rx", "tx", "tx-sec";
+		dmas = <&pdma1 9>, <&pdma1 10>, <&pdma1 11>;
+		clock-names = "iis",
+			      "i2s_opclk0",
+			      "i2s_opclk1";
+		clocks = <&clk_audss CLK_I2S>,
+			 <&clk_audss CLK_I2S>,
+			 <&clk_audss CLK_DOUT_AUD_BUS>;
+		samsung,idma-addr = <0xc0010000>;
+		pinctrl-names = "default";
+		pinctrl-0 = <&i2s0_bus>;
+		#sound-dai-cells = <0>;
+		status = "disabled";
+	};

	i2s1: i2s@e2100000 {
@@ -688,6 +688,40 @@ static void disable_single_step(struct perf_event *bp)
	arch_install_hw_breakpoint(bp);
 }

+/*
+ * Arm32 hardware does not always report a watchpoint hit address that matches
+ * one of the watchpoints set. It can also report an address "near" the
+ * watchpoint if a single instruction access both watched and unwatched
+ * addresses. There is no straight-forward way, short of disassembling the
+ * offending instruction, to map that address back to the watchpoint. This
+ * function computes the distance of the memory access from the watchpoint as a
+ * heuristic for the likelyhood that a given access triggered the watchpoint.
+ *
+ * See this same function in the arm64 platform code, which has the same
+ * problem.
+ *
+ * The function returns the distance of the address from the bytes watched by
+ * the watchpoint. In case of an exact match, it returns 0.
+ */
+static u32 get_distance_from_watchpoint(unsigned long addr, u32 val,
+					struct arch_hw_breakpoint_ctrl *ctrl)
+{
+	u32 wp_low, wp_high;
+	u32 lens, lene;
+
+	lens = __ffs(ctrl->len);
+	lene = __fls(ctrl->len);
+
+	wp_low = val + lens;
+	wp_high = val + lene;
+	if (addr < wp_low)
+		return wp_low - addr;
+	else if (addr > wp_high)
+		return addr - wp_high;
+	else
+		return 0;
+}
+
 static int watchpoint_fault_on_uaccess(struct pt_regs *regs,
				       struct arch_hw_breakpoint *info)
 {
@@ -697,23 +731,25 @@ static int watchpoint_fault_on_uaccess(struct pt_regs *regs,
 static void watchpoint_handler(unsigned long addr, unsigned int fsr,
			       struct pt_regs *regs)
 {
-	int i, access;
-	u32 val, ctrl_reg, alignment_mask;
+	int i, access, closest_match = 0;
+	u32 min_dist = -1, dist;
+	u32 val, ctrl_reg;
	struct perf_event *wp, **slots;
	struct arch_hw_breakpoint *info;
	struct arch_hw_breakpoint_ctrl ctrl;

	slots = this_cpu_ptr(wp_on_reg);

+	/*
+	 * Find all watchpoints that match the reported address. If no exact
+	 * match is found. Attribute the hit to the closest watchpoint.
+	 */
+	rcu_read_lock();
	for (i = 0; i < core_num_wrps; ++i) {
-		rcu_read_lock();
-
		wp = slots[i];
-
		if (wp == NULL)
-			goto unlock;
+			continue;

-		info = counter_arch_bp(wp);
		/*
		 * The DFAR is an unknown value on debug architectures prior
		 * to 7.1. Since we only allow a single watchpoint on these
@@ -722,33 +758,31 @@ static void watchpoint_handler(unsigned long addr, unsigned int fsr,
		 */
		if (debug_arch < ARM_DEBUG_ARCH_V7_1) {
			BUG_ON(i > 0);
+			info = counter_arch_bp(wp);
			info->trigger = wp->attr.bp_addr;
		} else {
-			if (info->ctrl.len == ARM_BREAKPOINT_LEN_8)
-				alignment_mask = 0x7;
-			else
-				alignment_mask = 0x3;
-
-			/* Check if the watchpoint value matches. */
-			val = read_wb_reg(ARM_BASE_WVR + i);
-			if (val != (addr & ~alignment_mask))
-				goto unlock;
-
-			/* Possible match, check the byte address select. */
-			ctrl_reg = read_wb_reg(ARM_BASE_WCR + i);
-			decode_ctrl_reg(ctrl_reg, &ctrl);
-			if (!((1 << (addr & alignment_mask)) & ctrl.len))
-				goto unlock;
-
			/* Check that the access type matches. */
			if (debug_exception_updates_fsr()) {
				access = (fsr & ARM_FSR_ACCESS_MASK) ?
					  HW_BREAKPOINT_W : HW_BREAKPOINT_R;
				if (!(access & hw_breakpoint_type(wp)))
-					goto unlock;
+					continue;
			}

+			val = read_wb_reg(ARM_BASE_WVR + i);
+			ctrl_reg = read_wb_reg(ARM_BASE_WCR + i);
+			decode_ctrl_reg(ctrl_reg, &ctrl);
+			dist = get_distance_from_watchpoint(addr, val, &ctrl);
+			if (dist < min_dist) {
+				min_dist = dist;
+				closest_match = i;
+			}
+			/* Is this an exact match? */
+			if (dist != 0)
+				continue;
+
+			/* We have a winner. */
+			info = counter_arch_bp(wp);
			info->trigger = addr;
		}
@@ -770,13 +804,23 @@ static void watchpoint_handler(unsigned long addr, unsigned int fsr,
		 * we can single-step over the watchpoint trigger.
		 */
		if (!is_default_overflow_handler(wp))
-			goto unlock;
-
+			continue;
 step:
		enable_single_step(wp, instruction_pointer(regs));
-unlock:
-		rcu_read_unlock();
	}

+	if (min_dist > 0 && min_dist != -1) {
+		/* No exact match found. */
+		wp = slots[closest_match];
+		info = counter_arch_bp(wp);
+		info->trigger = addr;
+		pr_debug("watchpoint fired: address = 0x%x\n", info->trigger);
+		perf_bp_event(wp, regs);
+		if (is_default_overflow_handler(wp))
+			enable_single_step(wp, instruction_pointer(regs));
+	}
+
+	rcu_read_unlock();
 }

 static void watchpoint_single_step_handler(unsigned long pc)
@@ -242,6 +242,7 @@ config SAMSUNG_PM_DEBUG
	bool "Samsung PM Suspend debug"
	depends on PM && DEBUG_KERNEL
	depends on DEBUG_EXYNOS_UART || DEBUG_S3C24XX_UART || DEBUG_S3C2410_UART
+	depends on DEBUG_LL && MMU
	help
	  Say Y here if you want verbose debugging from the PM Suspend and
	  Resume code. See <file:Documentation/arm/Samsung-S3C24XX/Suspend.txt>
@@ -46,6 +46,7 @@ config ARCH_BCM_IPROC
 config ARCH_BERLIN
	bool "Marvell Berlin SoC Family"
	select DW_APB_ICTL
+	select DW_APB_TIMER_OF
	select GPIOLIB
	select PINCTRL
	help
@@ -10,7 +10,7 @@
 #
 # Copyright (C) 1995-2001 by Russell King

-LDFLAGS_vmlinux	:=--no-undefined -X
+LDFLAGS_vmlinux	:=--no-undefined -X -z norelro
 CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET)
 GZFLAGS		:=-9
@@ -18,7 +18,7 @@ ifeq ($(CONFIG_RELOCATABLE), y)
 # Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour
 # for relative relocs, since this leads to better Image compression
 # with the relocation offsets always being zero.
-LDFLAGS_vmlinux		+= -shared -Bsymbolic -z notext -z norelro \
+LDFLAGS_vmlinux		+= -shared -Bsymbolic -z notext \
			$(call ld-option, --no-apply-dynamic-relocs)
 endif
@@ -397,6 +397,7 @@
	bus-width = <8>;
	mmc-hs200-1_8v;
	non-removable;
+	full-pwr-cycle-in-suspend;
	status = "okay";
 };
@@ -185,6 +185,7 @@ enum vcpu_sysreg {
 #define cp14_DBGWCR0	(DBGWCR0_EL1 * 2)
 #define cp14_DBGWVR0	(DBGWVR0_EL1 * 2)
 #define cp14_DBGDCCINT	(MDCCINT_EL1 * 2)
+#define cp14_DBGVCR	(DBGVCR32_EL2 * 2)

 #define NR_COPRO_REGS	(NR_SYS_REGS * 2)
@@ -25,6 +25,9 @@ const struct cpumask *cpumask_of_node(int node);
 /* Returns a pointer to the cpumask of CPUs on Node 'node'. */
 static inline const struct cpumask *cpumask_of_node(int node)
 {
+	if (node == NUMA_NO_NODE)
+		return cpu_all_mask;
+
	return node_to_cpumask_map[node];
 }
 #endif
@@ -1178,9 +1178,9 @@ static const struct sys_reg_desc cp14_regs[] = {
	{ Op1( 0), CRn( 0), CRm( 1), Op2( 0), trap_raz_wi },
	DBG_BCR_BVR_WCR_WVR(1),
	/* DBGDCCINT */
-	{ Op1( 0), CRn( 0), CRm( 2), Op2( 0), trap_debug32 },
+	{ Op1( 0), CRn( 0), CRm( 2), Op2( 0), trap_debug32, NULL, cp14_DBGDCCINT },
	/* DBGDSCRext */
-	{ Op1( 0), CRn( 0), CRm( 2), Op2( 2), trap_debug32 },
+	{ Op1( 0), CRn( 0), CRm( 2), Op2( 2), trap_debug32, NULL, cp14_DBGDSCRext },
	DBG_BCR_BVR_WCR_WVR(2),
	/* DBGDTR[RT]Xint */
	{ Op1( 0), CRn( 0), CRm( 3), Op2( 0), trap_raz_wi },
@@ -1195,7 +1195,7 @@ static const struct sys_reg_desc cp14_regs[] = {
	{ Op1( 0), CRn( 0), CRm( 6), Op2( 2), trap_raz_wi },
	DBG_BCR_BVR_WCR_WVR(6),
	/* DBGVCR */
-	{ Op1( 0), CRn( 0), CRm( 7), Op2( 0), trap_debug32 },
+	{ Op1( 0), CRn( 0), CRm( 7), Op2( 0), trap_debug32, NULL, cp14_DBGVCR },
	DBG_BCR_BVR_WCR_WVR(7),
	DBG_BCR_BVR_WCR_WVR(8),
	DBG_BCR_BVR_WCR_WVR(9),
@@ -58,7 +58,11 @@ EXPORT_SYMBOL(node_to_cpumask_map);
  */
 const struct cpumask *cpumask_of_node(int node)
 {
-	if (WARN_ON(node >= nr_node_ids))
+	if (node == NUMA_NO_NODE)
+		return cpu_all_mask;
+
+	if (WARN_ON(node < 0 || node >= nr_node_ids))
		return cpu_none_mask;

	if (WARN_ON(node_to_cpumask_map[node] == NULL))
@@ -43,7 +43,7 @@ endif
 obj-$(CONFIG_INTEL_IOMMU)	+= pci-dma.o
 obj-$(CONFIG_SWIOTLB)		+= pci-swiotlb.o

-obj-$(CONFIG_BINFMT_ELF)	+= elfcore.o
+obj-$(CONFIG_ELF_CORE)		+= elfcore.o

 # fp_emulate() expects f2-f5,f16-f31 to contain the user-level state.
 CFLAGS_traps.o  += -mfixed-range=f2-f5,f16-f31
@@ -154,6 +154,7 @@ config PPC
	select ARCH_USE_BUILTIN_BSWAP
	select ARCH_USE_CMPXCHG_LOCKREF		if PPC64
	select ARCH_WANT_IPC_PARSE_VERSION
+	select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
	select ARCH_WEAK_RELEASE_ACQUIRE
	select BINFMT_ELF
	select BUILDTIME_EXTABLE_SORT
@@ -1026,6 +1027,19 @@ config FSL_RIO

 source "drivers/rapidio/Kconfig"

+config PPC_RTAS_FILTER
+	bool "Enable filtering of RTAS syscalls"
+	default y
+	depends on PPC_RTAS
+	help
+	  The RTAS syscall API has security issues that could be used to
+	  compromise system integrity. This option enforces restrictions on the
+	  RTAS calls and arguments passed by userspace programs to mitigate
+	  these issues.
+
+	  Say Y unless you know what you are doing and the filter is causing
+	  problems for you.
+
 endmenu

 config NONSTATIC_KERNEL
@@ -101,7 +101,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
  */
 static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
 {
-	switch_mm(prev, next, current);
+	switch_mm_irqs_off(prev, next, current);
 }

 /* We don't currently use enter_lazy_tlb() for anything */
@@ -1056,6 +1056,147 @@ struct pseries_errorlog *get_pseries_errorlog(struct rtas_error_log *log,
	return NULL;
 }

+#ifdef CONFIG_PPC_RTAS_FILTER
+
+/*
+ * The sys_rtas syscall, as originally designed, allows root to pass
+ * arbitrary physical addresses to RTAS calls. A number of RTAS calls
+ * can be abused to write to arbitrary memory and do other things that
+ * are potentially harmful to system integrity, and thus should only
+ * be used inside the kernel and not exposed to userspace.
+ *
+ * All known legitimate users of the sys_rtas syscall will only ever
+ * pass addresses that fall within the RMO buffer, and use a known
+ * subset of RTAS calls.
+ *
+ * Accordingly, we filter RTAS requests to check that the call is
+ * permitted, and that provided pointers fall within the RMO buffer.
+ * The rtas_filters list contains an entry for each permitted call,
+ * with the indexes of the parameters which are expected to contain
+ * addresses and sizes of buffers allocated inside the RMO buffer.
+ */
+struct rtas_filter {
+	const char *name;
+	int token;
+	/* Indexes into the args buffer, -1 if not used */
+	int buf_idx1;
+	int size_idx1;
+	int buf_idx2;
+	int size_idx2;
+
+	int fixed_size;
+};
+
+static struct rtas_filter rtas_filters[] __ro_after_init = {
+	{ "ibm,activate-firmware", -1, -1, -1, -1, -1 },
+	{ "ibm,configure-connector", -1, 0, -1, 1, -1, 4096 },	/* Special cased */
+	{ "display-character", -1, -1, -1, -1, -1 },
+	{ "ibm,display-message", -1, 0, -1, -1, -1 },
+	{ "ibm,errinjct", -1, 2, -1, -1, -1, 1024 },
+	{ "ibm,close-errinjct", -1, -1, -1, -1, -1 },
+	{ "ibm,open-errinjct", -1, -1, -1, -1, -1 },
+	{ "ibm,get-config-addr-info2", -1, -1, -1, -1, -1 },
+	{ "ibm,get-dynamic-sensor-state", -1, 1, -1, -1, -1 },
+	{ "ibm,get-indices", -1, 2, 3, -1, -1 },
+	{ "get-power-level", -1, -1, -1, -1, -1 },
+	{ "get-sensor-state", -1, -1, -1, -1, -1 },
+	{ "ibm,get-system-parameter", -1, 1, 2, -1, -1 },
+	{ "get-time-of-day", -1, -1, -1, -1, -1 },
+	{ "ibm,get-vpd", -1, 0, -1, 1, 2 },
+	{ "ibm,lpar-perftools", -1, 2, 3, -1, -1 },
+	{ "ibm,platform-dump", -1, 4, 5, -1, -1 },
+	{ "ibm,read-slot-reset-state", -1, -1, -1, -1, -1 },
+	{ "ibm,scan-log-dump", -1, 0, 1, -1, -1 },
+	{ "ibm,set-dynamic-indicator", -1, 2, -1, -1, -1 },
+	{ "ibm,set-eeh-option", -1, -1, -1, -1, -1 },
+	{ "set-indicator", -1, -1, -1, -1, -1 },
+	{ "set-power-level", -1, -1, -1, -1, -1 },
+	{ "set-time-for-power-on", -1, -1, -1, -1, -1 },
+	{ "ibm,set-system-parameter", -1, 1, -1, -1, -1 },
+	{ "set-time-of-day", -1, -1, -1, -1, -1 },
+	{ "ibm,suspend-me", -1, -1, -1, -1, -1 },
+	{ "ibm,update-nodes", -1, 0, -1, -1, -1, 4096 },
+	{ "ibm,update-properties", -1, 0, -1, -1, -1, 4096 },
+	{ "ibm,physical-attestation", -1, 0, 1, -1, -1 },
+};
+
+static bool in_rmo_buf(u32 base, u32 end)
+{
+	return base >= rtas_rmo_buf &&
+		base < (rtas_rmo_buf + RTAS_RMOBUF_MAX) &&
+		base <= end &&
+		end >= rtas_rmo_buf &&
+		end < (rtas_rmo_buf + RTAS_RMOBUF_MAX);
+}
+
+static bool block_rtas_call(int token, int nargs,
+			    struct rtas_args *args)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(rtas_filters); i++) {
+		struct rtas_filter *f = &rtas_filters[i];
+		u32 base, size, end;
+
+		if (token != f->token)
+			continue;
+
+		if (f->buf_idx1 != -1) {
+			base = be32_to_cpu(args->args[f->buf_idx1]);
+			if (f->size_idx1 != -1)
+				size = be32_to_cpu(args->args[f->size_idx1]);
+			else if (f->fixed_size)
+				size = f->fixed_size;
+			else
+				size = 1;
+
+			end = base + size - 1;
+			if (!in_rmo_buf(base, end))
+				goto err;
+		}
+
+		if (f->buf_idx2 != -1) {
+			base = be32_to_cpu(args->args[f->buf_idx2]);
+			if (f->size_idx2 != -1)
+				size = be32_to_cpu(args->args[f->size_idx2]);
+			else if (f->fixed_size)
+				size = f->fixed_size;
+			else
+				size = 1;
+			end = base + size - 1;
+
+			/*
+			 * Special case for ibm,configure-connector where the
+			 * address can be 0
+			 */
+			if (!strcmp(f->name, "ibm,configure-connector") &&
+			    base == 0)
+				return false;
+
+			if (!in_rmo_buf(base, end))
+				goto err;
+		}
+
+		return false;
+	}
+
+err:
+	pr_err_ratelimited("sys_rtas: RTAS call blocked - exploit attempt?\n");
+	pr_err_ratelimited("sys_rtas: token=0x%x, nargs=%d (called by %s)\n",
+			   token, nargs, current->comm);
+	return true;
+}
+
+#else
+
+static bool block_rtas_call(int token, int nargs,
+			    struct rtas_args *args)
+{
+	return false;
+}
+
+#endif /* CONFIG_PPC_RTAS_FILTER */
+
 /* We assume to be passed big endian arguments */
 asmlinkage int ppc_rtas(struct rtas_args __user *uargs)
 {
@@ -1093,6 +1234,9 @@ asmlinkage int ppc_rtas(struct rtas_args __user *uargs)
	args.rets = &args.args[nargs];
	memset(args.rets, 0, nret * sizeof(rtas_arg_t));

+	if (block_rtas_call(token, nargs, &args))
+		return -EINVAL;
+
	/* Need to handle ibm,suspend_me call specially */
	if (token == ibm_suspend_me_token) {
@@ -1154,6 +1298,9 @@ void __init rtas_initialize(void)
	unsigned long rtas_region = RTAS_INSTANTIATE_MAX;
	u32 base, size, entry;
	int no_base, no_size, no_entry;
+#ifdef CONFIG_PPC_RTAS_FILTER
+	int i;
+#endif

	/* Get RTAS dev node and fill up our "rtas" structure with infos
	 * about it.
@@ -1189,6 +1336,12 @@ void __init rtas_initialize(void)
 #ifdef CONFIG_RTAS_ERROR_LOGGING
	rtas_last_error_token = rtas_token("rtas-last-error");
 #endif
+
+#ifdef CONFIG_PPC_RTAS_FILTER
+	for (i = 0; i < ARRAY_SIZE(rtas_filters); i++) {
+		rtas_filters[i].token = rtas_token(rtas_filters[i].name);
+	}
+#endif
 }

 int __init early_init_dt_scan_rtas(unsigned long node,
@@ -28,29 +28,27 @@

 static DEFINE_PER_CPU(struct cpu, cpu_devices);

-/*
- * SMT snooze delay stuff, 64-bit only for now
- */
-
 #ifdef CONFIG_PPC64

-/* Time in microseconds we delay before sleeping in the idle loop */
-static DEFINE_PER_CPU(long, smt_snooze_delay) = { 100 };
+/*
+ * Snooze delay has not been hooked up since 3fa8cad82b94 ("powerpc/pseries/cpuidle:
+ * smt-snooze-delay cleanup.") and has been broken even longer. As was foretold in
+ * 2014:
+ *
+ * "ppc64_util currently utilises it. Once we fix ppc64_util, propose to clean
+ * up the kernel code."
+ *
+ * powerpc-utils stopped using it as of 1.3.8. At some point in the future this
+ * code should be removed.
+ */

 static ssize_t store_smt_snooze_delay(struct device *dev,
				      struct device_attribute *attr,
				      const char *buf,
				      size_t count)
 {
-	struct cpu *cpu = container_of(dev, struct cpu, dev);
-	ssize_t ret;
-	long snooze;
-
-	ret = sscanf(buf, "%ld", &snooze);
-	if (ret != 1)
-		return -EINVAL;
-
-	per_cpu(smt_snooze_delay, cpu->dev.id) = snooze;
+	pr_warn_once("%s (%d) stored to unsupported smt_snooze_delay, which has no effect.\n",
+		     current->comm, current->pid);
+
	return count;
 }
@@ -58,9 +56,9 @@ static ssize_t show_smt_snooze_delay(struct device *dev,
|
||||
struct device_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
struct cpu *cpu = container_of(dev, struct cpu, dev);
|
||||
|
||||
return sprintf(buf, "%ld\n", per_cpu(smt_snooze_delay, cpu->dev.id));
|
||||
pr_warn_once("%s (%d) read from unsupported smt_snooze_delay\n",
|
||||
current->comm, current->pid);
|
||||
return sprintf(buf, "100\n");
|
||||
}
|
||||
|
||||
static DEVICE_ATTR(smt_snooze_delay, 0644, show_smt_snooze_delay,
|
||||
@@ -68,16 +66,10 @@ static DEVICE_ATTR(smt_snooze_delay, 0644, show_smt_snooze_delay,
|
||||
|
||||
static int __init setup_smt_snooze_delay(char *str)
|
||||
{
|
||||
unsigned int cpu;
|
||||
long snooze;
|
||||
|
||||
if (!cpu_has_feature(CPU_FTR_SMT))
|
||||
return 1;
|
||||
|
||||
snooze = simple_strtol(str, NULL, 10);
|
||||
for_each_possible_cpu(cpu)
|
||||
per_cpu(smt_snooze_delay, cpu) = snooze;
|
||||
|
||||
pr_warn("smt-snooze-delay command line option has no effect\n");
|
||||
return 1;
|
||||
}
|
||||
__setup("smt-snooze-delay=", setup_smt_snooze_delay);
|
||||
|
||||
@@ -183,14 +183,14 @@ static ssize_t raw_attr_read(struct file *filep, struct kobject *kobj,
|
||||
return count;
|
||||
}
|
||||
|
||||
static struct elog_obj *create_elog_obj(uint64_t id, size_t size, uint64_t type)
|
||||
static void create_elog_obj(uint64_t id, size_t size, uint64_t type)
|
||||
{
|
||||
struct elog_obj *elog;
|
||||
int rc;
|
||||
|
||||
elog = kzalloc(sizeof(*elog), GFP_KERNEL);
|
||||
if (!elog)
|
||||
return NULL;
|
||||
return;
|
||||
|
||||
elog->kobj.kset = elog_kset;
|
||||
|
||||
@@ -223,18 +223,37 @@ static struct elog_obj *create_elog_obj(uint64_t id, size_t size, uint64_t type)
|
||||
rc = kobject_add(&elog->kobj, NULL, "0x%llx", id);
|
||||
if (rc) {
|
||||
kobject_put(&elog->kobj);
|
||||
return NULL;
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* As soon as the sysfs file for this elog is created/activated there is
|
||||
* a chance the opal_errd daemon (or any userspace) might read and
|
||||
* acknowledge the elog before kobject_uevent() is called. If that
|
||||
* happens then there is a potential race between
|
||||
* elog_ack_store->kobject_put() and kobject_uevent() which leads to a
|
||||
* use-after-free of a kernfs object resulting in a kernel crash.
|
||||
*
|
||||
* To avoid that, we need to take a reference on behalf of the bin file,
|
||||
* so that our reference remains valid while we call kobject_uevent().
|
||||
* We then drop our reference before exiting the function, leaving the
|
||||
* bin file to drop the last reference (if it hasn't already).
|
||||
*/
|
||||
|
||||
/* Take a reference for the bin file */
|
||||
kobject_get(&elog->kobj);
|
||||
rc = sysfs_create_bin_file(&elog->kobj, &elog->raw_attr);
|
||||
if (rc) {
|
||||
if (rc == 0) {
|
||||
kobject_uevent(&elog->kobj, KOBJ_ADD);
|
||||
} else {
|
||||
/* Drop the reference taken for the bin file */
|
||||
kobject_put(&elog->kobj);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
kobject_uevent(&elog->kobj, KOBJ_ADD);
|
||||
/* Drop our reference */
|
||||
kobject_put(&elog->kobj);
|
||||
|
||||
return elog;
|
||||
return;
|
||||
}
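The elog hunk above closes a window where userspace could acknowledge (and free) the log object between sysfs file creation and kobject_uevent(). A minimal userspace sketch of that "extra reference while publishing" pattern, with a toy refcount instead of the kernel kobject API (struct obj, obj_get/obj_put and publish() are all illustrative names, not kernel code):

```c
#include <assert.h>

/* Toy refcounted object: refs drops to 0 => "freed". */
struct obj { int refs; int freed; };

static void obj_get(struct obj *o) { o->refs++; }

static void obj_put(struct obj *o)
{
	if (--o->refs == 0)
		o->freed = 1;
}

/*
 * Publish an object that userspace may concurrently "ack" (drop its
 * reference). The creator takes an extra reference first, so the object
 * is guaranteed to still be alive when the uevent would be sent.
 * Returns 1 if the object survived until after the uevent point.
 */
static int publish(struct obj *o, int userspace_acks_early)
{
	int alive;

	obj_get(o);			/* reference on behalf of the bin file */
	if (userspace_acks_early)
		obj_put(o);		/* racing ack drops its reference */
	alive = !o->freed;		/* kobject_uevent() would run here */
	obj_put(o);			/* drop our reference */
	return alive;
}
```

Without the initial obj_get(), an early ack would free the object before the uevent point, which is exactly the use-after-free the patch describes.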

static irqreturn_t elog_event(int irq, void *data)

@@ -44,7 +44,7 @@
#include <asm/udbg.h>
#define DBG(fmt...) udbg_printf(fmt)
#else
#define DBG(fmt...)
#define DBG(fmt...) do { } while (0)
#endif

static void pnv_smp_setup_cpu(int cpu)

@@ -347,8 +347,9 @@ static DEFINE_PER_CPU(atomic_t, clock_sync_word);
static DEFINE_MUTEX(clock_sync_mutex);
static unsigned long clock_sync_flags;

#define CLOCK_SYNC_HAS_STP 0
#define CLOCK_SYNC_STP 1
#define CLOCK_SYNC_HAS_STP 0
#define CLOCK_SYNC_STP 1
#define CLOCK_SYNC_STPINFO_VALID 2

/*
 * The get_clock function for the physical clock. It will get the current
@@ -585,6 +586,22 @@ void stp_queue_work(void)
queue_work(time_sync_wq, &stp_work);
}

static int __store_stpinfo(void)
{
int rc = chsc_sstpi(stp_page, &stp_info, sizeof(struct stp_sstpi));

if (rc)
clear_bit(CLOCK_SYNC_STPINFO_VALID, &clock_sync_flags);
else
set_bit(CLOCK_SYNC_STPINFO_VALID, &clock_sync_flags);
return rc;
}

static int stpinfo_valid(void)
{
return stp_online && test_bit(CLOCK_SYNC_STPINFO_VALID, &clock_sync_flags);
}
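The s390 STP hunks convert the sysfs show handlers from reading stp_info unguarded to serializing on the work mutex and checking a validity flag that only __store_stpinfo() sets. A userspace sketch of that reader-side pattern (stp_lock, stp_valid, stp_ctnid and ctn_id_show() are illustrative stand-ins; -61 mirrors Linux's -ENODATA value):

```c
#include <pthread.h>
#include <stdio.h>

/* Cached "STP info" guarded by the same lock the updater holds. */
static pthread_mutex_t stp_lock = PTHREAD_MUTEX_INITIALIZER;
static int stp_valid;                  /* set only after a good chsc_sstpi() */
static unsigned long long stp_ctnid;

/*
 * Show handler: format cached data only while the validity flag is set,
 * otherwise report -ENODATA (-61 on Linux) without touching stale state.
 */
static int ctn_id_show(char *buf, size_t len)
{
	int ret = -61; /* -ENODATA */

	pthread_mutex_lock(&stp_lock);
	if (stp_valid)
		ret = snprintf(buf, len, "%016llx\n", stp_ctnid);
	pthread_mutex_unlock(&stp_lock);
	return ret;
}
```

The point of the flag is that a failed refresh invalidates the cache, so readers can never format data from a half-updated stp_info.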

static int stp_sync_clock(void *data)
{
struct clock_sync_data *sync = data;
@@ -606,8 +623,7 @@ static int stp_sync_clock(void *data)
if (rc == 0) {
sync->clock_delta = clock_delta;
clock_sync_global(clock_delta);
rc = chsc_sstpi(stp_page, &stp_info,
sizeof(struct stp_sstpi));
rc = __store_stpinfo();
if (rc == 0 && stp_info.tmd != 2)
rc = -EAGAIN;
}
@@ -652,7 +668,7 @@ static void stp_work_fn(struct work_struct *work)
if (rc)
goto out_unlock;

rc = chsc_sstpi(stp_page, &stp_info, sizeof(struct stp_sstpi));
rc = __store_stpinfo();
if (rc || stp_info.c == 0)
goto out_unlock;

@@ -689,10 +705,14 @@ static ssize_t stp_ctn_id_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
if (!stp_online)
return -ENODATA;
return sprintf(buf, "%016llx\n",
*(unsigned long long *) stp_info.ctnid);
ssize_t ret = -ENODATA;

mutex_lock(&stp_work_mutex);
if (stpinfo_valid())
ret = sprintf(buf, "%016llx\n",
*(unsigned long long *) stp_info.ctnid);
mutex_unlock(&stp_work_mutex);
return ret;
}

static DEVICE_ATTR(ctn_id, 0400, stp_ctn_id_show, NULL);
@@ -701,9 +721,13 @@ static ssize_t stp_ctn_type_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
if (!stp_online)
return -ENODATA;
return sprintf(buf, "%i\n", stp_info.ctn);
ssize_t ret = -ENODATA;

mutex_lock(&stp_work_mutex);
if (stpinfo_valid())
ret = sprintf(buf, "%i\n", stp_info.ctn);
mutex_unlock(&stp_work_mutex);
return ret;
}

static DEVICE_ATTR(ctn_type, 0400, stp_ctn_type_show, NULL);
@@ -712,9 +736,13 @@ static ssize_t stp_dst_offset_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
if (!stp_online || !(stp_info.vbits & 0x2000))
return -ENODATA;
return sprintf(buf, "%i\n", (int)(s16) stp_info.dsto);
ssize_t ret = -ENODATA;

mutex_lock(&stp_work_mutex);
if (stpinfo_valid() && (stp_info.vbits & 0x2000))
ret = sprintf(buf, "%i\n", (int)(s16) stp_info.dsto);
mutex_unlock(&stp_work_mutex);
return ret;
}

static DEVICE_ATTR(dst_offset, 0400, stp_dst_offset_show, NULL);
@@ -723,9 +751,13 @@ static ssize_t stp_leap_seconds_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
if (!stp_online || !(stp_info.vbits & 0x8000))
return -ENODATA;
return sprintf(buf, "%i\n", (int)(s16) stp_info.leaps);
ssize_t ret = -ENODATA;

mutex_lock(&stp_work_mutex);
if (stpinfo_valid() && (stp_info.vbits & 0x8000))
ret = sprintf(buf, "%i\n", (int)(s16) stp_info.leaps);
mutex_unlock(&stp_work_mutex);
return ret;
}

static DEVICE_ATTR(leap_seconds, 0400, stp_leap_seconds_show, NULL);
@@ -734,9 +766,13 @@ static ssize_t stp_stratum_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
if (!stp_online)
return -ENODATA;
return sprintf(buf, "%i\n", (int)(s16) stp_info.stratum);
ssize_t ret = -ENODATA;

mutex_lock(&stp_work_mutex);
if (stpinfo_valid())
ret = sprintf(buf, "%i\n", (int)(s16) stp_info.stratum);
mutex_unlock(&stp_work_mutex);
return ret;
}

static DEVICE_ATTR(stratum, 0400, stp_stratum_show, NULL);
@@ -745,9 +781,13 @@ static ssize_t stp_time_offset_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
if (!stp_online || !(stp_info.vbits & 0x0800))
return -ENODATA;
return sprintf(buf, "%i\n", (int) stp_info.tto);
ssize_t ret = -ENODATA;

mutex_lock(&stp_work_mutex);
if (stpinfo_valid() && (stp_info.vbits & 0x0800))
ret = sprintf(buf, "%i\n", (int) stp_info.tto);
mutex_unlock(&stp_work_mutex);
return ret;
}

static DEVICE_ATTR(time_offset, 0400, stp_time_offset_show, NULL);
@@ -756,9 +796,13 @@ static ssize_t stp_time_zone_offset_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
if (!stp_online || !(stp_info.vbits & 0x4000))
return -ENODATA;
return sprintf(buf, "%i\n", (int)(s16) stp_info.tzo);
ssize_t ret = -ENODATA;

mutex_lock(&stp_work_mutex);
if (stpinfo_valid() && (stp_info.vbits & 0x4000))
ret = sprintf(buf, "%i\n", (int)(s16) stp_info.tzo);
mutex_unlock(&stp_work_mutex);
return ret;
}

static DEVICE_ATTR(time_zone_offset, 0400,
@@ -768,9 +812,13 @@ static ssize_t stp_timing_mode_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
if (!stp_online)
return -ENODATA;
return sprintf(buf, "%i\n", stp_info.tmd);
ssize_t ret = -ENODATA;

mutex_lock(&stp_work_mutex);
if (stpinfo_valid())
ret = sprintf(buf, "%i\n", stp_info.tmd);
mutex_unlock(&stp_work_mutex);
return ret;
}

static DEVICE_ATTR(timing_mode, 0400, stp_timing_mode_show, NULL);
@@ -779,9 +827,13 @@ static ssize_t stp_timing_state_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
if (!stp_online)
return -ENODATA;
return sprintf(buf, "%i\n", stp_info.tst);
ssize_t ret = -ENODATA;

mutex_lock(&stp_work_mutex);
if (stpinfo_valid())
ret = sprintf(buf, "%i\n", stp_info.tst);
mutex_unlock(&stp_work_mutex);
return ret;
}

static DEVICE_ATTR(timing_state, 0400, stp_timing_state_show, NULL);

@@ -1039,38 +1039,9 @@ void smp_fetch_global_pmu(void)
 * are flush_tlb_*() routines, and these run after flush_cache_*()
 * which performs the flushw.
 *
 * The SMP TLB coherency scheme we use works as follows:
 *
 * 1) mm->cpu_vm_mask is a bit mask of which cpus an address
 * space has (potentially) executed on, this is the heuristic
 * we use to avoid doing cross calls.
 *
 * Also, for flushing from kswapd and also for clones, we
 * use cpu_vm_mask as the list of cpus to make run the TLB.
 *
 * 2) TLB context numbers are shared globally across all processors
 * in the system, this allows us to play several games to avoid
 * cross calls.
 *
 * One invariant is that when a cpu switches to a process, and
 * that processes tsk->active_mm->cpu_vm_mask does not have the
 * current cpu's bit set, that tlb context is flushed locally.
 *
 * If the address space is non-shared (ie. mm->count == 1) we avoid
 * cross calls when we want to flush the currently running process's
 * tlb state. This is done by clearing all cpu bits except the current
 * processor's in current->mm->cpu_vm_mask and performing the
 * flush locally only. This will force any subsequent cpus which run
 * this task to flush the context from the local tlb if the process
 * migrates to another cpu (again).
 *
 * 3) For shared address spaces (threads) and swapping we bite the
 * bullet for most cases and perform the cross call (but only to
 * the cpus listed in cpu_vm_mask).
 *
 * The performance gain from "optimizing" away the cross call for threads is
 * questionable (in theory the big win for threads is the massive sharing of
 * address space state across processors).
 * mm->cpu_vm_mask is a bit mask of which cpus an address
 * space has (potentially) executed on, this is the heuristic
 * we use to limit cross calls.
 */

/* This currently is only used by the hugetlb arch pre-fault
@@ -1080,18 +1051,13 @@ void smp_fetch_global_pmu(void)
void smp_flush_tlb_mm(struct mm_struct *mm)
{
u32 ctx = CTX_HWBITS(mm->context);
int cpu = get_cpu();

if (atomic_read(&mm->mm_users) == 1) {
cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
goto local_flush_and_out;
}
get_cpu();

smp_cross_call_masked(&xcall_flush_tlb_mm,
ctx, 0, 0,
mm_cpumask(mm));

local_flush_and_out:
__flush_tlb_mm(ctx, SECONDARY_CONTEXT);

put_cpu();
@@ -1114,17 +1080,15 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
{
u32 ctx = CTX_HWBITS(mm->context);
struct tlb_pending_info info;
int cpu = get_cpu();

get_cpu();

info.ctx = ctx;
info.nr = nr;
info.vaddrs = vaddrs;

if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
else
smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
&info, 1);
smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
&info, 1);

__flush_tlb_pending(ctx, nr, vaddrs);

@@ -1134,14 +1098,13 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr)
{
unsigned long context = CTX_HWBITS(mm->context);
int cpu = get_cpu();

if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
else
smp_cross_call_masked(&xcall_flush_tlb_page,
context, vaddr, 0,
mm_cpumask(mm));
get_cpu();

smp_cross_call_masked(&xcall_flush_tlb_page,
context, vaddr, 0,
mm_cpumask(mm));

__flush_tlb_page(context, vaddr);

put_cpu();

@@ -36,14 +36,14 @@ int write_sigio_irq(int fd)
}

/* These are called from os-Linux/sigio.c to protect its pollfds arrays. */
static DEFINE_SPINLOCK(sigio_spinlock);
static DEFINE_MUTEX(sigio_mutex);

void sigio_lock(void)
{
spin_lock(&sigio_spinlock);
mutex_lock(&sigio_mutex);
}

void sigio_unlock(void)
{
spin_unlock(&sigio_spinlock);
mutex_unlock(&sigio_mutex);
}

@@ -89,6 +89,7 @@ struct perf_ibs {
u64 max_period;
unsigned long offset_mask[1];
int offset_max;
unsigned int fetch_count_reset_broken : 1;
struct cpu_perf_ibs __percpu *pcpu;

struct attribute **format_attrs;
@@ -346,11 +347,15 @@ static u64 get_ibs_op_count(u64 config)
{
u64 count = 0;

/*
 * If the internal 27-bit counter rolled over, the count is MaxCnt
 * and the lower 7 bits of CurCnt are randomized.
 * Otherwise CurCnt has the full 27-bit current counter value.
 */
if (config & IBS_OP_VAL)
count += (config & IBS_OP_MAX_CNT) << 4; /* cnt rolled over */

if (ibs_caps & IBS_CAPS_RDWROPCNT)
count += (config & IBS_OP_CUR_CNT) >> 32;
count = (config & IBS_OP_MAX_CNT) << 4;
else if (ibs_caps & IBS_CAPS_RDWROPCNT)
count = (config & IBS_OP_CUR_CNT) >> 32;

return count;
}
@@ -375,7 +380,12 @@ perf_ibs_event_update(struct perf_ibs *perf_ibs, struct perf_event *event,
static inline void perf_ibs_enable_event(struct perf_ibs *perf_ibs,
struct hw_perf_event *hwc, u64 config)
{
wrmsrl(hwc->config_base, hwc->config | config | perf_ibs->enable_mask);
u64 tmp = hwc->config | config;

if (perf_ibs->fetch_count_reset_broken)
wrmsrl(hwc->config_base, tmp & ~perf_ibs->enable_mask);

wrmsrl(hwc->config_base, tmp | perf_ibs->enable_mask);
}
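The perf_ibs_enable_event() change works around parts where the fetch counter only resets on a 0-to-1 transition of the enable bit: the control MSR is written once with the bit clear, then again with it set. A userspace sketch of that double-write sequence, with the MSR write recorded in an array instead of executed (ENABLE_MASK's bit position, wrmsr_log[] and ibs_enable() are illustrative, not the real MSR layout):

```c
#include <stdint.h>

#define ENABLE_MASK (1ULL << 48)	/* illustrative enable-bit position */

/* Record of "MSR writes" for inspection instead of real wrmsrl(). */
static uint64_t wrmsr_log[2];
static int nwrites;

static void fake_wrmsrl(uint64_t val)
{
	wrmsr_log[nwrites++] = val;
}

/*
 * On broken parts, force a 0->1 transition of the enable bit so the
 * hardware latches the new fetch count; otherwise a single enabling
 * write suffices.
 */
static void ibs_enable(uint64_t config, int reset_broken)
{
	if (reset_broken)
		fake_wrmsrl(config & ~ENABLE_MASK);	/* "0" phase */
	fake_wrmsrl(config | ENABLE_MASK);		/* "1" phase */
}
```

On unaffected parts (reset_broken == 0) only the single enabling write happens, so the workaround costs nothing there.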

/*
@@ -637,18 +647,24 @@ fail:
perf_ibs->offset_max,
offset + 1);
} while (offset < offset_max);
/*
 * Read IbsBrTarget, IbsOpData4, and IbsExtdCtl separately
 * depending on their availability.
 * Can't add to offset_max as they are staggered
 */
if (event->attr.sample_type & PERF_SAMPLE_RAW) {
/*
 * Read IbsBrTarget and IbsOpData4 separately
 * depending on their availability.
 * Can't add to offset_max as they are staggered
 */
if (ibs_caps & IBS_CAPS_BRNTRGT) {
rdmsrl(MSR_AMD64_IBSBRTARGET, *buf++);
size++;
if (perf_ibs == &perf_ibs_op) {
if (ibs_caps & IBS_CAPS_BRNTRGT) {
rdmsrl(MSR_AMD64_IBSBRTARGET, *buf++);
size++;
}
if (ibs_caps & IBS_CAPS_OPDATA4) {
rdmsrl(MSR_AMD64_IBSOPDATA4, *buf++);
size++;
}
}
if (ibs_caps & IBS_CAPS_OPDATA4) {
rdmsrl(MSR_AMD64_IBSOPDATA4, *buf++);
if (perf_ibs == &perf_ibs_fetch && (ibs_caps & IBS_CAPS_FETCHCTLEXTD)) {
rdmsrl(MSR_AMD64_ICIBSEXTDCTL, *buf++);
size++;
}
}
@@ -744,6 +760,13 @@ static __init void perf_event_ibs_init(void)
{
struct attribute **attr = ibs_op_format_attrs;

/*
 * Some chips fail to reset the fetch count when it is written; instead
 * they need a 0-1 transition of IbsFetchEn.
 */
if (boot_cpu_data.x86 >= 0x16 && boot_cpu_data.x86 <= 0x18)
perf_ibs_fetch.fetch_count_reset_broken = 1;

perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");

if (ibs_caps & IBS_CAPS_OPCNT) {

@@ -377,6 +377,7 @@
#define MSR_AMD64_IBSOP_REG_MASK ((1UL<<MSR_AMD64_IBSOP_REG_COUNT)-1)
#define MSR_AMD64_IBSCTL 0xc001103a
#define MSR_AMD64_IBSBRTARGET 0xc001103b
#define MSR_AMD64_ICIBSEXTDCTL 0xc001103c
#define MSR_AMD64_IBSOPDATA4 0xc001103d
#define MSR_AMD64_IBS_REG_COUNT_MAX 8 /* includes MSR_AMD64_IBSBRTARGET */


@@ -255,19 +255,12 @@ EXPORT_SYMBOL_GPL(unwind_get_return_address);

unsigned long *unwind_get_return_address_ptr(struct unwind_state *state)
{
struct task_struct *task = state->task;

if (unwind_done(state))
return NULL;

if (state->regs)
return &state->regs->ip;

if (task != current && state->sp == task->thread.sp) {
struct inactive_task_frame *frame = (void *)task->thread.sp;
return &frame->ret_addr;
}

if (state->sp)
return (unsigned long *)state->sp - 1;

@@ -550,7 +543,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
} else {
struct inactive_task_frame *frame = (void *)task->thread.sp;

state->sp = task->thread.sp;
state->sp = task->thread.sp + sizeof(*frame);
state->bp = READ_ONCE_NOCHECK(frame->bp);
state->ip = READ_ONCE_NOCHECK(frame->ret_addr);
state->signal = (void *)state->ip == ret_from_fork;

@@ -1404,6 +1404,15 @@ asmlinkage __visible void __init xen_start_kernel(void)
x86_init.mpparse.get_smp_config = x86_init_uint_noop;

xen_boot_params_init_edd();

#ifdef CONFIG_ACPI
/*
 * Disable selecting "Firmware First mode" for correctable
 * memory errors, as this is the duty of the hypervisor to
 * decide.
 */
acpi_disable_cmcff = 1;
#endif
}
#ifdef CONFIG_PCI
/* PCI BIOS service won't work from a PV guest. */

@@ -757,6 +757,9 @@ int __init acpi_aml_init(void)
goto err_exit;
}

if (acpi_disabled)
return -ENODEV;

/* Initialize AML IO interface */
mutex_init(&acpi_aml_io.lock);
init_waitqueue_head(&acpi_aml_io.wait);

@@ -224,9 +224,9 @@ static int __init extlog_init(void)
u64 cap;
int rc;

rdmsrl(MSR_IA32_MCG_CAP, cap);

if (!(cap & MCG_ELOG_P) || !extlog_get_l1addr())
if (rdmsrl_safe(MSR_IA32_MCG_CAP, &cap) ||
!(cap & MCG_ELOG_P) ||
!extlog_get_l1addr())
return -ENODEV;

if (edac_get_report_status() == EDAC_REPORTING_FORCE) {

@@ -46,7 +46,7 @@ int acpi_numa __initdata;

int pxm_to_node(int pxm)
{
if (pxm < 0)
if (pxm < 0 || pxm >= MAX_PXM_DOMAINS || numa_off)
return NUMA_NO_NODE;
return pxm_to_node_map[pxm];
}
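The pxm_to_node() hunk hardens the lookup: a proximity domain is rejected if it is negative, beyond the map, or if NUMA is disabled, before the map is ever indexed. A self-contained sketch of the hardened lookup (the MAX_PXM_DOMAINS value here and the test data are illustrative; NUMA_NO_NODE is -1 as in the kernel):

```c
#define MAX_PXM_DOMAINS 256		/* illustrative table size */
#define NUMA_NO_NODE (-1)

static int pxm_to_node_map[MAX_PXM_DOMAINS];
static int numa_off;			/* 1 if NUMA is disabled */

/*
 * Bounds-checked proximity-domain to node lookup: out-of-range pxm
 * values (or NUMA off) yield NUMA_NO_NODE instead of reading past
 * the end of pxm_to_node_map[].
 */
static int pxm_to_node(int pxm)
{
	if (pxm < 0 || pxm >= MAX_PXM_DOMAINS || numa_off)
		return NUMA_NO_NODE;
	return pxm_to_node_map[pxm];
}
```

The pre-patch code only rejected negative values, so a firmware-supplied pxm at or above the table size indexed past the end of the array.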
|
||||
|
||||
@@ -274,6 +274,15 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
|
||||
DMI_MATCH(DMI_PRODUCT_NAME, "530U4E/540U4E"),
|
||||
},
|
||||
},
|
||||
/* https://bugs.launchpad.net/bugs/1894667 */
|
||||
{
|
||||
.callback = video_detect_force_video,
|
||||
.ident = "HP 635 Notebook",
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
|
||||
DMI_MATCH(DMI_PRODUCT_NAME, "HP 635 Notebook PC"),
|
||||
},
|
||||
},
|
||||
|
||||
/* Non win8 machines which need native backlight nevertheless */
|
||||
{
|
||||
|
||||
@@ -122,7 +122,7 @@
|
||||
/* Descriptor table word 0 bit (when DTA32M = 1) */
|
||||
#define SATA_RCAR_DTEND BIT(0)
|
||||
|
||||
#define SATA_RCAR_DMA_BOUNDARY 0x1FFFFFFEUL
|
||||
#define SATA_RCAR_DMA_BOUNDARY 0x1FFFFFFFUL
|
||||
|
||||
/* Gen2 Physical Layer Control Registers */
|
||||
#define RCAR_GEN2_PHY_CTL1_REG 0x1704
|
||||
|
||||
@@ -3074,6 +3074,7 @@ static inline bool fwnode_is_primary(struct fwnode_handle *fwnode)
|
||||
*/
|
||||
void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
|
||||
{
|
||||
struct device *parent = dev->parent;
|
||||
struct fwnode_handle *fn = dev->fwnode;
|
||||
|
||||
if (fwnode) {
|
||||
@@ -3088,7 +3089,8 @@ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
|
||||
} else {
|
||||
if (fwnode_is_primary(fn)) {
|
||||
dev->fwnode = fn->secondary;
|
||||
fn->secondary = NULL;
|
||||
if (!(parent && fn == parent->fwnode))
|
||||
fn->secondary = ERR_PTR(-ENODEV);
|
||||
} else {
|
||||
dev->fwnode = NULL;
|
||||
}
|
||||
|
||||
@@ -725,9 +725,9 @@ static void recv_work(struct work_struct *work)
|
||||
|
||||
blk_mq_complete_request(blk_mq_rq_from_pdu(cmd));
|
||||
}
|
||||
nbd_config_put(nbd);
|
||||
atomic_dec(&config->recv_threads);
|
||||
wake_up(&config->recv_wq);
|
||||
nbd_config_put(nbd);
|
||||
kfree(args);
|
||||
}
|
||||
|
||||
|
||||
@@ -146,10 +146,12 @@ static void __init of_ti_clockdomain_setup(struct device_node *node)
|
||||
if (clk_hw_get_flags(clk_hw) & CLK_IS_BASIC) {
|
||||
pr_warn("can't setup clkdm for basic clk %s\n",
|
||||
__clk_get_name(clk));
|
||||
clk_put(clk);
|
||||
continue;
|
||||
}
|
||||
to_clk_hw_omap(clk_hw)->clkdm_name = clkdm_name;
|
||||
omap2_init_clk_clkdm(clk_hw);
|
||||
clk_put(clk);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -701,7 +701,8 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
|
||||
cpumask_copy(policy->cpus, topology_core_cpumask(cpu));
|
||||
}
|
||||
|
||||
if (check_amd_hwpstate_cpu(cpu) && !acpi_pstate_strict) {
|
||||
if (check_amd_hwpstate_cpu(cpu) && boot_cpu_data.x86 < 0x19 &&
|
||||
!acpi_pstate_strict) {
|
||||
cpumask_clear(policy->cpus);
|
||||
cpumask_set_cpu(cpu, policy->cpus);
|
||||
cpumask_copy(data->freqdomain_cpus,
|
||||
|
||||
@@ -144,7 +144,8 @@ static const struct reg_field sti_stih407_dvfs_regfields[DVFS_MAX_REGFIELDS] = {
|
||||
static const struct reg_field *sti_cpufreq_match(void)
|
||||
{
|
||||
if (of_machine_is_compatible("st,stih407") ||
|
||||
of_machine_is_compatible("st,stih410"))
|
||||
of_machine_is_compatible("st,stih410") ||
|
||||
of_machine_is_compatible("st,stih418"))
|
||||
return sti_stih407_dvfs_regfields;
|
||||
|
||||
return NULL;
|
||||
@@ -261,7 +262,8 @@ static int sti_cpufreq_init(void)
|
||||
int ret;
|
||||
|
||||
if ((!of_machine_is_compatible("st,stih407")) &&
|
||||
(!of_machine_is_compatible("st,stih410")))
|
||||
(!of_machine_is_compatible("st,stih410")) &&
|
||||
(!of_machine_is_compatible("st,stih418")))
|
||||
return -ENODEV;
|
||||
|
||||
ddata.cpu = get_cpu_device(0);
|
||||
|
||||
@@ -567,11 +567,11 @@ static enum dma_status jz4780_dma_tx_status(struct dma_chan *chan,
|
||||
enum dma_status status;
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&jzchan->vchan.lock, flags);
|
||||
|
||||
status = dma_cookie_status(chan, cookie, txstate);
|
||||
if ((status == DMA_COMPLETE) || (txstate == NULL))
|
||||
return status;
|
||||
|
||||
spin_lock_irqsave(&jzchan->vchan.lock, flags);
|
||||
goto out_unlock_irqrestore;
|
||||
|
||||
vdesc = vchan_find_desc(&jzchan->vchan, cookie);
|
||||
if (vdesc) {
|
||||
@@ -588,6 +588,7 @@ static enum dma_status jz4780_dma_tx_status(struct dma_chan *chan,
|
||||
&& jzchan->desc->status & (JZ_DMA_DCS_AR | JZ_DMA_DCS_HLT))
|
||||
status = DMA_ERROR;
|
||||
|
||||
out_unlock_irqrestore:
|
||||
spin_unlock_irqrestore(&jzchan->vchan.lock, flags);
|
||||
return status;
|
||||
}
|
||||
|
||||
@@ -553,6 +553,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
|
||||
struct ww_acquire_ctx ticket;
|
||||
struct list_head list;
|
||||
uint64_t va_flags;
|
||||
uint64_t vm_size;
|
||||
int r = 0;
|
||||
|
||||
if (args->va_address < AMDGPU_VA_RESERVED_SIZE) {
|
||||
@@ -563,6 +564,15 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
vm_size = adev->vm_manager.max_pfn * AMDGPU_GPU_PAGE_SIZE;
|
||||
vm_size -= AMDGPU_VA_RESERVED_SIZE;
|
||||
if (args->va_address + args->map_size > vm_size) {
|
||||
dev_dbg(&dev->pdev->dev,
|
||||
"va_address 0x%llx is in top reserved area 0x%llx\n",
|
||||
args->va_address + args->map_size, vm_size);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if ((args->flags & ~valid_flags) && (args->flags & ~prt_flags)) {
|
||||
dev_err(&dev->pdev->dev, "invalid flags combination 0x%08X\n",
|
||||
args->flags);
|
||||
|
||||
@@ -306,8 +306,12 @@ static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c,
|
||||
const struct i2c_device_id *id)
|
||||
{
|
||||
struct device *dev = &stdp4028_i2c->dev;
|
||||
int ret;
|
||||
|
||||
ge_b850v3_lvds_init(dev);
|
||||
ret = ge_b850v3_lvds_init(dev);
|
||||
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ge_b850v3_lvds_ptr->stdp4028_i2c = stdp4028_i2c;
|
||||
i2c_set_clientdata(stdp4028_i2c, ge_b850v3_lvds_ptr);
|
||||
@@ -365,8 +369,12 @@ static int stdp2690_ge_b850v3_fw_probe(struct i2c_client *stdp2690_i2c,
|
||||
const struct i2c_device_id *id)
|
||||
{
|
||||
struct device *dev = &stdp2690_i2c->dev;
|
||||
int ret;
|
||||
|
||||
ge_b850v3_lvds_init(dev);
|
||||
ret = ge_b850v3_lvds_init(dev);
|
||||
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ge_b850v3_lvds_ptr->stdp2690_i2c = stdp2690_i2c;
|
||||
i2c_set_clientdata(stdp2690_i2c, ge_b850v3_lvds_ptr);
|
||||
|
||||
@@ -311,7 +311,6 @@ static void dw_mipi_message_config(struct dw_mipi_dsi *dsi,
|
||||
if (lpm)
|
||||
val |= CMD_MODE_ALL_LP;
|
||||
|
||||
dsi_write(dsi, DSI_LPCLK_CTRL, lpm ? 0 : PHY_TXREQUESTCLKHS);
|
||||
dsi_write(dsi, DSI_CMD_MODE_CFG, val);
|
||||
}
|
||||
|
||||
@@ -468,16 +467,22 @@ static void dw_mipi_dsi_video_mode_config(struct dw_mipi_dsi *dsi)
|
||||
static void dw_mipi_dsi_set_mode(struct dw_mipi_dsi *dsi,
|
||||
unsigned long mode_flags)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
dsi_write(dsi, DSI_PWR_UP, RESET);
|
||||
|
||||
if (mode_flags & MIPI_DSI_MODE_VIDEO) {
|
||||
dsi_write(dsi, DSI_MODE_CFG, ENABLE_VIDEO_MODE);
|
||||
dw_mipi_dsi_video_mode_config(dsi);
|
||||
dsi_write(dsi, DSI_LPCLK_CTRL, PHY_TXREQUESTCLKHS);
|
||||
} else {
|
||||
dsi_write(dsi, DSI_MODE_CFG, ENABLE_CMD_MODE);
|
||||
}
|
||||
|
||||
val = PHY_TXREQUESTCLKHS;
|
||||
if (dsi->mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS)
|
||||
val |= AUTO_CLKLANE_CTRL;
|
||||
dsi_write(dsi, DSI_LPCLK_CTRL, val);
|
||||
|
||||
dsi_write(dsi, DSI_PWR_UP, POWERUP);
|
||||
}
|
||||
|
||||
|
||||
@@ -33,6 +33,8 @@
|
||||
#include <uapi/drm/i915_drm.h>
|
||||
#include <uapi/drm/drm_fourcc.h>
|
||||
|
||||
#include <asm/hypervisor.h>
|
||||
|
||||
#include <linux/io-mapping.h>
|
||||
#include <linux/i2c.h>
|
||||
#include <linux/i2c-algo-bit.h>
|
||||
@@ -3141,7 +3143,9 @@ static inline bool intel_vtd_active(void)
|
||||
if (intel_iommu_gfx_mapped)
|
||||
return true;
|
||||
#endif
|
||||
return false;
|
||||
|
||||
/* Running as a guest, we assume the host is enforcing VT'd */
|
||||
return !hypervisor_is_type(X86_HYPER_NATIVE);
|
||||
}
|
||||
|
||||
static inline bool intel_scanout_needs_vtd_wa(struct drm_i915_private *dev_priv)
|
||||
|
||||
@@ -729,7 +729,7 @@ bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
|
||||
/* Don't evict this BO if it's outside of the
|
||||
* requested placement range
|
||||
*/
|
||||
if (place->fpfn >= (bo->mem.start + bo->mem.size) ||
|
||||
if (place->fpfn >= (bo->mem.start + bo->mem.num_pages) ||
|
||||
(place->lpfn && place->lpfn <= bo->mem.start))
|
||||
return false;
|
||||
|
||||
|
||||
@@ -31,6 +31,12 @@ struct adc0832 {
|
||||
struct regulator *reg;
|
||||
struct mutex lock;
|
||||
u8 mux_bits;
|
||||
/*
|
||||
* Max size needed: 16x 1 byte ADC data + 8 bytes timestamp
|
||||
* May be shorter if not all channels are enabled subject
|
||||
* to the timestamp remaining 8 byte aligned.
|
||||
*/
|
||||
u8 data[24] __aligned(8);
|
||||
|
||||
u8 tx_buf[2] ____cacheline_aligned;
|
||||
u8 rx_buf[2];
|
||||
@@ -203,7 +209,6 @@ static irqreturn_t adc0832_trigger_handler(int irq, void *p)
|
||||
struct iio_poll_func *pf = p;
|
||||
struct iio_dev *indio_dev = pf->indio_dev;
|
||||
struct adc0832 *adc = iio_priv(indio_dev);
|
||||
u8 data[24] = { }; /* 16x 1 byte ADC data + 8 bytes timestamp */
|
||||
int scan_index;
|
||||
int i = 0;
|
||||
|
||||
@@ -221,10 +226,10 @@ static irqreturn_t adc0832_trigger_handler(int irq, void *p)
|
||||
goto out;
|
||||
}
|
||||
|
||||
data[i] = ret;
|
||||
adc->data[i] = ret;
|
||||
i++;
|
||||
}
|
||||
iio_push_to_buffers_with_timestamp(indio_dev, data,
|
||||
iio_push_to_buffers_with_timestamp(indio_dev, adc->data,
|
||||
iio_get_time_ns(indio_dev));
|
||||
out:
|
||||
mutex_unlock(&adc->lock);
|
||||
|
||||
@@ -50,6 +50,12 @@ struct adc12138 {
     struct completion complete;
     /* The number of cclk periods for the S/H's acquisition time */
     unsigned int acquisition_time;
+    /*
+     * Maximum size needed: 16x 2 bytes ADC data + 8 bytes timestamp.
+     * Less may be need if not all channels are enabled, as long as
+     * the 8 byte alignment of the timestamp is maintained.
+     */
+    __be16 data[20] __aligned(8);

     u8 tx_buf[2] ____cacheline_aligned;
     u8 rx_buf[2];
@@ -333,7 +339,6 @@ static irqreturn_t adc12138_trigger_handler(int irq, void *p)
     struct iio_poll_func *pf = p;
     struct iio_dev *indio_dev = pf->indio_dev;
     struct adc12138 *adc = iio_priv(indio_dev);
-    __be16 data[20] = { }; /* 16x 2 bytes ADC data + 8 bytes timestamp */
     __be16 trash;
     int ret;
     int scan_index;
@@ -349,7 +354,7 @@ static irqreturn_t adc12138_trigger_handler(int irq, void *p)
         reinit_completion(&adc->complete);

         ret = adc12138_start_and_read_conv(adc, scan_chan,
-                    i ? &data[i - 1] : &trash);
+                    i ? &adc->data[i - 1] : &trash);
         if (ret) {
             dev_warn(&adc->spi->dev,
                  "failed to start conversion\n");
@@ -366,7 +371,7 @@ static irqreturn_t adc12138_trigger_handler(int irq, void *p)
     }

     if (i) {
-        ret = adc12138_read_conv_data(adc, &data[i - 1]);
+        ret = adc12138_read_conv_data(adc, &adc->data[i - 1]);
         if (ret) {
             dev_warn(&adc->spi->dev,
                  "failed to get conversion data\n");
@@ -374,7 +379,7 @@ static irqreturn_t adc12138_trigger_handler(int irq, void *p)
         }
     }

-    iio_push_to_buffers_with_timestamp(indio_dev, data,
+    iio_push_to_buffers_with_timestamp(indio_dev, adc->data,
                        iio_get_time_ns(indio_dev));
 out:
     mutex_unlock(&adc->lock);

@@ -49,13 +49,20 @@ static irqreturn_t itg3200_trigger_handler(int irq, void *p)
     struct iio_poll_func *pf = p;
     struct iio_dev *indio_dev = pf->indio_dev;
     struct itg3200 *st = iio_priv(indio_dev);
-    __be16 buf[ITG3200_SCAN_ELEMENTS + sizeof(s64)/sizeof(u16)];
+    /*
+     * Ensure correct alignment and padding including for the
+     * timestamp that may be inserted.
+     */
+    struct {
+        __be16 buf[ITG3200_SCAN_ELEMENTS];
+        s64 ts __aligned(8);
+    } scan;

-    int ret = itg3200_read_all_channels(st->i2c, buf);
+    int ret = itg3200_read_all_channels(st->i2c, scan.buf);
     if (ret < 0)
         goto error_ret;

-    iio_push_to_buffers_with_timestamp(indio_dev, buf, pf->timestamp);
+    iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);

     iio_trigger_notify_done(indio_dev->trig);

@@ -172,6 +172,7 @@ struct si1145_part_info {
  * @part_info: Part information
  * @trig: Pointer to iio trigger
  * @meas_rate: Value of MEAS_RATE register. Only set in HW in auto mode
+ * @buffer: Used to pack data read from sensor.
  */
 struct si1145_data {
     struct i2c_client *client;
@@ -183,6 +184,14 @@ struct si1145_data {
     bool autonomous;
     struct iio_trigger *trig;
     int meas_rate;
+    /*
+     * Ensure timestamp will be naturally aligned if present.
+     * Maximum buffer size (may be only partly used if not all
+     * channels are enabled):
+     * 6*2 bytes channels data + 4 bytes alignment +
+     * 8 bytes timestamp
+     */
+    u8 buffer[24] __aligned(8);
 };

 /**
@@ -444,12 +453,6 @@ static irqreturn_t si1145_trigger_handler(int irq, void *private)
     struct iio_poll_func *pf = private;
     struct iio_dev *indio_dev = pf->indio_dev;
     struct si1145_data *data = iio_priv(indio_dev);
-    /*
-     * Maximum buffer size:
-     * 6*2 bytes channels data + 4 bytes alignment +
-     * 8 bytes timestamp
-     */
-    u8 buffer[24];
     int i, j = 0;
     int ret;
     u8 irq_status = 0;
@@ -482,7 +485,7 @@ static irqreturn_t si1145_trigger_handler(int irq, void *private)

         ret = i2c_smbus_read_i2c_block_data_or_emulated(
                 data->client, indio_dev->channels[i].address,
-                sizeof(u16) * run, &buffer[j]);
+                sizeof(u16) * run, &data->buffer[j]);
         if (ret < 0)
             goto done;
         j += run * sizeof(u16);
@@ -497,7 +500,7 @@ static irqreturn_t si1145_trigger_handler(int irq, void *private)
             goto done;
     }

-    iio_push_to_buffers_with_timestamp(indio_dev, buffer,
+    iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
                        iio_get_time_ns(indio_dev));

 done:

@@ -74,7 +74,7 @@ EXPORT_SYMBOL(hil_mlc_unregister);
 static LIST_HEAD(hil_mlcs);
 static DEFINE_RWLOCK(hil_mlcs_lock);
 static struct timer_list hil_mlcs_kicker;
-static int hil_mlcs_probe;
+static int hil_mlcs_probe, hil_mlc_stop;

 static void hil_mlcs_process(unsigned long unused);
 static DECLARE_TASKLET_DISABLED(hil_mlcs_tasklet, hil_mlcs_process, 0);
@@ -704,9 +704,13 @@ static int hilse_donode(hil_mlc *mlc)
         if (!mlc->ostarted) {
             mlc->ostarted = 1;
             mlc->opacket = pack;
-            mlc->out(mlc);
+            rc = mlc->out(mlc);
             nextidx = HILSEN_DOZE;
             write_unlock_irqrestore(&mlc->lock, flags);
+            if (rc) {
+                hil_mlc_stop = 1;
+                return 1;
+            }
             break;
         }
         mlc->ostarted = 0;
@@ -717,8 +721,13 @@ static int hilse_donode(hil_mlc *mlc)

     case HILSE_CTS:
         write_lock_irqsave(&mlc->lock, flags);
-        nextidx = mlc->cts(mlc) ? node->bad : node->good;
+        rc = mlc->cts(mlc);
+        nextidx = rc ? node->bad : node->good;
         write_unlock_irqrestore(&mlc->lock, flags);
+        if (rc) {
+            hil_mlc_stop = 1;
+            return 1;
+        }
         break;

     default:
@@ -786,6 +795,12 @@ static void hil_mlcs_process(unsigned long unused)

 static void hil_mlcs_timer(unsigned long data)
 {
+    if (hil_mlc_stop) {
+        /* could not send packet - stop immediately. */
+        pr_warn(PREFIX "HIL seems stuck - Disabling HIL MLC.\n");
+        return;
+    }
+
     hil_mlcs_probe = 1;
     tasklet_schedule(&hil_mlcs_tasklet);
     /* Re-insert the periodic task. */

@@ -213,7 +213,7 @@ static int hp_sdc_mlc_cts(hil_mlc *mlc)
         priv->tseq[2] = 1;
         priv->tseq[3] = 0;
         priv->tseq[4] = 0;
-        __hp_sdc_enqueue_transaction(&priv->trans);
+        return __hp_sdc_enqueue_transaction(&priv->trans);
 busy:
     return 1;
 done:
@@ -222,7 +222,7 @@ static int hp_sdc_mlc_cts(hil_mlc *mlc)
     return 0;
 }

-static void hp_sdc_mlc_out(hil_mlc *mlc)
+static int hp_sdc_mlc_out(hil_mlc *mlc)
 {
     struct hp_sdc_mlc_priv_s *priv;

@@ -237,7 +237,7 @@ static void hp_sdc_mlc_out(hil_mlc *mlc)
  do_data:
     if (priv->emtestmode) {
         up(&mlc->osem);
-        return;
+        return 0;
     }
     /* Shouldn't be sending commands when loop may be busy */
     BUG_ON(down_trylock(&mlc->csem));
@@ -299,7 +299,7 @@ static void hp_sdc_mlc_out(hil_mlc *mlc)
         BUG_ON(down_trylock(&mlc->csem));
     }
  enqueue:
-    hp_sdc_enqueue_transaction(&priv->trans);
+    return hp_sdc_enqueue_transaction(&priv->trans);
 }

 static int __init hp_sdc_mlc_init(void)

@@ -336,7 +336,7 @@ static int bcm6328_led(struct device *dev, struct device_node *nc, u32 reg,
     led->cdev.brightness_set = bcm6328_led_set;
     led->cdev.blink_set = bcm6328_blink_set;

-    rc = led_classdev_register(dev, &led->cdev);
+    rc = devm_led_classdev_register(dev, &led->cdev);
     if (rc < 0)
         return rc;

@@ -141,7 +141,7 @@ static int bcm6358_led(struct device *dev, struct device_node *nc, u32 reg,

     led->cdev.brightness_set = bcm6358_led_set;

-    rc = led_classdev_register(dev, &led->cdev);
+    rc = devm_led_classdev_register(dev, &led->cdev);
     if (rc < 0)
         return rc;

@@ -1369,7 +1369,7 @@ __acquires(bitmap->lock)
     if (bitmap->bp[page].hijacked ||
         bitmap->bp[page].map == NULL)
         csize = ((sector_t)1) << (bitmap->chunkshift +
-                      PAGE_COUNTER_SHIFT - 1);
+                      PAGE_COUNTER_SHIFT);
     else
         csize = ((sector_t)1) << bitmap->chunkshift;
     *blocks = csize - (offset & (csize - 1));

@@ -2415,8 +2415,6 @@ static int resize_stripes(struct r5conf *conf, int newsize)
     } else
         err = -ENOMEM;

-    mutex_unlock(&conf->cache_size_mutex);
-
     conf->slab_cache = sc;
     conf->active_name = 1-conf->active_name;

@@ -2439,6 +2437,8 @@ static int resize_stripes(struct r5conf *conf, int newsize)

     if (!err)
         conf->pool_size = newsize;
+    mutex_unlock(&conf->cache_size_mutex);
+
     return err;
 }

@@ -776,6 +776,9 @@ static int tw5864_enum_frameintervals(struct file *file, void *priv,
     fintv->type = V4L2_FRMIVAL_TYPE_STEPWISE;

     ret = tw5864_frameinterval_get(input, &frameinterval);
+    if (ret)
+        return ret;
+
     fintv->stepwise.step = frameinterval;
     fintv->stepwise.min = frameinterval;
     fintv->stepwise.max = frameinterval;
@@ -794,6 +797,9 @@ static int tw5864_g_parm(struct file *file, void *priv,
     cp->capability = V4L2_CAP_TIMEPERFRAME;

     ret = tw5864_frameinterval_get(input, &cp->timeperframe);
+    if (ret)
+        return ret;
+
     cp->timeperframe.numerator *= input->frame_interval;
     cp->capturemode = 0;
     cp->readbuffers = 2;

@@ -579,6 +579,13 @@ static int mtk_jpeg_queue_setup(struct vb2_queue *q,
     if (!q_data)
         return -EINVAL;

+    if (*num_planes) {
+        for (i = 0; i < *num_planes; i++)
+            if (sizes[i] < q_data->sizeimage[i])
+                return -EINVAL;
+        return 0;
+    }
+
     *num_planes = q_data->fmt->colplanes;
     for (i = 0; i < q_data->fmt->colplanes; i++) {
         sizes[i] = q_data->sizeimage[i];

@@ -165,35 +165,12 @@ static const struct file_operations emif_mr4_fops = {

 static int __init_or_module emif_debugfs_init(struct emif_data *emif)
 {
-    struct dentry *dentry;
-    int ret;
-
-    dentry = debugfs_create_dir(dev_name(emif->dev), NULL);
-    if (!dentry) {
-        ret = -ENOMEM;
-        goto err0;
-    }
-    emif->debugfs_root = dentry;
-
-    dentry = debugfs_create_file("regcache_dump", S_IRUGO,
-            emif->debugfs_root, emif, &emif_regdump_fops);
-    if (!dentry) {
-        ret = -ENOMEM;
-        goto err1;
-    }
-
-    dentry = debugfs_create_file("mr4", S_IRUGO,
-            emif->debugfs_root, emif, &emif_mr4_fops);
-    if (!dentry) {
-        ret = -ENOMEM;
-        goto err1;
-    }
-
+    emif->debugfs_root = debugfs_create_dir(dev_name(emif->dev), NULL);
+    debugfs_create_file("regcache_dump", S_IRUGO, emif->debugfs_root, emif,
+                &emif_regdump_fops);
+    debugfs_create_file("mr4", S_IRUGO, emif->debugfs_root, emif,
+                &emif_mr4_fops);
     return 0;
-err1:
-    debugfs_remove_recursive(emif->debugfs_root);
-err0:
-    return ret;
 }

 static void __exit emif_debugfs_exit(struct emif_data *emif)

@@ -1174,8 +1174,10 @@ mptscsih_remove(struct pci_dev *pdev)
     MPT_SCSI_HOST *hd;
     int sz1;

-    if((hd = shost_priv(host)) == NULL)
-        return;
+    if (host == NULL)
+        hd = NULL;
+    else
+        hd = shost_priv(host);

     mptscsih_shutdown(pdev);

@@ -1191,14 +1193,15 @@ mptscsih_remove(struct pci_dev *pdev)
             "Free'd ScsiLookup (%d) memory\n",
             ioc->name, sz1));

-    kfree(hd->info_kbuf);
+    if (hd)
+        kfree(hd->info_kbuf);

     /* NULL the Scsi_Host pointer
      */
     ioc->sh = NULL;

-    scsi_host_put(host);
-
+    if (host)
+        scsi_host_put(host);
     mpt_detach(pdev);

 }

@@ -1275,11 +1275,14 @@ static void via_init_sdc_pm(struct via_crdr_mmc_host *host)
 static int via_sd_suspend(struct pci_dev *pcidev, pm_message_t state)
 {
     struct via_crdr_mmc_host *host;
+    unsigned long flags;

     host = pci_get_drvdata(pcidev);

+    spin_lock_irqsave(&host->lock, flags);
     via_save_pcictrlreg(host);
     via_save_sdcreg(host);
+    spin_unlock_irqrestore(&host->lock, flags);

     pci_save_state(pcidev);
     pci_enable_wake(pcidev, pci_choose_state(pcidev, state), 0);

@@ -1478,6 +1478,19 @@ int ubi_thread(void *u)
             !ubi->thread_enabled || ubi_dbg_is_bgt_disabled(ubi)) {
             set_current_state(TASK_INTERRUPTIBLE);
             spin_unlock(&ubi->wl_lock);
+
+            /*
+             * Check kthread_should_stop() after we set the task
+             * state to guarantee that we either see the stop bit
+             * and exit or the task state is reset to runnable such
+             * that it's not scheduled out indefinitely and detects
+             * the stop bit at kthread_should_stop().
+             */
+            if (kthread_should_stop()) {
+                set_current_state(TASK_RUNNING);
+                break;
+            }
+
             schedule();
             continue;
         }

@@ -5780,6 +5780,11 @@ static void bnxt_report_link(struct bnxt *bp)
         u16 fec;

         netif_carrier_on(bp->dev);
+        speed = bnxt_fw_to_ethtool_speed(bp->link_info.link_speed);
+        if (speed == SPEED_UNKNOWN) {
+            netdev_info(bp->dev, "NIC Link is Up, speed unknown\n");
+            return;
+        }
         if (bp->link_info.duplex == BNXT_LINK_DUPLEX_FULL)
             duplex = "full";
         else
@@ -5792,7 +5797,6 @@ static void bnxt_report_link(struct bnxt *bp)
             flow_ctrl = "ON - receive";
         else
             flow_ctrl = "none";
-        speed = bnxt_fw_to_ethtool_speed(bp->link_info.link_speed);
         netdev_info(bp->dev, "NIC Link is Up, %u Mbps %s duplex, Flow control: %s\n",
                 speed, duplex, flow_ctrl);
         if (bp->flags & BNXT_FLAG_EEE_CAP)

@@ -520,6 +520,9 @@ static void mlxsw_emad_transmit_retry(struct mlxsw_core *mlxsw_core,
         err = mlxsw_emad_transmit(trans->core, trans);
         if (err == 0)
             return;
+
+        if (!atomic_dec_and_test(&trans->active))
+            return;
     } else {
         err = -EIO;
     }

@@ -1768,12 +1768,16 @@ static int ravb_hwtstamp_get(struct net_device *ndev, struct ifreq *req)
     config.flags = 0;
     config.tx_type = priv->tstamp_tx_ctrl ? HWTSTAMP_TX_ON :
                         HWTSTAMP_TX_OFF;
-    if (priv->tstamp_rx_ctrl & RAVB_RXTSTAMP_TYPE_V2_L2_EVENT)
+    switch (priv->tstamp_rx_ctrl & RAVB_RXTSTAMP_TYPE) {
+    case RAVB_RXTSTAMP_TYPE_V2_L2_EVENT:
         config.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
-    else if (priv->tstamp_rx_ctrl & RAVB_RXTSTAMP_TYPE_ALL)
+        break;
+    case RAVB_RXTSTAMP_TYPE_ALL:
         config.rx_filter = HWTSTAMP_FILTER_ALL;
-    else
+        break;
+    default:
         config.rx_filter = HWTSTAMP_FILTER_NONE;
+    }

     return copy_to_user(req->ifr_data, &config, sizeof(config)) ?
         -EFAULT : 0;

@@ -667,10 +667,6 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev,

     gtp = netdev_priv(dev);

-    err = gtp_encap_enable(gtp, data);
-    if (err < 0)
-        return err;
-
     if (!data[IFLA_GTP_PDP_HASHSIZE]) {
         hashsize = 1024;
     } else {
@@ -681,12 +677,16 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev,

     err = gtp_hashtable_new(gtp, hashsize);
     if (err < 0)
-        goto out_encap;
+        return err;
+
+    err = gtp_encap_enable(gtp, data);
+    if (err < 0)
+        goto out_hashtable;

     err = register_netdevice(dev);
     if (err < 0) {
         netdev_dbg(dev, "failed to register new netdev %d\n", err);
-        goto out_hashtable;
+        goto out_encap;
     }

     gn = net_generic(dev_net(dev), gtp_net_id);
@@ -697,11 +697,11 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev,

     return 0;

+out_encap:
+    gtp_encap_disable(gtp);
 out_hashtable:
     kfree(gtp->addr_hash);
     kfree(gtp->tid_hash);
-out_encap:
-    gtp_encap_disable(gtp);
     return err;
 }

@@ -275,63 +275,69 @@ static inline struct net_device **get_dev_p(struct pvc_device *pvc,

 static int fr_hard_header(struct sk_buff **skb_p, u16 dlci)
 {
-    u16 head_len;
     struct sk_buff *skb = *skb_p;

-    switch (skb->protocol) {
-    case cpu_to_be16(NLPID_CCITT_ANSI_LMI):
-        head_len = 4;
-        skb_push(skb, head_len);
-        skb->data[3] = NLPID_CCITT_ANSI_LMI;
-        break;
+    if (!skb->dev) { /* Control packets */
+        switch (dlci) {
+        case LMI_CCITT_ANSI_DLCI:
+            skb_push(skb, 4);
+            skb->data[3] = NLPID_CCITT_ANSI_LMI;
+            break;

-    case cpu_to_be16(NLPID_CISCO_LMI):
-        head_len = 4;
-        skb_push(skb, head_len);
-        skb->data[3] = NLPID_CISCO_LMI;
-        break;
+        case LMI_CISCO_DLCI:
+            skb_push(skb, 4);
+            skb->data[3] = NLPID_CISCO_LMI;
+            break;

-    case cpu_to_be16(ETH_P_IP):
-        head_len = 4;
-        skb_push(skb, head_len);
-        skb->data[3] = NLPID_IP;
-        break;
+        default:
+            return -EINVAL;
+        }

-    case cpu_to_be16(ETH_P_IPV6):
-        head_len = 4;
-        skb_push(skb, head_len);
-        skb->data[3] = NLPID_IPV6;
-        break;
+    } else if (skb->dev->type == ARPHRD_DLCI) {
+        switch (skb->protocol) {
+        case htons(ETH_P_IP):
+            skb_push(skb, 4);
+            skb->data[3] = NLPID_IP;
+            break;

-    case cpu_to_be16(ETH_P_802_3):
-        head_len = 10;
-        if (skb_headroom(skb) < head_len) {
-            struct sk_buff *skb2 = skb_realloc_headroom(skb,
-                                    head_len);
+        case htons(ETH_P_IPV6):
+            skb_push(skb, 4);
+            skb->data[3] = NLPID_IPV6;
+            break;
+
+        default:
+            skb_push(skb, 10);
+            skb->data[3] = FR_PAD;
+            skb->data[4] = NLPID_SNAP;
+            /* OUI 00-00-00 indicates an Ethertype follows */
+            skb->data[5] = 0x00;
+            skb->data[6] = 0x00;
+            skb->data[7] = 0x00;
+            /* This should be an Ethertype: */
+            *(__be16 *)(skb->data + 8) = skb->protocol;
+        }
+
+    } else if (skb->dev->type == ARPHRD_ETHER) {
+        if (skb_headroom(skb) < 10) {
+            struct sk_buff *skb2 = skb_realloc_headroom(skb, 10);
             if (!skb2)
                 return -ENOBUFS;
             dev_kfree_skb(skb);
             skb = *skb_p = skb2;
         }
-        skb_push(skb, head_len);
+        skb_push(skb, 10);
         skb->data[3] = FR_PAD;
         skb->data[4] = NLPID_SNAP;
-        skb->data[5] = FR_PAD;
+        /* OUI 00-80-C2 stands for the 802.1 organization */
+        skb->data[5] = 0x00;
         skb->data[6] = 0x80;
         skb->data[7] = 0xC2;
+        /* PID 00-07 stands for Ethernet frames without FCS */
         skb->data[8] = 0x00;
-        skb->data[9] = 0x07; /* bridged Ethernet frame w/out FCS */
-        break;
+        skb->data[9] = 0x07;

-    default:
-        head_len = 10;
-        skb_push(skb, head_len);
-        skb->data[3] = FR_PAD;
-        skb->data[4] = NLPID_SNAP;
-        skb->data[5] = FR_PAD;
-        skb->data[6] = FR_PAD;
-        skb->data[7] = FR_PAD;
-        *(__be16*)(skb->data + 8) = skb->protocol;
+    } else {
+        return -EINVAL;
     }

     dlci_to_q922(skb->data, dlci);
@@ -427,8 +433,8 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
             skb_put(skb, pad);
             memset(skb->data + len, 0, pad);
         }
-        skb->protocol = cpu_to_be16(ETH_P_802_3);
     }
+    skb->dev = dev;
     if (!fr_hard_header(&skb, pvc->dlci)) {
         dev->stats.tx_bytes += skb->len;
         dev->stats.tx_packets++;
@@ -496,10 +502,8 @@ static void fr_lmi_send(struct net_device *dev, int fullrep)
     memset(skb->data, 0, len);
     skb_reserve(skb, 4);
     if (lmi == LMI_CISCO) {
-        skb->protocol = cpu_to_be16(NLPID_CISCO_LMI);
         fr_hard_header(&skb, LMI_CISCO_DLCI);
     } else {
-        skb->protocol = cpu_to_be16(NLPID_CCITT_ANSI_LMI);
         fr_hard_header(&skb, LMI_CCITT_ANSI_DLCI);
     }
     data = skb_tail_pointer(skb);

@@ -622,6 +622,7 @@ static void ath10k_htt_rx_h_rates(struct ath10k *ar,
     u8 preamble = 0;
     u8 group_id;
     u32 info1, info2, info3;
+    u32 stbc, nsts_su;

     info1 = __le32_to_cpu(rxd->ppdu_start.info1);
     info2 = __le32_to_cpu(rxd->ppdu_start.info2);
@@ -666,11 +667,16 @@ static void ath10k_htt_rx_h_rates(struct ath10k *ar,
          */
         bw = info2 & 3;
         sgi = info3 & 1;
+        stbc = (info2 >> 3) & 1;
         group_id = (info2 >> 4) & 0x3F;

         if (GROUP_ID_IS_SU_MIMO(group_id)) {
             mcs = (info3 >> 4) & 0x0F;
-            nss = ((info2 >> 10) & 0x07) + 1;
+            nsts_su = ((info2 >> 10) & 0x07);
+            if (stbc)
+                nss = (nsts_su >> 2) + 1;
+            else
+                nss = (nsts_su + 1);
         } else {
             /* Hardware doesn't decode VHT-SIG-B into Rx descriptor
              * so it's impossible to decode MCS. Also since

@@ -561,6 +561,10 @@ static int ath10k_sdio_mbox_rx_alloc(struct ath10k *ar,
                 le16_to_cpu(htc_hdr->len),
                 ATH10K_HTC_MBOX_MAX_PAYLOAD_LENGTH);
             ret = -ENOMEM;
+
+            queue_work(ar->workqueue, &ar->restart_work);
+            ath10k_warn(ar, "exceeds length, start recovery\n");
+
             goto err;
         }

@@ -332,10 +332,12 @@ static void p54p_tx(struct ieee80211_hw *dev, struct sk_buff *skb)
     struct p54p_desc *desc;
     dma_addr_t mapping;
     u32 idx, i;
+    __le32 device_addr;

     spin_lock_irqsave(&priv->lock, flags);
     idx = le32_to_cpu(ring_control->host_idx[1]);
     i = idx % ARRAY_SIZE(ring_control->tx_data);
+    device_addr = ((struct p54_hdr *)skb->data)->req_id;

     mapping = pci_map_single(priv->pdev, skb->data, skb->len,
                  PCI_DMA_TODEVICE);
@@ -349,7 +351,7 @@ static void p54p_tx(struct ieee80211_hw *dev, struct sk_buff *skb)

     desc = &ring_control->tx_data[i];
     desc->host_addr = cpu_to_le32(mapping);
-    desc->device_addr = ((struct p54_hdr *)skb->data)->req_id;
+    desc->device_addr = device_addr;
     desc->len = cpu_to_le16(skb->len);
     desc->flags = 0;

@@ -1545,7 +1545,6 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
         complete(&queue->cm_done);
         return 0;
     case RDMA_CM_EVENT_REJECTED:
-        nvme_rdma_destroy_queue_ib(queue);
         cm_error = nvme_rdma_conn_rejected(queue, ev);
         break;
     case RDMA_CM_EVENT_ROUTE_ERROR:

@@ -344,6 +344,7 @@ static int param_set_ac_online(const char *key, const struct kernel_param *kp)
 static int param_get_ac_online(char *buffer, const struct kernel_param *kp)
 {
     strcpy(buffer, map_get_key(map_ac_online, ac_online, "unknown"));
+    strcat(buffer, "\n");
     return strlen(buffer);
 }

@@ -357,6 +358,7 @@ static int param_set_usb_online(const char *key, const struct kernel_param *kp)
 static int param_get_usb_online(char *buffer, const struct kernel_param *kp)
 {
     strcpy(buffer, map_get_key(map_ac_online, usb_online, "unknown"));
+    strcat(buffer, "\n");
     return strlen(buffer);
 }

@@ -371,6 +373,7 @@ static int param_set_battery_status(const char *key,
 static int param_get_battery_status(char *buffer, const struct kernel_param *kp)
 {
     strcpy(buffer, map_get_key(map_status, battery_status, "unknown"));
+    strcat(buffer, "\n");
     return strlen(buffer);
 }

@@ -385,6 +388,7 @@ static int param_set_battery_health(const char *key,
 static int param_get_battery_health(char *buffer, const struct kernel_param *kp)
 {
     strcpy(buffer, map_get_key(map_health, battery_health, "unknown"));
+    strcat(buffer, "\n");
     return strlen(buffer);
 }

@@ -400,6 +404,7 @@ static int param_get_battery_present(char *buffer,
                      const struct kernel_param *kp)
 {
     strcpy(buffer, map_get_key(map_present, battery_present, "unknown"));
+    strcat(buffer, "\n");
     return strlen(buffer);
 }

@@ -417,6 +422,7 @@ static int param_get_battery_technology(char *buffer,
 {
     strcpy(buffer,
            map_get_key(map_technology, battery_technology, "unknown"));
+    strcat(buffer, "\n");
     return strlen(buffer);
 }

@@ -429,16 +429,26 @@ static int rx8010_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
         }
     }

-static struct rtc_class_ops rx8010_rtc_ops = {
+static const struct rtc_class_ops rx8010_rtc_ops_default = {
     .read_time = rx8010_get_time,
     .set_time = rx8010_set_time,
     .ioctl = rx8010_ioctl,
 };

+static const struct rtc_class_ops rx8010_rtc_ops_alarm = {
+    .read_time = rx8010_get_time,
+    .set_time = rx8010_set_time,
+    .ioctl = rx8010_ioctl,
+    .read_alarm = rx8010_read_alarm,
+    .set_alarm = rx8010_set_alarm,
+    .alarm_irq_enable = rx8010_alarm_irq_enable,
+};
+
 static int rx8010_probe(struct i2c_client *client,
             const struct i2c_device_id *id)
 {
     struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent);
+    const struct rtc_class_ops *rtc_ops;
     struct rx8010_data *rx8010;
     int err = 0;

@@ -469,16 +479,16 @@ static int rx8010_probe(struct i2c_client *client,

         if (err) {
             dev_err(&client->dev, "unable to request IRQ\n");
-            client->irq = 0;
-        } else {
-            rx8010_rtc_ops.read_alarm = rx8010_read_alarm;
-            rx8010_rtc_ops.set_alarm = rx8010_set_alarm;
-            rx8010_rtc_ops.alarm_irq_enable = rx8010_alarm_irq_enable;
+            return err;
         }
+
+        rtc_ops = &rx8010_rtc_ops_alarm;
+    } else {
+        rtc_ops = &rx8010_rtc_ops_default;
     }

     rx8010->rtc = devm_rtc_device_register(&client->dev, client->name,
-                           &rx8010_rtc_ops, THIS_MODULE);
+                           rtc_ops, THIS_MODULE);

     if (IS_ERR(rx8010->rtc)) {
         dev_err(&client->dev, "unable to register the class device\n");

@@ -1351,6 +1351,7 @@ static int cb_pcidas_auto_attach(struct comedi_device *dev,
     if (dev->irq && board->has_ao_fifo) {
         dev->write_subdev = s;
         s->subdev_flags |= SDF_CMD_WRITE;
+        s->len_chanlist = s->n_chan;
         s->do_cmdtest = cb_pcidas_ao_cmdtest;
         s->do_cmd = cb_pcidas_ao_cmd;
         s->cancel = cb_pcidas_ao_cancel;

@@ -166,7 +166,12 @@ error_destroy_mc_io:
  */
 void fsl_destroy_mc_io(struct fsl_mc_io *mc_io)
 {
-    struct fsl_mc_device *dpmcp_dev = mc_io->dpmcp_dev;
+    struct fsl_mc_device *dpmcp_dev;
+
+    if (!mc_io)
+        return;
+
+    dpmcp_dev = mc_io->dpmcp_dev;

     if (dpmcp_dev)
         fsl_mc_io_unset_dpmcp(mc_io);

@@ -155,12 +155,6 @@ int cvm_oct_phy_setup_device(struct net_device *dev)

     phy_node = of_parse_phandle(priv->of_node, "phy-handle", 0);
-    if (!phy_node && of_phy_is_fixed_link(priv->of_node)) {
-        int rc;
-
-        rc = of_phy_register_fixed_link(priv->of_node);
-        if (rc)
-            return rc;
-
-        phy_node = of_node_get(priv->of_node);
-    }
     if (!phy_node)

@@ -83,15 +83,17 @@ static inline int cvm_oct_check_rcv_error(cvmx_wqe_t *work)
     else
         port = work->word1.cn38xx.ipprt;

-    if ((work->word2.snoip.err_code == 10) && (work->word1.len <= 64)) {
+    if ((work->word2.snoip.err_code == 10) && (work->word1.len <= 64))
         /*
          * Ignore length errors on min size packets. Some
          * equipment incorrectly pads packets to 64+4FCS
          * instead of 60+4FCS. Note these packets still get
          * counted as frame errors.
          */
-    } else if (work->word2.snoip.err_code == 5 ||
-           work->word2.snoip.err_code == 7) {
+        return 0;
+
+    if (work->word2.snoip.err_code == 5 ||
+        work->word2.snoip.err_code == 7) {
         /*
          * We received a packet with either an alignment error
          * or a FCS error. This may be signalling that we are
@@ -122,7 +124,10 @@ static inline int cvm_oct_check_rcv_error(cvmx_wqe_t *work)
                 /* Port received 0xd5 preamble */
                 work->packet_ptr.s.addr += i + 1;
                 work->word1.len -= i + 5;
-            } else if ((*ptr & 0xf) == 0xd) {
+                return 0;
+            }
+
+            if ((*ptr & 0xf) == 0xd) {
                 /* Port received 0xd preamble */
                 work->packet_ptr.s.addr += i;
                 work->word1.len -= i + 4;
@@ -132,21 +137,20 @@ static inline int cvm_oct_check_rcv_error(cvmx_wqe_t *work)
                         ((*(ptr + 1) & 0xf) << 4);
                     ptr++;
                 }
-            } else {
-                printk_ratelimited("Port %d unknown preamble, packet dropped\n",
-                           port);
-                cvm_oct_free_work(work);
-                return 1;
+                return 0;
             }
+
+            printk_ratelimited("Port %d unknown preamble, packet dropped\n",
+                       port);
+            cvm_oct_free_work(work);
+            return 1;
         }
-    } else {
-        printk_ratelimited("Port %d receive error code %d, packet dropped\n",
-                   port, work->word2.snoip.err_code);
-        cvm_oct_free_work(work);
-        return 1;
     }

-    return 0;
+    printk_ratelimited("Port %d receive error code %d, packet dropped\n",
+               port, work->word2.snoip.err_code);
+    cvm_oct_free_work(work);
+    return 1;
 }

 static void copy_segments_to_skb(cvmx_wqe_t *work, struct sk_buff *skb)

@@ -16,6 +16,7 @@
 #include <linux/phy.h>
 #include <linux/slab.h>
 #include <linux/interrupt.h>
+#include <linux/of_mdio.h>
 #include <linux/of_net.h>
 #include <linux/if_ether.h>
 #include <linux/if_vlan.h>
@@ -878,6 +879,14 @@ static int cvm_oct_probe(struct platform_device *pdev)
             break;
         }

+        if (priv->of_node && of_phy_is_fixed_link(priv->of_node)) {
+            if (of_phy_register_fixed_link(priv->of_node)) {
+                netdev_err(dev, "Failed to register fixed link for interface %d, port %d\n",
+                       interface, priv->port);
+                dev->netdev_ops = NULL;
+            }
+        }
+
         if (!dev->netdev_ops) {
             free_netdev(dev);
         } else if (register_netdev(dev) < 0) {

@@ -280,6 +280,7 @@ static inline unsigned int rdo_max_power(u32 rdo)
 #define PD_T_ERROR_RECOVERY 100 /* minimum 25 is insufficient */
 #define PD_T_SRCSWAPSTDBY 625 /* Maximum of 650ms */
 #define PD_T_NEWSRC 250 /* Maximum of 275ms */
+#define PD_T_SWAP_SRC_START 20 /* Minimum of 20ms */

 #define PD_T_DRP_TRY 100 /* 75 - 150 ms */
 #define PD_T_DRP_TRYWAIT 600 /* 400 - 800 ms */

@@ -2741,7 +2741,7 @@ static void run_state_machine(struct tcpm_port *port)
          */
         tcpm_set_pwr_role(port, TYPEC_SOURCE);
         tcpm_pd_send_control(port, PD_CTRL_PS_RDY);
-        tcpm_set_state(port, SRC_STARTUP, 0);
+        tcpm_set_state(port, SRC_STARTUP, PD_T_SWAP_SRC_START);
         break;

     case VCONN_SWAP_ACCEPT:

@@ -713,8 +713,13 @@ static void k_fn(struct vc_data *vc, unsigned char value, char up_flag)
         return;

     if ((unsigned)value < ARRAY_SIZE(func_table)) {
+        unsigned long flags;
+
+        spin_lock_irqsave(&func_buf_lock, flags);
         if (func_table[value])
             puts_queue(vc, func_table[value]);
+        spin_unlock_irqrestore(&func_buf_lock, flags);
+
     } else
         pr_err("k_fn called with value=%d\n", value);
 }
@@ -1959,13 +1964,11 @@ out:
 #undef s
 #undef v

-/* FIXME: This one needs untangling and locking */
+/* FIXME: This one needs untangling */
 int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
 {
     struct kbsentry *kbs;
-    char *p;
     u_char *q;
-    u_char __user *up;
     int sz, fnw_sz;
     int delta;
     char *first_free, *fj, *fnw;
@@ -1991,23 +1994,19 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
     i = kbs->kb_func;

     switch (cmd) {
-    case KDGKBSENT:
-        sz = sizeof(kbs->kb_string) - 1; /* sz should have been
-                            a struct member */
-        up = user_kdgkb->kb_string;
-        p = func_table[i];
-        if(p)
-            for ( ; *p && sz; p++, sz--)
-                if (put_user(*p, up++)) {
-                    ret = -EFAULT;
-                    goto reterr;
-                }
-        if (put_user('\0', up)) {
-            ret = -EFAULT;
-            goto reterr;
-        }
-        kfree(kbs);
-        return ((p && *p) ? -EOVERFLOW : 0);
+    case KDGKBSENT: {
+        /* size should have been a struct member */
+        ssize_t len = sizeof(user_kdgkb->kb_string);
+
+        spin_lock_irqsave(&func_buf_lock, flags);
+        len = strlcpy(kbs->kb_string, func_table[i] ? : "", len);
+        spin_unlock_irqrestore(&func_buf_lock, flags);
+
+        ret = copy_to_user(user_kdgkb->kb_string, kbs->kb_string,
+                len + 1) ? -EFAULT : 0;
+
+        goto reterr;
+    }
     case KDSKBSENT:
         if (!perm) {
             ret = -EPERM;

@@ -244,7 +244,7 @@ int vt_waitactive(int n)

 static inline int
-do_fontx_ioctl(int cmd, struct consolefontdesc __user *user_cfd, int perm, struct console_font_op *op)
+do_fontx_ioctl(struct vc_data *vc, int cmd, struct consolefontdesc __user *user_cfd, int perm, struct console_font_op *op)
 {
 	struct consolefontdesc cfdarg;
 	int i;
@@ -262,15 +262,16 @@ do_fontx_ioctl(int cmd, struct consolefontdesc __user *user_cfd, int perm, struc
 		op->height = cfdarg.charheight;
 		op->charcount = cfdarg.charcount;
 		op->data = cfdarg.chardata;
-		return con_font_op(vc_cons[fg_console].d, op);
-	case GIO_FONTX: {
+		return con_font_op(vc, op);
+
+	case GIO_FONTX:
 		op->op = KD_FONT_OP_GET;
 		op->flags = KD_FONT_FLAG_OLD;
 		op->width = 8;
 		op->height = cfdarg.charheight;
 		op->charcount = cfdarg.charcount;
 		op->data = cfdarg.chardata;
-		i = con_font_op(vc_cons[fg_console].d, op);
+		i = con_font_op(vc, op);
 		if (i)
 			return i;
 		cfdarg.charheight = op->height;
@@ -278,7 +279,6 @@ do_fontx_ioctl(int cmd, struct consolefontdesc __user *user_cfd, int perm, struc
 		if (copy_to_user(user_cfd, &cfdarg, sizeof(struct consolefontdesc)))
 			return -EFAULT;
 		return 0;
-	}
 	}
 	return -EINVAL;
 }
@@ -924,7 +924,7 @@ int vt_ioctl(struct tty_struct *tty,
 		op.height = 0;
 		op.charcount = 256;
 		op.data = up;
-		ret = con_font_op(vc_cons[fg_console].d, &op);
+		ret = con_font_op(vc, &op);
 		break;
 	}

@@ -935,7 +935,7 @@ int vt_ioctl(struct tty_struct *tty,
 		op.height = 32;
 		op.charcount = 256;
 		op.data = up;
-		ret = con_font_op(vc_cons[fg_console].d, &op);
+		ret = con_font_op(vc, &op);
 		break;
 	}

@@ -952,7 +952,7 @@ int vt_ioctl(struct tty_struct *tty,

 	case PIO_FONTX:
 	case GIO_FONTX:
-		ret = do_fontx_ioctl(cmd, up, perm, &op);
+		ret = do_fontx_ioctl(vc, cmd, up, perm, &op);
 		break;

 	case PIO_FONTRESET:
@@ -969,11 +969,11 @@ int vt_ioctl(struct tty_struct *tty,
 	{
 		op.op = KD_FONT_OP_SET_DEFAULT;
 		op.data = NULL;
-		ret = con_font_op(vc_cons[fg_console].d, &op);
+		ret = con_font_op(vc, &op);
 		if (ret)
 			break;
 		console_lock();
-		con_set_default_unimap(vc_cons[fg_console].d);
+		con_set_default_unimap(vc);
 		console_unlock();
 		break;
 	}
@@ -1100,8 +1100,9 @@ struct compat_consolefontdesc {
 };

 static inline int
-compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd,
-		   int perm, struct console_font_op *op)
+compat_fontx_ioctl(struct vc_data *vc, int cmd,
+		   struct compat_consolefontdesc __user *user_cfd,
+		   int perm, struct console_font_op *op)
 {
 	struct compat_consolefontdesc cfdarg;
 	int i;
@@ -1119,7 +1120,8 @@ compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd,
 		op->height = cfdarg.charheight;
 		op->charcount = cfdarg.charcount;
 		op->data = compat_ptr(cfdarg.chardata);
-		return con_font_op(vc_cons[fg_console].d, op);
+		return con_font_op(vc, op);
+
 	case GIO_FONTX:
 		op->op = KD_FONT_OP_GET;
 		op->flags = KD_FONT_FLAG_OLD;
@@ -1127,7 +1129,7 @@ compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd,
 		op->height = cfdarg.charheight;
 		op->charcount = cfdarg.charcount;
 		op->data = compat_ptr(cfdarg.chardata);
-		i = con_font_op(vc_cons[fg_console].d, op);
+		i = con_font_op(vc, op);
 		if (i)
 			return i;
 		cfdarg.charheight = op->height;
@@ -1218,7 +1220,7 @@ long vt_compat_ioctl(struct tty_struct *tty,
 	 */
 	case PIO_FONTX:
 	case GIO_FONTX:
-		ret = compat_fontx_ioctl(cmd, up, perm, &op);
+		ret = compat_fontx_ioctl(vc, cmd, up, perm, &op);
 		break;

 	case KDFONTOP:
@@ -1009,8 +1009,6 @@ void uio_unregister_device(struct uio_info *info)

 	idev = info->uio_dev;

-	uio_free_minor(idev);
-
 	mutex_lock(&idev->info_lock);
 	uio_dev_del_attributes(idev);

@@ -1022,6 +1020,8 @@ void uio_unregister_device(struct uio_info *info)

 	device_unregister(&idev->dev);

+	uio_free_minor(idev);
+
 	return;
 }
 EXPORT_SYMBOL_GPL(uio_unregister_device);
@@ -520,6 +520,7 @@ static void acm_read_bulk_callback(struct urb *urb)
 			"%s - cooling babbling device\n", __func__);
 		usb_mark_last_busy(acm->dev);
+		set_bit(rb->index, &acm->urbs_in_error_delay);
 		set_bit(ACM_ERROR_DELAY, &acm->flags);
 		cooldown = true;
 		break;
 	default:
@@ -545,7 +546,7 @@ static void acm_read_bulk_callback(struct urb *urb)

 	if (stopped || stalled || cooldown) {
 		if (stalled)
-			schedule_work(&acm->work);
+			schedule_delayed_work(&acm->dwork, 0);
 		else if (cooldown)
 			schedule_delayed_work(&acm->dwork, HZ / 2);
 		return;
@@ -580,13 +581,13 @@ static void acm_write_bulk(struct urb *urb)
 	acm_write_done(acm, wb);
 	spin_unlock_irqrestore(&acm->write_lock, flags);
 	set_bit(EVENT_TTY_WAKEUP, &acm->flags);
-	schedule_work(&acm->work);
+	schedule_delayed_work(&acm->dwork, 0);
 }

 static void acm_softint(struct work_struct *work)
 {
 	int i;
-	struct acm *acm = container_of(work, struct acm, work);
+	struct acm *acm = container_of(work, struct acm, dwork.work);

 	if (test_bit(EVENT_RX_STALL, &acm->flags)) {
 		smp_mb(); /* against acm_suspend() */
@@ -602,7 +603,7 @@ static void acm_softint(struct work_struct *work)
 	if (test_and_clear_bit(ACM_ERROR_DELAY, &acm->flags)) {
 		for (i = 0; i < acm->rx_buflimit; i++)
 			if (test_and_clear_bit(i, &acm->urbs_in_error_delay))
-				acm_submit_read_urb(acm, i, GFP_NOIO);
+				acm_submit_read_urb(acm, i, GFP_KERNEL);
 	}

 	if (test_and_clear_bit(EVENT_TTY_WAKEUP, &acm->flags))
@@ -1405,7 +1406,6 @@ made_compressed_probe:
 	acm->ctrlsize = ctrlsize;
 	acm->readsize = readsize;
 	acm->rx_buflimit = num_rx_buf;
-	INIT_WORK(&acm->work, acm_softint);
+	INIT_DELAYED_WORK(&acm->dwork, acm_softint);
 	init_waitqueue_head(&acm->wioctl);
 	spin_lock_init(&acm->write_lock);
@@ -1619,7 +1619,6 @@ static void acm_disconnect(struct usb_interface *intf)
 	}

 	acm_kill_urbs(acm);
-	cancel_work_sync(&acm->work);
+	cancel_delayed_work_sync(&acm->dwork);

 	tty_unregister_device(acm_tty_driver, acm->minor);
@@ -1662,7 +1661,6 @@ static int acm_suspend(struct usb_interface *intf, pm_message_t message)
 		return 0;

 	acm_kill_urbs(acm);
-	cancel_work_sync(&acm->work);
+	cancel_delayed_work_sync(&acm->dwork);
 	acm->urbs_in_error_delay = 0;

@@ -111,8 +111,7 @@ struct acm {
 #	define ACM_ERROR_DELAY	3
 	unsigned long urbs_in_error_delay;	/* these need to be restarted after a delay */
 	struct usb_cdc_line_coding line;	/* bits, stop, parity */
-	struct work_struct work;		/* work queue entry for various purposes*/
-	struct delayed_work dwork;		/* for cool downs needed in error recovery */
+	struct delayed_work dwork;		/* work queue entry for various purposes */
 	unsigned int ctrlin;			/* input control lines (DCD, DSR, RI, break, overruns) */
 	unsigned int ctrlout;			/* output control lines (DTR, RTS) */
 	struct async_icount iocount;		/* counters for control line changes */
@@ -1287,6 +1287,17 @@ static int dwc3_probe(struct platform_device *pdev)

 err5:
 	dwc3_event_buffers_cleanup(dwc);
+
+	usb_phy_shutdown(dwc->usb2_phy);
+	usb_phy_shutdown(dwc->usb3_phy);
+	phy_exit(dwc->usb2_generic_phy);
+	phy_exit(dwc->usb3_generic_phy);
+
+	usb_phy_set_suspend(dwc->usb2_phy, 1);
+	usb_phy_set_suspend(dwc->usb3_phy, 1);
+	phy_power_off(dwc->usb2_generic_phy);
+	phy_power_off(dwc->usb3_generic_phy);
+
+	dwc3_ulpi_exit(dwc);

 err4:
@@ -1332,9 +1343,9 @@ static int dwc3_remove(struct platform_device *pdev)
 	dwc3_core_exit(dwc);
 	dwc3_ulpi_exit(dwc);

-	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_allow(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_noidle(&pdev->dev);
+	pm_runtime_set_suspended(&pdev->dev);

 	dwc3_free_event_buffers(dwc);
 	dwc3_free_scratch_buffers(dwc);
@@ -967,12 +967,16 @@ static void dwc3_ep0_xfer_complete(struct dwc3 *dwc,
 static void __dwc3_ep0_do_control_data(struct dwc3 *dwc,
 		struct dwc3_ep *dep, struct dwc3_request *req)
 {
+	unsigned int		trb_length = 0;
 	int			ret;

 	req->direction = !!dep->number;

 	if (req->request.length == 0) {
-		dwc3_ep0_prepare_one_trb(dep, dwc->ep0_trb_addr, 0,
+		if (!req->direction)
+			trb_length = dep->endpoint.maxpacket;
+
+		dwc3_ep0_prepare_one_trb(dep, dwc->bounce_addr, trb_length,
 				DWC3_TRBCTL_CONTROL_DATA, false);
 		ret = dwc3_ep0_start_trans(dep);
 	} else if (!IS_ALIGNED(req->request.length, dep->endpoint.maxpacket)
@@ -1024,9 +1028,12 @@ static void __dwc3_ep0_do_control_data(struct dwc3 *dwc,

 		req->trb = &dwc->ep0_trb[dep->trb_enqueue - 1];

+		if (!req->direction)
+			trb_length = dep->endpoint.maxpacket;
+
 		/* Now prepare one extra TRB to align transfer size */
 		dwc3_ep0_prepare_one_trb(dep, dwc->bounce_addr,
-					 0, DWC3_TRBCTL_CONTROL_DATA,
+					 trb_length, DWC3_TRBCTL_CONTROL_DATA,
 					 false);
 		ret = dwc3_ep0_start_trans(dep);
 	} else {
@@ -98,10 +98,13 @@ static struct platform_device *fsl_usb2_device_register(

 	pdev->dev.coherent_dma_mask = ofdev->dev.coherent_dma_mask;

-	if (!pdev->dev.dma_mask)
+	if (!pdev->dev.dma_mask) {
 		pdev->dev.dma_mask = &ofdev->dev.coherent_dma_mask;
-	else
-		dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+	} else {
+		retval = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+		if (retval)
+			goto error;
+	}

 	retval = platform_device_add_data(pdev, pdata, sizeof(*pdata));
 	if (retval)
@@ -209,6 +209,7 @@ static void adu_interrupt_out_callback(struct urb *urb)

 	if (status != 0) {
 		if ((status != -ENOENT) &&
+		    (status != -ESHUTDOWN) &&
 		    (status != -ECONNRESET)) {
 			dev_dbg(&dev->udev->dev,
 				"%s :nonzero status received: %d\n", __func__,
@@ -273,13 +273,14 @@ __vringh_iov(struct vringh *vrh, u16 i,
 	desc_max = vrh->vring.num;
 	up_next = -1;

+	/* You must want something! */
+	if (WARN_ON(!riov && !wiov))
+		return -EINVAL;
+
 	if (riov)
 		riov->i = riov->used = 0;
-	else if (wiov)
+	if (wiov)
 		wiov->i = wiov->used = 0;
-	else
-		/* You must want something! */
-		BUG();

 	for (;;) {
 		void *addr;
@@ -1029,6 +1029,8 @@ static int __init pvr2fb_setup(char *options)
 	if (!options || !*options)
 		return 0;

+	cable_arg[0] = output_arg[0] = 0;
+
 	while ((this_opt = strsep(&options, ","))) {
 		if (!*this_opt)
 			continue;
@@ -15,7 +15,7 @@
 #include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/io.h>
-#include <linux/jiffies.h>
+#include <linux/ktime.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>

@@ -47,12 +47,12 @@ struct mxc_w1_device {
 static u8 mxc_w1_ds2_reset_bus(void *data)
 {
 	struct mxc_w1_device *dev = data;
-	unsigned long timeout;
+	ktime_t timeout;

 	writeb(MXC_W1_CONTROL_RPP, dev->regs + MXC_W1_CONTROL);

 	/* Wait for reset sequence 511+512us, use 1500us for sure */
-	timeout = jiffies + usecs_to_jiffies(1500);
+	timeout = ktime_add_us(ktime_get(), 1500);

 	udelay(511 + 512);

@@ -62,7 +62,7 @@ static u8 mxc_w1_ds2_reset_bus(void *data)
 		/* PST bit is valid after the RPP bit is self-cleared */
 		if (!(ctrl & MXC_W1_CONTROL_RPP))
 			return !(ctrl & MXC_W1_CONTROL_PST);
-	} while (time_is_after_jiffies(timeout));
+	} while (ktime_before(ktime_get(), timeout));

 	return 1;
 }
@@ -75,12 +75,12 @@ static u8 mxc_w1_ds2_reset_bus(void *data)
 static u8 mxc_w1_ds2_touch_bit(void *data, u8 bit)
 {
 	struct mxc_w1_device *dev = data;
-	unsigned long timeout;
+	ktime_t timeout;

 	writeb(MXC_W1_CONTROL_WR(bit), dev->regs + MXC_W1_CONTROL);

 	/* Wait for read/write bit (60us, Max 120us), use 200us for sure */
-	timeout = jiffies + usecs_to_jiffies(200);
+	timeout = ktime_add_us(ktime_get(), 200);

 	udelay(60);

@@ -90,7 +90,7 @@ static u8 mxc_w1_ds2_touch_bit(void *data, u8 bit)
 		/* RDST bit is valid after the WR1/RD bit is self-cleared */
 		if (!(ctrl & MXC_W1_CONTROL_WR(bit)))
 			return !!(ctrl & MXC_W1_CONTROL_RDST);
-	} while (time_is_after_jiffies(timeout));
+	} while (ktime_before(ktime_get(), timeout));

 	return 0;
 }
@@ -244,6 +244,8 @@ static int rdc321x_wdt_probe(struct platform_device *pdev)

 	rdc321x_wdt_device.sb_pdev = pdata->sb_pdev;
 	rdc321x_wdt_device.base_reg = r->start;
+	rdc321x_wdt_device.queue = 0;
+	rdc321x_wdt_device.default_ticks = ticks;

 	err = misc_register(&rdc321x_wdt_misc);
 	if (err < 0) {
@@ -258,14 +260,11 @@ static int rdc321x_wdt_probe(struct platform_device *pdev)
 			rdc321x_wdt_device.base_reg, RDC_WDT_RST);

 	init_completion(&rdc321x_wdt_device.stop);
-	rdc321x_wdt_device.queue = 0;

 	clear_bit(0, &rdc321x_wdt_device.inuse);

 	setup_timer(&rdc321x_wdt_device.timer, rdc321x_wdt_trigger, 0);

-	rdc321x_wdt_device.default_ticks = ticks;
-
 	dev_info(&pdev->dev, "watchdog init success\n");

 	return 0;
@@ -624,9 +624,9 @@ static void v9fs_mmap_vm_close(struct vm_area_struct *vma)
 	struct writeback_control wbc = {
 		.nr_to_write = LONG_MAX,
 		.sync_mode = WB_SYNC_ALL,
-		.range_start = vma->vm_pgoff * PAGE_SIZE,
+		.range_start = (loff_t)vma->vm_pgoff * PAGE_SIZE,
 		/* absolute end, byte at end included */
-		.range_end = vma->vm_pgoff * PAGE_SIZE +
+		.range_end = (loff_t)vma->vm_pgoff * PAGE_SIZE +
 			(vma->vm_end - vma->vm_start - 1),
 	};
@@ -1130,6 +1130,8 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,

 		ret = update_ref_for_cow(trans, root, buf, cow, &last_ref);
 		if (ret) {
+			btrfs_tree_unlock(cow);
+			free_extent_buffer(cow);
 			btrfs_abort_transaction(trans, ret);
 			return ret;
 		}
@@ -1137,6 +1139,8 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
 	if (test_bit(BTRFS_ROOT_REF_COWS, &root->state)) {
 		ret = btrfs_reloc_cow_block(trans, root, buf, cow);
 		if (ret) {
+			btrfs_tree_unlock(cow);
+			free_extent_buffer(cow);
 			btrfs_abort_transaction(trans, ret);
 			return ret;
 		}
@@ -1168,6 +1172,8 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
 	if (last_ref) {
 		ret = tree_mod_log_free_eb(fs_info, buf);
 		if (ret) {
+			btrfs_tree_unlock(cow);
+			free_extent_buffer(cow);
 			btrfs_abort_transaction(trans, ret);
 			return ret;
 		}

@@ -456,6 +456,8 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
 		}
 		have_zone = 1;
 	}
+	if (!have_zone)
+		radix_tree_delete(&fs_info->reada_tree, index);
 	spin_unlock(&fs_info->reada_lock);
 	btrfs_dev_replace_unlock(&fs_info->dev_replace, 0);
@@ -3820,6 +3820,72 @@ static int update_ref_path(struct send_ctx *sctx, struct recorded_ref *ref)
 	return 0;
 }

+/*
+ * When processing the new references for an inode we may orphanize an existing
+ * directory inode because its old name conflicts with one of the new references
+ * of the current inode. Later, when processing another new reference of our
+ * inode, we might need to orphanize another inode, but the path we have in the
+ * reference reflects the pre-orphanization name of the directory we previously
+ * orphanized. For example:
+ *
+ * parent snapshot looks like:
+ *
+ * .                         (ino 256)
+ * |----- f1                 (ino 257)
+ * |----- f2                 (ino 258)
+ * |----- d1/                (ino 259)
+ * |----- d2/                (ino 260)
+ *
+ * send snapshot looks like:
+ *
+ * .                         (ino 256)
+ * |----- d1                 (ino 258)
+ * |----- f2/                (ino 259)
+ * |----- f2_link/           (ino 260)
+ * |       |----- f1         (ino 257)
+ * |
+ * |----- d2                 (ino 258)
+ *
+ * When processing inode 257 we compute the name for inode 259 as "d1", and we
+ * cache it in the name cache. Later when we start processing inode 258, when
+ * collecting all its new references we set a full path of "d1/d2" for its new
+ * reference with name "d2". When we start processing the new references we
+ * start by processing the new reference with name "d1", and this results in
+ * orphanizing inode 259, since its old reference causes a conflict. Then we
+ * move on the next new reference, with name "d2", and we find out we must
+ * orphanize inode 260, as its old reference conflicts with ours - but for the
+ * orphanization we use a source path corresponding to the path we stored in the
+ * new reference, which is "d1/d2" and not "o259-6-0/d2" - this makes the
+ * receiver fail since the path component "d1/" no longer exists, it was renamed
+ * to "o259-6-0/" when processing the previous new reference. So in this case we
+ * must recompute the path in the new reference and use it for the new
+ * orphanization operation.
+ */
+static int refresh_ref_path(struct send_ctx *sctx, struct recorded_ref *ref)
+{
+	char *name;
+	int ret;
+
+	name = kmemdup(ref->name, ref->name_len, GFP_KERNEL);
+	if (!name)
+		return -ENOMEM;
+
+	fs_path_reset(ref->full_path);
+	ret = get_cur_path(sctx, ref->dir, ref->dir_gen, ref->full_path);
+	if (ret < 0)
+		goto out;
+
+	ret = fs_path_add(ref->full_path, name, ref->name_len);
+	if (ret < 0)
+		goto out;
+
+	/* Update the reference's base name pointer. */
+	set_ref_path(ref, ref->full_path);
+out:
+	kfree(name);
+	return ret;
+}
+
 /*
  * This does all the move/link/unlink/rmdir magic.
  */
@@ -3950,6 +4016,12 @@ static int process_recorded_refs(struct send_ctx *sctx, int *pending_move)
 			struct name_cache_entry *nce;
 			struct waiting_dir_move *wdm;

+			if (orphanized_dir) {
+				ret = refresh_ref_path(sctx, cur);
+				if (ret < 0)
+					goto out;
+			}
+
 			ret = orphanize_inode(sctx, ow_inode, ow_gen,
 					cur->full_path);
 			if (ret < 0)
@@ -6629,7 +6701,7 @@ long btrfs_ioctl_send(struct file *mnt_file, void __user *arg_)

 	alloc_size = sizeof(struct clone_root) * (arg->clone_sources_count + 1);

-	sctx->clone_roots = kzalloc(alloc_size, GFP_KERNEL);
+	sctx->clone_roots = kvzalloc(alloc_size, GFP_KERNEL);
 	if (!sctx->clone_roots) {
 		ret = -ENOMEM;
 		goto out;
@@ -3478,6 +3478,7 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans,
 	 * search and this search we'll not find the key again and can just
 	 * bail.
 	 */
+search:
 	ret = btrfs_search_slot(NULL, root, &min_key, path, 0, 0);
 	if (ret != 0)
 		goto done;
@@ -3497,6 +3498,13 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans,

 	if (min_key.objectid != ino || min_key.type != key_type)
 		goto done;

+	if (need_resched()) {
+		btrfs_release_path(path);
+		cond_resched();
+		goto search;
+	}
+
 	ret = overwrite_item(trans, log, dst_path, src, i,
 			     &min_key);
 	if (ret) {

 16	fs/buffer.c
@@ -2800,16 +2800,6 @@ int nobh_writepage(struct page *page, get_block_t *get_block,
 	/* Is the page fully outside i_size? (truncate in progress) */
 	offset = i_size & (PAGE_SIZE-1);
 	if (page->index >= end_index+1 || !offset) {
-		/*
-		 * The page may have dirty, unmapped buffers. For example,
-		 * they may have been added in ext3_writepage(). Make them
-		 * freeable here, so the page does not leak.
-		 */
-#if 0
-		/* Not really sure about this - do we need this ? */
-		if (page->mapping->a_ops->invalidatepage)
-			page->mapping->a_ops->invalidatepage(page, offset);
-#endif
 		unlock_page(page);
 		return 0; /* don't care */
 	}
@@ -3004,12 +2994,6 @@ int block_write_full_page(struct page *page, get_block_t *get_block,
 	/* Is the page fully outside i_size? (truncate in progress) */
 	offset = i_size & (PAGE_SIZE-1);
 	if (page->index >= end_index+1 || !offset) {
-		/*
-		 * The page may have dirty, unmapped buffers. For example,
-		 * they may have been added in ext3_writepage(). Make them
-		 * freeable here, so the page does not leak.
-		 */
-		do_invalidatepage(page, 0, PAGE_SIZE);
 		unlock_page(page);
 		return 0; /* don't care */
 	}
@@ -125,7 +125,7 @@ static int cachefiles_read_reissue(struct cachefiles_object *object,
 		_debug("reissue read");
 		ret = bmapping->a_ops->readpage(NULL, backpage);
 		if (ret < 0)
-			goto unlock_discard;
+			goto discard;
 	}

 	/* but the page may have been read before the monitor was installed, so
@@ -142,6 +142,7 @@ static int cachefiles_read_reissue(struct cachefiles_object *object,

 unlock_discard:
 	unlock_page(backpage);
+discard:
 	spin_lock_irq(&object->work_lock);
 	list_del(&monitor->op_link);
 	spin_unlock_irq(&object->work_lock);
@@ -1427,7 +1427,7 @@ static int ceph_filemap_fault(struct vm_fault *vmf)
 	struct ceph_inode_info *ci = ceph_inode(inode);
 	struct ceph_file_info *fi = vma->vm_file->private_data;
 	struct page *pinned_page = NULL;
-	loff_t off = vmf->pgoff << PAGE_SHIFT;
+	loff_t off = (loff_t)vmf->pgoff << PAGE_SHIFT;
 	int want, got, ret;
 	sigset_t oldset;
Some files were not shown because too many files have changed in this diff.