Merge 5.15.49 into android13-5.15-lts
Changes in 5.15.49
	Revert "drm/amd/display: Fix DCN3 B0 DP Alt Mapping"
	nfsd: Replace use of rwsem with errseq_t
	arm64: dts: imx8mm-beacon: Enable RTS-CTS on UART3
	arm64: dts: imx8mn-beacon: Enable RTS-CTS on UART3
	powerpc/kasan: Silence KASAN warnings in __get_wchan()
	ASoC: nau8822: Add operation for internal PLL off and on
	drm/amd/display: Read Golden Settings Table from VBIOS
	drm/amdkfd: Use mmget_not_zero in MMU notifier
	dma-debug: make things less spammy under memory pressure
	ASoC: cs42l52: Fix TLV scales for mixer controls
	ASoC: cs35l36: Update digital volume TLV
	ASoC: cs53l30: Correct number of volume levels on SX controls
	ASoC: cs42l52: Correct TLV for Bypass Volume
	ASoC: cs42l56: Correct typo in minimum level for SX volume controls
	ASoC: cs42l51: Correct minimum value for SX volume control
	drm/amdkfd: add pinned BOs to kfd_bo_list
	ata: libata-core: fix NULL pointer deref in ata_host_alloc_pinfo()
	quota: Prevent memory allocation recursion while holding dq_lock
	ASoC: wm8962: Fix suspend while playing music
	ASoC: es8328: Fix event generation for deemphasis control
	ASoC: wm_adsp: Fix event generation for wm_adsp_fw_put()
	Input: soc_button_array - also add Lenovo Yoga Tablet2 1051F to dmi_use_low_level_irq
	scsi: vmw_pvscsi: Expand vcpuHint to 16 bits
	scsi: lpfc: Resolve NULL ptr dereference after an ELS LOGO is aborted
	scsi: lpfc: Fix port stuck in bypassed state after LIP in PT2PT topology
	scsi: lpfc: Allow reduced polling rate for nvme_admin_async_event cmd completion
	scsi: mpt3sas: Fix out-of-bounds compiler warning
	scsi: ipr: Fix missing/incorrect resource cleanup in error case
	scsi: pmcraid: Fix missing resource cleanup in error case
	ALSA: hda/realtek - Add HW8326 support
	virtio-mmio: fix missing put_device() when vm_cmdline_parent registration failed
	nfc: nfcmrvl: Fix memory leak in nfcmrvl_play_deferred
	ipv6: Fix signed integer overflow in l2tp_ip6_sendmsg
	net: ethernet: mtk_eth_soc: fix misuse of mem alloc interface netdev[napi]_alloc_frag
	gcc-12: disable '-Wdangling-pointer' warning for now
	mellanox: mlx5: avoid uninitialized variable warning with gcc-12
	MIPS: Loongson-3: fix compile mips cpu_hwmon as module build error.
	random: credit cpu and bootloader seeds by default
	gpio: dwapb: Don't print error on -EPROBE_DEFER
	platform/x86: gigabyte-wmi: Add Z690M AORUS ELITE AX DDR4 support
	platform/x86: gigabyte-wmi: Add support for B450M DS3H-CF
	platform/x86/intel: hid: Add Surface Go to VGBS allow list
	staging: r8188eu: fix rtw_alloc_hwxmits error detection for now
	staging: r8188eu: Use zeroing allocator in wpa_set_encryption()
	staging: r8188eu: Fix warning of array overflow in ioctl_linux.c
	pNFS: Don't keep retrying if the server replied NFS4ERR_LAYOUTUNAVAILABLE
	pNFS: Avoid a live lock condition in pnfs_update_layout()
	sunrpc: set cl_max_connect when cloning an rpc_clnt
	clocksource: hyper-v: unexport __init-annotated hv_init_clocksource()
	i40e: Fix adding ADQ filter to TC0
	i40e: Fix calculating the number of queue pairs
	i40e: Fix call trace in setup_tx_descriptors
	Drivers: hv: vmbus: Release cpu lock in error case
	tty: goldfish: Fix free_irq() on remove
	misc: atmel-ssc: Fix IRQ check in ssc_probe
	io_uring: fix races with file table unregister
	io_uring: fix races with buffer table unregister
	drm/i915/reset: Fix error_state_read ptr + offset use
	net: hns3: split function hclge_update_port_base_vlan_cfg()
	net: hns3: set port base vlan tbl_sta to false before removing old vlan
	net: hns3: don't push link state to VF if unalive
	net: hns3: fix tm port shapping of fibre port is incorrect after driver initialization
	nvme: add device name to warning in uuid_show()
	mlxsw: spectrum_cnt: Reorder counter pools
	net: bgmac: Fix an erroneous kfree() in bgmac_remove()
	net: ax25: Fix deadlock caused by skb_recv_datagram in ax25_recvmsg
	arm64: ftrace: fix branch range checks
	arm64: ftrace: consistently handle PLTs.
	certs/blacklist_hashes.c: fix const confusion in certs blacklist
	init: Initialize noop_backing_dev_info early
	block: Fix handling of offline queues in blk_mq_alloc_request_hctx()
	faddr2line: Fix overlapping text section failures, the sequel
	i2c: npcm7xx: Add check for platform_driver_register
	irqchip/gic/realview: Fix refcount leak in realview_gic_of_init
	irqchip/gic-v3: Fix error handling in gic_populate_ppi_partitions
	irqchip/gic-v3: Fix refcount leak in gic_populate_ppi_partitions
	irqchip/realtek-rtl: Fix refcount leak in map_interrupts
	sched: Fix balance_push() vs __sched_setscheduler()
	i2c: designware: Use standard optional ref clock implementation
	mei: hbm: drop capability response on early shutdown
	mei: me: add raptor lake point S DID
	comedi: vmk80xx: fix expression for tx buffer size
	crypto: memneq - move into lib/
	USB: serial: option: add support for Cinterion MV31 with new baseline
	USB: serial: io_ti: add Agilent E5805A support
	usb: dwc2: Fix memory leak in dwc2_hcd_init
	usb: cdnsp: Fixed setting last_trb incorrectly
	usb: gadget: lpc32xx_udc: Fix refcount leak in lpc32xx_udc_probe
	usb: gadget: f_fs: change ep->status safe in ffs_epfile_io()
	usb: gadget: f_fs: change ep->ep safe in ffs_epfile_io()
	tty: n_gsm: Debug output allocation must use GFP_ATOMIC
	serial: 8250: Store to lsr_save_flags after lsr read
	bus: fsl-mc-bus: fix KASAN use-after-free in fsl_mc_bus_remove()
	dm mirror log: round up region bitmap size to BITS_PER_LONG
	drm/amd/display: Cap OLED brightness per max frame-average luminance
	cfi: Fix __cfi_slowpath_diag RCU usage with cpuidle
	ext4: fix super block checksum incorrect after mount
	ext4: fix bug_on ext4_mb_use_inode_pa
	ext4: make variable "count" signed
	ext4: add reserved GDT blocks check
	KVM: arm64: Don't read a HW interrupt pending state in user context
	ALSA: hda/realtek: fix right sounds and mute/micmute LEDs for HP machine
	virtio-pci: Remove wrong address verification in vp_del_vqs()
	powerpc/book3e: get rid of #include <generated/compile.h>
	clk: imx8mp: fix usb_root_clk parent
	Linux 5.15.49

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ie4c283ebc18a1c5ff94878cdadcf6a1659e710d4
 Makefile | 5
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 15
-SUBLEVEL = 48
+SUBLEVEL = 49
 EXTRAVERSION =
 NAME = Trick or Treat

@@ -833,6 +833,9 @@ endif
 KBUILD_CFLAGS += $(call cc-disable-warning, unused-but-set-variable)
 KBUILD_CFLAGS += $(call cc-disable-warning, unused-const-variable)
+
+# These result in bogus false positives
+KBUILD_CFLAGS += $(call cc-disable-warning, dangling-pointer)

 ifdef CONFIG_FRAME_POINTER
 KBUILD_CFLAGS += -fno-omit-frame-pointer -fno-optimize-sibling-calls
 else
@@ -166,6 +166,7 @@
 	pinctrl-0 = <&pinctrl_uart3>;
 	assigned-clocks = <&clk IMX8MM_CLK_UART3>;
 	assigned-clock-parents = <&clk IMX8MM_SYS_PLL1_80M>;
+	uart-has-rtscts;
 	status = "okay";
 };

@@ -236,6 +237,8 @@
 	fsl,pins = <
 		MX8MM_IOMUXC_ECSPI1_SCLK_UART3_DCE_RX	0x40
 		MX8MM_IOMUXC_ECSPI1_MOSI_UART3_DCE_TX	0x40
+		MX8MM_IOMUXC_ECSPI1_MISO_UART3_DCE_CTS_B	0x40
+		MX8MM_IOMUXC_ECSPI1_SS0_UART3_DCE_RTS_B	0x40
 	>;
 };
@@ -176,6 +176,7 @@
 	pinctrl-0 = <&pinctrl_uart3>;
 	assigned-clocks = <&clk IMX8MN_CLK_UART3>;
 	assigned-clock-parents = <&clk IMX8MN_SYS_PLL1_80M>;
+	uart-has-rtscts;
 	status = "okay";
 };

@@ -259,6 +260,8 @@
 	fsl,pins = <
 		MX8MN_IOMUXC_ECSPI1_SCLK_UART3_DCE_RX	0x40
 		MX8MN_IOMUXC_ECSPI1_MOSI_UART3_DCE_TX	0x40
+		MX8MN_IOMUXC_ECSPI1_MISO_UART3_DCE_CTS_B	0x40
+		MX8MN_IOMUXC_ECSPI1_SS0_UART3_DCE_RTS_B	0x40
 	>;
 };
@@ -77,6 +77,66 @@ static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr)
 	return NULL;
 }

+/*
+ * Find the address the callsite must branch to in order to reach '*addr'.
+ *
+ * Due to the limited range of 'BL' instructions, modules may be placed too far
+ * away to branch directly and must use a PLT.
+ *
+ * Returns true when '*addr' contains a reachable target address, or has been
+ * modified to contain a PLT address. Returns false otherwise.
+ */
+static bool ftrace_find_callable_addr(struct dyn_ftrace *rec,
+				      struct module *mod,
+				      unsigned long *addr)
+{
+	unsigned long pc = rec->ip;
+	long offset = (long)*addr - (long)pc;
+	struct plt_entry *plt;
+
+	/*
+	 * When the target is within range of the 'BL' instruction, use 'addr'
+	 * as-is and branch to that directly.
+	 */
+	if (offset >= -SZ_128M && offset < SZ_128M)
+		return true;
+
+	/*
+	 * When the target is outside of the range of a 'BL' instruction, we
+	 * must use a PLT to reach it. We can only place PLTs for modules, and
+	 * only when module PLT support is built-in.
+	 */
+	if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
+		return false;
+
+	/*
+	 * 'mod' is only set at module load time, but if we end up
+	 * dealing with an out-of-range condition, we can assume it
+	 * is due to a module being loaded far away from the kernel.
+	 *
+	 * NOTE: __module_text_address() must be called with preemption
+	 * disabled, but we can rely on ftrace_lock to ensure that 'mod'
+	 * retains its validity throughout the remainder of this code.
+	 */
+	if (!mod) {
+		preempt_disable();
+		mod = __module_text_address(pc);
+		preempt_enable();
+	}
+
+	if (WARN_ON(!mod))
+		return false;
+
+	plt = get_ftrace_plt(mod, *addr);
+	if (!plt) {
+		pr_err("ftrace: no module PLT for %ps\n", (void *)*addr);
+		return false;
+	}
+
+	*addr = (unsigned long)plt;
+	return true;
+}
+
 /*
  * Turn on the call to ftrace_caller() in instrumented function
  */
@@ -84,40 +144,9 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 {
 	unsigned long pc = rec->ip;
 	u32 old, new;
-	long offset = (long)pc - (long)addr;
-
-	if (offset < -SZ_128M || offset >= SZ_128M) {
-		struct module *mod;
-		struct plt_entry *plt;
-
-		if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
-			return -EINVAL;
-
-		/*
-		 * On kernels that support module PLTs, the offset between the
-		 * branch instruction and its target may legally exceed the
-		 * range of an ordinary relative 'bl' opcode. In this case, we
-		 * need to branch via a trampoline in the module.
-		 *
-		 * NOTE: __module_text_address() must be called with preemption
-		 * disabled, but we can rely on ftrace_lock to ensure that 'mod'
-		 * retains its validity throughout the remainder of this code.
-		 */
-		preempt_disable();
-		mod = __module_text_address(pc);
-		preempt_enable();
-
-		if (WARN_ON(!mod))
-			return -EINVAL;
-
-		plt = get_ftrace_plt(mod, addr);
-		if (!plt) {
-			pr_err("ftrace: no module PLT for %ps\n", (void *)addr);
-			return -EINVAL;
-		}
-
-		addr = (unsigned long)plt;
-	}
+	if (!ftrace_find_callable_addr(rec, NULL, &addr))
+		return -EINVAL;

 	old = aarch64_insn_gen_nop();
 	new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
@@ -132,6 +161,11 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
 	unsigned long pc = rec->ip;
 	u32 old, new;

+	if (!ftrace_find_callable_addr(rec, NULL, &old_addr))
+		return -EINVAL;
+	if (!ftrace_find_callable_addr(rec, NULL, &addr))
+		return -EINVAL;
+
 	old = aarch64_insn_gen_branch_imm(pc, old_addr,
 					  AARCH64_INSN_BRANCH_LINK);
 	new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
@@ -181,54 +215,15 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
 		    unsigned long addr)
 {
 	unsigned long pc = rec->ip;
-	bool validate = true;
 	u32 old = 0, new;
-	long offset = (long)pc - (long)addr;
-
-	if (offset < -SZ_128M || offset >= SZ_128M) {
-		u32 replaced;
-
-		if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
-			return -EINVAL;
-
-		/*
-		 * 'mod' is only set at module load time, but if we end up
-		 * dealing with an out-of-range condition, we can assume it
-		 * is due to a module being loaded far away from the kernel.
-		 */
-		if (!mod) {
-			preempt_disable();
-			mod = __module_text_address(pc);
-			preempt_enable();
-
-			if (WARN_ON(!mod))
-				return -EINVAL;
-		}
-
-		/*
-		 * The instruction we are about to patch may be a branch and
-		 * link instruction that was redirected via a PLT entry. In
-		 * this case, the normal validation will fail, but we can at
-		 * least check that we are dealing with a branch and link
-		 * instruction that points into the right module.
-		 */
-		if (aarch64_insn_read((void *)pc, &replaced))
-			return -EFAULT;
-
-		if (!aarch64_insn_is_bl(replaced) ||
-		    !within_module(pc + aarch64_get_branch_offset(replaced),
-				   mod))
-			return -EINVAL;
-
-		validate = false;
-	} else {
-		old = aarch64_insn_gen_branch_imm(pc, addr,
-						  AARCH64_INSN_BRANCH_LINK);
-	}
+	if (!ftrace_find_callable_addr(rec, mod, &addr))
+		return -EINVAL;

+	old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
 	new = aarch64_insn_gen_nop();

-	return ftrace_modify_code(pc, old, new, validate);
+	return ftrace_modify_code(pc, old, new, true);
 }

 void arch_ftrace_update_code(int command)
@@ -418,11 +418,11 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_SET,
 		vgic_mmio_read_pending, vgic_mmio_write_spending,
-		NULL, vgic_uaccess_write_spending, 1,
+		vgic_uaccess_read_pending, vgic_uaccess_write_spending, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_CLEAR,
 		vgic_mmio_read_pending, vgic_mmio_write_cpending,
-		NULL, vgic_uaccess_write_cpending, 1,
+		vgic_uaccess_read_pending, vgic_uaccess_write_cpending, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ACTIVE_SET,
 		vgic_mmio_read_active, vgic_mmio_write_sactive,
@@ -226,8 +226,9 @@ int vgic_uaccess_write_cenable(struct kvm_vcpu *vcpu,
 	return 0;
 }

-unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
-				     gpa_t addr, unsigned int len)
+static unsigned long __read_pending(struct kvm_vcpu *vcpu,
+				    gpa_t addr, unsigned int len,
+				    bool is_user)
 {
 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
 	u32 value = 0;
@@ -248,7 +249,7 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
 						    IRQCHIP_STATE_PENDING,
 						    &val);
 			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
-		} else if (vgic_irq_is_mapped_level(irq)) {
+		} else if (!is_user && vgic_irq_is_mapped_level(irq)) {
 			val = vgic_get_phys_line_level(irq);
 		} else {
 			val = irq_is_pending(irq);
@@ -263,6 +264,18 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
 	return value;
 }

+unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
+				     gpa_t addr, unsigned int len)
+{
+	return __read_pending(vcpu, addr, len, false);
+}
+
+unsigned long vgic_uaccess_read_pending(struct kvm_vcpu *vcpu,
+					gpa_t addr, unsigned int len)
+{
+	return __read_pending(vcpu, addr, len, true);
+}
+
 static bool is_vgic_v2_sgi(struct kvm_vcpu *vcpu, struct vgic_irq *irq)
 {
 	return (vgic_irq_is_sgi(irq->intid) &&
@@ -149,6 +149,9 @@ int vgic_uaccess_write_cenable(struct kvm_vcpu *vcpu,
 unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
 				     gpa_t addr, unsigned int len);

+unsigned long vgic_uaccess_read_pending(struct kvm_vcpu *vcpu,
+					gpa_t addr, unsigned int len);
+
 void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
 			      gpa_t addr, unsigned int len,
 			      unsigned long val);
@@ -2124,12 +2124,12 @@ static unsigned long __get_wchan(struct task_struct *p)
 		return 0;

 	do {
-		sp = *(unsigned long *)sp;
+		sp = READ_ONCE_NOCHECK(*(unsigned long *)sp);
 		if (!validate_sp(sp, p, STACK_FRAME_OVERHEAD) ||
 		    task_is_running(p))
 			return 0;
 		if (count > 0) {
-			ip = ((unsigned long *)sp)[STACK_FRAME_LR_SAVE];
+			ip = READ_ONCE_NOCHECK(((unsigned long *)sp)[STACK_FRAME_LR_SAVE]);
 			if (!in_sched_functions(ip))
 				return ip;
 		}
@@ -18,7 +18,6 @@
 #include <asm/prom.h>
 #include <asm/kdump.h>
 #include <mm/mmu_decl.h>
-#include <generated/compile.h>
 #include <generated/utsrelease.h>

 struct regions {
@@ -36,10 +35,6 @@ struct regions {
 	int reserved_mem_size_cells;
 };

-/* Simplified build-specific string for starting entropy. */
-static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
-		LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
-
 struct regions __initdata regions;

 static __init void kaslr_get_cmdline(void *fdt)
@@ -72,7 +67,8 @@ static unsigned long __init get_boot_seed(void *fdt)
 {
 	unsigned long hash = 0;

-	hash = rotate_xor(hash, build_str, sizeof(build_str));
+	/* build-specific string for starting entropy. */
+	hash = rotate_xor(hash, linux_banner, strlen(linux_banner));
 	hash = rotate_xor(hash, fdt, fdt_totalsize(fdt));

 	return hash;
@@ -479,6 +479,8 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	if (!blk_mq_hw_queue_mapped(data.hctx))
 		goto out_queue_exit;
 	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
+	if (cpu >= nr_cpu_ids)
+		goto out_queue_exit;
 	data.ctx = __blk_mq_get_ctx(q, cpu);

 	if (!q->elevator)
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 #include "blacklist.h"

-const char __initdata *const blacklist_hashes[] = {
+const char __initconst *const blacklist_hashes[] = {
 #include CONFIG_SYSTEM_BLACKLIST_HASH_LIST
 	, NULL
 };
@@ -15,6 +15,7 @@ source "crypto/async_tx/Kconfig"
 #
 menuconfig CRYPTO
 	tristate "Cryptographic API"
+	select LIB_MEMNEQ
 	help
 	  This option provides the core Cryptographic API.
@@ -4,7 +4,7 @@
 #

 obj-$(CONFIG_CRYPTO) += crypto.o
-crypto-y := api.o cipher.o compress.o memneq.o
+crypto-y := api.o cipher.o compress.o

 obj-$(CONFIG_CRYPTO_ENGINE) += crypto_engine.o
 obj-$(CONFIG_CRYPTO_FIPS) += fips.o
@@ -5500,7 +5500,7 @@ struct ata_host *ata_host_alloc_pinfo(struct device *dev,
 				      const struct ata_port_info * const * ppi,
 				      int n_ports)
 {
-	const struct ata_port_info *pi;
+	const struct ata_port_info *pi = &ata_dummy_port_info;
 	struct ata_host *host;
 	int i, j;

@@ -5508,7 +5508,7 @@ struct ata_host *ata_host_alloc_pinfo(struct device *dev,
 	if (!host)
 		return NULL;

-	for (i = 0, j = 0, pi = NULL; i < host->n_ports; i++) {
+	for (i = 0, j = 0; i < host->n_ports; i++) {
 		struct ata_port *ap = host->ports[i];

 		if (ppi[j])
@@ -8,6 +8,7 @@
 #include <linux/init.h>
 #include <linux/memory.h>
 #include <linux/of.h>
+#include <linux/backing-dev.h>

 #include "base.h"

@@ -20,6 +21,7 @@
 void __init driver_init(void)
 {
 	/* These are the core pieces */
+	bdi_init(&noop_backing_dev_info);
 	devtmpfs_init();
 	devices_init();
 	buses_init();
@@ -1236,14 +1236,14 @@ error_cleanup_mc_io:
 static int fsl_mc_bus_remove(struct platform_device *pdev)
 {
 	struct fsl_mc *mc = platform_get_drvdata(pdev);
+	struct fsl_mc_io *mc_io;

 	if (!fsl_mc_is_root_dprc(&mc->root_mc_bus_dev->dev))
 		return -EINVAL;

+	mc_io = mc->root_mc_bus_dev->mc_io;
 	fsl_mc_device_remove(mc->root_mc_bus_dev);
-
-	fsl_destroy_mc_io(mc->root_mc_bus_dev->mc_io);
-	mc->root_mc_bus_dev->mc_io = NULL;
+	fsl_destroy_mc_io(mc_io);

 	bus_unregister_notifier(&fsl_mc_bus_type, &fsl_mc_nb);
@@ -428,28 +428,40 @@ config ADI
 	  driver include crash and makedumpfile.

 config RANDOM_TRUST_CPU
-	bool "Trust the CPU manufacturer to initialize Linux's CRNG"
+	bool "Initialize RNG using CPU RNG instructions"
+	default y
 	depends on ARCH_RANDOM
-	default n
 	help
-	  Assume that CPU manufacturer (e.g., Intel or AMD for RDSEED or
-	  RDRAND, IBM for the S390 and Power PC architectures) is trustworthy
-	  for the purposes of initializing Linux's CRNG. Since this is not
-	  something that can be independently audited, this amounts to trusting
-	  that CPU manufacturer (perhaps with the insistence or mandate
-	  of a Nation State's intelligence or law enforcement agencies)
-	  has not installed a hidden back door to compromise the CPU's
-	  random number generation facilities. This can also be configured
-	  at boot with "random.trust_cpu=on/off".
+	  Initialize the RNG using random numbers supplied by the CPU's
+	  RNG instructions (e.g. RDRAND), if supported and available. These
+	  random numbers are never used directly, but are rather hashed into
+	  the main input pool, and this happens regardless of whether or not
+	  this option is enabled. Instead, this option controls whether the
+	  they are credited and hence can initialize the RNG. Additionally,
+	  other sources of randomness are always used, regardless of this
+	  setting. Enabling this implies trusting that the CPU can supply high
+	  quality and non-backdoored random numbers.
+
+	  Say Y here unless you have reason to mistrust your CPU or believe
+	  its RNG facilities may be faulty. This may also be configured at
+	  boot time with "random.trust_cpu=on/off".

 config RANDOM_TRUST_BOOTLOADER
-	bool "Trust the bootloader to initialize Linux's CRNG"
+	bool "Initialize RNG using bootloader-supplied seed"
+	default y
 	help
-	  Some bootloaders can provide entropy to increase the kernel's initial
-	  device randomness. Say Y here to assume the entropy provided by the
-	  booloader is trustworthy so it will be added to the kernel's entropy
-	  pool. Otherwise, say N here so it will be regarded as device input that
-	  only mixes the entropy pool. This can also be configured at boot with
-	  "random.trust_bootloader=on/off".
+	  Initialize the RNG using a seed supplied by the bootloader or boot
+	  environment (e.g. EFI or a bootloader-generated device tree). This
+	  seed is not used directly, but is rather hashed into the main input
+	  pool, and this happens regardless of whether or not this option is
+	  enabled. Instead, this option controls whether the seed is credited
+	  and hence can initialize the RNG. Additionally, other sources of
+	  randomness are always used, regardless of this setting. Enabling
+	  this implies trusting that the bootloader can supply high quality and
+	  non-backdoored seeds.
+
+	  Say Y here unless you have reason to mistrust your bootloader or
+	  believe its RNG facilities may be faulty. This may also be configured
+	  at boot time with "random.trust_bootloader=on/off".

 endmenu
@@ -675,7 +675,7 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
 	hws[IMX8MP_CLK_UART2_ROOT] = imx_clk_hw_gate4("uart2_root_clk", "uart2", ccm_base + 0x44a0, 0);
 	hws[IMX8MP_CLK_UART3_ROOT] = imx_clk_hw_gate4("uart3_root_clk", "uart3", ccm_base + 0x44b0, 0);
 	hws[IMX8MP_CLK_UART4_ROOT] = imx_clk_hw_gate4("uart4_root_clk", "uart4", ccm_base + 0x44c0, 0);
-	hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate4("usb_root_clk", "osc_32k", ccm_base + 0x44d0, 0);
+	hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate4("usb_root_clk", "hsio_axi", ccm_base + 0x44d0, 0);
 	hws[IMX8MP_CLK_USB_PHY_ROOT] = imx_clk_hw_gate4("usb_phy_root_clk", "usb_phy_ref", ccm_base + 0x44f0, 0);
 	hws[IMX8MP_CLK_USDHC1_ROOT] = imx_clk_hw_gate4("usdhc1_root_clk", "usdhc1", ccm_base + 0x4510, 0);
 	hws[IMX8MP_CLK_USDHC2_ROOT] = imx_clk_hw_gate4("usdhc2_root_clk", "usdhc2", ccm_base + 0x4520, 0);
@@ -565,4 +565,3 @@ void __init hv_init_clocksource(void)
 	hv_sched_clock_offset = hv_read_reference_counter();
 	hv_setup_sched_clock(read_hv_sched_clock_msr);
 }
-EXPORT_SYMBOL_GPL(hv_init_clocksource);
@@ -685,7 +685,7 @@ static int vmk80xx_alloc_usb_buffers(struct comedi_device *dev)
 	if (!devpriv->usb_rx_buf)
 		return -ENOMEM;

-	size = max(usb_endpoint_maxp(devpriv->ep_rx), MIN_BUF_SIZE);
+	size = max(usb_endpoint_maxp(devpriv->ep_tx), MIN_BUF_SIZE);
 	devpriv->usb_tx_buf = kzalloc(size, GFP_KERNEL);
 	if (!devpriv->usb_tx_buf)
 		return -ENOMEM;
@@ -653,10 +653,9 @@ static int dwapb_get_clks(struct dwapb_gpio *gpio)
 	gpio->clks[1].id = "db";
 	err = devm_clk_bulk_get_optional(gpio->dev, DWAPB_NR_CLOCKS,
 					 gpio->clks);
-	if (err) {
-		dev_err(gpio->dev, "Cannot get APB/Debounce clocks\n");
-		return err;
-	}
+	if (err)
+		return dev_err_probe(gpio->dev, err,
+				     "Cannot get APB/Debounce clocks\n");

 	err = clk_bulk_prepare_enable(DWAPB_NR_CLOCKS, gpio->clks);
 	if (err) {
@@ -1828,9 +1828,6 @@ int amdgpu_amdkfd_gpuvm_map_gtt_bo_to_kernel(struct kgd_dev *kgd,
 		return -EINVAL;
 	}

-	/* delete kgd_mem from kfd_bo_list to avoid re-validating
-	 * this BO in BO's restoring after eviction.
-	 */
 	mutex_lock(&mem->process_info->lock);

 	ret = amdgpu_bo_reserve(bo, true);
@@ -1853,7 +1850,6 @@ int amdgpu_amdkfd_gpuvm_map_gtt_bo_to_kernel(struct kgd_dev *kgd,

 	amdgpu_amdkfd_remove_eviction_fence(
 		bo, mem->process_info->eviction_fence);
-	list_del_init(&mem->validate_list.head);

 	if (size)
 		*size = amdgpu_bo_size(bo);
@@ -2399,12 +2395,15 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence **ef)
 	process_info->eviction_fence = new_fence;
 	*ef = dma_fence_get(&new_fence->base);

-	/* Attach new eviction fence to all BOs */
+	/* Attach new eviction fence to all BOs except pinned ones */
 	list_for_each_entry(mem, &process_info->kfd_bo_list,
-		validate_list.head)
+		validate_list.head) {
+		if (mem->bo->tbo.pin_count)
+			continue;
+
 		amdgpu_bo_fence(mem->bo,
 			&process_info->eviction_fence->base, true);
+	}
 	/* Attach eviction fence to PD / PT BOs */
 	list_for_each_entry(peer_vm, &process_info->vm_list_head,
 			    vm_list_node) {
@@ -2181,6 +2181,8 @@ svm_range_cpu_invalidate_pagetables(struct mmu_interval_notifier *mni,

 	if (range->event == MMU_NOTIFY_RELEASE)
 		return true;
+	if (!mmget_not_zero(mni->mm))
+		return true;

 	start = mni->interval_tree.start;
 	last = mni->interval_tree.last;
@@ -2207,6 +2209,7 @@ svm_range_cpu_invalidate_pagetables(struct mmu_interval_notifier *mni,
 	}

 	svm_range_unlock(prange);
+	mmput(mni->mm);

 	return true;
 }
@@ -2417,7 +2417,7 @@ static struct drm_mode_config_helper_funcs amdgpu_dm_mode_config_helperfuncs = {

 static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
 {
-	u32 max_cll, min_cll, max, min, q, r;
+	u32 max_avg, min_cll, max, min, q, r;
 	struct amdgpu_dm_backlight_caps *caps;
 	struct amdgpu_display_manager *dm;
 	struct drm_connector *conn_base;
@@ -2447,7 +2447,7 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
 	caps = &dm->backlight_caps[i];
 	caps->ext_caps = &aconnector->dc_link->dpcd_sink_ext_caps;
 	caps->aux_support = false;
-	max_cll = conn_base->hdr_sink_metadata.hdmi_type1.max_cll;
+	max_avg = conn_base->hdr_sink_metadata.hdmi_type1.max_fall;
 	min_cll = conn_base->hdr_sink_metadata.hdmi_type1.min_cll;

 	if (caps->ext_caps->bits.oled == 1 /*||
@@ -2475,8 +2475,8 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
 	 * The results of the above expressions can be verified at
 	 * pre_computed_values.
 	 */
-	q = max_cll >> 5;
-	r = max_cll % 32;
+	q = max_avg >> 5;
+	r = max_avg % 32;
 	max = (1 << q) * pre_computed_values[r];

 	// min luminance: maxLum * (CV/255)^2 / 100
@@ -168,9 +168,7 @@ void enc31_hw_init(struct link_encoder *enc)
 	AUX_RX_PHASE_DETECT_LEN,  [21,20] = 0x3 default is 3
 	AUX_RX_DETECTION_THRESHOLD [30:28] = 1
 	*/
-	AUX_REG_WRITE(AUX_DPHY_RX_CONTROL0, 0x103d1110);
-
-	AUX_REG_WRITE(AUX_DPHY_TX_CONTROL, 0x21c7a);
+	// dmub will read AUX_DPHY_RX_CONTROL0/AUX_DPHY_TX_CONTROL from vbios table in dp_aux_init

 	//AUX_DPHY_TX_REF_CONTROL'AUX_TX_REF_DIV HW default is 0x32;
 	// Set AUX_TX_REF_DIV Divider to generate 2 MHz reference from refclk
@@ -1294,12 +1294,6 @@ static struct stream_encoder *dcn31_stream_encoder_create(
 	if (!enc1 || !vpg || !afmt)
 		return NULL;

-	if (ctx->asic_id.chip_family == FAMILY_YELLOW_CARP &&
-	    ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0) {
-		if ((eng_id == ENGINE_ID_DIGC) || (eng_id == ENGINE_ID_DIGD))
-			eng_id = eng_id + 3; // For B0 only. C->F, D->G.
-	}
-
 	dcn30_dio_stream_encoder_construct(enc1, ctx, ctx->dc_bios,
 					   eng_id, vpg, afmt,
 					   &stream_enc_regs[eng_id],
@@ -445,7 +445,14 @@ static ssize_t error_state_read(struct file *filp, struct kobject *kobj,
 	struct device *kdev = kobj_to_dev(kobj);
 	struct drm_i915_private *i915 = kdev_minor_to_i915(kdev);
 	struct i915_gpu_coredump *gpu;
-	ssize_t ret;
+	ssize_t ret = 0;
+
+	/*
+	 * FIXME: Concurrent clients triggering resets and reading + clearing
+	 * dumps can cause inconsistent sysfs reads when a user calls in with a
+	 * non-zero offset to complete a prior partial read but the
+	 * gpu_coredump has been cleared or replaced.
+	 */

 	gpu = i915_first_error_state(i915);
 	if (IS_ERR(gpu)) {
@@ -457,8 +464,10 @@ static ssize_t error_state_read(struct file *filp, struct kobject *kobj,
 		const char *str = "No error state collected\n";
 		size_t len = strlen(str);

-		ret = min_t(size_t, count, len - off);
-		memcpy(buf, str + off, ret);
+		if (off < len) {
+			ret = min_t(size_t, count, len - off);
+			memcpy(buf, str + off, ret);
+		}
 	}

 	return ret;
@@ -637,6 +637,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
 		 */
 		if (newchannel->offermsg.offer.sub_channel_index == 0) {
 			mutex_unlock(&vmbus_connection.channel_mutex);
+			cpus_read_unlock();
 			/*
 			 * Don't call free_channel(), because newchannel->kobj
 			 * is not initialized yet.
@@ -477,9 +477,6 @@ int i2c_dw_prepare_clk(struct dw_i2c_dev *dev, bool prepare)
 {
 	int ret;
 
-	if (IS_ERR(dev->clk))
-		return PTR_ERR(dev->clk);
-
 	if (prepare) {
 		/* Optional interface clock */
 		ret = clk_prepare_enable(dev->pclk);
@@ -262,8 +262,17 @@ static int dw_i2c_plat_probe(struct platform_device *pdev)
 		goto exit_reset;
 	}
 
-	dev->clk = devm_clk_get(&pdev->dev, NULL);
-	if (!i2c_dw_prepare_clk(dev, true)) {
+	dev->clk = devm_clk_get_optional(&pdev->dev, NULL);
+	if (IS_ERR(dev->clk)) {
+		ret = PTR_ERR(dev->clk);
+		goto exit_reset;
+	}
+
+	ret = i2c_dw_prepare_clk(dev, true);
+	if (ret)
+		goto exit_reset;
+
+	if (dev->clk) {
 		u64 clk_khz;
 
 		dev->get_clk_rate_khz = i2c_dw_get_clk_rate_khz;
@@ -2369,8 +2369,7 @@ static struct platform_driver npcm_i2c_bus_driver = {
 static int __init npcm_i2c_init(void)
 {
 	npcm_i2c_debugfs_dir = debugfs_create_dir("npcm_i2c", NULL);
-	platform_driver_register(&npcm_i2c_bus_driver);
-	return 0;
+	return platform_driver_register(&npcm_i2c_bus_driver);
 }
 module_init(npcm_i2c_init);
 
@@ -85,13 +85,13 @@ static const struct dmi_system_id dmi_use_low_level_irq[] = {
 	},
 	{
 		/*
-		 * Lenovo Yoga Tab2 1051L, something messes with the home-button
+		 * Lenovo Yoga Tab2 1051F/1051L, something messes with the home-button
 		 * IRQ settings, leading to a non working home-button.
 		 */
 		.matches = {
 			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
 			DMI_MATCH(DMI_PRODUCT_NAME, "60073"),
-			DMI_MATCH(DMI_PRODUCT_VERSION, "1051L"),
+			DMI_MATCH(DMI_PRODUCT_VERSION, "1051"),
 		},
 	},
 	{} /* Terminating entry */
@@ -57,6 +57,7 @@ realview_gic_of_init(struct device_node *node, struct device_node *parent)
 
 	/* The PB11MPCore GIC needs to be configured in the syscon */
 	map = syscon_node_to_regmap(np);
+	of_node_put(np);
 	if (!IS_ERR(map)) {
 		/* new irq mode with no DCC */
 		regmap_write(map, REALVIEW_SYS_LOCK_OFFSET,
@@ -1887,7 +1887,7 @@ static void __init gic_populate_ppi_partitions(struct device_node *gic_node)
 
 	gic_data.ppi_descs = kcalloc(gic_data.ppi_nr, sizeof(*gic_data.ppi_descs), GFP_KERNEL);
 	if (!gic_data.ppi_descs)
-		return;
+		goto out_put_node;
 
 	nr_parts = of_get_child_count(parts_node);
 
@@ -1928,12 +1928,15 @@ static void __init gic_populate_ppi_partitions(struct device_node *gic_node)
 				continue;
 
 			cpu = of_cpu_node_to_id(cpu_node);
-			if (WARN_ON(cpu < 0))
+			if (WARN_ON(cpu < 0)) {
+				of_node_put(cpu_node);
 				continue;
+			}
 
 			pr_cont("%pOF[%d] ", cpu_node, cpu);
 
 			cpumask_set_cpu(cpu, &part->mask);
+			of_node_put(cpu_node);
 		}
 
 		pr_cont("}\n");
@@ -134,9 +134,9 @@ static int __init map_interrupts(struct device_node *node, struct irq_domain *do
 		if (!cpu_ictl)
 			return -EINVAL;
 		ret = of_property_read_u32(cpu_ictl, "#interrupt-cells", &tmp);
+		of_node_put(cpu_ictl);
 		if (ret || tmp != 1)
 			return -EINVAL;
-		of_node_put(cpu_ictl);
 
 		cpu_int = be32_to_cpup(imap + 2);
 		if (cpu_int > 7 || cpu_int < 2)
@@ -415,8 +415,7 @@ static int create_log_context(struct dm_dirty_log *log, struct dm_target *ti,
 	/*
 	 * Work out how many "unsigned long"s we need to hold the bitset.
 	 */
-	bitset_size = dm_round_up(region_count,
-				  sizeof(*lc->clean_bits) << BYTE_SHIFT);
+	bitset_size = dm_round_up(region_count, BITS_PER_LONG);
 	bitset_size >>= BYTE_SHIFT;
 
 	lc->bitset_uint32_count = bitset_size / sizeof(*lc->clean_bits);
@@ -232,9 +232,9 @@ static int ssc_probe(struct platform_device *pdev)
 	clk_disable_unprepare(ssc->clk);
 
 	ssc->irq = platform_get_irq(pdev, 0);
-	if (!ssc->irq) {
+	if (ssc->irq < 0) {
 		dev_dbg(&pdev->dev, "could not get irq\n");
-		return -ENXIO;
+		return ssc->irq;
 	}
 
 	mutex_lock(&user_lock);
@@ -1351,7 +1351,8 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
 
 		if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
 		    dev->hbm_state != MEI_HBM_CAP_SETUP) {
-			if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+			if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+			    dev->dev_state == MEI_DEV_POWERING_DOWN) {
 				dev_dbg(dev->dev, "hbm: capabilities response: on shutdown, ignoring\n");
 				return 0;
 			}
@@ -109,6 +109,8 @@
 #define MEI_DEV_ID_ADP_P      0x51E0  /* Alder Lake Point P */
 #define MEI_DEV_ID_ADP_N      0x54E0  /* Alder Lake Point N */
 
+#define MEI_DEV_ID_RPL_S      0x7A68  /* Raptor Lake Point S */
+
 /*
  * MEI HW Section
  */
@@ -115,6 +115,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)},
 	{MEI_PCI_DEVICE(MEI_DEV_ID_ADP_N, MEI_ME_PCH15_CFG)},
 
+	{MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_CFG)},
+
 	/* required last entry */
 	{0, }
 };
@@ -323,7 +323,6 @@ static void bgmac_remove(struct bcma_device *core)
 	bcma_mdio_mii_unregister(bgmac->mii_bus);
 	bgmac_enet_remove(bgmac);
 	bcma_set_drvdata(core, NULL);
-	kfree(bgmac);
 }
 
 static struct bcma_driver bgmac_bcma_driver = {
@@ -3194,7 +3194,7 @@ static int hclge_tp_port_init(struct hclge_dev *hdev)
 static int hclge_update_port_info(struct hclge_dev *hdev)
 {
 	struct hclge_mac *mac = &hdev->hw.mac;
-	int speed = HCLGE_MAC_SPEED_UNKNOWN;
+	int speed;
 	int ret;
 
 	/* get the port info from SFP cmd if not copper port */
@@ -3205,10 +3205,13 @@ static int hclge_update_port_info(struct hclge_dev *hdev)
 	if (!hdev->support_sfp_query)
 		return 0;
 
-	if (hdev->ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V2)
+	if (hdev->ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V2) {
+		speed = mac->speed;
 		ret = hclge_get_sfp_info(hdev, mac);
-	else
+	} else {
+		speed = HCLGE_MAC_SPEED_UNKNOWN;
 		ret = hclge_get_sfp_speed(hdev, &speed);
+	}
 
 	if (ret == -EOPNOTSUPP) {
 		hdev->support_sfp_query = false;
@@ -3220,6 +3223,8 @@ static int hclge_update_port_info(struct hclge_dev *hdev)
 	if (hdev->ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V2) {
 		if (mac->speed_type == QUERY_ACTIVE_SPEED) {
 			hclge_update_port_capability(hdev, mac);
+			if (mac->speed != speed)
+				(void)hclge_tm_port_shaper_cfg(hdev);
 			return 0;
 		}
 		return hclge_cfg_mac_speed_dup(hdev, mac->speed,
@@ -3302,6 +3307,12 @@ static int hclge_set_vf_link_state(struct hnae3_handle *handle, int vf,
 	link_state_old = vport->vf_info.link_state;
 	vport->vf_info.link_state = link_state;
 
+	/* return success directly if the VF is unalive, VF will
+	 * query link state itself when it starts work.
+	 */
+	if (!test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state))
+		return 0;
+
 	ret = hclge_push_vf_link_status(vport);
 	if (ret) {
 		vport->vf_info.link_state = link_state_old;
@@ -10397,12 +10408,42 @@ static bool hclge_need_update_vlan_filter(const struct hclge_vlan_info *new_cfg,
 	return false;
 }
 
+static int hclge_modify_port_base_vlan_tag(struct hclge_vport *vport,
+					   struct hclge_vlan_info *new_info,
+					   struct hclge_vlan_info *old_info)
+{
+	struct hclge_dev *hdev = vport->back;
+	int ret;
+
+	/* add new VLAN tag */
+	ret = hclge_set_vlan_filter_hw(hdev, htons(new_info->vlan_proto),
+				       vport->vport_id, new_info->vlan_tag,
+				       false);
+	if (ret)
+		return ret;
+
+	vport->port_base_vlan_cfg.tbl_sta = false;
+	/* remove old VLAN tag */
+	if (old_info->vlan_tag == 0)
+		ret = hclge_set_vf_vlan_common(hdev, vport->vport_id,
+					       true, 0);
+	else
+		ret = hclge_set_vlan_filter_hw(hdev, htons(ETH_P_8021Q),
+					       vport->vport_id,
+					       old_info->vlan_tag, true);
+	if (ret)
+		dev_err(&hdev->pdev->dev,
+			"failed to clear vport%u port base vlan %u, ret = %d.\n",
+			vport->vport_id, old_info->vlan_tag, ret);
+
+	return ret;
+}
+
 int hclge_update_port_base_vlan_cfg(struct hclge_vport *vport, u16 state,
 				    struct hclge_vlan_info *vlan_info)
 {
 	struct hnae3_handle *nic = &vport->nic;
 	struct hclge_vlan_info *old_vlan_info;
 	struct hclge_dev *hdev = vport->back;
 	int ret;
 
 	old_vlan_info = &vport->port_base_vlan_cfg.vlan_info;
@@ -10415,38 +10456,12 @@ int hclge_update_port_base_vlan_cfg(struct hclge_vport *vport, u16 state,
 	if (!hclge_need_update_vlan_filter(vlan_info, old_vlan_info))
 		goto out;
 
-	if (state == HNAE3_PORT_BASE_VLAN_MODIFY) {
-		/* add new VLAN tag */
-		ret = hclge_set_vlan_filter_hw(hdev,
-					       htons(vlan_info->vlan_proto),
-					       vport->vport_id,
-					       vlan_info->vlan_tag,
-					       false);
-		if (ret)
-			return ret;
-
-		/* remove old VLAN tag */
-		if (old_vlan_info->vlan_tag == 0)
-			ret = hclge_set_vf_vlan_common(hdev, vport->vport_id,
-						       true, 0);
-		else
-			ret = hclge_set_vlan_filter_hw(hdev,
-						       htons(ETH_P_8021Q),
-						       vport->vport_id,
-						       old_vlan_info->vlan_tag,
-						       true);
-		if (ret) {
-			dev_err(&hdev->pdev->dev,
-				"failed to clear vport%u port base vlan %u, ret = %d.\n",
-				vport->vport_id, old_vlan_info->vlan_tag, ret);
-			return ret;
-		}
-
-		goto out;
-	}
-
-	ret = hclge_update_vlan_filter_entries(vport, state, vlan_info,
-					       old_vlan_info);
+	if (state == HNAE3_PORT_BASE_VLAN_MODIFY)
+		ret = hclge_modify_port_base_vlan_tag(vport, vlan_info,
+						      old_vlan_info);
+	else
+		ret = hclge_update_vlan_filter_entries(vport, state, vlan_info,
+						       old_vlan_info);
 	if (ret)
 		return ret;
 
@@ -420,7 +420,7 @@ static int hclge_tm_pg_shapping_cfg(struct hclge_dev *hdev,
 	return hclge_cmd_send(&hdev->hw, &desc, 1);
 }
 
-static int hclge_tm_port_shaper_cfg(struct hclge_dev *hdev)
+int hclge_tm_port_shaper_cfg(struct hclge_dev *hdev)
 {
 	struct hclge_port_shapping_cmd *shap_cfg_cmd;
 	struct hclge_shaper_ir_para ir_para;
@@ -231,6 +231,7 @@ int hclge_pause_addr_cfg(struct hclge_dev *hdev, const u8 *mac_addr);
 void hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats);
 void hclge_pfc_tx_stats_get(struct hclge_dev *hdev, u64 *stats);
 int hclge_tm_qs_shaper_cfg(struct hclge_vport *vport, int max_tx_rate);
+int hclge_tm_port_shaper_cfg(struct hclge_dev *hdev);
 int hclge_tm_get_qset_num(struct hclge_dev *hdev, u16 *qset_num);
 int hclge_tm_get_pri_num(struct hclge_dev *hdev, u8 *pri_num);
 int hclge_tm_get_qset_map_pri(struct hclge_dev *hdev, u16 qset_id, u8 *priority,
@@ -2576,15 +2576,16 @@ static void i40e_diag_test(struct net_device *netdev,
 
 	set_bit(__I40E_TESTING, pf->state);
 
+	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
+	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state)) {
+		dev_warn(&pf->pdev->dev,
+			 "Cannot start offline testing when PF is in reset state.\n");
+		goto skip_ol_tests;
+	}
+
 	if (i40e_active_vfs(pf) || i40e_active_vmdqs(pf)) {
 		dev_warn(&pf->pdev->dev,
 			 "Please take active VFs and Netqueues offline and restart the adapter before running NIC diagnostics\n");
-		data[I40E_ETH_TEST_REG] = 1;
-		data[I40E_ETH_TEST_EEPROM] = 1;
-		data[I40E_ETH_TEST_INTR] = 1;
-		data[I40E_ETH_TEST_LINK] = 1;
-		eth_test->flags |= ETH_TEST_FL_FAILED;
-		clear_bit(__I40E_TESTING, pf->state);
 		goto skip_ol_tests;
 	}
 
@@ -2631,9 +2632,17 @@ static void i40e_diag_test(struct net_device *netdev,
 			data[I40E_ETH_TEST_INTR] = 0;
 	}
 
-skip_ol_tests:
-
 	netif_info(pf, drv, netdev, "testing finished\n");
+	return;
+
+skip_ol_tests:
+	data[I40E_ETH_TEST_REG] = 1;
+	data[I40E_ETH_TEST_EEPROM] = 1;
+	data[I40E_ETH_TEST_INTR] = 1;
+	data[I40E_ETH_TEST_LINK] = 1;
+	eth_test->flags |= ETH_TEST_FL_FAILED;
+	clear_bit(__I40E_TESTING, pf->state);
+	netif_info(pf, drv, netdev, "testing failed\n");
 }
 
 static void i40e_get_wol(struct net_device *netdev,
@@ -8523,6 +8523,11 @@ static int i40e_configure_clsflower(struct i40e_vsi *vsi,
 		return -EOPNOTSUPP;
 	}
 
+	if (!tc) {
+		dev_err(&pf->pdev->dev, "Unable to add filter because of invalid destination");
+		return -EINVAL;
+	}
+
 	if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) ||
 	    test_bit(__I40E_RESET_INTR_RECEIVED, pf->state))
 		return -EBUSY;
@@ -2282,7 +2282,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
 	}
 
 	if (vf->adq_enabled) {
-		for (i = 0; i < I40E_MAX_VF_VSI; i++)
+		for (i = 0; i < vf->num_tc; i++)
 			num_qps_all += vf->ch[i].num_qps;
 		if (num_qps_all != qci->num_queue_pairs) {
 			aq_ret = I40E_ERR_PARAM;
@@ -820,6 +820,17 @@ static inline bool mtk_rx_get_desc(struct mtk_rx_dma *rxd,
 	return true;
 }
 
+static void *mtk_max_lro_buf_alloc(gfp_t gfp_mask)
+{
+	unsigned int size = mtk_max_frag_size(MTK_MAX_LRO_RX_LENGTH);
+	unsigned long data;
+
+	data = __get_free_pages(gfp_mask | __GFP_COMP | __GFP_NOWARN,
+				get_order(size));
+
+	return (void *)data;
+}
+
 /* the qdma core needs scratch memory to be setup */
 static int mtk_init_fq_dma(struct mtk_eth *eth)
 {
@@ -1311,7 +1322,10 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
 			goto release_desc;
 
 		/* alloc new buffer */
-		new_data = napi_alloc_frag(ring->frag_size);
+		if (ring->frag_size <= PAGE_SIZE)
+			new_data = napi_alloc_frag(ring->frag_size);
+		else
+			new_data = mtk_max_lro_buf_alloc(GFP_ATOMIC);
 		if (unlikely(!new_data)) {
 			netdev->stats.rx_dropped++;
 			goto release_desc;
@@ -1725,7 +1739,10 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag)
 		return -ENOMEM;
 
 	for (i = 0; i < rx_dma_size; i++) {
-		ring->data[i] = netdev_alloc_frag(ring->frag_size);
+		if (ring->frag_size <= PAGE_SIZE)
+			ring->data[i] = netdev_alloc_frag(ring->frag_size);
+		else
+			ring->data[i] = mtk_max_lro_buf_alloc(GFP_KERNEL);
 		if (!ring->data[i])
 			return -ENOMEM;
 	}
@@ -435,7 +435,7 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
 {
 	struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
 	struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev;
-	struct lag_tracker tracker;
+	struct lag_tracker tracker = { };
 	bool do_bond, roce_lag;
 	int err;
 
@@ -8,8 +8,8 @@
 #include "spectrum.h"
 
 enum mlxsw_sp_counter_sub_pool_id {
-	MLXSW_SP_COUNTER_SUB_POOL_FLOW,
 	MLXSW_SP_COUNTER_SUB_POOL_RIF,
+	MLXSW_SP_COUNTER_SUB_POOL_FLOW,
 };
 
 int mlxsw_sp_counter_alloc(struct mlxsw_sp *mlxsw_sp,
@@ -388,13 +388,25 @@ static void nfcmrvl_play_deferred(struct nfcmrvl_usb_drv_data *drv_data)
 	int err;
 
 	while ((urb = usb_get_from_anchor(&drv_data->deferred))) {
+		usb_anchor_urb(urb, &drv_data->tx_anchor);
+
 		err = usb_submit_urb(urb, GFP_ATOMIC);
-		if (err)
+		if (err) {
+			kfree(urb->setup_packet);
+			usb_unanchor_urb(urb);
+			usb_free_urb(urb);
 			break;
+		}
 
 		drv_data->tx_in_flight++;
+		usb_free_urb(urb);
+	}
+
+	/* Cleanup the rest deferred urbs. */
+	while ((urb = usb_get_from_anchor(&drv_data->deferred))) {
+		kfree(urb->setup_packet);
+		usb_free_urb(urb);
 	}
-	usb_scuttle_anchored_urbs(&drv_data->deferred);
 }
 
 static int nfcmrvl_resume(struct usb_interface *intf)
@@ -3182,8 +3182,8 @@ static ssize_t uuid_show(struct device *dev, struct device_attribute *attr,
 	 * we have no UUID set
 	 */
 	if (uuid_is_null(&ids->uuid)) {
-		printk_ratelimited(KERN_WARNING
-				   "No UUID available providing old NGUID\n");
+		dev_warn_ratelimited(dev,
+			"No UUID available providing old NGUID\n");
 		return sysfs_emit(buf, "%pU\n", ids->nguid);
 	}
 	return sysfs_emit(buf, "%pU\n", &ids->uuid);
@@ -17,7 +17,7 @@ menuconfig MIPS_PLATFORM_DEVICES
 if MIPS_PLATFORM_DEVICES
 
 config CPU_HWMON
-	tristate "Loongson-3 CPU HWMon Driver"
+	bool "Loongson-3 CPU HWMon Driver"
 	depends on MACH_LOONGSON64
 	select HWMON
 	default y
@@ -140,6 +140,7 @@ static u8 gigabyte_wmi_detect_sensor_usability(struct wmi_device *wdev)
 }}
 
 static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = {
+	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M DS3H-CF"),
 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M S2H V2"),
 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE AX V2"),
 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE"),
@@ -153,6 +154,7 @@ static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = {
 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 GAMING X"),
 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 I AORUS PRO WIFI"),
 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 UD"),
+	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("Z690M AORUS ELITE AX DDR4"),
 	{ }
 };
@@ -129,6 +129,12 @@ static const struct dmi_system_id dmi_vgbs_allow_list[] = {
 			DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x360 Convertible 15-df0xxx"),
 		},
 	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go"),
+		},
+	},
 	{ }
 };
@@ -9791,7 +9791,7 @@ static int ipr_alloc_mem(struct ipr_ioa_cfg *ioa_cfg)
 						GFP_KERNEL);
 
 		if (!ioa_cfg->hrrq[i].host_rrq)  {
-			while (--i > 0)
+			while (--i >= 0)
 				dma_free_coherent(&pdev->dev,
 					sizeof(u32) * ioa_cfg->hrrq[i].size,
 					ioa_cfg->hrrq[i].host_rrq,
@@ -10064,7 +10064,7 @@ static int ipr_request_other_msi_irqs(struct ipr_ioa_cfg *ioa_cfg,
 				ioa_cfg->vectors_info[i].desc,
 				&ioa_cfg->hrrq[i]);
 		if (rc) {
-			while (--i >= 0)
+			while (--i > 0)
 				free_irq(pci_irq_vector(pdev, i),
 					&ioa_cfg->hrrq[i]);
 			return rc;
@@ -2955,18 +2955,10 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 		spin_unlock_irq(&ndlp->lock);
 		lpfc_disc_state_machine(vport, ndlp, cmdiocb,
 					NLP_EVT_DEVICE_RM);
-		lpfc_els_free_iocb(phba, cmdiocb);
-		lpfc_nlp_put(ndlp);
-
-		/* Presume the node was released. */
-		return;
+		goto out_rsrc_free;
 	}
 
 out:
-	/* Driver is done with the IO. */
-	lpfc_els_free_iocb(phba, cmdiocb);
-	lpfc_nlp_put(ndlp);
-
 	/* At this point, the LOGO processing is complete. NOTE: For a
 	 * pt2pt topology, we are assuming the NPortID will only change
 	 * on link up processing. For a LOGO / PLOGI initiated by the
@@ -2993,6 +2985,10 @@ out:
 			 ndlp->nlp_DID, irsp->ulpStatus,
 			 irsp->un.ulpWord[4], irsp->ulpTimeout,
 			 vport->num_disc_nodes);
+
+		lpfc_els_free_iocb(phba, cmdiocb);
+		lpfc_nlp_put(ndlp);
+
 		lpfc_disc_start(vport);
 		return;
 	}
@@ -3009,6 +3005,10 @@ out:
 		lpfc_disc_state_machine(vport, ndlp, cmdiocb,
 					NLP_EVT_DEVICE_RM);
 	}
+out_rsrc_free:
+	/* Driver is done with the I/O. */
+	lpfc_els_free_iocb(phba, cmdiocb);
+	lpfc_nlp_put(ndlp);
 }
 
 /**
@@ -4448,6 +4448,9 @@ struct wqe_common {
 #define wqe_sup_SHIFT         6
 #define wqe_sup_MASK          0x00000001
 #define wqe_sup_WORD          word11
+#define wqe_ffrq_SHIFT        6
+#define wqe_ffrq_MASK         0x00000001
+#define wqe_ffrq_WORD         word11
 #define wqe_wqec_SHIFT        7
 #define wqe_wqec_MASK         0x00000001
 #define wqe_wqec_WORD         word11
@@ -810,7 +810,8 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 	lpfc_nvmet_invalidate_host(phba, ndlp);
 
 	if (ndlp->nlp_DID == Fabric_DID) {
-		if (vport->port_state <= LPFC_FDISC)
+		if (vport->port_state <= LPFC_FDISC ||
+		    vport->fc_flag & FC_PT2PT)
 			goto out;
 		lpfc_linkdown_port(vport);
 		spin_lock_irq(shost->host_lock);
@@ -1182,7 +1182,8 @@ lpfc_nvme_prep_io_cmd(struct lpfc_vport *vport,
 {
 	struct lpfc_hba *phba = vport->phba;
 	struct nvmefc_fcp_req *nCmd = lpfc_ncmd->nvmeCmd;
-	struct lpfc_iocbq *pwqeq = &(lpfc_ncmd->cur_iocbq);
+	struct nvme_common_command *sqe;
+	struct lpfc_iocbq *pwqeq = &lpfc_ncmd->cur_iocbq;
 	union lpfc_wqe128 *wqe = &pwqeq->wqe;
 	uint32_t req_len;
 
@@ -1239,8 +1240,14 @@ lpfc_nvme_prep_io_cmd(struct lpfc_vport *vport,
 			cstat->control_requests++;
 	}
 
-	if (pnode->nlp_nvme_info & NLP_NVME_NSLER)
+	if (pnode->nlp_nvme_info & NLP_NVME_NSLER) {
 		bf_set(wqe_erp, &wqe->generic.wqe_com, 1);
+		sqe = &((struct nvme_fc_cmd_iu *)
+			nCmd->cmdaddr)->sqe.common;
+		if (sqe->opcode == nvme_admin_async_event)
+			bf_set(wqe_ffrq, &wqe->generic.wqe_com, 1);
+	}
+
 	/*
 	 * Finish initializing those WQE fields that are independent
 	 * of the nvme_cmnd request_buffer
@@ -5381,6 +5381,7 @@ static int _base_assign_fw_reported_qd(struct MPT3SAS_ADAPTER *ioc)
 	Mpi2ConfigReply_t mpi_reply;
 	Mpi2SasIOUnitPage1_t *sas_iounit_pg1 = NULL;
 	Mpi26PCIeIOUnitPage1_t pcie_iounit_pg1;
+	u16 depth;
 	int sz;
 	int rc = 0;
 
@@ -5392,7 +5393,7 @@ static int _base_assign_fw_reported_qd(struct MPT3SAS_ADAPTER *ioc)
 		goto out;
 	/* sas iounit page 1 */
 	sz = offsetof(Mpi2SasIOUnitPage1_t, PhyData);
-	sas_iounit_pg1 = kzalloc(sz, GFP_KERNEL);
+	sas_iounit_pg1 = kzalloc(sizeof(Mpi2SasIOUnitPage1_t), GFP_KERNEL);
 	if (!sas_iounit_pg1) {
 		pr_err("%s: failure at %s:%d/%s()!\n",
 		    ioc->name, __FILE__, __LINE__, __func__);
@@ -5405,16 +5406,16 @@ static int _base_assign_fw_reported_qd(struct MPT3SAS_ADAPTER *ioc)
 		    ioc->name, __FILE__, __LINE__, __func__);
 		goto out;
 	}
-	ioc->max_wideport_qd =
-	    (le16_to_cpu(sas_iounit_pg1->SASWideMaxQueueDepth)) ?
-	    le16_to_cpu(sas_iounit_pg1->SASWideMaxQueueDepth) :
-	    MPT3SAS_SAS_QUEUE_DEPTH;
-	ioc->max_narrowport_qd =
-	    (le16_to_cpu(sas_iounit_pg1->SASNarrowMaxQueueDepth)) ?
-	    le16_to_cpu(sas_iounit_pg1->SASNarrowMaxQueueDepth) :
-	    MPT3SAS_SAS_QUEUE_DEPTH;
-	ioc->max_sata_qd = (sas_iounit_pg1->SATAMaxQDepth) ?
-	    sas_iounit_pg1->SATAMaxQDepth : MPT3SAS_SATA_QUEUE_DEPTH;
+
+	depth = le16_to_cpu(sas_iounit_pg1->SASWideMaxQueueDepth);
+	ioc->max_wideport_qd = (depth ? depth : MPT3SAS_SAS_QUEUE_DEPTH);
+
+	depth = le16_to_cpu(sas_iounit_pg1->SASNarrowMaxQueueDepth);
+	ioc->max_narrowport_qd = (depth ? depth : MPT3SAS_SAS_QUEUE_DEPTH);
+
+	depth = sas_iounit_pg1->SATAMaxQDepth;
+	ioc->max_sata_qd = (depth ? depth : MPT3SAS_SATA_QUEUE_DEPTH);
+
 	/* pcie iounit page 1 */
 	rc = mpt3sas_config_get_pcie_iounit_pg1(ioc, &mpi_reply,
 	    &pcie_iounit_pg1, sizeof(Mpi26PCIeIOUnitPage1_t));
@@ -4526,7 +4526,7 @@ pmcraid_register_interrupt_handler(struct pmcraid_instance *pinstance)
 	return 0;
 
 out_unwind:
-	while (--i > 0)
+	while (--i >= 0)
 		free_irq(pci_irq_vector(pdev, i), &pinstance->hrrq_vector[i]);
 	pci_free_irq_vectors(pdev);
 	return rc;
@@ -331,8 +331,8 @@ struct PVSCSIRingReqDesc {
 	u8	tag;
 	u8	bus;
 	u8	target;
-	u8	vcpuHint;
-	u8	unused[59];
+	u16	vcpuHint;
+	u8	unused[58];
 } __packed;
 
 /*
@@ -179,8 +179,7 @@ s32 _rtw_init_xmit_priv(struct xmit_priv *pxmitpriv, struct adapter *padapter)
 
 	pxmitpriv->free_xmit_extbuf_cnt = num_xmit_extbuf;
 
-	res = rtw_alloc_hwxmits(padapter);
-	if (res) {
+	if (rtw_alloc_hwxmits(padapter)) {
 		res = _FAIL;
 		goto exit;
 	}
@@ -1534,19 +1533,10 @@ int rtw_alloc_hwxmits(struct adapter *padapter)
 
 	hwxmits = pxmitpriv->hwxmits;
 
-	if (pxmitpriv->hwxmit_entry == 5) {
-		hwxmits[0] .sta_queue = &pxmitpriv->bm_pending;
-		hwxmits[1] .sta_queue = &pxmitpriv->vo_pending;
-		hwxmits[2] .sta_queue = &pxmitpriv->vi_pending;
-		hwxmits[3] .sta_queue = &pxmitpriv->bk_pending;
-		hwxmits[4] .sta_queue = &pxmitpriv->be_pending;
-	} else if (pxmitpriv->hwxmit_entry == 4) {
-		hwxmits[0] .sta_queue = &pxmitpriv->vo_pending;
-		hwxmits[1] .sta_queue = &pxmitpriv->vi_pending;
-		hwxmits[2] .sta_queue = &pxmitpriv->be_pending;
-		hwxmits[3] .sta_queue = &pxmitpriv->bk_pending;
-	} else {
-	}
+	hwxmits[0].sta_queue = &pxmitpriv->vo_pending;
+	hwxmits[1].sta_queue = &pxmitpriv->vi_pending;
+	hwxmits[2].sta_queue = &pxmitpriv->be_pending;
+	hwxmits[3].sta_queue = &pxmitpriv->bk_pending;
 
 	return 0;
 }
@@ -465,12 +465,11 @@ static int wpa_set_encryption(struct net_device *dev, struct ieee_param *param,
 
 	if (wep_key_len > 0) {
 		wep_key_len = wep_key_len <= 5 ? 5 : 13;
-		wep_total_len = wep_key_len + FIELD_OFFSET(struct ndis_802_11_wep, KeyMaterial);
-		pwep = kmalloc(wep_total_len, GFP_KERNEL);
+		wep_total_len = wep_key_len + sizeof(*pwep);
+		pwep = kzalloc(wep_total_len, GFP_KERNEL);
 		if (!pwep)
 			goto exit;
 
-		memset(pwep, 0, wep_total_len);
 		pwep->KeyLength = wep_key_len;
 		pwep->Length = wep_total_len;
 		if (wep_key_len == 13) {
@@ -428,7 +428,7 @@ static int goldfish_tty_remove(struct platform_device *pdev)
 	tty_unregister_device(goldfish_tty_driver, qtty->console.index);
 	iounmap(qtty->base);
 	qtty->base = NULL;
-	free_irq(qtty->irq, pdev);
+	free_irq(qtty->irq, qtty);
 	tty_port_destroy(&qtty->port);
 	goldfish_tty_current_line_count--;
 	if (goldfish_tty_current_line_count == 0)
@@ -454,7 +454,7 @@ static void gsm_hex_dump_bytes(const char *fname, const u8 *data,
 		return;
 	}
 
-	prefix = kasprintf(GFP_KERNEL, "%s: ", fname);
+	prefix = kasprintf(GFP_ATOMIC, "%s: ", fname);
 	if (!prefix)
 		return;
 	print_hex_dump(KERN_INFO, prefix, DUMP_PREFIX_OFFSET, 16, 1, data, len,
@@ -1535,6 +1535,8 @@ static inline void __stop_tx(struct uart_8250_port *p)
 
 	if (em485) {
 		unsigned char lsr = serial_in(p, UART_LSR);
+		p->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
+
 		/*
 		 * To provide required timeing and allow FIFO transfer,
 		 * __stop_tx_rs485() must be called only when both FIFO and
@@ -1941,13 +1941,16 @@ int cdnsp_queue_bulk_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
 		}
 
 		if (enqd_len + trb_buff_len >= full_len) {
-			if (need_zero_pkt)
-				zero_len_trb = !zero_len_trb;
-
-			field &= ~TRB_CHAIN;
-			field |= TRB_IOC;
-			more_trbs_coming = false;
-			preq->td.last_trb = ring->enqueue;
+			if (need_zero_pkt && !zero_len_trb) {
+				zero_len_trb = true;
+			} else {
+				zero_len_trb = false;
+				field &= ~TRB_CHAIN;
+				field |= TRB_IOC;
+				more_trbs_coming = false;
+				need_zero_pkt = false;
+				preq->td.last_trb = ring->enqueue;
+			}
 		}
 
 		/* Only set interrupt on short packet for OUT endpoints. */
@@ -1962,7 +1965,7 @@ int cdnsp_queue_bulk_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
 		length_field = TRB_LEN(trb_buff_len) | TRB_TD_SIZE(remainder) |
 			TRB_INTR_TARGET(0);
 
-		cdnsp_queue_trb(pdev, ring, more_trbs_coming | zero_len_trb,
+		cdnsp_queue_trb(pdev, ring, more_trbs_coming,
 				lower_32_bits(send_addr),
 				upper_32_bits(send_addr),
 				length_field,
@@ -5194,7 +5194,7 @@ int dwc2_hcd_init(struct dwc2_hsotg *hsotg)
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	if (!res) {
 		retval = -EINVAL;
-		goto error1;
+		goto error2;
 	}
 	hcd->rsrc_start = res->start;
 	hcd->rsrc_len = resource_size(res);
@@ -122,8 +122,6 @@ struct ffs_ep {
 	struct usb_endpoint_descriptor	*descs[3];
 
 	u8				num;
-
-	int				status;	/* P: epfile->mutex */
 };
 
 struct ffs_epfile {
@@ -227,6 +225,9 @@ struct ffs_io_data {
 	bool use_sg;
 
 	struct ffs_data *ffs;
+
+	int status;
+	struct completion done;
 };
 
 struct ffs_desc_helper {
@@ -707,12 +708,15 @@ static const struct file_operations ffs_ep0_operations = {
 
 static void ffs_epfile_io_complete(struct usb_ep *_ep, struct usb_request *req)
 {
+	struct ffs_io_data *io_data = req->context;
+
 	ENTER();
-	if (req->context) {
-		struct ffs_ep *ep = _ep->driver_data;
-		ep->status = req->status ? req->status : req->actual;
-		complete(req->context);
-	}
+
+	if (req->status)
+		io_data->status = req->status;
+	else
+		io_data->status = req->actual;
+
+	complete(&io_data->done);
 }
 
 static ssize_t ffs_copy_to_iter(void *data, int data_len, struct iov_iter *iter)
@@ -1050,7 +1054,6 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
 		WARN(1, "%s: data_len == -EINVAL\n", __func__);
 		ret = -EINVAL;
 	} else if (!io_data->aio) {
-		DECLARE_COMPLETION_ONSTACK(done);
 		bool interrupted = false;
 
 		req = ep->req;
@@ -1066,7 +1069,8 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
 
 		io_data->buf = data;
 
-		req->context  = &done;
+		init_completion(&io_data->done);
+		req->context = io_data;
 		req->complete = ffs_epfile_io_complete;
 
 		ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC);
@@ -1075,7 +1079,12 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
 
 		spin_unlock_irq(&epfile->ffs->eps_lock);
 
-		if (wait_for_completion_interruptible(&done)) {
+		if (wait_for_completion_interruptible(&io_data->done)) {
+			spin_lock_irq(&epfile->ffs->eps_lock);
+			if (epfile->ep != ep) {
+				ret = -ESHUTDOWN;
+				goto error_lock;
+			}
 			/*
 			 * To avoid race condition with ffs_epfile_io_complete,
 			 * dequeue the request first then check
@@ -1083,17 +1092,18 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
 			 * condition with req->complete callback.
 			 */
 			usb_ep_dequeue(ep->ep, req);
-			wait_for_completion(&done);
-			interrupted = ep->status < 0;
+			spin_unlock_irq(&epfile->ffs->eps_lock);
+			wait_for_completion(&io_data->done);
+			interrupted = io_data->status < 0;
 		}
 
 		if (interrupted)
 			ret = -EINTR;
-		else if (io_data->read && ep->status > 0)
-			ret = __ffs_epfile_read_data(epfile, data, ep->status,
+		else if (io_data->read && io_data->status > 0)
+			ret = __ffs_epfile_read_data(epfile, data, io_data->status,
 						     &io_data->data);
 		else
-			ret = ep->status;
+			ret = io_data->status;
 		goto error_mutex;
 	} else if (!(req = usb_ep_alloc_request(ep->ep, GFP_ATOMIC))) {
 		ret = -ENOMEM;
diff --git a/drivers/usb/gadget/udc/lpc32xx_udc.c b/drivers/usb/gadget/udc/lpc32xx_udc.c
@@ -3014,6 +3014,7 @@ static int lpc32xx_udc_probe(struct platform_device *pdev)
 	}

 	udc->isp1301_i2c_client = isp1301_get_client(isp1301_node);
+	of_node_put(isp1301_node);
 	if (!udc->isp1301_i2c_client) {
 		return -EPROBE_DEFER;
 	}
diff --git a/drivers/usb/serial/io_ti.c b/drivers/usb/serial/io_ti.c
@@ -166,6 +166,7 @@ static const struct usb_device_id edgeport_2port_id_table[] = {
 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_8S) },
 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416) },
 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416B) },
+	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_E5805A) },
 	{ }
 };

@@ -204,6 +205,7 @@ static const struct usb_device_id id_table_combined[] = {
 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_8S) },
 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416) },
 	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416B) },
+	{ USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_E5805A) },
 	{ }
 };
diff --git a/drivers/usb/serial/io_usbexxx.h b/drivers/usb/serial/io_usbexxx.h
@@ -212,6 +212,7 @@
 //
 // Definitions for other product IDs
 #define ION_DEVICE_ID_MT4X56USB		0x1403	// OEM device
+#define ION_DEVICE_ID_E5805A		0x1A01	// OEM device (rebranded Edgeport/4)


 #define GENERATION_ID_FROM_USB_PRODUCT_ID(ProductId) \
diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
@@ -432,6 +432,8 @@ static void option_instat_callback(struct urb *urb);
 #define CINTERION_PRODUCT_CLS8			0x00b0
 #define CINTERION_PRODUCT_MV31_MBIM		0x00b3
 #define CINTERION_PRODUCT_MV31_RMNET		0x00b7
+#define CINTERION_PRODUCT_MV31_2_MBIM		0x00b8
+#define CINTERION_PRODUCT_MV31_2_RMNET		0x00b9
 #define CINTERION_PRODUCT_MV32_WA		0x00f1
 #define CINTERION_PRODUCT_MV32_WB		0x00f2

@@ -1979,6 +1981,10 @@ static const struct usb_device_id option_ids[] = {
 	  .driver_info = RSVD(3)},
 	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_RMNET, 0xff),
 	  .driver_info = RSVD(0)},
+	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_2_MBIM, 0xff),
+	  .driver_info = RSVD(3)},
+	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_2_RMNET, 0xff),
+	  .driver_info = RSVD(0)},
 	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WA, 0xff),
 	  .driver_info = RSVD(3)},
 	{ USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WB, 0xff),
diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
@@ -688,6 +688,7 @@ static int vm_cmdline_set(const char *device,
 	if (!vm_cmdline_parent_registered) {
 		err = device_register(&vm_cmdline_parent);
 		if (err) {
+			put_device(&vm_cmdline_parent);
 			pr_err("Failed to register parent device!\n");
 			return err;
 		}
diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
@@ -254,8 +254,7 @@ void vp_del_vqs(struct virtio_device *vdev)

 	if (vp_dev->msix_affinity_masks) {
 		for (i = 0; i < vp_dev->msix_vectors; i++)
-			if (vp_dev->msix_affinity_masks[i])
-				free_cpumask_var(vp_dev->msix_affinity_masks[i]);
+			free_cpumask_var(vp_dev->msix_affinity_masks[i]);
 	}

 	if (vp_dev->msix_enabled) {
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
@@ -4099,6 +4099,15 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
 	size = size >> bsbits;
 	start = start_off >> bsbits;

+	/*
+	 * For tiny groups (smaller than 8MB) the chosen allocation
+	 * alignment may be larger than group size. Make sure the
+	 * alignment does not move allocation to a different group which
+	 * makes mballoc fail assertions later.
+	 */
+	start = max(start, rounddown(ac->ac_o_ex.fe_logical,
+			(ext4_lblk_t)EXT4_BLOCKS_PER_GROUP(ac->ac_sb)));
+
 	/* don't cover already allocated blocks in selected range */
 	if (ar->pleft && start <= ar->lleft) {
 		size -= ar->lleft + 1 - start;
diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
@@ -1929,7 +1929,8 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
 			struct dx_hash_info *hinfo)
 {
 	unsigned blocksize = dir->i_sb->s_blocksize;
-	unsigned count, continued;
+	unsigned continued;
+	int count;
 	struct buffer_head *bh2;
 	ext4_lblk_t newblock;
 	u32 hash2;
diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
@@ -52,6 +52,16 @@ int ext4_resize_begin(struct super_block *sb)
 	if (!capable(CAP_SYS_RESOURCE))
 		return -EPERM;

+	/*
+	 * If the reserved GDT blocks is non-zero, the resize_inode feature
+	 * should always be set.
+	 */
+	if (EXT4_SB(sb)->s_es->s_reserved_gdt_blocks &&
+	    !ext4_has_feature_resize_inode(sb)) {
+		ext4_error(sb, "resize_inode disabled but reserved GDT blocks non-zero");
+		return -EFSCORRUPTED;
+	}
+
 	/*
 	 * If we are not using the primary superblock/GDT copy don't resize,
 	 * because the user tools have no way of handling this. Probably a
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
@@ -4908,14 +4908,6 @@ no_journal:
 		err = percpu_counter_init(&sbi->s_freeinodes_counter, freei,
 					  GFP_KERNEL);
 	}
-	/*
-	 * Update the checksum after updating free space/inode
-	 * counters. Otherwise the superblock can have an incorrect
-	 * checksum in the buffer cache until it is written out and
-	 * e2fsprogs programs trying to open a file system immediately
-	 * after it is mounted can fail.
-	 */
-	ext4_superblock_csum_set(sb);
 	if (!err)
 		err = percpu_counter_init(&sbi->s_dirs_counter,
 					  ext4_count_dirs(sb), GFP_KERNEL);
@@ -4973,6 +4965,14 @@ no_journal:
 		EXT4_SB(sb)->s_mount_state |= EXT4_ORPHAN_FS;
 		ext4_orphan_cleanup(sb, es);
 		EXT4_SB(sb)->s_mount_state &= ~EXT4_ORPHAN_FS;
+	/*
+	 * Update the checksum after updating free space/inode counters and
+	 * ext4_orphan_cleanup. Otherwise the superblock can have an incorrect
+	 * checksum in the buffer cache until it is written out and
+	 * e2fsprogs programs trying to open a file system immediately
+	 * after it is mounted can fail.
+	 */
+	ext4_superblock_csum_set(sb);
 	if (needs_recovery) {
 		ext4_msg(sb, KERN_INFO, "recovery complete");
 		err = ext4_mark_recovery_complete(sb, es);
diff --git a/fs/io_uring.c b/fs/io_uring.c
@@ -7933,11 +7933,19 @@ static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)

 static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
 {
+	unsigned nr = ctx->nr_user_files;
 	int ret;

 	if (!ctx->file_data)
 		return -ENXIO;
+
+	/*
+	 * Quiesce may unlock ->uring_lock, and while it's not held
+	 * prevent new requests using the table.
+	 */
+	ctx->nr_user_files = 0;
 	ret = io_rsrc_ref_quiesce(ctx->file_data, ctx);
+	ctx->nr_user_files = nr;
 	if (!ret)
 		__io_sqe_files_unregister(ctx);
 	return ret;
@@ -8897,12 +8905,19 @@ static void __io_sqe_buffers_unregister(struct io_ring_ctx *ctx)

 static int io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
 {
+	unsigned nr = ctx->nr_user_bufs;
 	int ret;

 	if (!ctx->buf_data)
 		return -ENXIO;

+	/*
+	 * Quiesce may unlock ->uring_lock, and while it's not held
+	 * prevent new requests using the table.
+	 */
+	ctx->nr_user_bufs = 0;
 	ret = io_rsrc_ref_quiesce(ctx->buf_data, ctx);
+	ctx->nr_user_bufs = nr;
 	if (!ret)
 		__io_sqe_buffers_unregister(ctx);
 	return ret;
diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
@@ -288,6 +288,7 @@ static u32 initiate_file_draining(struct nfs_client *clp,
 		rv = NFS4_OK;
 		break;
 	case -ENOENT:
+		set_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags);
 		/* Embrace your forgetfulness! */
 		rv = NFS4ERR_NOMATCHING_LAYOUT;
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
@@ -469,6 +469,7 @@ pnfs_mark_layout_stateid_invalid(struct pnfs_layout_hdr *lo,
 			pnfs_clear_lseg_state(lseg, lseg_list);
 	pnfs_clear_layoutreturn_info(lo);
 	pnfs_free_returned_lsegs(lo, lseg_list, &range, 0);
+	set_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags);
 	if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags) &&
 	    !test_and_set_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags))
 		pnfs_clear_layoutreturn_waitbit(lo);
@@ -1917,8 +1918,9 @@ static void nfs_layoutget_begin(struct pnfs_layout_hdr *lo)

 static void nfs_layoutget_end(struct pnfs_layout_hdr *lo)
 {
-	if (atomic_dec_and_test(&lo->plh_outstanding))
-		wake_up_var(&lo->plh_outstanding);
+	if (atomic_dec_and_test(&lo->plh_outstanding) &&
+	    test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags))
+		wake_up_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN);
 }

 static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo)
@@ -2025,11 +2027,11 @@ lookup_again:
 	 * If the layout segment list is empty, but there are outstanding
 	 * layoutget calls, then they might be subject to a layoutrecall.
 	 */
-	if ((list_empty(&lo->plh_segs) || !pnfs_layout_is_valid(lo)) &&
+	if (test_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags) &&
 	    atomic_read(&lo->plh_outstanding) != 0) {
 		spin_unlock(&ino->i_lock);
-		lseg = ERR_PTR(wait_var_event_killable(&lo->plh_outstanding,
-					!atomic_read(&lo->plh_outstanding)));
+		lseg = ERR_PTR(wait_on_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN,
+					   TASK_KILLABLE));
 		if (IS_ERR(lseg))
 			goto out_put_layout_hdr;
 		pnfs_put_layout_hdr(lo);
@@ -2152,6 +2154,12 @@ lookup_again:
 		case -ERECALLCONFLICT:
 		case -EAGAIN:
 			break;
+		case -ENODATA:
+			/* The server returned NFS4ERR_LAYOUTUNAVAILABLE */
+			pnfs_layout_set_fail_bit(
+				lo, pnfs_iomode_to_fail_bit(iomode));
+			lseg = NULL;
+			goto out_put_layout_hdr;
 		default:
 			if (!nfs_error_is_fatal(PTR_ERR(lseg))) {
 				pnfs_layout_clear_fail_bit(lo, pnfs_iomode_to_fail_bit(iomode));
@@ -2407,7 +2415,8 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
 		goto out_forget;
 	}

-	if (!pnfs_layout_is_valid(lo) && !pnfs_is_first_layoutget(lo))
+	if (test_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags) &&
+	    !pnfs_is_first_layoutget(lo))
 		goto out_forget;

 	if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
@@ -109,6 +109,7 @@ enum {
 	NFS_LAYOUT_FIRST_LAYOUTGET,	/* Serialize first layoutget */
 	NFS_LAYOUT_INODE_FREEING,	/* The inode is being freed */
 	NFS_LAYOUT_HASHED,		/* The layout visible */
+	NFS_LAYOUT_DRAIN,
 };

 enum layoutdriver_policy_flags {
diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
@@ -194,7 +194,6 @@ nfsd_file_alloc(struct inode *inode, unsigned int may, unsigned int hashval,
 			__set_bit(NFSD_FILE_BREAK_READ, &nf->nf_flags);
 		}
 		nf->nf_mark = NULL;
-		init_rwsem(&nf->nf_rwsem);
 		trace_nfsd_file_alloc(nf);
 	}
 	return nf;
diff --git a/fs/nfsd/filecache.h b/fs/nfsd/filecache.h
@@ -46,7 +46,6 @@ struct nfsd_file {
 	refcount_t		nf_ref;
 	unsigned char		nf_may;
 	struct nfsd_file_mark	*nf_mark;
-	struct rw_semaphore	nf_rwsem;
 };

 int nfsd_file_cache_init(void);
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
@@ -1515,6 +1515,9 @@ static void nfsd4_init_copy_res(struct nfsd4_copy *copy, bool sync)

 static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy)
 {
+	struct file *dst = copy->nf_dst->nf_file;
+	struct file *src = copy->nf_src->nf_file;
+	errseq_t since;
 	ssize_t bytes_copied = 0;
 	u64 bytes_total = copy->cp_count;
 	u64 src_pos = copy->cp_src_pos;
@@ -1527,9 +1530,8 @@ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy)
 	do {
 		if (kthread_should_stop())
 			break;
-		bytes_copied = nfsd_copy_file_range(copy->nf_src->nf_file,
-				src_pos, copy->nf_dst->nf_file, dst_pos,
-				bytes_total);
+		bytes_copied = nfsd_copy_file_range(src, src_pos, dst, dst_pos,
+						    bytes_total);
 		if (bytes_copied <= 0)
 			break;
 		bytes_total -= bytes_copied;
@@ -1539,11 +1541,11 @@ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy)
 	} while (bytes_total > 0 && !copy->cp_synchronous);
 	/* for a non-zero asynchronous copy do a commit of data */
 	if (!copy->cp_synchronous && copy->cp_res.wr_bytes_written > 0) {
-		down_write(&copy->nf_dst->nf_rwsem);
-		status = vfs_fsync_range(copy->nf_dst->nf_file,
-					 copy->cp_dst_pos,
+		since = READ_ONCE(dst->f_wb_err);
+		status = vfs_fsync_range(dst, copy->cp_dst_pos,
 					 copy->cp_res.wr_bytes_written, 0);
-		up_write(&copy->nf_dst->nf_rwsem);
+		if (!status)
+			status = filemap_check_wb_err(dst->f_mapping, since);
 		if (!status)
 			copy->committed = true;
 	}
diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
@@ -525,10 +525,11 @@ __be32 nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos,
 {
 	struct file *src = nf_src->nf_file;
 	struct file *dst = nf_dst->nf_file;
+	errseq_t since;
 	loff_t cloned;
 	__be32 ret = 0;

-	down_write(&nf_dst->nf_rwsem);
+	since = READ_ONCE(dst->f_wb_err);
 	cloned = vfs_clone_file_range(src, src_pos, dst, dst_pos, count, 0);
 	if (cloned < 0) {
 		ret = nfserrno(cloned);
@@ -542,6 +543,8 @@ __be32 nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos,
 		loff_t dst_end = count ? dst_pos + count - 1 : LLONG_MAX;
 		int status = vfs_fsync_range(dst, dst_pos, dst_end, 0);

+		if (!status)
+			status = filemap_check_wb_err(dst->f_mapping, since);
 		if (!status)
 			status = commit_inode_metadata(file_inode(src));
 		if (status < 0) {
@@ -551,7 +554,6 @@ __be32 nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos,
 		}
 	}
 out_err:
-	up_write(&nf_dst->nf_rwsem);
 	return ret;
 }

@@ -954,6 +956,7 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
 	struct super_block *sb = file_inode(file)->i_sb;
 	struct svc_export *exp;
 	struct iov_iter iter;
+	errseq_t since;
 	__be32 nfserr;
 	int host_err;
 	int use_wgather;
@@ -991,8 +994,8 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
 		flags |= RWF_SYNC;

 	iov_iter_kvec(&iter, WRITE, vec, vlen, *cnt);
+	since = READ_ONCE(file->f_wb_err);
 	if (flags & RWF_SYNC) {
-		down_write(&nf->nf_rwsem);
 		if (verf)
 			nfsd_copy_boot_verifier(verf,
 					net_generic(SVC_NET(rqstp),
@@ -1001,15 +1004,12 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
 		if (host_err < 0)
 			nfsd_reset_boot_verifier(net_generic(SVC_NET(rqstp),
 						 nfsd_net_id));
-		up_write(&nf->nf_rwsem);
 	} else {
-		down_read(&nf->nf_rwsem);
 		if (verf)
 			nfsd_copy_boot_verifier(verf,
 					net_generic(SVC_NET(rqstp),
 					nfsd_net_id));
 		host_err = vfs_iter_write(file, &iter, &pos, flags);
-		up_read(&nf->nf_rwsem);
 	}
 	if (host_err < 0) {
 		nfsd_reset_boot_verifier(net_generic(SVC_NET(rqstp),
@@ -1019,6 +1019,9 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
 	*cnt = host_err;
 	nfsd_stats_io_write_add(exp, *cnt);
 	fsnotify_modify(file);
+	host_err = filemap_check_wb_err(file->f_mapping, since);
+	if (host_err < 0)
+		goto out_nfserr;

 	if (stable && use_wgather) {
 		host_err = wait_for_concurrent_writes(file);
@@ -1099,19 +1102,6 @@ out:
 }

 #ifdef CONFIG_NFSD_V3
-static int
-nfsd_filemap_write_and_wait_range(struct nfsd_file *nf, loff_t offset,
-				  loff_t end)
-{
-	struct address_space *mapping = nf->nf_file->f_mapping;
-	int ret = filemap_fdatawrite_range(mapping, offset, end);
-
-	if (ret)
-		return ret;
-	filemap_fdatawait_range_keep_errors(mapping, offset, end);
-	return 0;
-}
-
 /*
  * Commit all pending writes to stable storage.
  *
@@ -1142,25 +1132,25 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp,
 	if (err)
 		goto out;
 	if (EX_ISSYNC(fhp->fh_export)) {
-		int err2 = nfsd_filemap_write_and_wait_range(nf, offset, end);
+		errseq_t since = READ_ONCE(nf->nf_file->f_wb_err);
+		int err2;

-		down_write(&nf->nf_rwsem);
-		if (!err2)
-			err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
+		err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
 		switch (err2) {
 		case 0:
 			nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net,
 						nfsd_net_id));
+			err2 = filemap_check_wb_err(nf->nf_file->f_mapping,
+						    since);
 			break;
 		case -EINVAL:
 			err = nfserr_notsupp;
 			break;
 		default:
-			err = nfserrno(err2);
 			nfsd_reset_boot_verifier(net_generic(nf->nf_net,
 						 nfsd_net_id));
 		}
-		up_write(&nf->nf_rwsem);
+		err = nfserrno(err2);
 	} else
 		nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net,
 					  nfsd_net_id));
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
@@ -79,6 +79,7 @@
 #include <linux/capability.h>
 #include <linux/quotaops.h>
 #include <linux/blkdev.h>
+#include <linux/sched/mm.h>
 #include "../internal.h" /* ugh */

 #include <linux/uaccess.h>
@@ -425,9 +426,11 @@ EXPORT_SYMBOL(mark_info_dirty);
 int dquot_acquire(struct dquot *dquot)
 {
 	int ret = 0, ret2 = 0;
+	unsigned int memalloc;
 	struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);

 	mutex_lock(&dquot->dq_lock);
+	memalloc = memalloc_nofs_save();
 	if (!test_bit(DQ_READ_B, &dquot->dq_flags)) {
 		ret = dqopt->ops[dquot->dq_id.type]->read_dqblk(dquot);
 		if (ret < 0)
@@ -458,6 +461,7 @@ int dquot_acquire(struct dquot *dquot)
 	smp_mb__before_atomic();
 	set_bit(DQ_ACTIVE_B, &dquot->dq_flags);
 out_iolock:
+	memalloc_nofs_restore(memalloc);
 	mutex_unlock(&dquot->dq_lock);
 	return ret;
 }
@@ -469,9 +473,11 @@ EXPORT_SYMBOL(dquot_acquire);
 int dquot_commit(struct dquot *dquot)
 {
 	int ret = 0;
+	unsigned int memalloc;
 	struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);

 	mutex_lock(&dquot->dq_lock);
+	memalloc = memalloc_nofs_save();
 	if (!clear_dquot_dirty(dquot))
 		goto out_lock;
 	/* Inactive dquot can be only if there was error during read/init
@@ -481,6 +487,7 @@ int dquot_commit(struct dquot *dquot)
 	else
 		ret = -EIO;
 out_lock:
+	memalloc_nofs_restore(memalloc);
 	mutex_unlock(&dquot->dq_lock);
 	return ret;
 }
@@ -492,9 +499,11 @@ EXPORT_SYMBOL(dquot_commit);
 int dquot_release(struct dquot *dquot)
 {
 	int ret = 0, ret2 = 0;
+	unsigned int memalloc;
 	struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);

 	mutex_lock(&dquot->dq_lock);
+	memalloc = memalloc_nofs_save();
 	/* Check whether we are not racing with some other dqget() */
 	if (dquot_is_busy(dquot))
 		goto out_dqlock;
@@ -510,6 +519,7 @@ int dquot_release(struct dquot *dquot)
 	}
 	clear_bit(DQ_ACTIVE_B, &dquot->dq_flags);
 out_dqlock:
+	memalloc_nofs_restore(memalloc);
 	mutex_unlock(&dquot->dq_lock);
 	return ret;
 }
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
@@ -121,6 +121,8 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio);

 extern struct backing_dev_info noop_backing_dev_info;

+int bdi_init(struct backing_dev_info *bdi);
+
 /**
  * writeback_in_progress - determine whether there is writeback in progress
  * @wb: bdi_writeback of interest
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
@@ -564,7 +564,7 @@ static void add_dma_entry(struct dma_debug_entry *entry, unsigned long attrs)

 	rc = active_cacheline_insert(entry);
 	if (rc == -ENOMEM) {
-		pr_err("cacheline tracking ENOMEM, dma-debug disabled\n");
+		pr_err_once("cacheline tracking ENOMEM, dma-debug disabled\n");
 		global_disable = true;
 	} else if (rc == -EEXIST && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
 		err_printk(entry->dev, entry,
diff --git a/lib/Kconfig b/lib/Kconfig
@@ -123,6 +123,9 @@ config INDIRECT_IOMEM_FALLBACK

 source "lib/crypto/Kconfig"

+config LIB_MEMNEQ
+	bool
+
 config CRC_CCITT
 	tristate "CRC-CCITT functions"
 	help
diff --git a/lib/Makefile b/lib/Makefile
@@ -249,6 +249,7 @@ obj-$(CONFIG_DIMLIB) += dim/
 obj-$(CONFIG_SIGNATURE) += digsig.o

 lib-$(CONFIG_CLZ_TAB) += clz_tab.o
+lib-$(CONFIG_LIB_MEMNEQ) += memneq.o

 obj-$(CONFIG_GENERIC_STRNCPY_FROM_USER) += strncpy_from_user.o
 obj-$(CONFIG_GENERIC_STRNLEN_USER) += strnlen_user.o
diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
@@ -71,6 +71,7 @@ config CRYPTO_LIB_CURVE25519
 	tristate "Curve25519 scalar multiplication library"
 	depends on CRYPTO_ARCH_HAVE_LIB_CURVE25519 || !CRYPTO_ARCH_HAVE_LIB_CURVE25519
 	select CRYPTO_LIB_CURVE25519_GENERIC if CRYPTO_ARCH_HAVE_LIB_CURVE25519=n
+	select LIB_MEMNEQ
 	help
 	  Enable the Curve25519 library interface. This interface may be
 	  fulfilled by either the generic implementation or an arch-specific
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
@@ -229,20 +229,13 @@ static __init int bdi_class_init(void)
 }
 postcore_initcall(bdi_class_init);

-static int bdi_init(struct backing_dev_info *bdi);
-
 static int __init default_bdi_init(void)
 {
-	int err;
-
 	bdi_wq = alloc_workqueue("writeback", WQ_MEM_RECLAIM | WQ_UNBOUND |
 				 WQ_SYSFS, 0);
 	if (!bdi_wq)
 		return -ENOMEM;
-
-	err = bdi_init(&noop_backing_dev_info);
-
-	return err;
+	return 0;
 }
 subsys_initcall(default_bdi_init);

@@ -784,7 +777,7 @@ static void cgwb_remove_from_bdi_list(struct bdi_writeback *wb)

 #endif	/* CONFIG_CGROUP_WRITEBACK */

-static int bdi_init(struct backing_dev_info *bdi)
+int bdi_init(struct backing_dev_info *bdi)
 {
 	int ret;

diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
@@ -1654,9 +1654,12 @@ static int ax25_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
 			int flags)
 {
 	struct sock *sk = sock->sk;
-	struct sk_buff *skb;
+	struct sk_buff *skb, *last;
+	struct sk_buff_head *sk_queue;
 	int copied;
 	int err = 0;
+	int off = 0;
+	long timeo;

 	lock_sock(sk);
 	/*
@@ -1668,11 +1671,29 @@ static int ax25_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
 		goto out;
 	}

-	/* Now we can treat all alike */
-	skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
-				flags & MSG_DONTWAIT, &err);
-	if (skb == NULL)
-		goto out;
+	/* We need support for non-blocking reads. */
+	sk_queue = &sk->sk_receive_queue;
+	skb = __skb_try_recv_datagram(sk, sk_queue, flags, &off, &err, &last);
+	/* If no packet is available, release_sock(sk) and try again. */
+	if (!skb) {
+		if (err != -EAGAIN)
+			goto out;
+		release_sock(sk);
+		timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+		while (timeo && !__skb_wait_for_more_packets(sk, sk_queue, &err,
+							     &timeo, last)) {
+			skb = __skb_try_recv_datagram(sk, sk_queue, flags, &off,
+						      &err, &last);
+			if (skb)
+				break;
+
+			if (err != -EAGAIN)
+				goto done;
+		}
+		if (!skb)
+			goto done;
+		lock_sock(sk);
+	}

 	if (!sk_to_ax25(sk)->pidincl)
 		skb_pull(skb, 1);		/* Remove PID */
@@ -1719,6 +1740,7 @@ static int ax25_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
 out:
 	release_sock(sk);

+done:
 	return err;
 }
diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
@@ -502,14 +502,15 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 	struct ipcm6_cookie ipc6;
 	int addr_len = msg->msg_namelen;
 	int transhdrlen = 4; /* zero session-id */
-	int ulen = len + transhdrlen;
+	int ulen;
 	int err;

 	/* Rough check on arithmetic overflow,
 	 * better check is made in ip6_append_data().
 	 */
-	if (len > INT_MAX)
+	if (len > INT_MAX - transhdrlen)
 		return -EMSGSIZE;
+	ulen = len + transhdrlen;

 	/* Mirror BSD error message compatibility */
 	if (msg->msg_flags & MSG_OOB)
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
@@ -651,6 +651,7 @@ static struct rpc_clnt *__rpc_clone_client(struct rpc_create_args *args,
 	new->cl_discrtry = clnt->cl_discrtry;
 	new->cl_chatty = clnt->cl_chatty;
 	new->cl_principal = clnt->cl_principal;
+	new->cl_max_connect = clnt->cl_max_connect;
 	return new;

 out_err: