Merge 4.9.183 into android-4.9
Changes in 4.9.183
	rapidio: fix a NULL pointer dereference when create_workqueue() fails
	fs/fat/file.c: issue flush after the writeback of FAT
	sysctl: return -EINVAL if val violates minmax
	ipc: prevent lockup on alloc_msg and free_msg
	ARM: prevent tracing IPI_CPU_BACKTRACE
	hugetlbfs: on restore reserve error path retain subpool reservation
	mem-hotplug: fix node spanned pages when we have a node with only ZONE_MOVABLE
	mm/cma.c: fix crash on CMA allocation if bitmap allocation fails
	mm/cma_debug.c: fix the break condition in cma_maxchunk_get()
	mm/slab.c: fix an infinite loop in leaks_show()
	kernel/sys.c: prctl: fix false positive in validate_prctl_map()
	drivers: thermal: tsens: Don't print error message on -EPROBE_DEFER
	mfd: tps65912-spi: Add missing of table registration
	mfd: intel-lpss: Set the device in reset state when init
	mfd: twl6040: Fix device init errors for ACCCTL register
	perf/x86/intel: Allow PEBS multi-entry in watermark mode
	drm/bridge: adv7511: Fix low refresh rate selection
	objtool: Don't use ignore flag for fake jumps
	pwm: meson: Use the spin-lock only to protect register modifications
	ntp: Allow TAI-UTC offset to be set to zero
	f2fs: fix to avoid panic in do_recover_data()
	f2fs: fix to clear dirty inode in error path of f2fs_iget()
	f2fs: fix to do sanity check on valid block count of segment
	configfs: fix possible use-after-free in configfs_register_group
	uml: fix a boot splat wrt use of cpu_all_mask
	watchdog: imx2_wdt: Fix set_timeout for big timeout values
	watchdog: fix compile time error of pretimeout governors
	iommu/vt-d: Set intel_iommu_gfx_mapped correctly
	ALSA: hda - Register irq handler after the chip initialization
	nvmem: core: fix read buffer in place
	fuse: retrieve: cap requested size to negotiated max_write
	nfsd: allow fh_want_write to be called twice
	x86/PCI: Fix PCI IRQ routing table memory leak
	platform/chrome: cros_ec_proto: check for NULL transfer function
	soc: mediatek: pwrap: Zero initialize rdata in pwrap_init_cipher
	clk: rockchip: Turn on "aclk_dmac1" for suspend on rk3288
	ARM: dts: imx6sx: Specify IMX6SX_CLK_IPG as "ahb" clock to SDMA
	ARM: dts: imx7d: Specify IMX7D_CLK_IPG as "ipg" clock to SDMA
	ARM: dts: imx6ul: Specify IMX6UL_CLK_IPG as "ipg" clock to SDMA
	ARM: dts: imx6sx: Specify IMX6SX_CLK_IPG as "ipg" clock to SDMA
	ARM: dts: imx6qdl: Specify IMX6QDL_CLK_IPG as "ipg" clock to SDMA
	PCI: rpadlpar: Fix leaked device_node references in add/remove paths
	platform/x86: intel_pmc_ipc: adding error handling
	PCI: rcar: Fix a potential NULL pointer dereference
	PCI: rcar: Fix 64bit MSI message address handling
	video: hgafb: fix potential NULL pointer dereference
	video: imsttfb: fix potential NULL pointer dereferences
	PCI: xilinx: Check for __get_free_pages() failure
	gpio: gpio-omap: add check for off wake capable gpios
	dmaengine: idma64: Use actual device for DMA transfers
	pwm: tiehrpwm: Update shadow register for disabling PWMs
	ARM: dts: exynos: Always enable necessary APIO_1V8 and ABB_1V8 regulators on Arndale Octa
	pwm: Fix deadlock warning when removing PWM device
	ARM: exynos: Fix undefined instruction during Exynos5422 resume
	Revert "Bluetooth: Align minimum encryption key size for LE and BR/EDR connections"
	ALSA: seq: Cover unsubscribe_port() in list_mutex
	ALSA: oxfw: allow PCM capture for Stanton SCS.1m
	libata: Extend quirks for the ST1000LM024 drives with NOLPM quirk
	mm/list_lru.c: fix memory leak in __memcg_init_list_lru_node
	fs/ocfs2: fix race in ocfs2_dentry_attach_lock()
	signal/ptrace: Don't leak unitialized kernel memory with PTRACE_PEEK_SIGINFO
	ptrace: restore smp_rmb() in __ptrace_may_access()
	media: v4l2-ioctl: clear fields in s_parm
	i2c: acorn: fix i2c warning
	bcache: fix stack corruption by PRECEDING_KEY()
	cgroup: Use css_tryget() instead of css_tryget_online() in task_get_css()
	ASoC: cs42xx8: Add regcache mask dirty
	ASoC: fsl_asrc: Fix the issue about unsupported rate
	x86/uaccess, kcov: Disable stack protector
	ALSA: seq: Protect in-kernel ioctl calls with mutex
	ALSA: seq: Fix race of get-subscription call vs port-delete ioctls
	Revert "ALSA: seq: Protect in-kernel ioctl calls with mutex"
	Drivers: misc: fix out-of-bounds access in function param_set_kgdbts_var
	scsi: lpfc: add check for loss of ndlp when sending RRQ
	arm64/mm: Inhibit huge-vmap with ptdump
	scsi: bnx2fc: fix incorrect cast to u64 on shift operation
	selftests/timers: Add missing fflush(stdout) calls
	usbnet: ipheth: fix racing condition
	KVM: x86/pmu: do not mask the value that is written to fixed PMUs
	KVM: s390: fix memory slot handling for KVM_SET_USER_MEMORY_REGION
	drm/vmwgfx: integer underflow in vmw_cmd_dx_set_shader() leading to an invalid read
	drm/vmwgfx: NULL pointer dereference from vmw_cmd_dx_view_define()
	usb: dwc2: Fix DMA cache alignment issues
	USB: Fix chipmunk-like voice when using Logitech C270 for recording audio.
	USB: usb-storage: Add new ID to ums-realtek
	USB: serial: pl2303: add Allied Telesis VT-Kit3
	USB: serial: option: add support for Simcom SIM7500/SIM7600 RNDIS mode
	USB: serial: option: add Telit 0x1260 and 0x1261 compositions
	rtc: pcf8523: don't return invalid date when battery is low
	ax25: fix inconsistent lock state in ax25_destroy_timer
	be2net: Fix number of Rx queues used for flow hashing
	ipv6: flowlabel: fl6_sock_lookup() must use atomic_inc_not_zero
	lapb: fixed leak of control-blocks.
	neigh: fix use-after-free read in pneigh_get_next
	sunhv: Fix device naming inconsistency between sunhv_console and sunhv_reg
	Revert "staging: vc04_services: prevent integer overflow in create_pagelist()"
	perf/x86/intel/ds: Fix EVENT vs. UEVENT PEBS constraints
	selftests: netfilter: missing error check when setting up veth interface
	mISDN: make sure device name is NUL terminated
	x86/CPU/AMD: Don't force the CPB cap when running under a hypervisor
	perf/ring_buffer: Fix exposing a temporarily decreased data_head
	perf/ring_buffer: Add ordering to rb->nest increment
	gpio: fix gpio-adp5588 build errors
	net: tulip: de4x5: Drop redundant MODULE_DEVICE_TABLE()
	i2c: dev: fix potential memory leak in i2cdev_ioctl_rdwr
	configfs: Fix use-after-free when accessing sd->s_dentry
	perf data: Fix 'strncat may truncate' build failure with recent gcc
	perf record: Fix s390 missing module symbol and warning for non-root users
	ia64: fix build errors by exporting paddr_to_nid()
	KVM: PPC: Book3S: Use new mutex to synchronize access to rtas token list
	KVM: PPC: Book3S HV: Don't take kvm->lock around kvm_for_each_vcpu
	net: sh_eth: fix mdio access in sh_eth_close() for R-Car Gen2 and RZ/A1 SoCs
	scsi: libcxgbi: add a check for NULL pointer in cxgbi_check_route()
	scsi: smartpqi: properly set both the DMA mask and the coherent DMA mask
	scsi: libsas: delete sas port if expander discover failed
	mlxsw: spectrum: Prevent force of 56G
	Abort file_remove_privs() for non-reg. files
	Linux 4.9.183

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
 Makefile | 2
@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 9
-SUBLEVEL = 182
+SUBLEVEL = 183
 EXTRAVERSION =
 NAME = Roaring Lionus
 
@@ -110,6 +110,7 @@
                regulator-name = "PVDD_APIO_1V8";
                regulator-min-microvolt = <1800000>;
                regulator-max-microvolt = <1800000>;
+               regulator-always-on;
            };

            ldo3_reg: LDO3 {
@@ -148,6 +149,7 @@
                regulator-name = "PVDD_ABB_1V8";
                regulator-min-microvolt = <1800000>;
                regulator-max-microvolt = <1800000>;
+               regulator-always-on;
            };

            ldo9_reg: LDO9 {
@@ -875,7 +875,7 @@
            compatible = "fsl,imx6q-sdma", "fsl,imx35-sdma";
            reg = <0x020ec000 0x4000>;
            interrupts = <0 2 IRQ_TYPE_LEVEL_HIGH>;
-           clocks = <&clks IMX6QDL_CLK_SDMA>,
+           clocks = <&clks IMX6QDL_CLK_IPG>,
                     <&clks IMX6QDL_CLK_SDMA>;
            clock-names = "ipg", "ahb";
            #dma-cells = <3>;
@@ -704,7 +704,7 @@
            reg = <0x020ec000 0x4000>;
            interrupts = <0 2 IRQ_TYPE_LEVEL_HIGH>;
            clocks = <&clks IMX6SL_CLK_SDMA>,
-                    <&clks IMX6SL_CLK_SDMA>;
+                    <&clks IMX6SL_CLK_AHB>;
            clock-names = "ipg", "ahb";
            #dma-cells = <3>;
            /* imx6sl reuses imx6q sdma firmware */
@@ -751,7 +751,7 @@
            compatible = "fsl,imx6sx-sdma", "fsl,imx6q-sdma";
            reg = <0x020ec000 0x4000>;
            interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
-           clocks = <&clks IMX6SX_CLK_SDMA>,
+           clocks = <&clks IMX6SX_CLK_IPG>,
                     <&clks IMX6SX_CLK_SDMA>;
            clock-names = "ipg", "ahb";
            #dma-cells = <3>;
@@ -669,7 +669,7 @@
                         "fsl,imx35-sdma";
            reg = <0x020ec000 0x4000>;
            interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
-           clocks = <&clks IMX6UL_CLK_SDMA>,
+           clocks = <&clks IMX6UL_CLK_IPG>,
                     <&clks IMX6UL_CLK_SDMA>;
            clock-names = "ipg", "ahb";
            #dma-cells = <3>;
@@ -962,8 +962,8 @@
            compatible = "fsl,imx7d-sdma", "fsl,imx35-sdma";
            reg = <0x30bd0000 0x10000>;
            interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
-           clocks = <&clks IMX7D_SDMA_CORE_CLK>,
-                    <&clks IMX7D_AHB_CHANNEL_ROOT_CLK>;
+           clocks = <&clks IMX7D_IPG_ROOT_CLK>,
+                    <&clks IMX7D_SDMA_CORE_CLK>;
            clock-names = "ipg", "ahb";
            #dma-cells = <3>;
            fsl,sdma-ram-script-name = "imx/sdma/sdma-imx7d.bin";
@@ -5,6 +5,7 @@
 #include <linux/threads.h>
 #include <asm/irq.h>
 
+/* number of IPIS _not_ including IPI_CPU_BACKTRACE */
 #define NR_IPI 7
 
 typedef struct {
@@ -75,6 +75,10 @@ enum ipi_msg_type {
    IPI_CPU_STOP,
    IPI_IRQ_WORK,
    IPI_COMPLETION,
+   /*
+    * CPU_BACKTRACE is special and not included in NR_IPI
+    * or tracable with trace_ipi_*
+    */
    IPI_CPU_BACKTRACE,
    /*
     * SGI8-15 can be reserved by secure firmware, and thus may
@@ -801,7 +805,7 @@ core_initcall(register_cpufreq_notifier);
 
 static void raise_nmi(cpumask_t *mask)
 {
-   smp_cross_call(mask, IPI_CPU_BACKTRACE);
+   __smp_cross_call(mask, IPI_CPU_BACKTRACE);
 }
 
 void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
@@ -500,8 +500,27 @@ early_wakeup:
 
 static void exynos5420_prepare_pm_resume(void)
 {
+   unsigned int mpidr, cluster;
+
+   mpidr = read_cpuid_mpidr();
+   cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+
    if (IS_ENABLED(CONFIG_EXYNOS5420_MCPM))
        WARN_ON(mcpm_cpu_powered_up());
+
+   if (IS_ENABLED(CONFIG_HW_PERF_EVENTS) && cluster != 0) {
+       /*
+        * When system is resumed on the LITTLE/KFC core (cluster 1),
+        * the DSCR is not properly updated until the power is turned
+        * on also for the cluster 0. Enable it for a while to
+        * propagate the SPNIDEN and SPIDEN signals from Secure JTAG
+        * block and avoid undefined instruction issue on CP14 reset.
+        */
+       pmu_raw_writel(S5P_CORE_LOCAL_PWR_EN,
+               EXYNOS_COMMON_CONFIGURATION(0));
+       pmu_raw_writel(0,
+               EXYNOS_COMMON_CONFIGURATION(0));
+   }
 }
 
 static void exynos5420_pm_resume(void)
@@ -774,13 +774,18 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys)
 
 int __init arch_ioremap_pud_supported(void)
 {
-   /* only 4k granule supports level 1 block mappings */
-   return IS_ENABLED(CONFIG_ARM64_4K_PAGES);
+   /*
+    * Only 4k granule supports level 1 block mappings.
+    * SW table walks can't handle removal of intermediate entries.
+    */
+   return IS_ENABLED(CONFIG_ARM64_4K_PAGES) &&
+          !IS_ENABLED(CONFIG_ARM64_PTDUMP_DEBUGFS);
 }
 
 int __init arch_ioremap_pmd_supported(void)
 {
-   return 1;
+   /* See arch_ioremap_pud_supported() */
+   return !IS_ENABLED(CONFIG_ARM64_PTDUMP_DEBUGFS);
 }
 
 int pud_set_huge(pud_t *pud, phys_addr_t phys, pgprot_t prot)
@@ -49,6 +49,7 @@ paddr_to_nid(unsigned long paddr)
 
    return (i < num_node_memblks) ? node_memblk[i].nid : (num_node_memblks ? -1 : 0);
 }
+EXPORT_SYMBOL(paddr_to_nid);
 
 #if defined(CONFIG_SPARSEMEM) && defined(CONFIG_NUMA)
 /*
@@ -271,6 +271,7 @@ struct kvm_arch {
 #ifdef CONFIG_PPC_BOOK3S_64
    struct list_head spapr_tce_tables;
    struct list_head rtas_tokens;
+   struct mutex rtas_token_lock;
    DECLARE_BITMAP(enabled_hcalls, MAX_HCALL_OPCODE/4 + 1);
 #endif
 #ifdef CONFIG_KVM_MPIC
@@ -811,6 +811,7 @@ int kvmppc_core_init_vm(struct kvm *kvm)
 #ifdef CONFIG_PPC64
    INIT_LIST_HEAD_RCU(&kvm->arch.spapr_tce_tables);
    INIT_LIST_HEAD(&kvm->arch.rtas_tokens);
+   mutex_init(&kvm->arch.rtas_token_lock);
 #endif
 
    return kvm->arch.kvm_ops->init_vm(kvm);
@@ -374,12 +374,7 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
 
 static struct kvm_vcpu *kvmppc_find_vcpu(struct kvm *kvm, int id)
 {
-   struct kvm_vcpu *ret;
-
-   mutex_lock(&kvm->lock);
-   ret = kvm_get_vcpu_by_id(kvm, id);
-   mutex_unlock(&kvm->lock);
-   return ret;
+   return kvm_get_vcpu_by_id(kvm, id);
 }
 
 static void init_vpa(struct kvm_vcpu *vcpu, struct lppaca *vpa)
@@ -1098,7 +1093,6 @@ static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
    struct kvmppc_vcore *vc = vcpu->arch.vcore;
    u64 mask;
 
-   mutex_lock(&kvm->lock);
    spin_lock(&vc->lock);
    /*
     * If ILE (interrupt little-endian) has changed, update the
@@ -1132,7 +1126,6 @@ static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
    mask &= 0xFFFFFFFF;
    vc->lpcr = (vc->lpcr & ~mask) | (new_lpcr & mask);
    spin_unlock(&vc->lock);
-   mutex_unlock(&kvm->lock);
 }
 
 static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
@@ -133,7 +133,7 @@ static int rtas_token_undefine(struct kvm *kvm, char *name)
 {
    struct rtas_token_definition *d, *tmp;
 
-   lockdep_assert_held(&kvm->lock);
+   lockdep_assert_held(&kvm->arch.rtas_token_lock);
 
    list_for_each_entry_safe(d, tmp, &kvm->arch.rtas_tokens, list) {
        if (rtas_name_matches(d->handler->name, name)) {
@@ -154,7 +154,7 @@ static int rtas_token_define(struct kvm *kvm, char *name, u64 token)
    bool found;
    int i;
 
-   lockdep_assert_held(&kvm->lock);
+   lockdep_assert_held(&kvm->arch.rtas_token_lock);
 
    list_for_each_entry(d, &kvm->arch.rtas_tokens, list) {
        if (d->token == token)
@@ -193,14 +193,14 @@ int kvm_vm_ioctl_rtas_define_token(struct kvm *kvm, void __user *argp)
    if (copy_from_user(&args, argp, sizeof(args)))
        return -EFAULT;
 
-   mutex_lock(&kvm->lock);
+   mutex_lock(&kvm->arch.rtas_token_lock);
 
    if (args.token)
        rc = rtas_token_define(kvm, args.name, args.token);
    else
        rc = rtas_token_undefine(kvm, args.name);
 
-   mutex_unlock(&kvm->lock);
+   mutex_unlock(&kvm->arch.rtas_token_lock);
 
    return rc;
 }
@@ -232,7 +232,7 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
    orig_rets = args.rets;
    args.rets = &args.args[be32_to_cpu(args.nargs)];
 
-   mutex_lock(&vcpu->kvm->lock);
+   mutex_lock(&vcpu->kvm->arch.rtas_token_lock);
 
    rc = -ENOENT;
    list_for_each_entry(d, &vcpu->kvm->arch.rtas_tokens, list) {
@@ -243,7 +243,7 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
        }
    }
 
-   mutex_unlock(&vcpu->kvm->lock);
+   mutex_unlock(&vcpu->kvm->arch.rtas_token_lock);
 
    if (rc == 0) {
        args.rets = orig_rets;
@@ -269,8 +269,6 @@ void kvmppc_rtas_tokens_free(struct kvm *kvm)
 {
    struct rtas_token_definition *d, *tmp;
 
-   lockdep_assert_held(&kvm->lock);
-
    list_for_each_entry_safe(d, tmp, &kvm->arch.rtas_tokens, list) {
        list_del(&d->list);
        kfree(d);
@@ -3288,21 +3288,28 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
                   const struct kvm_memory_slot *new,
                   enum kvm_mr_change change)
 {
-   int rc;
-
-   /* If the basics of the memslot do not change, we do not want
-    * to update the gmap. Every update causes several unnecessary
-    * segment translation exceptions. This is usually handled just
-    * fine by the normal fault handler + gmap, but it will also
-    * cause faults on the prefix page of running guest CPUs.
-    */
-   if (old->userspace_addr == mem->userspace_addr &&
-       old->base_gfn * PAGE_SIZE == mem->guest_phys_addr &&
-       old->npages * PAGE_SIZE == mem->memory_size)
-       return;
-
-   rc = gmap_map_segment(kvm->arch.gmap, mem->userspace_addr,
-       mem->guest_phys_addr, mem->memory_size);
+   int rc = 0;
+
+   switch (change) {
+   case KVM_MR_DELETE:
+       rc = gmap_unmap_segment(kvm->arch.gmap, old->base_gfn * PAGE_SIZE,
+                   old->npages * PAGE_SIZE);
+       break;
+   case KVM_MR_MOVE:
+       rc = gmap_unmap_segment(kvm->arch.gmap, old->base_gfn * PAGE_SIZE,
+                   old->npages * PAGE_SIZE);
+       if (rc)
+           break;
+       /* FALLTHROUGH */
+   case KVM_MR_CREATE:
+       rc = gmap_map_segment(kvm->arch.gmap, mem->userspace_addr,
+                     mem->guest_phys_addr, mem->memory_size);
+       break;
+   case KVM_MR_FLAGS_ONLY:
+       break;
+   default:
+       WARN(1, "Unknown KVM MR CHANGE: %d\n", change);
+   }
    if (rc)
        pr_warn("failed to commit memory region\n");
    return;
@@ -56,7 +56,7 @@ static int itimer_one_shot(struct clock_event_device *evt)
 static struct clock_event_device timer_clockevent = {
    .name           = "posix-timer",
    .rating         = 250,
-   .cpumask        = cpu_all_mask,
+   .cpumask        = cpu_possible_mask,
    .features       = CLOCK_EVT_FEAT_PERIODIC |
                      CLOCK_EVT_FEAT_ONESHOT,
    .set_state_shutdown = itimer_shutdown,
@@ -2867,7 +2867,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
        return ret;
 
    if (event->attr.precise_ip) {
-       if (!(event->attr.freq || event->attr.wakeup_events)) {
+       if (!(event->attr.freq || (event->attr.wakeup_events && !event->attr.watermark))) {
            event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
            if (!(event->attr.sample_type &
                  ~intel_pmu_free_running_flags(event)))
|
|||||||
@@ -655,7 +655,7 @@ struct event_constraint intel_core2_pebs_event_constraints[] = {
|
|||||||
INTEL_FLAGS_UEVENT_CONSTRAINT(0x1fc7, 0x1), /* SIMD_INST_RETURED.ANY */
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x1fc7, 0x1), /* SIMD_INST_RETURED.ANY */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1), /* MEM_LOAD_RETIRED.* */
|
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1), /* MEM_LOAD_RETIRED.* */
|
||||||
/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
|
/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x01),
|
||||||
EVENT_CONSTRAINT_END
|
EVENT_CONSTRAINT_END
|
||||||
};
|
};
|
||||||
|
|
||||||
@@ -664,7 +664,7 @@ struct event_constraint intel_atom_pebs_event_constraints[] = {
|
|||||||
INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* MISPREDICTED_BRANCH_RETIRED */
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* MISPREDICTED_BRANCH_RETIRED */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1), /* MEM_LOAD_RETIRED.* */
|
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1), /* MEM_LOAD_RETIRED.* */
|
||||||
/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
|
/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x01),
|
||||||
/* Allow all events as PEBS with no flags */
|
/* Allow all events as PEBS with no flags */
|
||||||
INTEL_ALL_EVENT_CONSTRAINT(0, 0x1),
|
INTEL_ALL_EVENT_CONSTRAINT(0, 0x1),
|
||||||
EVENT_CONSTRAINT_END
|
EVENT_CONSTRAINT_END
|
||||||
@@ -672,7 +672,7 @@ struct event_constraint intel_atom_pebs_event_constraints[] = {
|
|||||||
|
|
||||||
struct event_constraint intel_slm_pebs_event_constraints[] = {
|
struct event_constraint intel_slm_pebs_event_constraints[] = {
|
||||||
/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
|
/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x1),
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x1),
|
||||||
/* Allow all events as PEBS with no flags */
|
/* Allow all events as PEBS with no flags */
|
||||||
INTEL_ALL_EVENT_CONSTRAINT(0, 0x1),
|
INTEL_ALL_EVENT_CONSTRAINT(0, 0x1),
|
||||||
EVENT_CONSTRAINT_END
|
EVENT_CONSTRAINT_END
|
||||||
@@ -697,7 +697,7 @@ struct event_constraint intel_nehalem_pebs_event_constraints[] = {
|
|||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf), /* MEM_LOAD_RETIRED.* */
|
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf), /* MEM_LOAD_RETIRED.* */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf), /* FP_ASSIST.* */
|
INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf), /* FP_ASSIST.* */
|
||||||
/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
|
/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x0f),
|
||||||
EVENT_CONSTRAINT_END
|
EVENT_CONSTRAINT_END
|
||||||
};
|
};
|
||||||
|
|
||||||
@@ -714,7 +714,7 @@ struct event_constraint intel_westmere_pebs_event_constraints[] = {
|
|||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf), /* MEM_LOAD_RETIRED.* */
|
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf), /* MEM_LOAD_RETIRED.* */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf), /* FP_ASSIST.* */
|
INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf), /* FP_ASSIST.* */
|
||||||
/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
|
/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x0f),
|
||||||
EVENT_CONSTRAINT_END
|
EVENT_CONSTRAINT_END
|
||||||
};
|
};
|
||||||
|
|
||||||
@@ -723,7 +723,7 @@ struct event_constraint intel_snb_pebs_event_constraints[] = {
|
|||||||
INTEL_PLD_CONSTRAINT(0x01cd, 0x8), /* MEM_TRANS_RETIRED.LAT_ABOVE_THR */
|
INTEL_PLD_CONSTRAINT(0x01cd, 0x8), /* MEM_TRANS_RETIRED.LAT_ABOVE_THR */
|
||||||
INTEL_PST_CONSTRAINT(0x02cd, 0x8), /* MEM_TRANS_RETIRED.PRECISE_STORES */
|
INTEL_PST_CONSTRAINT(0x02cd, 0x8), /* MEM_TRANS_RETIRED.PRECISE_STORES */
|
||||||
/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
|
/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
|
||||||
INTEL_EXCLEVT_CONSTRAINT(0xd0, 0xf), /* MEM_UOP_RETIRED.* */
|
INTEL_EXCLEVT_CONSTRAINT(0xd0, 0xf), /* MEM_UOP_RETIRED.* */
|
||||||
INTEL_EXCLEVT_CONSTRAINT(0xd1, 0xf), /* MEM_LOAD_UOPS_RETIRED.* */
|
INTEL_EXCLEVT_CONSTRAINT(0xd1, 0xf), /* MEM_LOAD_UOPS_RETIRED.* */
|
||||||
INTEL_EXCLEVT_CONSTRAINT(0xd2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
|
INTEL_EXCLEVT_CONSTRAINT(0xd2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
|
||||||
@@ -738,9 +738,9 @@ struct event_constraint intel_ivb_pebs_event_constraints[] = {
|
|||||||
INTEL_PLD_CONSTRAINT(0x01cd, 0x8), /* MEM_TRANS_RETIRED.LAT_ABOVE_THR */
|
INTEL_PLD_CONSTRAINT(0x01cd, 0x8), /* MEM_TRANS_RETIRED.LAT_ABOVE_THR */
|
||||||
INTEL_PST_CONSTRAINT(0x02cd, 0x8), /* MEM_TRANS_RETIRED.PRECISE_STORES */
|
INTEL_PST_CONSTRAINT(0x02cd, 0x8), /* MEM_TRANS_RETIRED.PRECISE_STORES */
|
||||||
/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
|
/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
|
||||||
/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
|
/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
|
||||||
INTEL_EXCLEVT_CONSTRAINT(0xd0, 0xf), /* MEM_UOP_RETIRED.* */
|
INTEL_EXCLEVT_CONSTRAINT(0xd0, 0xf), /* MEM_UOP_RETIRED.* */
|
||||||
INTEL_EXCLEVT_CONSTRAINT(0xd1, 0xf), /* MEM_LOAD_UOPS_RETIRED.* */
|
INTEL_EXCLEVT_CONSTRAINT(0xd1, 0xf), /* MEM_LOAD_UOPS_RETIRED.* */
|
||||||
INTEL_EXCLEVT_CONSTRAINT(0xd2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
|
INTEL_EXCLEVT_CONSTRAINT(0xd2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
|
||||||
@@ -754,9 +754,9 @@ struct event_constraint intel_hsw_pebs_event_constraints[] = {
|
|||||||
INTEL_FLAGS_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PRECDIST */
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PRECDIST */
|
||||||
INTEL_PLD_CONSTRAINT(0x01cd, 0xf), /* MEM_TRANS_RETIRED.* */
|
INTEL_PLD_CONSTRAINT(0x01cd, 0xf), /* MEM_TRANS_RETIRED.* */
|
||||||
/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
|
/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
|
||||||
/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
|
/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
|
||||||
INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
|
INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
|
||||||
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_NA(0x01c2, 0xf), /* UOPS_RETIRED.ALL */
|
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_NA(0x01c2, 0xf), /* UOPS_RETIRED.ALL */
|
||||||
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_XLD(0x11d0, 0xf), /* MEM_UOPS_RETIRED.STLB_MISS_LOADS */
|
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_XLD(0x11d0, 0xf), /* MEM_UOPS_RETIRED.STLB_MISS_LOADS */
|
||||||
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_XLD(0x21d0, 0xf), /* MEM_UOPS_RETIRED.LOCK_LOADS */
|
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_XLD(0x21d0, 0xf), /* MEM_UOPS_RETIRED.LOCK_LOADS */
|
||||||
@@ -777,9 +777,9 @@ struct event_constraint intel_bdw_pebs_event_constraints[] = {
 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PRECDIST */
 	INTEL_PLD_CONSTRAINT(0x01cd, 0xf), /* MEM_TRANS_RETIRED.* */
 	/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
+	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
 	/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
+	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_NA(0x01c2, 0xf), /* UOPS_RETIRED.ALL */
 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_UOPS_RETIRED.STLB_MISS_LOADS */
 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x21d0, 0xf), /* MEM_UOPS_RETIRED.LOCK_LOADS */
@@ -800,9 +800,9 @@ struct event_constraint intel_bdw_pebs_event_constraints[] = {
 struct event_constraint intel_skl_pebs_event_constraints[] = {
 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x1c0, 0x2), /* INST_RETIRED.PREC_DIST */
 	/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
+	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
 	/* INST_RETIRED.TOTAL_CYCLES_PS (inv=1, cmask=16) (cycles:p). */
-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
+	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x0f),
 	INTEL_PLD_CONSTRAINT(0x1cd, 0xf), /* MEM_TRANS_RETIRED.* */
 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_LOADS */
 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x12d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_STORES */
@@ -766,8 +766,11 @@ static void init_amd_zn(struct cpuinfo_x86 *c)
 {
 	set_cpu_cap(c, X86_FEATURE_ZEN);

-	/* Fix erratum 1076: CPB feature bit not being set in CPUID. */
-	if (!cpu_has(c, X86_FEATURE_CPB))
+	/*
+	 * Fix erratum 1076: CPB feature bit not being set in CPUID.
+	 * Always set it, except when running under a hypervisor.
+	 */
+	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_CPB))
 		set_cpu_cap(c, X86_FEATURE_CPB);
 }

@@ -235,11 +235,14 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		}
 		break;
 	default:
-		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
-		    (pmc = get_fixed_pmc(pmu, msr))) {
-			if (!msr_info->host_initiated)
-				data = (s64)(s32)data;
-			pmc->counter += data - pmc_read_counter(pmc);
+		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0))) {
+			if (msr_info->host_initiated)
+				pmc->counter = data;
+			else
+				pmc->counter = (s32)data;
+			return 0;
+		} else if ((pmc = get_fixed_pmc(pmu, msr))) {
+			pmc->counter = data;
 			return 0;
 		} else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
 			if (data == pmc->eventsel)
@@ -1117,6 +1117,8 @@ static struct dmi_system_id __initdata pciirq_dmi_table[] = {

 void __init pcibios_irq_init(void)
 {
+	struct irq_routing_table *rtable = NULL;
+
 	DBG(KERN_DEBUG "PCI: IRQ init\n");

 	if (raw_pci_ops == NULL)
@@ -1127,8 +1129,10 @@ void __init pcibios_irq_init(void)
 	pirq_table = pirq_find_routing_table();

 #ifdef CONFIG_PCI_BIOS
-	if (!pirq_table && (pci_probe & PCI_BIOS_IRQ_SCAN))
+	if (!pirq_table && (pci_probe & PCI_BIOS_IRQ_SCAN)) {
 		pirq_table = pcibios_get_irq_routing_table();
+		rtable = pirq_table;
+	}
 #endif
 	if (pirq_table) {
 		pirq_peer_trick();
@@ -1143,8 +1147,10 @@ void __init pcibios_irq_init(void)
 		 * If we're using the I/O APIC, avoid using the PCI IRQ
 		 * routing table
 		 */
-		if (io_apic_assign_pci_irqs)
+		if (io_apic_assign_pci_irqs) {
+			kfree(rtable);
 			pirq_table = NULL;
+		}
 	}

 	x86_init.pci.fixup_irqs();
@@ -4355,9 +4355,12 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
 	{ "ST3320[68]13AS", "SD1[5-9]", ATA_HORKAGE_NONCQ |
 						ATA_HORKAGE_FIRMWARE_WARN },

-	/* drives which fail FPDMA_AA activation (some may freeze afterwards) */
-	{ "ST1000LM024 HN-M101MBB", "2AR10001", ATA_HORKAGE_BROKEN_FPDMA_AA },
-	{ "ST1000LM024 HN-M101MBB", "2BA30001", ATA_HORKAGE_BROKEN_FPDMA_AA },
+	/* drives which fail FPDMA_AA activation (some may freeze afterwards)
+	   the ST disks also have LPM issues */
+	{ "ST1000LM024 HN-M101MBB", "2AR10001", ATA_HORKAGE_BROKEN_FPDMA_AA |
+						ATA_HORKAGE_NOLPM, },
+	{ "ST1000LM024 HN-M101MBB", "2BA30001", ATA_HORKAGE_BROKEN_FPDMA_AA |
+						ATA_HORKAGE_NOLPM, },
 	{ "VB0250EAVER", "HPG7", ATA_HORKAGE_BROKEN_FPDMA_AA },

 	/* Blacklist entries taken from Silicon Image 3124/3132
@@ -826,6 +826,9 @@ static const int rk3288_saved_cru_reg_ids[] = {
 	RK3288_CLKSEL_CON(10),
 	RK3288_CLKSEL_CON(33),
 	RK3288_CLKSEL_CON(37),
+
+	/* We turn aclk_dmac1 on for suspend; this will restore it */
+	RK3288_CLKGATE_CON(10),
 };

 static u32 rk3288_saved_cru_regs[ARRAY_SIZE(rk3288_saved_cru_reg_ids)];
@@ -841,6 +844,14 @@ static int rk3288_clk_suspend(void)
 			readl_relaxed(rk3288_cru_base + reg_id);
 	}

+	/*
+	 * Going into deep sleep (specifically setting PMU_CLR_DMA in
+	 * RK3288_PMU_PWRMODE_CON1) appears to fail unless
+	 * "aclk_dmac1" is on.
+	 */
+	writel_relaxed(1 << (12 + 16),
+		       rk3288_cru_base + RK3288_CLKGATE_CON(10));
+
 	/*
 	 * Switch PLLs other than DPLL (for SDRAM) to slow mode to
 	 * avoid crashes on resume. The Mask ROM on the system will
@@ -589,7 +589,7 @@ static int idma64_probe(struct idma64_chip *chip)
 	idma64->dma.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
 	idma64->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;

-	idma64->dma.dev = chip->dev;
+	idma64->dma.dev = chip->sysdev;

 	dma_set_max_seg_size(idma64->dma.dev, IDMA64C_CTLH_BLOCK_TS_MASK);

@@ -629,6 +629,7 @@ static int idma64_platform_probe(struct platform_device *pdev)
 {
 	struct idma64_chip *chip;
 	struct device *dev = &pdev->dev;
+	struct device *sysdev = dev->parent;
 	struct resource *mem;
 	int ret;

@@ -645,11 +646,12 @@ static int idma64_platform_probe(struct platform_device *pdev)
 	if (IS_ERR(chip->regs))
 		return PTR_ERR(chip->regs);

-	ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	ret = dma_coerce_mask_and_coherent(sysdev, DMA_BIT_MASK(64));
 	if (ret)
 		return ret;

 	chip->dev = dev;
+	chip->sysdev = sysdev;

 	ret = idma64_probe(chip);
 	if (ret)
@@ -216,12 +216,14 @@ static inline void idma64_writel(struct idma64 *idma64, int offset, u32 value)
 /**
  * struct idma64_chip - representation of iDMA 64-bit controller hardware
  * @dev:	struct device of the DMA controller
+ * @sysdev:	struct device of the physical device that does DMA
  * @irq:	irq line
  * @regs:	memory mapped I/O space
  * @idma64:	struct idma64 that is filed by idma64_probe()
  */
 struct idma64_chip {
 	struct device	*dev;
+	struct device	*sysdev;
 	int		irq;
 	void __iomem	*regs;
 	struct idma64	*idma64;
@@ -670,6 +670,7 @@ config GPIO_ADP5588
 config GPIO_ADP5588_IRQ
 	bool "Interrupt controller support for ADP5588"
 	depends on GPIO_ADP5588=y
+	select GPIOLIB_IRQCHIP
 	help
 	  Say yes here to enable the adp5588 to be used as an interrupt
 	  controller. It requires the driver to be built in the kernel.
@@ -296,6 +296,22 @@ static void omap_clear_gpio_debounce(struct gpio_bank *bank, unsigned offset)
 	}
 }

+/*
+ * Off mode wake-up capable GPIOs in bank(s) that are in the wakeup domain.
+ * See TRM section for GPIO for "Wake-Up Generation" for the list of GPIOs
+ * in wakeup domain. If bank->non_wakeup_gpios is not configured, assume none
+ * are capable waking up the system from off mode.
+ */
+static bool omap_gpio_is_off_wakeup_capable(struct gpio_bank *bank, u32 gpio_mask)
+{
+	u32 no_wake = bank->non_wakeup_gpios;
+
+	if (no_wake)
+		return !!(~no_wake & gpio_mask);
+
+	return false;
+}
+
 static inline void omap_set_gpio_trigger(struct gpio_bank *bank, int gpio,
 					 unsigned trigger)
 {
@@ -327,13 +343,7 @@ static inline void omap_set_gpio_trigger(struct gpio_bank *bank, int gpio,
 	}

 	/* This part needs to be executed always for OMAP{34xx, 44xx} */
-	if (!bank->regs->irqctrl) {
-		/* On omap24xx proceed only when valid GPIO bit is set */
-		if (bank->non_wakeup_gpios) {
-			if (!(bank->non_wakeup_gpios & gpio_bit))
-				goto exit;
-		}
-
+	if (!bank->regs->irqctrl && !omap_gpio_is_off_wakeup_capable(bank, gpio)) {
 		/*
 		 * Log the edge gpio and manually trigger the IRQ
 		 * after resume if the input level changes
@@ -346,7 +356,6 @@ static inline void omap_set_gpio_trigger(struct gpio_bank *bank, int gpio,
 			bank->enabled_non_wakeup_gpios &= ~gpio_bit;
 	}

-exit:
 	bank->level_mask =
 		readl_relaxed(bank->base + bank->regs->leveldetect0) |
 		readl_relaxed(bank->base + bank->regs->leveldetect1);
@@ -735,11 +735,11 @@ static void adv7511_mode_set(struct adv7511 *adv7511,
 		vsync_polarity = 1;
 	}

-	if (mode->vrefresh <= 24000)
+	if (drm_mode_vrefresh(mode) <= 24)
 		low_refresh_rate = ADV7511_LOW_REFRESH_RATE_24HZ;
-	else if (mode->vrefresh <= 25000)
+	else if (drm_mode_vrefresh(mode) <= 25)
 		low_refresh_rate = ADV7511_LOW_REFRESH_RATE_25HZ;
-	else if (mode->vrefresh <= 30000)
+	else if (drm_mode_vrefresh(mode) <= 30)
 		low_refresh_rate = ADV7511_LOW_REFRESH_RATE_30HZ;
 	else
 		low_refresh_rate = ADV7511_LOW_REFRESH_RATE_NONE;
@@ -2493,7 +2493,8 @@ static int vmw_cmd_dx_set_shader(struct vmw_private *dev_priv,

 	cmd = container_of(header, typeof(*cmd), header);

-	if (cmd->body.type >= SVGA3D_SHADERTYPE_DX10_MAX) {
+	if (cmd->body.type >= SVGA3D_SHADERTYPE_DX10_MAX ||
+	    cmd->body.type < SVGA3D_SHADERTYPE_MIN) {
 		DRM_ERROR("Illegal shader type %u.\n",
 			  (unsigned) cmd->body.type);
 		return -EINVAL;
@@ -2732,6 +2733,10 @@ static int vmw_cmd_dx_view_define(struct vmw_private *dev_priv,
 	if (view_type == vmw_view_max)
 		return -EINVAL;
 	cmd = container_of(header, typeof(*cmd), header);
+	if (unlikely(cmd->sid == SVGA3D_INVALID_ID)) {
+		DRM_ERROR("Invalid surface id.\n");
+		return -EINVAL;
+	}
 	ret = vmw_cmd_res_check(dev_priv, sw_context, vmw_res_surface,
 				user_surface_converter,
 				&cmd->sid, &srf_node);
@@ -83,6 +83,7 @@ static struct i2c_algo_bit_data ioc_data = {

 static struct i2c_adapter ioc_ops = {
 	.nr		= 0,
+	.name		= "ioc",
 	.algo_data	= &ioc_data,
 };

@@ -297,6 +297,7 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
 			    rdwr_pa[i].buf[0] < 1 ||
 			    rdwr_pa[i].len < rdwr_pa[i].buf[0] +
 					     I2C_SMBUS_BLOCK_MAX) {
+				i++;
 				res = -EINVAL;
 				break;
 			}
@@ -4119,9 +4119,7 @@ static void __init init_no_remapping_devices(void)

 		/* This IOMMU has *only* gfx devices. Either bypass it or
 		   set the gfx_mapped flag, as appropriate */
-		if (dmar_map_gfx) {
-			intel_iommu_gfx_mapped = 1;
-		} else {
+		if (!dmar_map_gfx) {
 			drhd->ignored = 1;
 			for_each_active_dev_scope(drhd->devices,
 						  drhd->devices_cnt, i, dev)
@@ -4870,6 +4868,9 @@ int __init intel_iommu_init(void)
 		goto out_free_reserved_range;
 	}

+	if (dmar_map_gfx)
+		intel_iommu_gfx_mapped = 1;
+
 	init_no_remapping_devices();

 	ret = init_dmars();
@@ -394,7 +394,7 @@ data_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
 			memcpy(di.channelmap, dev->channelmap,
 			       sizeof(di.channelmap));
 			di.nrbchan = dev->nrbchan;
-			strcpy(di.name, dev_name(&dev->dev));
+			strscpy(di.name, dev_name(&dev->dev), sizeof(di.name));
 			if (copy_to_user((void __user *)arg, &di, sizeof(di)))
 				err = -EFAULT;
 		} else
@@ -678,7 +678,7 @@ base_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
 			memcpy(di.channelmap, dev->channelmap,
 			       sizeof(di.channelmap));
 			di.nrbchan = dev->nrbchan;
-			strcpy(di.name, dev_name(&dev->dev));
+			strscpy(di.name, dev_name(&dev->dev), sizeof(di.name));
 			if (copy_to_user((void __user *)arg, &di, sizeof(di)))
 				err = -EFAULT;
 		} else
@@ -692,6 +692,7 @@ base_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
 			err = -EFAULT;
 			break;
 		}
+		dn.name[sizeof(dn.name) - 1] = '\0';
 		dev = get_mdevice(dn.id);
 		if (dev)
 			err = device_rename(&dev->dev, dn.name);
@@ -823,12 +823,22 @@ unsigned bch_btree_insert_key(struct btree_keys *b, struct bkey *k,
 	struct bset *i = bset_tree_last(b)->data;
 	struct bkey *m, *prev = NULL;
 	struct btree_iter iter;
+	struct bkey preceding_key_on_stack = ZERO_KEY;
+	struct bkey *preceding_key_p = &preceding_key_on_stack;

 	BUG_ON(b->ops->is_extents && !KEY_SIZE(k));

-	m = bch_btree_iter_init(b, &iter, b->ops->is_extents
-				? PRECEDING_KEY(&START_KEY(k))
-				: PRECEDING_KEY(k));
+	/*
+	 * If k has preceding key, preceding_key_p will be set to address
+	 * of k's preceding key; otherwise preceding_key_p will be set
+	 * to NULL inside preceding_key().
+	 */
+	if (b->ops->is_extents)
+		preceding_key(&START_KEY(k), &preceding_key_p);
+	else
+		preceding_key(k, &preceding_key_p);
+
+	m = bch_btree_iter_init(b, &iter, preceding_key_p);

 	if (b->ops->insert_fixup(b, k, &iter, replace_key))
 		return status;
@@ -417,20 +417,26 @@ static inline bool bch_cut_back(const struct bkey *where, struct bkey *k)
 	return __bch_cut_back(where, k);
 }

-#define PRECEDING_KEY(_k)					\
-({								\
-	struct bkey *_ret = NULL;				\
-								\
-	if (KEY_INODE(_k) || KEY_OFFSET(_k)) {			\
-		_ret = &KEY(KEY_INODE(_k), KEY_OFFSET(_k), 0);	\
-								\
-		if (!_ret->low)					\
-			_ret->high--;				\
-		_ret->low--;					\
-	}							\
-								\
-	_ret;							\
-})
+/*
+ * Pointer '*preceding_key_p' points to a memory object to store preceding
+ * key of k. If the preceding key does not exist, set '*preceding_key_p' to
+ * NULL. So the caller of preceding_key() needs to take care of memory
+ * which '*preceding_key_p' pointed to before calling preceding_key().
+ * Currently the only caller of preceding_key() is bch_btree_insert_key(),
+ * and it points to an on-stack variable, so the memory release is handled
+ * by stackframe itself.
+ */
+static inline void preceding_key(struct bkey *k, struct bkey **preceding_key_p)
+{
+	if (KEY_INODE(k) || KEY_OFFSET(k)) {
+		(**preceding_key_p) = KEY(KEY_INODE(k), KEY_OFFSET(k), 0);
+		if (!(*preceding_key_p)->low)
+			(*preceding_key_p)->high--;
+		(*preceding_key_p)->low--;
+	} else {
+		(*preceding_key_p) = NULL;
+	}
+}

 static inline bool bch_ptr_invalid(struct btree_keys *b, const struct bkey *k)
 {
@@ -1959,7 +1959,22 @@ static int v4l_s_parm(const struct v4l2_ioctl_ops *ops,
 	struct v4l2_streamparm *p = arg;
 	int ret = check_fmt(file, p->type);

-	return ret ? ret : ops->vidioc_s_parm(file, fh, p);
+	if (ret)
+		return ret;
+
+	/* Note: extendedmode is never used in drivers */
+	if (V4L2_TYPE_IS_OUTPUT(p->type)) {
+		memset(p->parm.output.reserved, 0,
+		       sizeof(p->parm.output.reserved));
+		p->parm.output.extendedmode = 0;
+		p->parm.output.outputmode &= V4L2_MODE_HIGHQUALITY;
+	} else {
+		memset(p->parm.capture.reserved, 0,
+		       sizeof(p->parm.capture.reserved));
+		p->parm.capture.extendedmode = 0;
+		p->parm.capture.capturemode &= V4L2_MODE_HIGHQUALITY;
+	}
+	return ops->vidioc_s_parm(file, fh, p);
 }

 static int v4l_queryctrl(const struct v4l2_ioctl_ops *ops,
@@ -273,6 +273,9 @@ static void intel_lpss_init_dev(const struct intel_lpss *lpss)
 {
 	u32 value = LPSS_PRIV_SSP_REG_DIS_DMA_FIN;

+	/* Set the device in reset state */
+	writel(0, lpss->priv + LPSS_PRIV_RESETS);
+
 	intel_lpss_deassert_reset(lpss);

 	intel_lpss_set_remap_addr(lpss);
@@ -27,6 +27,7 @@ static const struct of_device_id tps65912_spi_of_match_table[] = {
 	{ .compatible = "ti,tps65912", },
 	{ /* sentinel */ }
 };
+MODULE_DEVICE_TABLE(of, tps65912_spi_of_match_table);

 static int tps65912_spi_probe(struct spi_device *spi)
 {
@@ -322,8 +322,19 @@ int twl6040_power(struct twl6040 *twl6040, int on)
 			}
 		}

+		/*
+		 * Register access can produce errors after power-up unless we
+		 * wait at least 8ms based on measurements on duovero.
+		 */
+		usleep_range(10000, 12000);
+
 		/* Sync with the HW */
-		regcache_sync(twl6040->regmap);
+		ret = regcache_sync(twl6040->regmap);
+		if (ret) {
+			dev_err(twl6040->dev, "Failed to sync with the HW: %i\n",
+				ret);
+			goto out;
+		}

 		/* Default PLL configuration after power up */
 		twl6040->pll = TWL6040_SYSCLK_SEL_LPPLL;
@@ -1133,7 +1133,7 @@ static void kgdbts_put_char(u8 chr)
 static int param_set_kgdbts_var(const char *kmessage,
 				const struct kernel_param *kp)
 {
-	int len = strlen(kmessage);
+	size_t len = strlen(kmessage);

 	if (len >= MAX_CONFIG_LEN) {
 		printk(KERN_ERR "kgdbts: config string too long\n");
@@ -1153,7 +1153,7 @@ static int param_set_kgdbts_var(const char *kmessage,

 	strcpy(config, kmessage);
 	/* Chop out \n char as a result of echo */
-	if (config[len - 1] == '\n')
+	if (len && config[len - 1] == '\n')
 		config[len - 1] = '\0';

 	/* Go and configure with the new params. */
@@ -2109,7 +2109,6 @@ static struct eisa_driver de4x5_eisa_driver = {
 	.remove  = de4x5_eisa_remove,
 	}
 };
-MODULE_DEVICE_TABLE(eisa, de4x5_eisa_ids);
 #endif

 #ifdef CONFIG_PCI
@@ -1108,7 +1108,7 @@ static int be_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd,
 		cmd->data = be_get_rss_hash_opts(adapter, cmd->flow_type);
 		break;
 	case ETHTOOL_GRXRINGS:
-		cmd->data = adapter->num_rx_qs - 1;
+		cmd->data = adapter->num_rx_qs;
 		break;
 	default:
 		return -EINVAL;
@@ -2044,6 +2044,10 @@ mlxsw_sp_port_set_link_ksettings(struct net_device *dev,
 	mlxsw_reg_ptys_unpack(ptys_pl, &eth_proto_cap, NULL, NULL);

 	autoneg = cmd->base.autoneg == AUTONEG_ENABLE;
+	if (!autoneg && cmd->base.speed == SPEED_56000) {
+		netdev_err(dev, "56G not supported with autoneg off\n");
+		return -EINVAL;
+	}
 	eth_proto_new = autoneg ?
 		mlxsw_sp_to_ptys_advert_link(cmd) :
 		mlxsw_sp_to_ptys_speed(cmd->base.speed);
@@ -1388,6 +1388,10 @@ static void sh_eth_dev_exit(struct net_device *ndev)
 	sh_eth_get_stats(ndev);
 	sh_eth_reset(ndev);

+	/* Set the RMII mode again if required */
+	if (mdp->cd->rmiimode)
+		sh_eth_write(ndev, 0x1, RMIIMODE);
+
 	/* Set MAC address again */
 	update_mac_address(ndev);
 }
@@ -437,17 +437,18 @@ static int ipheth_tx(struct sk_buff *skb, struct net_device *net)
 			  dev);
 	dev->tx_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;

+	netif_stop_queue(net);
 	retval = usb_submit_urb(dev->tx_urb, GFP_ATOMIC);
 	if (retval) {
 		dev_err(&dev->intf->dev, "%s: usb_submit_urb: %d\n",
 			__func__, retval);
 		dev->net->stats.tx_errors++;
 		dev_kfree_skb_any(skb);
+		netif_wake_queue(net);
 	} else {
 		dev->net->stats.tx_packets++;
 		dev->net->stats.tx_bytes += skb->len;
 		dev_consume_skb_any(skb);
-		netif_stop_queue(net);
 	}

 	return NETDEV_TX_OK;
@@ -934,7 +934,7 @@ static inline void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell,
 						    void *buf)
 {
 	u8 *p, *b;
-	int i, bit_offset = cell->bit_offset;
+	int i, extra, bit_offset = cell->bit_offset;

 	p = b = buf;
 	if (bit_offset) {
@@ -949,11 +949,16 @@ static inline void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell,
 			p = b;
 			*b++ >>= bit_offset;
 		}
-
-		/* result fits in less bytes */
-		if (cell->bytes != DIV_ROUND_UP(cell->nbits, BITS_PER_BYTE))
-			*p-- = 0;
+	} else {
+		/* point to the msb */
+		p += cell->bytes - 1;
 	}
+
+	/* result fits in less bytes */
+	extra = cell->bytes - DIV_ROUND_UP(cell->nbits, BITS_PER_BYTE);
+	while (--extra >= 0)
+		*p-- = 0;
+
 	/* clear msb bits if any leftover in the last byte */
 	*p &= GENMASK((cell->nbits%BITS_PER_BYTE) - 1, 0);
 }
@@ -847,7 +847,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie *pcie)
 {
 	struct device *dev = pcie->dev;
 	struct rcar_msi *msi = &pcie->msi;
-	unsigned long base;
+	phys_addr_t base;
 	int err, i;

 	mutex_init(&msi->lock);
@@ -886,10 +886,14 @@ static int rcar_pcie_enable_msi(struct rcar_pcie *pcie)

 	/* setup MSI data target */
 	msi->pages = __get_free_pages(GFP_KERNEL, 0);
+	if (!msi->pages) {
+		err = -ENOMEM;
+		goto err;
+	}
 	base = virt_to_phys((void *)msi->pages);

-	rcar_pci_write_reg(pcie, base | MSIFE, PCIEMSIALR);
-	rcar_pci_write_reg(pcie, 0, PCIEMSIAUR);
+	rcar_pci_write_reg(pcie, lower_32_bits(base) | MSIFE, PCIEMSIALR);
+	rcar_pci_write_reg(pcie, upper_32_bits(base), PCIEMSIAUR);

 	/* enable all MSI interrupts */
 	rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER);
@@ -337,14 +337,19 @@ static const struct irq_domain_ops msi_domain_ops = {
 * xilinx_pcie_enable_msi - Enable MSI support
 * @port: PCIe port information
 */
-static void xilinx_pcie_enable_msi(struct xilinx_pcie_port *port)
+static int xilinx_pcie_enable_msi(struct xilinx_pcie_port *port)
 {
 	phys_addr_t msg_addr;

 	port->msi_pages = __get_free_pages(GFP_KERNEL, 0);
+	if (!port->msi_pages)
+		return -ENOMEM;
+
 	msg_addr = virt_to_phys((void *)port->msi_pages);
 	pcie_write(port, 0x0, XILINX_PCIE_REG_MSIBASE1);
 	pcie_write(port, msg_addr, XILINX_PCIE_REG_MSIBASE2);
+
+	return 0;
 }

 /* INTx Functions */
@@ -516,6 +521,7 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
 	struct device *dev = port->dev;
 	struct device_node *node = dev->of_node;
 	struct device_node *pcie_intc_node;
+	int ret;

 	/* Setup INTx */
 	pcie_intc_node = of_get_next_child(node, NULL);
@@ -544,7 +550,9 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
 			return -ENODEV;
 		}

-		xilinx_pcie_enable_msi(port);
+		ret = xilinx_pcie_enable_msi(port);
+		if (ret)
+			return ret;
 	}

 	return 0;
@@ -55,6 +55,7 @@ static struct device_node *find_vio_slot_node(char *drc_name)
 		if ((rc == 0) && (!strcmp(drc_name, name)))
 			break;
 	}
+	of_node_put(parent);
 
 	return dn;
 }
@@ -78,6 +79,7 @@ static struct device_node *find_php_slot_pci_node(char *drc_name,
 	return np;
 }
 
+/* Returns a device_node with its reference count incremented */
 static struct device_node *find_dlpar_node(char *drc_name, int *node_type)
 {
 	struct device_node *dn;
@@ -313,6 +315,7 @@ int dlpar_add_slot(char *drc_name)
 		rc = dlpar_add_phb(drc_name, dn);
 		break;
 	}
+	of_node_put(dn);
 
 	printk(KERN_INFO "%s: slot %s added\n", DLPAR_MODULE_NAME, drc_name);
 exit:
@@ -446,6 +449,7 @@ int dlpar_remove_slot(char *drc_name)
 		rc = dlpar_remove_pci_slot(drc_name, dn);
 		break;
 	}
+	of_node_put(dn);
 	vm_unmap_aliases();
 
 	printk(KERN_INFO "%s: slot %s removed\n", DLPAR_MODULE_NAME, drc_name);
@@ -67,6 +67,17 @@ static int send_command(struct cros_ec_device *ec_dev,
 	else
 		xfer_fxn = ec_dev->cmd_xfer;
 
+	if (!xfer_fxn) {
+		/*
+		 * This error can happen if a communication error happened and
+		 * the EC is trying to use protocol v2, on an underlying
+		 * communication mechanism that does not support v2.
+		 */
+		dev_err_once(ec_dev->dev,
+			     "missing EC transfer API, cannot send command\n");
+		return -EIO;
+	}
+
 	ret = (*xfer_fxn)(ec_dev, msg);
 	if (msg->result == EC_RES_IN_PROGRESS) {
 		int i;
@@ -620,13 +620,17 @@ static int ipc_create_pmc_devices(void)
 	if (ret) {
 		dev_err(ipcdev.dev, "Failed to add punit platform device\n");
 		platform_device_unregister(ipcdev.tco_dev);
+		return ret;
 	}
 
 	if (!ipcdev.telem_res_inval) {
 		ret = ipc_create_telemetry_device();
-		if (ret)
+		if (ret) {
 			dev_warn(ipcdev.dev,
 				 "Failed to add telemetry platform device\n");
+			platform_device_unregister(ipcdev.punit_dev);
+			platform_device_unregister(ipcdev.tco_dev);
+		}
 	}
 
 	return ret;
@@ -302,10 +302,12 @@ int pwmchip_add_with_polarity(struct pwm_chip *chip,
 	if (IS_ENABLED(CONFIG_OF))
 		of_pwmchip_add(chip);
 
-	pwmchip_sysfs_export(chip);
-
 out:
 	mutex_unlock(&pwm_lock);
+
+	if (!ret)
+		pwmchip_sysfs_export(chip);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(pwmchip_add_with_polarity);
@@ -339,7 +341,7 @@ int pwmchip_remove(struct pwm_chip *chip)
 	unsigned int i;
 	int ret = 0;
 
-	pwmchip_sysfs_unexport_children(chip);
+	pwmchip_sysfs_unexport(chip);
 
 	mutex_lock(&pwm_lock);
 
@@ -359,8 +361,6 @@ int pwmchip_remove(struct pwm_chip *chip)
 
 	free_pwms(chip);
 
-	pwmchip_sysfs_unexport(chip);
-
 out:
 	mutex_unlock(&pwm_lock);
 	return ret;
@@ -110,6 +110,10 @@ struct meson_pwm {
 	const struct meson_pwm_data *data;
 	void __iomem *base;
 	u8 inverter_mask;
+	/*
+	 * Protects register (write) access to the REG_MISC_AB register
+	 * that is shared between the two PWMs.
+	 */
 	spinlock_t lock;
 };
 
@@ -230,6 +234,7 @@ static void meson_pwm_enable(struct meson_pwm *meson,
 {
 	u32 value, clk_shift, clk_enable, enable;
 	unsigned int offset;
+	unsigned long flags;
 
 	switch (id) {
 	case 0:
@@ -250,6 +255,8 @@ static void meson_pwm_enable(struct meson_pwm *meson,
 		return;
 	}
 
+	spin_lock_irqsave(&meson->lock, flags);
+
 	value = readl(meson->base + REG_MISC_AB);
 	value &= ~(MISC_CLK_DIV_MASK << clk_shift);
 	value |= channel->pre_div << clk_shift;
@@ -262,11 +269,14 @@ static void meson_pwm_enable(struct meson_pwm *meson,
 	value = readl(meson->base + REG_MISC_AB);
 	value |= enable;
 	writel(value, meson->base + REG_MISC_AB);
+
+	spin_unlock_irqrestore(&meson->lock, flags);
 }
 
 static void meson_pwm_disable(struct meson_pwm *meson, unsigned int id)
 {
 	u32 value, enable;
+	unsigned long flags;
 
 	switch (id) {
 	case 0:
@@ -281,9 +291,13 @@ static void meson_pwm_disable(struct meson_pwm *meson, unsigned int id)
 		return;
 	}
 
+	spin_lock_irqsave(&meson->lock, flags);
+
 	value = readl(meson->base + REG_MISC_AB);
 	value &= ~enable;
 	writel(value, meson->base + REG_MISC_AB);
+
+	spin_unlock_irqrestore(&meson->lock, flags);
 }
 
 static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
@@ -291,19 +305,16 @@ static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
 {
 	struct meson_pwm_channel *channel = pwm_get_chip_data(pwm);
 	struct meson_pwm *meson = to_meson_pwm(chip);
-	unsigned long flags;
 	int err = 0;
 
 	if (!state)
 		return -EINVAL;
 
-	spin_lock_irqsave(&meson->lock, flags);
-
 	if (!state->enabled) {
 		meson_pwm_disable(meson, pwm->hwpwm);
 		channel->state.enabled = false;
 
-		goto unlock;
+		return 0;
 	}
 
 	if (state->period != channel->state.period ||
@@ -324,7 +335,7 @@ static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
 		err = meson_pwm_calc(meson, channel, pwm->hwpwm,
 				     state->duty_cycle, state->period);
 		if (err < 0)
-			goto unlock;
+			return err;
 
 		channel->state.polarity = state->polarity;
 		channel->state.period = state->period;
@@ -336,9 +347,7 @@ static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
 		channel->state.enabled = true;
 	}
 
-unlock:
-	spin_unlock_irqrestore(&meson->lock, flags);
-	return err;
+	return 0;
 }
 
 static void meson_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
@@ -383,6 +383,8 @@ static void ehrpwm_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm)
 	}
 
 	/* Update shadow register first before modifying active register */
+	ehrpwm_modify(pc->mmio_base, AQSFRC, AQSFRC_RLDCSF_MASK,
+		      AQSFRC_RLDCSF_ZRO);
 	ehrpwm_modify(pc->mmio_base, AQCSFRC, aqcsfrc_mask, aqcsfrc_val);
 	/*
 	 * Changes to immediate action on Action Qualifier. This puts
@@ -397,19 +397,6 @@ void pwmchip_sysfs_export(struct pwm_chip *chip)
 }
 
 void pwmchip_sysfs_unexport(struct pwm_chip *chip)
-{
-	struct device *parent;
-
-	parent = class_find_device(&pwm_class, NULL, chip,
-				   pwmchip_sysfs_match);
-	if (parent) {
-		/* for class_find_device() */
-		put_device(parent);
-		device_unregister(parent);
-	}
-}
-
-void pwmchip_sysfs_unexport_children(struct pwm_chip *chip)
 {
 	struct device *parent;
 	unsigned int i;
@@ -427,6 +414,7 @@ void pwmchip_sysfs_unexport_children(struct pwm_chip *chip)
 	}
 
 	put_device(parent);
+	device_unregister(parent);
 }
 
 static int __init pwm_sysfs_init(void)
@@ -2145,6 +2145,14 @@ static int riocm_add_mport(struct device *dev,
 	mutex_init(&cm->rx_lock);
 	riocm_rx_fill(cm, RIOCM_RX_RING_SIZE);
 	cm->rx_wq = create_workqueue(DRV_NAME "/rxq");
+	if (!cm->rx_wq) {
+		riocm_error("failed to allocate IBMBOX_%d on %s",
+			    cmbox, mport->name);
+		rio_release_outb_mbox(mport, cmbox);
+		kfree(cm);
+		return -ENOMEM;
+	}
+
 	INIT_WORK(&cm->rx_work, rio_ibmsg_handler);
 
 	cm->tx_slot = 0;
@@ -82,6 +82,18 @@ static int pcf8523_write(struct i2c_client *client, u8 reg, u8 value)
 	return 0;
 }
 
+static int pcf8523_voltage_low(struct i2c_client *client)
+{
+	u8 value;
+	int err;
+
+	err = pcf8523_read(client, REG_CONTROL3, &value);
+	if (err < 0)
+		return err;
+
+	return !!(value & REG_CONTROL3_BLF);
+}
+
 static int pcf8523_select_capacitance(struct i2c_client *client, bool high)
 {
 	u8 value;
@@ -164,6 +176,14 @@ static int pcf8523_rtc_read_time(struct device *dev, struct rtc_time *tm)
 	struct i2c_msg msgs[2];
 	int err;
 
+	err = pcf8523_voltage_low(client);
+	if (err < 0) {
+		return err;
+	} else if (err > 0) {
+		dev_err(dev, "low voltage detected, time is unreliable\n");
+		return -EINVAL;
+	}
+
 	msgs[0].addr = client->addr;
 	msgs[0].flags = 0;
 	msgs[0].len = 1;
@@ -248,17 +268,13 @@ static int pcf8523_rtc_ioctl(struct device *dev, unsigned int cmd,
 			     unsigned long arg)
 {
 	struct i2c_client *client = to_i2c_client(dev);
-	u8 value;
-	int ret = 0, err;
+	int ret;
 
 	switch (cmd) {
 	case RTC_VL_READ:
-		err = pcf8523_read(client, REG_CONTROL3, &value);
-		if (err < 0)
-			return err;
+		ret = pcf8523_voltage_low(client);
+		if (ret < 0)
+			return ret;
 
-		if (value & REG_CONTROL3_BLF)
-			ret = 1;
-
 		if (copy_to_user((void __user *)arg, &ret, sizeof(int)))
 			return -EFAULT;
@@ -829,7 +829,7 @@ ret_err_rqe:
 			((u64)err_entry->data.err_warn_bitmap_hi << 32) |
 			(u64)err_entry->data.err_warn_bitmap_lo;
 		for (i = 0; i < BNX2FC_NUM_ERR_BITS; i++) {
-			if (err_warn_bit_map & (u64) (1 << i)) {
+			if (err_warn_bit_map & ((u64)1 << i)) {
 				err_warn = i;
 				break;
 			}
@@ -637,6 +637,10 @@ static struct cxgbi_sock *cxgbi_check_route(struct sockaddr *dst_addr)
 
 	if (ndev->flags & IFF_LOOPBACK) {
 		ndev = ip_dev_find(&init_net, daddr->sin_addr.s_addr);
+		if (!ndev) {
+			err = -ENETUNREACH;
+			goto rel_neigh;
+		}
 		mtu = ndev->mtu;
 		pr_info("rt dev %s, loopback -> %s, mtu %u.\n",
 			n->dev->name, ndev->name, mtu);
@@ -978,6 +978,8 @@ static struct domain_device *sas_ex_discover_expander(
 		list_del(&child->dev_list_node);
 		spin_unlock_irq(&parent->port->dev_list_lock);
 		sas_put_device(child);
+		sas_port_delete(phy->port);
+		phy->port = NULL;
 		return NULL;
 	}
 	list_add_tail(&child->siblings, &parent->ex_dev.children);
@@ -6789,7 +6789,10 @@ int
 lpfc_send_rrq(struct lpfc_hba *phba, struct lpfc_node_rrq *rrq)
 {
 	struct lpfc_nodelist *ndlp = lpfc_findnode_did(rrq->vport,
 						       rrq->nlp_DID);
+	if (!ndlp)
+		return 1;
+
 	if (lpfc_test_rrq_active(phba, ndlp, rrq->xritag))
 		return lpfc_issue_els_rrq(rrq->vport, ndlp,
 					  rrq->nlp_DID, rrq);
@@ -5478,7 +5478,7 @@ static int pqi_pci_init(struct pqi_ctrl_info *ctrl_info)
 	else
 		mask = DMA_BIT_MASK(32);
 
-	rc = dma_set_mask(&ctrl_info->pci_dev->dev, mask);
+	rc = dma_set_mask_and_coherent(&ctrl_info->pci_dev->dev, mask);
 	if (rc) {
 		dev_err(&ctrl_info->pci_dev->dev, "failed to set DMA mask\n");
 		goto disable_device;
@@ -778,7 +778,7 @@ static bool pwrap_is_pmic_cipher_ready(struct pmic_wrapper *wrp)
 static int pwrap_init_cipher(struct pmic_wrapper *wrp)
 {
 	int ret;
-	u32 rdata;
+	u32 rdata = 0;
 
 	pwrap_writel(wrp, 0x1, PWRAP_CIPHER_SWRST);
 	pwrap_writel(wrp, 0x0, PWRAP_CIPHER_SWRST);
@@ -1475,12 +1475,7 @@ static const struct pci_device_id pxa2xx_spi_pci_compound_match[] = {
 
 static bool pxa2xx_spi_idma_filter(struct dma_chan *chan, void *param)
 {
-	struct device *dev = param;
-
-	if (dev != chan->device->dev->parent)
-		return false;
-
-	return true;
+	return param == chan->device->dev;
 }
 
 static struct pxa2xx_spi_master *
@@ -381,18 +381,9 @@ create_pagelist(char __user *buf, size_t count, unsigned short type,
 	int run, addridx, actual_pages;
 	unsigned long *need_release;
 
-	if (count >= INT_MAX - PAGE_SIZE)
-		return NULL;
-
 	offset = (unsigned int)buf & (PAGE_SIZE - 1);
 	num_pages = (count + offset + PAGE_SIZE - 1) / PAGE_SIZE;
 
-	if (num_pages > (SIZE_MAX - sizeof(PAGELIST_T) -
-			 sizeof(struct vchiq_pagelist_info)) /
-			(sizeof(u32) + sizeof(pages[0]) +
-			 sizeof(struct scatterlist)))
-		return NULL;
-
 	*ppagelist = NULL;
 
 	/* Allocate enough storage to hold the page pointers and the page
@@ -162,7 +162,8 @@ static int tsens_probe(struct platform_device *pdev)
 	if (tmdev->ops->calibrate) {
 		ret = tmdev->ops->calibrate(tmdev);
 		if (ret < 0) {
-			dev_err(dev, "tsens calibration failed\n");
+			if (ret != -EPROBE_DEFER)
+				dev_err(dev, "tsens calibration failed\n");
 			return ret;
 		}
 	}
@@ -269,7 +269,7 @@ static bool dw8250_fallback_dma_filter(struct dma_chan *chan, void *param)
 
 static bool dw8250_idma_filter(struct dma_chan *chan, void *param)
 {
-	return param == chan->device->dev->parent;
+	return param == chan->device->dev;
 }
 
 static void dw8250_quirks(struct uart_port *p, struct dw8250_data *data)
@@ -311,7 +311,7 @@ static void dw8250_quirks(struct uart_port *p, struct dw8250_data *data)
 		p->set_termios = dw8250_set_termios;
 	}
 
-	/* Platforms with iDMA */
+	/* Platforms with iDMA 64-bit */
 	if (platform_get_resource_byname(to_platform_device(p->dev),
 					 IORESOURCE_MEM, "lpss_priv")) {
 		p->set_termios = dw8250_set_termios;
@@ -392,7 +392,7 @@ static struct uart_ops sunhv_pops = {
 static struct uart_driver sunhv_reg = {
 	.owner			= THIS_MODULE,
 	.driver_name		= "sunhv",
-	.dev_name		= "ttyS",
+	.dev_name		= "ttyHV",
 	.major			= TTY_MAJOR,
 };
 
@@ -70,6 +70,9 @@ static const struct usb_device_id usb_quirk_list[] = {
 	/* Cherry Stream G230 2.0 (G85-231) and 3.0 (G85-232) */
 	{ USB_DEVICE(0x046a, 0x0023), .driver_info = USB_QUIRK_RESET_RESUME },
 
+	/* Logitech HD Webcam C270 */
+	{ USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME },
+
 	/* Logitech HD Pro Webcams C920, C920-C, C925e and C930e */
 	{ USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT },
 	{ USB_DEVICE(0x046d, 0x0841), .driver_info = USB_QUIRK_DELAY_INIT },
@@ -2552,8 +2552,10 @@ static void dwc2_free_dma_aligned_buffer(struct urb *urb)
 		return;
 
 	/* Restore urb->transfer_buffer from the end of the allocated area */
-	memcpy(&stored_xfer_buffer, urb->transfer_buffer +
-	       urb->transfer_buffer_length, sizeof(urb->transfer_buffer));
+	memcpy(&stored_xfer_buffer,
+	       PTR_ALIGN(urb->transfer_buffer + urb->transfer_buffer_length,
+			 dma_get_cache_alignment()),
+	       sizeof(urb->transfer_buffer));
 
 	if (usb_urb_dir_in(urb))
 		memcpy(stored_xfer_buffer, urb->transfer_buffer,
@@ -2580,6 +2582,7 @@ static int dwc2_alloc_dma_aligned_buffer(struct urb *urb, gfp_t mem_flags)
 	 * DMA
 	 */
 	kmalloc_size = urb->transfer_buffer_length +
+		(dma_get_cache_alignment() - 1) +
 		sizeof(urb->transfer_buffer);
 
 	kmalloc_ptr = kmalloc(kmalloc_size, mem_flags);
@@ -2590,7 +2593,8 @@ static int dwc2_alloc_dma_aligned_buffer(struct urb *urb, gfp_t mem_flags)
 	 * Position value of original urb->transfer_buffer pointer to the end
 	 * of allocation for later referencing
 	 */
-	memcpy(kmalloc_ptr + urb->transfer_buffer_length,
+	memcpy(PTR_ALIGN(kmalloc_ptr + urb->transfer_buffer_length,
+			 dma_get_cache_alignment()),
 	       &urb->transfer_buffer, sizeof(urb->transfer_buffer));
 
 	if (usb_urb_dir_out(urb))
@@ -1166,6 +1166,10 @@ static const struct usb_device_id option_ids[] = {
 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1213, 0xff) },
 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1214),
 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) },
+	{ USB_DEVICE(TELIT_VENDOR_ID, 0x1260),
+	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+	{ USB_DEVICE(TELIT_VENDOR_ID, 0x1261),
+	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x1900),	/* Telit LN940 (QMI) */
 	  .driver_info = NCTRL(0) | RSVD(1) },
 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1901, 0xff),	/* Telit LN940 (MBIM) */
@@ -1767,6 +1771,8 @@ static const struct usb_device_id option_ids[] = {
 	{ USB_DEVICE(ALINK_VENDOR_ID, SIMCOM_PRODUCT_SIM7100E),
 	  .driver_info = RSVD(5) | RSVD(6) },
 	{ USB_DEVICE_INTERFACE_CLASS(0x1e0e, 0x9003, 0xff) },	/* Simcom SIM7500/SIM7600 MBIM mode */
+	{ USB_DEVICE_INTERFACE_CLASS(0x1e0e, 0x9011, 0xff),	/* Simcom SIM7500/SIM7600 RNDIS mode */
+	  .driver_info = RSVD(7) },
 	{ USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X060S_X200),
 	  .driver_info = NCTRL(0) | NCTRL(1) | RSVD(4) },
 	{ USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X220_X500D),
@@ -101,6 +101,7 @@ static const struct usb_device_id id_table[] = {
 	{ USB_DEVICE(SANWA_VENDOR_ID, SANWA_PRODUCT_ID) },
 	{ USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530_PRODUCT_ID) },
 	{ USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) },
+	{ USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) },
 	{ }					/* Terminating entry */
 };
 
@@ -159,3 +159,6 @@
 #define SMART_VENDOR_ID		0x0b8c
 #define SMART_PRODUCT_ID	0x2303
 
+/* Allied Telesis VT-Kit3 */
+#define AT_VENDOR_ID		0x0caa
+#define AT_VTKIT3_PRODUCT_ID	0x3001
@@ -29,6 +29,11 @@ UNUSUAL_DEV(0x0bda, 0x0138, 0x0000, 0x9999,
 		"USB Card Reader",
 		USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0),
 
+UNUSUAL_DEV(0x0bda, 0x0153, 0x0000, 0x9999,
+		"Realtek",
+		"USB Card Reader",
+		USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0),
+
 UNUSUAL_DEV(0x0bda, 0x0158, 0x0000, 0x9999,
 		"Realtek",
 		"USB Card Reader",
@@ -285,6 +285,8 @@ static int hga_card_detect(void)
 	hga_vram_len  = 0x08000;
 
 	hga_vram = ioremap(0xb0000, hga_vram_len);
+	if (!hga_vram)
+		goto error;
 
 	if (request_region(0x3b0, 12, "hgafb"))
 		release_io_ports = 1;
@@ -1516,6 +1516,11 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	info->fix.smem_start = addr;
 	info->screen_base = (__u8 *)ioremap(addr, par->ramdac == IBM ?
 					    0x400000 : 0x800000);
+	if (!info->screen_base) {
+		release_mem_region(addr, size);
+		framebuffer_release(info);
+		return -ENOMEM;
+	}
 	info->fix.mmio_start = addr + 0x800000;
 	par->dc_regs = ioremap(addr + 0x800000, 0x1000);
 	par->cmap_regs_phys = addr + 0x840000;
@@ -1850,6 +1850,7 @@ comment "Watchdog Pretimeout Governors"
 
 config WATCHDOG_PRETIMEOUT_GOV
 	bool "Enable watchdog pretimeout governors"
+	depends on WATCHDOG_CORE
 	help
 	  The option allows to select watchdog pretimeout governors.
 
@@ -181,8 +181,10 @@ static void __imx2_wdt_set_timeout(struct watchdog_device *wdog,
 static int imx2_wdt_set_timeout(struct watchdog_device *wdog,
 				unsigned int new_timeout)
 {
-	__imx2_wdt_set_timeout(wdog, new_timeout);
+	unsigned int actual;
 
+	actual = min(new_timeout, wdog->max_hw_heartbeat_ms * 1000);
+	__imx2_wdt_set_timeout(wdog, actual);
 	wdog->timeout = new_timeout;
 	return 0;
 }
@@ -58,15 +58,13 @@ static void configfs_d_iput(struct dentry * dentry,
 	if (sd) {
 		/* Coordinate with configfs_readdir */
 		spin_lock(&configfs_dirent_lock);
-		/* Coordinate with configfs_attach_attr where will increase
-		 * sd->s_count and update sd->s_dentry to new allocated one.
-		 * Only set sd->dentry to null when this dentry is the only
-		 * sd owner.
-		 * If not do so, configfs_d_iput may run just after
-		 * configfs_attach_attr and set sd->s_dentry to null
-		 * even it's still in use.
+		/*
+		 * Set sd->s_dentry to null only when this dentry is the one
+		 * that is going to be killed. Otherwise configfs_d_iput may
+		 * run just after configfs_attach_attr and set sd->s_dentry to
+		 * NULL even it's still in use.
 		 */
-		if (atomic_read(&sd->s_count) <= 2)
+		if (sd->s_dentry == dentry)
 			sd->s_dentry = NULL;
 
 		spin_unlock(&configfs_dirent_lock);
@@ -1755,12 +1753,19 @@ int configfs_register_group(struct config_group *parent_group,
 
 	inode_lock_nested(d_inode(parent), I_MUTEX_PARENT);
 	ret = create_default_group(parent_group, group);
-	if (!ret) {
-		spin_lock(&configfs_dirent_lock);
-		configfs_dir_set_ready(group->cg_item.ci_dentry->d_fsdata);
-		spin_unlock(&configfs_dirent_lock);
-	}
+	if (ret)
+		goto err_out;
+	spin_lock(&configfs_dirent_lock);
+	configfs_dir_set_ready(group->cg_item.ci_dentry->d_fsdata);
+	spin_unlock(&configfs_dirent_lock);
+	inode_unlock(d_inode(parent));
+	return 0;
+err_out:
 	inode_unlock(d_inode(parent));
+	mutex_lock(&subsys->su_mutex);
+	unlink_group(group);
+	mutex_unlock(&subsys->su_mutex);
 	return ret;
 }
 EXPORT_SYMBOL(configfs_register_group);

@@ -160,12 +160,17 @@ static int fat_file_release(struct inode *inode, struct file *filp)
 int fat_file_fsync(struct file *filp, loff_t start, loff_t end, int datasync)
 {
 	struct inode *inode = filp->f_mapping->host;
-	int res, err;
+	int err;
+
+	err = __generic_file_fsync(filp, start, end, datasync);
+	if (err)
+		return err;
 
-	res = generic_file_fsync(filp, start, end, datasync);
 	err = sync_mapping_buffers(MSDOS_SB(inode->i_sb)->fat_inode->i_mapping);
+	if (err)
+		return err;
 
-	return res ? res : err;
+	return blkdev_issue_flush(inode->i_sb->s_bdev, GFP_KERNEL, NULL);
 }
 
 

@@ -1672,7 +1672,7 @@ static int fuse_retrieve(struct fuse_conn *fc, struct inode *inode,
 	offset = outarg->offset & ~PAGE_MASK;
 	file_size = i_size_read(inode);
 
-	num = outarg->size;
+	num = min(outarg->size, fc->max_write);
 	if (outarg->offset > file_size)
 		num = 0;
 	else if (outarg->offset + num > file_size)

@@ -1804,8 +1804,13 @@ int file_remove_privs(struct file *file)
 	int kill;
 	int error = 0;
 
-	/* Fast path for nothing security related */
-	if (IS_NOSEC(inode))
+	/*
+	 * Fast path for nothing security related.
+	 * As well for non-regular files, e.g. blkdev inodes.
+	 * For example, blkdev_write_iter() might get here
+	 * trying to remove privs which it is not allowed to.
+	 */
+	if (IS_NOSEC(inode) || !S_ISREG(inode->i_mode))
 		return 0;
 
 	kill = dentry_needs_remove_privs(dentry);

@@ -116,8 +116,11 @@ void nfsd_put_raparams(struct file *file, struct raparms *ra);
 
 static inline int fh_want_write(struct svc_fh *fh)
 {
-	int ret = mnt_want_write(fh->fh_export->ex_path.mnt);
+	int ret;
 
+	if (fh->fh_want_write)
+		return 0;
+	ret = mnt_want_write(fh->fh_export->ex_path.mnt);
 	if (!ret)
 		fh->fh_want_write = true;
 	return ret;

@@ -310,6 +310,18 @@ int ocfs2_dentry_attach_lock(struct dentry *dentry,
 
 out_attach:
 	spin_lock(&dentry_attach_lock);
+	if (unlikely(dentry->d_fsdata && !alias)) {
+		/* d_fsdata is set by a racing thread which is doing
+		 * the same thing as this thread is doing. Leave the racing
+		 * thread going ahead and we return here.
+		 */
+		spin_unlock(&dentry_attach_lock);
+		iput(dl->dl_inode);
+		ocfs2_lock_res_free(&dl->dl_lockres);
+		kfree(dl);
+		return 0;
+	}
+
 	dentry->d_fsdata = dl;
 	dl->dl_count++;
 	spin_unlock(&dentry_attach_lock);

@@ -472,7 +472,7 @@ static inline struct cgroup_subsys_state *task_css(struct task_struct *task,
  *
  * Find the css for the (@task, @subsys_id) combination, increment a
  * reference on and return it. This function is guaranteed to return a
- * valid css.
+ * valid css. The returned css may already have been offlined.
  */
 static inline struct cgroup_subsys_state *
 task_get_css(struct task_struct *task, int subsys_id)
@@ -482,7 +482,13 @@ task_get_css(struct task_struct *task, int subsys_id)
 	rcu_read_lock();
 	while (true) {
 		css = task_css(task, subsys_id);
-		if (likely(css_tryget_online(css)))
+		/*
+		 * Can't use css_tryget_online() here. A task which has
+		 * PF_EXITING set may stay associated with an offline css.
+		 * If such task calls this function, css_tryget_online()
+		 * will keep failing.
+		 */
+		if (likely(css_tryget(css)))
 			break;
 		cpu_relax();
 	}

@@ -641,7 +641,6 @@ static inline void pwm_remove_table(struct pwm_lookup *table, size_t num)
 #ifdef CONFIG_PWM_SYSFS
 void pwmchip_sysfs_export(struct pwm_chip *chip);
 void pwmchip_sysfs_unexport(struct pwm_chip *chip);
-void pwmchip_sysfs_unexport_children(struct pwm_chip *chip);
 #else
 static inline void pwmchip_sysfs_export(struct pwm_chip *chip)
 {
@@ -650,10 +649,6 @@ static inline void pwmchip_sysfs_export(struct pwm_chip *chip)
 static inline void pwmchip_sysfs_unexport(struct pwm_chip *chip)
 {
 }
-
-static inline void pwmchip_sysfs_unexport_children(struct pwm_chip *chip)
-{
-}
 #endif /* CONFIG_PWM_SYSFS */
 
 #endif /* __LINUX_PWM_H */

@@ -176,9 +176,6 @@ struct adv_info {
 
 #define HCI_MAX_SHORT_NAME_LENGTH	10
 
-/* Min encryption key size to match with SMP */
-#define HCI_MIN_ENC_KEY_SIZE		7
-
 /* Default LE RPA expiry time, 15 minutes */
 #define HCI_DEFAULT_RPA_TIMEOUT	(15 * 60)

ipc/mqueue.c
@@ -371,7 +371,8 @@ static void mqueue_evict_inode(struct inode *inode)
 	struct user_struct *user;
 	unsigned long mq_bytes, mq_treesize;
 	struct ipc_namespace *ipc_ns;
-	struct msg_msg *msg;
+	struct msg_msg *msg, *nmsg;
+	LIST_HEAD(tmp_msg);
 
 	clear_inode(inode);
 
@@ -382,10 +383,15 @@ static void mqueue_evict_inode(struct inode *inode)
 	info = MQUEUE_I(inode);
 	spin_lock(&info->lock);
 	while ((msg = msg_get(info)) != NULL)
-		free_msg(msg);
+		list_add_tail(&msg->m_list, &tmp_msg);
 	kfree(info->node_cache);
 	spin_unlock(&info->lock);
 
+	list_for_each_entry_safe(msg, nmsg, &tmp_msg, m_list) {
+		list_del(&msg->m_list);
+		free_msg(msg);
+	}
+
 	/* Total amount of bytes accounted for the mqueue */
 	mq_treesize = info->attr.mq_maxmsg * sizeof(struct msg_msg) +
 		min_t(unsigned int, info->attr.mq_maxmsg, MQ_PRIO_MAX) *

@@ -18,6 +18,7 @@
 #include <linux/utsname.h>
 #include <linux/proc_ns.h>
 #include <linux/uaccess.h>
+#include <linux/sched.h>
 
 #include "util.h"
 
@@ -64,6 +65,9 @@ static struct msg_msg *alloc_msg(size_t len)
 	pseg = &msg->next;
 	while (len > 0) {
 		struct msg_msgseg *seg;
+
+		cond_resched();
+
 		alen = min(len, DATALEN_SEG);
 		seg = kmalloc(sizeof(*seg) + alen, GFP_KERNEL_ACCOUNT);
 		if (seg == NULL)
@@ -176,6 +180,8 @@ void free_msg(struct msg_msg *msg)
 	kfree(msg);
 	while (seg != NULL) {
 		struct msg_msgseg *tmp = seg->next;
+
+		cond_resched();
 		kfree(seg);
 		seg = tmp;
 	}

@@ -28,6 +28,7 @@ KCOV_INSTRUMENT_extable.o := n
 # Don't self-instrument.
 KCOV_INSTRUMENT_kcov.o := n
 KASAN_SANITIZE_kcov.o := n
+CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 
 # cond_syscall is currently not LTO compatible
 CFLAGS_sys_ni.o = $(DISABLE_LTO)

@@ -447,6 +447,15 @@ int commit_creds(struct cred *new)
 		if (task->mm)
 			set_dumpable(task->mm, suid_dumpable);
 		task->pdeath_signal = 0;
+		/*
+		 * If a task drops privileges and becomes nondumpable,
+		 * the dumpability change must become visible before
+		 * the credential change; otherwise, a __ptrace_may_access()
+		 * racing with this change may be able to attach to a task it
+		 * shouldn't be able to attach to (as if the task had dropped
+		 * privileges without becoming nondumpable).
+		 * Pairs with a read barrier in __ptrace_may_access().
+		 */
 		smp_wmb();
 	}
 

@@ -49,14 +49,30 @@ static void perf_output_put_handle(struct perf_output_handle *handle)
 	unsigned long head;
 
 again:
+	/*
+	 * In order to avoid publishing a head value that goes backwards,
+	 * we must ensure the load of @rb->head happens after we've
+	 * incremented @rb->nest.
+	 *
+	 * Otherwise we can observe a @rb->head value before one published
+	 * by an IRQ/NMI happening between the load and the increment.
+	 */
+	barrier();
 	head = local_read(&rb->head);
 
 	/*
-	 * IRQ/NMI can happen here, which means we can miss a head update.
+	 * IRQ/NMI can happen here and advance @rb->head, causing our
+	 * load above to be stale.
 	 */
 
-	if (!local_dec_and_test(&rb->nest))
+	/*
+	 * If this isn't the outermost nesting, we don't have to update
+	 * @rb->user_page->data_head.
+	 */
+	if (local_read(&rb->nest) > 1) {
+		local_dec(&rb->nest);
 		goto out;
+	}
 
 	/*
 	 * Since the mmap() consumer (userspace) can run on a different CPU:
@@ -88,9 +104,18 @@ again:
 	rb->user_page->data_head = head;
 
 	/*
-	 * Now check if we missed an update -- rely on previous implied
-	 * compiler barriers to force a re-read.
+	 * We must publish the head before decrementing the nest count,
+	 * otherwise an IRQ/NMI can publish a more recent head value and our
+	 * write will (temporarily) publish a stale value.
 	 */
+	barrier();
+	local_set(&rb->nest, 0);
+
+	/*
+	 * Ensure we decrement @rb->nest before we validate the @rb->head.
+	 * Otherwise we cannot be sure we caught the 'last' nested update.
+	 */
+	barrier();
 	if (unlikely(head != local_read(&rb->head))) {
 		local_inc(&rb->nest);
 		goto again;

@@ -322,6 +322,16 @@ static int __ptrace_may_access(struct task_struct *task, unsigned int mode)
 	return -EPERM;
 ok:
 	rcu_read_unlock();
+	/*
+	 * If a task drops privileges and becomes nondumpable (through a syscall
+	 * like setresuid()) while we are trying to access it, we must ensure
+	 * that the dumpability is read after the credentials; otherwise,
+	 * we may be able to attach to a task that we shouldn't be able to
+	 * attach to (as if the task had dropped privileges without becoming
+	 * nondumpable).
+	 * Pairs with a write barrier in commit_creds().
+	 */
+	smp_rmb();
 	mm = task->mm;
 	if (mm &&
 	    ((get_dumpable(mm) != SUID_DUMP_USER) &&
@@ -710,6 +720,10 @@ static int ptrace_peek_siginfo(struct task_struct *child,
 	if (arg.nr < 0)
 		return -EINVAL;
 
+	/* Ensure arg.off fits in an unsigned long */
+	if (arg.off > ULONG_MAX)
+		return 0;
+
 	if (arg.flags & PTRACE_PEEKSIGINFO_SHARED)
 		pending = &child->signal->shared_pending;
 	else
@@ -717,18 +731,20 @@ static int ptrace_peek_siginfo(struct task_struct *child,
 
 	for (i = 0; i < arg.nr; ) {
 		siginfo_t info;
-		s32 off = arg.off + i;
+		unsigned long off = arg.off + i;
+		bool found = false;
 
 		spin_lock_irq(&child->sighand->siglock);
 		list_for_each_entry(q, &pending->list, list) {
 			if (!off--) {
+				found = true;
 				copy_siginfo(&info, &q->info);
 				break;
 			}
 		}
 		spin_unlock_irq(&child->sighand->siglock);
 
-		if (off >= 0) /* beyond the end of the list */
+		if (!found) /* beyond the end of the list */
 			break;
 
 #ifdef CONFIG_COMPAT

@@ -1765,7 +1765,7 @@ static int validate_prctl_map(struct prctl_mm_map *prctl_map)
 	((unsigned long)prctl_map->__m1 __op				\
 	 (unsigned long)prctl_map->__m2) ? 0 : -EINVAL
 	error  = __prctl_check_order(start_code, <, end_code);
-	error |= __prctl_check_order(start_data, <, end_data);
+	error |= __prctl_check_order(start_data,<=, end_data);
 	error |= __prctl_check_order(start_brk, <=, brk);
 	error |= __prctl_check_order(arg_start, <=, arg_end);
 	error |= __prctl_check_order(env_start, <=, env_end);

@@ -2595,8 +2595,10 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table, int
 			if (neg)
 				continue;
 			val = convmul * val / convdiv;
-			if ((min && val < *min) || (max && val > *max))
-				continue;
+			if ((min && val < *min) || (max && val > *max)) {
+				err = -EINVAL;
+				break;
+			}
 			*i = val;
 		} else {
 			val = convdiv * (*i) / convmul;

@@ -639,7 +639,7 @@ static inline void process_adjtimex_modes(struct timex *txc,
 		time_constant = max(time_constant, 0l);
 	}
 
-	if (txc->modes & ADJ_TAI && txc->constant > 0)
+	if (txc->modes & ADJ_TAI && txc->constant >= 0)
 		*time_tai = txc->constant;
 
 	if (txc->modes & ADJ_OFFSET)

mm/cma.c
@@ -100,8 +100,10 @@ static int __init cma_activate_area(struct cma *cma)
 
 	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
 
-	if (!cma->bitmap)
+	if (!cma->bitmap) {
+		cma->count = 0;
 		return -ENOMEM;
+	}
 
 	WARN_ON_ONCE(!pfn_valid(pfn));
 	zone = page_zone(pfn_to_page(pfn));

@@ -57,7 +57,7 @@ static int cma_maxchunk_get(void *data, u64 *val)
 	mutex_lock(&cma->lock);
 	for (;;) {
 		start = find_next_zero_bit(cma->bitmap, bitmap_maxno, end);
-		if (start >= cma->count)
+		if (start >= bitmap_maxno)
 			break;
 		end = find_next_bit(cma->bitmap, bitmap_maxno, start);
 		maxchunk = max(end - start, maxchunk);

Some files were not shown because too many files have changed in this diff.