Merge 4.9.90 into android-4.9
Changes in 4.9.90:
tpm: fix potential buffer overruns caused by bit glitches on the bus
ASoC: rsnd: check src mod pointer for rsnd_mod_id()
SMB3: Validate negotiate request must always be signed
CIFS: Enable encryption during session setup phase
staging: android: ashmem: Fix possible deadlock in ashmem_ioctl
Revert "led: core: Fix brightness setting when setting delay_off=0"
led: core: Clear LED_BLINK_SW flag in led_blink_set()
platform/x86: asus-nb-wmi: Add wapf4 quirk for the X302UA
bonding: handle link transition from FAIL to UP correctly
regulator: anatop: set default voltage selector for pcie
power: supply: bq24190_charger: Limit over/under voltage fault logging
x86: i8259: export legacy_pic symbol
rtc: cmos: Do not assume irq 8 for rtc when there are no legacy irqs
Input: ar1021_i2c - fix too long name in driver's device table
time: Change posix clocks ops interfaces to use timespec64
ACPI/processor: Fix error handling in __acpi_processor_start()
ACPI/processor: Replace racy task affinity logic
cpufreq/sh: Replace racy task affinity logic
genirq: Use irqd_get_trigger_type to compare the trigger type for shared IRQs
i2c: i2c-scmi: add a MS HID
net: ipv6: send unsolicited NA on admin up
media/dvb-core: Race condition when writing to CAM
btrfs: fix a bogus warning when converting only data or metadata
ASoC: Intel: Atom: update Thinkpad 10 quirk
tools/testing/nvdimm: fix nfit_test shutdown crash
spi: dw: Disable clock after unregistering the host
powerpc/64s: Remove SAO feature from Power9 DD1
ath: Fix updating radar flags for coutry code India
clk: ns2: Correct SDIO bits
iwlwifi: split the handler and the wake parts of the notification infra
iwlwifi: a000: fix memory offsets and lengths
scsi: virtio_scsi: Always try to read VPD pages
KVM: PPC: Book3S PR: Exit KVM on failed mapping
mwifiex: don't leak 'chan_stats' on reset
x86/reboot: Turn off KVM when halting a CPU
ARM: 8668/1: ftrace: Fix dynamic ftrace with DEBUG_RODATA and !FRAME_POINTER
irqchip/mips-gic: Separate IPI reservation & usage tracking
iommu/omap: Register driver before setting IOMMU ops
md/raid10: wait up frozen array in handle_write_completed
NFS: Fix missing pg_cleanup after nfs_pageio_cond_complete()
tcp: remove poll() flakes with FastOpen
e1000e: fix timing for 82579 Gigabit Ethernet controller
ALSA: hda - Fix headset microphone detection for ASUS N551 and N751
IB/ipoib: Fix deadlock between ipoib_stop and mcast join flow
IB/ipoib: Update broadcast object if PKey value was changed in index 0
HSI: ssi_protocol: double free in ssip_pn_xmit()
IB/mlx4: Take write semaphore when changing the vma struct
IB/mlx4: Change vma from shared to private
IB/mlx5: Take write semaphore when changing the vma struct
IB/mlx5: Change vma from shared to private
IB/mlx5: Set correct SL in completion for RoCE
ASoC: Intel: Skylake: Uninitialized variable in probe_codec()
ibmvnic: Disable irq prior to close
netvsc: Deal with rescinded channels correctly
Fix driver usage of 128B WQEs when WQ_CREATE is V1.
Fix Express lane queue creation.
gpio: gpio-wcove: fix irq pending status bit width
netfilter: xt_CT: fix refcnt leak on error path
openvswitch: Delete conntrack entry clashing with an expectation.
netfilter: nf_ct_helper: permit cthelpers with different names via nfnetlink
mmc: host: omap_hsmmc: checking for NULL instead of IS_ERR()
tipc: check return value of nlmsg_new
wan: pc300too: abort path on failure
qlcnic: fix unchecked return value
netfilter: nft_dynset: continue to next expr if _OP_ADD succeeded
platform/x86: intel-vbtn: add volume up and down
scsi: mac_esp: Replace bogus memory barrier with spinlock
infiniband/uverbs: Fix integer overflows
pNFS: Fix use after free issues in pnfs_do_read()
xprtrdma: Cancel refresh worker during buffer shutdown
NFS: don't try to cross a mountpount when there isn't one there.
iio: st_pressure: st_accel: Initialise sensor platform data properly
mt7601u: check return value of alloc_skb
libertas: check return value of alloc_workqueue
rndis_wlan: add return value validation
Btrfs: fix incorrect space accounting after failure to insert inline extent
Btrfs: send, fix file hole not being preserved due to inline extent
Btrfs: fix extent map leak during fallocate error path
orangefs: do not wait for timeout if umounting
mac80211: don't parse encrypted management frames in ieee80211_frame_acked
ACPICA: iasl: Fix IORT SMMU GSI disassembling
iio: hid-sensor: fix return of -EINVAL on invalid values in ret or value
dt-bindings: mfd: axp20x: Add "xpowers,master-mode" property for AXP806 PMICs
mfd: palmas: Reset the POWERHOLD mux during power off
mtip32xx: use runtime tag to initialize command header
x86/KASLR: Fix kexec kernel boot crash when KASLR randomization fails
gpio: gpio-wcove: fix GPIO IRQ status mask
staging: unisys: visorhba: fix s-Par to boot with option CONFIG_VMAP_STACK set to y
staging: wilc1000: fix unchecked return value
ipvs: explicitly forbid ipv6 service/dest creation if ipv6 mod is disabled
mac80211: Fix possible sband related NULL pointer de-reference
mmc: sdhci-of-esdhc: limit SD clock for ls1012a/ls1046a
netfilter: x_tables: unlock on error in xt_find_table_lock()
ARM: DRA7: clockdomain: Change the CLKTRCTRL of CM_PCIE_CLKSTCTRL to SW_WKUP
IB/rdmavt: restore IRQs on error path in rvt_create_ah()
IB/hfi1: Fix softlockup issue
platform/x86: asus-wmi: try to set als by default
ipmi/watchdog: fix wdog hang on panic waiting for ipmi response
ACPI / PMIC: xpower: Fix power_table addresses
drm/amdgpu: fix gpu reset crash
drm/nouveau/kms: Increase max retries in scanout position queries.
jbd2: Fix lockdep splat with generic/270 test
ixgbevf: fix size of queue stats length
net: ethernet: ucc_geth: fix MEM_PART_MURAM mode
soc/fsl/qe: round brg_freq to 1kHz granularity
Bluetooth: hci_ldisc: Add protocol check to hci_uart_dequeue()
Bluetooth: hci_ldisc: Add protocol check to hci_uart_tx_wakeup()
vxlan: correctly handle ipv6.disable module parameter
qed: Unlock on error in qed_vf_pf_acquire()
bnx2x: Align RX buffers
power: supply: bq24190_charger: Add disable-reset device-property
power: supply: isp1704: Fix unchecked return value of devm_kzalloc
power: supply: pda_power: move from timer to delayed_work
Input: twl4030-pwrbutton - use correct device for irq request
IB/rxe: Don't clamp residual length to mtu
md/raid10: skip spare disk as 'first' disk
ACPI / power: Delay turning off unused power resources after suspend
ia64: fix module loading for gcc-5.4
tcm_fileio: Prevent information leak for short reads
x86/xen: split xen_smp_prepare_boot_cpu()
video: fbdev: udlfb: Fix buffer on stack
sm501fb: don't return zero on failure path in sm501fb_start()
pNFS: Fix a deadlock when coalescing writes and returning the layout
net: hns: fix ethtool_get_strings overflow in hns driver
cifs: small underflow in cnvrtDosUnixTm()
mm: fix check for reclaimable pages in PF_MEMALLOC reclaim throttling
mm, vmstat: suppress pcp stats for unpopulated zones in zoneinfo
mm: hwpoison: call shake_page() after try_to_unmap() for mlocked page
rtc: ds1374: wdt: Fix issue with timeout scaling from secs to wdt ticks
rtc: ds1374: wdt: Fix stop/start ioctl always returning -EINVAL
ath10k: fix out of bounds access to local buffer
perf tests kmod-path: Don't fail if compressed modules aren't supported
block/mq: Cure cpu hotplug lock inversion
Bluetooth: hci_qca: Avoid setup failure on missing rampatch
Bluetooth: btqcomsmd: Fix skb double free corruption
media: c8sectpfe: fix potential NULL pointer dereference in c8sectpfe_timer_interrupt
drm/msm: fix leak in failed get_pages
RDMA/iwpm: Fix uninitialized error code in iwpm_send_mapinfo()
rtlwifi: rtl_pci: Fix the bug when inactiveps is enabled.
media: bt8xx: Fix err 'bt878_probe()'
ath10k: handling qos at STA side based on AP WMM enable/disable
media: [RESEND] media: dvb-frontends: Add delay to Si2168 restart
qmi_wwan: set FLAG_SEND_ZLP to avoid network initiated disconnect
serial: 8250_dw: Disable clock on error
cros_ec: fix nul-termination for firmware build info
watchdog: Fix potential kref imbalance when opening watchdog
platform/chrome: Use proper protocol transfer function
dmaengine: zynqmp_dma: Fix race condition in the probe
drm/tilcdc: ensure nonatomic iowrite64 is not used
mmc: avoid removing non-removable hosts during suspend
IB/ipoib: Avoid memory leak if the SA returns a different DGID
RDMA/cma: Use correct size when writing netlink stats
IB/umem: Fix use of npages/nmap fields
iser-target: avoid reinitializing rdma contexts for isert commands
vgacon: Set VGA struct resource types
omapdrm: panel: fix compatible vendor string for td028ttec1
drm/omap: DMM: Check for DMM readiness after successful transaction commit
pty: cancel pty slave port buf's work in tty_release
coresight: Fix disabling of CoreSight TPIU
pinctrl: Really force states during suspend/resume
pinctrl: rockchip: enable clock when reading pin direction register
iommu/vt-d: clean up pr_irq if request_threaded_irq fails
ip6_vti: adjust vti mtu according to mtu of lower device
RDMA/ocrdma: Fix permissions for OCRDMA_RESET_STATS
ARM: dts: aspeed-evb: Add unit name to memory node
nfsd4: permit layoutget of executable-only files
clk: Don't touch hardware when reparenting during registration
clk: axi-clkgen: Correctly handle nocount bit in recalc_rate()
clk: si5351: Rename internal plls to avoid name collisions
dmaengine: ti-dma-crossbar: Fix event mapping for TPCC_EVT_MUX_60_63
IB/mlx5: Fix integer overflows in mlx5_ib_create_srq
IB/mlx5: Fix out-of-bounds read in create_raw_packet_qp_rq
clk: migrate the count of orphaned clocks at init
RDMA/ucma: Fix access to non-initialized CM_ID object
RDMA/ucma: Don't allow join attempts for unsupported AF family
usb: gadget: f_hid: fix: Move IN request allocation to set_alt()
Linux 4.9.90

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -2,7 +2,7 @@ Toppoly TD028TTEC1 Panel
========================

Required properties:
- compatible: "toppoly,td028ttec1"
- compatible: "tpo,td028ttec1"

Optional properties:
- label: a symbolic name for the panel
@@ -14,7 +14,7 @@ Example
-------

lcd-panel: td028ttec1@0 {
compatible = "toppoly,td028ttec1";
compatible = "tpo,td028ttec1";
reg = <0>;
spi-max-frequency = <100000>;
spi-cpol;
@@ -28,6 +28,9 @@ Optional properties:
regulator to drive the OTG VBus, rather then as an input pin
which signals whether the board is driving OTG VBus or not.

- x-powers,master-mode: Boolean (axp806 only). Set this when the PMIC is
wired for master mode. The default is slave mode.

- <input>-supply: a phandle to the regulator supply node. May be omitted if
inputs are unregulated, such as using the IPSOUT output
from the PMIC.
Makefile
@@ -1,6 +1,6 @@
VERSION = 4
PATCHLEVEL = 9
SUBLEVEL = 89
SUBLEVEL = 90
EXTRAVERSION =
NAME = Roaring Lionus
@@ -20,6 +20,7 @@
struct pci_controller *pci_vga_hose;
static struct resource alpha_vga = {
.name = "alpha-vga+",
.flags = IORESOURCE_IO,
.start = 0x3C0,
.end = 0x3DF
};
@@ -15,7 +15,7 @@
bootargs = "console=ttyS4,115200 earlyprintk";
};

memory {
memory@80000000 {
reg = <0x80000000 0x20000000>;
};
};
@@ -29,11 +29,6 @@
#endif

#ifdef CONFIG_DYNAMIC_FTRACE
#ifdef CONFIG_OLD_MCOUNT
#define OLD_MCOUNT_ADDR ((unsigned long) mcount)
#define OLD_FTRACE_ADDR ((unsigned long) ftrace_caller_old)

#define OLD_NOP 0xe1a00000 /* mov r0, r0 */

static int __ftrace_modify_code(void *data)
{
@@ -51,6 +46,12 @@ void arch_ftrace_update_code(int command)
stop_machine(__ftrace_modify_code, &command, NULL);
}

#ifdef CONFIG_OLD_MCOUNT
#define OLD_MCOUNT_ADDR ((unsigned long) mcount)
#define OLD_FTRACE_ADDR ((unsigned long) ftrace_caller_old)

#define OLD_NOP 0xe1a00000 /* mov r0, r0 */

static unsigned long ftrace_nop_replace(struct dyn_ftrace *rec)
{
return rec->arch.old_mcount ? OLD_NOP : NOP;
@@ -524,7 +524,7 @@ static struct clockdomain pcie_7xx_clkdm = {
.dep_bit = DRA7XX_PCIE_STATDEP_SHIFT,
.wkdep_srcs = pcie_wkup_sleep_deps,
.sleepdep_srcs = pcie_wkup_sleep_deps,
.flags = CLKDM_CAN_HWSUP_SWSUP,
.flags = CLKDM_CAN_SWSUP,
};

static struct clockdomain atl_7xx_clkdm = {
@@ -153,7 +153,7 @@ slot (const struct insn *insn)
static int
apply_imm64 (struct module *mod, struct insn *insn, uint64_t val)
{
if (slot(insn) != 2) {
if (slot(insn) != 1 && slot(insn) != 2) {
printk(KERN_ERR "%s: invalid slot number %d for IMM64\n",
mod->name, slot(insn));
return 0;
@@ -165,7 +165,7 @@ apply_imm64 (struct module *mod, struct insn *insn, uint64_t val)
static int
apply_imm60 (struct module *mod, struct insn *insn, uint64_t val)
{
if (slot(insn) != 2) {
if (slot(insn) != 1 && slot(insn) != 2) {
printk(KERN_ERR "%s: invalid slot number %d for IMM60\n",
mod->name, slot(insn));
return 0;
@@ -474,7 +474,8 @@ enum {
CPU_FTR_ICSWX | CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX_COPY | \
CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \
CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_ARCH_300)
#define CPU_FTRS_POWER9_DD1 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD1)
#define CPU_FTRS_POWER9_DD1 ((CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD1) & \
(~CPU_FTR_SAO))
#define CPU_FTRS_CELL (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
CPU_FTR_ALTIVEC_COMP | CPU_FTR_MMCRA | CPU_FTR_SMT | \
@@ -177,12 +177,15 @@ map_again:
ret = mmu_hash_ops.hpte_insert(hpteg, vpn, hpaddr, rflags, vflags,
hpsize, hpsize, MMU_SEGSIZE_256M);

if (ret < 0) {
if (ret == -1) {
/* If we couldn't map a primary PTE, try a secondary */
hash = ~hash;
vflags ^= HPTE_V_SECONDARY;
attempt++;
goto map_again;
} else if (ret < 0) {
r = -EIO;
goto out_unlock;
} else {
trace_kvm_book3s_64_mmu_map(rflags, hpteg,
vpn, hpaddr, orig_pte);
@@ -627,7 +627,11 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
kvmppc_mmu_unmap_page(vcpu, &pte);
}
/* The guest's PTE is not mapped yet. Map on the host */
kvmppc_mmu_map_page(vcpu, &pte, iswrite);
if (kvmppc_mmu_map_page(vcpu, &pte, iswrite) == -EIO) {
/* Exit KVM if mapping failed */
run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
return RESUME_HOST;
}
if (data)
vcpu->stat.sp_storage++;
else if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
@@ -460,10 +460,17 @@ void choose_random_location(unsigned long input,
add_identity_map(random_addr, output_size);
*output = random_addr;
}

/*
* This loads the identity mapping page table.
* This should only be done if a new physical address
* is found for the kernel, otherwise we should keep
* the old page table to make it be like the "nokaslr"
* case.
*/
finalize_identity_maps();
}

/* This actually loads the identity pagetable on x86_64. */
finalize_identity_maps();

/* Pick random virtual address starting from LOAD_PHYSICAL_ADDR. */
if (IS_ENABLED(CONFIG_X86_64))
@@ -418,6 +418,7 @@ struct legacy_pic default_legacy_pic = {
};

struct legacy_pic *legacy_pic = &default_legacy_pic;
EXPORT_SYMBOL(legacy_pic);

static int __init i8259A_init_ops(void)
{
@@ -33,6 +33,7 @@
#include <asm/mce.h>
#include <asm/trace/irq_vectors.h>
#include <asm/kexec.h>
#include <asm/virtext.h>

/*
* Some notes on x86 processor bugs affecting SMP operation:
@@ -162,6 +163,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
return NMI_HANDLED;

cpu_emergency_vmxoff();
stop_this_cpu(NULL);

return NMI_HANDLED;
@@ -174,6 +176,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
asmlinkage __visible void smp_reboot_interrupt(void)
{
ipi_entering_ack_irq();
cpu_emergency_vmxoff();
stop_this_cpu(NULL);
irq_exit();
}
@@ -299,35 +299,46 @@ static void __init xen_filter_cpu_maps(void)

}

static void __init xen_smp_prepare_boot_cpu(void)
static void __init xen_pv_smp_prepare_boot_cpu(void)
{
BUG_ON(smp_processor_id() != 0);
native_smp_prepare_boot_cpu();

if (xen_pv_domain()) {
if (!xen_feature(XENFEAT_writable_page_tables))
/* We've switched to the "real" per-cpu gdt, so make
* sure the old memory can be recycled. */
make_lowmem_page_readwrite(xen_initial_gdt);
if (!xen_feature(XENFEAT_writable_page_tables))
/* We've switched to the "real" per-cpu gdt, so make
* sure the old memory can be recycled. */
make_lowmem_page_readwrite(xen_initial_gdt);

#ifdef CONFIG_X86_32
/*
* Xen starts us with XEN_FLAT_RING1_DS, but linux code
* expects __USER_DS
*/
loadsegment(ds, __USER_DS);
loadsegment(es, __USER_DS);
/*
* Xen starts us with XEN_FLAT_RING1_DS, but linux code
* expects __USER_DS
*/
loadsegment(ds, __USER_DS);
loadsegment(es, __USER_DS);
#endif

xen_filter_cpu_maps();
xen_setup_vcpu_info_placement();
}
xen_filter_cpu_maps();
xen_setup_vcpu_info_placement();

/*
* The alternative logic (which patches the unlock/lock) runs before
* the smp bootup up code is activated. Hence we need to set this up
* the core kernel is being patched. Otherwise we will have only
* modules patched but not core code.
*/
xen_init_spinlocks();
}

static void __init xen_hvm_smp_prepare_boot_cpu(void)
{
BUG_ON(smp_processor_id() != 0);
native_smp_prepare_boot_cpu();

/*
* Setup vcpu_info for boot CPU.
*/
if (xen_hvm_domain())
xen_vcpu_setup(0);
xen_vcpu_setup(0);

/*
* The alternative logic (which patches the unlock/lock) runs before
@@ -733,7 +744,7 @@ static irqreturn_t xen_irq_work_interrupt(int irq, void *dev_id)
}

static const struct smp_ops xen_smp_ops __initconst = {
.smp_prepare_boot_cpu = xen_smp_prepare_boot_cpu,
.smp_prepare_boot_cpu = xen_pv_smp_prepare_boot_cpu,
.smp_prepare_cpus = xen_smp_prepare_cpus,
.smp_cpus_done = xen_smp_cpus_done,
@@ -772,5 +783,5 @@ void __init xen_hvm_smp_init(void)
smp_ops.cpu_die = xen_cpu_die;
smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
smp_ops.smp_prepare_boot_cpu = xen_smp_prepare_boot_cpu;
smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
}
@@ -2014,15 +2014,15 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,

blk_mq_init_cpu_queues(q, set->nr_hw_queues);

get_online_cpus();
mutex_lock(&all_q_mutex);
get_online_cpus();

list_add_tail(&q->all_q_node, &all_q_list);
blk_mq_add_queue_tag_set(set, q);
blk_mq_map_swqueue(q, cpu_online_mask);

mutex_unlock(&all_q_mutex);
put_online_cpus();
mutex_unlock(&all_q_mutex);

return q;
@@ -28,97 +28,97 @@ static struct pmic_table power_table[] = {
.address = 0x00,
.reg = 0x13,
.bit = 0x05,
},
}, /* ALD1 */
{
.address = 0x04,
.reg = 0x13,
.bit = 0x06,
},
}, /* ALD2 */
{
.address = 0x08,
.reg = 0x13,
.bit = 0x07,
},
}, /* ALD3 */
{
.address = 0x0c,
.reg = 0x12,
.bit = 0x03,
},
}, /* DLD1 */
{
.address = 0x10,
.reg = 0x12,
.bit = 0x04,
},
}, /* DLD2 */
{
.address = 0x14,
.reg = 0x12,
.bit = 0x05,
},
}, /* DLD3 */
{
.address = 0x18,
.reg = 0x12,
.bit = 0x06,
},
}, /* DLD4 */
{
.address = 0x1c,
.reg = 0x12,
.bit = 0x00,
},
}, /* ELD1 */
{
.address = 0x20,
.reg = 0x12,
.bit = 0x01,
},
}, /* ELD2 */
{
.address = 0x24,
.reg = 0x12,
.bit = 0x02,
},
}, /* ELD3 */
{
.address = 0x28,
.reg = 0x13,
.bit = 0x02,
},
}, /* FLD1 */
{
.address = 0x2c,
.reg = 0x13,
.bit = 0x03,
},
}, /* FLD2 */
{
.address = 0x30,
.reg = 0x13,
.bit = 0x04,
},
}, /* FLD3 */
{
.address = 0x34,
.reg = 0x10,
.bit = 0x03,
}, /* BUC1 */
{
.address = 0x38,
.reg = 0x10,
.bit = 0x03,
},
.bit = 0x06,
}, /* BUC2 */
{
.address = 0x3c,
.reg = 0x10,
.bit = 0x06,
},
.bit = 0x05,
}, /* BUC3 */
{
.address = 0x40,
.reg = 0x10,
.bit = 0x05,
},
.bit = 0x04,
}, /* BUC4 */
{
.address = 0x44,
.reg = 0x10,
.bit = 0x04,
},
.bit = 0x01,
}, /* BUC5 */
{
.address = 0x48,
.reg = 0x10,
.bit = 0x01,
},
{
.address = 0x4c,
.reg = 0x10,
.bit = 0x00
},
}, /* BUC6 */
};

/* TMP0 - TMP5 are the same, all from GPADC */
@@ -864,6 +864,16 @@ void acpi_resume_power_resources(void)

mutex_unlock(&resource->resource_lock);
}

mutex_unlock(&power_resource_list_lock);
}

void acpi_turn_off_unused_power_resources(void)
{
struct acpi_power_resource *resource;

mutex_lock(&power_resource_list_lock);

list_for_each_entry_reverse(resource, &acpi_power_resource_list, list_node) {
int result, state;
@@ -251,6 +251,9 @@ static int __acpi_processor_start(struct acpi_device *device)
if (ACPI_SUCCESS(status))
return 0;

result = -ENODEV;
acpi_pss_perf_exit(pr, device);

err_power_exit:
acpi_processor_power_exit(pr);
return result;
@@ -259,11 +262,16 @@ err_power_exit:
static int acpi_processor_start(struct device *dev)
{
struct acpi_device *device = ACPI_COMPANION(dev);
int ret;

if (!device)
return -ENODEV;

return __acpi_processor_start(device);
/* Protect against concurrent CPU hotplug operations */
get_online_cpus();
ret = __acpi_processor_start(device);
put_online_cpus();
return ret;
}

static int acpi_processor_stop(struct device *dev)
@@ -62,8 +62,8 @@ struct acpi_processor_throttling_arg {
#define THROTTLING_POSTCHANGE (2)

static int acpi_processor_get_throttling(struct acpi_processor *pr);
int acpi_processor_set_throttling(struct acpi_processor *pr,
int state, bool force);
static int __acpi_processor_set_throttling(struct acpi_processor *pr,
int state, bool force, bool direct);

static int acpi_processor_update_tsd_coord(void)
{
@@ -891,7 +891,8 @@ static int acpi_processor_get_throttling_ptc(struct acpi_processor *pr)
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Invalid throttling state, reset\n"));
state = 0;
ret = acpi_processor_set_throttling(pr, state, true);
ret = __acpi_processor_set_throttling(pr, state, true,
true);
if (ret)
return ret;
}
@@ -901,36 +902,31 @@ static int acpi_processor_get_throttling_ptc(struct acpi_processor *pr)
return 0;
}

static long __acpi_processor_get_throttling(void *data)
{
struct acpi_processor *pr = data;

return pr->throttling.acpi_processor_get_throttling(pr);
}

static int acpi_processor_get_throttling(struct acpi_processor *pr)
{
cpumask_var_t saved_mask;
int ret;

if (!pr)
return -EINVAL;

if (!pr->flags.throttling)
return -ENODEV;

if (!alloc_cpumask_var(&saved_mask, GFP_KERNEL))
return -ENOMEM;

/*
* Migrate task to the cpu pointed by pr.
* This is either called from the CPU hotplug callback of
* processor_driver or via the ACPI probe function. In the latter
* case the CPU is not guaranteed to be online. Both call sites are
* protected against CPU hotplug.
*/
cpumask_copy(saved_mask, &current->cpus_allowed);
/* FIXME: use work_on_cpu() */
if (set_cpus_allowed_ptr(current, cpumask_of(pr->id))) {
/* Can't migrate to the target pr->id CPU. Exit */
free_cpumask_var(saved_mask);
if (!cpu_online(pr->id))
return -ENODEV;
}
ret = pr->throttling.acpi_processor_get_throttling(pr);
/* restore the previous state */
set_cpus_allowed_ptr(current, saved_mask);
free_cpumask_var(saved_mask);

return ret;
return work_on_cpu(pr->id, __acpi_processor_get_throttling, pr);
}

static int acpi_processor_get_fadt_info(struct acpi_processor *pr)
@@ -1080,8 +1076,15 @@ static long acpi_processor_throttling_fn(void *data)
arg->target_state, arg->force);
}

int acpi_processor_set_throttling(struct acpi_processor *pr,
int state, bool force)
static int call_on_cpu(int cpu, long (*fn)(void *), void *arg, bool direct)
{
if (direct)
return fn(arg);
return work_on_cpu(cpu, fn, arg);
}

static int __acpi_processor_set_throttling(struct acpi_processor *pr,
int state, bool force, bool direct)
{
int ret = 0;
unsigned int i;
@@ -1130,7 +1133,8 @@ int acpi_processor_set_throttling(struct acpi_processor *pr,
arg.pr = pr;
arg.target_state = state;
arg.force = force;
ret = work_on_cpu(pr->id, acpi_processor_throttling_fn, &arg);
ret = call_on_cpu(pr->id, acpi_processor_throttling_fn, &arg,
direct);
} else {
/*
* When the T-state coordination is SW_ALL or HW_ALL,
@@ -1163,8 +1167,8 @@ int acpi_processor_set_throttling(struct acpi_processor *pr,
arg.pr = match_pr;
arg.target_state = state;
arg.force = force;
ret = work_on_cpu(pr->id, acpi_processor_throttling_fn,
&arg);
ret = call_on_cpu(pr->id, acpi_processor_throttling_fn,
&arg, direct);
}
}
/*
@@ -1182,6 +1186,12 @@ int acpi_processor_set_throttling(struct acpi_processor *pr,
return ret;
}

int acpi_processor_set_throttling(struct acpi_processor *pr, int state,
bool force)
{
return __acpi_processor_set_throttling(pr, state, force, false);
}

int acpi_processor_get_throttling_info(struct acpi_processor *pr)
{
int result = 0;
@@ -474,6 +474,7 @@ static void acpi_pm_start(u32 acpi_state)
*/
static void acpi_pm_end(void)
{
acpi_turn_off_unused_power_resources();
acpi_scan_lock_release();
/*
* This is necessary in case acpi_pm_finish() is not called during a
@@ -6,6 +6,7 @@ extern struct list_head acpi_wakeup_device_list;
extern struct mutex acpi_device_lock;

extern void acpi_resume_power_resources(void);
extern void acpi_turn_off_unused_power_resources(void);

static inline acpi_status acpi_set_waking_vector(u32 wakeup_address)
{
@@ -169,6 +169,25 @@ static bool mtip_check_surprise_removal(struct pci_dev *pdev)
return false; /* device present */
}

/* we have to use runtime tag to setup command header */
static void mtip_init_cmd_header(struct request *rq)
{
struct driver_data *dd = rq->q->queuedata;
struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
u32 host_cap_64 = readl(dd->mmio + HOST_CAP) & HOST_CAP_64;

/* Point the command headers at the command tables. */
cmd->command_header = dd->port->command_list +
(sizeof(struct mtip_cmd_hdr) * rq->tag);
cmd->command_header_dma = dd->port->command_list_dma +
(sizeof(struct mtip_cmd_hdr) * rq->tag);

if (host_cap_64)
cmd->command_header->ctbau = __force_bit2int cpu_to_le32((cmd->command_dma >> 16) >> 16);

cmd->command_header->ctba = __force_bit2int cpu_to_le32(cmd->command_dma & 0xFFFFFFFF);
}

static struct mtip_cmd *mtip_get_int_command(struct driver_data *dd)
{
struct request *rq;
@@ -180,6 +199,9 @@ static struct mtip_cmd *mtip_get_int_command(struct driver_data *dd)
if (IS_ERR(rq))
return NULL;

/* Internal cmd isn't submitted via .queue_rq */
mtip_init_cmd_header(rq);

return blk_mq_rq_to_pdu(rq);
}
@@ -3811,6 +3833,8 @@ static int mtip_queue_rq(struct blk_mq_hw_ctx *hctx,
struct request *rq = bd->rq;
int ret;

mtip_init_cmd_header(rq);

if (unlikely(mtip_check_unal_depth(hctx, rq)))
return BLK_MQ_RQ_QUEUE_BUSY;
@@ -3842,7 +3866,6 @@ static int mtip_init_cmd(void *data, struct request *rq, unsigned int hctx_idx,
{
struct driver_data *dd = data;
struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
u32 host_cap_64 = readl(dd->mmio + HOST_CAP) & HOST_CAP_64;

/*
* For flush requests, request_idx starts at the end of the
@@ -3859,17 +3882,6 @@ static int mtip_init_cmd(void *data, struct request *rq, unsigned int hctx_idx,

memset(cmd->command, 0, CMD_DMA_ALLOC_SZ);

/* Point the command headers at the command tables. */
cmd->command_header = dd->port->command_list +
(sizeof(struct mtip_cmd_hdr) * request_idx);
cmd->command_header_dma = dd->port->command_list_dma +
(sizeof(struct mtip_cmd_hdr) * request_idx);

if (host_cap_64)
cmd->command_header->ctbau = __force_bit2int cpu_to_le32((cmd->command_dma >> 16) >> 16);

cmd->command_header->ctba = __force_bit2int cpu_to_le32(cmd->command_dma & 0xFFFFFFFF);

sg_init_table(cmd->sg, MTIP_MAX_SG);
return 0;
}
@@ -85,7 +85,8 @@ static int btqcomsmd_send(struct hci_dev *hdev, struct sk_buff *skb)
break;
}

kfree_skb(skb);
if (!ret)
kfree_skb(skb);

return ret;
}
@@ -113,16 +113,21 @@ static inline struct sk_buff *hci_uart_dequeue(struct hci_uart *hu)
{
struct sk_buff *skb = hu->tx_skb;

if (!skb)
skb = hu->proto->dequeue(hu);
else
if (!skb) {
if (test_bit(HCI_UART_PROTO_READY, &hu->flags))
skb = hu->proto->dequeue(hu);
} else {
hu->tx_skb = NULL;
}

return skb;
}

int hci_uart_tx_wakeup(struct hci_uart *hu)
{
if (!test_bit(HCI_UART_PROTO_READY, &hu->flags))
return 0;

if (test_and_set_bit(HCI_UART_SENDING, &hu->tx_state)) {
set_bit(HCI_UART_TX_WAKEUP, &hu->tx_state);
return 0;
@@ -936,6 +936,9 @@ static int qca_setup(struct hci_uart *hu)
if (!ret) {
set_bit(STATE_IN_BAND_SLEEP_ENABLED, &qca->flags);
qca_debugfs_init(hdev);
} else if (ret == -ENOENT) {
/* No patch/nvm-config found, run with original fw/config */
ret = 0;
}

/* Setup bdaddr */
@@ -515,7 +515,7 @@ static void panic_halt_ipmi_heartbeat(void)
msg.cmd = IPMI_WDOG_RESET_TIMER;
msg.data = NULL;
msg.data_len = 0;
atomic_add(2, &panic_done_count);
atomic_add(1, &panic_done_count);
rv = ipmi_request_supply_msgs(watchdog_user,
(struct ipmi_addr *) &addr,
0,
@@ -525,7 +525,7 @@ static void panic_halt_ipmi_heartbeat(void)
&panic_halt_heartbeat_recv_msg,
1);
if (rv)
atomic_sub(2, &panic_done_count);
atomic_sub(1, &panic_done_count);
}

static struct ipmi_smi_msg panic_halt_smi_msg = {
@@ -549,12 +549,12 @@ static void panic_halt_ipmi_set_timeout(void)
/* Wait for the messages to be free. */
while (atomic_read(&panic_done_count) != 0)
ipmi_poll_interface(watchdog_user);
atomic_add(2, &panic_done_count);
atomic_add(1, &panic_done_count);
rv = i_ipmi_set_timeout(&panic_halt_smi_msg,
&panic_halt_recv_msg,
&send_heartbeat_now);
if (rv) {
atomic_sub(2, &panic_done_count);
atomic_sub(1, &panic_done_count);
printk(KERN_WARNING PFX
"Unable to extend the watchdog timeout.");
} else {
@@ -1078,6 +1078,11 @@ int tpm_get_random(u32 chip_num, u8 *out, size_t max)
break;

recd = be32_to_cpu(tpm_cmd.params.getrandom_out.rng_data_len);
if (recd > num_bytes) {
total = -EFAULT;
break;
}

memcpy(dest, tpm_cmd.params.getrandom_out.rng_data, recd);

dest += recd;
@@ -668,6 +668,11 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
if (!rc) {
data_len = be16_to_cpup(
(__be16 *) &buf.data[TPM_HEADER_SIZE + 4]);
if (data_len < MIN_KEY_SIZE || data_len > MAX_KEY_SIZE + 1) {
rc = -EFAULT;
goto out;
}

data = &buf.data[TPM_HEADER_SIZE + 6];

memcpy(payload->key, data, data_len - 1);
@@ -675,6 +680,7 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
payload->migratable = data[data_len - 1];
}

out:
tpm_buf_destroy(&buf);
return rc;
}
@@ -103,7 +103,7 @@ CLK_OF_DECLARE(ns2_genpll_src_clk, "brcm,ns2-genpll-scr",

static const struct iproc_pll_ctrl genpll_sw = {
.flags = IPROC_CLK_AON | IPROC_CLK_PLL_SPLIT_STAT_CTRL,
.aon = AON_VAL(0x0, 2, 9, 8),
.aon = AON_VAL(0x0, 1, 11, 10),
.reset = RESET_VAL(0x4, 2, 1),
.dig_filter = DF_VAL(0x0, 9, 3, 5, 4, 2, 3),
.ndiv_int = REG_VAL(0x8, 4, 10),
@@ -40,6 +40,10 @@
#define MMCM_REG_FILTER1 0x4e
#define MMCM_REG_FILTER2 0x4f

#define MMCM_CLKOUT_NOCOUNT BIT(6)

#define MMCM_CLK_DIV_NOCOUNT BIT(12)

struct axi_clkgen {
void __iomem *base;
struct clk_hw clk_hw;
@@ -315,12 +319,27 @@ static unsigned long axi_clkgen_recalc_rate(struct clk_hw *clk_hw,
unsigned int reg;
unsigned long long tmp;

axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLKOUT0_1, &reg);
dout = (reg & 0x3f) + ((reg >> 6) & 0x3f);
axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLKOUT0_2, &reg);
if (reg & MMCM_CLKOUT_NOCOUNT) {
dout = 1;
} else {
axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLKOUT0_1, &reg);
dout = (reg & 0x3f) + ((reg >> 6) & 0x3f);
}

axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLK_DIV, &reg);
d = (reg & 0x3f) + ((reg >> 6) & 0x3f);
axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLK_FB1, &reg);
m = (reg & 0x3f) + ((reg >> 6) & 0x3f);
if (reg & MMCM_CLK_DIV_NOCOUNT)
d = 1;
else
d = (reg & 0x3f) + ((reg >> 6) & 0x3f);

axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLK_FB2, &reg);
if (reg & MMCM_CLKOUT_NOCOUNT) {
m = 1;
} else {
axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLK_FB1, &reg);
m = (reg & 0x3f) + ((reg >> 6) & 0x3f);
}

if (d == 0 || dout == 0)
return 0;
@@ -72,7 +72,7 @@ static const char * const si5351_input_names[] = {
"xtal", "clkin"
};
static const char * const si5351_pll_names[] = {
"plla", "pllb", "vxco"
"si5351_plla", "si5351_pllb", "si5351_vxco"
};
static const char * const si5351_msynth_names[] = {
"ms0", "ms1", "ms2", "ms3", "ms4", "ms5", "ms6", "ms7"
@@ -2437,6 +2437,21 @@ static int __clk_core_init(struct clk_core *core)
rate = 0;
core->rate = core->req_rate = rate;

/*
* Enable CLK_IS_CRITICAL clocks so newly added critical clocks
* don't get accidentally disabled when walking the orphan tree and
* reparenting clocks
*/
if (core->flags & CLK_IS_CRITICAL) {
unsigned long flags;

clk_core_prepare(core);

flags = clk_enable_lock();
clk_core_enable(core);
clk_enable_unlock(flags);
}

/*
* walk the list of orphan clocks and reparent any that newly finds a
* parent.
@@ -2445,10 +2460,13 @@ static int __clk_core_init(struct clk_core *core)
struct clk_core *parent = __clk_init_parent(orphan);

/*
* we could call __clk_set_parent, but that would result in a
* redundant call to the .set_rate op, if it exists
* We need to use __clk_set_parent_before() and _after() to
* to properly migrate any prepare/enable count of the orphan
* clock. This is important for CLK_IS_CRITICAL clocks, which
* are enabled during init but might not have a parent yet.
*/
if (parent) {
/* update the clk tree topology */
__clk_set_parent_before(orphan, parent);
__clk_set_parent_after(orphan, parent, NULL);
__clk_recalc_accuracies(orphan);
@@ -2467,16 +2485,6 @@ static int __clk_core_init(struct clk_core *core)
if (core->ops->init)
core->ops->init(core->hw);

if (core->flags & CLK_IS_CRITICAL) {
unsigned long flags;

clk_core_prepare(core);

flags = clk_enable_lock();
clk_core_enable(core);
clk_enable_unlock(flags);
}

kref_init(&core->ref);
out:
clk_prepare_unlock();
@@ -30,11 +30,51 @@

static DEFINE_PER_CPU(struct clk, sh_cpuclk);

struct cpufreq_target {
struct cpufreq_policy *policy;
unsigned int freq;
};

static unsigned int sh_cpufreq_get(unsigned int cpu)
{
return (clk_get_rate(&per_cpu(sh_cpuclk, cpu)) + 500) / 1000;
}

static long __sh_cpufreq_target(void *arg)
{
struct cpufreq_target *target = arg;
struct cpufreq_policy *policy = target->policy;
int cpu = policy->cpu;
struct clk *cpuclk = &per_cpu(sh_cpuclk, cpu);
struct cpufreq_freqs freqs;
struct device *dev;
long freq;

if (smp_processor_id() != cpu)
return -ENODEV;

dev = get_cpu_device(cpu);

/* Convert target_freq from kHz to Hz */
freq = clk_round_rate(cpuclk, target->freq * 1000);

if (freq < (policy->min * 1000) || freq > (policy->max * 1000))
return -EINVAL;

dev_dbg(dev, "requested frequency %u Hz\n", target->freq * 1000);

freqs.old = sh_cpufreq_get(cpu);
freqs.new = (freq + 500) / 1000;
freqs.flags = 0;

cpufreq_freq_transition_begin(target->policy, &freqs);
clk_set_rate(cpuclk, freq);
cpufreq_freq_transition_end(target->policy, &freqs, 0);

dev_dbg(dev, "set frequency %lu Hz\n", freq);
return 0;
}

/*
* Here we notify other drivers of the proposed change and the final change.
*/
@@ -42,40 +82,9 @@ static int sh_cpufreq_target(struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation)
{
unsigned int cpu = policy->cpu;
struct clk *cpuclk = &per_cpu(sh_cpuclk, cpu);
cpumask_t cpus_allowed;
struct cpufreq_freqs freqs;
struct device *dev;
long freq;
struct cpufreq_target data = { .policy = policy, .freq = target_freq };

cpus_allowed = current->cpus_allowed;
set_cpus_allowed_ptr(current, cpumask_of(cpu));

BUG_ON(smp_processor_id() != cpu);

dev = get_cpu_device(cpu);

/* Convert target_freq from kHz to Hz */
freq = clk_round_rate(cpuclk, target_freq * 1000);

if (freq < (policy->min * 1000) || freq > (policy->max * 1000))
return -EINVAL;

dev_dbg(dev, "requested frequency %u Hz\n", target_freq * 1000);

freqs.old = sh_cpufreq_get(cpu);
freqs.new = (freq + 500) / 1000;
freqs.flags = 0;

cpufreq_freq_transition_begin(policy, &freqs);
set_cpus_allowed_ptr(current, &cpus_allowed);
clk_set_rate(cpuclk, freq);
cpufreq_freq_transition_end(policy, &freqs, 0);

dev_dbg(dev, "set frequency %lu Hz\n", freq);

return 0;
return work_on_cpu(policy->cpu, __sh_cpufreq_target, &data);
}

static int sh_cpufreq_verify(struct cpufreq_policy *policy)
@@ -54,7 +54,15 @@ struct ti_am335x_xbar_map {

static inline void ti_am335x_xbar_write(void __iomem *iomem, int event, u8 val)
{
writeb_relaxed(val, iomem + event);
/*
* TPCC_EVT_MUX_60_63 register layout is different than the
* rest, in the sense, that event 63 is mapped to lowest byte
* and event 60 is mapped to highest, handle it separately.
*/
if (event >= 60 && event <= 63)
writeb_relaxed(val, iomem + (63 - event % 4));
else
writeb_relaxed(val, iomem + event);
}

static void ti_am335x_xbar_free(struct device *dev, void *route_data)
@@ -933,7 +933,8 @@ static void zynqmp_dma_chan_remove(struct zynqmp_dma_chan *chan)
if (!chan)
return;

devm_free_irq(chan->zdev->dev, chan->irq, chan);
if (chan->irq)
devm_free_irq(chan->zdev->dev, chan->irq, chan);
tasklet_kill(&chan->tasklet);
list_del(&chan->common.device_node);
clk_disable_unprepare(chan->clk_apb);
@@ -51,6 +51,8 @@
#define GROUP1_NR_IRQS 6
#define IRQ_MASK_BASE 0x4e19
#define IRQ_STATUS_BASE 0x4e0b
#define GPIO_IRQ0_MASK GENMASK(6, 0)
#define GPIO_IRQ1_MASK GENMASK(5, 0)
#define UPDATE_IRQ_TYPE BIT(0)
#define UPDATE_IRQ_MASK BIT(1)
@@ -310,7 +312,7 @@ static irqreturn_t wcove_gpio_irq_handler(int irq, void *data)
return IRQ_NONE;
}

pending = p[0] | (p[1] << 8);
pending = (p[0] & GPIO_IRQ0_MASK) | ((p[1] & GPIO_IRQ1_MASK) << 7);
if (!pending)
return IRQ_NONE;
@@ -318,7 +320,7 @@ static irqreturn_t wcove_gpio_irq_handler(int irq, void *data)
while (pending) {
/* One iteration is for all pending bits */
for_each_set_bit(gpio, (const unsigned long *)&pending,
GROUP0_NR_IRQS) {
WCOVE_GPIO_NUM) {
offset = (gpio > GROUP0_NR_IRQS) ? 1 : 0;
mask = (offset == 1) ? BIT(gpio - GROUP0_NR_IRQS) :
BIT(gpio);
@@ -334,7 +336,7 @@ static irqreturn_t wcove_gpio_irq_handler(int irq, void *data)
break;
}

pending = p[0] | (p[1] << 8);
pending = (p[0] & GPIO_IRQ0_MASK) | ((p[1] & GPIO_IRQ1_MASK) << 7);
}

return IRQ_HANDLED;
@@ -2237,7 +2237,7 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
struct amdgpu_ring *ring = adev->rings[i];

if (!ring)
if (!ring || !ring->sched.thread)
continue;
kthread_park(ring->sched.thread);
amd_sched_hw_job_reset(&ring->sched);
@@ -2326,7 +2326,8 @@ retry:
}
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
struct amdgpu_ring *ring = adev->rings[i];
if (!ring)

if (!ring || !ring->sched.thread)
continue;

amd_sched_job_recovery(&ring->sched);
@@ -2335,7 +2336,7 @@ retry:
} else {
dev_err(adev->dev, "asic resume failed (%d).\n", r);
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
if (adev->rings[i]) {
if (adev->rings[i] && adev->rings[i]->sched.thread) {
kthread_unpark(adev->rings[i]->sched.thread);
}
}
@@ -91,13 +91,16 @@ static struct page **get_pages(struct drm_gem_object *obj)
return p;
}

msm_obj->pages = p;

msm_obj->sgt = drm_prime_pages_to_sg(p, npages);
if (IS_ERR(msm_obj->sgt)) {
dev_err(dev->dev, "failed to allocate sgt\n");
return ERR_CAST(msm_obj->sgt);
}
void *ptr = ERR_CAST(msm_obj->sgt);

msm_obj->pages = p;
dev_err(dev->dev, "failed to allocate sgt\n");
msm_obj->sgt = NULL;
return ptr;
}

/* For non-cached buffers, ensure the new pages are clean
* because display controller, GPU, etc. are not coherent:
@@ -121,7 +124,10 @@ static void put_pages(struct drm_gem_object *obj)
if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,
msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
sg_free_table(msm_obj->sgt);

if (msm_obj->sgt)
sg_free_table(msm_obj->sgt);

kfree(msm_obj->sgt);

if (use_pages(obj))
@@ -106,7 +106,7 @@ nouveau_display_scanoutpos_head(struct drm_crtc *crtc, int *vpos, int *hpos,
};
struct nouveau_display *disp = nouveau_display(crtc->dev);
struct drm_vblank_crtc *vblank = &crtc->dev->vblank[drm_crtc_index(crtc)];
int ret, retry = 1;
int ret, retry = 20;

do {
ret = nvif_mthd(&disp->disp, 0, &args, sizeof(args));
@@ -456,6 +456,8 @@ static int td028ttec1_panel_remove(struct spi_device *spi)
}

static const struct of_device_id td028ttec1_of_match[] = {
{ .compatible = "omapdss,tpo,td028ttec1", },
/* keep to not break older DTB */
{ .compatible = "omapdss,toppoly,td028ttec1", },
{},
};
@@ -475,6 +477,7 @@ static struct spi_driver td028ttec1_spi_driver = {

module_spi_driver(td028ttec1_spi_driver);

MODULE_ALIAS("spi:tpo,td028ttec1");
MODULE_ALIAS("spi:toppoly,td028ttec1");
MODULE_AUTHOR("H. Nikolaus Schaller <hns@goldelico.com>");
MODULE_DESCRIPTION("Toppoly TD028TTEC1 panel driver");
@@ -298,7 +298,12 @@ static int dmm_txn_commit(struct dmm_txn *txn, bool wait)
msecs_to_jiffies(100))) {
dev_err(dmm->dev, "timed out waiting for done\n");
ret = -ETIMEDOUT;
goto cleanup;
}

/* Check the engine status before continue */
ret = wait_status(engine, DMM_PATSTATUS_READY |
DMM_PATSTATUS_VALID | DMM_PATSTATUS_DONE);
}

cleanup:
@@ -124,7 +124,7 @@ static inline void tilcdc_write64(struct drm_device *dev, u32 reg, u64 data)
struct tilcdc_drm_private *priv = dev->dev_private;
volatile void __iomem *addr = priv->mmio + reg;

#ifdef iowrite64
#if defined(iowrite64) && !defined(iowrite64_is_nonatomic)
iowrite64(data, addr);
#else
__iowmb();
@@ -989,7 +989,7 @@ static int ssip_pn_xmit(struct sk_buff *skb, struct net_device *dev)
goto drop;
/* Pad to 32-bits - FIXME: Revisit*/
if ((skb->len & 3) && skb_pad(skb, 4 - (skb->len & 3)))
goto drop;
goto inc_dropped;

/*
* Modem sends Phonet messages over SSI with its own endianess...
@@ -1041,8 +1041,9 @@ static int ssip_pn_xmit(struct sk_buff *skb, struct net_device *dev)
drop2:
hsi_free_msg(msg);
drop:
dev->stats.tx_dropped++;
dev_kfree_skb(skb);
inc_dropped:
dev->stats.tx_dropped++;

return 0;
}
@@ -18,6 +18,9 @@
#define ACPI_SMBUS_HC_CLASS "smbus"
#define ACPI_SMBUS_HC_DEVICE_NAME "cmi"

/* SMBUS HID definition as supported by Microsoft Windows */
#define ACPI_SMBUS_MS_HID "SMB0001"

ACPI_MODULE_NAME("smbus_cmi");

struct smbus_methods_t {
@@ -51,6 +54,7 @@ static const struct smbus_methods_t ibm_smbus_methods = {
static const struct acpi_device_id acpi_smbus_cmi_ids[] = {
{"SMBUS01", (kernel_ulong_t)&smbus_methods},
{ACPI_SMBUS_IBM_HID, (kernel_ulong_t)&ibm_smbus_methods},
{ACPI_SMBUS_MS_HID, (kernel_ulong_t)&smbus_methods},
{"", 0}
};
MODULE_DEVICE_TABLE(acpi, acpi_smbus_cmi_ids);
@@ -827,6 +827,8 @@ static const struct iio_trigger_ops st_accel_trigger_ops = {
int st_accel_common_probe(struct iio_dev *indio_dev)
{
struct st_sensor_data *adata = iio_priv(indio_dev);
struct st_sensors_platform_data *pdata =
(struct st_sensors_platform_data *)adata->dev->platform_data;
int irq = adata->get_irq_data_ready(indio_dev);
int err;
@@ -853,9 +855,8 @@ int st_accel_common_probe(struct iio_dev *indio_dev)
&adata->sensor_settings->fs.fs_avl[0];
adata->odr = adata->sensor_settings->odr.odr_avl[0].hz;

if (!adata->dev->platform_data)
adata->dev->platform_data =
(struct st_sensors_platform_data *)&default_accel_pdata;
if (!pdata)
pdata = (struct st_sensors_platform_data *)&default_accel_pdata;

err = st_sensors_init_sensor(indio_dev, adata->dev->platform_data);
if (err < 0)
@@ -215,7 +215,7 @@ int hid_sensor_write_samp_freq_value(struct hid_sensor_common *st,
ret = sensor_hub_set_feature(st->hsdev, st->poll.report_id,
st->poll.index, sizeof(value), &value);
if (ret < 0 || value < 0)
ret = -EINVAL;
return -EINVAL;

ret = sensor_hub_get_feature(st->hsdev,
st->poll.report_id,
@@ -265,7 +265,7 @@ int hid_sensor_write_raw_hyst_value(struct hid_sensor_common *st,
st->sensitivity.index, sizeof(value),
&value);
if (ret < 0 || value < 0)
ret = -EINVAL;
return -EINVAL;

ret = sensor_hub_get_feature(st->hsdev,
st->sensitivity.report_id,
@@ -638,6 +638,8 @@ static const struct iio_trigger_ops st_press_trigger_ops = {
int st_press_common_probe(struct iio_dev *indio_dev)
{
struct st_sensor_data *press_data = iio_priv(indio_dev);
struct st_sensors_platform_data *pdata =
(struct st_sensors_platform_data *)press_data->dev->platform_data;
int irq = press_data->get_irq_data_ready(indio_dev);
int err;
@@ -673,10 +675,8 @@ int st_press_common_probe(struct iio_dev *indio_dev)
press_data->odr = press_data->sensor_settings->odr.odr_avl[0].hz;

/* Some devices don't support a data ready pin. */
if (!press_data->dev->platform_data &&
press_data->sensor_settings->drdy_irq.addr)
press_data->dev->platform_data =
(struct st_sensors_platform_data *)&default_press_pdata;
if (!pdata && press_data->sensor_settings->drdy_irq.addr)
pdata = (struct st_sensors_platform_data *)&default_press_pdata;

err = st_sensors_init_sensor(indio_dev, press_data->dev->platform_data);
if (err < 0)
@@ -4039,6 +4039,9 @@ int rdma_join_multicast(struct rdma_cm_id *id, struct sockaddr *addr,
struct cma_multicast *mc;
int ret;

if (!id->device)
return -EINVAL;

id_priv = container_of(id, struct rdma_id_private, id);
if (!cma_comp(id_priv, RDMA_CM_ADDR_BOUND) &&
!cma_comp(id_priv, RDMA_CM_ADDR_RESOLVED))
@@ -4336,7 +4339,7 @@ static int cma_get_id_stats(struct sk_buff *skb, struct netlink_callback *cb)
RDMA_NL_RDMA_CM_ATTR_SRC_ADDR))
goto out;
if (ibnl_put_attr(skb, nlh,
rdma_addr_size(cma_src_addr(id_priv)),
rdma_addr_size(cma_dst_addr(id_priv)),
cma_dst_addr(id_priv),
RDMA_NL_RDMA_CM_ATTR_DST_ADDR))
goto out;
@@ -664,6 +664,7 @@ int iwpm_send_mapinfo(u8 nl_client, int iwpm_pid)
}
skb_num++;
spin_lock_irqsave(&iwpm_mapinfo_lock, flags);
ret = -EINVAL;
for (i = 0; i < IWPM_MAPINFO_HASH_SIZE; i++) {
hlist_for_each_entry(map_info, &iwpm_hash_bucket[i],
hlist_node) {
@@ -1330,7 +1330,7 @@ static ssize_t ucma_process_join(struct ucma_file *file,
return -ENOSPC;

addr = (struct sockaddr *) &cmd->addr;
if (!cmd->addr_size || (cmd->addr_size != rdma_addr_size(addr)))
if (cmd->addr_size != rdma_addr_size(addr))
return -EINVAL;

if (cmd->join_flags == RDMA_MC_JOIN_FLAG_FULLMEMBER)
@@ -1398,6 +1398,9 @@ static ssize_t ucma_join_ip_multicast(struct ucma_file *file,
join_cmd.uid = cmd.uid;
join_cmd.id = cmd.id;
join_cmd.addr_size = rdma_addr_size((struct sockaddr *) &cmd.addr);
if (!join_cmd.addr_size)
return -EINVAL;

join_cmd.join_flags = RDMA_MC_JOIN_FLAG_FULLMEMBER;
memcpy(&join_cmd.addr, &cmd.addr, join_cmd.addr_size);
@@ -1413,6 +1416,9 @@ static ssize_t ucma_join_multicast(struct ucma_file *file,
if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT;

if (!rdma_addr_size((struct sockaddr *)&cmd.addr))
return -EINVAL;

return ucma_process_join(file, &cmd, out_len);
}
@@ -357,7 +357,7 @@ int ib_umem_copy_from(void *dst, struct ib_umem *umem, size_t offset,
return -EINVAL;
}

ret = sg_pcopy_to_buffer(umem->sg_head.sgl, umem->nmap, dst, length,
ret = sg_pcopy_to_buffer(umem->sg_head.sgl, umem->npages, dst, length,
offset + ib_umem_offset(umem));

if (ret < 0)
@@ -2491,9 +2491,13 @@ ssize_t ib_uverbs_destroy_qp(struct ib_uverbs_file *file,

static void *alloc_wr(size_t wr_size, __u32 num_sge)
{
if (num_sge >= (U32_MAX - ALIGN(wr_size, sizeof (struct ib_sge))) /
sizeof (struct ib_sge))
return NULL;

return kmalloc(ALIGN(wr_size, sizeof (struct ib_sge)) +
num_sge * sizeof (struct ib_sge), GFP_KERNEL);
};
}

ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
struct ib_device *ib_dev,
@@ -2720,6 +2724,13 @@ static struct ib_recv_wr *ib_uverbs_unmarshall_recv(const char __user *buf,
goto err;
}

if (user_wr->num_sge >=
(U32_MAX - ALIGN(sizeof *next, sizeof (struct ib_sge))) /
sizeof (struct ib_sge)) {
ret = -EINVAL;
goto err;
}

next = kmalloc(ALIGN(sizeof *next, sizeof (struct ib_sge)) +
user_wr->num_sge * sizeof (struct ib_sge),
GFP_KERNEL);
@@ -6379,18 +6379,17 @@ static void lcb_shutdown(struct hfi1_devdata *dd, int abort)
*
* The expectation is that the caller of this routine would have taken
* care of properly transitioning the link into the correct state.
* NOTE: the caller needs to acquire the dd->dc8051_lock lock
* before calling this function.
*/
static void dc_shutdown(struct hfi1_devdata *dd)
static void _dc_shutdown(struct hfi1_devdata *dd)
{
unsigned long flags;
lockdep_assert_held(&dd->dc8051_lock);

spin_lock_irqsave(&dd->dc8051_lock, flags);
if (dd->dc_shutdown) {
spin_unlock_irqrestore(&dd->dc8051_lock, flags);
if (dd->dc_shutdown)
return;
}

dd->dc_shutdown = 1;
spin_unlock_irqrestore(&dd->dc8051_lock, flags);
/* Shutdown the LCB */
lcb_shutdown(dd, 1);
/*
@@ -6401,35 +6400,45 @@ static void dc_shutdown(struct hfi1_devdata *dd)
write_csr(dd, DC_DC8051_CFG_RST, 0x1);
}

static void dc_shutdown(struct hfi1_devdata *dd)
{
mutex_lock(&dd->dc8051_lock);
_dc_shutdown(dd);
mutex_unlock(&dd->dc8051_lock);
}

/*
* Calling this after the DC has been brought out of reset should not
* do any damage.
* NOTE: the caller needs to acquire the dd->dc8051_lock lock
* before calling this function.
*/
static void dc_start(struct hfi1_devdata *dd)
static void _dc_start(struct hfi1_devdata *dd)
{
unsigned long flags;
int ret;
lockdep_assert_held(&dd->dc8051_lock);

spin_lock_irqsave(&dd->dc8051_lock, flags);
if (!dd->dc_shutdown)
goto done;
spin_unlock_irqrestore(&dd->dc8051_lock, flags);
return;

/* Take the 8051 out of reset */
write_csr(dd, DC_DC8051_CFG_RST, 0ull);
/* Wait until 8051 is ready */
ret = wait_fm_ready(dd, TIMEOUT_8051_START);
if (ret) {
if (wait_fm_ready(dd, TIMEOUT_8051_START))
dd_dev_err(dd, "%s: timeout starting 8051 firmware\n",
__func__);
}

/* Take away reset for LCB and RX FPE (set in lcb_shutdown). */
write_csr(dd, DCC_CFG_RESET, 0x10);
/* lcb_shutdown() with abort=1 does not restore these */
write_csr(dd, DC_LCB_ERR_EN, dd->lcb_err_en);
spin_lock_irqsave(&dd->dc8051_lock, flags);
dd->dc_shutdown = 0;
done:
spin_unlock_irqrestore(&dd->dc8051_lock, flags);
}

static void dc_start(struct hfi1_devdata *dd)
{
mutex_lock(&dd->dc8051_lock);
_dc_start(dd);
mutex_unlock(&dd->dc8051_lock);
}

/*
@@ -8418,16 +8427,11 @@ static int do_8051_command(
{
u64 reg, completed;
int return_code;
unsigned long flags;
unsigned long timeout;

hfi1_cdbg(DC8051, "type %d, data 0x%012llx", type, in_data);

/*
* Alternative to holding the lock for a long time:
* - keep busy wait - have other users bounce off
*/
spin_lock_irqsave(&dd->dc8051_lock, flags);
mutex_lock(&dd->dc8051_lock);

/* We can't send any commands to the 8051 if it's in reset */
if (dd->dc_shutdown) {
@@ -8453,10 +8457,8 @@ static int do_8051_command(
return_code = -ENXIO;
goto fail;
}
spin_unlock_irqrestore(&dd->dc8051_lock, flags);
dc_shutdown(dd);
dc_start(dd);
spin_lock_irqsave(&dd->dc8051_lock, flags);
_dc_shutdown(dd);
_dc_start(dd);
}

/*
@@ -8534,8 +8536,7 @@ static int do_8051_command(
write_csr(dd, DC_DC8051_CFG_HOST_CMD_0, 0);

fail:
spin_unlock_irqrestore(&dd->dc8051_lock, flags);

mutex_unlock(&dd->dc8051_lock);
return return_code;
}

@@ -11849,6 +11850,10 @@ static void free_cntrs(struct hfi1_devdata *dd)
dd->scntrs = NULL;
kfree(dd->cntrnames);
dd->cntrnames = NULL;
if (dd->update_cntr_wq) {
destroy_workqueue(dd->update_cntr_wq);
dd->update_cntr_wq = NULL;
}
}

static u64 read_dev_port_cntr(struct hfi1_devdata *dd, struct cntr_entry *entry,
@@ -12004,7 +12009,7 @@ u64 write_port_cntr(struct hfi1_pportdata *ppd, int index, int vl, u64 data)
return write_dev_port_cntr(ppd->dd, entry, sval, ppd, vl, data);
}

static void update_synth_timer(unsigned long opaque)
static void do_update_synth_timer(struct work_struct *work)
{
u64 cur_tx;
u64 cur_rx;
@@ -12013,8 +12018,8 @@ static void update_synth_timer(unsigned long opaque)
int i, j, vl;
struct hfi1_pportdata *ppd;
struct cntr_entry *entry;

struct hfi1_devdata *dd = (struct hfi1_devdata *)opaque;
struct hfi1_devdata *dd = container_of(work, struct hfi1_devdata,
update_cntr_work);

/*
* Rather than keep beating on the CSRs pick a minimal set that we can
@@ -12097,7 +12102,13 @@ static void update_synth_timer(unsigned long opaque)
} else {
hfi1_cdbg(CNTR, "[%d] No update necessary", dd->unit);
}
}

static void update_synth_timer(unsigned long opaque)
{
struct hfi1_devdata *dd = (struct hfi1_devdata *)opaque;

queue_work(dd->update_cntr_wq, &dd->update_cntr_work);
mod_timer(&dd->synth_stats_timer, jiffies + HZ * SYNTH_CNT_TIME);
}

@@ -12333,6 +12344,13 @@ static int init_cntrs(struct hfi1_devdata *dd)
if (init_cpu_counters(dd))
goto bail;

dd->update_cntr_wq = alloc_ordered_workqueue("hfi1_update_cntr_%d",
WQ_MEM_RECLAIM, dd->unit);
if (!dd->update_cntr_wq)
goto bail;

INIT_WORK(&dd->update_cntr_work, do_update_synth_timer);

mod_timer(&dd->synth_stats_timer, jiffies + HZ * SYNTH_CNT_TIME);
return 0;
bail:

@@ -475,7 +475,7 @@ struct rvt_sge_state;
#define HFI1_PART_ENFORCE_OUT 0x2

/* how often we check for synthetic counter wrap around */
#define SYNTH_CNT_TIME 2
#define SYNTH_CNT_TIME 3

/* Counter flags */
#define CNTR_NORMAL 0x0 /* Normal counters, just read register */
@@ -929,8 +929,9 @@ struct hfi1_devdata {
spinlock_t rcvctrl_lock; /* protect changes to RcvCtrl */
/* around rcd and (user ctxts) ctxt_cnt use (intr vs free) */
spinlock_t uctxt_lock; /* rcd and user context changes */
/* exclusive access to 8051 */
spinlock_t dc8051_lock;
struct mutex dc8051_lock; /* exclusive access to 8051 */
struct workqueue_struct *update_cntr_wq;
struct work_struct update_cntr_work;
/* exclusive access to 8051 memory */
spinlock_t dc8051_memlock;
int dc8051_timed_out; /* remember if the 8051 timed out */

@@ -1078,11 +1078,11 @@ struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev, size_t extra)
spin_lock_init(&dd->uctxt_lock);
spin_lock_init(&dd->hfi1_diag_trans_lock);
spin_lock_init(&dd->sc_init_lock);
spin_lock_init(&dd->dc8051_lock);
spin_lock_init(&dd->dc8051_memlock);
seqlock_init(&dd->sc2vl_lock);
spin_lock_init(&dd->sde_map_lock);
spin_lock_init(&dd->pio_map_lock);
mutex_init(&dd->dc8051_lock);
init_waitqueue_head(&dd->event_queue);

dd->int_counter = alloc_percpu(u64);
@@ -1168,7 +1168,7 @@ static void mlx4_ib_disassociate_ucontext(struct ib_ucontext *ibcontext)
/* need to protect from a race on closing the vma as part of
* mlx4_ib_vma_close().
*/
down_read(&owning_mm->mmap_sem);
down_write(&owning_mm->mmap_sem);
for (i = 0; i < HW_BAR_COUNT; i++) {
vma = context->hw_bar_info[i].vma;
if (!vma)
@@ -1182,11 +1182,13 @@ static void mlx4_ib_disassociate_ucontext(struct ib_ucontext *ibcontext)
BUG_ON(1);
}

context->hw_bar_info[i].vma->vm_flags &=
~(VM_SHARED | VM_MAYSHARE);
/* context going to be destroyed, should not access ops any more */
context->hw_bar_info[i].vma->vm_ops = NULL;
}

up_read(&owning_mm->mmap_sem);
up_write(&owning_mm->mmap_sem);
mmput(owning_mm);
put_task_struct(owning_process);
}

@@ -172,6 +172,8 @@ static void handle_responder(struct ib_wc *wc, struct mlx5_cqe64 *cqe,
struct mlx5_ib_srq *srq;
struct mlx5_ib_wq *wq;
u16 wqe_ctr;
u8 roce_packet_type;
bool vlan_present;
u8 g;

if (qp->ibqp.srq || qp->ibqp.xrcd) {
@@ -223,7 +225,6 @@ static void handle_responder(struct ib_wc *wc, struct mlx5_cqe64 *cqe,
break;
}
wc->slid = be16_to_cpu(cqe->slid);
wc->sl = (be32_to_cpu(cqe->flags_rqpn) >> 24) & 0xf;
wc->src_qp = be32_to_cpu(cqe->flags_rqpn) & 0xffffff;
wc->dlid_path_bits = cqe->ml_path;
g = (be32_to_cpu(cqe->flags_rqpn) >> 28) & 3;
@@ -237,10 +238,22 @@ static void handle_responder(struct ib_wc *wc, struct mlx5_cqe64 *cqe,
wc->pkey_index = 0;
}

if (ll != IB_LINK_LAYER_ETHERNET)
if (ll != IB_LINK_LAYER_ETHERNET) {
wc->sl = (be32_to_cpu(cqe->flags_rqpn) >> 24) & 0xf;
return;
}

switch (wc->sl & 0x3) {
vlan_present = cqe->l4_l3_hdr_type & 0x1;
roce_packet_type = (be32_to_cpu(cqe->flags_rqpn) >> 24) & 0x3;
if (vlan_present) {
wc->vlan_id = (be16_to_cpu(cqe->vlan_info)) & 0xfff;
wc->sl = (be16_to_cpu(cqe->vlan_info) >> 13) & 0x7;
wc->wc_flags |= IB_WC_WITH_VLAN;
} else {
wc->sl = 0;
}

switch (roce_packet_type) {
case MLX5_CQE_ROCE_L3_HEADER_TYPE_GRH:
wc->network_hdr_type = RDMA_NETWORK_IB;
break;

@@ -1313,7 +1313,7 @@ static void mlx5_ib_disassociate_ucontext(struct ib_ucontext *ibcontext)
/* need to protect from a race on closing the vma as part of
* mlx5_ib_vma_close.
*/
down_read(&owning_mm->mmap_sem);
down_write(&owning_mm->mmap_sem);
list_for_each_entry_safe(vma_private, n, &context->vma_private_list,
list) {
vma = vma_private->vma;
@@ -1323,11 +1323,12 @@ static void mlx5_ib_disassociate_ucontext(struct ib_ucontext *ibcontext)
/* context going to be destroyed, should
* not access ops any more.
*/
vma->vm_flags &= ~(VM_SHARED | VM_MAYSHARE);
vma->vm_ops = NULL;
list_del(&vma_private->list);
kfree(vma_private);
}
up_read(&owning_mm->mmap_sem);
up_write(&owning_mm->mmap_sem);
mmput(owning_mm);
put_task_struct(owning_process);
}
@@ -1130,7 +1130,7 @@ static void destroy_raw_packet_qp_sq(struct mlx5_ib_dev *dev,
ib_umem_release(sq->ubuffer.umem);
}

static int get_rq_pas_size(void *qpc)
static size_t get_rq_pas_size(void *qpc)
{
u32 log_page_size = MLX5_GET(qpc, qpc, log_page_size) + 12;
u32 log_rq_stride = MLX5_GET(qpc, qpc, log_rq_stride);
@@ -1146,7 +1146,8 @@ static int get_rq_pas_size(void *qpc)
}

static int create_raw_packet_qp_rq(struct mlx5_ib_dev *dev,
struct mlx5_ib_rq *rq, void *qpin)
struct mlx5_ib_rq *rq, void *qpin,
size_t qpinlen)
{
struct mlx5_ib_qp *mqp = rq->base.container_mibqp;
__be64 *pas;
@@ -1155,9 +1156,12 @@ static int create_raw_packet_qp_rq(struct mlx5_ib_dev *dev,
void *rqc;
void *wq;
void *qpc = MLX5_ADDR_OF(create_qp_in, qpin, qpc);
int inlen;
size_t rq_pas_size = get_rq_pas_size(qpc);
size_t inlen;
int err;
u32 rq_pas_size = get_rq_pas_size(qpc);

if (qpinlen < rq_pas_size + MLX5_BYTE_OFF(create_qp_in, pas))
return -EINVAL;

inlen = MLX5_ST_SZ_BYTES(create_rq_in) + rq_pas_size;
in = mlx5_vzalloc(inlen);
@@ -1235,7 +1239,7 @@ static void destroy_raw_packet_qp_tir(struct mlx5_ib_dev *dev,
}

static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
u32 *in,
u32 *in, size_t inlen,
struct ib_pd *pd)
{
struct mlx5_ib_raw_packet_qp *raw_packet_qp = &qp->raw_packet_qp;
@@ -1262,7 +1266,7 @@ static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
if (qp->rq.wqe_cnt) {
rq->base.container_mibqp = qp;

err = create_raw_packet_qp_rq(dev, rq, in);
err = create_raw_packet_qp_rq(dev, rq, in, inlen);
if (err)
goto err_destroy_sq;

@@ -1753,10 +1757,15 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
qp->flags |= MLX5_IB_QP_LSO;
}

if (inlen < 0) {
err = -EINVAL;
goto err;
}

if (init_attr->qp_type == IB_QPT_RAW_PACKET) {
qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd.sq_buf_addr;
raw_packet_qp_copy_info(qp, &qp->raw_packet_qp);
err = create_raw_packet_qp(dev, qp, in, pd);
err = create_raw_packet_qp(dev, qp, in, inlen, pd);
} else {
err = mlx5_core_create_qp(dev->mdev, &base->mqp, in, inlen);
}
@@ -1796,6 +1805,7 @@ err_create:
else if (qp->create_type == MLX5_QP_KERNEL)
destroy_qp_kernel(dev, qp);

err:
kvfree(in);
return err;
}

@@ -243,8 +243,8 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
{
struct mlx5_ib_dev *dev = to_mdev(pd->device);
struct mlx5_ib_srq *srq;
int desc_size;
int buf_size;
size_t desc_size;
size_t buf_size;
int err;
struct mlx5_srq_attr in = {0};
__u32 max_srq_wqes = 1 << MLX5_CAP_GEN(dev->mdev, log_max_srq_sz);
@@ -268,15 +268,18 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,

desc_size = sizeof(struct mlx5_wqe_srq_next_seg) +
srq->msrq.max_gs * sizeof(struct mlx5_wqe_data_seg);
if (desc_size == 0 || srq->msrq.max_gs > desc_size)
return ERR_PTR(-EINVAL);
desc_size = roundup_pow_of_two(desc_size);
desc_size = max_t(int, 32, desc_size);
desc_size = max_t(size_t, 32, desc_size);
if (desc_size < sizeof(struct mlx5_wqe_srq_next_seg))
return ERR_PTR(-EINVAL);
srq->msrq.max_avail_gather = (desc_size - sizeof(struct mlx5_wqe_srq_next_seg)) /
sizeof(struct mlx5_wqe_data_seg);
srq->msrq.wqe_shift = ilog2(desc_size);
buf_size = srq->msrq.max * desc_size;
mlx5_ib_dbg(dev, "desc_size 0x%x, req wr 0x%x, srq size 0x%x, max_gs 0x%x, max_avail_gather 0x%x\n",
desc_size, init_attr->attr.max_wr, srq->msrq.max, srq->msrq.max_gs,
srq->msrq.max_avail_gather);
if (buf_size < desc_size)
return ERR_PTR(-EINVAL);
in.type = init_attr->srq_type;

if (pd->uobject)
@@ -836,7 +836,7 @@ void ocrdma_add_port_stats(struct ocrdma_dev *dev)

dev->reset_stats.type = OCRDMA_RESET_STATS;
dev->reset_stats.dev = dev;
if (!debugfs_create_file("reset_stats", S_IRUSR, dev->dir,
if (!debugfs_create_file("reset_stats", 0200, dev->dir,
&dev->reset_stats, &ocrdma_dbg_ops))
goto err;

@@ -119,7 +119,7 @@ struct ib_ah *rvt_create_ah(struct ib_pd *pd,

spin_lock_irqsave(&dev->n_ahs_lock, flags);
if (dev->n_ahs_allocated == dev->dparms.props.max_ah) {
spin_unlock(&dev->n_ahs_lock);
spin_unlock_irqrestore(&dev->n_ahs_lock, flags);
kfree(ah);
return ERR_PTR(-ENOMEM);
}

@@ -471,8 +471,6 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
state = RESPST_ERR_LENGTH;
goto err;
}

qp->resp.resid = mtu;
} else {
if (pktlen != resid) {
state = RESPST_ERR_LENGTH;

@@ -974,6 +974,19 @@ static inline int update_parent_pkey(struct ipoib_dev_priv *priv)
*/
priv->dev->broadcast[8] = priv->pkey >> 8;
priv->dev->broadcast[9] = priv->pkey & 0xff;

/*
* Update the broadcast address in the priv->broadcast object,
* in case it already exists, otherwise no one will do that.
*/
if (priv->broadcast) {
spin_lock_irq(&priv->lock);
memcpy(priv->broadcast->mcmember.mgid.raw,
priv->dev->broadcast + 4,
sizeof(union ib_gid));
spin_unlock_irq(&priv->lock);
}

return 0;
}

@@ -799,6 +799,22 @@ static void path_rec_completion(int status,
spin_lock_irqsave(&priv->lock, flags);

if (!IS_ERR_OR_NULL(ah)) {
/*
* pathrec.dgid is used as the database key from the LLADDR,
* it must remain unchanged even if the SA returns a different
* GID to use in the AH.
*/
if (memcmp(pathrec->dgid.raw, path->pathrec.dgid.raw,
sizeof(union ib_gid))) {
ipoib_dbg(
priv,
"%s got PathRec for gid %pI6 while asked for %pI6\n",
dev->name, pathrec->dgid.raw,
path->pathrec.dgid.raw);
memcpy(pathrec->dgid.raw, path->pathrec.dgid.raw,
sizeof(union ib_gid));
}

path->pathrec = *pathrec;

old_ah = path->ah;

@@ -487,6 +487,9 @@ static int ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
!test_bit(IPOIB_FLAG_OPER_UP, &priv->flags))
return -EINVAL;

init_completion(&mcast->done);
set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);

ipoib_dbg_mcast(priv, "joining MGID %pI6\n", mcast->mcmember.mgid.raw);

rec.mgid = mcast->mcmember.mgid;
@@ -645,8 +648,6 @@ void ipoib_mcast_join_task(struct work_struct *work)
if (mcast->backoff == 1 ||
time_after_eq(jiffies, mcast->delay_until)) {
/* Found the next unjoined group */
init_completion(&mcast->done);
set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
if (ipoib_mcast_join(dev, mcast)) {
spin_unlock_irq(&priv->lock);
return;
@@ -666,11 +667,9 @@ out:
queue_delayed_work(priv->wq, &priv->mcast_task,
delay_until - jiffies);
}
if (mcast) {
init_completion(&mcast->done);
set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
if (mcast)
ipoib_mcast_join(dev, mcast);
}

spin_unlock_irq(&priv->lock);
}
@@ -2098,6 +2098,9 @@ isert_rdma_rw_ctx_post(struct isert_cmd *cmd, struct isert_conn *conn,
u32 rkey, offset;
int ret;

if (cmd->ctx_init_done)
goto rdma_ctx_post;

if (dir == DMA_FROM_DEVICE) {
addr = cmd->write_va;
rkey = cmd->write_stag;
@@ -2125,11 +2128,15 @@ isert_rdma_rw_ctx_post(struct isert_cmd *cmd, struct isert_conn *conn,
se_cmd->t_data_sg, se_cmd->t_data_nents,
offset, addr, rkey, dir);
}

if (ret < 0) {
isert_err("Cmd: %p failed to prepare RDMA res\n", cmd);
return ret;
}

cmd->ctx_init_done = true;

rdma_ctx_post:
ret = rdma_rw_ctx_post(&cmd->rw, conn->qp, port_num, cqe, chain_wr);
if (ret < 0)
isert_err("Cmd: %p failed to post RDMA res\n", cmd);

@@ -124,6 +124,7 @@ struct isert_cmd {
struct rdma_rw_ctx rw;
struct work_struct comp_work;
struct scatterlist sg;
bool ctx_init_done;
};

static inline struct isert_cmd *tx_desc_to_cmd(struct iser_tx_desc *desc)

@@ -70,7 +70,7 @@ static int twl4030_pwrbutton_probe(struct platform_device *pdev)
pwr->phys = "twl4030_pwrbutton/input0";
pwr->dev.parent = &pdev->dev;

err = devm_request_threaded_irq(&pwr->dev, irq, NULL, powerbutton_irq,
err = devm_request_threaded_irq(&pdev->dev, irq, NULL, powerbutton_irq,
IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING |
IRQF_ONESHOT,
"twl4030_pwrbutton", pwr);

@@ -152,7 +152,7 @@ static int __maybe_unused ar1021_i2c_resume(struct device *dev)
static SIMPLE_DEV_PM_OPS(ar1021_i2c_pm, ar1021_i2c_suspend, ar1021_i2c_resume);

static const struct i2c_device_id ar1021_i2c_id[] = {
{ "MICROCHIP_AR1021_I2C", 0 },
{ "ar1021", 0 },
{ },
};
MODULE_DEVICE_TABLE(i2c, ar1021_i2c_id);
@@ -127,6 +127,7 @@ int intel_svm_enable_prq(struct intel_iommu *iommu)
pr_err("IOMMU: %s: Failed to request IRQ for page request queue\n",
iommu->name);
dmar_free_hwirq(irq);
iommu->pr_irq = 0;
goto err;
}
dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL);
@@ -142,9 +143,11 @@ int intel_svm_finish_prq(struct intel_iommu *iommu)
dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL);
dmar_writeq(iommu->reg + DMAR_PQA_REG, 0ULL);

free_irq(iommu->pr_irq, iommu);
dmar_free_hwirq(iommu->pr_irq);
iommu->pr_irq = 0;
if (iommu->pr_irq) {
free_irq(iommu->pr_irq, iommu);
dmar_free_hwirq(iommu->pr_irq);
iommu->pr_irq = 0;
}

free_pages((unsigned long)iommu->prq, PRQ_ORDER);
iommu->prq = NULL;

@@ -1299,6 +1299,7 @@ static int __init omap_iommu_init(void)
const unsigned long flags = SLAB_HWCACHE_ALIGN;
size_t align = 1 << 10; /* L2 pagetable alignement */
struct device_node *np;
int ret;

np = of_find_matching_node(NULL, omap_iommu_of_match);
if (!np)
@@ -1312,11 +1313,25 @@ static int __init omap_iommu_init(void)
return -ENOMEM;
iopte_cachep = p;

bus_set_iommu(&platform_bus_type, &omap_iommu_ops);

omap_iommu_debugfs_init();

return platform_driver_register(&omap_iommu_driver);
ret = platform_driver_register(&omap_iommu_driver);
if (ret) {
pr_err("%s: failed to register driver\n", __func__);
goto fail_driver;
}

ret = bus_set_iommu(&platform_bus_type, &omap_iommu_ops);
if (ret)
goto fail_bus;

return 0;

fail_bus:
platform_driver_unregister(&omap_iommu_driver);
fail_driver:
kmem_cache_destroy(iopte_cachep);
return ret;
}
subsys_initcall(omap_iommu_init);
/* must be ready before omap3isp is probed */
@@ -55,6 +55,7 @@ static unsigned int gic_cpu_pin;
static unsigned int timer_cpu_pin;
static struct irq_chip gic_level_irq_controller, gic_edge_irq_controller;
DECLARE_BITMAP(ipi_resrv, GIC_MAX_INTRS);
DECLARE_BITMAP(ipi_available, GIC_MAX_INTRS);

static void __gic_irq_dispatch(void);

@@ -746,17 +747,17 @@ static int gic_irq_domain_alloc(struct irq_domain *d, unsigned int virq,

return gic_setup_dev_chip(d, virq, spec->hwirq);
} else {
base_hwirq = find_first_bit(ipi_resrv, gic_shared_intrs);
base_hwirq = find_first_bit(ipi_available, gic_shared_intrs);
if (base_hwirq == gic_shared_intrs) {
return -ENOMEM;
}

/* check that we have enough space */
for (i = base_hwirq; i < nr_irqs; i++) {
if (!test_bit(i, ipi_resrv))
if (!test_bit(i, ipi_available))
return -EBUSY;
}
bitmap_clear(ipi_resrv, base_hwirq, nr_irqs);
bitmap_clear(ipi_available, base_hwirq, nr_irqs);

/* map the hwirq for each cpu consecutively */
i = 0;
@@ -787,7 +788,7 @@ static int gic_irq_domain_alloc(struct irq_domain *d, unsigned int virq,

return 0;
error:
bitmap_set(ipi_resrv, base_hwirq, nr_irqs);
bitmap_set(ipi_available, base_hwirq, nr_irqs);
return ret;
}

@@ -802,7 +803,7 @@ void gic_irq_domain_free(struct irq_domain *d, unsigned int virq,
return;

base_hwirq = GIC_HWIRQ_TO_SHARED(irqd_to_hwirq(data));
bitmap_set(ipi_resrv, base_hwirq, nr_irqs);
bitmap_set(ipi_available, base_hwirq, nr_irqs);
}

int gic_irq_domain_match(struct irq_domain *d, struct device_node *node,
@@ -1066,6 +1067,7 @@ static void __init __gic_init(unsigned long gic_base_addr,
2 * gic_vpes);
}

bitmap_copy(ipi_available, ipi_resrv, GIC_MAX_INTRS);
gic_basic_init();
}

@@ -186,8 +186,9 @@ void led_blink_set(struct led_classdev *led_cdev,
unsigned long *delay_on,
unsigned long *delay_off)
{
led_stop_software_blink(led_cdev);
del_timer_sync(&led_cdev->blink_timer);

led_cdev->flags &= ~LED_BLINK_SW;
led_cdev->flags &= ~LED_BLINK_ONESHOT;
led_cdev->flags &= ~LED_BLINK_ONESHOT_STOP;
@@ -2704,6 +2704,11 @@ static void handle_write_completed(struct r10conf *conf, struct r10bio *r10_bio)
list_add(&r10_bio->retry_list, &conf->bio_end_io_list);
conf->nr_queued++;
spin_unlock_irq(&conf->device_lock);
/*
* In case freeze_array() is waiting for condition
* nr_pending == nr_queued + extra to be true.
*/
wake_up(&conf->wait_barrier);
md_wakeup_thread(conf->mddev->thread);
} else {
if (test_bit(R10BIO_WriteError,
@@ -4084,6 +4089,7 @@ static int raid10_start_reshape(struct mddev *mddev)
diff = 0;
if (first || diff < min_offset_diff)
min_offset_diff = diff;
first = 0;
}
}

@@ -779,6 +779,29 @@ static int dvb_ca_en50221_write_data(struct dvb_ca_private *ca, int slot, u8 * b
goto exit;
}

/*
* It may need some time for the CAM to settle down, or there might
* be a race condition between the CAM, writing HC and our last
* check for DA. This happens, if the CAM asserts DA, just after
* checking DA before we are setting HC. In this case it might be
* a bug in the CAM to keep the FR bit, the lower layer/HW
* communication requires a longer timeout or the CAM needs more
* time internally. But this happens in reality!
* We need to read the status from the HW again and do the same
* we did for the previous check for DA
*/
status = ca->pub->read_cam_control(ca->pub, slot, CTRLIF_STATUS);
if (status < 0)
goto exit;

if (status & (STATUSREG_DA | STATUSREG_RE)) {
if (status & STATUSREG_DA)
dvb_ca_en50221_thread_wakeup(ca);

status = -EAGAIN;
goto exit;
}

/* send the amount of data */
if ((status = ca->pub->write_cam_control(ca->pub, slot, CTRLIF_SIZE_HIGH, bytes_write >> 8)) != 0)
goto exit;

@@ -14,6 +14,8 @@
* GNU General Public License for more details.
*/

#include <linux/delay.h>

#include "si2168_priv.h"

static const struct dvb_frontend_ops si2168_ops;
@@ -378,6 +380,7 @@ static int si2168_init(struct dvb_frontend *fe)
if (ret)
goto err;

udelay(100);
memcpy(cmd.args, "\x85", 1);
cmd.wlen = 1;
cmd.rlen = 1;

@@ -422,8 +422,7 @@ static int bt878_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
bt878_num);
if (bt878_num >= BT878_MAX) {
printk(KERN_ERR "bt878: Too many devices inserted\n");
result = -ENOMEM;
goto fail0;
return -ENOMEM;
}
if (pci_enable_device(dev))
return -EIO;

@@ -83,7 +83,7 @@ static void c8sectpfe_timer_interrupt(unsigned long ac8sectpfei)
static void channel_swdemux_tsklet(unsigned long data)
{
struct channel_info *channel = (struct channel_info *)data;
struct c8sectpfei *fei = channel->fei;
struct c8sectpfei *fei;
unsigned long wp, rp;
int pos, num_packets, n, size;
u8 *buf;
@@ -91,6 +91,8 @@ static void channel_swdemux_tsklet(unsigned long data)
if (unlikely(!channel || !channel->irec))
return;

fei = channel->fei;

wp = readl(channel->irec + DMA_PRDS_BUSWP_TP(0));
rp = readl(channel->irec + DMA_PRDS_BUSRP_TP(0));
@@ -430,6 +430,20 @@ static void palmas_power_off(void)
{
unsigned int addr;
int ret, slave;
struct device_node *np = palmas_dev->dev->of_node;

if (of_property_read_bool(np, "ti,palmas-override-powerhold")) {
addr = PALMAS_BASE_TO_REG(PALMAS_PU_PD_OD_BASE,
PALMAS_PRIMARY_SECONDARY_PAD2);
slave = PALMAS_BASE_TO_SLAVE(PALMAS_PU_PD_OD_BASE);

ret = regmap_update_bits(palmas_dev->regmap[slave], addr,
PALMAS_PRIMARY_SECONDARY_PAD2_GPIO_7_MASK, 0);
if (ret)
dev_err(palmas_dev->dev,
"Unable to write PRIMARY_SECONDARY_PAD2 %d\n",
ret);
}

if (!palmas_dev)
return;

@@ -2996,6 +2996,14 @@ static int mmc_pm_notify(struct notifier_block *notify_block,
if (!err)
break;

if (!mmc_card_is_removable(host)) {
dev_warn(mmc_dev(host),
"pre_suspend failed for non-removable host: "
"%d\n", err);
/* Avoid removing non-removable hosts */
break;
}

/* Calling bus_ops->remove() with a claimed host can deadlock */
host->bus_ops->remove(host);
mmc_claim_host(host);

@@ -1762,8 +1762,8 @@ static int omap_hsmmc_configure_wake_irq(struct omap_hsmmc_host *host)
*/
if (host->pdata->controller_flags & OMAP_HSMMC_SWAKEUP_MISSING) {
struct pinctrl *p = devm_pinctrl_get(host->dev);
if (!p) {
ret = -ENODEV;
if (IS_ERR(p)) {
ret = PTR_ERR(p);
goto err_free_irq;
}
if (IS_ERR(pinctrl_lookup_state(p, PINCTRL_STATE_DEFAULT))) {

@@ -432,6 +432,20 @@ static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock)
if (esdhc->vendor_ver < VENDOR_V_23)
pre_div = 2;

/*
* Limit SD clock to 167MHz for ls1046a according to its datasheet
*/
if (clock > 167000000 &&
of_find_compatible_node(NULL, NULL, "fsl,ls1046a-esdhc"))
clock = 167000000;

/*
* Limit SD clock to 125MHz for ls1012a according to its datasheet
*/
if (clock > 125000000 &&
of_find_compatible_node(NULL, NULL, "fsl,ls1012a-esdhc"))
clock = 125000000;

/* Workaround to reduce the clock frequency for p1010 esdhc */
if (of_find_compatible_node(NULL, NULL, "fsl,p1010-esdhc")) {
if (clock > 20000000)
@@ -2067,6 +2067,7 @@ static int bond_miimon_inspect(struct bonding *bond)
(bond->params.downdelay - slave->delay) *
bond->params.miimon,
slave->dev->name);
commit++;
continue;
}

@@ -2104,7 +2105,7 @@ static int bond_miimon_inspect(struct bonding *bond)
(bond->params.updelay - slave->delay) *
bond->params.miimon,
slave->dev->name);

commit++;
continue;
}

@@ -2026,6 +2026,7 @@ static void bnx2x_set_rx_buf_size(struct bnx2x *bp)
ETH_OVREHEAD +
mtu +
BNX2X_FW_RX_ALIGN_END;
fp->rx_buf_size = SKB_DATA_ALIGN(fp->rx_buf_size);
/* Note : rx_buf_size doesn't take into account NET_SKB_PAD */
if (fp->rx_buf_size + NET_SKB_PAD <= PAGE_SIZE)
fp->rx_frag_size = fp->rx_buf_size + NET_SKB_PAD;

@@ -2594,11 +2594,10 @@ static int ucc_geth_startup(struct ucc_geth_private *ugeth)
} else if (ugeth->ug_info->uf_info.bd_mem_part ==
MEM_PART_MURAM) {
out_be32(&ugeth->p_send_q_mem_reg->sqqd[i].bd_ring_base,
(u32) immrbar_virt_to_phys(ugeth->
p_tx_bd_ring[i]));
(u32)qe_muram_dma(ugeth->p_tx_bd_ring[i]));
out_be32(&ugeth->p_send_q_mem_reg->sqqd[i].
last_bd_completed_address,
(u32) immrbar_virt_to_phys(endOfRing));
(u32)qe_muram_dma(endOfRing));
}
}

@@ -2844,8 +2843,7 @@ static int ucc_geth_startup(struct ucc_geth_private *ugeth)
} else if (ugeth->ug_info->uf_info.bd_mem_part ==
MEM_PART_MURAM) {
out_be32(&ugeth->p_rx_bd_qs_tbl[i].externalbdbaseptr,
(u32) immrbar_virt_to_phys(ugeth->
p_rx_bd_ring[i]));
(u32)qe_muram_dma(ugeth->p_rx_bd_ring[i]));
}
/* rest of fields handled by QE */
}

@@ -671,7 +671,7 @@ static void hns_gmac_get_strings(u32 stringset, u8 *data)

static int hns_gmac_get_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS)
if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
return ARRAY_SIZE(g_gmac_stats_string);

return 0;

@@ -422,7 +422,7 @@ void hns_ppe_update_stats(struct hns_ppe_cb *ppe_cb)

int hns_ppe_get_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS)
if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
return ETH_PPE_STATIC_NUM;
return 0;
}

@@ -798,7 +798,7 @@ void hns_rcb_get_stats(struct hnae_queue *queue, u64 *data)
*/
int hns_rcb_get_ring_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS)
if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
return HNS_RING_STATIC_REG_NUM;

return 0;

@@ -776,7 +776,7 @@ static void hns_xgmac_get_strings(u32 stringset, u8 *data)
*/
static int hns_xgmac_get_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS)
if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
return ARRAY_SIZE(g_xgmac_stats_string);

return 0;

@@ -511,6 +511,23 @@ alloc_napi_failed:
return -ENOMEM;
}

static void disable_sub_crqs(struct ibmvnic_adapter *adapter)
{
int i;

if (adapter->tx_scrq) {
for (i = 0; i < adapter->req_tx_queues; i++)
if (adapter->tx_scrq[i])
disable_irq(adapter->tx_scrq[i]->irq);
}

if (adapter->rx_scrq) {
for (i = 0; i < adapter->req_rx_queues; i++)
if (adapter->rx_scrq[i])
disable_irq(adapter->rx_scrq[i]->irq);
}
}

static int ibmvnic_close(struct net_device *netdev)
{
struct ibmvnic_adapter *adapter = netdev_priv(netdev);
@@ -519,6 +536,7 @@ static int ibmvnic_close(struct net_device *netdev)
int i;

adapter->closing = true;
disable_sub_crqs(adapter);

for (i = 0; i < adapter->req_rx_queues; i++)
napi_disable(&adapter->napi[i]);
@@ -3528,6 +3528,12 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca)

switch (hw->mac.type) {
case e1000_pch2lan:
/* Stable 96MHz frequency */
incperiod = INCPERIOD_96MHz;
incvalue = INCVALUE_96MHz;
shift = INCVALUE_SHIFT_96MHz;
adapter->cc.shift = shift + INCPERIOD_SHIFT_96MHz;
break;
case e1000_pch_lpt:
if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
/* Stable 96MHz frequency */

@@ -80,7 +80,7 @@ static struct ixgbe_stats ixgbevf_gstrings_stats[] = {
#define IXGBEVF_QUEUE_STATS_LEN ( \
(((struct ixgbevf_adapter *)netdev_priv(netdev))->num_tx_queues + \
((struct ixgbevf_adapter *)netdev_priv(netdev))->num_rx_queues) * \
(sizeof(struct ixgbe_stats) / sizeof(u64)))
(sizeof(struct ixgbevf_stats) / sizeof(u64)))
#define IXGBEVF_GLOBAL_STATS_LEN ARRAY_SIZE(ixgbevf_gstrings_stats)

#define IXGBEVF_STATS_LEN (IXGBEVF_GLOBAL_STATS_LEN + IXGBEVF_QUEUE_STATS_LEN)

@@ -204,7 +204,7 @@ static int qed_vf_pf_acquire(struct qed_hwfn *p_hwfn)
/* send acquire request */
rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
return rc;
goto exit;

/* copy acquire response from buffer to p_hwfn */
memcpy(&p_iov->acquire_resp, resp, sizeof(p_iov->acquire_resp));

@@ -128,6 +128,8 @@ static int qlcnic_sriov_virtid_fn(struct qlcnic_adapter *adapter, int vf_id)
return 0;

pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV);
if (!pos)
return 0;
pci_read_config_word(dev, pos + PCI_SRIOV_VF_OFFSET, &offset);
pci_read_config_word(dev, pos + PCI_SRIOV_VF_STRIDE, &stride);

@@ -151,6 +151,13 @@ static void netvsc_destroy_buf(struct hv_device *device)
sizeof(struct nvsp_message),
(unsigned long)revoke_packet,
VM_PKT_DATA_INBAND, 0);
/* If the failure is because the channel is rescinded;
* ignore the failure since we cannot send on a rescinded
* channel. This would allow us to properly cleanup
* even when the channel is rescinded.
*/
if (device->channel->rescind)
ret = 0;
/*
* If we failed here, we might as well return and
* have a leak rather than continue and a bugchk
@@ -211,6 +218,15 @@ static void netvsc_destroy_buf(struct hv_device *device)
sizeof(struct nvsp_message),
(unsigned long)revoke_packet,
VM_PKT_DATA_INBAND, 0);

/* If the failure is because the channel is rescinded;
* ignore the failure since we cannot send on a rescinded
* channel. This would allow us to properly cleanup
* even when the channel is rescinded.
*/
if (device->channel->rescind)
ret = 0;

/* If we failed here, we might as well return and
* have a leak rather than continue and a bugchk
*/

@@ -531,7 +531,7 @@ err:

static const struct driver_info qmi_wwan_info = {
.description = "WWAN/QMI device",
.flags = FLAG_WWAN,
.flags = FLAG_WWAN | FLAG_SEND_ZLP,
.bind = qmi_wwan_bind,
.unbind = qmi_wwan_unbind,
.manage_power = qmi_wwan_manage_power,
@@ -540,7 +540,7 @@ static const struct driver_info qmi_wwan_info = {

static const struct driver_info qmi_wwan_info_quirk_dtr = {
.description = "WWAN/QMI device",
.flags = FLAG_WWAN,
.flags = FLAG_WWAN | FLAG_SEND_ZLP,
.bind = qmi_wwan_bind,
.unbind = qmi_wwan_unbind,
.manage_power = qmi_wwan_manage_power,

@@ -2816,17 +2816,21 @@ static int __vxlan_sock_add(struct vxlan_dev *vxlan, bool ipv6)

static int vxlan_sock_add(struct vxlan_dev *vxlan)
{
bool ipv6 = vxlan->flags & VXLAN_F_IPV6;
bool metadata = vxlan->flags & VXLAN_F_COLLECT_METADATA;
bool ipv6 = vxlan->flags & VXLAN_F_IPV6 || metadata;
bool ipv4 = !ipv6 || metadata;
int ret = 0;

RCU_INIT_POINTER(vxlan->vn4_sock, NULL);
#if IS_ENABLED(CONFIG_IPV6)
RCU_INIT_POINTER(vxlan->vn6_sock, NULL);
if (ipv6 || metadata)
if (ipv6) {
ret = __vxlan_sock_add(vxlan, true);
if (ret < 0 && ret != -EAFNOSUPPORT)
ipv4 = false;
}
#endif
if (!ret && (!ipv6 || metadata))
if (ipv4)
ret = __vxlan_sock_add(vxlan, false);
if (ret < 0)
vxlan_sock_release(vxlan);

@@ -347,6 +347,7 @@ static int pc300_pci_init_one(struct pci_dev *pdev,
card->rambase == NULL) {
pr_err("ioremap() failed\n");
pc300_pci_remove_one(pdev);
return -ENOMEM;
}

/* PLX PCI 9050 workaround for local configuration register read bug */