Merge 5.4.24 into android-5.4
Changes in 5.4.24
	io_uring: grab ->fs as part of async offload
	EDAC: skx_common: downgrade message importance on missing PCI device
	net: dsa: b53: Ensure the default VID is untagged
	net: fib_rules: Correctly set table field when table number exceeds 8 bits
	net: macb: ensure interface is not suspended on at91rm9200
	net: mscc: fix in frame extraction
	net: phy: restore mdio regs in the iproc mdio driver
	net: sched: correct flower port blocking
	net/tls: Fix to avoid gettig invalid tls record
	nfc: pn544: Fix occasional HW initialization failure
	qede: Fix race between rdma destroy workqueue and link change event
	Revert "net: dev: introduce support for sch BYPASS for lockless qdisc"
	udp: rehash on disconnect
	sctp: move the format error check out of __sctp_sf_do_9_1_abort
	bnxt_en: Improve device shutdown method.
	bnxt_en: Issue PCIe FLR in kdump kernel to cleanup pending DMAs.
	bonding: add missing netdev_update_lockdep_key()
	net: export netdev_next_lower_dev_rcu()
	bonding: fix lockdep warning in bond_get_stats()
	ipv6: Fix route replacement with dev-only route
	ipv6: Fix nlmsg_flags when splitting a multipath route
	ipmi:ssif: Handle a possible NULL pointer reference
	drm/msm: Set dma maximum segment size for mdss
	sched/core: Don't skip remote tick for idle CPUs
	timers/nohz: Update NOHZ load in remote tick
	sched/fair: Prevent unlimited runtime on throttled group
	dax: pass NOWAIT flag to iomap_apply
	mac80211: consider more elements in parsing CRC
	cfg80211: check wiphy driver existence for drvinfo report
	s390/zcrypt: fix card and queue total counter wrap
	qmi_wwan: re-add DW5821e pre-production variant
	qmi_wwan: unconditionally reject 2 ep interfaces
	NFSv4: Fix races between open and dentry revalidation
	perf/smmuv3: Use platform_get_irq_optional() for wired interrupt
	perf/x86/intel: Add Elkhart Lake support
	perf/x86/cstate: Add Tremont support
	perf/x86/msr: Add Tremont support
	ceph: do not execute direct write in parallel if O_APPEND is specified
	ARM: dts: sti: fixup sound frame-inversion for stihxxx-b2120.dtsi
	drm/amd/display: Do not set optimized_require to false after plane disable
	RDMA/siw: Remove unwanted WARN_ON in siw_cm_llp_data_ready()
	drm/amd/display: Check engine is not NULL before acquiring
	drm/amd/display: Limit minimum DPPCLK to 100MHz.
	drm/amd/display: Add initialitions for PLL2 clock source
	amdgpu: Prevent build errors regarding soft/hard-float FP ABI tags
	soc/tegra: fuse: Fix build with Tegra194 configuration
	i40e: Fix the conditional for i40e_vc_validate_vqs_bitmaps
	net: ena: fix potential crash when rxfh key is NULL
	net: ena: fix uses of round_jiffies()
	net: ena: add missing ethtool TX timestamping indication
	net: ena: fix incorrect default RSS key
	net: ena: rss: do not allocate key when not supported
	net: ena: rss: fix failure to get indirection table
	net: ena: rss: store hash function as values and not bits
	net: ena: fix incorrectly saving queue numbers when setting RSS indirection table
	net: ena: fix corruption of dev_idx_to_host_tbl
	net: ena: ethtool: use correct value for crc32 hash
	net: ena: ena-com.c: prevent NULL pointer dereference
	ice: update Unit Load Status bitmask to check after reset
	cifs: Fix mode output in debugging statements
	cfg80211: add missing policy for NL80211_ATTR_STATUS_CODE
	mac80211: fix wrong 160/80+80 MHz setting
	net: hns3: add management table after IMP reset
	net: hns3: fix a copying IPv6 address error in hclge_fd_get_flow_tuples()
	nvme/tcp: fix bug on double requeue when send fails
	nvme: prevent warning triggered by nvme_stop_keep_alive
	nvme/pci: move cqe check after device shutdown
	ext4: potential crash on allocation error in ext4_alloc_flex_bg_array()
	audit: fix error handling in audit_data_to_entry()
	audit: always check the netlink payload length in audit_receive_msg()
	ACPICA: Introduce ACPI_ACCESS_BYTE_WIDTH() macro
	ACPI: watchdog: Fix gas->access_width usage
	KVM: VMX: check descriptor table exits on instruction emulation
	HID: ite: Only bind to keyboard USB interface on Acer SW5-012 keyboard dock
	HID: core: fix off-by-one memset in hid_report_raw_event()
	HID: core: increase HID report buffer size to 8KiB
	drm/amdgpu: Drop DRIVER_USE_AGP
	drm/radeon: Inline drm_get_pci_dev
	macintosh: therm_windtunnel: fix regression when instantiating devices
	tracing: Disable trace_printk() on post poned tests
	Revert "PM / devfreq: Modify the device name as devfreq(X) for sysfs"
	amdgpu/gmc_v9: save/restore sdpif regs during S3
	cpufreq: Fix policy initialization for internal governor drivers
	io_uring: fix 32-bit compatability with sendmsg/recvmsg
	netfilter: ipset: Fix "INFO: rcu detected stall in hash_xxx" reports
	net/smc: transfer fasync_list in case of fallback
	vhost: Check docket sk_family instead of call getname
	netfilter: ipset: Fix forceadd evaluation path
	netfilter: xt_hashlimit: reduce hashlimit_mutex scope for htable_put()
	HID: alps: Fix an error handling path in 'alps_input_configured()'
	HID: hiddev: Fix race in in hiddev_disconnect()
	MIPS: VPE: Fix a double free and a memory leak in 'release_vpe()'
	i2c: altera: Fix potential integer overflow
	i2c: jz4780: silence log flood on txabrt
	drm/i915/gvt: Fix orphan vgpu dmabuf_objs' lifetime
	drm/i915/gvt: Separate display reset from ALL_ENGINES reset
	nl80211: fix potential leak in AP start
	mac80211: Remove a redundant mutex unlock
	kbuild: fix DT binding schema rule to detect command line changes
	hv_netvsc: Fix unwanted wakeup in netvsc_attach()
	usb: charger: assign specific number for enum value
	nvme-pci: Hold cq_poll_lock while completing CQEs
	s390/qeth: vnicc Fix EOPNOTSUPP precedence
	net: netlink: cap max groups which will be considered in netlink_bind()
	net: atlantic: fix use after free kasan warn
	net: atlantic: fix potential error handling
	net: atlantic: fix out of range usage of active_vlans array
	net/smc: no peer ID in CLC decline for SMCD
	net: ena: make ena rxfh support ETH_RSS_HASH_NO_CHANGE
	selftests: Install settings files to fix TIMEOUT failures
	kbuild: remove header compile test
	kbuild: move headers_check rule to usr/include/Makefile
	kbuild: remove unneeded variable, single-all
	kbuild: make single target builds even faster
	namei: only return -ECHILD from follow_dotdot_rcu()
	mwifiex: drop most magic numbers from mwifiex_process_tdls_action_frame()
	mwifiex: delete unused mwifiex_get_intf_num()
	KVM: SVM: Override default MMIO mask if memory encryption is enabled
	KVM: Check for a bad hva before dropping into the ghc slow path
	sched/fair: Optimize select_idle_cpu
	f2fs: fix to add swap extent correctly
	RDMA/hns: Simplify the calculation and usage of wqe idx for post verbs
	RDMA/hns: Bugfix for posting a wqe with sge
	drivers: net: xgene: Fix the order of the arguments of 'alloc_etherdev_mqs()'
	ima: ima/lsm policy rule loading logic bug fixes
	kprobes: Set unoptimized flag after unoptimizing code
	lib/vdso: Make __arch_update_vdso_data() logic understandable
	lib/vdso: Update coarse timekeeper unconditionally
	pwm: omap-dmtimer: put_device() after of_find_device_by_node()
	perf hists browser: Restore ESC as "Zoom out" of DSO/thread/etc
	perf ui gtk: Add missing zalloc object
	x86/resctrl: Check monitoring static key in the MBM overflow handler
	KVM: x86: Remove spurious kvm_mmu_unload() from vcpu destruction path
	KVM: x86: Remove spurious clearing of async #PF MSR
	rcu: Allow only one expedited GP to run concurrently with wakeups
	ubifs: Fix ino_t format warnings in orphan_delete()
	thermal: db8500: Depromote debug print
	thermal: brcmstb_thermal: Do not use DT coefficients
	netfilter: nft_tunnel: no need to call htons() when dumping ports
	netfilter: nf_flowtable: fix documentation
	bus: tegra-aconnect: Remove PM_CLK dependency
	xfs: clear kernel only flags in XFS_IOC_ATTRMULTI_BY_HANDLE
	locking/lockdep: Fix lockdep_stats indentation problem
	mm/debug.c: always print flags in dump_page()
	mm/gup: allow FOLL_FORCE for get_user_pages_fast()
	mm/huge_memory.c: use head to check huge zero page
	mm, thp: fix defrag setting if newline is not used
	kvm: nVMX: VMWRITE checks VMCS-link pointer before VMCS field
	kvm: nVMX: VMWRITE checks unsupported field before read-only field
	blktrace: Protect q->blk_trace with RCU
	Linux 5.4.24

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I0b31557e16c72bd30d1e6938ed199918ff326d88
@@ -1115,23 +1115,6 @@ When kbuild executes, the following steps are followed (roughly):
 
 	In this example, extra-y is used to list object files that
 	shall be built, but shall not be linked as part of built-in.a.
 
-	header-test-y
-
-	header-test-y specifies headers (`*.h`) in the current directory that
-	should be compile tested to ensure they are self-contained,
-	i.e. compilable as standalone units. If CONFIG_HEADER_TEST is enabled,
-	this builds them as part of extra-y.
-
-	header-test-pattern-y
-
-	This works as a weaker version of header-test-y, and accepts wildcard
-	patterns. The typical usage is::
-
-		header-test-pattern-y += *.h
-
-	This specifies all the files that matches to `*.h` in the current
-	directory, but the files in 'header-test-' are excluded.
-
 6.7 Commands useful for building a boot image
 ---------------------------------------------
@@ -76,7 +76,7 @@ flowtable and add one rule to your forward chain.
 
         table inet x {
 		flowtable f {
-			hook ingress priority 0 devices = { eth0, eth1 };
+			hook ingress priority 0; devices = { eth0, eth1 };
 		}
 		chain y {
 			type filter hook forward priority 0; policy accept;
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 4
-SUBLEVEL = 23
+SUBLEVEL = 24
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus
 
@@ -622,7 +622,6 @@ ifeq ($(KBUILD_EXTMOD),)
 init-y		:= init/
 drivers-y	:= drivers/ sound/
 drivers-$(CONFIG_SAMPLES) += samples/
-drivers-$(CONFIG_KERNEL_HEADER_TEST) += include/
 net-y		:= net/
 libs-y		:= lib/
 core-y		:= usr/
@@ -1262,19 +1261,15 @@ headers: $(version_h) scripts_unifdef uapi-asm-generic archheaders archscripts
 	$(Q)$(MAKE) $(hdr-inst)=include/uapi
 	$(Q)$(MAKE) $(hdr-inst)=arch/$(SRCARCH)/include/uapi
 
+# Deprecated. It is no-op now.
 PHONY += headers_check
-headers_check: headers
-	$(Q)$(MAKE) $(hdr-inst)=include/uapi HDRCHECK=1
-	$(Q)$(MAKE) $(hdr-inst)=arch/$(SRCARCH)/include/uapi HDRCHECK=1
+headers_check:
+	@:
 
 ifdef CONFIG_HEADERS_INSTALL
 prepare: headers
 endif
 
-ifdef CONFIG_HEADERS_CHECK
-all: headers_check
-endif
-
 PHONY += scripts_unifdef
 scripts_unifdef: scripts_basic
 	$(Q)$(MAKE) $(build)=scripts scripts/unifdef
@@ -1542,7 +1537,6 @@ help:
 	@echo  '  versioncheck    - Sanity check on version.h usage'
 	@echo  '  includecheck    - Check for duplicate included header files'
 	@echo  '  export_report   - List the usages of all exported symbols'
-	@echo  '  headers_check   - Sanity check on exported headers'
 	@echo  '  headerdep       - Detect inclusion cycles in headers'
 	@echo  '  coccicheck      - Check with Coccinelle'
 	@echo  ''
@@ -1707,6 +1701,50 @@ help:
 PHONY += prepare
 endif # KBUILD_EXTMOD
 
+# Single targets
+# ---------------------------------------------------------------------------
+# To build individual files in subdirectories, you can do like this:
+#
+#   make foo/bar/baz.s
+#
+# The supported suffixes for single-target are listed in 'single-targets'
+#
+# To build only under specific subdirectories, you can do like this:
+#
+#   make foo/bar/baz/
+
+ifdef single-build
+
+# .ko is special because modpost is needed
+single-ko := $(sort $(filter %.ko, $(MAKECMDGOALS)))
+single-no-ko := $(sort $(patsubst %.ko,%.mod, $(MAKECMDGOALS)))
+
+$(single-ko): single_modpost
+	@:
+$(single-no-ko): descend
+	@:
+
+ifeq ($(KBUILD_EXTMOD),)
+# For the single build of in-tree modules, use a temporary file to avoid
+# the situation of modules_install installing an invalid modules.order.
+MODORDER := .modules.tmp
+endif
+
+PHONY += single_modpost
+single_modpost: $(single-no-ko)
+	$(Q){ $(foreach m, $(single-ko), echo $(extmod-prefix)$m;) } > $(MODORDER)
+	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost
+
+KBUILD_MODULES := 1
+
+export KBUILD_SINGLE_TARGETS := $(addprefix $(extmod-prefix), $(single-no-ko))
+
+# trim unrelated directories
+build-dirs := $(foreach d, $(build-dirs), \
+		$(if $(filter $(d)/%, $(KBUILD_SINGLE_TARGETS)), $(d)))
+
+endif
+
 # Handle descending into subdirectories listed in $(build-dirs)
 # Preset locale variables to speed up the build process. Limit locale
 # tweaks to this spot to avoid wrong language settings when running
@@ -1715,7 +1753,9 @@ endif # KBUILD_EXTMOD
 PHONY += descend $(build-dirs)
 descend: $(build-dirs)
 $(build-dirs): prepare
-	$(Q)$(MAKE) $(build)=$@ single-build=$(single-build) need-builtin=1 need-modorder=1
+	$(Q)$(MAKE) $(build)=$@ \
+	single-build=$(if $(filter-out $@/, $(single-no-ko)),1) \
+	need-builtin=1 need-modorder=1
 
 clean-dirs := $(addprefix _clean_, $(clean-dirs))
 PHONY += $(clean-dirs) clean
@@ -1820,50 +1860,6 @@ tools/%: FORCE
 	$(Q)mkdir -p $(objtree)/tools
 	$(Q)$(MAKE) LDFLAGS= MAKEFLAGS="$(tools_silent) $(filter --j% -j,$(MAKEFLAGS))" O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/ $*
 
-# Single targets
-# ---------------------------------------------------------------------------
-# To build individual files in subdirectories, you can do like this:
-#
-#   make foo/bar/baz.s
-#
-# The supported suffixes for single-target are listed in 'single-targets'
-#
-# To build only under specific subdirectories, you can do like this:
-#
-#   make foo/bar/baz/
-
-ifdef single-build
-
-single-all := $(filter $(single-targets), $(MAKECMDGOALS))
-
-# .ko is special because modpost is needed
-single-ko := $(sort $(filter %.ko, $(single-all)))
-single-no-ko := $(sort $(patsubst %.ko,%.mod, $(single-all)))
-
-$(single-ko): single_modpost
-	@:
-$(single-no-ko): descend
-	@:
-
-ifeq ($(KBUILD_EXTMOD),)
-# For the single build of in-tree modules, use a temporary file to avoid
-# the situation of modules_install installing an invalid modules.order.
-MODORDER := .modules.tmp
-endif
-
-PHONY += single_modpost
-single_modpost: $(single-no-ko)
-	$(Q){ $(foreach m, $(single-ko), echo $(extmod-prefix)$m;) } > $(MODORDER)
-	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost
-
-KBUILD_MODULES := 1
-
-export KBUILD_SINGLE_TARGETS := $(addprefix $(extmod-prefix), $(single-no-ko))
-
-single-build = $(if $(filter-out $@/, $(single-no-ko)),1)
-
-endif
-
 # FIXME Should go into a make.lib or something
 # ===========================================================================
@@ -46,7 +46,7 @@
 			/* DAC */
 			format = "i2s";
 			mclk-fs = <256>;
-			frame-inversion = <1>;
+			frame-inversion;
 			cpu {
 				sound-dai = <&sti_uni_player2>;
 			};
@@ -134,7 +134,7 @@ void release_vpe(struct vpe *v)
 {
 	list_del(&v->list);
 	if (v->load_addr)
-		release_progmem(v);
+		release_progmem(v->load_addr);
 	kfree(v);
 }
 
@@ -4747,6 +4747,7 @@ __init int intel_pmu_init(void)
 		break;
 
 	case INTEL_FAM6_ATOM_TREMONT_D:
+	case INTEL_FAM6_ATOM_TREMONT:
 		x86_pmu.late_ack = true;
 		memcpy(hw_cache_event_ids, glp_hw_cache_event_ids,
 		       sizeof(hw_cache_event_ids));
@@ -40,17 +40,18 @@
  * Model specific counters:
  *	MSR_CORE_C1_RES: CORE C1 Residency Counter
  *			 perf code: 0x00
- *			 Available model: SLM,AMT,GLM,CNL
+ *			 Available model: SLM,AMT,GLM,CNL,TNT
  *			 Scope: Core (each processor core has a MSR)
  *	MSR_CORE_C3_RESIDENCY: CORE C3 Residency Counter
  *			       perf code: 0x01
  *			       Available model: NHM,WSM,SNB,IVB,HSW,BDW,SKL,GLM,
- *						CNL,KBL,CML
+ *						CNL,KBL,CML,TNT
  *			       Scope: Core
  *	MSR_CORE_C6_RESIDENCY: CORE C6 Residency Counter
  *			       perf code: 0x02
  *			       Available model: SLM,AMT,NHM,WSM,SNB,IVB,HSW,BDW,
- *						SKL,KNL,GLM,CNL,KBL,CML,ICL,TGL
+ *						SKL,KNL,GLM,CNL,KBL,CML,ICL,TGL,
+ *						TNT
  *			       Scope: Core
  *	MSR_CORE_C7_RESIDENCY: CORE C7 Residency Counter
  *			       perf code: 0x03
@@ -60,17 +61,18 @@
  *	MSR_PKG_C2_RESIDENCY:  Package C2 Residency Counter.
  *			       perf code: 0x00
  *			       Available model: SNB,IVB,HSW,BDW,SKL,KNL,GLM,CNL,
- *						KBL,CML,ICL,TGL
+ *						KBL,CML,ICL,TGL,TNT
  *			       Scope: Package (physical package)
  *	MSR_PKG_C3_RESIDENCY:  Package C3 Residency Counter.
 *			       perf code: 0x01
 *			       Available model: NHM,WSM,SNB,IVB,HSW,BDW,SKL,KNL,
- *						GLM,CNL,KBL,CML,ICL,TGL
+ *						GLM,CNL,KBL,CML,ICL,TGL,TNT
 *			       Scope: Package (physical package)
 *	MSR_PKG_C6_RESIDENCY:  Package C6 Residency Counter.
 *			       perf code: 0x02
- *			       Available model: SLM,AMT,NHM,WSM,SNB,IVB,HSW,BDW
- *						SKL,KNL,GLM,CNL,KBL,CML,ICL,TGL
+ *			       Available model: SLM,AMT,NHM,WSM,SNB,IVB,HSW,BDW,
+ *						SKL,KNL,GLM,CNL,KBL,CML,ICL,TGL,
+ *						TNT
 *			       Scope: Package (physical package)
 *	MSR_PKG_C7_RESIDENCY:  Package C7 Residency Counter.
 *			       perf code: 0x03
@@ -87,7 +89,8 @@
  *			       Scope: Package (physical package)
  *	MSR_PKG_C10_RESIDENCY: Package C10 Residency Counter.
  *			       perf code: 0x06
- *			       Available model: HSW ULT,KBL,GLM,CNL,CML,ICL,TGL
+ *			       Available model: HSW ULT,KBL,GLM,CNL,CML,ICL,TGL,
+ *						TNT
  *			       Scope: Package (physical package)
  *
  */
@@ -640,8 +643,9 @@ static const struct x86_cpu_id intel_cstates_match[] __initconst = {
 
 	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GOLDMONT,   glm_cstates),
 	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GOLDMONT_D, glm_cstates),
-
 	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GOLDMONT_PLUS, glm_cstates),
+	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_TREMONT_D, glm_cstates),
+	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_TREMONT, glm_cstates),
 
 	X86_CSTATES_MODEL(INTEL_FAM6_ICELAKE_L, icl_cstates),
 	X86_CSTATES_MODEL(INTEL_FAM6_ICELAKE,   icl_cstates),
@@ -75,8 +75,9 @@ static bool test_intel(int idx, void *data)
 
 	case INTEL_FAM6_ATOM_GOLDMONT:
 	case INTEL_FAM6_ATOM_GOLDMONT_D:
-
 	case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
+	case INTEL_FAM6_ATOM_TREMONT_D:
+	case INTEL_FAM6_ATOM_TREMONT:
 
 	case INTEL_FAM6_XEON_PHI_KNL:
 	case INTEL_FAM6_XEON_PHI_KNM:
@@ -57,6 +57,7 @@ static inline struct rdt_fs_context *rdt_fc2context(struct fs_context *fc)
 }
 
 DECLARE_STATIC_KEY_FALSE(rdt_enable_key);
+DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
 
 /**
  * struct mon_evt - Entry in the event list of a resource
@@ -514,7 +514,7 @@ void mbm_handle_overflow(struct work_struct *work)
 
 	mutex_lock(&rdtgroup_mutex);
 
-	if (!static_branch_likely(&rdt_enable_key))
+	if (!static_branch_likely(&rdt_mon_enable_key))
 		goto out_unlock;
 
 	d = get_domain_from_cpu(cpu, &rdt_resources_all[RDT_RESOURCE_L3]);
@@ -543,7 +543,7 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
 	unsigned long delay = msecs_to_jiffies(delay_ms);
 	int cpu;
 
-	if (!static_branch_likely(&rdt_enable_key))
+	if (!static_branch_likely(&rdt_mon_enable_key))
 		return;
 	cpu = cpumask_any(&dom->cpu_mask);
 	dom->mbm_work_cpu = cpu;
@@ -1298,6 +1298,47 @@ static void shrink_ple_window(struct kvm_vcpu *vcpu)
 	}
 }
 
+/*
+ * The default MMIO mask is a single bit (excluding the present bit),
+ * which could conflict with the memory encryption bit. Check for
+ * memory encryption support and override the default MMIO mask if
+ * memory encryption is enabled.
+ */
+static __init void svm_adjust_mmio_mask(void)
+{
+	unsigned int enc_bit, mask_bit;
+	u64 msr, mask;
+
+	/* If there is no memory encryption support, use existing mask */
+	if (cpuid_eax(0x80000000) < 0x8000001f)
+		return;
+
+	/* If memory encryption is not enabled, use existing mask */
+	rdmsrl(MSR_K8_SYSCFG, msr);
+	if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
+		return;
+
+	enc_bit = cpuid_ebx(0x8000001f) & 0x3f;
+	mask_bit = boot_cpu_data.x86_phys_bits;
+
+	/* Increment the mask bit if it is the same as the encryption bit */
+	if (enc_bit == mask_bit)
+		mask_bit++;
+
+	/*
+	 * If the mask bit location is below 52, then some bits above the
+	 * physical addressing limit will always be reserved, so use the
+	 * rsvd_bits() function to generate the mask. This mask, along with
+	 * the present bit, will be used to generate a page fault with
+	 * PFER.RSV = 1.
+	 *
+	 * If the mask bit location is 52 (or above), then clear the mask.
+	 */
+	mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
+
+	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
+}
+
 static __init int svm_hardware_setup(void)
 {
 	int cpu;
@@ -1352,6 +1393,8 @@ static __init int svm_hardware_setup(void)
 		}
 	}
 
+	svm_adjust_mmio_mask();
+
 	for_each_possible_cpu(cpu) {
 		r = svm_cpu_init(cpu);
 		if (r)
@@ -4609,32 +4609,28 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 {
 	unsigned long field;
 	u64 field_value;
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
 	u32 vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
 	int len;
 	gva_t gva = 0;
-	struct vmcs12 *vmcs12;
+	struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
+						    : get_vmcs12(vcpu);
 	struct x86_exception e;
 	short offset;
 
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 
-	if (to_vmx(vcpu)->nested.current_vmptr == -1ull)
+	/*
+	 * In VMX non-root operation, when the VMCS-link pointer is -1ull,
+	 * any VMREAD sets the ALU flags for VMfailInvalid.
+	 */
+	if (vmx->nested.current_vmptr == -1ull ||
+	    (is_guest_mode(vcpu) &&
+	     get_vmcs12(vcpu)->vmcs_link_pointer == -1ull))
 		return nested_vmx_failInvalid(vcpu);
 
-	if (!is_guest_mode(vcpu))
-		vmcs12 = get_vmcs12(vcpu);
-	else {
-		/*
-		 * When vmcs->vmcs_link_pointer is -1ull, any VMREAD
-		 * to shadowed-field sets the ALU flags for VMfailInvalid.
-		 */
-		if (get_vmcs12(vcpu)->vmcs_link_pointer == -1ull)
-			return nested_vmx_failInvalid(vcpu);
-		vmcs12 = get_shadow_vmcs12(vcpu);
-	}
-
 	/* Decode instruction info and find the field to read */
 	field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
 
@@ -4713,13 +4709,20 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 	 */
 	u64 field_value = 0;
 	struct x86_exception e;
-	struct vmcs12 *vmcs12;
+	struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
+						    : get_vmcs12(vcpu);
 	short offset;
 
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 
-	if (vmx->nested.current_vmptr == -1ull)
+	/*
+	 * In VMX non-root operation, when the VMCS-link pointer is -1ull,
+	 * any VMWRITE sets the ALU flags for VMfailInvalid.
+	 */
+	if (vmx->nested.current_vmptr == -1ull ||
+	    (is_guest_mode(vcpu) &&
+	     get_vmcs12(vcpu)->vmcs_link_pointer == -1ull))
 		return nested_vmx_failInvalid(vcpu);
 
 	if (vmx_instruction_info & (1u << 10))
@@ -4738,6 +4741,12 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 
 
 	field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
 
+	offset = vmcs_field_to_offset(field);
+	if (offset < 0)
+		return nested_vmx_failValid(vcpu,
+			VMXERR_UNSUPPORTED_VMCS_COMPONENT);
+
 	/*
 	 * If the vCPU supports "VMWRITE to any supported field in the
 	 * VMCS," then the "read-only" fields are actually read/write.
@@ -4747,29 +4756,12 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 		return nested_vmx_failValid(vcpu,
 			VMXERR_VMWRITE_READ_ONLY_VMCS_COMPONENT);
 
-	if (!is_guest_mode(vcpu)) {
-		vmcs12 = get_vmcs12(vcpu);
-
-		/*
-		 * Ensure vmcs12 is up-to-date before any VMWRITE that dirties
-		 * vmcs12, else we may crush a field or consume a stale value.
-		 */
-		if (!is_shadow_field_rw(field))
-			copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
-	} else {
-		/*
-		 * When vmcs->vmcs_link_pointer is -1ull, any VMWRITE
-		 * to shadowed-field sets the ALU flags for VMfailInvalid.
-		 */
-		if (get_vmcs12(vcpu)->vmcs_link_pointer == -1ull)
-			return nested_vmx_failInvalid(vcpu);
-		vmcs12 = get_shadow_vmcs12(vcpu);
-	}
-
-	offset = vmcs_field_to_offset(field);
-	if (offset < 0)
-		return nested_vmx_failValid(vcpu,
-			VMXERR_UNSUPPORTED_VMCS_COMPONENT);
+	/*
+	 * Ensure vmcs12 is up-to-date before any VMWRITE that dirties
+	 * vmcs12, else we may crush a field or consume a stale value.
+	 */
+	if (!is_guest_mode(vcpu) && !is_shadow_field_rw(field))
+		copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
 
 	/*
 	 * Some Intel CPUs intentionally drop the reserved bits of the AR byte
@@ -7165,6 +7165,7 @@ static int vmx_check_intercept_io(struct kvm_vcpu *vcpu,
 	else
 		intercept = nested_vmx_check_io_bitmaps(vcpu, port, size);
 
+	/* FIXME: produce nested vmexit and return X86EMUL_INTERCEPTED.  */
 	return intercept ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE;
 }
 
@@ -7194,6 +7195,20 @@ static int vmx_check_intercept(struct kvm_vcpu *vcpu,
 	case x86_intercept_outs:
 		return vmx_check_intercept_io(vcpu, info);
 
+	case x86_intercept_lgdt:
+	case x86_intercept_lidt:
+	case x86_intercept_lldt:
+	case x86_intercept_ltr:
+	case x86_intercept_sgdt:
+	case x86_intercept_sidt:
+	case x86_intercept_sldt:
+	case x86_intercept_str:
+		if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_DESC))
+			return X86EMUL_CONTINUE;
+
+		/* FIXME: produce nested vmexit and return X86EMUL_INTERCEPTED.  */
+		break;
+
 	/* TODO: check more intercepts... */
 	default:
 		break;
@@ -9192,12 +9192,6 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.apf.msr_val = 0;
-
-	vcpu_load(vcpu);
-	kvm_mmu_unload(vcpu);
-	vcpu_put(vcpu);
-
 	kvm_arch_vcpu_free(vcpu);
 }
 
@@ -126,12 +126,11 @@ void __init acpi_watchdog_init(void)
 		gas = &entries[i].register_region;
 
 		res.start = gas->address;
+		res.end = res.start + ACPI_ACCESS_BYTE_WIDTH(gas->access_width) - 1;
 		if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
 			res.flags = IORESOURCE_MEM;
-			res.end = res.start + ALIGN(gas->access_width, 4) - 1;
 		} else if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
 			res.flags = IORESOURCE_IO;
-			res.end = res.start + gas->access_width - 1;
 		} else {
 			pr_warn("Unsupported address space: %u\n",
 				gas->space_id);
@@ -138,7 +138,6 @@ config TEGRA_ACONNECT
 	tristate "Tegra ACONNECT Bus Driver"
 	depends on ARCH_TEGRA_210_SOC
 	depends on OF && PM
-	select PM_CLK
 	help
 	  Driver for the Tegra ACONNECT bus which is used to interface with
 	  the devices inside the Audio Processing Engine (APE) for Tegra210.
@@ -775,10 +775,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
 	flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
 	msg = ssif_info->curr_msg;
 	if (msg) {
+		if (data) {
+			if (len > IPMI_MAX_MSG_LENGTH)
+				len = IPMI_MAX_MSG_LENGTH;
+			memcpy(msg->rsp, data, len);
+		} else {
+			len = 0;
+		}
 		msg->rsp_size = len;
-		if (msg->rsp_size > IPMI_MAX_MSG_LENGTH)
-			msg->rsp_size = IPMI_MAX_MSG_LENGTH;
-		memcpy(msg->rsp, data, msg->rsp_size);
 		ssif_info->curr_msg = NULL;
 	}
 
@@ -1079,9 +1079,17 @@ static int cpufreq_init_policy(struct cpufreq_policy *policy)
 			pol = policy->last_policy;
 		} else if (def_gov) {
 			pol = cpufreq_parse_policy(def_gov->name);
-		} else {
-			return -ENODATA;
+			/*
+			 * In case the default governor is neiter "performance"
+			 * nor "powersave", fall back to the initial policy
+			 * value set by the driver.
+			 */
+			if (pol == CPUFREQ_POLICY_UNKNOWN)
+				pol = policy->policy;
 		}
+		if (pol != CPUFREQ_POLICY_PERFORMANCE &&
+		    pol != CPUFREQ_POLICY_POWERSAVE)
+			return -ENODATA;
 	}
 
 	return cpufreq_set_policy(policy, gov, pol);
@@ -613,7 +613,6 @@ struct devfreq *devfreq_add_device(struct device *dev,
 {
 	struct devfreq *devfreq;
 	struct devfreq_governor *governor;
-	static atomic_t devfreq_no = ATOMIC_INIT(-1);
 	int err = 0;
 
 	if (!dev || !profile || !governor_name) {
@@ -677,8 +676,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
 	devfreq->suspend_freq = dev_pm_opp_get_suspend_opp_freq(dev);
 	atomic_set(&devfreq->suspend_count, 0);
 
-	dev_set_name(&devfreq->dev, "devfreq%d",
-				atomic_inc_return(&devfreq_no));
+	dev_set_name(&devfreq->dev, "%s", dev_name(dev));
 	err = device_register(&devfreq->dev);
 	if (err) {
 		mutex_unlock(&devfreq->lock);
@@ -235,7 +235,7 @@ int skx_get_hi_lo(unsigned int did, int off[], u64 *tolm, u64 *tohm)
 
 	pdev = pci_get_device(PCI_VENDOR_ID_INTEL, did, NULL);
 	if (!pdev) {
-		skx_printk(KERN_ERR, "Can't get tolm/tohm\n");
+		edac_dbg(2, "Can't get tolm/tohm\n");
 		return -ENODEV;
 	}
 
@@ -1421,7 +1421,7 @@ amdgpu_get_crtc_scanout_position(struct drm_device *dev, unsigned int pipe,
 
 static struct drm_driver kms_driver = {
 	.driver_features =
-	    DRIVER_USE_AGP | DRIVER_ATOMIC |
+	    DRIVER_ATOMIC |
 	    DRIVER_GEM |
 	    DRIVER_RENDER | DRIVER_MODESET | DRIVER_SYNCOBJ,
 	.load = amdgpu_driver_load_kms,
@@ -157,6 +157,7 @@ struct amdgpu_gmc {
 	uint32_t                srbm_soft_reset;
 	bool			prt_warning;
 	uint64_t		stolen_size;
+	uint32_t		sdpif_register;
 	/* apertures */
 	u64			shared_aperture_start;
 	u64			shared_aperture_end;
@@ -1382,6 +1382,19 @@ static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
 	}
 }
 
+/**
+ * gmc_v9_0_restore_registers - restores regs
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * This restores register values, saved at suspend.
+ */
+static void gmc_v9_0_restore_registers(struct amdgpu_device *adev)
+{
+	if (adev->asic_type == CHIP_RAVEN)
+		WREG32(mmDCHUBBUB_SDPIF_MMIO_CNTRL_0, adev->gmc.sdpif_register);
+}
+
 /**
  * gmc_v9_0_gart_enable - gart enable
  *
@@ -1478,6 +1491,20 @@ static int gmc_v9_0_hw_init(void *handle)
 	return r;
 }
 
+/**
+ * gmc_v9_0_save_registers - saves regs
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * This saves potential register values that should be
+ * restored upon resume
+ */
+static void gmc_v9_0_save_registers(struct amdgpu_device *adev)
+{
+	if (adev->asic_type == CHIP_RAVEN)
+		adev->gmc.sdpif_register = RREG32(mmDCHUBBUB_SDPIF_MMIO_CNTRL_0);
+}
+
 /**
  * gmc_v9_0_gart_disable - gart disable
  *
@@ -1514,9 +1541,16 @@ static int gmc_v9_0_hw_fini(void *handle)
 
 static int gmc_v9_0_suspend(void *handle)
 {
+	int r;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-	return gmc_v9_0_hw_fini(adev);
+	r = gmc_v9_0_hw_fini(adev);
+	if (r)
+		return r;
+
+	gmc_v9_0_save_registers(adev);
+
+	return 0;
 }
 
 static int gmc_v9_0_resume(void *handle)
@@ -1524,6 +1558,7 @@ static int gmc_v9_0_resume(void *handle)
 	int r;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+	gmc_v9_0_restore_registers(adev);
 	r = gmc_v9_0_hw_init(adev);
 	if (r)
 		return r;
@@ -91,6 +91,12 @@ ifdef CONFIG_DRM_AMD_DC_DCN2_1
 ###############################################################################
 CLK_MGR_DCN21 = rn_clk_mgr.o rn_clk_mgr_vbios_smu.o
 
+# prevent build errors regarding soft-float vs hard-float FP ABI tags
+# this code is currently unused on ppc64, as it applies to Renoir APUs only
+ifdef CONFIG_PPC64
+CFLAGS_$(AMDDALPATH)/dc/clk_mgr/dcn21/rn_clk_mgr.o := $(call cc-option,-mno-gnu-attribute)
+endif
+
 AMD_DAL_CLK_MGR_DCN21 = $(addprefix $(AMDDALPATH)/dc/clk_mgr/dcn21/,$(CLK_MGR_DCN21))
 
 AMD_DISPLAY_FILES += $(AMD_DAL_CLK_MGR_DCN21)
@@ -91,6 +91,12 @@ void rn_update_clocks(struct clk_mgr *clk_mgr_base,
 		rn_vbios_smu_set_min_deep_sleep_dcfclk(clk_mgr, clk_mgr_base->clks.dcfclk_deep_sleep_khz);
 	}
 
+	// workaround: Limit dppclk to 100Mhz to avoid lower eDP panel switch to plus 4K monitor underflow.
+	if (!IS_DIAG_DC(dc->ctx->dce_environment)) {
+		if (new_clocks->dppclk_khz < 100000)
+			new_clocks->dppclk_khz = 100000;
+	}
+
 	if (should_set_clock(safe_to_lower, new_clocks->dppclk_khz, clk_mgr->base.clks.dppclk_khz)) {
 		if (clk_mgr->base.clks.dppclk_khz > new_clocks->dppclk_khz)
 			dpp_clock_lowered = true;
@@ -386,7 +386,7 @@ static bool acquire(
 {
 	enum gpio_result result;
 
-	if (!is_engine_available(engine))
+	if ((engine == NULL) || !is_engine_available(engine))
 		return false;
 
 	result = dal_ddc_open(ddc, GPIO_MODE_HARDWARE,
@@ -493,7 +493,6 @@ static void dcn20_plane_atomic_disable(struct dc *dc, struct pipe_ctx *pipe_ctx)
 	dpp->funcs->dpp_dppclk_control(dpp, false, false);
 
 	hubp->power_gated = true;
-	dc->optimized_required = false; /* We're powering off, no need to optimize */
 
 	dc->hwss.plane_atomic_power_down(dc,
 			pipe_ctx->plane_res.dpp,
@@ -57,6 +57,7 @@
 #include "dcn20/dcn20_dccg.h"
 #include "dcn21_hubbub.h"
 #include "dcn10/dcn10_resource.h"
+#include "dce110/dce110_resource.h"
 
 #include "dcn20/dcn20_dwb.h"
 #include "dcn20/dcn20_mmhubbub.h"
@@ -824,6 +825,7 @@ static const struct dc_debug_options debug_defaults_diags = {
 enum dcn20_clk_src_array_id {
 	DCN20_CLK_SRC_PLL0,
 	DCN20_CLK_SRC_PLL1,
+	DCN20_CLK_SRC_PLL2,
 	DCN20_CLK_SRC_TOTAL_DCN21
 };
 
@@ -1492,6 +1494,10 @@ static bool construct(
 			dcn21_clock_source_create(ctx, ctx->dc_bios,
 				CLOCK_SOURCE_COMBO_PHY_PLL1,
 				&clk_src_regs[1], false);
+	pool->base.clock_sources[DCN20_CLK_SRC_PLL2] =
+			dcn21_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL2,
+				&clk_src_regs[2], false);
 
 	pool->base.clk_src_count = DCN20_CLK_SRC_TOTAL_DCN21;
 
@@ -7376,6 +7376,8 @@
 #define mmCRTC4_CRTC_DRR_CONTROL                                                                       0x0f3e
 #define mmCRTC4_CRTC_DRR_CONTROL_BASE_IDX                                                              2
 
+#define mmDCHUBBUB_SDPIF_MMIO_CNTRL_0                                                                  0x395d
+#define mmDCHUBBUB_SDPIF_MMIO_CNTRL_0_BASE_IDX                                                         2
 
 // addressBlock: dce_dc_fmt4_dispdec
 // base address: 0x2000
@@ -7,7 +7,6 @@ config DRM_I915_WERROR
 	# We use the dependency on !COMPILE_TEST to not be enabled in
 	# allmodconfig or allyesconfig configurations
 	depends on !COMPILE_TEST
-	select HEADER_TEST
 	default n
 	help
 	  Add -Werror to the build flags for (and only for) i915.ko.
@@ -96,12 +96,12 @@ static void dmabuf_gem_object_free(struct kref *kref)
 		dmabuf_obj = container_of(pos,
 				struct intel_vgpu_dmabuf_obj, list);
 		if (dmabuf_obj == obj) {
+			list_del(pos);
 			intel_gvt_hypervisor_put_vfio_device(vgpu);
 			idr_remove(&vgpu->object_idr,
 				   dmabuf_obj->dmabuf_id);
 			kfree(dmabuf_obj->info);
 			kfree(dmabuf_obj);
-			list_del(pos);
 			break;
 		}
 	}
@@ -560,9 +560,9 @@ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
 
 	intel_vgpu_reset_mmio(vgpu, dmlr);
 	populate_pvinfo_page(vgpu);
-	intel_vgpu_reset_display(vgpu);
 
 	if (dmlr) {
+		intel_vgpu_reset_display(vgpu);
 		intel_vgpu_reset_cfg_space(vgpu);
 		/* only reset the failsafe mode when dmlr reset */
 		vgpu->failsafe = false;
@@ -441,6 +441,14 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv)
 	if (ret)
 		goto err_msm_uninit;
 
+	if (!dev->dma_parms) {
+		dev->dma_parms = devm_kzalloc(dev, sizeof(*dev->dma_parms),
+					      GFP_KERNEL);
+		if (!dev->dma_parms)
+			return -ENOMEM;
+	}
+	dma_set_max_seg_size(dev, DMA_BIT_MASK(32));
+
 	msm_gem_shrinker_init(ddev);
 
 	switch (get_mdp_ver(pdev)) {
@@ -37,6 +37,7 @@
 #include <linux/vga_switcheroo.h>
 #include <linux/mmu_notifier.h>
 
+#include <drm/drm_agpsupport.h>
 #include <drm/drm_crtc_helper.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fb_helper.h>
@@ -325,6 +326,7 @@ static int radeon_pci_probe(struct pci_dev *pdev,
 			    const struct pci_device_id *ent)
 {
 	unsigned long flags = 0;
+	struct drm_device *dev;
 	int ret;
 
 	if (!ent)
@@ -365,7 +367,44 @@ static int radeon_pci_probe(struct pci_dev *pdev,
 	if (ret)
 		return ret;
 
-	return drm_get_pci_dev(pdev, ent, &kms_driver);
+	dev = drm_dev_alloc(&kms_driver, &pdev->dev);
+	if (IS_ERR(dev))
+		return PTR_ERR(dev);
+
+	ret = pci_enable_device(pdev);
+	if (ret)
+		goto err_free;
+
+	dev->pdev = pdev;
+#ifdef __alpha__
+	dev->hose = pdev->sysdata;
+#endif
+
+	pci_set_drvdata(pdev, dev);
+
+	if (pci_find_capability(dev->pdev, PCI_CAP_ID_AGP))
+		dev->agp = drm_agp_init(dev);
+	if (dev->agp) {
+		dev->agp->agp_mtrr = arch_phys_wc_add(
+			dev->agp->agp_info.aper_base,
+			dev->agp->agp_info.aper_size *
+			1024 * 1024);
+	}
+
+	ret = drm_dev_register(dev, ent->driver_data);
+	if (ret)
+		goto err_agp;
+
+	return 0;
+
+err_agp:
+	if (dev->agp)
+		arch_phys_wc_del(dev->agp->agp_mtrr);
+	kfree(dev->agp);
+	pci_disable_device(pdev);
+err_free:
+	drm_dev_put(dev);
+	return ret;
 }
 
 static void
@@ -578,7 +617,7 @@ radeon_get_crtc_scanout_position(struct drm_device *dev, unsigned int pipe,
 
 static struct drm_driver kms_driver = {
 	.driver_features =
-	    DRIVER_USE_AGP | DRIVER_GEM | DRIVER_RENDER,
+	    DRIVER_GEM | DRIVER_RENDER,
 	.load = radeon_driver_load_kms,
 	.open = radeon_driver_open_kms,
 	.postclose = radeon_driver_postclose_kms,
@@ -31,6 +31,7 @@
 #include <linux/uaccess.h>
 #include <linux/vga_switcheroo.h>
 
+#include <drm/drm_agpsupport.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_file.h>
 #include <drm/drm_ioctl.h>
@@ -77,6 +78,11 @@ void radeon_driver_unload_kms(struct drm_device *dev)
 	radeon_modeset_fini(rdev);
 	radeon_device_fini(rdev);
 
+	if (dev->agp)
+		arch_phys_wc_del(dev->agp->agp_mtrr);
+	kfree(dev->agp);
+	dev->agp = NULL;
+
 done_free:
 	kfree(rdev);
 	dev->dev_private = NULL;
@@ -730,7 +730,7 @@ static int alps_input_configured(struct hid_device *hdev, struct hid_input *hi)
 	if (data->has_sp) {
 		input2 = input_allocate_device();
 		if (!input2) {
-			input_free_device(input2);
+			ret = -ENOMEM;
 			goto exit;
 		}
 
@@ -1741,7 +1741,9 @@ int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
 
 	rsize = ((report->size - 1) >> 3) + 1;
 
-	if (rsize > HID_MAX_BUFFER_SIZE)
+	if (report_enum->numbered && rsize >= HID_MAX_BUFFER_SIZE)
+		rsize = HID_MAX_BUFFER_SIZE - 1;
+	else if (rsize > HID_MAX_BUFFER_SIZE)
 		rsize = HID_MAX_BUFFER_SIZE;
 
 	if (csize < rsize) {
@@ -41,8 +41,9 @@ static const struct hid_device_id ite_devices[] = {
 	{ HID_USB_DEVICE(USB_VENDOR_ID_ITE, USB_DEVICE_ID_ITE8595) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_258A, USB_DEVICE_ID_258A_6A88) },
 	/* ITE8595 USB kbd ctlr, with Synaptics touchpad connected to it. */
-	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS,
-			 USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012) },
+	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+		     USB_VENDOR_ID_SYNAPTICS,
+		     USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012) },
 	{ }
 };
 MODULE_DEVICE_TABLE(hid, ite_devices);
@@ -941,9 +941,9 @@ void hiddev_disconnect(struct hid_device *hid)
 	hiddev->exist = 0;
 
 	if (hiddev->open) {
-		mutex_unlock(&hiddev->existancelock);
 		hid_hw_close(hiddev->hid);
 		wake_up_interruptible(&hiddev->wait);
+		mutex_unlock(&hiddev->existancelock);
 	} else {
 		mutex_unlock(&hiddev->existancelock);
 		kfree(hiddev);
@@ -171,7 +171,7 @@ static void altr_i2c_init(struct altr_i2c_dev *idev)
 	/* SCL Low Time */
 	writel(t_low, idev->base + ALTR_I2C_SCL_LOW);
 	/* SDA Hold Time, 300ns */
-	writel(div_u64(300 * clk_mhz, 1000), idev->base + ALTR_I2C_SDA_HOLD);
+	writel(3 * clk_mhz / 10, idev->base + ALTR_I2C_SDA_HOLD);
 
 	/* Mask all master interrupt bits */
 	altr_i2c_int_enable(idev, ALTR_I2C_ALL_IRQ, false);
@@ -73,25 +73,6 @@
 #define JZ4780_I2C_STA_TFNF		BIT(1)
 #define JZ4780_I2C_STA_ACT		BIT(0)
 
-static const char * const jz4780_i2c_abrt_src[] = {
-	"ABRT_7B_ADDR_NOACK",
-	"ABRT_10ADDR1_NOACK",
-	"ABRT_10ADDR2_NOACK",
-	"ABRT_XDATA_NOACK",
-	"ABRT_GCALL_NOACK",
-	"ABRT_GCALL_READ",
-	"ABRT_HS_ACKD",
-	"SBYTE_ACKDET",
-	"ABRT_HS_NORSTRT",
-	"SBYTE_NORSTRT",
-	"ABRT_10B_RD_NORSTRT",
-	"ABRT_MASTER_DIS",
-	"ARB_LOST",
-	"SLVFLUSH_TXFIFO",
-	"SLV_ARBLOST",
-	"SLVRD_INTX",
-};
-
 #define JZ4780_I2C_INTST_IGC		BIT(11)
 #define JZ4780_I2C_INTST_ISTT		BIT(10)
 #define JZ4780_I2C_INTST_ISTP		BIT(9)
@@ -529,21 +510,8 @@ done:
 
 static void jz4780_i2c_txabrt(struct jz4780_i2c *i2c, int src)
 {
-	int i;
-
-	dev_err(&i2c->adap.dev, "txabrt: 0x%08x\n", src);
-	dev_err(&i2c->adap.dev, "device addr=%x\n",
-		jz4780_i2c_readw(i2c, JZ4780_I2C_TAR));
-	dev_err(&i2c->adap.dev, "send cmd count:%d  %d\n",
-		i2c->cmd, i2c->cmd_buf[i2c->cmd]);
-	dev_err(&i2c->adap.dev, "receive data count:%d  %d\n",
-		i2c->cmd, i2c->data_buf[i2c->cmd]);
-
-	for (i = 0; i < 16; i++) {
-		if (src & BIT(i))
-			dev_dbg(&i2c->adap.dev, "I2C TXABRT[%d]=%s\n",
-				i, jz4780_i2c_abrt_src[i]);
-	}
+	dev_dbg(&i2c->adap.dev, "txabrt: 0x%08x, cmd: %d, send: %d, recv: %d\n",
+		src, i2c->cmd, i2c->cmd_buf[i2c->cmd], i2c->data_buf[i2c->cmd]);
 }
 
 static inline int jz4780_i2c_xfer_read(struct jz4780_i2c *i2c,
@@ -425,7 +425,7 @@ struct hns_roce_mr_table {
 struct hns_roce_wq {
 	u64		*wrid;     /* Work request ID */
 	spinlock_t	lock;
-	int		wqe_cnt;  /* WQE num */
+	u32		wqe_cnt;  /* WQE num */
 	u32		max_post;
 	int		max_gs;
 	int		offset;
@@ -658,7 +658,6 @@ struct hns_roce_qp {
 	u8			sdb_en;
 	u32			doorbell_qpn;
 	u32			sq_signal_bits;
-	u32			sq_next_wqe;
 	struct hns_roce_wq	sq;
 
 	struct ib_umem		*umem;
@@ -74,8 +74,8 @@ static int hns_roce_v1_post_send(struct ib_qp *ibqp,
 	unsigned long flags = 0;
 	void *wqe = NULL;
 	__le32 doorbell[2];
+	u32 wqe_idx = 0;
 	int nreq = 0;
-	u32 ind = 0;
 	int ret = 0;
 	u8 *smac;
 	int loopback;
@@ -88,7 +88,7 @@ static int hns_roce_v1_post_send(struct ib_qp *ibqp,
 	}
 
 	spin_lock_irqsave(&qp->sq.lock, flags);
-	ind = qp->sq_next_wqe;
+
 	for (nreq = 0; wr; ++nreq, wr = wr->next) {
 		if (hns_roce_wq_overflow(&qp->sq, nreq, qp->ibqp.send_cq)) {
 			ret = -ENOMEM;
@@ -96,6 +96,8 @@ static int hns_roce_v1_post_send(struct ib_qp *ibqp,
 			goto out;
 		}
 
+		wqe_idx = (qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1);
+
 		if (unlikely(wr->num_sge > qp->sq.max_gs)) {
 			dev_err(dev, "num_sge=%d > qp->sq.max_gs=%d\n",
 				wr->num_sge, qp->sq.max_gs);
@@ -104,9 +106,8 @@ static int hns_roce_v1_post_send(struct ib_qp *ibqp,
 			goto out;
 		}
 
-		wqe = get_send_wqe(qp, ind & (qp->sq.wqe_cnt - 1));
-		qp->sq.wrid[(qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1)] =
-								      wr->wr_id;
+		wqe = get_send_wqe(qp, wqe_idx);
+		qp->sq.wrid[wqe_idx] = wr->wr_id;
 
 		/* Corresponding to the RC and RD type wqe process separately */
 		if (ibqp->qp_type == IB_QPT_GSI) {
@@ -210,7 +211,6 @@ static int hns_roce_v1_post_send(struct ib_qp *ibqp,
 				cpu_to_le32((wr->sg_list[1].addr) >> 32);
 			ud_sq_wqe->l_key1 =
 				cpu_to_le32(wr->sg_list[1].lkey);
-			ind++;
 		} else if (ibqp->qp_type == IB_QPT_RC) {
 			u32 tmp_len = 0;
 
@@ -308,7 +308,6 @@ static int hns_roce_v1_post_send(struct ib_qp *ibqp,
 				ctrl->flag |= cpu_to_le32(wr->num_sge <<
 					      HNS_ROCE_WQE_SGE_NUM_BIT);
 			}
-			ind++;
 		}
 	}
 
@@ -336,7 +335,6 @@ out:
 		doorbell[1] = sq_db.u32_8;
 
 		hns_roce_write64_k(doorbell, qp->sq.db_reg_l);
-		qp->sq_next_wqe = ind;
 	}
 
 	spin_unlock_irqrestore(&qp->sq.lock, flags);
@@ -348,12 +346,6 @@ static int hns_roce_v1_post_recv(struct ib_qp *ibqp,
 				 const struct ib_recv_wr *wr,
 				 const struct ib_recv_wr **bad_wr)
 {
-	int ret = 0;
-	int nreq = 0;
-	int ind = 0;
-	int i = 0;
-	u32 reg_val;
-	unsigned long flags = 0;
 	struct hns_roce_rq_wqe_ctrl *ctrl = NULL;
 	struct hns_roce_wqe_data_seg *scat = NULL;
 	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
@@ -361,9 +353,14 @@ static int hns_roce_v1_post_recv(struct ib_qp *ibqp,
 	struct device *dev = &hr_dev->pdev->dev;
 	struct hns_roce_rq_db rq_db;
 	__le32 doorbell[2] = {0};
+	unsigned long flags = 0;
+	unsigned int wqe_idx;
+	int ret = 0;
+	int nreq = 0;
+	int i = 0;
+	u32 reg_val;
 
 	spin_lock_irqsave(&hr_qp->rq.lock, flags);
-	ind = hr_qp->rq.head & (hr_qp->rq.wqe_cnt - 1);
 
 	for (nreq = 0; wr; ++nreq, wr = wr->next) {
 		if (hns_roce_wq_overflow(&hr_qp->rq, nreq,
@@ -373,6 +370,8 @@ static int hns_roce_v1_post_recv(struct ib_qp *ibqp,
 			goto out;
 		}
 
+		wqe_idx = (hr_qp->rq.head + nreq) & (hr_qp->rq.wqe_cnt - 1);
+
 		if (unlikely(wr->num_sge > hr_qp->rq.max_gs)) {
 			dev_err(dev, "rq:num_sge=%d > qp->sq.max_gs=%d\n",
 				wr->num_sge, hr_qp->rq.max_gs);
@@ -381,7 +380,7 @@ static int hns_roce_v1_post_recv(struct ib_qp *ibqp,
 			goto out;
 		}
 
-		ctrl = get_recv_wqe(hr_qp, ind);
+		ctrl = get_recv_wqe(hr_qp, wqe_idx);
 
 		roce_set_field(ctrl->rwqe_byte_12,
 			       RQ_WQE_CTRL_RWQE_BYTE_12_RWQE_SGE_NUM_M,
@@ -393,9 +392,7 @@ static int hns_roce_v1_post_recv(struct ib_qp *ibqp,
 		for (i = 0; i < wr->num_sge; i++)
 			set_data_seg(scat + i, wr->sg_list + i);
 
-		hr_qp->rq.wrid[ind] = wr->wr_id;
-
-		ind = (ind + 1) & (hr_qp->rq.wqe_cnt - 1);
+		hr_qp->rq.wrid[wqe_idx] = wr->wr_id;
 	}
 
 out:
@@ -2702,7 +2699,6 @@ static int hns_roce_v1_m_sqp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
 		hr_qp->rq.tail = 0;
 		hr_qp->sq.head = 0;
 		hr_qp->sq.tail = 0;
-		hr_qp->sq_next_wqe = 0;
 	}
 
 	kfree(context);
@@ -3316,7 +3312,6 @@ static int hns_roce_v1_m_qp(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
 		hr_qp->rq.tail = 0;
 		hr_qp->sq.head = 0;
 		hr_qp->sq.tail = 0;
-		hr_qp->sq_next_wqe = 0;
 	}
 out:
	kfree(context);
@@ -110,7 +110,7 @@ static void set_atomic_seg(struct hns_roce_wqe_atomic_seg *aseg,
 }
 
 static void set_extend_sge(struct hns_roce_qp *qp, const struct ib_send_wr *wr,
-			   unsigned int *sge_ind)
+			   unsigned int *sge_ind, int valid_num_sge)
 {
 	struct hns_roce_v2_wqe_data_seg *dseg;
 	struct ib_sge *sg;
@@ -123,7 +123,7 @@ static void set_extend_sge(struct hns_roce_qp *qp, const struct ib_send_wr *wr,
 
 	if (qp->ibqp.qp_type == IB_QPT_RC || qp->ibqp.qp_type == IB_QPT_UC)
 		num_in_wqe = HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE;
-	extend_sge_num = wr->num_sge - num_in_wqe;
+	extend_sge_num = valid_num_sge - num_in_wqe;
 	sg = wr->sg_list + num_in_wqe;
 	shift = qp->hr_buf.page_shift;
 
@@ -159,14 +159,16 @@ static void set_extend_sge(struct hns_roce_qp *qp, const struct ib_send_wr *wr,
 static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
 			     struct hns_roce_v2_rc_send_wqe *rc_sq_wqe,
 			     void *wqe, unsigned int *sge_ind,
+			     int valid_num_sge,
 			     const struct ib_send_wr **bad_wr)
 {
 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
 	struct hns_roce_v2_wqe_data_seg *dseg = wqe;
 	struct hns_roce_qp *qp = to_hr_qp(ibqp);
+	int j = 0;
 	int i;
 
-	if (wr->send_flags & IB_SEND_INLINE && wr->num_sge) {
+	if (wr->send_flags & IB_SEND_INLINE && valid_num_sge) {
 		if (le32_to_cpu(rc_sq_wqe->msg_len) >
 		    hr_dev->caps.max_sq_inline) {
 			*bad_wr = wr;
@@ -190,7 +192,7 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
 		roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_INLINE_S,
 			     1);
 	} else {
-		if (wr->num_sge <= HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE) {
+		if (valid_num_sge <= HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE) {
 			for (i = 0; i < wr->num_sge; i++) {
 				if (likely(wr->sg_list[i].length)) {
 					set_data_seg_v2(dseg, wr->sg_list + i);
@@ -203,19 +205,21 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
 				     V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
 				     (*sge_ind) & (qp->sge.sge_cnt - 1));
 
-			for (i = 0; i < HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE; i++) {
+			for (i = 0; i < wr->num_sge &&
+			     j < HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE; i++) {
 				if (likely(wr->sg_list[i].length)) {
 					set_data_seg_v2(dseg, wr->sg_list + i);
 					dseg++;
+					j++;
 				}
 			}
 
-			set_extend_sge(qp, wr, sge_ind);
+			set_extend_sge(qp, wr, sge_ind, valid_num_sge);
 		}
 
 		roce_set_field(rc_sq_wqe->byte_16,
 			       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
-			       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S, wr->num_sge);
+			       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S, valid_num_sge);
 	}
 
 	return 0;
@@ -239,10 +243,11 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 	struct device *dev = hr_dev->dev;
 	struct hns_roce_v2_db sq_db;
 	struct ib_qp_attr attr;
-	unsigned int sge_ind;
 	unsigned int owner_bit;
+	unsigned int sge_idx;
+	unsigned int wqe_idx;
 	unsigned long flags;
-	unsigned int ind;
+	int valid_num_sge;
 	void *wqe = NULL;
 	bool loopback;
 	int attr_mask;
@@ -269,8 +274,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 	}
 
 	spin_lock_irqsave(&qp->sq.lock, flags);
-	ind = qp->sq_next_wqe;
-	sge_ind = qp->next_sge;
+	sge_idx = qp->next_sge;
 
 	for (nreq = 0; wr; ++nreq, wr = wr->next) {
 		if (hns_roce_wq_overflow(&qp->sq, nreq, qp->ibqp.send_cq)) {
@@ -279,6 +283,8 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 			goto out;
 		}
 
+		wqe_idx = (qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1);
+
 		if (unlikely(wr->num_sge > qp->sq.max_gs)) {
 			dev_err(dev, "num_sge=%d > qp->sq.max_gs=%d\n",
 				wr->num_sge, qp->sq.max_gs);
@@ -287,14 +293,20 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 			goto out;
 		}
 
-		wqe = get_send_wqe(qp, ind & (qp->sq.wqe_cnt - 1));
-		qp->sq.wrid[(qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1)] =
-								      wr->wr_id;
-
+		wqe = get_send_wqe(qp, wqe_idx);
+		qp->sq.wrid[wqe_idx] = wr->wr_id;
 		owner_bit =
 		       ~(((qp->sq.head + nreq) >> ilog2(qp->sq.wqe_cnt)) & 0x1);
+		valid_num_sge = 0;
 		tmp_len = 0;
 
+		for (i = 0; i < wr->num_sge; i++) {
+			if (likely(wr->sg_list[i].length)) {
+				tmp_len += wr->sg_list[i].length;
+				valid_num_sge++;
+			}
+		}
+
 		/* Corresponding to the QP type, wqe process separately */
 		if (ibqp->qp_type == IB_QPT_GSI) {
 			ud_sq_wqe = wqe;
@@ -330,9 +342,6 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 				       V2_UD_SEND_WQE_BYTE_4_OPCODE_S,
 				       HNS_ROCE_V2_WQE_OP_SEND);
 
-			for (i = 0; i < wr->num_sge; i++)
-				tmp_len += wr->sg_list[i].length;
-
 			ud_sq_wqe->msg_len =
 			 cpu_to_le32(le32_to_cpu(ud_sq_wqe->msg_len) + tmp_len);
 
@@ -368,12 +377,12 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 			roce_set_field(ud_sq_wqe->byte_16,
 				       V2_UD_SEND_WQE_BYTE_16_SGE_NUM_M,
 				       V2_UD_SEND_WQE_BYTE_16_SGE_NUM_S,
-				       wr->num_sge);
+				       valid_num_sge);
 
 			roce_set_field(ud_sq_wqe->byte_20,
 				V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M,
 				V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
-				sge_ind & (qp->sge.sge_cnt - 1));
+				sge_idx & (qp->sge.sge_cnt - 1));
 
 			roce_set_field(ud_sq_wqe->byte_24,
 				       V2_UD_SEND_WQE_BYTE_24_UDPSPN_M,
@@ -423,13 +432,10 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 			memcpy(&ud_sq_wqe->dgid[0], &ah->av.dgid[0],
 			       GID_LEN_V2);
 
-			set_extend_sge(qp, wr, &sge_ind);
-			ind++;
+			set_extend_sge(qp, wr, &sge_idx, valid_num_sge);
 		} else if (ibqp->qp_type == IB_QPT_RC) {
 			rc_sq_wqe = wqe;
 			memset(rc_sq_wqe, 0, sizeof(*rc_sq_wqe));
-			for (i = 0; i < wr->num_sge; i++)
-				tmp_len += wr->sg_list[i].length;
 
 			rc_sq_wqe->msg_len =
 			 cpu_to_le32(le32_to_cpu(rc_sq_wqe->msg_len) + tmp_len);
@@ -550,15 +556,14 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp,
 				roce_set_field(rc_sq_wqe->byte_16,
 					       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
 					       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S,
-					       wr->num_sge);
+					       valid_num_sge);
 			} else if (wr->opcode != IB_WR_REG_MR) {
 				ret = set_rwqe_data_seg(ibqp, wr, rc_sq_wqe,
-							wqe, &sge_ind, bad_wr);
+							wqe, &sge_idx,
+							valid_num_sge, bad_wr);
 				if (ret)
 					goto out;
 			}
-
-			ind++;
 		} else {
 			dev_err(dev, "Illegal qp_type(0x%x)\n", ibqp->qp_type);
 			spin_unlock_irqrestore(&qp->sq.lock, flags);
@@ -588,8 +593,7 @@ out:
 
 		hns_roce_write64(hr_dev, (__le32 *)&sq_db, qp->sq.db_reg_l);
 
-		qp->sq_next_wqe = ind;
-		qp->next_sge = sge_ind;
+		qp->next_sge = sge_idx;
 
 		if (qp->state == IB_QPS_ERR) {
 			attr_mask = IB_QP_STATE;
@@ -623,13 +627,12 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp,
|
||||
unsigned long flags;
|
||||
void *wqe = NULL;
|
||||
int attr_mask;
|
||||
u32 wqe_idx;
|
||||
int ret = 0;
|
||||
int nreq;
|
||||
int ind;
|
||||
int i;
|
||||
|
||||
spin_lock_irqsave(&hr_qp->rq.lock, flags);
|
||||
ind = hr_qp->rq.head & (hr_qp->rq.wqe_cnt - 1);
|
||||
|
||||
if (hr_qp->state == IB_QPS_RESET) {
|
||||
spin_unlock_irqrestore(&hr_qp->rq.lock, flags);
|
||||
@@ -645,6 +648,8 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp,
|
||||
goto out;
|
||||
}
|
||||
|
||||
wqe_idx = (hr_qp->rq.head + nreq) & (hr_qp->rq.wqe_cnt - 1);
|
||||
|
||||
if (unlikely(wr->num_sge > hr_qp->rq.max_gs)) {
|
||||
dev_err(dev, "rq:num_sge=%d > qp->sq.max_gs=%d\n",
|
||||
wr->num_sge, hr_qp->rq.max_gs);
|
||||
@@ -653,7 +658,7 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp,
|
||||
goto out;
|
||||
}
|
||||
|
||||
wqe = get_recv_wqe(hr_qp, ind);
|
||||
wqe = get_recv_wqe(hr_qp, wqe_idx);
|
||||
dseg = (struct hns_roce_v2_wqe_data_seg *)wqe;
|
||||
for (i = 0; i < wr->num_sge; i++) {
|
||||
if (!wr->sg_list[i].length)
|
||||
@@ -669,8 +674,8 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp,
|
||||
|
||||
/* rq support inline data */
|
||||
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RQ_INLINE) {
|
||||
sge_list = hr_qp->rq_inl_buf.wqe_list[ind].sg_list;
|
||||
hr_qp->rq_inl_buf.wqe_list[ind].sge_cnt =
|
||||
sge_list = hr_qp->rq_inl_buf.wqe_list[wqe_idx].sg_list;
|
||||
hr_qp->rq_inl_buf.wqe_list[wqe_idx].sge_cnt =
|
||||
(u32)wr->num_sge;
|
||||
for (i = 0; i < wr->num_sge; i++) {
|
||||
sge_list[i].addr =
|
||||
@@ -679,9 +684,7 @@ static int hns_roce_v2_post_recv(struct ib_qp *ibqp,
|
||||
}
|
||||
}
|
||||
|
||||
hr_qp->rq.wrid[ind] = wr->wr_id;
|
||||
|
||||
ind = (ind + 1) & (hr_qp->rq.wqe_cnt - 1);
|
||||
hr_qp->rq.wrid[wqe_idx] = wr->wr_id;
|
||||
}
|
||||
|
||||
out:
|
||||
@@ -4465,7 +4468,6 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
|
||||
hr_qp->rq.tail = 0;
|
||||
hr_qp->sq.head = 0;
|
||||
hr_qp->sq.tail = 0;
|
||||
hr_qp->sq_next_wqe = 0;
|
||||
hr_qp->next_sge = 0;
|
||||
if (hr_qp->rq.wqe_cnt)
|
||||
*hr_qp->rdb.db_record = 0;
|
||||
|
||||
@@ -1225,10 +1225,9 @@ static void siw_cm_llp_data_ready(struct sock *sk)
read_lock(&sk->sk_callback_lock);

cep = sk_to_cep(sk);
if (!cep) {
WARN_ON(1);
if (!cep)
goto out;
}

siw_dbg_cep(cep, "state: %d\n", cep->state);

switch (cep->state) {
@@ -300,9 +300,11 @@ static int control_loop(void *dummy)
/* i2c probing and setup */
/************************************************************************/

static int
do_attach( struct i2c_adapter *adapter )
static void do_attach(struct i2c_adapter *adapter)
{
struct i2c_board_info info = { };
struct device_node *np;

/* scan 0x48-0x4f (DS1775) and 0x2c-2x2f (ADM1030) */
static const unsigned short scan_ds1775[] = {
0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
@@ -313,25 +315,24 @@ do_attach( struct i2c_adapter *adapter )
I2C_CLIENT_END
};

if( strncmp(adapter->name, "uni-n", 5) )
return 0;
if (x.running || strncmp(adapter->name, "uni-n", 5))
return;

if( !x.running ) {
struct i2c_board_info info;

memset(&info, 0, sizeof(struct i2c_board_info));
strlcpy(info.type, "therm_ds1775", I2C_NAME_SIZE);
np = of_find_compatible_node(adapter->dev.of_node, NULL, "MAC,ds1775");
if (np) {
of_node_put(np);
} else {
strlcpy(info.type, "MAC,ds1775", I2C_NAME_SIZE);
i2c_new_probed_device(adapter, &info, scan_ds1775, NULL);

strlcpy(info.type, "therm_adm1030", I2C_NAME_SIZE);
i2c_new_probed_device(adapter, &info, scan_adm1030, NULL);

if( x.thermostat && x.fan ) {
x.running = 1;
x.poll_task = kthread_run(control_loop, NULL, "g4fand");
}
}
return 0;

np = of_find_compatible_node(adapter->dev.of_node, NULL, "MAC,adm1030");
if (np) {
of_node_put(np);
} else {
strlcpy(info.type, "MAC,adm1030", I2C_NAME_SIZE);
i2c_new_probed_device(adapter, &info, scan_adm1030, NULL);
}
}

static int
@@ -404,8 +405,8 @@ out:
enum chip { ds1775, adm1030 };

static const struct i2c_device_id therm_windtunnel_id[] = {
{ "therm_ds1775", ds1775 },
{ "therm_adm1030", adm1030 },
{ "MAC,ds1775", ds1775 },
{ "MAC,adm1030", adm1030 },
{ }
};
MODULE_DEVICE_TABLE(i2c, therm_windtunnel_id);
@@ -414,6 +415,7 @@ static int
do_probe(struct i2c_client *cl, const struct i2c_device_id *id)
{
struct i2c_adapter *adapter = cl->adapter;
int ret = 0;

if( !i2c_check_functionality(adapter, I2C_FUNC_SMBUS_WORD_DATA
| I2C_FUNC_SMBUS_WRITE_BYTE) )
@@ -421,11 +423,19 @@ do_probe(struct i2c_client *cl, const struct i2c_device_id *id)

switch (id->driver_data) {
case adm1030:
return attach_fan( cl );
ret = attach_fan(cl);
break;
case ds1775:
return attach_thermostat(cl);
ret = attach_thermostat(cl);
break;
}
return 0;

if (!x.running && x.thermostat && x.fan) {
x.running = 1;
x.poll_task = kthread_run(control_loop, NULL, "g4fand");
}

return ret;
}

static struct i2c_driver g4fan_driver = {
@@ -3436,6 +3436,47 @@ static void bond_fold_stats(struct rtnl_link_stats64 *_res,
}
}

#ifdef CONFIG_LOCKDEP
static int bond_get_lowest_level_rcu(struct net_device *dev)
{
struct net_device *ldev, *next, *now, *dev_stack[MAX_NEST_DEV + 1];
struct list_head *niter, *iter, *iter_stack[MAX_NEST_DEV + 1];
int cur = 0, max = 0;

now = dev;
iter = &dev->adj_list.lower;

while (1) {
next = NULL;
while (1) {
ldev = netdev_next_lower_dev_rcu(now, &iter);
if (!ldev)
break;

next = ldev;
niter = &ldev->adj_list.lower;
dev_stack[cur] = now;
iter_stack[cur++] = iter;
if (max <= cur)
max = cur;
break;
}

if (!next) {
if (!cur)
return max;
next = dev_stack[--cur];
niter = iter_stack[cur];
}

now = next;
iter = niter;
}

return max;
}
#endif

static void bond_get_stats(struct net_device *bond_dev,
struct rtnl_link_stats64 *stats)
{
@@ -3443,11 +3484,17 @@ static void bond_get_stats(struct net_device *bond_dev,
struct rtnl_link_stats64 temp;
struct list_head *iter;
struct slave *slave;
int nest_level = 0;

spin_lock(&bond->stats_lock);
memcpy(stats, &bond->bond_stats, sizeof(*stats));

rcu_read_lock();
#ifdef CONFIG_LOCKDEP
nest_level = bond_get_lowest_level_rcu(bond_dev);
#endif

spin_lock_nested(&bond->stats_lock, nest_level);
memcpy(stats, &bond->bond_stats, sizeof(*stats));

bond_for_each_slave_rcu(bond, slave, iter) {
const struct rtnl_link_stats64 *new =
dev_get_stats(slave->dev, &temp);
@@ -3457,10 +3504,10 @@ static void bond_get_stats(struct net_device *bond_dev,
/* save off the slave stats for the next run */
memcpy(&slave->slave_stats, new, sizeof(*new));
}
rcu_read_unlock();

memcpy(&bond->bond_stats, stats, sizeof(*stats));
spin_unlock(&bond->stats_lock);
rcu_read_unlock();
}

static int bond_do_ioctl(struct net_device *bond_dev, struct ifreq *ifr, int cmd)
@@ -3550,6 +3597,8 @@ static int bond_do_ioctl(struct net_device *bond_dev, struct ifreq *ifr, int cmd
case BOND_RELEASE_OLD:
case SIOCBONDRELEASE:
res = bond_release(bond_dev, slave_dev);
if (!res)
netdev_update_lockdep_key(slave_dev);
break;
case BOND_SETHWADDR_OLD:
case SIOCBONDSETHWADDR:

@@ -1398,6 +1398,8 @@ static int bond_option_slaves_set(struct bonding *bond,
case '-':
slave_dbg(bond->dev, dev, "Releasing interface\n");
ret = bond_release(bond->dev, dev);
if (!ret)
netdev_update_lockdep_key(dev);
break;

default:
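
The bonding fix above computes the bond device's nesting depth at runtime and feeds it to spin_lock_nested() so lockdep can distinguish the same lock class taken at different stacking levels. A minimal standalone sketch of that pattern follows; the types and names are illustrative, not taken from the patch:

#include <linux/spinlock.h>

/* Hypothetical stacked device: an upper device's lock may be taken
 * while a lower device of the same lock class is already held.
 */
struct demo_dev {
	spinlock_t stats_lock;
	int nest_level;		/* depth in the stack, 0 for the lowest */
};

static void demo_get_stats(struct demo_dev *dev)
{
	/* Annotate the acquisition with a per-level subclass so lockdep
	 * does not report taking the same class twice as a deadlock.
	 */
	spin_lock_nested(&dev->stats_lock, dev->nest_level);
	/* ... fold per-slave counters into dev's totals ... */
	spin_unlock(&dev->stats_lock);
}
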
@@ -1353,6 +1353,9 @@ void b53_vlan_add(struct dsa_switch *ds, int port,

b53_get_vlan_entry(dev, vid, vl);

if (vid == 0 && vid == b53_default_pvid(dev))
untagged = true;

vl->members |= BIT(port);
if (untagged && !dsa_is_cpu_port(ds, port))
vl->untag |= BIT(port);
@@ -200,6 +200,11 @@ static void comp_ctxt_release(struct ena_com_admin_queue *queue,
static struct ena_comp_ctx *get_comp_ctxt(struct ena_com_admin_queue *queue,
u16 command_id, bool capture)
{
if (unlikely(!queue->comp_ctx)) {
pr_err("Completion context is NULL\n");
return NULL;
}

if (unlikely(command_id >= queue->q_depth)) {
pr_err("command id is larger than the queue size. cmd_id: %u queue size %d\n",
command_id, queue->q_depth);
@@ -1041,9 +1046,41 @@ static int ena_com_get_feature(struct ena_com_dev *ena_dev,
feature_ver);
}

static void ena_com_hash_key_fill_default_key(struct ena_com_dev *ena_dev)
{
struct ena_admin_feature_rss_flow_hash_control *hash_key =
(ena_dev->rss).hash_key;

netdev_rss_key_fill(&hash_key->key, sizeof(hash_key->key));
/* The key is stored in the device in u32 array
 * as well as the API requires the key to be passed in this
 * format. Thus the size of our array should be divided by 4
 */
hash_key->keys_num = sizeof(hash_key->key) / sizeof(u32);
}

int ena_com_get_current_hash_function(struct ena_com_dev *ena_dev)
{
return ena_dev->rss.hash_func;
}

static int ena_com_hash_key_allocate(struct ena_com_dev *ena_dev)
{
struct ena_rss *rss = &ena_dev->rss;
struct ena_admin_feature_rss_flow_hash_control *hash_key;
struct ena_admin_get_feat_resp get_resp;
int rc;

hash_key = (ena_dev->rss).hash_key;

rc = ena_com_get_feature_ex(ena_dev, &get_resp,
ENA_ADMIN_RSS_HASH_FUNCTION,
ena_dev->rss.hash_key_dma_addr,
sizeof(ena_dev->rss.hash_key), 0);
if (unlikely(rc)) {
hash_key = NULL;
return -EOPNOTSUPP;
}

rss->hash_key =
dma_alloc_coherent(ena_dev->dmadev, sizeof(*rss->hash_key),
@@ -1254,30 +1291,6 @@ static int ena_com_ind_tbl_convert_to_device(struct ena_com_dev *ena_dev)
return 0;
}

static int ena_com_ind_tbl_convert_from_device(struct ena_com_dev *ena_dev)
{
u16 dev_idx_to_host_tbl[ENA_TOTAL_NUM_QUEUES] = { (u16)-1 };
struct ena_rss *rss = &ena_dev->rss;
u8 idx;
u16 i;

for (i = 0; i < ENA_TOTAL_NUM_QUEUES; i++)
dev_idx_to_host_tbl[ena_dev->io_sq_queues[i].idx] = i;

for (i = 0; i < 1 << rss->tbl_log_size; i++) {
if (rss->rss_ind_tbl[i].cq_idx > ENA_TOTAL_NUM_QUEUES)
return -EINVAL;
idx = (u8)rss->rss_ind_tbl[i].cq_idx;

if (dev_idx_to_host_tbl[idx] > ENA_TOTAL_NUM_QUEUES)
return -EINVAL;

rss->host_rss_ind_tbl[i] = dev_idx_to_host_tbl[idx];
}

return 0;
}

static void ena_com_update_intr_delay_resolution(struct ena_com_dev *ena_dev,
u16 intr_delay_resolution)
{
@@ -2297,15 +2310,16 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,

switch (func) {
case ENA_ADMIN_TOEPLITZ:
if (key_len > sizeof(hash_key->key)) {
pr_err("key len (%hu) is bigger than the max supported (%zu)\n",
key_len, sizeof(hash_key->key));
return -EINVAL;
if (key) {
if (key_len != sizeof(hash_key->key)) {
pr_err("key len (%hu) doesn't equal the supported size (%zu)\n",
key_len, sizeof(hash_key->key));
return -EINVAL;
}
memcpy(hash_key->key, key, key_len);
rss->hash_init_val = init_val;
hash_key->keys_num = key_len >> 2;
}

memcpy(hash_key->key, key, key_len);
rss->hash_init_val = init_val;
hash_key->keys_num = key_len >> 2;
break;
case ENA_ADMIN_CRC32:
rss->hash_init_val = init_val;
@@ -2342,7 +2356,11 @@ int ena_com_get_hash_function(struct ena_com_dev *ena_dev,
if (unlikely(rc))
return rc;

rss->hash_func = get_resp.u.flow_hash_func.selected_func;
/* ffs() returns 1 in case the lsb is set */
rss->hash_func = ffs(get_resp.u.flow_hash_func.selected_func);
if (rss->hash_func)
rss->hash_func--;

if (func)
*func = rss->hash_func;

@@ -2606,10 +2624,6 @@ int ena_com_indirect_table_get(struct ena_com_dev *ena_dev, u32 *ind_tbl)
if (!ind_tbl)
return 0;

rc = ena_com_ind_tbl_convert_from_device(ena_dev);
if (unlikely(rc))
return rc;

for (i = 0; i < (1 << rss->tbl_log_size); i++)
ind_tbl[i] = rss->host_rss_ind_tbl[i];

@@ -2626,9 +2640,15 @@ int ena_com_rss_init(struct ena_com_dev *ena_dev, u16 indr_tbl_log_size)
if (unlikely(rc))
goto err_indr_tbl;

/* The following function might return unsupported in case the
 * device doesn't support setting the key / hash function. We can safely
 * ignore this error and have indirection table support only.
 */
rc = ena_com_hash_key_allocate(ena_dev);
if (unlikely(rc))
if (unlikely(rc) && rc != -EOPNOTSUPP)
goto err_hash_key;
else if (rc != -EOPNOTSUPP)
ena_com_hash_key_fill_default_key(ena_dev);

rc = ena_com_hash_ctrl_init(ena_dev);
if (unlikely(rc))
@@ -44,6 +44,7 @@
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/wait.h>
#include <linux/netdevice.h>

#include "ena_common_defs.h"
#include "ena_admin_defs.h"
@@ -655,6 +656,14 @@ int ena_com_rss_init(struct ena_com_dev *ena_dev, u16 log_size);
 */
void ena_com_rss_destroy(struct ena_com_dev *ena_dev);

/* ena_com_get_current_hash_function - Get RSS hash function
 * @ena_dev: ENA communication layer struct
 *
 * Return the current hash function.
 * @return: 0 or one of the ena_admin_hash_functions values.
 */
int ena_com_get_current_hash_function(struct ena_com_dev *ena_dev);

/* ena_com_fill_hash_function - Fill RSS hash function
 * @ena_dev: ENA communication layer struct
 * @func: The hash function (Toeplitz or crc)
@@ -636,6 +636,28 @@ static u32 ena_get_rxfh_key_size(struct net_device *netdev)
return ENA_HASH_KEY_SIZE;
}

static int ena_indirection_table_get(struct ena_adapter *adapter, u32 *indir)
{
struct ena_com_dev *ena_dev = adapter->ena_dev;
int i, rc;

if (!indir)
return 0;

rc = ena_com_indirect_table_get(ena_dev, indir);
if (rc)
return rc;

/* Our internal representation of the indices is: even indices
 * for Tx and uneven indices for Rx. We need to convert the Rx
 * indices to be consecutive
 */
for (i = 0; i < ENA_RX_RSS_TABLE_SIZE; i++)
indir[i] = ENA_IO_RXQ_IDX_TO_COMBINED_IDX(indir[i]);

return rc;
}

static int ena_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
u8 *hfunc)
{
@@ -644,11 +666,25 @@ static int ena_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
u8 func;
int rc;

rc = ena_com_indirect_table_get(adapter->ena_dev, indir);
rc = ena_indirection_table_get(adapter, indir);
if (rc)
return rc;

/* We call this function in order to check if the device
 * supports getting/setting the hash function.
 */
rc = ena_com_get_hash_function(adapter->ena_dev, &ena_func, key);

if (rc) {
if (rc == -EOPNOTSUPP) {
key = NULL;
hfunc = NULL;
rc = 0;
}

return rc;
}

if (rc)
return rc;

@@ -657,7 +693,7 @@ static int ena_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
func = ETH_RSS_HASH_TOP;
break;
case ENA_ADMIN_CRC32:
func = ETH_RSS_HASH_XOR;
func = ETH_RSS_HASH_CRC32;
break;
default:
netif_err(adapter, drv, netdev,
@@ -700,10 +736,13 @@ static int ena_set_rxfh(struct net_device *netdev, const u32 *indir,
}

switch (hfunc) {
case ETH_RSS_HASH_NO_CHANGE:
func = ena_com_get_current_hash_function(ena_dev);
break;
case ETH_RSS_HASH_TOP:
func = ENA_ADMIN_TOEPLITZ;
break;
case ETH_RSS_HASH_XOR:
case ETH_RSS_HASH_CRC32:
func = ENA_ADMIN_CRC32;
break;
default:
@@ -805,6 +844,7 @@ static const struct ethtool_ops ena_ethtool_ops = {
.get_channels = ena_get_channels,
.get_tunable = ena_get_tunable,
.set_tunable = ena_set_tunable,
.get_ts_info = ethtool_op_get_ts_info,
};

void ena_set_ethtool_ops(struct net_device *netdev)
@@ -3035,8 +3035,8 @@ static void check_for_missing_keep_alive(struct ena_adapter *adapter)
if (adapter->keep_alive_timeout == ENA_HW_HINTS_NO_TIMEOUT)
return;

keep_alive_expired = round_jiffies(adapter->last_keep_alive_jiffies +
adapter->keep_alive_timeout);
keep_alive_expired = adapter->last_keep_alive_jiffies +
adapter->keep_alive_timeout;
if (unlikely(time_is_before_jiffies(keep_alive_expired))) {
netif_err(adapter, drv, adapter->netdev,
"Keep alive watchdog timeout.\n");
@@ -3138,7 +3138,7 @@ static void ena_timer_service(struct timer_list *t)
}

/* Reset the timer */
mod_timer(&adapter->timer_service, jiffies + HZ);
mod_timer(&adapter->timer_service, round_jiffies(jiffies + HZ));
}

static int ena_calc_io_queue_num(struct pci_dev *pdev,
@@ -127,6 +127,8 @@

#define ENA_IO_TXQ_IDX(q) (2 * (q))
#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1)
#define ENA_IO_TXQ_IDX_TO_COMBINED_IDX(q) ((q) / 2)
#define ENA_IO_RXQ_IDX_TO_COMBINED_IDX(q) (((q) - 1) / 2)

#define ENA_MGMNT_IRQ_IDX 0
#define ENA_IO_IRQ_FIRST_IDX 1
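
The macros above interleave Tx queues on even indices and Rx queues on odd ones; the *_TO_COMBINED_IDX macros invert that mapping so ethtool sees consecutive queue numbers. A small userspace sketch of the round trip, with the macro names shortened for illustration:

#include <assert.h>

#define IO_TXQ_IDX(q)                  (2 * (q))
#define IO_RXQ_IDX(q)                  (2 * (q) + 1)
#define IO_TXQ_IDX_TO_COMBINED_IDX(q)  ((q) / 2)
#define IO_RXQ_IDX_TO_COMBINED_IDX(q)  (((q) - 1) / 2)

int main(void)
{
	for (int q = 0; q < 8; q++) {
		/* Tx queue q lands on even slot 2q, Rx on odd slot 2q+1,
		 * and the inverse macros recover q in both cases. */
		assert(IO_TXQ_IDX_TO_COMBINED_IDX(IO_TXQ_IDX(q)) == q);
		assert(IO_RXQ_IDX_TO_COMBINED_IDX(IO_RXQ_IDX(q)) == q);
	}
	return 0;
}
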
@@ -2020,7 +2020,7 @@ static int xgene_enet_probe(struct platform_device *pdev)
int ret;

ndev = alloc_etherdev_mqs(sizeof(struct xgene_enet_pdata),
XGENE_NUM_RX_RING, XGENE_NUM_TX_RING);
XGENE_NUM_TX_RING, XGENE_NUM_RX_RING);
if (!ndev)
return -ENOMEM;
@@ -158,7 +158,7 @@ aq_check_approve_fvlan(struct aq_nic_s *aq_nic,
}

if ((aq_nic->ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER) &&
(!test_bit(be16_to_cpu(fsp->h_ext.vlan_tci),
(!test_bit(be16_to_cpu(fsp->h_ext.vlan_tci) & VLAN_VID_MASK,
aq_nic->active_vlans))) {
netdev_err(aq_nic->ndev,
"ethtool: unknown vlan-id specified");

@@ -467,8 +467,10 @@ static unsigned int aq_nic_map_skb(struct aq_nic_s *self,
dx_buff->len,
DMA_TO_DEVICE);

if (unlikely(dma_mapping_error(aq_nic_get_dev(self), dx_buff->pa)))
if (unlikely(dma_mapping_error(aq_nic_get_dev(self), dx_buff->pa))) {
ret = 0;
goto exit;
}

first = dx_buff;
dx_buff->len_pkt = skb->len;
@@ -598,10 +600,6 @@ int aq_nic_xmit(struct aq_nic_s *self, struct sk_buff *skb)
if (likely(frags)) {
err = self->aq_hw_ops->hw_ring_tx_xmit(self->aq_hw,
ring, frags);
if (err >= 0) {
++ring->stats.tx.packets;
ring->stats.tx.bytes += skb->len;
}
} else {
err = NETDEV_TX_BUSY;
}

@@ -243,9 +243,12 @@ bool aq_ring_tx_clean(struct aq_ring_s *self)
}
}

if (unlikely(buff->is_eop))
dev_kfree_skb_any(buff->skb);
if (unlikely(buff->is_eop)) {
++self->stats.rx.packets;
self->stats.tx.bytes += buff->skb->len;

dev_kfree_skb_any(buff->skb);
}
buff->pa = 0U;
buff->eop_index = 0xffffU;
self->sw_head = aq_ring_next_dx(self, self->sw_head);
@@ -11712,6 +11712,14 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (version_printed++ == 0)
pr_info("%s", version);

/* Clear any pending DMA transactions from crash kernel
 * while loading driver in capture kernel.
 */
if (is_kdump_kernel()) {
pci_clear_master(pdev);
pcie_flr(pdev);
}

max_irqs = bnxt_get_max_irq(pdev);
dev = alloc_etherdev_mq(sizeof(*bp), max_irqs);
if (!dev)
@@ -11908,10 +11916,10 @@ static void bnxt_shutdown(struct pci_dev *pdev)
dev_close(dev);

bnxt_ulp_shutdown(bp);
bnxt_clear_int_mode(bp);
pci_disable_device(pdev);

if (system_state == SYSTEM_POWER_OFF) {
bnxt_clear_int_mode(bp);
pci_disable_device(pdev);
pci_wake_from_d3(pdev, bp->wol);
pci_set_power_state(pdev, PCI_D3hot);
}
@@ -3690,6 +3690,10 @@ static int at91ether_open(struct net_device *dev)
u32 ctl;
int ret;

ret = pm_runtime_get_sync(&lp->pdev->dev);
if (ret < 0)
return ret;

/* Clear internal statistics */
ctl = macb_readl(lp, NCR);
macb_writel(lp, NCR, ctl | MACB_BIT(CLRSTAT));
@@ -3750,7 +3754,7 @@ static int at91ether_close(struct net_device *dev)
q->rx_buffers, q->rx_buffers_dma);
q->rx_buffers = NULL;

return 0;
return pm_runtime_put(&lp->pdev->dev);
}

/* Transmit packet */
@@ -6003,6 +6003,9 @@ static int hclge_get_all_rules(struct hnae3_handle *handle,
static void hclge_fd_get_flow_tuples(const struct flow_keys *fkeys,
struct hclge_fd_rule_tuples *tuples)
{
#define flow_ip6_src fkeys->addrs.v6addrs.src.in6_u.u6_addr32
#define flow_ip6_dst fkeys->addrs.v6addrs.dst.in6_u.u6_addr32

tuples->ether_proto = be16_to_cpu(fkeys->basic.n_proto);
tuples->ip_proto = fkeys->basic.ip_proto;
tuples->dst_port = be16_to_cpu(fkeys->ports.dst);
@@ -6011,12 +6014,12 @@ static void hclge_fd_get_flow_tuples(const struct flow_keys *fkeys,
tuples->src_ip[3] = be32_to_cpu(fkeys->addrs.v4addrs.src);
tuples->dst_ip[3] = be32_to_cpu(fkeys->addrs.v4addrs.dst);
} else {
memcpy(tuples->src_ip,
fkeys->addrs.v6addrs.src.in6_u.u6_addr32,
sizeof(tuples->src_ip));
memcpy(tuples->dst_ip,
fkeys->addrs.v6addrs.dst.in6_u.u6_addr32,
sizeof(tuples->dst_ip));
int i;

for (i = 0; i < IPV6_SIZE; i++) {
tuples->src_ip[i] = be32_to_cpu(flow_ip6_src[i]);
tuples->dst_ip[i] = be32_to_cpu(flow_ip6_dst[i]);
}
}
}

@@ -9437,6 +9440,13 @@ static int hclge_reset_ae_dev(struct hnae3_ae_dev *ae_dev)
return ret;
}

ret = init_mgr_tbl(hdev);
if (ret) {
dev_err(&pdev->dev,
"failed to reinit manager table, ret = %d\n", ret);
return ret;
}

ret = hclge_init_fd_config(hdev);
if (ret) {
dev_err(&pdev->dev, "fd table init fail, ret=%d\n", ret);
@@ -2363,7 +2363,7 @@ static int i40e_vc_enable_queues_msg(struct i40e_vf *vf, u8 *msg)
goto error_param;
}

if (i40e_vc_validate_vqs_bitmaps(vqs)) {
if (!i40e_vc_validate_vqs_bitmaps(vqs)) {
aq_ret = I40E_ERR_PARAM;
goto error_param;
}
@@ -2425,7 +2425,7 @@ static int i40e_vc_disable_queues_msg(struct i40e_vf *vf, u8 *msg)
goto error_param;
}

if (i40e_vc_validate_vqs_bitmaps(vqs)) {
if (!i40e_vc_validate_vqs_bitmaps(vqs)) {
aq_ret = I40E_ERR_PARAM;
goto error_param;
}
@@ -934,7 +934,7 @@ void ice_deinit_hw(struct ice_hw *hw)
 */
enum ice_status ice_check_reset(struct ice_hw *hw)
{
u32 cnt, reg = 0, grst_delay;
u32 cnt, reg = 0, grst_delay, uld_mask;

/* Poll for Device Active state in case a recent CORER, GLOBR,
 * or EMPR has occurred. The grst delay value is in 100ms units.
@@ -956,13 +956,20 @@ enum ice_status ice_check_reset(struct ice_hw *hw)
return ICE_ERR_RESET_FAILED;
}

#define ICE_RESET_DONE_MASK (GLNVM_ULD_CORER_DONE_M | \
GLNVM_ULD_GLOBR_DONE_M)
#define ICE_RESET_DONE_MASK (GLNVM_ULD_PCIER_DONE_M |\
GLNVM_ULD_PCIER_DONE_1_M |\
GLNVM_ULD_CORER_DONE_M |\
GLNVM_ULD_GLOBR_DONE_M |\
GLNVM_ULD_POR_DONE_M |\
GLNVM_ULD_POR_DONE_1_M |\
GLNVM_ULD_PCIER_DONE_2_M)

uld_mask = ICE_RESET_DONE_MASK;

/* Device is Active; check Global Reset processes are done */
for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
reg = rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK;
if (reg == ICE_RESET_DONE_MASK) {
reg = rd32(hw, GLNVM_ULD) & uld_mask;
if (reg == uld_mask) {
ice_debug(hw, ICE_DBG_INIT,
"Global reset processes done. %d\n", cnt);
break;

@@ -273,8 +273,14 @@
#define GLNVM_GENS_SR_SIZE_S 5
#define GLNVM_GENS_SR_SIZE_M ICE_M(0x7, 5)
#define GLNVM_ULD 0x000B6008
#define GLNVM_ULD_PCIER_DONE_M BIT(0)
#define GLNVM_ULD_PCIER_DONE_1_M BIT(1)
#define GLNVM_ULD_CORER_DONE_M BIT(3)
#define GLNVM_ULD_GLOBR_DONE_M BIT(4)
#define GLNVM_ULD_POR_DONE_M BIT(5)
#define GLNVM_ULD_POR_DONE_1_M BIT(8)
#define GLNVM_ULD_PCIER_DONE_2_M BIT(9)
#define GLNVM_ULD_PE_DONE_M BIT(10)
#define GLPCI_CNF2 0x000BE004
#define GLPCI_CNF2_CACHELINE_SIZE_M BIT(1)
#define PF_FUNC_RID 0x0009E880
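
The ice change widens the set of "unit load done" bits that must all be set before a reset counts as complete. A generic sketch of the poll-until-mask pattern it relies on; the register accessor and bit choices are illustrative, not the driver's:

#include <stdint.h>
#include <stdbool.h>

#define DONE_MASK ((1u << 0) | (1u << 3) | (1u << 4))	/* example bits */

uint32_t read_status_reg(void);	/* assumed hardware accessor */

static bool wait_reset_done(unsigned int max_polls)
{
	for (unsigned int i = 0; i < max_polls; i++) {
		/* Mask first, then compare against the full mask: every
		 * bit in DONE_MASK must be set; bits outside it are
		 * ignored entirely. */
		if ((read_status_reg() & DONE_MASK) == DONE_MASK)
			return true;
	}
	return false;
}
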
@@ -112,6 +112,14 @@ static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
if (err != 4)
break;

/* At this point the IFH was read correctly, so it is safe to
 * presume that there is no error. The err needs to be reset
 * otherwise a frame could come in CPU queue between the while
 * condition and the check for error later on. And in that case
 * the new frame is just removed and not processed.
 */
err = 0;

ocelot_parse_ifh(ifh, &info);

dev = ocelot->ports[info.port]->dev;
@@ -163,6 +163,8 @@ struct qede_rdma_dev {
struct list_head entry;
struct list_head rdma_event_list;
struct workqueue_struct *rdma_wq;
struct kref refcnt;
struct completion event_comp;
bool exp_recovery;
};

@@ -59,6 +59,9 @@ static void _qede_rdma_dev_add(struct qede_dev *edev)
static int qede_rdma_create_wq(struct qede_dev *edev)
{
INIT_LIST_HEAD(&edev->rdma_info.rdma_event_list);
kref_init(&edev->rdma_info.refcnt);
init_completion(&edev->rdma_info.event_comp);

edev->rdma_info.rdma_wq = create_singlethread_workqueue("rdma_wq");
if (!edev->rdma_info.rdma_wq) {
DP_NOTICE(edev, "qedr: Could not create workqueue\n");
@@ -83,8 +86,23 @@ static void qede_rdma_cleanup_event(struct qede_dev *edev)
}
}

static void qede_rdma_complete_event(struct kref *ref)
{
struct qede_rdma_dev *rdma_dev =
container_of(ref, struct qede_rdma_dev, refcnt);

/* no more events will be added after this */
complete(&rdma_dev->event_comp);
}

static void qede_rdma_destroy_wq(struct qede_dev *edev)
{
/* Avoid race with add_event flow, make sure it finishes before
 * we start accessing the list and cleaning up the work
 */
kref_put(&edev->rdma_info.refcnt, qede_rdma_complete_event);
wait_for_completion(&edev->rdma_info.event_comp);

qede_rdma_cleanup_event(edev);
destroy_workqueue(edev->rdma_info.rdma_wq);
}
@@ -310,15 +328,24 @@ static void qede_rdma_add_event(struct qede_dev *edev,
if (!edev->rdma_info.qedr_dev)
return;

/* We don't want the cleanup flow to start while we're allocating and
 * scheduling the work
 */
if (!kref_get_unless_zero(&edev->rdma_info.refcnt))
return; /* already being destroyed */

event_node = qede_rdma_get_free_event_node(edev);
if (!event_node)
return;
goto out;

event_node->event = event;
event_node->ptr = edev;

INIT_WORK(&event_node->work, qede_rdma_handle_event);
queue_work(edev->rdma_info.rdma_wq, &event_node->work);

out:
kref_put(&edev->rdma_info.refcnt, qede_rdma_complete_event);
}

void qede_rdma_dev_event_open(struct qede_dev *edev)
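
The qede change above closes the shutdown race by reference-counting event producers: add_event only proceeds if it can still take a reference, and destroy_wq drops the initial reference and waits until the last in-flight producer is gone. A condensed sketch of that kref-plus-completion handshake, with simplified illustrative types:

#include <linux/kref.h>
#include <linux/completion.h>

struct demo_ctx {
	struct kref refcnt;		/* initial ref owned by teardown */
	struct completion done;
};

static void demo_last_ref(struct kref *ref)
{
	struct demo_ctx *ctx = container_of(ref, struct demo_ctx, refcnt);

	complete(&ctx->done);		/* no producer is in flight now */
}

static void demo_produce(struct demo_ctx *ctx)
{
	if (!kref_get_unless_zero(&ctx->refcnt))
		return;			/* teardown already started */
	/* ... allocate and queue the work ... */
	kref_put(&ctx->refcnt, demo_last_ref);
}

static void demo_teardown(struct demo_ctx *ctx)
{
	kref_put(&ctx->refcnt, demo_last_ref);	/* drop the initial ref */
	wait_for_completion(&ctx->done);	/* wait out all producers */
	/* ... now safe to drain the list and destroy the workqueue ... */
}
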
@@ -99,7 +99,7 @@ static struct netvsc_device *alloc_net_device(void)

init_waitqueue_head(&net_device->wait_drain);
net_device->destroy = false;
net_device->tx_disable = false;
net_device->tx_disable = true;

net_device->max_pkt = RNDIS_MAX_PKT_DEFAULT;
net_device->pkt_align = RNDIS_PKT_ALIGN_DEFAULT;

@@ -973,6 +973,7 @@ static int netvsc_attach(struct net_device *ndev,
}

/* In any case device is now ready */
nvdev->tx_disable = false;
netif_device_attach(ndev);

/* Note: enable and attach happen when sub-channels setup */
@@ -2350,6 +2351,8 @@ static int netvsc_probe(struct hv_device *dev,
else
net->max_mtu = ETH_DATA_LEN;

nvdev->tx_disable = false;

ret = register_netdevice(net);
if (ret != 0) {
pr_err("Unable to register netdev.\n");
@@ -178,6 +178,23 @@ static int iproc_mdio_remove(struct platform_device *pdev)
return 0;
}

#ifdef CONFIG_PM_SLEEP
int iproc_mdio_resume(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct iproc_mdio_priv *priv = platform_get_drvdata(pdev);

/* restore the mii clock configuration */
iproc_mdio_config_clk(priv->base);

return 0;
}

static const struct dev_pm_ops iproc_mdio_pm_ops = {
.resume = iproc_mdio_resume
};
#endif /* CONFIG_PM_SLEEP */

static const struct of_device_id iproc_mdio_of_match[] = {
{ .compatible = "brcm,iproc-mdio", },
{ /* sentinel */ },
@@ -188,6 +205,9 @@ static struct platform_driver iproc_mdio_driver = {
.driver = {
.name = "iproc-mdio",
.of_match_table = iproc_mdio_of_match,
#ifdef CONFIG_PM_SLEEP
.pm = &iproc_mdio_pm_ops,
#endif
},
.probe = iproc_mdio_probe,
.remove = iproc_mdio_remove,
@@ -61,7 +61,6 @@ enum qmi_wwan_flags {

enum qmi_wwan_quirks {
QMI_WWAN_QUIRK_DTR = 1 << 0, /* needs "set DTR" request */
QMI_WWAN_QUIRK_QUECTEL_DYNCFG = 1 << 1, /* check num. endpoints */
};

struct qmimux_hdr {
@@ -916,16 +915,6 @@ static const struct driver_info qmi_wwan_info_quirk_dtr = {
.data = QMI_WWAN_QUIRK_DTR,
};

static const struct driver_info qmi_wwan_info_quirk_quectel_dyncfg = {
.description = "WWAN/QMI device",
.flags = FLAG_WWAN | FLAG_SEND_ZLP,
.bind = qmi_wwan_bind,
.unbind = qmi_wwan_unbind,
.manage_power = qmi_wwan_manage_power,
.rx_fixup = qmi_wwan_rx_fixup,
.data = QMI_WWAN_QUIRK_DTR | QMI_WWAN_QUIRK_QUECTEL_DYNCFG,
};

#define HUAWEI_VENDOR_ID 0x12D1

/* map QMI/wwan function by a fixed interface number */
@@ -946,14 +935,18 @@ static const struct driver_info qmi_wwan_info_quirk_quectel_dyncfg = {
#define QMI_GOBI_DEVICE(vend, prod) \
QMI_FIXED_INTF(vend, prod, 0)

/* Quectel does not use fixed interface numbers on at least some of their
 * devices. We need to check the number of endpoints to ensure that we bind to
 * the correct interface.
/* Many devices have QMI and DIAG functions which are distinguishable
 * from other vendor specific functions by class, subclass and
 * protocol all being 0xff. The DIAG function has exactly 2 endpoints
 * and is silently rejected when probed.
 *
 * This makes it possible to match dynamically numbered QMI functions
 * as seen on e.g. many Quectel modems.
 */
#define QMI_QUIRK_QUECTEL_DYNCFG(vend, prod) \
#define QMI_MATCH_FF_FF_FF(vend, prod) \
USB_DEVICE_AND_INTERFACE_INFO(vend, prod, USB_CLASS_VENDOR_SPEC, \
USB_SUBCLASS_VENDOR_SPEC, 0xff), \
.driver_info = (unsigned long)&qmi_wwan_info_quirk_quectel_dyncfg
.driver_info = (unsigned long)&qmi_wwan_info_quirk_dtr

static const struct usb_device_id products[] = {
/* 1. CDC ECM like devices match on the control interface */
@@ -1059,10 +1052,10 @@ static const struct usb_device_id products[] = {
USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x581d, USB_CLASS_VENDOR_SPEC, 1, 7),
.driver_info = (unsigned long)&qmi_wwan_info,
},
{QMI_QUIRK_QUECTEL_DYNCFG(0x2c7c, 0x0125)},	/* Quectel EC25, EC20 R2.0 Mini PCIe */
{QMI_QUIRK_QUECTEL_DYNCFG(0x2c7c, 0x0306)},	/* Quectel EP06/EG06/EM06 */
{QMI_QUIRK_QUECTEL_DYNCFG(0x2c7c, 0x0512)},	/* Quectel EG12/EM12 */
{QMI_QUIRK_QUECTEL_DYNCFG(0x2c7c, 0x0800)},	/* Quectel RM500Q-GL */
{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0125)},	/* Quectel EC25, EC20 R2.0 Mini PCIe */
{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0306)},	/* Quectel EP06/EG06/EM06 */
{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0512)},	/* Quectel EG12/EM12 */
{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0800)},	/* Quectel RM500Q-GL */

/* 3. Combined interface devices matching on interface number */
{QMI_FIXED_INTF(0x0408, 0xea42, 4)},	/* Yota / Megafon M100-1 */
@@ -1363,6 +1356,7 @@ static const struct usb_device_id products[] = {
{QMI_FIXED_INTF(0x413c, 0x81b6, 8)},	/* Dell Wireless 5811e */
{QMI_FIXED_INTF(0x413c, 0x81b6, 10)},	/* Dell Wireless 5811e */
{QMI_FIXED_INTF(0x413c, 0x81d7, 0)},	/* Dell Wireless 5821e */
{QMI_FIXED_INTF(0x413c, 0x81d7, 1)},	/* Dell Wireless 5821e preproduction config */
{QMI_FIXED_INTF(0x413c, 0x81e0, 0)},	/* Dell Wireless 5821e with eSIM support*/
{QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)},	/* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */
{QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)},	/* HP lt4120 Snapdragon X5 LTE */
@@ -1454,7 +1448,6 @@ static int qmi_wwan_probe(struct usb_interface *intf,
{
struct usb_device_id *id = (struct usb_device_id *)prod;
struct usb_interface_descriptor *desc = &intf->cur_altsetting->desc;
const struct driver_info *info;

/* Workaround to enable dynamic IDs. This disables usbnet
 * blacklisting functionality. Which, if required, can be
@@ -1490,12 +1483,8 @@ static int qmi_wwan_probe(struct usb_interface *intf,
 * different. Ignore the current interface if the number of endpoints
 * equals the number for the diag interface (two).
 */
info = (void *)id->driver_info;

if (info->data & QMI_WWAN_QUIRK_QUECTEL_DYNCFG) {
if (desc->bNumEndpoints == 2)
return -ENODEV;
}
if (desc->bNumEndpoints == 2)
return -ENODEV;

return usbnet_probe(intf, id);
}
@@ -1295,19 +1295,6 @@ mwifiex_copy_rates(u8 *dest, u32 pos, u8 *src, int len)
return pos;
}

/* This function return interface number with the same bss_type.
 */
static inline u8
mwifiex_get_intf_num(struct mwifiex_adapter *adapter, u8 bss_type)
{
u8 i, num = 0;

for (i = 0; i < adapter->priv_num; i++)
if (adapter->priv[i] && adapter->priv[i]->bss_type == bss_type)
num++;
return num;
}

/*
 * This function returns the correct private structure pointer based
 * upon the BSS type and BSS number.

@@ -894,7 +894,7 @@ void mwifiex_process_tdls_action_frame(struct mwifiex_private *priv,
u8 *peer, *pos, *end;
u8 i, action, basic;
u16 cap = 0;
int ie_len = 0;
int ies_len = 0;

if (len < (sizeof(struct ethhdr) + 3))
return;
@@ -916,7 +916,7 @@ void mwifiex_process_tdls_action_frame(struct mwifiex_private *priv,
pos = buf + sizeof(struct ethhdr) + 4;
/* payload 1+ category 1 + action 1 + dialog 1 */
cap = get_unaligned_le16(pos);
ie_len = len - sizeof(struct ethhdr) - TDLS_REQ_FIX_LEN;
ies_len = len - sizeof(struct ethhdr) - TDLS_REQ_FIX_LEN;
pos += 2;
break;

@@ -926,7 +926,7 @@ void mwifiex_process_tdls_action_frame(struct mwifiex_private *priv,
/* payload 1+ category 1 + action 1 + dialog 1 + status code 2*/
pos = buf + sizeof(struct ethhdr) + 6;
cap = get_unaligned_le16(pos);
ie_len = len - sizeof(struct ethhdr) - TDLS_RESP_FIX_LEN;
ies_len = len - sizeof(struct ethhdr) - TDLS_RESP_FIX_LEN;
pos += 2;
break;

@@ -934,7 +934,7 @@ void mwifiex_process_tdls_action_frame(struct mwifiex_private *priv,
if (len < (sizeof(struct ethhdr) + TDLS_CONFIRM_FIX_LEN))
return;
pos = buf + sizeof(struct ethhdr) + TDLS_CONFIRM_FIX_LEN;
ie_len = len - sizeof(struct ethhdr) - TDLS_CONFIRM_FIX_LEN;
ies_len = len - sizeof(struct ethhdr) - TDLS_CONFIRM_FIX_LEN;
break;
default:
mwifiex_dbg(priv->adapter, ERROR, "Unknown TDLS frame type.\n");
@@ -947,33 +947,33 @@ void mwifiex_process_tdls_action_frame(struct mwifiex_private *priv,

sta_ptr->tdls_cap.capab = cpu_to_le16(cap);

for (end = pos + ie_len; pos + 1 < end; pos += 2 + pos[1]) {
if (pos + 2 + pos[1] > end)
for (end = pos + ies_len; pos + 1 < end; pos += 2 + pos[1]) {
u8 ie_len = pos[1];

if (pos + 2 + ie_len > end)
break;

switch (*pos) {
case WLAN_EID_SUPP_RATES:
if (pos[1] > 32)
if (ie_len > sizeof(sta_ptr->tdls_cap.rates))
return;
sta_ptr->tdls_cap.rates_len = pos[1];
for (i = 0; i < pos[1]; i++)
sta_ptr->tdls_cap.rates_len = ie_len;
for (i = 0; i < ie_len; i++)
sta_ptr->tdls_cap.rates[i] = pos[i + 2];
break;

case WLAN_EID_EXT_SUPP_RATES:
if (pos[1] > 32)
if (ie_len > sizeof(sta_ptr->tdls_cap.rates))
return;
basic = sta_ptr->tdls_cap.rates_len;
if (pos[1] > 32 - basic)
if (ie_len > sizeof(sta_ptr->tdls_cap.rates) - basic)
return;
for (i = 0; i < pos[1]; i++)
for (i = 0; i < ie_len; i++)
sta_ptr->tdls_cap.rates[basic + i] = pos[i + 2];
sta_ptr->tdls_cap.rates_len += pos[1];
sta_ptr->tdls_cap.rates_len += ie_len;
break;
case WLAN_EID_HT_CAPABILITY:
if (pos > end - sizeof(struct ieee80211_ht_cap) - 2)
return;
if (pos[1] != sizeof(struct ieee80211_ht_cap))
if (ie_len != sizeof(struct ieee80211_ht_cap))
return;
/* copy the ie's value into ht_capb*/
memcpy((u8 *)&sta_ptr->tdls_cap.ht_capb, pos + 2,
@@ -981,59 +981,45 @@ void mwifiex_process_tdls_action_frame(struct mwifiex_private *priv,
sta_ptr->is_11n_enabled = 1;
break;
case WLAN_EID_HT_OPERATION:
if (pos > end -
sizeof(struct ieee80211_ht_operation) - 2)
return;
if (pos[1] != sizeof(struct ieee80211_ht_operation))
if (ie_len != sizeof(struct ieee80211_ht_operation))
return;
/* copy the ie's value into ht_oper*/
memcpy(&sta_ptr->tdls_cap.ht_oper, pos + 2,
sizeof(struct ieee80211_ht_operation));
break;
case WLAN_EID_BSS_COEX_2040:
if (pos > end - 3)
return;
if (pos[1] != 1)
if (ie_len != sizeof(pos[2]))
return;
sta_ptr->tdls_cap.coex_2040 = pos[2];
break;
case WLAN_EID_EXT_CAPABILITY:
if (pos > end - sizeof(struct ieee_types_header))
if (ie_len < sizeof(struct ieee_types_header))
return;
if (pos[1] < sizeof(struct ieee_types_header))
return;
if (pos[1] > 8)
if (ie_len > 8)
return;
memcpy((u8 *)&sta_ptr->tdls_cap.extcap, pos,
sizeof(struct ieee_types_header) +
min_t(u8, pos[1], 8));
min_t(u8, ie_len, 8));
break;
case WLAN_EID_RSN:
if (pos > end - sizeof(struct ieee_types_header))
if (ie_len < sizeof(struct ieee_types_header))
return;
if (pos[1] < sizeof(struct ieee_types_header))
return;
if (pos[1] > IEEE_MAX_IE_SIZE -
if (ie_len > IEEE_MAX_IE_SIZE -
sizeof(struct ieee_types_header))
return;
memcpy((u8 *)&sta_ptr->tdls_cap.rsn_ie, pos,
sizeof(struct ieee_types_header) +
min_t(u8, pos[1], IEEE_MAX_IE_SIZE -
min_t(u8, ie_len, IEEE_MAX_IE_SIZE -
sizeof(struct ieee_types_header)));
break;
case WLAN_EID_QOS_CAPA:
if (pos > end - 3)
return;
if (pos[1] != 1)
if (ie_len != sizeof(pos[2]))
return;
sta_ptr->tdls_cap.qos_info = pos[2];
break;
case WLAN_EID_VHT_OPERATION:
if (priv->adapter->is_hw_11ac_capable) {
if (pos > end -
sizeof(struct ieee80211_vht_operation) - 2)
return;
if (pos[1] !=
if (ie_len !=
sizeof(struct ieee80211_vht_operation))
return;
/* copy the ie's value into vhtoper*/
@@ -1043,10 +1029,7 @@ void mwifiex_process_tdls_action_frame(struct mwifiex_private *priv,
break;
case WLAN_EID_VHT_CAPABILITY:
if (priv->adapter->is_hw_11ac_capable) {
if (pos > end -
sizeof(struct ieee80211_vht_cap) - 2)
return;
if (pos[1] != sizeof(struct ieee80211_vht_cap))
if (ie_len != sizeof(struct ieee80211_vht_cap))
return;
/* copy the ie's value into vhtcap*/
memcpy((u8 *)&sta_ptr->tdls_cap.vhtcap, pos + 2,
@@ -1056,9 +1039,7 @@ void mwifiex_process_tdls_action_frame(struct mwifiex_private *priv,
break;
case WLAN_EID_AID:
if (priv->adapter->is_hw_11ac_capable) {
if (pos > end - 4)
return;
if (pos[1] != 2)
if (ie_len != sizeof(u16))
return;
sta_ptr->tdls_cap.aid =
get_unaligned_le16((pos + 2));
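
The mwifiex rework caches each element's length once (ie_len = pos[1]) and bounds-checks it against both the remaining buffer and the destination size before copying. The underlying IE/TLV walk looks roughly like the following standalone sketch; it is not the driver's code, and the element id and destination are placeholders:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Walk 802.11-style IEs: 1 byte id, 1 byte length, then payload. */
static void parse_ies(const uint8_t *pos, size_t ies_len,
		      uint8_t *dst, size_t dst_size)
{
	const uint8_t *end = pos + ies_len;

	while (pos + 2 <= end) {
		uint8_t id = pos[0];
		uint8_t ie_len = pos[1];

		if (pos + 2 + ie_len > end)
			break;			/* truncated element */

		if (id == 1 /* e.g. supported rates */ &&
		    ie_len <= dst_size)		/* check dest bounds too */
			memcpy(dst, pos + 2, ie_len);

		pos += 2 + ie_len;
	}
}
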
@@ -225,6 +225,7 @@ static void pn544_hci_i2c_platform_init(struct pn544_i2c_phy *phy)

out:
gpiod_set_value_cansleep(phy->gpiod_en, !phy->en_polarity);
usleep_range(10000, 15000);
}

static void pn544_hci_i2c_enable_mode(struct pn544_i2c_phy *phy, int run_mode)
@@ -66,8 +66,8 @@ MODULE_PARM_DESC(streams, "turn on support for Streams write directives");
 * nvme_reset_wq - hosts nvme reset works
 * nvme_delete_wq - hosts nvme delete works
 *
 * nvme_wq will host works such are scan, aen handling, fw activation,
 * keep-alive error recovery, periodic reconnects etc. nvme_reset_wq
 * nvme_wq will host works such as scan, aen handling, fw activation,
 * keep-alive, periodic reconnects etc. nvme_reset_wq
 * runs reset works which also flush works hosted on nvme_wq for
 * serialization purposes. nvme_delete_wq host controller deletion
 * works which flush reset works for serialization.
@@ -972,7 +972,7 @@ static void nvme_keep_alive_end_io(struct request *rq, blk_status_t status)
startka = true;
spin_unlock_irqrestore(&ctrl->lock, flags);
if (startka)
schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ);
}

static int nvme_keep_alive(struct nvme_ctrl *ctrl)
@@ -1002,7 +1002,7 @@ static void nvme_keep_alive_work(struct work_struct *work)
dev_dbg(ctrl->device,
"reschedule traffic based keep-alive timer\n");
ctrl->comp_seen = false;
schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ);
return;
}

@@ -1019,7 +1019,7 @@ static void nvme_start_keep_alive(struct nvme_ctrl *ctrl)
if (unlikely(ctrl->kato == 0))
return;

schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ);
}

void nvme_stop_keep_alive(struct nvme_ctrl *ctrl)

@@ -1084,9 +1084,9 @@ static int nvme_poll(struct blk_mq_hw_ctx *hctx)

spin_lock(&nvmeq->cq_poll_lock);
found = nvme_process_cq(nvmeq, &start, &end, -1);
nvme_complete_cqes(nvmeq, start, end);
spin_unlock(&nvmeq->cq_poll_lock);

nvme_complete_cqes(nvmeq, start, end);
return found;
}

@@ -1407,6 +1407,23 @@ static void nvme_disable_admin_queue(struct nvme_dev *dev, bool shutdown)
nvme_poll_irqdisable(nvmeq, -1);
}

/*
 * Called only on a device that has been disabled and after all other threads
 * that can check this device's completion queues have synced. This is the
 * last chance for the driver to see a natural completion before
 * nvme_cancel_request() terminates all incomplete requests.
 */
static void nvme_reap_pending_cqes(struct nvme_dev *dev)
{
u16 start, end;
int i;

for (i = dev->ctrl.queue_count - 1; i > 0; i--) {
nvme_process_cq(&dev->queues[i], &start, &end, -1);
nvme_complete_cqes(&dev->queues[i], start, end);
}
}

static int nvme_cmb_qdepth(struct nvme_dev *dev, int nr_io_queues,
int entry_size)
{
@@ -2241,11 +2258,6 @@ static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode)
if (timeout == 0)
return false;

/* handle any remaining CQEs */
if (opcode == nvme_admin_delete_cq &&
!test_bit(NVMEQ_DELETE_ERROR, &nvmeq->flags))
nvme_poll_irqdisable(nvmeq, -1);

sent--;
if (nr_queues)
goto retry;
@@ -2434,6 +2446,7 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
nvme_suspend_io_queues(dev);
nvme_suspend_queue(&dev->queues[0]);
nvme_pci_disable(dev);
nvme_reap_pending_cqes(dev);

blk_mq_tagset_busy_iter(&dev->tagset, nvme_cancel_request, &dev->ctrl);
blk_mq_tagset_busy_iter(&dev->admin_tagset, nvme_cancel_request, &dev->ctrl);

@@ -1088,7 +1088,7 @@ static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
return;

queue_work(nvme_wq, &ctrl->err_work);
queue_work(nvme_reset_wq, &ctrl->err_work);
}

static void nvme_rdma_wr_error(struct ib_cq *cq, struct ib_wc *wc,

@@ -422,7 +422,7 @@ static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
return;

queue_work(nvme_wq, &to_tcp_ctrl(ctrl)->err_work);
queue_work(nvme_reset_wq, &to_tcp_ctrl(ctrl)->err_work);
}

static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
@@ -1054,7 +1054,12 @@ static void nvme_tcp_io_work(struct work_struct *w)
} else if (unlikely(result < 0)) {
dev_err(queue->ctrl->ctrl.device,
"failed to send request %d\n", result);
if (result != -EPIPE)

/*
 * Fail the request unless peer closed the connection,
 * in which case error recovery flow will complete all.
 */
if ((result != -EPIPE) && (result != -ECONNRESET))
nvme_tcp_fail_request(queue->request);
nvme_tcp_done_send_req(queue);
return;
@@ -772,7 +772,7 @@ static int smmu_pmu_probe(struct platform_device *pdev)
smmu_pmu->reloc_base = smmu_pmu->reg_base;
}

irq = platform_get_irq(pdev, 0);
irq = platform_get_irq_optional(pdev, 0);
if (irq > 0)
smmu_pmu->irq = irq;
@@ -256,7 +256,7 @@ static int pwm_omap_dmtimer_probe(struct platform_device *pdev)
if (!timer_pdev) {
dev_err(&pdev->dev, "Unable to find Timer pdev\n");
ret = -ENODEV;
goto put;
goto err_find_timer_pdev;
}

timer_pdata = dev_get_platdata(&timer_pdev->dev);
@@ -264,7 +264,7 @@ static int pwm_omap_dmtimer_probe(struct platform_device *pdev)
dev_dbg(&pdev->dev,
"dmtimer pdata structure NULL, deferring probe\n");
ret = -EPROBE_DEFER;
goto put;
goto err_platdata;
}

pdata = timer_pdata->timer_ops;
@@ -283,19 +283,19 @@ static int pwm_omap_dmtimer_probe(struct platform_device *pdev)
!pdata->write_counter) {
dev_err(&pdev->dev, "Incomplete dmtimer pdata structure\n");
ret = -EINVAL;
goto put;
goto err_platdata;
}

if (!of_get_property(timer, "ti,timer-pwm", NULL)) {
dev_err(&pdev->dev, "Missing ti,timer-pwm capability\n");
ret = -ENODEV;
goto put;
goto err_timer_property;
}

dm_timer = pdata->request_by_node(timer);
if (!dm_timer) {
ret = -EPROBE_DEFER;
goto put;
goto err_request_timer;
}

omap = devm_kzalloc(&pdev->dev, sizeof(*omap), GFP_KERNEL);
@@ -352,7 +352,14 @@ err_pwmchip_add:
err_alloc_omap:

pdata->free(dm_timer);
put:
err_request_timer:

err_timer_property:
err_platdata:

put_device(&timer_pdev->dev);
err_find_timer_pdev:

of_node_put(timer);

return ret;
@@ -372,6 +379,8 @@ static int pwm_omap_dmtimer_remove(struct platform_device *pdev)

omap->pdata->free(omap->dm_timer);

put_device(&omap->dm_timer_pdev->dev);

mutex_destroy(&omap->mutex);

return 0;
@@ -162,7 +162,7 @@ struct ap_card {
|
||||
unsigned int functions; /* AP device function bitfield. */
|
||||
int queue_depth; /* AP queue depth.*/
|
||||
int id; /* AP card number. */
|
||||
atomic_t total_request_count; /* # requests ever for this AP device.*/
|
||||
atomic64_t total_request_count; /* # requests ever for this AP device.*/
|
||||
};
|
||||
|
||||
#define to_ap_card(x) container_of((x), struct ap_card, ap_dev.device)
|
||||
@@ -179,7 +179,7 @@ struct ap_queue {
|
||||
enum ap_state state; /* State of the AP device. */
|
||||
int pendingq_count; /* # requests on pendingq list. */
|
||||
int requestq_count; /* # requests on requestq list. */
|
||||
int total_request_count; /* # requests ever for this AP device.*/
|
||||
u64 total_request_count; /* # requests ever for this AP device.*/
|
||||
int request_timeout; /* Request timeout in jiffies. */
|
||||
struct timer_list timeout; /* Timer for request timeouts. */
|
||||
struct list_head pendingq; /* List of message sent to AP queue. */
|
||||
|
||||
@@ -63,13 +63,13 @@ static ssize_t request_count_show(struct device *dev,
|
||||
char *buf)
|
||||
{
|
||||
struct ap_card *ac = to_ap_card(dev);
|
||||
unsigned int req_cnt;
|
||||
u64 req_cnt;
|
||||
|
||||
req_cnt = 0;
|
||||
spin_lock_bh(&ap_list_lock);
|
||||
req_cnt = atomic_read(&ac->total_request_count);
|
||||
req_cnt = atomic64_read(&ac->total_request_count);
|
||||
spin_unlock_bh(&ap_list_lock);
|
||||
return snprintf(buf, PAGE_SIZE, "%d\n", req_cnt);
|
||||
return snprintf(buf, PAGE_SIZE, "%llu\n", req_cnt);
|
||||
}
|
||||
|
||||
static ssize_t request_count_store(struct device *dev,
|
||||
@@ -83,7 +83,7 @@ static ssize_t request_count_store(struct device *dev,
|
||||
for_each_ap_queue(aq, ac)
|
||||
aq->total_request_count = 0;
|
||||
spin_unlock_bh(&ap_list_lock);
|
||||
atomic_set(&ac->total_request_count, 0);
|
||||
atomic64_set(&ac->total_request_count, 0);
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
@@ -479,12 +479,12 @@ static ssize_t request_count_show(struct device *dev,
|
||||
char *buf)
|
||||
{
|
||||
struct ap_queue *aq = to_ap_queue(dev);
|
||||
unsigned int req_cnt;
|
||||
u64 req_cnt;
|
||||
|
||||
spin_lock_bh(&aq->lock);
|
||||
req_cnt = aq->total_request_count;
|
||||
spin_unlock_bh(&aq->lock);
|
||||
return snprintf(buf, PAGE_SIZE, "%d\n", req_cnt);
|
||||
return snprintf(buf, PAGE_SIZE, "%llu\n", req_cnt);
|
||||
}
|
||||
|
||||
static ssize_t request_count_store(struct device *dev,
|
||||
@@ -676,7 +676,7 @@ void ap_queue_message(struct ap_queue *aq, struct ap_message *ap_msg)
|
||||
list_add_tail(&ap_msg->list, &aq->requestq);
|
||||
aq->requestq_count++;
|
||||
aq->total_request_count++;
|
||||
atomic_inc(&aq->card->total_request_count);
|
||||
atomic64_inc(&aq->card->total_request_count);
|
||||
/* Send/receive as many request from the queue as possible. */
|
||||
ap_wait(ap_sm_event_loop(aq, AP_EVENT_POLL));
|
||||
spin_unlock_bh(&aq->lock);
|
||||
|
||||
drivers/s390/crypto/zcrypt_api.c
@@ -605,8 +605,8 @@ static inline bool zcrypt_card_compare(struct zcrypt_card *zc,
 	weight += atomic_read(&zc->load);
 	pref_weight += atomic_read(&pref_zc->load);
 	if (weight == pref_weight)
-		return atomic_read(&zc->card->total_request_count) >
-			atomic_read(&pref_zc->card->total_request_count);
+		return atomic64_read(&zc->card->total_request_count) >
+			atomic64_read(&pref_zc->card->total_request_count);
 	return weight > pref_weight;
 }
 
@@ -1216,11 +1216,12 @@ static void zcrypt_qdepth_mask(char qdepth[], size_t max_adapters)
 	spin_unlock(&zcrypt_list_lock);
 }
 
-static void zcrypt_perdev_reqcnt(int reqcnt[], size_t max_adapters)
+static void zcrypt_perdev_reqcnt(u32 reqcnt[], size_t max_adapters)
 {
 	struct zcrypt_card *zc;
 	struct zcrypt_queue *zq;
 	int card;
+	u64 cnt;
 
 	memset(reqcnt, 0, sizeof(int) * max_adapters);
 	spin_lock(&zcrypt_list_lock);
@@ -1232,8 +1233,9 @@ static void zcrypt_perdev_reqcnt(int reqcnt[], size_t max_adapters)
 			    || card >= max_adapters)
 				continue;
 			spin_lock(&zq->queue->lock);
-			reqcnt[card] = zq->queue->total_request_count;
+			cnt = zq->queue->total_request_count;
 			spin_unlock(&zq->queue->lock);
+			reqcnt[card] = (cnt < UINT_MAX) ? (u32) cnt : UINT_MAX;
 		}
 	}
 	local_bh_enable();
@@ -1411,9 +1413,9 @@ static long zcrypt_unlocked_ioctl(struct file *filp, unsigned int cmd,
 		return 0;
 	}
 	case ZCRYPT_PERDEV_REQCNT: {
-		int *reqcnt;
+		u32 *reqcnt;
 
-		reqcnt = kcalloc(AP_DEVICES, sizeof(int), GFP_KERNEL);
+		reqcnt = kcalloc(AP_DEVICES, sizeof(u32), GFP_KERNEL);
 		if (!reqcnt)
 			return -ENOMEM;
 		zcrypt_perdev_reqcnt(reqcnt, AP_DEVICES);
@@ -1470,7 +1472,7 @@ static long zcrypt_unlocked_ioctl(struct file *filp, unsigned int cmd,
 	}
 	case Z90STAT_PERDEV_REQCNT: {
 		/* the old ioctl supports only 64 adapters */
-		int reqcnt[MAX_ZDEV_CARDIDS];
+		u32 reqcnt[MAX_ZDEV_CARDIDS];
 
 		zcrypt_perdev_reqcnt(reqcnt, MAX_ZDEV_CARDIDS);
 		if (copy_to_user((int __user *) arg, reqcnt, sizeof(reqcnt)))
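The counters stay 64-bit internally while the ZCRYPT_PERDEV_REQCNT / Z90STAT_PERDEV_REQCNT ioctl ABI keeps 32-bit slots, so the copy above saturates at UINT_MAX instead of silently wrapping. The clamp idiom as a standalone sketch (plain C, no kernel types; names are illustrative):

	#include <stdint.h>
	#include <limits.h>

	/* Saturate a 64-bit counter into a 32-bit ABI slot instead of
	 * truncating, mirroring the clamp used in the hunk above. */
	static uint32_t clamp_counter(uint64_t cnt)
	{
		return (cnt < UINT_MAX) ? (uint32_t)cnt : UINT_MAX;
	}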
drivers/s390/net/qeth_l2_main.c
@@ -1846,15 +1846,14 @@ int qeth_l2_vnicc_set_state(struct qeth_card *card, u32 vnicc, bool state)
 
 	QETH_CARD_TEXT(card, 2, "vniccsch");
 
-	/* do not change anything if BridgePort is enabled */
-	if (qeth_bridgeport_is_in_use(card))
-		return -EBUSY;
-
 	/* check if characteristic and enable/disable are supported */
 	if (!(card->options.vnicc.sup_chars & vnicc) ||
 	    !(card->options.vnicc.set_char_sup & vnicc))
 		return -EOPNOTSUPP;
 
+	if (qeth_bridgeport_is_in_use(card))
+		return -EBUSY;
+
 	/* set enable/disable command and store wanted characteristic */
 	if (state) {
 		cmd = IPA_VNICC_ENABLE;
@@ -1900,14 +1899,13 @@ int qeth_l2_vnicc_get_state(struct qeth_card *card, u32 vnicc, bool *state)
 
 	QETH_CARD_TEXT(card, 2, "vniccgch");
 
-	/* do not get anything if BridgePort is enabled */
-	if (qeth_bridgeport_is_in_use(card))
-		return -EBUSY;
-
 	/* check if characteristic is supported */
 	if (!(card->options.vnicc.sup_chars & vnicc))
 		return -EOPNOTSUPP;
 
+	if (qeth_bridgeport_is_in_use(card))
+		return -EBUSY;
+
 	/* if card is ready, query current VNICC state */
 	if (qeth_card_hw_is_reachable(card))
 		rc = qeth_l2_vnicc_query_chars(card);
@@ -1925,15 +1923,14 @@ int qeth_l2_vnicc_set_timeout(struct qeth_card *card, u32 timeout)
 
 	QETH_CARD_TEXT(card, 2, "vniccsto");
 
-	/* do not change anything if BridgePort is enabled */
-	if (qeth_bridgeport_is_in_use(card))
-		return -EBUSY;
-
 	/* check if characteristic and set_timeout are supported */
 	if (!(card->options.vnicc.sup_chars & QETH_VNICC_LEARNING) ||
 	    !(card->options.vnicc.getset_timeout_sup & QETH_VNICC_LEARNING))
 		return -EOPNOTSUPP;
 
+	if (qeth_bridgeport_is_in_use(card))
+		return -EBUSY;
+
 	/* do we need to do anything? */
 	if (card->options.vnicc.learning_timeout == timeout)
 		return rc;
@@ -1962,14 +1959,14 @@ int qeth_l2_vnicc_get_timeout(struct qeth_card *card, u32 *timeout)
 
 	QETH_CARD_TEXT(card, 2, "vniccgto");
 
-	/* do not get anything if BridgePort is enabled */
-	if (qeth_bridgeport_is_in_use(card))
-		return -EBUSY;
-
 	/* check if characteristic and get_timeout are supported */
 	if (!(card->options.vnicc.sup_chars & QETH_VNICC_LEARNING) ||
 	    !(card->options.vnicc.getset_timeout_sup & QETH_VNICC_LEARNING))
 		return -EOPNOTSUPP;
 
+	if (qeth_bridgeport_is_in_use(card))
+		return -EBUSY;
+
 	/* if card is ready, get timeout. Otherwise, just return stored value */
 	*timeout = card->options.vnicc.learning_timeout;
 	if (qeth_card_hw_is_reachable(card))
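All four vnicc entry points get the same reordering: a characteristic the hardware does not support reports -EOPNOTSUPP even while BridgePort is active, and -EBUSY is reserved for supported-but-currently-blocked. A minimal sketch of that precedence pattern (illustrative names, not the driver's API):

	#include <errno.h>

	/* Capability first: "never works" (-EOPNOTSUPP) must win over
	 * "works, but not right now" (-EBUSY). */
	static int feature_set(unsigned int sup_chars, unsigned int wanted,
			       int bridgeport_busy)
	{
		if (!(sup_chars & wanted))
			return -EOPNOTSUPP;
		if (bridgeport_busy)
			return -EBUSY;
		return 0;	/* proceed with the actual command */
	}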
drivers/soc/tegra/fuse/fuse-tegra30.c
@@ -35,7 +35,8 @@
     defined(CONFIG_ARCH_TEGRA_124_SOC) || \
     defined(CONFIG_ARCH_TEGRA_132_SOC) || \
     defined(CONFIG_ARCH_TEGRA_210_SOC) || \
-    defined(CONFIG_ARCH_TEGRA_186_SOC)
+    defined(CONFIG_ARCH_TEGRA_186_SOC) || \
+    defined(CONFIG_ARCH_TEGRA_194_SOC)
 static u32 tegra30_fuse_read_early(struct tegra_fuse *fuse, unsigned int offset)
 {
 	if (WARN_ON(!fuse->base))
drivers/thermal/broadcom/brcmstb_thermal.c
@@ -49,7 +49,7 @@
 #define AVS_TMON_TP_TEST_ENABLE	0x20
 
 /* Default coefficients */
-#define AVS_TMON_TEMP_SLOPE	-487
+#define AVS_TMON_TEMP_SLOPE	487
 #define AVS_TMON_TEMP_OFFSET	410040
 
 /* HW related temperature constants */
@@ -108,23 +108,12 @@ struct brcmstb_thermal_priv {
 	struct thermal_zone_device *thermal;
 };
 
-static void avs_tmon_get_coeffs(struct thermal_zone_device *tz, int *slope,
-				int *offset)
-{
-	*slope = thermal_zone_get_slope(tz);
-	*offset = thermal_zone_get_offset(tz);
-}
-
 /* Convert a HW code to a temperature reading (millidegree celsius) */
 static inline int avs_tmon_code_to_temp(struct thermal_zone_device *tz,
 					u32 code)
 {
-	const int val = code & AVS_TMON_TEMP_MASK;
-	int slope, offset;
-
-	avs_tmon_get_coeffs(tz, &slope, &offset);
-
-	return slope * val + offset;
+	return (AVS_TMON_TEMP_OFFSET -
+		(int)((code & AVS_TMON_TEMP_MAX) * AVS_TMON_TEMP_SLOPE));
 }
 
 /*
@@ -136,20 +125,18 @@ static inline int avs_tmon_code_to_temp(struct thermal_zone_device *tz,
 static inline u32 avs_tmon_temp_to_code(struct thermal_zone_device *tz,
 					int temp, bool low)
 {
-	int slope, offset;
-
 	if (temp < AVS_TMON_TEMP_MIN)
-		return AVS_TMON_TEMP_MAX; /* Maximum code value */
-
-	avs_tmon_get_coeffs(tz, &slope, &offset);
+		return AVS_TMON_TEMP_MAX;	/* Maximum code value */
 
-	if (temp >= offset)
+	if (temp >= AVS_TMON_TEMP_OFFSET)
 		return 0;	/* Minimum code value */
 
 	if (low)
-		return (u32)(DIV_ROUND_UP(offset - temp, abs(slope)));
+		return (u32)(DIV_ROUND_UP(AVS_TMON_TEMP_OFFSET - temp,
+					  AVS_TMON_TEMP_SLOPE));
 	else
-		return (u32)((offset - temp) / abs(slope));
+		return (u32)((AVS_TMON_TEMP_OFFSET - temp) /
+			     AVS_TMON_TEMP_SLOPE);
 }
 
 static int brcmstb_get_temp(void *data, int *temp)
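With the DT coefficients dropped, the conversion is a fixed line: temp_mC = AVS_TMON_TEMP_OFFSET - code * AVS_TMON_TEMP_SLOPE. A standalone check of the arithmetic and its inverse (plain C; constants copied from the hunk above, code value chosen so the round trip is exact):

	#include <assert.h>

	#define SLOPE	487	/* millidegrees C per code step */
	#define OFFSET	410040	/* temperature at code 0, in millidegrees C */

	int main(void)
	{
		unsigned int code = 760;
		int temp = OFFSET - (int)(code * SLOPE);	/* 410040 - 370120 */

		assert(temp == 39920);			/* 39.92 deg C */
		/* inverse, as in the non-"low" branch above */
		assert((OFFSET - temp) / SLOPE == code);
		return 0;
	}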
drivers/thermal/db8500_thermal.c
@@ -152,8 +152,8 @@ static irqreturn_t prcmu_high_irq_handler(int irq, void *irq_data)
 		db8500_thermal_update_config(th, idx, THERMAL_TREND_RAISING,
 					     next_low, next_high);
 
-		dev_info(&th->tz->device,
-			 "PRCMU set max %ld, min %ld\n", next_high, next_low);
+		dev_dbg(&th->tz->device,
+			"PRCMU set max %ld, min %ld\n", next_high, next_low);
 	} else if (idx == num_points - 1)
 		/* So we roof out 1 degree over the max point */
 		th->interpolated_temp = db8500_thermal_points[idx] + 1;
drivers/vhost/net.c
@@ -1414,10 +1414,6 @@ static int vhost_net_release(struct inode *inode, struct file *f)
 
 static struct socket *get_raw_socket(int fd)
 {
-	struct {
-		struct sockaddr_ll sa;
-		char buf[MAX_ADDR_LEN];
-	} uaddr;
 	int r;
 	struct socket *sock = sockfd_lookup(fd, &r);
 
@@ -1430,11 +1426,7 @@ static struct socket *get_raw_socket(int fd)
 		goto err;
 	}
 
-	r = sock->ops->getname(sock, (struct sockaddr *)&uaddr.sa, 0);
-	if (r < 0)
-		goto err;
-
-	if (uaddr.sa.sll_family != AF_PACKET) {
+	if (sock->sk->sk_family != AF_PACKET) {
 		r = -EPFNOSUPPORT;
 		goto err;
 	}
drivers/watchdog/wdat_wdt.c
@@ -389,7 +389,7 @@ static int wdat_wdt_probe(struct platform_device *pdev)
 
 		memset(&r, 0, sizeof(r));
 		r.start = gas->address;
-		r.end = r.start + gas->access_width - 1;
+		r.end = r.start + ACPI_ACCESS_BYTE_WIDTH(gas->access_width) - 1;
 		if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
 			r.flags = IORESOURCE_MEM;
 		} else if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
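gas->access_width is an encoded field (1 = byte, 2 = word, 3 = dword, 4 = qword), not a byte count, so using it directly undersized the resource. The companion ACPICA patch in this series introduces the decoding macro; its definition is along these lines:

	/* ACPI Generic Address Structure access_width decoding:
	 * 1 -> 1 byte, 2 -> 2 bytes, 3 -> 4 bytes, 4 -> 8 bytes. */
	#define ACPI_ACCESS_BIT_WIDTH(size)	(1 << ((size) + 2))
	#define ACPI_ACCESS_BYTE_WIDTH(size)	(1 << ((size) - 1))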
fs/ceph/file.c
@@ -1418,6 +1418,7 @@ static ssize_t ceph_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	struct ceph_cap_flush *prealloc_cf;
 	ssize_t count, written = 0;
 	int err, want, got;
+	bool direct_lock = false;
 	loff_t pos;
 	loff_t limit = max(i_size_read(inode), fsc->max_file_size);
 
@@ -1428,8 +1429,11 @@ static ssize_t ceph_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	if (!prealloc_cf)
 		return -ENOMEM;
 
+	if ((iocb->ki_flags & (IOCB_DIRECT | IOCB_APPEND)) == IOCB_DIRECT)
+		direct_lock = true;
+
 retry_snap:
-	if (iocb->ki_flags & IOCB_DIRECT)
+	if (direct_lock)
 		ceph_start_io_direct(inode);
 	else
 		ceph_start_io_write(inode);
@@ -1519,14 +1523,15 @@ retry_snap:
 
 		/* we might need to revert back to that point */
 		data = *from;
-		if (iocb->ki_flags & IOCB_DIRECT) {
+		if (iocb->ki_flags & IOCB_DIRECT)
 			written = ceph_direct_read_write(iocb, &data, snapc,
 							 &prealloc_cf);
-			ceph_end_io_direct(inode);
-		} else {
+		else
 			written = ceph_sync_write(iocb, &data, pos, snapc);
+		if (direct_lock)
+			ceph_end_io_direct(inode);
+		else
 			ceph_end_io_write(inode);
-		}
 		if (written > 0)
 			iov_iter_advance(from, written);
 		ceph_put_snap_context(snapc);
@@ -1577,7 +1582,7 @@ retry_snap:
 
 	goto out_unlocked;
 out:
-	if (iocb->ki_flags & IOCB_DIRECT)
+	if (direct_lock)
 		ceph_end_io_direct(inode);
 	else
 		ceph_end_io_write(inode);
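direct_lock is computed once as "direct and not append": masking ki_flags by both bits and comparing against IOCB_DIRECT alone is true only when IOCB_DIRECT is set and IOCB_APPEND is clear. The mask-and-compare idiom in isolation (illustrative flag values, not the kernel's):

	#include <assert.h>

	#define F_DIRECT 0x1
	#define F_APPEND 0x2

	static int direct_and_not_append(unsigned int flags)
	{
		return (flags & (F_DIRECT | F_APPEND)) == F_DIRECT;
	}

	int main(void)
	{
		assert(direct_and_not_append(F_DIRECT));		/* direct only: yes */
		assert(!direct_and_not_append(F_DIRECT | F_APPEND));	/* O_APPEND too: no */
		assert(!direct_and_not_append(0));			/* buffered: no */
		return 0;
	}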
fs/cifs/cifsacl.c
@@ -603,7 +603,7 @@ static void access_flags_to_mode(__le32 ace_flags, int type, umode_t *pmode,
 	    ((flags & FILE_EXEC_RIGHTS) == FILE_EXEC_RIGHTS))
 		*pmode |= (S_IXUGO & (*pbits_to_set));
 
-	cifs_dbg(NOISY, "access flags 0x%x mode now 0x%x\n", flags, *pmode);
+	cifs_dbg(NOISY, "access flags 0x%x mode now %04o\n", flags, *pmode);
 	return;
 }
 
@@ -632,7 +632,7 @@ static void mode_to_access_flags(umode_t mode, umode_t bits_to_use,
 	if (mode & S_IXUGO)
 		*pace_flags |= SET_FILE_EXEC_RIGHTS;
 
-	cifs_dbg(NOISY, "mode: 0x%x, access flags now 0x%x\n",
+	cifs_dbg(NOISY, "mode: %04o, access flags now 0x%x\n",
 		 mode, *pace_flags);
 	return;
 }

fs/cifs/connect.c
@@ -4094,7 +4094,7 @@ int cifs_setup_cifs_sb(struct smb_vol *pvolume_info,
 	cifs_sb->mnt_gid = pvolume_info->linux_gid;
 	cifs_sb->mnt_file_mode = pvolume_info->file_mode;
 	cifs_sb->mnt_dir_mode = pvolume_info->dir_mode;
-	cifs_dbg(FYI, "file mode: 0x%hx dir mode: 0x%hx\n",
+	cifs_dbg(FYI, "file mode: %04ho dir mode: %04ho\n",
 		 cifs_sb->mnt_file_mode, cifs_sb->mnt_dir_mode);
 
 	cifs_sb->actimeo = pvolume_info->actimeo;

fs/cifs/inode.c
@@ -1586,7 +1586,7 @@ int cifs_mkdir(struct inode *inode, struct dentry *direntry, umode_t mode)
 	struct TCP_Server_Info *server;
 	char *full_path;
 
-	cifs_dbg(FYI, "In cifs_mkdir, mode = 0x%hx inode = 0x%p\n",
+	cifs_dbg(FYI, "In cifs_mkdir, mode = %04ho inode = 0x%p\n",
 		 mode, inode);
 
 	cifs_sb = CIFS_SB(inode->i_sb);
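Each format change swaps hex for octal, the base in which Unix file modes are conventionally read. A two-line illustration (plain C):

	#include <stdio.h>

	int main(void)
	{
		unsigned short mode = 0644;

		printf("0x%hx\n", mode);		/* prints 0x1a4 - unreadable as a mode */
		printf("%04o\n", (unsigned int)mode);	/* prints 0644 - the familiar form */
		return 0;
	}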
fs/dax.c
@@ -1207,6 +1207,9 @@ dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter,
 		lockdep_assert_held(&inode->i_rwsem);
 	}
 
+	if (iocb->ki_flags & IOCB_NOWAIT)
+		flags |= IOMAP_NOWAIT;
+
 	while (iov_iter_count(iter)) {
 		ret = iomap_apply(inode, pos, iov_iter_count(iter), flags, ops,
 				iter, dax_iomap_actor);
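This is the piece that makes RWF_NOWAIT effective for DAX I/O: the kiocb flag now reaches iomap_apply() as IOMAP_NOWAIT. From user space the path is exercised via preadv2()/pwritev2(); a minimal sketch (assumes a DAX-backed file, error handling elided):

	#define _GNU_SOURCE
	#include <sys/uio.h>

	/* Issue a read that fails with EAGAIN instead of blocking when
	 * the operation cannot proceed immediately. */
	static ssize_t read_nowait(int fd, void *buf, size_t len, off_t off)
	{
		struct iovec iov = { .iov_base = buf, .iov_len = len };

		return preadv2(fd, &iov, 1, off, RWF_NOWAIT);
	}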
fs/ext4/super.c
@@ -2370,7 +2370,7 @@ int ext4_alloc_flex_bg_array(struct super_block *sb, ext4_group_t ngroup)
 {
 	struct ext4_sb_info *sbi = EXT4_SB(sb);
 	struct flex_groups **old_groups, **new_groups;
-	int size, i;
+	int size, i, j;
 
 	if (!sbi->s_log_groups_per_flex)
 		return 0;
@@ -2391,8 +2391,8 @@ int ext4_alloc_flex_bg_array(struct super_block *sb, ext4_group_t ngroup)
 					 sizeof(struct flex_groups)),
 					 GFP_KERNEL);
 		if (!new_groups[i]) {
-			for (i--; i >= sbi->s_flex_groups_allocated; i--)
-				kvfree(new_groups[i]);
+			for (j = sbi->s_flex_groups_allocated; j < i; j++)
+				kvfree(new_groups[j]);
 			kvfree(new_groups);
 			ext4_msg(sb, KERN_ERR,
 				 "not enough memory for %d flex groups", size);
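When an allocation fails partway through, the cleanup now walks forward over exactly the entries this call managed to allocate, [s_flex_groups_allocated, i), using a dedicated index j instead of reusing and decrementing the failure index. The same forward-cleanup idiom as a standalone sketch (plain C, illustrative names):

	#include <stdlib.h>

	/* Allocate n buffers; on failure free only the ones already
	 * allocated, [0, i), using a separate index. */
	static void **alloc_all(size_t n, size_t sz)
	{
		void **v = calloc(n, sizeof(*v));
		size_t i, j;

		if (!v)
			return NULL;
		for (i = 0; i < n; i++) {
			v[i] = malloc(sz);
			if (!v[i]) {
				for (j = 0; j < i; j++)	/* forward, stop before i */
					free(v[j]);
				free(v);
				return NULL;
			}
		}
		return v;
	}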
fs/io_uring.c
@@ -71,6 +71,7 @@
 #include <linux/sizes.h>
 #include <linux/hugetlb.h>
 #include <linux/highmem.h>
+#include <linux/fs_struct.h>
 
 #include <uapi/linux/io_uring.h>
 
@@ -334,6 +335,8 @@ struct io_kiocb {
 	u32 result;
 	u32 sequence;
 
+	struct fs_struct *fs;
+
 	struct work_struct work;
 };
 
@@ -651,6 +654,7 @@ static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
 	/* one is dropped after submission, the other at completion */
 	refcount_set(&req->refs, 2);
 	req->result = 0;
+	req->fs = NULL;
 	return req;
 out:
 	percpu_ref_put(&ctx->refs);
@@ -1653,6 +1657,11 @@ static int io_send_recvmsg(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	else if (force_nonblock)
 		flags |= MSG_DONTWAIT;
 
+#ifdef CONFIG_COMPAT
+	if (req->ctx->compat)
+		flags |= MSG_CMSG_COMPAT;
+#endif
+
 	msg = (struct user_msghdr __user *) (unsigned long)
 		READ_ONCE(sqe->addr);
 
@@ -1663,6 +1672,16 @@ static int io_send_recvmsg(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 			ret = -EINTR;
 	}
 
+	if (req->fs) {
+		struct fs_struct *fs = req->fs;
+
+		spin_lock(&req->fs->lock);
+		if (--fs->users)
+			fs = NULL;
+		spin_unlock(&req->fs->lock);
+		if (fs)
+			free_fs_struct(fs);
+	}
 	io_cqring_add_event(req->ctx, sqe->user_data, ret);
 	io_put_req(req);
 	return 0;
@@ -2159,6 +2178,7 @@ static inline bool io_sqe_needs_user(const struct io_uring_sqe *sqe)
 static void io_sq_wq_submit_work(struct work_struct *work)
 {
 	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+	struct fs_struct *old_fs_struct = current->fs;
 	struct io_ring_ctx *ctx = req->ctx;
 	struct mm_struct *cur_mm = NULL;
 	struct async_list *async_list;
@@ -2178,6 +2198,15 @@ restart:
 		/* Ensure we clear previously set non-block flag */
 		req->rw.ki_flags &= ~IOCB_NOWAIT;
 
+		if (req->fs != current->fs && current->fs != old_fs_struct) {
+			task_lock(current);
+			if (req->fs)
+				current->fs = req->fs;
+			else
+				current->fs = old_fs_struct;
+			task_unlock(current);
+		}
+
 		ret = 0;
 		if (io_sqe_needs_user(sqe) && !cur_mm) {
 			if (!mmget_not_zero(ctx->sqo_mm)) {
@@ -2276,6 +2305,11 @@ out:
 		mmput(cur_mm);
 	}
 	revert_creds(old_cred);
+	if (old_fs_struct) {
+		task_lock(current);
+		current->fs = old_fs_struct;
+		task_unlock(current);
+	}
 }
 
 /*
@@ -2503,6 +2537,23 @@ err:
 
 	req->user_data = s->sqe->user_data;
 
+#if defined(CONFIG_NET)
+	switch (READ_ONCE(s->sqe->opcode)) {
+	case IORING_OP_SENDMSG:
+	case IORING_OP_RECVMSG:
+		spin_lock(&current->fs->lock);
+		if (!current->fs->in_exec) {
+			req->fs = current->fs;
+			req->fs->users++;
+		}
+		spin_unlock(&current->fs->lock);
+		if (!req->fs) {
+			ret = -EAGAIN;
+			goto err_req;
+		}
+	}
+#endif
+
 	/*
 	 * If we already have a head request, queue this one for async
 	 * submittal once the head completes. If we don't have a head but
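The ->fs handling follows a classic grab/release protocol: the submitter bumps fs->users under fs->lock before queuing async work, and the completion side drops the count, freeing the fs_struct only when it was the last user. A reduced sketch of that protocol, with a pthread mutex standing in for the kernel spinlock and illustrative names:

	#include <pthread.h>
	#include <stdlib.h>

	struct fs_ref {
		pthread_mutex_t lock;
		int users;
	};

	/* Submission side: take a reference before handing off to a worker. */
	static void fs_grab(struct fs_ref *fs)
	{
		pthread_mutex_lock(&fs->lock);
		fs->users++;
		pthread_mutex_unlock(&fs->lock);
	}

	/* Completion side: drop the reference; free only as the last user. */
	static void fs_drop(struct fs_ref *fs)
	{
		int last;

		pthread_mutex_lock(&fs->lock);
		last = (--fs->users == 0);
		pthread_mutex_unlock(&fs->lock);
		if (last) {
			pthread_mutex_destroy(&fs->lock);
			free(fs);
		}
	}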
fs/namei.c
@@ -1454,7 +1454,7 @@ static int follow_dotdot_rcu(struct nameidata *nd)
 			nd->path.dentry = parent;
 			nd->seq = seq;
 			if (unlikely(!path_connected(&nd->path)))
-				return -ENOENT;
+				return -ECHILD;
 			break;
 		} else {
 			struct mount *mnt = real_mount(nd->path.mnt);
fs/nfs/nfs4file.c
@@ -86,7 +86,6 @@ nfs4_file_open(struct inode *inode, struct file *filp)
 	if (inode != d_inode(dentry))
 		goto out_drop;
 
-	nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
 	nfs_file_set_open_context(filp, ctx);
 	nfs_fscache_open_file(inode, filp);
 	err = 0;

fs/nfs/nfs4proc.c
@@ -2962,10 +2962,13 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
 	struct dentry *dentry;
 	struct nfs4_state *state;
 	fmode_t acc_mode = _nfs4_ctx_to_accessmode(ctx);
+	struct inode *dir = d_inode(opendata->dir);
+	unsigned long dir_verifier;
 	unsigned int seq;
 	int ret;
 
 	seq = raw_seqcount_begin(&sp->so_reclaim_seqcount);
+	dir_verifier = nfs_save_change_attribute(dir);
 
 	ret = _nfs4_proc_open(opendata, ctx);
 	if (ret != 0)
@@ -2993,8 +2996,19 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
 			dput(ctx->dentry);
 			ctx->dentry = dentry = alias;
 		}
-		nfs_set_verifier(dentry,
-				nfs_save_change_attribute(d_inode(opendata->dir)));
 	}
 
+	switch(opendata->o_arg.claim) {
+	default:
+		break;
+	case NFS4_OPEN_CLAIM_NULL:
+	case NFS4_OPEN_CLAIM_DELEGATE_CUR:
+	case NFS4_OPEN_CLAIM_DELEGATE_PREV:
+		if (!opendata->rpc_done)
+			break;
+		if (opendata->o_res.delegation_type != 0)
+			dir_verifier = nfs_save_change_attribute(dir);
+		nfs_set_verifier(dentry, dir_verifier);
+	}
+
 	/* Parse layoutget results before we check for access */
Some files were not shown because too many files have changed in this diff.