Merge 5.15.77 into android13-5.15-lts
Changes in 5.15.77
	NFSv4: Fix free of uninitialized nfs4_label on referral lookup.
	NFSv4: Add an fattr allocation to _nfs4_discover_trunking()
	can: j1939: transport: j1939_session_skb_drop_old(): spin_unlock_irqrestore() before kfree_skb()
	can: kvaser_usb: Fix possible completions during init_completion
	ALSA: Use del_timer_sync() before freeing timer
	ALSA: usb-audio: Add quirks for M-Audio Fast Track C400/600
	ALSA: au88x0: use explicitly signed char
	ALSA: rme9652: use explicitly signed char
	USB: add RESET_RESUME quirk for NVIDIA Jetson devices in RCM
	usb: gadget: uvc: fix sg handling in error case
	usb: gadget: uvc: fix sg handling during video encode
	usb: dwc3: gadget: Stop processing more requests on IMI
	usb: dwc3: gadget: Don't set IMI for no_interrupt
	usb: bdc: change state when port disconnected
	usb: xhci: add XHCI_SPURIOUS_SUCCESS to ASM1042 despite being a V0.96 controller
	mtd: rawnand: marvell: Use correct logic for nand-keep-config
	xhci: Add quirk to reset host back to default state at shutdown
	xhci-pci: Set runtime PM as default policy on all xHC 1.2 or later devices
	xhci: Remove device endpoints from bandwidth list when freeing the device
	tools: iio: iio_utils: fix digit calculation
	iio: light: tsl2583: Fix module unloading
	iio: temperature: ltc2983: allocate iio channels once
	iio: adxl372: Fix unsafe buffer attributes
	fbdev: smscufx: Fix several use-after-free bugs
	cpufreq: intel_pstate: Read all MSRs on the target CPU
	cpufreq: intel_pstate: hybrid: Use known scaling factor for P-cores
	fs/binfmt_elf: Fix memory leak in load_elf_binary()
	exec: Copy oldsighand->action under spin-lock
	mac802154: Fix LQI recording
	scsi: qla2xxx: Use transport-defined speed mask for supported_speeds
	drm/amdgpu: disallow gfxoff until GC IP blocks complete s2idle resume
	drm/msm/dsi: fix memory corruption with too many bridges
	drm/msm/hdmi: fix memory corruption with too many bridges
	drm/msm/dp: fix IRQ lifetime
	coresight: cti: Fix hang in cti_disable_hw()
	mmc: sdhci_am654: 'select', not 'depends' REGMAP_MMIO
	mmc: core: Fix kernel panic when remove non-standard SDIO card
	mmc: sdhci-pci-core: Disable ES for ASUS BIOS on Jasper Lake
	mmc: sdhci-esdhc-imx: Propagate ESDHC_FLAG_HS400* only on 8bit bus
	counter: microchip-tcb-capture: Handle Signal1 read and Synapse
	kernfs: fix use-after-free in __kernfs_remove
	pinctrl: Ingenic: JZ4755 bug fixes
	ARC: mm: fix leakage of memory allocated for PTE
	perf auxtrace: Fix address filter symbol name match for modules
	s390/futex: add missing EX_TABLE entry to __futex_atomic_op()
	s390/pci: add missing EX_TABLE entries to __pcistg_mio_inuser()/__pcilg_mio_inuser()
	Revert "scsi: lpfc: Resolve some cleanup issues following SLI path refactoring"
	Revert "scsi: lpfc: Fix element offset in __lpfc_sli_release_iocbq_s4()"
	Revert "scsi: lpfc: Fix locking for lpfc_sli_iocbq_lookup()"
	Revert "scsi: lpfc: SLI path split: Refactor SCSI paths"
	Revert "scsi: lpfc: SLI path split: Refactor fast and slow paths to native SLI4"
	Revert "scsi: lpfc: SLI path split: Refactor lpfc_iocbq"
	mmc: block: Remove error check of hw_reset on reset
	ethtool: eeprom: fix null-deref on genl_info in dump
	net: ieee802154: fix error return code in dgram_bind()
	media: v4l2: Fix v4l2_i2c_subdev_set_name function documentation
	media: atomisp: prevent integer overflow in sh_css_set_black_frame()
	drm/msm: Fix return type of mdp4_lvds_connector_mode_valid
	KVM: selftests: Fix number of pages for memory slot in memslot_modification_stress_test
	ASoC: qcom: lpass-cpu: mark HDMI TX registers as volatile
	perf: Fix missing SIGTRAPs
	sched/core: Fix comparison in sched_group_cookie_match()
	arc: iounmap() arg is volatile
	mtd: rawnand: intel: Add missing of_node_put() in ebu_nand_probe()
	ASoC: qcom: lpass-cpu: Mark HDMI TX parity register as volatile
	ALSA: ac97: fix possible memory leak in snd_ac97_dev_register()
	perf/x86/intel/lbr: Use setup_clear_cpu_cap() instead of clear_cpu_cap()
	tipc: fix a null-ptr-deref in tipc_topsrv_accept
	net: netsec: fix error handling in netsec_register_mdio()
	net: hinic: fix incorrect assignment issue in hinic_set_interrupt_cfg()
	net: hinic: fix memory leak when reading function table
	net: hinic: fix the issue of CMDQ memory leaks
	net: hinic: fix the issue of double release MBOX callback of VF
	net: macb: Specify PHY PM management done by MAC
	nfc: virtual_ncidev: Fix memory leak in virtual_nci_send()
	x86/unwind/orc: Fix unreliable stack dump with gcov
	amd-xgbe: fix the SFP compliance codes check for DAC cables
	amd-xgbe: add the bit rate quirk for Molex cables
	drm/i915/dp: Reset frl trained flag before restarting FRL training
	atlantic: fix deadlock at aq_nic_stop
	kcm: annotate data-races around kcm->rx_psock
	kcm: annotate data-races around kcm->rx_wait
	net: fix UAF issue in nfqnl_nf_hook_drop() when ops_init() failed
	net: lantiq_etop: don't free skb when returning NETDEV_TX_BUSY
	tcp: minor optimization in tcp_add_backlog()
	tcp: fix a signed-integer-overflow bug in tcp_add_backlog()
	tcp: fix indefinite deferral of RTO with SACK reneging
	net-memcg: avoid stalls when under memory pressure
	drm/amdkfd: Fix memory leak in kfd_mem_dmamap_userptr()
	can: mscan: mpc5xxx: mpc5xxx_can_probe(): add missing put_clock() in error path
	can: mcp251x: mcp251x_can_probe(): add missing unregister_candev() in error path
	PM: hibernate: Allow hybrid sleep to work with s2idle
	media: vivid: s_fbuf: add more sanity checks
	media: vivid: dev->bitmap_cap wasn't freed in all cases
	media: v4l2-dv-timings: add sanity checks for blanking values
	media: videodev2.h: V4L2_DV_BT_BLANKING_HEIGHT should check 'interlaced'
	media: vivid: set num_in/outputs to 0 if not supported
	perf vendor events power10: Fix hv-24x7 metric events
	ipv6: ensure sane device mtu in tunnels
	i40e: Fix ethtool rx-flow-hash setting for X722
	i40e: Fix VF hang when reset is triggered on another VF
	i40e: Fix flow-type by setting GL_HASH_INSET registers
	net: ksz884x: fix missing pci_disable_device() on error in pcidev_init()
	PM: domains: Fix handling of unavailable/disabled idle states
	perf vendor events arm64: Fix incorrect Hisi hip08 L3 metrics
	net: fec: limit register access on i.MX6UL
	net: ethernet: ave: Fix MAC to be in charge of PHY PM
	ALSA: aoa: i2sbus: fix possible memory leak in i2sbus_add_dev()
	ALSA: aoa: Fix I2S device accounting
	openvswitch: switch from WARN to pr_warn
	net: ehea: fix possible memory leak in ehea_register_port()
	net: bcmsysport: Indicate MAC is in charge of PHY PM
	nh: fix scope used to find saddr when adding non gw nh
	net: broadcom: bcm4908enet: remove redundant variable bytes
	net: broadcom: bcm4908_enet: update TX stats after actual transmission
	netdevsim: remove dir in nsim_dev_debugfs_init() when creating ports dir failed
	net/mlx5e: Do not increment ESN when updating IPsec ESN state
	net/mlx5e: Extend SKB room check to include PTP-SQ
	net/mlx5: Fix possible use-after-free in async command interface
	net/mlx5: Print more info on pci error handlers
	net/mlx5: Update fw fatal reporter state on PCI handlers successful recover
	net/mlx5: Fix crash during sync firmware reset
	net: do not sense pfmemalloc status in skb_append_pagefrags()
	kcm: do not sense pfmemalloc status in kcm_sendpage()
	net: enetc: survive memory pressure without crashing
	arm64: Add AMPERE1 to the Spectre-BHB affected list
	scsi: sd: Revert "scsi: sd: Remove a local variable"
	can: rcar_canfd: fix channel specific IRQ handling for RZ/G2L
	can: rcar_canfd: rcar_canfd_handle_global_receive(): fix IRQ storm on global FIFO receive
	serial: core: move RS485 configuration tasks from drivers into core
	serial: Deassert Transmit Enable on probe in driver-specific way
	tcp/udp: Fix memory leak in ipv6_renew_options().
	Linux 5.15.77

Change-Id: I12b819ae10adbb80730c67c40f5cf275d2865634
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
diff --git a/Makefile b/Makefile
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 15
-SUBLEVEL = 76
+SUBLEVEL = 77
 EXTRAVERSION =
 NAME = Trick or Treat
@@ -32,7 +32,7 @@ static inline void ioport_unmap(void __iomem *addr)
 {
 }

-extern void iounmap(const void __iomem *addr);
+extern void iounmap(const volatile void __iomem *addr);

 /*
  * io{read,write}{16,32}be() macros
@@ -163,7 +163,7 @@
 #define pmd_page_vaddr(pmd)	(pmd_val(pmd) & PAGE_MASK)
 #define pmd_page(pmd)		virt_to_page(pmd_page_vaddr(pmd))
 #define set_pmd(pmdp, pmd)	(*(pmdp) = pmd)
-#define pmd_pgtable(pmd)	((pgtable_t) pmd_page_vaddr(pmd))
+#define pmd_pgtable(pmd)	((pgtable_t) pmd_page(pmd))

 /*
  * 4th level paging: pte
@@ -94,7 +94,7 @@ void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
 EXPORT_SYMBOL(ioremap_prot);

-void iounmap(const void __iomem *addr)
+void iounmap(const volatile void __iomem *addr)
 {
 	/* weird double cast to handle phys_addr_t > 32 bits */
 	if (arc_uncached_addr_space((phys_addr_t)(u32)addr))
@@ -60,6 +60,7 @@
 #define ARM_CPU_IMP_FUJITSU	0x46
 #define ARM_CPU_IMP_HISI	0x48
 #define ARM_CPU_IMP_APPLE	0x61
+#define ARM_CPU_IMP_AMPERE	0xC0

 #define ARM_CPU_PART_AEM_V8	0xD0F
 #define ARM_CPU_PART_FOUNDATION	0xD00
@@ -112,6 +113,8 @@
 #define APPLE_CPU_PART_M1_ICESTORM	0x022
 #define APPLE_CPU_PART_M1_FIRESTORM	0x023

+#define AMPERE_CPU_PART_AMPERE1	0xAC3
+
 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
 #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
@@ -151,6 +154,7 @@
 #define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110)
 #define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM)
 #define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM)
+#define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)

 /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
 #define MIDR_FUJITSU_ERRATUM_010001		MIDR_FUJITSU_A64FX
@@ -868,6 +868,10 @@ u8 spectre_bhb_loop_affected(int scope)
 		MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
 		{},
 	};
+	static const struct midr_range spectre_bhb_k11_list[] = {
+		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+		{},
+	};
 	static const struct midr_range spectre_bhb_k8_list[] = {
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
@@ -878,6 +882,8 @@ u8 spectre_bhb_loop_affected(int scope)
 		k = 32;
 	else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
 		k = 24;
+	else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k11_list))
+		k = 11;
 	else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
 		k = 8;
@@ -16,7 +16,8 @@
 	"3: jl 1b\n" \
 	" lhi %0,0\n" \
 	"4: sacf 768\n" \
-	EX_TABLE(0b,4b) EX_TABLE(2b,4b) EX_TABLE(3b,4b) \
+	EX_TABLE(0b,4b) EX_TABLE(1b,4b) \
+	EX_TABLE(2b,4b) EX_TABLE(3b,4b) \
 	: "=d" (ret), "=&d" (oldval), "=&d" (newval), \
 	  "=m" (*uaddr) \
 	: "0" (-EFAULT), "d" (oparg), "a" (uaddr), \
@@ -63,7 +63,7 @@ static inline int __pcistg_mio_inuser(
 	asm volatile (
 		" sacf 256\n"
 		"0: llgc %[tmp],0(%[src])\n"
-		" sllg %[val],%[val],8\n"
+		"4: sllg %[val],%[val],8\n"
 		" aghi %[src],1\n"
 		" ogr %[val],%[tmp]\n"
 		" brctg %[cnt],0b\n"
@@ -71,7 +71,7 @@ static inline int __pcistg_mio_inuser(
 		"2: ipm %[cc]\n"
 		" srl %[cc],28\n"
 		"3: sacf 768\n"
-		EX_TABLE(0b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b)
+		EX_TABLE(0b, 3b) EX_TABLE(4b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b)
 		:
 		[src] "+a" (src), [cnt] "+d" (cnt),
 		[val] "+d" (val), [tmp] "=d" (tmp),
@@ -214,10 +214,10 @@ static inline int __pcilg_mio_inuser(
 		"2: ahi %[shift],-8\n"
 		" srlg %[tmp],%[val],0(%[shift])\n"
 		"3: stc %[tmp],0(%[dst])\n"
-		" aghi %[dst],1\n"
+		"5: aghi %[dst],1\n"
 		" brctg %[cnt],2b\n"
 		"4: sacf 768\n"
-		EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b)
+		EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b) EX_TABLE(5b, 4b)
 		:
 		[ioaddr_len] "+&d" (ioaddr_len.pair),
 		[cc] "+d" (cc), [val] "=d" (val),
@@ -1847,7 +1847,7 @@ void __init intel_pmu_arch_lbr_init(void)
 	return;

clear_arch_lbr:
-	clear_cpu_cap(&boot_cpu_data, X86_FEATURE_ARCH_LBR);
+	setup_clear_cpu_cap(X86_FEATURE_ARCH_LBR);
 }

 /**
@@ -700,7 +700,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
 	/* Otherwise, skip ahead to the user-specified starting frame: */
 	while (!unwind_done(state) &&
 	       (!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
-			state->sp < (unsigned long)first_frame))
+			state->sp <= (unsigned long)first_frame))
 		unwind_next_frame(state);

 	return;
@@ -2889,6 +2889,10 @@ static int genpd_iterate_idle_states(struct device_node *dn,
 		np = it.node;
 		if (!of_match_node(idle_state_match, np))
 			continue;
+
+		if (!of_device_is_available(np))
+			continue;
+
 		if (states) {
 			ret = genpd_parse_state(&states[i], np);
 			if (ret) {
@@ -29,7 +29,6 @@ struct mchp_tc_data {
 	int qdec_mode;
 	int num_channels;
 	int channel[2];
-	bool trig_inverted;
 };

 enum mchp_tc_count_function {
@@ -166,7 +165,7 @@ static int mchp_tc_count_signal_read(struct counter_device *counter,

 	regmap_read(priv->regmap, ATMEL_TC_REG(priv->channel[0], SR), &sr);

-	if (priv->trig_inverted)
+	if (signal->id == 1)
 		sigstatus = (sr & ATMEL_TC_MTIOB);
 	else
 		sigstatus = (sr & ATMEL_TC_MTIOA);
@@ -184,6 +183,17 @@ static int mchp_tc_count_action_get(struct counter_device *counter,
 	struct mchp_tc_data *const priv = counter->priv;
 	u32 cmr;

+	if (priv->qdec_mode) {
+		*action = COUNTER_SYNAPSE_ACTION_BOTH_EDGES;
+		return 0;
+	}
+
+	/* Only TIOA signal is evaluated in non-QDEC mode */
+	if (synapse->signal->id != 0) {
+		*action = COUNTER_SYNAPSE_ACTION_NONE;
+		return 0;
+	}
+
 	regmap_read(priv->regmap, ATMEL_TC_REG(priv->channel[0], CMR), &cmr);

 	switch (cmr & ATMEL_TC_ETRGEDG) {
@@ -212,8 +222,8 @@ static int mchp_tc_count_action_set(struct counter_device *counter,
 	struct mchp_tc_data *const priv = counter->priv;
 	u32 edge = ATMEL_TC_ETRGEDG_NONE;

-	/* QDEC mode is rising edge only */
-	if (priv->qdec_mode)
+	/* QDEC mode is rising edge only; only TIOA handled in non-QDEC mode */
+	if (priv->qdec_mode || synapse->signal->id != 0)
 		return -EINVAL;

 	switch (action) {
@@ -27,6 +27,7 @@
 #include <linux/pm_qos.h>
 #include <trace/events/power.h>

+#include <asm/cpu.h>
 #include <asm/div64.h>
 #include <asm/msr.h>
 #include <asm/cpu_device_id.h>
@@ -277,10 +278,10 @@ static struct cpudata **all_cpu_data;
  * structure is used to store those callbacks.
  */
 struct pstate_funcs {
-	int (*get_max)(void);
-	int (*get_max_physical)(void);
-	int (*get_min)(void);
-	int (*get_turbo)(void);
+	int (*get_max)(int cpu);
+	int (*get_max_physical)(int cpu);
+	int (*get_min)(int cpu);
+	int (*get_turbo)(int cpu);
 	int (*get_scaling)(void);
 	int (*get_cpu_scaling)(int cpu);
 	int (*get_aperf_mperf_shift)(void);
@@ -395,16 +396,6 @@ static int intel_pstate_get_cppc_guaranteed(int cpu)

 	return cppc_perf.nominal_perf;
 }
-
-static u32 intel_pstate_cppc_nominal(int cpu)
-{
-	u64 nominal_perf;
-
-	if (cppc_get_nominal_perf(cpu, &nominal_perf))
-		return 0;
-
-	return nominal_perf;
-}
 #else /* CONFIG_ACPI_CPPC_LIB */
 static inline void intel_pstate_set_itmt_prio(int cpu)
 {
@@ -528,35 +519,18 @@ static void intel_pstate_hybrid_hwp_adjust(struct cpudata *cpu)
 {
 	int perf_ctl_max_phys = cpu->pstate.max_pstate_physical;
 	int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling;
-	int perf_ctl_turbo = pstate_funcs.get_turbo();
-	int turbo_freq = perf_ctl_turbo * perf_ctl_scaling;
+	int perf_ctl_turbo = pstate_funcs.get_turbo(cpu->cpu);
 	int scaling = cpu->pstate.scaling;

 	pr_debug("CPU%d: perf_ctl_max_phys = %d\n", cpu->cpu, perf_ctl_max_phys);
-	pr_debug("CPU%d: perf_ctl_max = %d\n", cpu->cpu, pstate_funcs.get_max());
 	pr_debug("CPU%d: perf_ctl_turbo = %d\n", cpu->cpu, perf_ctl_turbo);
 	pr_debug("CPU%d: perf_ctl_scaling = %d\n", cpu->cpu, perf_ctl_scaling);
 	pr_debug("CPU%d: HWP_CAP guaranteed = %d\n", cpu->cpu, cpu->pstate.max_pstate);
 	pr_debug("CPU%d: HWP_CAP highest = %d\n", cpu->cpu, cpu->pstate.turbo_pstate);
 	pr_debug("CPU%d: HWP-to-frequency scaling factor: %d\n", cpu->cpu, scaling);

-	/*
-	 * If the product of the HWP performance scaling factor and the HWP_CAP
-	 * highest performance is greater than the maximum turbo frequency
-	 * corresponding to the pstate_funcs.get_turbo() return value, the
-	 * scaling factor is too high, so recompute it to make the HWP_CAP
-	 * highest performance correspond to the maximum turbo frequency.
-	 */
-	cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * scaling;
-	if (turbo_freq < cpu->pstate.turbo_freq) {
-		cpu->pstate.turbo_freq = turbo_freq;
-		scaling = DIV_ROUND_UP(turbo_freq, cpu->pstate.turbo_pstate);
-		cpu->pstate.scaling = scaling;
-
-		pr_debug("CPU%d: refined HWP-to-frequency scaling factor: %d\n",
-			 cpu->cpu, scaling);
-	}
-
+	cpu->pstate.turbo_freq = rounddown(cpu->pstate.turbo_pstate * scaling,
+					   perf_ctl_scaling);
 	cpu->pstate.max_freq = rounddown(cpu->pstate.max_pstate * scaling,
 					 perf_ctl_scaling);
@@ -1581,7 +1555,7 @@ static void intel_pstate_hwp_enable(struct cpudata *cpudata)
 		cpudata->epp_default = intel_pstate_get_epp(cpudata, 0);
 }

-static int atom_get_min_pstate(void)
+static int atom_get_min_pstate(int not_used)
 {
 	u64 value;

@@ -1589,7 +1563,7 @@ static int atom_get_min_pstate(void)
 	return (value >> 8) & 0x7F;
 }

-static int atom_get_max_pstate(void)
+static int atom_get_max_pstate(int not_used)
 {
 	u64 value;

@@ -1597,7 +1571,7 @@ static int atom_get_max_pstate(void)
 	return (value >> 16) & 0x7F;
 }

-static int atom_get_turbo_pstate(void)
+static int atom_get_turbo_pstate(int not_used)
 {
 	u64 value;

@@ -1675,23 +1649,23 @@ static void atom_get_vid(struct cpudata *cpudata)
 	cpudata->vid.turbo = value & 0x7f;
 }

-static int core_get_min_pstate(void)
+static int core_get_min_pstate(int cpu)
 {
 	u64 value;

-	rdmsrl(MSR_PLATFORM_INFO, value);
+	rdmsrl_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
 	return (value >> 40) & 0xFF;
 }

-static int core_get_max_pstate_physical(void)
+static int core_get_max_pstate_physical(int cpu)
 {
 	u64 value;

-	rdmsrl(MSR_PLATFORM_INFO, value);
+	rdmsrl_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
 	return (value >> 8) & 0xFF;
 }

-static int core_get_tdp_ratio(u64 plat_info)
+static int core_get_tdp_ratio(int cpu, u64 plat_info)
 {
 	/* Check how many TDP levels present */
 	if (plat_info & 0x600000000) {
@@ -1701,13 +1675,13 @@
 		int err;

 		/* Get the TDP level (0, 1, 2) to get ratios */
-		err = rdmsrl_safe(MSR_CONFIG_TDP_CONTROL, &tdp_ctrl);
+		err = rdmsrl_safe_on_cpu(cpu, MSR_CONFIG_TDP_CONTROL, &tdp_ctrl);
 		if (err)
 			return err;

 		/* TDP MSR are continuous starting at 0x648 */
 		tdp_msr = MSR_CONFIG_TDP_NOMINAL + (tdp_ctrl & 0x03);
-		err = rdmsrl_safe(tdp_msr, &tdp_ratio);
+		err = rdmsrl_safe_on_cpu(cpu, tdp_msr, &tdp_ratio);
 		if (err)
 			return err;

@@ -1724,7 +1698,7 @@
 	return -ENXIO;
 }

-static int core_get_max_pstate(void)
+static int core_get_max_pstate(int cpu)
 {
 	u64 tar;
 	u64 plat_info;
@@ -1732,10 +1706,10 @@ static int core_get_max_pstate(void)
 	int tdp_ratio;
 	int err;

-	rdmsrl(MSR_PLATFORM_INFO, plat_info);
+	rdmsrl_on_cpu(cpu, MSR_PLATFORM_INFO, &plat_info);
 	max_pstate = (plat_info >> 8) & 0xFF;

-	tdp_ratio = core_get_tdp_ratio(plat_info);
+	tdp_ratio = core_get_tdp_ratio(cpu, plat_info);
 	if (tdp_ratio <= 0)
 		return max_pstate;

@@ -1744,7 +1718,7 @@ static int core_get_max_pstate(void)
 		return tdp_ratio;
 	}

-	err = rdmsrl_safe(MSR_TURBO_ACTIVATION_RATIO, &tar);
+	err = rdmsrl_safe_on_cpu(cpu, MSR_TURBO_ACTIVATION_RATIO, &tar);
 	if (!err) {
 		int tar_levels;

@@ -1759,13 +1733,13 @@ static int core_get_max_pstate(void)
 	return max_pstate;
 }

-static int core_get_turbo_pstate(void)
+static int core_get_turbo_pstate(int cpu)
 {
 	u64 value;
 	int nont, ret;

-	rdmsrl(MSR_TURBO_RATIO_LIMIT, value);
-	nont = core_get_max_pstate();
+	rdmsrl_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
+	nont = core_get_max_pstate(cpu);
 	ret = (value) & 255;
 	if (ret <= nont)
 		ret = nont;
@@ -1793,51 +1767,38 @@ static int knl_get_aperf_mperf_shift(void)
 	return 10;
 }

-static int knl_get_turbo_pstate(void)
+static int knl_get_turbo_pstate(int cpu)
 {
 	u64 value;
 	int nont, ret;

-	rdmsrl(MSR_TURBO_RATIO_LIMIT, value);
-	nont = core_get_max_pstate();
+	rdmsrl_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
+	nont = core_get_max_pstate(cpu);
 	ret = (((value) >> 8) & 0xFF);
 	if (ret <= nont)
 		ret = nont;
 	return ret;
 }

-#ifdef CONFIG_ACPI_CPPC_LIB
-static u32 hybrid_ref_perf;
+static void hybrid_get_type(void *data)
+{
+	u8 *cpu_type = data;
+
+	*cpu_type = get_this_hybrid_cpu_type();
+}

 static int hybrid_get_cpu_scaling(int cpu)
 {
-	return DIV_ROUND_UP(core_get_scaling() * hybrid_ref_perf,
-			    intel_pstate_cppc_nominal(cpu));
+	u8 cpu_type = 0;
+
+	smp_call_function_single(cpu, hybrid_get_type, &cpu_type, 1);
+	/* P-cores have a smaller perf level-to-freqency scaling factor. */
+	if (cpu_type == 0x40)
+		return 78741;
+
+	return core_get_scaling();
 }

-static void intel_pstate_cppc_set_cpu_scaling(void)
-{
-	u32 min_nominal_perf = U32_MAX;
-	int cpu;
-
-	for_each_present_cpu(cpu) {
-		u32 nominal_perf = intel_pstate_cppc_nominal(cpu);
-
-		if (nominal_perf && nominal_perf < min_nominal_perf)
-			min_nominal_perf = nominal_perf;
-	}
-
-	if (min_nominal_perf < U32_MAX) {
-		hybrid_ref_perf = min_nominal_perf;
-		pstate_funcs.get_cpu_scaling = hybrid_get_cpu_scaling;
-	}
-}
-#else
-static inline void intel_pstate_cppc_set_cpu_scaling(void)
-{
-}
-#endif /* CONFIG_ACPI_CPPC_LIB */
-
 static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
 {
 	trace_cpu_frequency(pstate * cpu->pstate.scaling, cpu->cpu);
@@ -1866,10 +1827,10 @@ static void intel_pstate_max_within_limits(struct cpudata *cpu)

 static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
 {
-	int perf_ctl_max_phys = pstate_funcs.get_max_physical();
+	int perf_ctl_max_phys = pstate_funcs.get_max_physical(cpu->cpu);
 	int perf_ctl_scaling = pstate_funcs.get_scaling();

-	cpu->pstate.min_pstate = pstate_funcs.get_min();
+	cpu->pstate.min_pstate = pstate_funcs.get_min(cpu->cpu);
 	cpu->pstate.max_pstate_physical = perf_ctl_max_phys;
 	cpu->pstate.perf_ctl_scaling = perf_ctl_scaling;

@@ -1885,8 +1846,8 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
 		}
 	} else {
 		cpu->pstate.scaling = perf_ctl_scaling;
-		cpu->pstate.max_pstate = pstate_funcs.get_max();
-		cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
+		cpu->pstate.max_pstate = pstate_funcs.get_max(cpu->cpu);
+		cpu->pstate.turbo_pstate = pstate_funcs.get_turbo(cpu->cpu);
 	}

 	if (cpu->pstate.scaling == perf_ctl_scaling) {
@@ -3063,9 +3024,9 @@ static unsigned int force_load __initdata;

 static int __init intel_pstate_msrs_not_valid(void)
 {
-	if (!pstate_funcs.get_max() ||
-	    !pstate_funcs.get_min() ||
-	    !pstate_funcs.get_turbo())
+	if (!pstate_funcs.get_max(0) ||
+	    !pstate_funcs.get_min(0) ||
+	    !pstate_funcs.get_turbo(0))
 		return -ENODEV;

 	return 0;
@@ -3281,7 +3242,7 @@ static int __init intel_pstate_init(void)
 			default_driver = &intel_pstate;

 			if (boot_cpu_has(X86_FEATURE_HYBRID_CPU))
-				intel_pstate_cppc_set_cpu_scaling();
+				pstate_funcs.get_cpu_scaling = hybrid_get_cpu_scaling;

 			goto hwp_cpu_matched;
 		}
@@ -476,13 +476,13 @@ kfd_mem_dmamap_userptr(struct kgd_mem *mem,
 	struct ttm_tt *ttm = bo->tbo.ttm;
 	int ret;

+	if (WARN_ON(ttm->num_pages != src_ttm->num_pages))
+		return -EINVAL;
+
 	ttm->sg = kmalloc(sizeof(*ttm->sg), GFP_KERNEL);
 	if (unlikely(!ttm->sg))
 		return -ENOMEM;

-	if (WARN_ON(ttm->num_pages != src_ttm->num_pages))
-		return -EINVAL;
-
 	/* Same sequence as in amdgpu_ttm_tt_pin_userptr */
 	ret = sg_alloc_table_from_pages(ttm->sg, src_ttm->pages,
 					ttm->num_pages, 0,
@@ -3185,6 +3185,15 @@ static int amdgpu_device_ip_resume_phase2(struct amdgpu_device *adev)
 			return r;
 		}
 		adev->ip_blocks[i].status.hw = true;
+
+		if (adev->in_s0ix && adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC) {
+			/* disable gfxoff for IP resume. The gfxoff will be re-enabled in
+			 * amdgpu_device_resume() after IP resume.
+			 */
+			amdgpu_gfx_off_ctrl(adev, false);
+			DRM_DEBUG("will disable gfxoff for re-initializing other blocks\n");
+		}
+
 	}

 	return 0;
@@ -4114,6 +4123,13 @@ int amdgpu_device_resume(struct drm_device *dev, bool fbcon)
 	/* Make sure IB tests flushed */
 	flush_delayed_work(&adev->delayed_init_work);

+	if (adev->in_s0ix) {
+		/* re-enable gfxoff after IP resume. This re-enables gfxoff after
+		 * it was disabled for IP resume in amdgpu_device_ip_resume_phase2().
+		 */
+		amdgpu_gfx_off_ctrl(adev, true);
+		DRM_DEBUG("will enable gfxoff for the mission mode\n");
+	}
 	if (fbcon)
 		amdgpu_fbdev_set_suspend(adev, 0);

@@ -3497,6 +3497,8 @@ intel_dp_handle_hdmi_link_status_change(struct intel_dp *intel_dp)

 	drm_dp_pcon_hdmi_frl_link_error_count(&intel_dp->aux, &intel_dp->attached_connector->base);

+	intel_dp->frl.is_trained = false;
+
 	/* Restart FRL training or fall back to TMDS mode */
 	intel_dp_check_frl_training(intel_dp);
 }
@@ -56,8 +56,9 @@ static int mdp4_lvds_connector_get_modes(struct drm_connector *connector)
 	return ret;
 }

-static int mdp4_lvds_connector_mode_valid(struct drm_connector *connector,
-					  struct drm_display_mode *mode)
+static enum drm_mode_status
+mdp4_lvds_connector_mode_valid(struct drm_connector *connector,
+			       struct drm_display_mode *mode)
 {
 	struct mdp4_lvds_connector *mdp4_lvds_connector =
 			to_mdp4_lvds_connector(connector);

@@ -1229,7 +1229,7 @@ int dp_display_request_irq(struct msm_dp *dp_display)
 		return -EINVAL;
 	}

-	rc = devm_request_irq(&dp->pdev->dev, dp->irq,
+	rc = devm_request_irq(dp_display->drm_dev->dev, dp->irq,
 			dp_display_irq_handler,
 			IRQF_TRIGGER_HIGH, "dp_display_isr", dp);
 	if (rc < 0) {
@@ -212,6 +212,12 @@ int msm_dsi_modeset_init(struct msm_dsi *msm_dsi, struct drm_device *dev,
 		return -EINVAL;

 	priv = dev->dev_private;
+
+	if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) {
+		DRM_DEV_ERROR(dev->dev, "too many bridges\n");
+		return -ENOSPC;
+	}
+
 	msm_dsi->dev = dev;

 	ret = msm_dsi_host_modeset_init(msm_dsi->host, dev);
@@ -295,6 +295,11 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
 	struct platform_device *pdev = hdmi->pdev;
 	int ret;

+	if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) {
+		DRM_DEV_ERROR(dev->dev, "too many bridges\n");
+		return -ENOSPC;
+	}
+
 	hdmi->dev = dev;
 	hdmi->encoder = encoder;

@@ -90,11 +90,9 @@ void cti_write_all_hw_regs(struct cti_drvdata *drvdata)
 static int cti_enable_hw(struct cti_drvdata *drvdata)
 {
 	struct cti_config *config = &drvdata->config;
-	struct device *dev = &drvdata->csdev->dev;
 	unsigned long flags;
 	int rc = 0;

-	pm_runtime_get_sync(dev->parent);
 	spin_lock_irqsave(&drvdata->spinlock, flags);

 	/* no need to do anything if enabled or unpowered*/
@@ -119,7 +117,6 @@ cti_state_unchanged:
 /* cannot enable due to error */
cti_err_not_enabled:
 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
-	pm_runtime_put(dev->parent);
 	return rc;
 }

@@ -153,7 +150,6 @@ cti_hp_not_enabled:
 static int cti_disable_hw(struct cti_drvdata *drvdata)
 {
 	struct cti_config *config = &drvdata->config;
-	struct device *dev = &drvdata->csdev->dev;
 	struct coresight_device *csdev = drvdata->csdev;

 	spin_lock(&drvdata->spinlock);
@@ -175,7 +171,6 @@ static int cti_disable_hw(struct cti_drvdata *drvdata)
 	coresight_disclaim_device_unlocked(csdev);
 	CS_LOCK(drvdata->base);
 	spin_unlock(&drvdata->spinlock);
-	pm_runtime_put(dev->parent);
 	return 0;

 /* not disabled this call */
@@ -998,17 +998,30 @@ static ssize_t adxl372_get_fifo_watermark(struct device *dev,
 	return sprintf(buf, "%d\n", st->watermark);
 }

-static IIO_CONST_ATTR(hwfifo_watermark_min, "1");
-static IIO_CONST_ATTR(hwfifo_watermark_max,
-		      __stringify(ADXL372_FIFO_SIZE));
+static ssize_t hwfifo_watermark_min_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
+{
+	return sysfs_emit(buf, "%s\n", "1");
+}
+
+static ssize_t hwfifo_watermark_max_show(struct device *dev,
+					 struct device_attribute *attr,
+					 char *buf)
+{
+	return sysfs_emit(buf, "%s\n", __stringify(ADXL372_FIFO_SIZE));
+}
+
+static IIO_DEVICE_ATTR_RO(hwfifo_watermark_min, 0);
+static IIO_DEVICE_ATTR_RO(hwfifo_watermark_max, 0);
 static IIO_DEVICE_ATTR(hwfifo_watermark, 0444,
 		       adxl372_get_fifo_watermark, NULL, 0);
 static IIO_DEVICE_ATTR(hwfifo_enabled, 0444,
 		       adxl372_get_fifo_enabled, NULL, 0);

 static const struct attribute *adxl372_fifo_attributes[] = {
-	&iio_const_attr_hwfifo_watermark_min.dev_attr.attr,
-	&iio_const_attr_hwfifo_watermark_max.dev_attr.attr,
+	&iio_dev_attr_hwfifo_watermark_min.dev_attr.attr,
+	&iio_dev_attr_hwfifo_watermark_max.dev_attr.attr,
 	&iio_dev_attr_hwfifo_watermark.dev_attr.attr,
 	&iio_dev_attr_hwfifo_enabled.dev_attr.attr,
 	NULL,
@@ -858,7 +858,7 @@ static int tsl2583_probe(struct i2c_client *clientp,
 					 TSL2583_POWER_OFF_DELAY_MS);
 	pm_runtime_use_autosuspend(&clientp->dev);

-	ret = devm_iio_device_register(indio_dev->dev.parent, indio_dev);
+	ret = iio_device_register(indio_dev);
 	if (ret) {
 		dev_err(&clientp->dev, "%s: iio registration failed\n",
 			__func__);
@@ -1376,13 +1376,6 @@ static int ltc2983_setup(struct ltc2983_data *st, bool assign_iio)
 			return ret;
 	}

-	st->iio_chan = devm_kzalloc(&st->spi->dev,
-				    st->iio_channels * sizeof(*st->iio_chan),
-				    GFP_KERNEL);
-
-	if (!st->iio_chan)
-		return -ENOMEM;
-
 	ret = regmap_update_bits(st->regmap, LTC2983_GLOBAL_CONFIG_REG,
 				 LTC2983_NOTCH_FREQ_MASK,
 				 LTC2983_NOTCH_FREQ(st->filter_notch_freq));
@@ -1494,6 +1487,12 @@ static int ltc2983_probe(struct spi_device *spi)
 	if (ret)
 		return ret;

+	st->iio_chan = devm_kzalloc(&spi->dev,
+				    st->iio_channels * sizeof(*st->iio_chan),
+				    GFP_KERNEL);
+	if (!st->iio_chan)
+		return -ENOMEM;
+
 	ret = ltc2983_setup(st, true);
 	if (ret)
 		return ret;
@@ -330,6 +330,28 @@ static int vidioc_g_fbuf(struct file *file, void *fh, struct v4l2_framebuffer *a
 	return vivid_vid_out_g_fbuf(file, fh, a);
 }

+/*
+ * Only support the framebuffer of one of the vivid instances.
+ * Anything else is rejected.
+ */
+bool vivid_validate_fb(const struct v4l2_framebuffer *a)
+{
+	struct vivid_dev *dev;
+	int i;
+
+	for (i = 0; i < n_devs; i++) {
+		dev = vivid_devs[i];
+		if (!dev || !dev->video_pbase)
+			continue;
+		if ((unsigned long)a->base == dev->video_pbase &&
+		    a->fmt.width <= dev->display_width &&
+		    a->fmt.height <= dev->display_height &&
+		    a->fmt.bytesperline <= dev->display_byte_stride)
+			return true;
+	}
+	return false;
+}
+
 static int vidioc_s_fbuf(struct file *file, void *fh, const struct v4l2_framebuffer *a)
 {
 	struct video_device *vdev = video_devdata(file);
@@ -910,8 +932,12 @@ static int vivid_detect_feature_set(struct vivid_dev *dev, int inst,

 	/* how many inputs do we have and of what type? */
 	dev->num_inputs = num_inputs[inst];
-	if (dev->num_inputs < 1)
-		dev->num_inputs = 1;
+	if (node_type & 0x20007) {
+		if (dev->num_inputs < 1)
+			dev->num_inputs = 1;
+	} else {
+		dev->num_inputs = 0;
+	}
 	if (dev->num_inputs >= MAX_INPUTS)
 		dev->num_inputs = MAX_INPUTS;
 	for (i = 0; i < dev->num_inputs; i++) {
@@ -928,8 +954,12 @@ static int vivid_detect_feature_set(struct vivid_dev *dev, int inst,

 	/* how many outputs do we have and of what type? */
 	dev->num_outputs = num_outputs[inst];
-	if (dev->num_outputs < 1)
-		dev->num_outputs = 1;
+	if (node_type & 0x40300) {
+		if (dev->num_outputs < 1)
+			dev->num_outputs = 1;
+	} else {
+		dev->num_outputs = 0;
+	}
 	if (dev->num_outputs >= MAX_OUTPUTS)
 		dev->num_outputs = MAX_OUTPUTS;
 	for (i = 0; i < dev->num_outputs; i++) {
@@ -610,4 +610,6 @@ static inline bool vivid_is_hdmi_out(const struct vivid_dev *dev)
 	return dev->output_type[dev->output] == HDMI;
 }

+bool vivid_validate_fb(const struct v4l2_framebuffer *a);
+
 #endif
@@ -452,6 +452,12 @@ void vivid_update_format_cap(struct vivid_dev *dev, bool keep_controls)
 	tpg_reset_source(&dev->tpg, dev->src_rect.width, dev->src_rect.height, dev->field_cap);
 	dev->crop_cap = dev->src_rect;
 	dev->crop_bounds_cap = dev->src_rect;
+	if (dev->bitmap_cap &&
+	    (dev->compose_cap.width != dev->crop_cap.width ||
+	     dev->compose_cap.height != dev->crop_cap.height)) {
+		vfree(dev->bitmap_cap);
+		dev->bitmap_cap = NULL;
+	}
 	dev->compose_cap = dev->crop_cap;
 	if (V4L2_FIELD_HAS_T_OR_B(dev->field_cap))
 		dev->compose_cap.height /= 2;
@@ -909,6 +915,8 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
 	struct vivid_dev *dev = video_drvdata(file);
 	struct v4l2_rect *crop = &dev->crop_cap;
 	struct v4l2_rect *compose = &dev->compose_cap;
+	unsigned orig_compose_w = compose->width;
+	unsigned orig_compose_h = compose->height;
 	unsigned factor = V4L2_FIELD_HAS_T_OR_B(dev->field_cap) ? 2 : 1;
 	int ret;

@@ -1025,17 +1033,17 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
 			s->r.height /= factor;
 		}
 		v4l2_rect_map_inside(&s->r, &dev->fmt_cap_rect);
-		if (dev->bitmap_cap && (compose->width != s->r.width ||
-					compose->height != s->r.height)) {
-			vfree(dev->bitmap_cap);
-			dev->bitmap_cap = NULL;
-		}
 		*compose = s->r;
 		break;
 	default:
 		return -EINVAL;
 	}

+	if (dev->bitmap_cap && (compose->width != orig_compose_w ||
+				compose->height != orig_compose_h)) {
+		vfree(dev->bitmap_cap);
+		dev->bitmap_cap = NULL;
+	}
+
 	tpg_s_crop_compose(&dev->tpg, crop, compose);
 	return 0;
 }
@@ -1272,7 +1280,14 @@ int vivid_vid_cap_s_fbuf(struct file *file, void *fh,
 		return -EINVAL;
 	if (a->fmt.bytesperline < (a->fmt.width * fmt->bit_depth[0]) / 8)
 		return -EINVAL;
-	if (a->fmt.height * a->fmt.bytesperline < a->fmt.sizeimage)
+	if (a->fmt.bytesperline > a->fmt.sizeimage / a->fmt.height)
+		return -EINVAL;
+
+	/*
+	 * Only support the framebuffer of one of the vivid instances.
+	 * Anything else is rejected.
+	 */
+	if (!vivid_validate_fb(a))
 		return -EINVAL;

 	dev->fb_vbase_cap = phys_to_virt((unsigned long)a->base);
@@ -161,6 +161,20 @@ bool v4l2_valid_dv_timings(const struct v4l2_dv_timings *t,
 	    (bt->interlaced && !(caps & V4L2_DV_BT_CAP_INTERLACED)) ||
 	    (!bt->interlaced && !(caps & V4L2_DV_BT_CAP_PROGRESSIVE)))
 		return false;
+
+	/* sanity checks for the blanking timings */
+	if (!bt->interlaced &&
+	    (bt->il_vbackporch || bt->il_vsync || bt->il_vfrontporch))
+		return false;
+	if (bt->hfrontporch > 2 * bt->width ||
+	    bt->hsync > 1024 || bt->hbackporch > 1024)
+		return false;
+	if (bt->vfrontporch > 4096 ||
+	    bt->vsync > 128 || bt->vbackporch > 4096)
+		return false;
+	if (bt->interlaced && (bt->il_vfrontporch > 4096 ||
+			       bt->il_vsync > 128 || bt->il_vbackporch > 4096))
+		return false;
 	return fnc == NULL || fnc(t, fnc_handle);
 }
 EXPORT_SYMBOL_GPL(v4l2_valid_dv_timings);
@@ -135,6 +135,7 @@ struct mmc_blk_data {
 	 * track of the current selected device partition.
 	 */
 	unsigned int	part_curr;
+#define MMC_BLK_PART_INVALID	UINT_MAX /* Unknown partition active */
 	int	area_type;

 	/* debugfs files (only in main mmc_blk_data) */
@@ -986,9 +987,16 @@ static unsigned int mmc_blk_data_timeout_ms(struct mmc_host *host,
 	return ms;
 }

+/*
+ * Attempts to reset the card and get back to the requested partition.
+ * Therefore any error here must result in cancelling the block layer
+ * request, it must not be reattempted without going through the mmc_blk
+ * partition sanity checks.
+ */
 static int mmc_blk_reset(struct mmc_blk_data *md, struct mmc_host *host,
 			 int type)
 {
+	struct mmc_blk_data *main_md = dev_get_drvdata(&host->card->dev);
 	int err;

 	if (md->reset_done & type)
@@ -996,25 +1004,24 @@ static int mmc_blk_reset(struct mmc_blk_data *md, struct mmc_host *host,

 	md->reset_done |= type;
 	err = mmc_hw_reset(host);
-	/* Ensure we switch back to the correct partition */
-	if (err) {
-		struct mmc_blk_data *main_md =
-			dev_get_drvdata(&host->card->dev);
-		int part_err;
-
-		main_md->part_curr = main_md->part_type;
-		part_err = mmc_blk_part_switch(host->card, md->part_type);
-		if (part_err) {
-			/*
-			 * We have failed to get back into the correct
-			 * partition, so we need to abort the whole request.
-			 */
-			return -ENODEV;
-		}
-
-		trace_android_vh_mmc_blk_reset(host, err);
-		return err;
-	}
-	return err;
+	/*
+	 * A successful reset will leave the card in the main partition, but
+	 * upon failure it might not be, so set it to MMC_BLK_PART_INVALID
+	 * in that case.
+	 */
+	main_md->part_curr = err ? MMC_BLK_PART_INVALID : main_md->part_type;
+	trace_android_vh_mmc_blk_reset(host, err);
+	if (err)
+		return err;
+	/* Ensure we switch back to the correct partition */
+	if (mmc_blk_part_switch(host->card, md->part_type))
+		/*
+		 * We have failed to get back into the correct
+		 * partition, so we need to abort the whole request.
+		 */
+		return -ENODEV;
+	return 0;
 }

 static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
@@ -1860,8 +1867,9 @@ static void mmc_blk_mq_rw_recovery(struct mmc_queue *mq, struct request *req)
 		return;

 	/* Reset before last retry */
-	if (mqrq->retries + 1 == MMC_MAX_RETRIES)
-		mmc_blk_reset(md, card->host, type);
+	if (mqrq->retries + 1 == MMC_MAX_RETRIES &&
+	    mmc_blk_reset(md, card->host, type))
+		return;

 	/* Command errors fail fast, so use all MMC_MAX_RETRIES */
 	if (brq->sbc.error || brq->cmd.error)
@@ -290,7 +290,8 @@ static void sdio_release_func(struct device *dev)
 {
 	struct sdio_func *func = dev_to_sdio_func(dev);

-	sdio_free_func_cis(func);
+	if (!(func->card->quirks & MMC_QUIRK_NONSTD_SDIO))
+		sdio_free_func_cis(func);

 	kfree(func->info);
 	kfree(func->tmpbuf);
@@ -1069,9 +1069,10 @@ config MMC_SDHCI_OMAP

 config MMC_SDHCI_AM654
 	tristate "Support for the SDHCI Controller in TI's AM654 SOCs"
-	depends on MMC_SDHCI_PLTFM && OF && REGMAP_MMIO
+	depends on MMC_SDHCI_PLTFM && OF
 	select MMC_SDHCI_IO_ACCESSORS
 	select MMC_CQHCI
+	select REGMAP_MMIO
 	help
 	  This selects the Secure Digital Host Controller Interface (SDHCI)
 	  support present in TI's AM654 SOCs. The controller supports
@@ -1643,6 +1643,10 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
 		host->mmc_host_ops.execute_tuning = usdhc_execute_tuning;
 	}

+	err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);
+	if (err)
+		goto disable_ahb_clk;
+
 	if (imx_data->socdata->flags & ESDHC_FLAG_MAN_TUNING)
 		sdhci_esdhc_ops.platform_execute_tuning =
 					esdhc_executing_tuning;
@@ -1650,13 +1654,15 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
 	if (imx_data->socdata->flags & ESDHC_FLAG_ERR004536)
 		host->quirks |= SDHCI_QUIRK_BROKEN_ADMA;

-	if (imx_data->socdata->flags & ESDHC_FLAG_HS400)
+	if (host->caps & MMC_CAP_8_BIT_DATA &&
+	    imx_data->socdata->flags & ESDHC_FLAG_HS400)
 		host->mmc->caps2 |= MMC_CAP2_HS400;

 	if (imx_data->socdata->flags & ESDHC_FLAG_BROKEN_AUTO_CMD23)
 		host->quirks2 |= SDHCI_QUIRK2_ACMD23_BROKEN;

-	if (imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
+	if (host->caps & MMC_CAP_8_BIT_DATA &&
+	    imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
 		host->mmc->caps2 |= MMC_CAP2_HS400_ES;
 		host->mmc_host_ops.hs400_enhanced_strobe =
 					esdhc_hs400_enhanced_strobe;
@@ -1678,10 +1684,6 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
 			goto disable_ahb_clk;
 	}

-	err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);
-	if (err)
-		goto disable_ahb_clk;
-
 	sdhci_esdhc_imx_hwinit(host);

 	err = sdhci_add_host(host);
@@ -978,6 +978,12 @@ static bool glk_broken_cqhci(struct sdhci_pci_slot *slot)
 		dmi_match(DMI_SYS_VENDOR, "IRBIS"));
 }

+static bool jsl_broken_hs400es(struct sdhci_pci_slot *slot)
+{
+	return slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_JSL_EMMC &&
+			dmi_match(DMI_BIOS_VENDOR, "ASUSTeK COMPUTER INC.");
+}
+
 static int glk_emmc_probe_slot(struct sdhci_pci_slot *slot)
 {
 	int ret = byt_emmc_probe_slot(slot);
@@ -986,9 +992,11 @@ static int glk_emmc_probe_slot(struct sdhci_pci_slot *slot)
 	slot->host->mmc->caps2 |= MMC_CAP2_CQE;

 	if (slot->chip->pdev->device != PCI_DEVICE_ID_INTEL_GLK_EMMC) {
-		slot->host->mmc->caps2 |= MMC_CAP2_HS400_ES;
-		slot->host->mmc_host_ops.hs400_enhanced_strobe =
-						intel_hs400_enhanced_strobe;
+		if (!jsl_broken_hs400es(slot)) {
+			slot->host->mmc->caps2 |= MMC_CAP2_HS400_ES;
+			slot->host->mmc_host_ops.hs400_enhanced_strobe =
+						intel_hs400_enhanced_strobe;
+		}
 		slot->host->mmc->caps2 |= MMC_CAP2_CQE_DCMD;
 	}

@@ -614,11 +614,12 @@ static int ebu_nand_probe(struct platform_device *pdev)
 	ret = of_property_read_u32(chip_np, "reg", &cs);
 	if (ret) {
 		dev_err(dev, "failed to get chip select: %d\n", ret);
-		return ret;
+		goto err_of_node_put;
 	}
 	if (cs >= MAX_CS) {
 		dev_err(dev, "got invalid chip select: %d\n", cs);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto err_of_node_put;
 	}

 	ebu_host->cs_num = cs;
@@ -627,18 +628,20 @@ static int ebu_nand_probe(struct platform_device *pdev)
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, resname);
 	ebu_host->cs[cs].chipaddr = devm_ioremap_resource(dev, res);
 	if (IS_ERR(ebu_host->cs[cs].chipaddr))
-		return PTR_ERR(ebu_host->cs[cs].chipaddr);
+		goto err_of_node_put;
 	ebu_host->cs[cs].nand_pa = res->start;

 	ebu_host->clk = devm_clk_get(dev, NULL);
-	if (IS_ERR(ebu_host->clk))
-		return dev_err_probe(dev, PTR_ERR(ebu_host->clk),
-				     "failed to get clock\n");
+	if (IS_ERR(ebu_host->clk)) {
+		ret = dev_err_probe(dev, PTR_ERR(ebu_host->clk),
+				    "failed to get clock\n");
+		goto err_of_node_put;
+	}

 	ret = clk_prepare_enable(ebu_host->clk);
 	if (ret) {
 		dev_err(dev, "failed to enable clock: %d\n", ret);
-		return ret;
+		goto err_of_node_put;
 	}
 	ebu_host->clk_rate = clk_get_rate(ebu_host->clk);

@@ -703,6 +706,8 @@ err_cleanup_dma:
 	ebu_dma_cleanup(ebu_host);
err_disable_unprepare_clk:
 	clk_disable_unprepare(ebu_host->clk);
+err_of_node_put:
+	of_node_put(chip_np);

 	return ret;
 }
@@ -2672,7 +2672,7 @@ static int marvell_nand_chip_init(struct device *dev, struct marvell_nfc *nfc,
 	chip->controller = &nfc->controller;
 	nand_set_flash_node(chip, np);

-	if (!of_property_read_bool(np, "marvell,nand-keep-config"))
+	if (of_property_read_bool(np, "marvell,nand-keep-config"))
 		chip->options |= NAND_KEEP_TIMINGS;

 	mtd = nand_to_mtd(chip);
@@ -322,14 +322,14 @@ static int mpc5xxx_can_probe(struct platform_device *ofdev)
 					   &mscan_clksrc);
 	if (!priv->can.clock.freq) {
 		dev_err(&ofdev->dev, "couldn't get MSCAN clock properties\n");
-		goto exit_free_mscan;
+		goto exit_put_clock;
 	}

 	err = register_mscandev(dev, mscan_clksrc);
 	if (err) {
 		dev_err(&ofdev->dev, "registering %s failed (err=%d)\n",
 			DRV_NAME, err);
-		goto exit_free_mscan;
+		goto exit_put_clock;
 	}

 	dev_info(&ofdev->dev, "MSCAN at 0x%p, irq %d, clock %d Hz\n",
@@ -337,7 +337,9 @@ static int mpc5xxx_can_probe(struct platform_device *ofdev)

 	return 0;

-exit_free_mscan:
+exit_put_clock:
+	if (data->put_clock)
+		data->put_clock(ofdev);
 	free_candev(dev);
exit_dispose_irq:
 	irq_dispose_mapping(irq);
@@ -1106,11 +1106,13 @@ static void rcar_canfd_handle_global_receive(struct rcar_canfd_global *gpriv, u3
 {
 	struct rcar_canfd_channel *priv = gpriv->ch[ch];
 	u32 ridx = ch + RCANFD_RFFIFO_IDX;
-	u32 sts;
+	u32 sts, cc;

 	/* Handle Rx interrupts */
 	sts = rcar_canfd_read(priv->base, RCANFD_RFSTS(ridx));
-	if (likely(sts & RCANFD_RFSTS_RFIF)) {
+	cc = rcar_canfd_read(priv->base, RCANFD_RFCC(ridx));
+	if (likely(sts & RCANFD_RFSTS_RFIF &&
+		   cc & RCANFD_RFCC_RFIE)) {
 		if (napi_schedule_prep(&priv->napi)) {
 			/* Disable Rx FIFO interrupts */
 			rcar_canfd_clear_bit(priv->base,
@@ -1195,11 +1197,9 @@ static void rcar_canfd_handle_channel_tx(struct rcar_canfd_global *gpriv, u32 ch

 static irqreturn_t rcar_canfd_channel_tx_interrupt(int irq, void *dev_id)
 {
-	struct rcar_canfd_global *gpriv = dev_id;
-	u32 ch;
+	struct rcar_canfd_channel *priv = dev_id;

-	for_each_set_bit(ch, &gpriv->channels_mask, RCANFD_NUM_CHANNELS)
-		rcar_canfd_handle_channel_tx(gpriv, ch);
+	rcar_canfd_handle_channel_tx(priv->gpriv, priv->channel);

 	return IRQ_HANDLED;
 }
@@ -1227,11 +1227,9 @@ static void rcar_canfd_handle_channel_err(struct rcar_canfd_global *gpriv, u32 c

 static irqreturn_t rcar_canfd_channel_err_interrupt(int irq, void *dev_id)
 {
-	struct rcar_canfd_global *gpriv = dev_id;
-	u32 ch;
+	struct rcar_canfd_channel *priv = dev_id;

-	for_each_set_bit(ch, &gpriv->channels_mask, RCANFD_NUM_CHANNELS)
-		rcar_canfd_handle_channel_err(gpriv, ch);
+	rcar_canfd_handle_channel_err(priv->gpriv, priv->channel);

 	return IRQ_HANDLED;
 }
@@ -1649,6 +1647,7 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
 	priv->ndev = ndev;
 	priv->base = gpriv->base;
 	priv->channel = ch;
+	priv->gpriv = gpriv;
 	priv->can.clock.freq = fcan_freq;
 	dev_info(&pdev->dev, "can_clk rate is %u\n", priv->can.clock.freq);

@@ -1677,7 +1676,7 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
 		}
 		err = devm_request_irq(&pdev->dev, err_irq,
 				       rcar_canfd_channel_err_interrupt, 0,
-				       irq_name, gpriv);
+				       irq_name, priv);
 		if (err) {
 			dev_err(&pdev->dev, "devm_request_irq CH Err(%d) failed, error %d\n",
 				err_irq, err);
@@ -1691,7 +1690,7 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,
 		}
 		err = devm_request_irq(&pdev->dev, tx_irq,
 				       rcar_canfd_channel_tx_interrupt, 0,
-				       irq_name, gpriv);
+				       irq_name, priv);
 		if (err) {
 			dev_err(&pdev->dev, "devm_request_irq Tx (%d) failed, error %d\n",
 				tx_irq, err);
@@ -1715,7 +1714,6 @@ static int rcar_canfd_channel_probe(struct rcar_canfd_global *gpriv, u32 ch,

 	priv->can.do_set_mode = rcar_canfd_do_set_mode;
 	priv->can.do_get_berr_counter = rcar_canfd_get_berr_counter;
-	priv->gpriv = gpriv;
 	SET_NETDEV_DEV(ndev, &pdev->dev);

 	netif_napi_add(ndev, &priv->napi, rcar_canfd_rx_poll,
@@ -1419,11 +1419,14 @@ static int mcp251x_can_probe(struct spi_device *spi)

 	ret = mcp251x_gpio_setup(priv);
 	if (ret)
-		goto error_probe;
+		goto out_unregister_candev;

 	netdev_info(net, "MCP%x successfully initialized.\n", priv->model);
 	return 0;

+out_unregister_candev:
+	unregister_candev(net);
+
error_probe:
 	destroy_workqueue(priv->wq);
 	priv->wq = NULL;
@@ -1873,7 +1873,7 @@ static int kvaser_usb_hydra_start_chip(struct kvaser_usb_net_priv *priv)
 {
 	int err;

-	init_completion(&priv->start_comp);
+	reinit_completion(&priv->start_comp);

 	err = kvaser_usb_hydra_send_simple_cmd(priv->dev, CMD_START_CHIP_REQ,
 					       priv->channel);
@@ -1891,7 +1891,7 @@ static int kvaser_usb_hydra_stop_chip(struct kvaser_usb_net_priv *priv)
 {
 	int err;

-	init_completion(&priv->stop_comp);
+	reinit_completion(&priv->stop_comp);

 	/* Make sure we do not report invalid BUS_OFF from CMD_CHIP_STATE_EVENT
 	 * see comment in kvaser_usb_hydra_update_state()
@@ -1324,7 +1324,7 @@ static int kvaser_usb_leaf_start_chip(struct kvaser_usb_net_priv *priv)
 {
 	int err;

-	init_completion(&priv->start_comp);
+	reinit_completion(&priv->start_comp);

 	err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_START_CHIP,
 					      priv->channel);
@@ -1342,7 +1342,7 @@ static int kvaser_usb_leaf_stop_chip(struct kvaser_usb_net_priv *priv)
 {
 	int err;

-	init_completion(&priv->stop_comp);
+	reinit_completion(&priv->stop_comp);

 	err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_STOP_CHIP,
 					      priv->channel);
@@ -239,6 +239,7 @@ enum xgbe_sfp_speed {
 #define XGBE_SFP_BASE_BR_1GBE_MAX		0x0d
 #define XGBE_SFP_BASE_BR_10GBE_MIN		0x64
 #define XGBE_SFP_BASE_BR_10GBE_MAX		0x68
+#define XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX	0x78

 #define XGBE_SFP_BASE_CU_CABLE_LEN		18

@@ -284,6 +285,8 @@ struct xgbe_sfp_eeprom {
 #define XGBE_BEL_FUSE_VENDOR	"BEL-FUSE "
 #define XGBE_BEL_FUSE_PARTNO	"1GBT-SFP06 "

+#define XGBE_MOLEX_VENDOR	"Molex Inc. "
+
 struct xgbe_sfp_ascii {
 	union {
 		char vendor[XGBE_SFP_BASE_VENDOR_NAME_LEN + 1];
@@ -834,7 +837,11 @@ static bool xgbe_phy_sfp_bit_rate(struct xgbe_sfp_eeprom *sfp_eeprom,
 		break;
 	case XGBE_SFP_SPEED_10000:
 		min = XGBE_SFP_BASE_BR_10GBE_MIN;
-		max = XGBE_SFP_BASE_BR_10GBE_MAX;
+		if (memcmp(&sfp_eeprom->base[XGBE_SFP_BASE_VENDOR_NAME],
+			   XGBE_MOLEX_VENDOR, XGBE_SFP_BASE_VENDOR_NAME_LEN) == 0)
+			max = XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX;
+		else
+			max = XGBE_SFP_BASE_BR_10GBE_MAX;
 		break;
 	default:
 		return false;
@@ -1151,7 +1158,10 @@ static void xgbe_phy_sfp_parse_eeprom(struct xgbe_prv_data *pdata)
 	}

 	/* Determine the type of SFP */
-	if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
+	if (phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE &&
+	    xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
+		phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
+	else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
 		phy_data->sfp_base = XGBE_SFP_BASE_10000_SR;
 	else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_LR)
 		phy_data->sfp_base = XGBE_SFP_BASE_10000_LR;
@@ -1167,9 +1177,6 @@ static void xgbe_phy_sfp_parse_eeprom(struct xgbe_prv_data *pdata)
 		phy_data->sfp_base = XGBE_SFP_BASE_1000_CX;
 	else if (sfp_base[XGBE_SFP_BASE_1GBE_CC] & XGBE_SFP_BASE_1GBE_CC_T)
 		phy_data->sfp_base = XGBE_SFP_BASE_1000_T;
-	else if ((phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE) &&
-		 xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
-		phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;

 	switch (phy_data->sfp_base) {
 	case XGBE_SFP_BASE_1000_T:
@@ -1451,26 +1451,57 @@ static void aq_check_txsa_expiration(struct aq_nic_s *nic)
 			egress_sa_threshold_expired);
 }

+#define AQ_LOCKED_MDO_DEF(mdo)					\
+static int aq_locked_mdo_##mdo(struct macsec_context *ctx)	\
+{								\
+	struct aq_nic_s *nic = netdev_priv(ctx->netdev);	\
+	int ret;						\
+	mutex_lock(&nic->macsec_mutex);				\
+	ret = aq_mdo_##mdo(ctx);				\
+	mutex_unlock(&nic->macsec_mutex);			\
+	return ret;						\
+}
+
+AQ_LOCKED_MDO_DEF(dev_open)
+AQ_LOCKED_MDO_DEF(dev_stop)
+AQ_LOCKED_MDO_DEF(add_secy)
+AQ_LOCKED_MDO_DEF(upd_secy)
+AQ_LOCKED_MDO_DEF(del_secy)
+AQ_LOCKED_MDO_DEF(add_rxsc)
+AQ_LOCKED_MDO_DEF(upd_rxsc)
+AQ_LOCKED_MDO_DEF(del_rxsc)
+AQ_LOCKED_MDO_DEF(add_rxsa)
+AQ_LOCKED_MDO_DEF(upd_rxsa)
+AQ_LOCKED_MDO_DEF(del_rxsa)
+AQ_LOCKED_MDO_DEF(add_txsa)
+AQ_LOCKED_MDO_DEF(upd_txsa)
+AQ_LOCKED_MDO_DEF(del_txsa)
+AQ_LOCKED_MDO_DEF(get_dev_stats)
+AQ_LOCKED_MDO_DEF(get_tx_sc_stats)
+AQ_LOCKED_MDO_DEF(get_tx_sa_stats)
+AQ_LOCKED_MDO_DEF(get_rx_sc_stats)
+AQ_LOCKED_MDO_DEF(get_rx_sa_stats)
+
 const struct macsec_ops aq_macsec_ops = {
-	.mdo_dev_open = aq_mdo_dev_open,
-	.mdo_dev_stop = aq_mdo_dev_stop,
-	.mdo_add_secy = aq_mdo_add_secy,
-	.mdo_upd_secy = aq_mdo_upd_secy,
-	.mdo_del_secy = aq_mdo_del_secy,
-	.mdo_add_rxsc = aq_mdo_add_rxsc,
-	.mdo_upd_rxsc = aq_mdo_upd_rxsc,
-	.mdo_del_rxsc = aq_mdo_del_rxsc,
-	.mdo_add_rxsa = aq_mdo_add_rxsa,
-	.mdo_upd_rxsa = aq_mdo_upd_rxsa,
-	.mdo_del_rxsa = aq_mdo_del_rxsa,
-	.mdo_add_txsa = aq_mdo_add_txsa,
-	.mdo_upd_txsa = aq_mdo_upd_txsa,
-	.mdo_del_txsa = aq_mdo_del_txsa,
-	.mdo_get_dev_stats = aq_mdo_get_dev_stats,
-	.mdo_get_tx_sc_stats = aq_mdo_get_tx_sc_stats,
-	.mdo_get_tx_sa_stats = aq_mdo_get_tx_sa_stats,
-	.mdo_get_rx_sc_stats = aq_mdo_get_rx_sc_stats,
-	.mdo_get_rx_sa_stats = aq_mdo_get_rx_sa_stats,
+	.mdo_dev_open = aq_locked_mdo_dev_open,
+	.mdo_dev_stop = aq_locked_mdo_dev_stop,
+	.mdo_add_secy = aq_locked_mdo_add_secy,
+	.mdo_upd_secy = aq_locked_mdo_upd_secy,
+	.mdo_del_secy = aq_locked_mdo_del_secy,
+	.mdo_add_rxsc = aq_locked_mdo_add_rxsc,
+	.mdo_upd_rxsc = aq_locked_mdo_upd_rxsc,
+	.mdo_del_rxsc = aq_locked_mdo_del_rxsc,
+	.mdo_add_rxsa = aq_locked_mdo_add_rxsa,
+	.mdo_upd_rxsa = aq_locked_mdo_upd_rxsa,
+	.mdo_del_rxsa = aq_locked_mdo_del_rxsa,
+	.mdo_add_txsa = aq_locked_mdo_add_txsa,
+	.mdo_upd_txsa = aq_locked_mdo_upd_txsa,
+	.mdo_del_txsa = aq_locked_mdo_del_txsa,
+	.mdo_get_dev_stats = aq_locked_mdo_get_dev_stats,
+	.mdo_get_tx_sc_stats = aq_locked_mdo_get_tx_sc_stats,
+	.mdo_get_tx_sa_stats = aq_locked_mdo_get_tx_sa_stats,
+	.mdo_get_rx_sc_stats = aq_locked_mdo_get_rx_sc_stats,
+	.mdo_get_rx_sa_stats = aq_locked_mdo_get_rx_sa_stats,
 };

 int aq_macsec_init(struct aq_nic_s *nic)
@@ -1492,6 +1523,7 @@ int aq_macsec_init(struct aq_nic_s *nic)

 	nic->ndev->features |= NETIF_F_HW_MACSEC;
 	nic->ndev->macsec_ops = &aq_macsec_ops;
+	mutex_init(&nic->macsec_mutex);

 	return 0;
 }
@@ -1515,7 +1547,7 @@ int aq_macsec_enable(struct aq_nic_s *nic)
 	if (!nic->macsec_cfg)
 		return 0;

-	rtnl_lock();
+	mutex_lock(&nic->macsec_mutex);

 	if (nic->aq_fw_ops->send_macsec_req) {
 		struct macsec_cfg_request cfg = { 0 };
@@ -1564,7 +1596,7 @@ int aq_macsec_enable(struct aq_nic_s *nic)
 	ret = aq_apply_macsec_cfg(nic);

 unlock:
-	rtnl_unlock();
+	mutex_unlock(&nic->macsec_mutex);
 	return ret;
 }

@@ -1576,9 +1608,9 @@ void aq_macsec_work(struct aq_nic_s *nic)
|
||||
if (!netif_carrier_ok(nic->ndev))
|
||||
return;
|
||||
|
||||
rtnl_lock();
|
||||
mutex_lock(&nic->macsec_mutex);
|
||||
aq_check_txsa_expiration(nic);
|
||||
rtnl_unlock();
|
||||
mutex_unlock(&nic->macsec_mutex);
|
||||
}
|
||||
|
||||
int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic)
|
||||
@@ -1589,21 +1621,30 @@ int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic)
|
||||
if (!cfg)
|
||||
return 0;
|
||||
|
||||
mutex_lock(&nic->macsec_mutex);
|
||||
|
||||
for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
|
||||
if (!test_bit(i, &cfg->rxsc_idx_busy))
|
||||
continue;
|
||||
cnt += hweight_long(cfg->aq_rxsc[i].rx_sa_idx_busy);
|
||||
}
|
||||
|
||||
mutex_unlock(&nic->macsec_mutex);
|
||||
return cnt;
|
||||
}
|
||||
|
||||
int aq_macsec_tx_sc_cnt(struct aq_nic_s *nic)
|
||||
{
|
||||
int cnt;
|
||||
|
||||
if (!nic->macsec_cfg)
|
||||
return 0;
|
||||
|
||||
return hweight_long(nic->macsec_cfg->txsc_idx_busy);
|
||||
mutex_lock(&nic->macsec_mutex);
|
||||
cnt = hweight_long(nic->macsec_cfg->txsc_idx_busy);
|
||||
mutex_unlock(&nic->macsec_mutex);
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic)
|
||||
@@ -1614,12 +1655,15 @@ int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic)
|
||||
if (!cfg)
|
||||
return 0;
|
||||
|
||||
mutex_lock(&nic->macsec_mutex);
|
||||
|
||||
for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
|
||||
if (!test_bit(i, &cfg->txsc_idx_busy))
|
||||
continue;
|
||||
cnt += hweight_long(cfg->aq_txsc[i].tx_sa_idx_busy);
|
||||
}
|
||||
|
||||
mutex_unlock(&nic->macsec_mutex);
|
||||
return cnt;
|
||||
}
|
||||
|
||||
@@ -1691,6 +1735,8 @@ u64 *aq_macsec_get_stats(struct aq_nic_s *nic, u64 *data)
|
||||
if (!cfg)
|
||||
return data;
|
||||
|
||||
mutex_lock(&nic->macsec_mutex);
|
||||
|
||||
aq_macsec_update_stats(nic);
|
||||
|
||||
common_stats = &cfg->stats;
|
||||
@@ -1773,5 +1819,7 @@ u64 *aq_macsec_get_stats(struct aq_nic_s *nic, u64 *data)
|
||||
|
||||
data += i;
|
||||
|
||||
mutex_unlock(&nic->macsec_mutex);
|
||||
|
||||
return data;
|
||||
}
|
||||
|
||||
@@ -154,6 +154,8 @@ struct aq_nic_s {
|
||||
struct mutex fwreq_mutex;
|
||||
#if IS_ENABLED(CONFIG_MACSEC)
|
||||
struct aq_macsec_cfg *macsec_cfg;
|
||||
/* mutex to protect data in macsec_cfg */
|
||||
struct mutex macsec_mutex;
|
||||
#endif
|
||||
/* PTP support */
|
||||
struct aq_ptp_s *aq_ptp;
|
||||
|
||||
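For reference, each AQ_LOCKED_MDO_DEF(x) invocation above expands mechanically to a wrapper of the following shape (shown here expanded for dev_open):

    static int aq_locked_mdo_dev_open(struct macsec_context *ctx)
    {
            struct aq_nic_s *nic = netdev_priv(ctx->netdev);
            int ret;

            mutex_lock(&nic->macsec_mutex);
            ret = aq_mdo_dev_open(ctx);
            mutex_unlock(&nic->macsec_mutex);
            return ret;
    }

The ops table then points at the locked wrappers, so every MACsec callback serialises on the driver-local mutex instead of relying on the RTNL lock, which the ethtool stats path does not hold.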
@@ -561,8 +561,6 @@ static int bcm4908_enet_start_xmit(struct sk_buff *skb, struct net_device *netde

	if (++ring->write_idx == ring->length - 1)
		ring->write_idx = 0;
-	enet->netdev->stats.tx_bytes += skb->len;
-	enet->netdev->stats.tx_packets++;

	return NETDEV_TX_OK;
 }
@@ -646,13 +644,17 @@ static int bcm4908_enet_poll_tx(struct napi_struct *napi, int weight)

		dma_unmap_single(dev, slot->dma_addr, slot->len, DMA_TO_DEVICE);
		dev_kfree_skb(slot->skb);
-		bytes += slot->len;
-		if (++tx_ring->read_idx == tx_ring->length)
-			tx_ring->read_idx = 0;

		handled++;
+		bytes += slot->len;
+
+		if (++tx_ring->read_idx == tx_ring->length)
+			tx_ring->read_idx = 0;
	}

+	enet->netdev->stats.tx_packets += handled;
+	enet->netdev->stats.tx_bytes += bytes;
+
	if (handled < weight) {
		napi_complete_done(napi, handled);
		bcm4908_enet_dma_ring_intrs_on(enet, tx_ring);

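The net effect of the two hunks above is that a frame is only counted once it has actually been reaped from the TX ring. Condensed sketch of the resulting accounting (the slot helpers are hypothetical shorthand, not driver functions):

    /* in the NAPI TX-completion poll, not in ndo_start_xmit() */
    while (handled < weight && slot_was_transmitted(tx_ring)) {
            bytes += slot->len;             /* per reaped descriptor */
            handled++;
            advance_read_idx(tx_ring);
    }
    ndev->stats.tx_packets += handled;      /* batched once per poll */
    ndev->stats.tx_bytes += bytes;

Counting in start_xmit() would also have counted frames that were queued but never left the MAC.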
@@ -1991,6 +1991,9 @@ static int bcm_sysport_open(struct net_device *dev)
		goto out_clk_disable;
	}

+	/* Indicate that the MAC is responsible for PHY PM */
+	phydev->mac_managed_pm = true;
+
	/* Reset house keeping link status */
	priv->old_duplex = -1;
	priv->old_link = -1;

@@ -880,6 +880,7 @@ static int macb_mii_probe(struct net_device *dev)

	bp->phylink_config.dev = &dev->dev;
	bp->phylink_config.type = PHYLINK_NETDEV;
+	bp->phylink_config.mac_managed_pm = true;

	if (bp->phy_interface == PHY_INTERFACE_MODE_SGMII) {
		bp->phylink_config.poll_fixed_state = true;

@@ -1800,7 +1800,12 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
	else
		enetc_rxbdr_wr(hw, idx, ENETC_RBBSR, ENETC_RXB_DMA_SIZE);

+	/* Also prepare the consumer index in case page allocation never
+	 * succeeds. In that case, hardware will never advance producer index
+	 * to match consumer index, and will drop all frames.
+	 */
	enetc_rxbdr_wr(hw, idx, ENETC_RBPIR, 0);
+	enetc_rxbdr_wr(hw, idx, ENETC_RBCIR, 1);

	/* enable Rx ints by setting pkt thr to 1 */
	enetc_rxbdr_wr(hw, idx, ENETC_RBICR0, ENETC_RBICR0_ICEN | 0x1);

@@ -2336,6 +2336,31 @@ static u32 fec_enet_register_offset[] = {
	IEEE_R_DROP, IEEE_R_FRAME_OK, IEEE_R_CRC, IEEE_R_ALIGN, IEEE_R_MACERR,
	IEEE_R_FDXFC, IEEE_R_OCTETS_OK
 };
+/* for i.MX6ul */
+static u32 fec_enet_register_offset_6ul[] = {
+	FEC_IEVENT, FEC_IMASK, FEC_R_DES_ACTIVE_0, FEC_X_DES_ACTIVE_0,
+	FEC_ECNTRL, FEC_MII_DATA, FEC_MII_SPEED, FEC_MIB_CTRLSTAT, FEC_R_CNTRL,
+	FEC_X_CNTRL, FEC_ADDR_LOW, FEC_ADDR_HIGH, FEC_OPD, FEC_TXIC0, FEC_RXIC0,
+	FEC_HASH_TABLE_HIGH, FEC_HASH_TABLE_LOW, FEC_GRP_HASH_TABLE_HIGH,
+	FEC_GRP_HASH_TABLE_LOW, FEC_X_WMRK, FEC_R_DES_START_0,
+	FEC_X_DES_START_0, FEC_R_BUFF_SIZE_0, FEC_R_FIFO_RSFL, FEC_R_FIFO_RSEM,
+	FEC_R_FIFO_RAEM, FEC_R_FIFO_RAFL, FEC_RACC,
+	RMON_T_DROP, RMON_T_PACKETS, RMON_T_BC_PKT, RMON_T_MC_PKT,
+	RMON_T_CRC_ALIGN, RMON_T_UNDERSIZE, RMON_T_OVERSIZE, RMON_T_FRAG,
+	RMON_T_JAB, RMON_T_COL, RMON_T_P64, RMON_T_P65TO127, RMON_T_P128TO255,
+	RMON_T_P256TO511, RMON_T_P512TO1023, RMON_T_P1024TO2047,
+	RMON_T_P_GTE2048, RMON_T_OCTETS,
+	IEEE_T_DROP, IEEE_T_FRAME_OK, IEEE_T_1COL, IEEE_T_MCOL, IEEE_T_DEF,
+	IEEE_T_LCOL, IEEE_T_EXCOL, IEEE_T_MACERR, IEEE_T_CSERR, IEEE_T_SQE,
+	IEEE_T_FDXFC, IEEE_T_OCTETS_OK,
+	RMON_R_PACKETS, RMON_R_BC_PKT, RMON_R_MC_PKT, RMON_R_CRC_ALIGN,
+	RMON_R_UNDERSIZE, RMON_R_OVERSIZE, RMON_R_FRAG, RMON_R_JAB,
+	RMON_R_RESVD_O, RMON_R_P64, RMON_R_P65TO127, RMON_R_P128TO255,
+	RMON_R_P256TO511, RMON_R_P512TO1023, RMON_R_P1024TO2047,
+	RMON_R_P_GTE2048, RMON_R_OCTETS,
+	IEEE_R_DROP, IEEE_R_FRAME_OK, IEEE_R_CRC, IEEE_R_ALIGN, IEEE_R_MACERR,
+	IEEE_R_FDXFC, IEEE_R_OCTETS_OK
+};
 #else
 static __u32 fec_enet_register_version = 1;
 static u32 fec_enet_register_offset[] = {
@@ -2360,7 +2385,24 @@ static void fec_enet_get_regs(struct net_device *ndev,
	u32 *buf = (u32 *)regbuf;
	u32 i, off;
	int ret;
+#if defined(CONFIG_M523x) || defined(CONFIG_M527x) || defined(CONFIG_M528x) || \
+	defined(CONFIG_M520x) || defined(CONFIG_M532x) || defined(CONFIG_ARM) || \
+	defined(CONFIG_ARM64) || defined(CONFIG_COMPILE_TEST)
+	u32 *reg_list;
+	u32 reg_cnt;

+	if (!of_machine_is_compatible("fsl,imx6ul")) {
+		reg_list = fec_enet_register_offset;
+		reg_cnt = ARRAY_SIZE(fec_enet_register_offset);
+	} else {
+		reg_list = fec_enet_register_offset_6ul;
+		reg_cnt = ARRAY_SIZE(fec_enet_register_offset_6ul);
+	}
+#else
+	/* coldfire */
+	static u32 *reg_list = fec_enet_register_offset;
+	static const u32 reg_cnt = ARRAY_SIZE(fec_enet_register_offset);
+#endif
	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		return;
@@ -2369,8 +2411,8 @@ static void fec_enet_get_regs(struct net_device *ndev,

	memset(buf, 0, regs->len);

-	for (i = 0; i < ARRAY_SIZE(fec_enet_register_offset); i++) {
-		off = fec_enet_register_offset[i];
+	for (i = 0; i < reg_cnt; i++) {
+		off = reg_list[i];

		if ((off == FEC_R_BOUND || off == FEC_R_FSTART) &&
		    !(fep->quirks & FEC_QUIRK_HAS_FRREG))

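The i.MX6UL exposes a reduced FEC register file, so dumping the full list can touch registers that do not exist. The selection added above boils down to this pattern (restated with the condition un-negated; of_machine_is_compatible() matches the root node's compatible string):

    u32 *reg_list;
    u32 reg_cnt;

    if (of_machine_is_compatible("fsl,imx6ul")) {
            reg_list = fec_enet_register_offset_6ul;
            reg_cnt = ARRAY_SIZE(fec_enet_register_offset_6ul);
    } else {
            reg_list = fec_enet_register_offset;
            reg_cnt = ARRAY_SIZE(fec_enet_register_offset);
    }

The dump loop then iterates reg_list/reg_cnt instead of hard-coding one array.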
@@ -85,6 +85,7 @@ static int hinic_dbg_get_func_table(struct hinic_dev *nic_dev, int idx)
	struct tag_sml_funcfg_tbl *funcfg_table_elem;
	struct hinic_cmd_lt_rd *read_data;
	u16 out_size = sizeof(*read_data);
+	int ret = ~0;
	int err;

	read_data = kzalloc(sizeof(*read_data), GFP_KERNEL);
@@ -111,20 +112,25 @@ static int hinic_dbg_get_func_table(struct hinic_dev *nic_dev, int idx)

	switch (idx) {
	case VALID:
-		return funcfg_table_elem->dw0.bs.valid;
+		ret = funcfg_table_elem->dw0.bs.valid;
+		break;
	case RX_MODE:
-		return funcfg_table_elem->dw0.bs.nic_rx_mode;
+		ret = funcfg_table_elem->dw0.bs.nic_rx_mode;
+		break;
	case MTU:
-		return funcfg_table_elem->dw1.bs.mtu;
+		ret = funcfg_table_elem->dw1.bs.mtu;
+		break;
	case RQ_DEPTH:
-		return funcfg_table_elem->dw13.bs.cfg_rq_depth;
+		ret = funcfg_table_elem->dw13.bs.cfg_rq_depth;
+		break;
	case QUEUE_NUM:
-		return funcfg_table_elem->dw13.bs.cfg_q_num;
+		ret = funcfg_table_elem->dw13.bs.cfg_q_num;
+		break;
	}

	kfree(read_data);

-	return ~0;
+	return ret;
 }

 static ssize_t hinic_dbg_cmd_read(struct file *filp, char __user *buffer, size_t count,

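The fix above converts the early returns into breaks so the function always reaches its single kfree(). A self-contained sketch of that single-exit shape (hypothetical cmd_buf type and field, not the hinic structures):

    struct cmd_buf { int field0; };

    static int read_field(int idx)
    {
            struct cmd_buf *buf;
            int ret = ~0;

            buf = kzalloc(sizeof(*buf), GFP_KERNEL);
            if (!buf)
                    return -ENOMEM;

            switch (idx) {
            case 0:
                    ret = buf->field0;
                    break;          /* not return: keep the cleanup reachable */
            default:
                    break;
            }

            kfree(buf);             /* single exit point frees on every path */
            return ret;
    }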
@@ -929,7 +929,7 @@ int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,

 err_set_cmdq_depth:
	hinic_ceq_unregister_cb(&func_to_io->ceqs, HINIC_CEQ_CMDQ);
-
+	free_cmdq(&cmdqs->cmdq[HINIC_CMDQ_SYNC]);
 err_cmdq_ctxt:
	hinic_wqs_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
			    HINIC_MAX_CMDQ_TYPES);

@@ -892,7 +892,7 @@ int hinic_set_interrupt_cfg(struct hinic_hwdev *hwdev,
	if (err)
		return -EINVAL;

-	interrupt_info->lli_credit_cnt = temp_info.lli_timer_cnt;
+	interrupt_info->lli_credit_cnt = temp_info.lli_credit_cnt;
	interrupt_info->lli_timer_cnt = temp_info.lli_timer_cnt;

	err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,

@@ -1180,7 +1180,6 @@ int hinic_vf_func_init(struct hinic_hwdev *hwdev)
			dev_err(&hwdev->hwif->pdev->dev,
				"Failed to register VF, err: %d, status: 0x%x, out size: 0x%x\n",
				err, register_info.status, out_size);
-			hinic_unregister_vf_mbox_cb(hwdev, HINIC_MOD_L2NIC);
			return -EIO;
		}
	} else {

@@ -2898,6 +2898,7 @@ static struct device *ehea_register_port(struct ehea_port *port,
	ret = of_device_register(&port->ofdev);
	if (ret) {
		pr_err("failed to register device. ret=%d\n", ret);
+		put_device(&port->ofdev.dev);
		goto out;
	}

@@ -3085,10 +3085,17 @@ static int i40e_get_rss_hash_opts(struct i40e_pf *pf, struct ethtool_rxnfc *cmd)

	if (cmd->flow_type == TCP_V4_FLOW ||
	    cmd->flow_type == UDP_V4_FLOW) {
-		if (i_set & I40E_L3_SRC_MASK)
-			cmd->data |= RXH_IP_SRC;
-		if (i_set & I40E_L3_DST_MASK)
-			cmd->data |= RXH_IP_DST;
+		if (hw->mac.type == I40E_MAC_X722) {
+			if (i_set & I40E_X722_L3_SRC_MASK)
+				cmd->data |= RXH_IP_SRC;
+			if (i_set & I40E_X722_L3_DST_MASK)
+				cmd->data |= RXH_IP_DST;
+		} else {
+			if (i_set & I40E_L3_SRC_MASK)
+				cmd->data |= RXH_IP_SRC;
+			if (i_set & I40E_L3_DST_MASK)
+				cmd->data |= RXH_IP_DST;
+		}
	} else if (cmd->flow_type == TCP_V6_FLOW ||
		  cmd->flow_type == UDP_V6_FLOW) {
		if (i_set & I40E_L3_V6_SRC_MASK)
@@ -3446,12 +3453,15 @@ static int i40e_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd,

 /**
  * i40e_get_rss_hash_bits - Read RSS Hash bits from register
+ * @hw: hw structure
  * @nfc: pointer to user request
  * @i_setc: bits currently set
  *
  * Returns value of bits to be set per user request
  **/
-static u64 i40e_get_rss_hash_bits(struct ethtool_rxnfc *nfc, u64 i_setc)
+static u64 i40e_get_rss_hash_bits(struct i40e_hw *hw,
+				  struct ethtool_rxnfc *nfc,
+				  u64 i_setc)
 {
	u64 i_set = i_setc;
	u64 src_l3 = 0, dst_l3 = 0;
@@ -3470,8 +3480,13 @@ static u64 i40e_get_rss_hash_bits(struct ethtool_rxnfc *nfc, u64 i_setc)
		dst_l3 = I40E_L3_V6_DST_MASK;
	} else if (nfc->flow_type == TCP_V4_FLOW ||
		  nfc->flow_type == UDP_V4_FLOW) {
-		src_l3 = I40E_L3_SRC_MASK;
-		dst_l3 = I40E_L3_DST_MASK;
+		if (hw->mac.type == I40E_MAC_X722) {
+			src_l3 = I40E_X722_L3_SRC_MASK;
+			dst_l3 = I40E_X722_L3_DST_MASK;
+		} else {
+			src_l3 = I40E_L3_SRC_MASK;
+			dst_l3 = I40E_L3_DST_MASK;
+		}
	} else {
		/* Any other flow type are not supported here */
		return i_set;
@@ -3489,6 +3504,7 @@ static u64 i40e_get_rss_hash_bits(struct ethtool_rxnfc *nfc, u64 i_setc)
	return i_set;
 }

+#define FLOW_PCTYPES_SIZE 64
 /**
  * i40e_set_rss_hash_opt - Enable/Disable flow types for RSS hash
  * @pf: pointer to the physical function struct
@@ -3501,9 +3517,11 @@ static int i40e_set_rss_hash_opt(struct i40e_pf *pf, struct ethtool_rxnfc *nfc)
	struct i40e_hw *hw = &pf->hw;
	u64 hena = (u64)i40e_read_rx_ctl(hw, I40E_PFQF_HENA(0)) |
		   ((u64)i40e_read_rx_ctl(hw, I40E_PFQF_HENA(1)) << 32);
-	u8 flow_pctype = 0;
+	DECLARE_BITMAP(flow_pctypes, FLOW_PCTYPES_SIZE);
	u64 i_set, i_setc;

+	bitmap_zero(flow_pctypes, FLOW_PCTYPES_SIZE);
+
	if (pf->flags & I40E_FLAG_MFP_ENABLED) {
		dev_err(&pf->pdev->dev,
			"Change of RSS hash input set is not supported when MFP mode is enabled\n");
@@ -3519,36 +3537,35 @@ static int i40e_set_rss_hash_opt(struct i40e_pf *pf, struct ethtool_rxnfc *nfc)

	switch (nfc->flow_type) {
	case TCP_V4_FLOW:
-		flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV4_TCP;
+		set_bit(I40E_FILTER_PCTYPE_NONF_IPV4_TCP, flow_pctypes);
		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
-			hena |=
-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK);
+			set_bit(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK,
+				flow_pctypes);
		break;
	case TCP_V6_FLOW:
-		flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV6_TCP;
+		set_bit(I40E_FILTER_PCTYPE_NONF_IPV6_TCP, flow_pctypes);
		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
-			hena |=
-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK);
-		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
-			hena |=
-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK);
+			set_bit(I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK,
+				flow_pctypes);
		break;
	case UDP_V4_FLOW:
-		flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV4_UDP;
-		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
-			hena |=
-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP) |
-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP);
-
+		set_bit(I40E_FILTER_PCTYPE_NONF_IPV4_UDP, flow_pctypes);
+		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE) {
+			set_bit(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP,
+				flow_pctypes);
+			set_bit(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP,
+				flow_pctypes);
+		}
		hena |= BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV4);
		break;
	case UDP_V6_FLOW:
-		flow_pctype = I40E_FILTER_PCTYPE_NONF_IPV6_UDP;
-		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE)
-			hena |=
-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) |
-			  BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP);
-
+		set_bit(I40E_FILTER_PCTYPE_NONF_IPV6_UDP, flow_pctypes);
+		if (pf->hw_features & I40E_HW_MULTIPLE_TCP_UDP_RSS_PCTYPE) {
+			set_bit(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP,
+				flow_pctypes);
+			set_bit(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP,
+				flow_pctypes);
+		}
		hena |= BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV6);
		break;
	case AH_ESP_V4_FLOW:
@@ -3581,17 +3598,20 @@ static int i40e_set_rss_hash_opt(struct i40e_pf *pf, struct ethtool_rxnfc *nfc)
		return -EINVAL;
	}

-	if (flow_pctype) {
-		i_setc = (u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(0,
-				flow_pctype)) |
-			((u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(1,
-				flow_pctype)) << 32);
-		i_set = i40e_get_rss_hash_bits(nfc, i_setc);
-		i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_pctype),
-				  (u32)i_set);
-		i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_pctype),
-				  (u32)(i_set >> 32));
-		hena |= BIT_ULL(flow_pctype);
+	if (bitmap_weight(flow_pctypes, FLOW_PCTYPES_SIZE)) {
+		u8 flow_id;
+
+		for_each_set_bit(flow_id, flow_pctypes, FLOW_PCTYPES_SIZE) {
+			i_setc = (u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_id)) |
+				 ((u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_id)) << 32);
+			i_set = i40e_get_rss_hash_bits(&pf->hw, nfc, i_setc);
+
+			i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_id),
+					  (u32)i_set);
+			i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_id),
+					  (u32)(i_set >> 32));
+			hena |= BIT_ULL(flow_id);
+		}
	}

	i40e_write_rx_ctl(hw, I40E_PFQF_HENA(0), (u32)hena);

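Collecting the classifier types in a bitmap and programming them in one loop replaces the old single flow_pctype variable, which could carry only one of the several pctypes a single request may enable. Minimal sketch of the idiom:

    DECLARE_BITMAP(flow_pctypes, FLOW_PCTYPES_SIZE);
    u8 flow_id;

    bitmap_zero(flow_pctypes, FLOW_PCTYPES_SIZE);
    set_bit(I40E_FILTER_PCTYPE_NONF_IPV4_TCP, flow_pctypes);
    set_bit(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK, flow_pctypes);

    for_each_set_bit(flow_id, flow_pctypes, FLOW_PCTYPES_SIZE) {
            /* program I40E_GLQF_HASH_INSET(0/1, flow_id) for this type */
            hena |= BIT_ULL(flow_id);
    }

Every enabled pctype now gets its GL_HASH_INSET registers written, not just the last one assigned.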
@@ -1404,6 +1404,10 @@ struct i40e_lldp_variables {
 #define I40E_PFQF_CTL_0_HASHLUTSIZE_512	0x00010000

 /* INPUT SET MASK for RSS, flow director, and flexible payload */
+#define I40E_X722_L3_SRC_SHIFT		49
+#define I40E_X722_L3_SRC_MASK		(0x3ULL << I40E_X722_L3_SRC_SHIFT)
+#define I40E_X722_L3_DST_SHIFT		41
+#define I40E_X722_L3_DST_MASK		(0x3ULL << I40E_X722_L3_DST_SHIFT)
 #define I40E_L3_SRC_SHIFT		47
 #define I40E_L3_SRC_MASK		(0x3ULL << I40E_L3_SRC_SHIFT)
 #define I40E_L3_V6_SRC_SHIFT		43

|
||||
if (test_bit(__I40E_VF_RESETS_DISABLED, pf->state))
|
||||
return true;
|
||||
|
||||
/* If the VFs have been disabled, this means something else is
|
||||
* resetting the VF, so we shouldn't continue.
|
||||
*/
|
||||
if (test_and_set_bit(__I40E_VF_DISABLE, pf->state))
|
||||
/* Bail out if VFs are disabled. */
|
||||
if (test_bit(__I40E_VF_DISABLE, pf->state))
|
||||
return true;
|
||||
|
||||
/* If VF is being reset already we don't need to continue. */
|
||||
if (test_and_set_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
|
||||
return true;
|
||||
|
||||
i40e_trigger_vf_reset(vf, flr);
|
||||
@@ -1576,7 +1578,7 @@ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
|
||||
i40e_cleanup_reset_vf(vf);
|
||||
|
||||
i40e_flush(hw);
|
||||
clear_bit(__I40E_VF_DISABLE, pf->state);
|
||||
clear_bit(I40E_VF_STATE_RESETTING, &vf->vf_states);
|
||||
|
||||
return true;
|
||||
}
|
||||
@@ -1609,8 +1611,12 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
|
||||
return false;
|
||||
|
||||
/* Begin reset on all VFs at once */
|
||||
for (v = 0; v < pf->num_alloc_vfs; v++)
|
||||
i40e_trigger_vf_reset(&pf->vf[v], flr);
|
||||
for (v = 0; v < pf->num_alloc_vfs; v++) {
|
||||
vf = &pf->vf[v];
|
||||
/* If VF is being reset no need to trigger reset again */
|
||||
if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
|
||||
i40e_trigger_vf_reset(&pf->vf[v], flr);
|
||||
}
|
||||
|
||||
/* HW requires some time to make sure it can flush the FIFO for a VF
|
||||
* when it resets it. Poll the VPGEN_VFRSTAT register for each VF in
|
||||
@@ -1626,9 +1632,11 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
|
||||
*/
|
||||
while (v < pf->num_alloc_vfs) {
|
||||
vf = &pf->vf[v];
|
||||
reg = rd32(hw, I40E_VPGEN_VFRSTAT(vf->vf_id));
|
||||
if (!(reg & I40E_VPGEN_VFRSTAT_VFRD_MASK))
|
||||
break;
|
||||
if (!test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states)) {
|
||||
reg = rd32(hw, I40E_VPGEN_VFRSTAT(vf->vf_id));
|
||||
if (!(reg & I40E_VPGEN_VFRSTAT_VFRD_MASK))
|
||||
break;
|
||||
}
|
||||
|
||||
/* If the current VF has finished resetting, move on
|
||||
* to the next VF in sequence.
|
||||
@@ -1656,6 +1664,10 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
|
||||
if (pf->vf[v].lan_vsi_idx == 0)
|
||||
continue;
|
||||
|
||||
/* If VF is reset in another thread just continue */
|
||||
if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
|
||||
continue;
|
||||
|
||||
i40e_vsi_stop_rings_no_wait(pf->vsi[pf->vf[v].lan_vsi_idx]);
|
||||
}
|
||||
|
||||
@@ -1667,6 +1679,10 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
|
||||
if (pf->vf[v].lan_vsi_idx == 0)
|
||||
continue;
|
||||
|
||||
/* If VF is reset in another thread just continue */
|
||||
if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
|
||||
continue;
|
||||
|
||||
i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[v].lan_vsi_idx]);
|
||||
}
|
||||
|
||||
@@ -1676,8 +1692,13 @@ bool i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
|
||||
mdelay(50);
|
||||
|
||||
/* Finish the reset on each VF */
|
||||
for (v = 0; v < pf->num_alloc_vfs; v++)
|
||||
for (v = 0; v < pf->num_alloc_vfs; v++) {
|
||||
/* If VF is reset in another thread just continue */
|
||||
if (test_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
|
||||
continue;
|
||||
|
||||
i40e_cleanup_reset_vf(&pf->vf[v]);
|
||||
}
|
||||
|
||||
i40e_flush(hw);
|
||||
clear_bit(__I40E_VF_DISABLE, pf->state);
|
||||
|
||||
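The new per-VF I40E_VF_STATE_RESETTING bit is what lets the global reset path and the single-VF path coexist: whichever runs first claims the VF atomically and the other side skips it. Sketch of the claim/release pair in isolation:

    /* claim: returns true if someone else already owns the reset */
    if (test_and_set_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
            return true;    /* another thread is resetting this VF */

    /* ... perform the reset ... */

    clear_bit(I40E_VF_STATE_RESETTING, &vf->vf_states);   /* release */

test_and_set_bit() is an atomic read-modify-write, so two threads cannot both see the bit clear.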
@@ -39,6 +39,7 @@ enum i40e_vf_states {
	I40E_VF_STATE_MC_PROMISC,
	I40E_VF_STATE_UC_PROMISC,
	I40E_VF_STATE_PRE_ENABLE,
+	I40E_VF_STATE_RESETTING
 };

 /* VF capabilities */

@@ -466,7 +466,6 @@ ltq_etop_tx(struct sk_buff *skb, struct net_device *dev)
	len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len;

	if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) || ch->skb[ch->dma.desc]) {
-		dev_kfree_skb_any(skb);
		netdev_err(dev, "tx ring full\n");
		netif_tx_stop_queue(txq);
		return NETDEV_TX_BUSY;

@@ -1865,7 +1865,7 @@ void mlx5_cmd_init_async_ctx(struct mlx5_core_dev *dev,
	ctx->dev = dev;
	/* Starts at 1 to avoid doing wake_up if we are not cleaning up */
	atomic_set(&ctx->num_inflight, 1);
-	init_waitqueue_head(&ctx->wait);
+	init_completion(&ctx->inflight_done);
 }
 EXPORT_SYMBOL(mlx5_cmd_init_async_ctx);

@@ -1879,8 +1879,8 @@ EXPORT_SYMBOL(mlx5_cmd_init_async_ctx);
  */
 void mlx5_cmd_cleanup_async_ctx(struct mlx5_async_ctx *ctx)
 {
-	atomic_dec(&ctx->num_inflight);
-	wait_event(ctx->wait, atomic_read(&ctx->num_inflight) == 0);
+	if (!atomic_dec_and_test(&ctx->num_inflight))
+		wait_for_completion(&ctx->inflight_done);
 }
 EXPORT_SYMBOL(mlx5_cmd_cleanup_async_ctx);

@@ -1891,7 +1891,7 @@ static void mlx5_cmd_exec_cb_handler(int status, void *_work)

	work->user_callback(status, work);
	if (atomic_dec_and_test(&ctx->num_inflight))
-		wake_up(&ctx->wait);
+		complete(&ctx->inflight_done);
 }

 int mlx5_cmd_exec_cb(struct mlx5_async_ctx *ctx, void *in, int in_size,
@@ -1907,7 +1907,7 @@ int mlx5_cmd_exec_cb(struct mlx5_async_ctx *ctx, void *in, int in_size,
	ret = cmd_exec(ctx->dev, in, in_size, out, out_size,
		       mlx5_cmd_exec_cb_handler, work, false);
	if (ret && atomic_dec_and_test(&ctx->num_inflight))
-		wake_up(&ctx->wait);
+		complete(&ctx->inflight_done);

	return ret;
 }

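The switch from a wait queue to a completion closes a use-after-free window: with the old code, the waiter could observe num_inflight == 0 and return (allowing the context to be freed) while the last callback was still about to call wake_up() on the freed wait queue. The idiom after the change, in isolation:

    atomic_set(&ctx->num_inflight, 1);      /* cleanup owns one reference */
    init_completion(&ctx->inflight_done);

    /* each async callback when it retires: */
    if (atomic_dec_and_test(&ctx->num_inflight))
            complete(&ctx->inflight_done);

    /* cleanup: drop our reference; sleep only if callbacks remain */
    if (!atomic_dec_and_test(&ctx->num_inflight))
            wait_for_completion(&ctx->inflight_done);

complete() updates the done state and performs the wakeup under one internal lock, so the waiter cannot get ahead of the waker.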
@@ -6,6 +6,7 @@

 #include "en.h"
 #include "en_stats.h"
+#include "en/txrx.h"
 #include <linux/ptp_classify.h>

 #define MLX5E_PTP_CHANNEL_IX 0
@@ -67,6 +68,14 @@ static inline bool mlx5e_use_ptpsq(struct sk_buff *skb)
		fk.ports.dst == htons(PTP_EV_PORT));
 }

+static inline bool mlx5e_ptpsq_fifo_has_room(struct mlx5e_txqsq *sq)
+{
+	if (!sq->ptpsq)
+		return true;
+
+	return mlx5e_skb_fifo_has_room(&sq->ptpsq->skb_fifo);
+}
+
 int mlx5e_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params,
		   u8 lag_port, struct mlx5e_ptp **cp);
 void mlx5e_ptp_close(struct mlx5e_ptp *c);

@@ -73,6 +73,12 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev);
 bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget);
 void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq);

+static inline bool
+mlx5e_skb_fifo_has_room(struct mlx5e_skb_fifo *fifo)
+{
+	return (*fifo->pc - *fifo->cc) < fifo->mask;
+}
+
 static inline bool
 mlx5e_wqc_has_room_for(struct mlx5_wq_cyc *wq, u16 cc, u16 pc, u16 n)
 {
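mlx5e_skb_fifo_has_room() compares free-running producer/consumer counters. With a power-of-two FIFO, unsigned wraparound keeps the distance correct even after the counters roll over. Standalone illustration of that arithmetic (plain C, not the driver's types):

    #include <stdbool.h>
    #include <stdint.h>

    /* pc and cc increment forever; mask = fifo_size - 1, size a power of two */
    static bool fifo_has_room(uint16_t pc, uint16_t cc, uint16_t mask)
    {
            /* modular subtraction is still the occupancy across 0xffff -> 0 */
            return (uint16_t)(pc - cc) < mask;
    }

Because the counters are never masked when stored, "empty" (pc == cc) and "full" (pc - cc == size) stay distinguishable without a separate flag.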
@@ -113,7 +113,6 @@ static bool mlx5e_ipsec_update_esn_state(struct mlx5e_ipsec_sa_entry *sa_entry)
	struct xfrm_replay_state_esn *replay_esn;
	u32 seq_bottom = 0;
	u8 overlap;
-	u32 *esn;

	if (!(sa_entry->x->props.flags & XFRM_STATE_ESN)) {
		sa_entry->esn_state.trigger = 0;
@@ -128,11 +127,9 @@ static bool mlx5e_ipsec_update_esn_state(struct mlx5e_ipsec_sa_entry *sa_entry)

	sa_entry->esn_state.esn = xfrm_replay_seqhi(sa_entry->x,
						    htonl(seq_bottom));
-	esn = &sa_entry->esn_state.esn;

	sa_entry->esn_state.trigger = 1;
	if (unlikely(overlap && seq_bottom < MLX5E_IPSEC_ESN_SCOPE_MID)) {
-		++(*esn);
		sa_entry->esn_state.overlap = 0;
		return true;
	} else if (unlikely(!overlap &&
@@ -479,6 +479,11 @@ mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb,
	if (unlikely(sq->ptpsq)) {
		mlx5e_skb_cb_hwtstamp_init(skb);
		mlx5e_skb_fifo_push(&sq->ptpsq->skb_fifo, skb);
+		if (!netif_tx_queue_stopped(sq->txq) &&
+		    !mlx5e_skb_fifo_has_room(&sq->ptpsq->skb_fifo)) {
+			netif_tx_stop_queue(sq->txq);
+			sq->stats->stopped++;
+		}
		skb_get(skb);
	}

@@ -906,6 +911,7 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)

	if (netif_tx_queue_stopped(sq->txq) &&
	    mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room) &&
+	    mlx5e_ptpsq_fifo_has_room(sq) &&
	    !test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) {
		netif_tx_wake_queue(sq->txq);
		stats->wake++;

@@ -122,7 +122,7 @@ void mlx5_mpfs_cleanup(struct mlx5_core_dev *dev)
 {
	struct mlx5_mpfs *mpfs = dev->priv.mpfs;

-	if (!MLX5_ESWITCH_MANAGER(dev))
+	if (!mpfs)
		return;

	WARN_ON(!hlist_empty(mpfs->hash));
@@ -137,7 +137,7 @@ int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac)
	int err = 0;
	u32 index;

-	if (!MLX5_ESWITCH_MANAGER(dev))
+	if (!mpfs)
		return 0;

	mutex_lock(&mpfs->lock);
@@ -185,7 +185,7 @@ int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac)
	int err = 0;
	u32 index;

-	if (!MLX5_ESWITCH_MANAGER(dev))
+	if (!mpfs)
		return 0;

	mutex_lock(&mpfs->lock);

@@ -1576,12 +1576,28 @@ static void remove_one(struct pci_dev *pdev)
	mlx5_devlink_free(devlink);
 }

+#define mlx5_pci_trace(dev, fmt, ...) ({ \
+	struct mlx5_core_dev *__dev = (dev); \
+	mlx5_core_info(__dev, "%s Device state = %d health sensors: %d pci_status: %d. " fmt, \
+		       __func__, __dev->state, mlx5_health_check_fatal_sensors(__dev), \
+		       __dev->pci_status, ##__VA_ARGS__); \
+})
+
+static const char *result2str(enum pci_ers_result result)
+{
+	return  result == PCI_ERS_RESULT_NEED_RESET ? "need reset" :
+		result == PCI_ERS_RESULT_DISCONNECT ? "disconnect" :
+		result == PCI_ERS_RESULT_RECOVERED ? "recovered" :
+		"unknown";
+}
+
 static pci_ers_result_t mlx5_pci_err_detected(struct pci_dev *pdev,
					      pci_channel_state_t state)
 {
	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
+	enum pci_ers_result res;

-	mlx5_core_info(dev, "%s was called\n", __func__);
+	mlx5_pci_trace(dev, "Enter, pci channel state = %d\n", state);

	mlx5_enter_error_state(dev, false);
	mlx5_error_sw_reset(dev);
@@ -1589,8 +1605,11 @@ static pci_ers_result_t mlx5_pci_err_detected(struct pci_dev *pdev,
	mlx5_drain_health_wq(dev);
	mlx5_pci_disable_device(dev);

-	return state == pci_channel_io_perm_failure ?
+	res = state == pci_channel_io_perm_failure ?
		PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_NEED_RESET;
+
+	mlx5_pci_trace(dev, "Exit, result = %d, %s\n", res, result2str(res));
+	return res;
 }

 /* wait for the device to show vital signs by waiting
@@ -1624,28 +1643,34 @@ static int wait_vital(struct pci_dev *pdev)

 static pci_ers_result_t mlx5_pci_slot_reset(struct pci_dev *pdev)
 {
+	enum pci_ers_result res = PCI_ERS_RESULT_DISCONNECT;
	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
	int err;

-	mlx5_core_info(dev, "%s was called\n", __func__);
+	mlx5_pci_trace(dev, "Enter\n");

	err = mlx5_pci_enable_device(dev);
	if (err) {
		mlx5_core_err(dev, "%s: mlx5_pci_enable_device failed with error code: %d\n",
			      __func__, err);
-		return PCI_ERS_RESULT_DISCONNECT;
+		goto out;
	}

	pci_set_master(pdev);
	pci_restore_state(pdev);
	pci_save_state(pdev);

-	if (wait_vital(pdev)) {
-		mlx5_core_err(dev, "%s: wait_vital timed out\n", __func__);
-		return PCI_ERS_RESULT_DISCONNECT;
+	err = wait_vital(pdev);
+	if (err) {
+		mlx5_core_err(dev, "%s: wait vital failed with error code: %d\n",
+			      __func__, err);
+		goto out;
	}

-	return PCI_ERS_RESULT_RECOVERED;
+	res = PCI_ERS_RESULT_RECOVERED;
+out:
+	mlx5_pci_trace(dev, "Exit, err = %d, result = %d, %s\n", err, res, result2str(res));
+	return res;
 }

 static void mlx5_pci_resume(struct pci_dev *pdev)
@@ -1653,14 +1678,16 @@ static void mlx5_pci_resume(struct pci_dev *pdev)
	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
	int err;

-	mlx5_core_info(dev, "%s was called\n", __func__);
+	mlx5_pci_trace(dev, "Enter, loading driver..\n");

	err = mlx5_load_one(dev);
-	if (err)
-		mlx5_core_err(dev, "%s: mlx5_load_one failed with error code: %d\n",
-			      __func__, err);
-	else
-		mlx5_core_info(dev, "%s: device recovered\n", __func__);
+
+	if (!err)
+		devlink_health_reporter_state_update(dev->priv.health.fw_fatal_reporter,
+						     DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
+
+	mlx5_pci_trace(dev, "Done, err = %d, device %s\n", err,
+		       !err ? "recovered" : "Failed");
 }

 static const struct pci_error_handlers mlx5_err_handler = {

@@ -6848,7 +6848,7 @@ static int pcidev_init(struct pci_dev *pdev, const struct pci_device_id *id)
	char banner[sizeof(version)];
	struct ksz_switch *sw = NULL;

-	result = pci_enable_device(pdev);
+	result = pcim_enable_device(pdev);
	if (result)
		return result;

@@ -1964,11 +1964,13 @@ static int netsec_register_mdio(struct netsec_priv *priv, u32 phy_addr)
			ret = PTR_ERR(priv->phydev);
			dev_err(priv->dev, "get_phy_device err(%d)\n", ret);
			priv->phydev = NULL;
+			mdiobus_unregister(bus);
			return -ENODEV;
		}

		ret = phy_device_register(priv->phydev);
		if (ret) {
+			phy_device_free(priv->phydev);
+			mdiobus_unregister(bus);
			dev_err(priv->dev,
				"phy_device_register err(%d)\n", ret);
@@ -1229,6 +1229,8 @@ static int ave_init(struct net_device *ndev)

	phy_support_asym_pause(phydev);

+	phydev->mac_managed_pm = true;
+
	phy_attached_info(phydev);

	return 0;
@@ -1758,6 +1760,10 @@ static int ave_resume(struct device *dev)

	ave_global_reset(ndev);

+	ret = phy_init_hw(ndev->phydev);
+	if (ret)
+		return ret;
+
	ave_ethtool_get_wol(ndev, &wol);
	wol.wolopts = priv->wolopts;
	__ave_ethtool_set_wol(ndev, &wol);

@@ -229,8 +229,10 @@ static int nsim_dev_debugfs_init(struct nsim_dev *nsim_dev)
	if (IS_ERR(nsim_dev->ddir))
		return PTR_ERR(nsim_dev->ddir);
	nsim_dev->ports_ddir = debugfs_create_dir("ports", nsim_dev->ddir);
-	if (IS_ERR(nsim_dev->ports_ddir))
-		return PTR_ERR(nsim_dev->ports_ddir);
+	if (IS_ERR(nsim_dev->ports_ddir)) {
+		err = PTR_ERR(nsim_dev->ports_ddir);
+		goto err_ddir;
+	}
	debugfs_create_bool("fw_update_status", 0600, nsim_dev->ddir,
			    &nsim_dev->fw_update_status);
	debugfs_create_u32("fw_update_overwrite_mask", 0600, nsim_dev->ddir,
@@ -267,7 +269,7 @@ static int nsim_dev_debugfs_init(struct nsim_dev *nsim_dev)
	nsim_dev->nodes_ddir = debugfs_create_dir("rate_nodes", nsim_dev->ddir);
	if (IS_ERR(nsim_dev->nodes_ddir)) {
		err = PTR_ERR(nsim_dev->nodes_ddir);
-		goto err_out;
+		goto err_ports_ddir;
	}
	debugfs_create_bool("fail_trap_drop_counter_get", 0600,
			    nsim_dev->ddir,
@@ -275,8 +277,9 @@ static int nsim_dev_debugfs_init(struct nsim_dev *nsim_dev)
	nsim_udp_tunnels_debugfs_create(nsim_dev);
	return 0;

-err_out:
+err_ports_ddir:
+	debugfs_remove_recursive(nsim_dev->ports_ddir);
+err_ddir:
	debugfs_remove_recursive(nsim_dev->ddir);
	return err;
 }

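The renamed labels complete a standard goto-unwind ladder: each allocation that can fail jumps to a label that tears down exactly what was created before it. Schematic of the shape (create_/destroy_ helpers are hypothetical):

    a = create_a();
    if (IS_ERR(a))
            return PTR_ERR(a);

    b = create_b();
    if (IS_ERR(b)) {
            err = PTR_ERR(b);
            goto err_a;             /* undo a only */
    }

    c = create_c();
    if (IS_ERR(c)) {
            err = PTR_ERR(c);
            goto err_b;             /* undo b, then fall through to a */
    }
    return 0;

    err_b:
            destroy_b(b);
    err_a:
            destroy_a(a);
            return err;

Before the fix, the early return after the ports directory leaked ddir, and the single err_out label never removed ports_ddir.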
@@ -54,16 +54,19 @@ static int virtual_nci_send(struct nci_dev *ndev, struct sk_buff *skb)
	mutex_lock(&nci_mutex);
	if (state != virtual_ncidev_enabled) {
		mutex_unlock(&nci_mutex);
+		kfree_skb(skb);
		return 0;
	}

	if (send_buff) {
		mutex_unlock(&nci_mutex);
+		kfree_skb(skb);
		return -1;
	}
	send_buff = skb_copy(skb, GFP_KERNEL);
	mutex_unlock(&nci_mutex);
	wake_up_interruptible(&wq);
+	consume_skb(skb);

	return 0;
 }

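A send hook like this owns the skb once it is called, so every exit path must account for it: kfree_skb() on paths that drop the packet, consume_skb() once it has been handled. Sketch of the rule (the error_path condition and queue_copy() helper are hypothetical):

    static int send_hook(struct sk_buff *skb)
    {
            if (error_path) {
                    kfree_skb(skb);         /* dropped: freed, counted as a drop */
                    return -1;
            }
            queue_copy(skb_copy(skb, GFP_KERNEL));
            consume_skb(skb);               /* handled: freed without a drop event */
            return 0;
    }

Returning without either call, as the old code did, leaks one skb per packet.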
@@ -643,7 +643,7 @@ static u8 jz4755_lcd_24bit_funcs[] = { 1, 1, 1, 1, 0, 0, };
 static const struct group_desc jz4755_groups[] = {
	INGENIC_PIN_GROUP("uart0-data", jz4755_uart0_data, 0),
	INGENIC_PIN_GROUP("uart0-hwflow", jz4755_uart0_hwflow, 0),
-	INGENIC_PIN_GROUP("uart1-data", jz4755_uart1_data, 0),
+	INGENIC_PIN_GROUP("uart1-data", jz4755_uart1_data, 1),
	INGENIC_PIN_GROUP("uart2-data", jz4755_uart2_data, 1),
	INGENIC_PIN_GROUP("ssi-dt-b", jz4755_ssi_dt_b, 0),
	INGENIC_PIN_GROUP("ssi-dt-f", jz4755_ssi_dt_f, 0),
@@ -697,7 +697,7 @@ static const char *jz4755_ssi_groups[] = {
	"ssi-ce1-b", "ssi-ce1-f",
 };
 static const char *jz4755_mmc0_groups[] = { "mmc0-1bit", "mmc0-4bit", };
-static const char *jz4755_mmc1_groups[] = { "mmc0-1bit", "mmc0-4bit", };
+static const char *jz4755_mmc1_groups[] = { "mmc1-1bit", "mmc1-4bit", };
 static const char *jz4755_i2c_groups[] = { "i2c-data", };
 static const char *jz4755_cim_groups[] = { "cim-data", };
 static const char *jz4755_lcd_groups[] = {

@@ -920,10 +920,6 @@ struct lpfc_hba {
|
||||
(struct lpfc_vport *vport,
|
||||
struct lpfc_io_buf *lpfc_cmd,
|
||||
uint8_t tmo);
|
||||
int (*lpfc_scsi_prep_task_mgmt_cmd)
|
||||
(struct lpfc_vport *vport,
|
||||
struct lpfc_io_buf *lpfc_cmd,
|
||||
u64 lun, u8 task_mgmt_cmd);
|
||||
|
||||
/* IOCB interface function jump table entries */
|
||||
int (*__lpfc_sli_issue_iocb)
|
||||
@@ -1807,39 +1803,3 @@ static inline int lpfc_is_vmid_enabled(struct lpfc_hba *phba)
|
||||
{
|
||||
return phba->cfg_vmid_app_header || phba->cfg_vmid_priority_tagging;
|
||||
}
|
||||
|
||||
static inline
|
||||
u8 get_job_ulpstatus(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq)
|
||||
{
|
||||
if (phba->sli_rev == LPFC_SLI_REV4)
|
||||
return bf_get(lpfc_wcqe_c_status, &iocbq->wcqe_cmpl);
|
||||
else
|
||||
return iocbq->iocb.ulpStatus;
|
||||
}
|
||||
|
||||
static inline
|
||||
u32 get_job_word4(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq)
|
||||
{
|
||||
if (phba->sli_rev == LPFC_SLI_REV4)
|
||||
return iocbq->wcqe_cmpl.parameter;
|
||||
else
|
||||
return iocbq->iocb.un.ulpWord[4];
|
||||
}
|
||||
|
||||
static inline
|
||||
u8 get_job_cmnd(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq)
|
||||
{
|
||||
if (phba->sli_rev == LPFC_SLI_REV4)
|
||||
return bf_get(wqe_cmnd, &iocbq->wqe.generic.wqe_com);
|
||||
else
|
||||
return iocbq->iocb.ulpCommand;
|
||||
}
|
||||
|
||||
static inline
|
||||
u16 get_job_ulpcontext(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq)
|
||||
{
|
||||
if (phba->sli_rev == LPFC_SLI_REV4)
|
||||
return bf_get(wqe_ctxt_tag, &iocbq->wqe.generic.wqe_com);
|
||||
else
|
||||
return iocbq->iocb.ulpContext;
|
||||
}
|
||||
|
||||
@@ -325,7 +325,7 @@ lpfc_bsg_send_mgmt_cmd_cmp(struct lpfc_hba *phba,
|
||||
|
||||
/* Close the timeout handler abort window */
|
||||
spin_lock_irqsave(&phba->hbalock, flags);
|
||||
cmdiocbq->cmd_flag &= ~LPFC_IO_CMD_OUTSTANDING;
|
||||
cmdiocbq->iocb_flag &= ~LPFC_IO_CMD_OUTSTANDING;
|
||||
spin_unlock_irqrestore(&phba->hbalock, flags);
|
||||
|
||||
iocb = &dd_data->context_un.iocb;
|
||||
@@ -481,11 +481,11 @@ lpfc_bsg_send_mgmt_cmd(struct bsg_job *job)
|
||||
cmd->ulpOwner = OWN_CHIP;
|
||||
cmdiocbq->vport = phba->pport;
|
||||
cmdiocbq->context3 = bmp;
|
||||
cmdiocbq->cmd_flag |= LPFC_IO_LIBDFC;
|
||||
cmdiocbq->iocb_flag |= LPFC_IO_LIBDFC;
|
||||
timeout = phba->fc_ratov * 2;
|
||||
cmd->ulpTimeout = timeout;
|
||||
|
||||
cmdiocbq->cmd_cmpl = lpfc_bsg_send_mgmt_cmd_cmp;
|
||||
cmdiocbq->iocb_cmpl = lpfc_bsg_send_mgmt_cmd_cmp;
|
||||
cmdiocbq->context1 = dd_data;
|
||||
cmdiocbq->context2 = cmp;
|
||||
cmdiocbq->context3 = bmp;
|
||||
@@ -516,9 +516,9 @@ lpfc_bsg_send_mgmt_cmd(struct bsg_job *job)
|
||||
if (iocb_stat == IOCB_SUCCESS) {
|
||||
spin_lock_irqsave(&phba->hbalock, flags);
|
||||
/* make sure the I/O had not been completed yet */
|
||||
if (cmdiocbq->cmd_flag & LPFC_IO_LIBDFC) {
|
||||
if (cmdiocbq->iocb_flag & LPFC_IO_LIBDFC) {
|
||||
/* open up abort window to timeout handler */
|
||||
cmdiocbq->cmd_flag |= LPFC_IO_CMD_OUTSTANDING;
|
||||
cmdiocbq->iocb_flag |= LPFC_IO_CMD_OUTSTANDING;
|
||||
}
|
||||
spin_unlock_irqrestore(&phba->hbalock, flags);
|
||||
return 0; /* done for now */
|
||||
@@ -600,7 +600,7 @@ lpfc_bsg_rport_els_cmp(struct lpfc_hba *phba,
|
||||
|
||||
/* Close the timeout handler abort window */
|
||||
spin_lock_irqsave(&phba->hbalock, flags);
|
||||
cmdiocbq->cmd_flag &= ~LPFC_IO_CMD_OUTSTANDING;
|
||||
cmdiocbq->iocb_flag &= ~LPFC_IO_CMD_OUTSTANDING;
|
||||
spin_unlock_irqrestore(&phba->hbalock, flags);
|
||||
|
||||
rsp = &rspiocbq->iocb;
|
||||
@@ -726,10 +726,10 @@ lpfc_bsg_rport_els(struct bsg_job *job)
|
||||
cmdiocbq->iocb.ulpContext = phba->sli4_hba.rpi_ids[rpi];
|
||||
else
|
||||
cmdiocbq->iocb.ulpContext = rpi;
|
||||
cmdiocbq->cmd_flag |= LPFC_IO_LIBDFC;
|
||||
cmdiocbq->iocb_flag |= LPFC_IO_LIBDFC;
|
||||
cmdiocbq->context1 = dd_data;
|
||||
cmdiocbq->context_un.ndlp = ndlp;
|
||||
cmdiocbq->cmd_cmpl = lpfc_bsg_rport_els_cmp;
|
||||
cmdiocbq->iocb_cmpl = lpfc_bsg_rport_els_cmp;
|
||||
dd_data->type = TYPE_IOCB;
|
||||
dd_data->set_job = job;
|
||||
dd_data->context_un.iocb.cmdiocbq = cmdiocbq;
|
||||
@@ -757,9 +757,9 @@ lpfc_bsg_rport_els(struct bsg_job *job)
|
||||
if (rc == IOCB_SUCCESS) {
|
||||
spin_lock_irqsave(&phba->hbalock, flags);
|
||||
/* make sure the I/O had not been completed/released */
|
||||
if (cmdiocbq->cmd_flag & LPFC_IO_LIBDFC) {
|
||||
if (cmdiocbq->iocb_flag & LPFC_IO_LIBDFC) {
|
||||
/* open up abort window to timeout handler */
|
||||
cmdiocbq->cmd_flag |= LPFC_IO_CMD_OUTSTANDING;
|
||||
cmdiocbq->iocb_flag |= LPFC_IO_CMD_OUTSTANDING;
|
||||
}
|
||||
spin_unlock_irqrestore(&phba->hbalock, flags);
|
||||
return 0; /* done for now */
|
||||
@@ -1053,7 +1053,7 @@ lpfc_bsg_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
|
||||
lpfc_in_buf_free(phba,
|
||||
dmabuf);
|
||||
} else {
|
||||
lpfc_sli3_post_buffer(phba,
|
||||
lpfc_post_buffer(phba,
|
||||
pring,
|
||||
1);
|
||||
}
|
||||
@@ -1061,7 +1061,7 @@ lpfc_bsg_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
|
||||
default:
|
||||
if (!(phba->sli3_options &
|
||||
LPFC_SLI3_HBQ_ENABLED))
|
||||
lpfc_sli3_post_buffer(phba,
|
||||
lpfc_post_buffer(phba,
|
||||
pring,
|
||||
1);
|
||||
break;
|
||||
@@ -1395,7 +1395,7 @@ lpfc_issue_ct_rsp_cmp(struct lpfc_hba *phba,
|
||||
|
||||
/* Close the timeout handler abort window */
|
||||
spin_lock_irqsave(&phba->hbalock, flags);
|
||||
cmdiocbq->cmd_flag &= ~LPFC_IO_CMD_OUTSTANDING;
|
||||
cmdiocbq->iocb_flag &= ~LPFC_IO_CMD_OUTSTANDING;
|
||||
spin_unlock_irqrestore(&phba->hbalock, flags);
|
||||
|
||||
ndlp = dd_data->context_un.iocb.ndlp;
|
||||
@@ -1549,13 +1549,13 @@ lpfc_issue_ct_rsp(struct lpfc_hba *phba, struct bsg_job *job, uint32_t tag,
|
||||
"2722 Xmit CT response on exchange x%x Data: x%x x%x x%x\n",
|
||||
icmd->ulpContext, icmd->ulpIoTag, tag, phba->link_state);
|
||||
|
||||
ctiocb->cmd_flag |= LPFC_IO_LIBDFC;
|
||||
ctiocb->iocb_flag |= LPFC_IO_LIBDFC;
|
||||
ctiocb->vport = phba->pport;
|
||||
ctiocb->context1 = dd_data;
|
||||
ctiocb->context2 = cmp;
|
||||
ctiocb->context3 = bmp;
|
||||
ctiocb->context_un.ndlp = ndlp;
|
||||
ctiocb->cmd_cmpl = lpfc_issue_ct_rsp_cmp;
|
||||
ctiocb->iocb_cmpl = lpfc_issue_ct_rsp_cmp;
|
||||
|
||||
dd_data->type = TYPE_IOCB;
|
||||
dd_data->set_job = job;
|
||||
@@ -1582,9 +1582,9 @@ lpfc_issue_ct_rsp(struct lpfc_hba *phba, struct bsg_job *job, uint32_t tag,
|
||||
if (rc == IOCB_SUCCESS) {
|
||||
spin_lock_irqsave(&phba->hbalock, flags);
|
||||
/* make sure the I/O had not been completed/released */
|
||||
if (ctiocb->cmd_flag & LPFC_IO_LIBDFC) {
|
||||
if (ctiocb->iocb_flag & LPFC_IO_LIBDFC) {
|
||||
/* open up abort window to timeout handler */
|
||||
ctiocb->cmd_flag |= LPFC_IO_CMD_OUTSTANDING;
|
||||
ctiocb->iocb_flag |= LPFC_IO_CMD_OUTSTANDING;
|
||||
}
|
||||
spin_unlock_irqrestore(&phba->hbalock, flags);
|
||||
return 0; /* done for now */
|
||||
@@ -2713,9 +2713,9 @@ static int lpfcdiag_loop_get_xri(struct lpfc_hba *phba, uint16_t rpi,
|
||||
cmd->ulpClass = CLASS3;
|
||||
cmd->ulpContext = rpi;
|
||||
|
||||
cmdiocbq->cmd_flag |= LPFC_IO_LIBDFC;
|
||||
cmdiocbq->iocb_flag |= LPFC_IO_LIBDFC;
|
||||
cmdiocbq->vport = phba->pport;
|
||||
cmdiocbq->cmd_cmpl = NULL;
|
||||
cmdiocbq->iocb_cmpl = NULL;
|
||||
|
||||
iocb_stat = lpfc_sli_issue_iocb_wait(phba, LPFC_ELS_RING, cmdiocbq,
|
||||
rspiocbq,
|
||||
@@ -3286,10 +3286,10 @@ lpfc_bsg_diag_loopback_run(struct bsg_job *job)
|
||||
cmdiocbq->sli4_xritag = NO_XRI;
|
||||
cmd->unsli3.rcvsli3.ox_id = 0xffff;
|
||||
}
|
||||
cmdiocbq->cmd_flag |= LPFC_IO_LIBDFC;
|
||||
cmdiocbq->cmd_flag |= LPFC_IO_LOOPBACK;
|
||||
cmdiocbq->iocb_flag |= LPFC_IO_LIBDFC;
|
||||
cmdiocbq->iocb_flag |= LPFC_IO_LOOPBACK;
|
||||
cmdiocbq->vport = phba->pport;
|
||||
cmdiocbq->cmd_cmpl = NULL;
|
||||
cmdiocbq->iocb_cmpl = NULL;
|
||||
iocb_stat = lpfc_sli_issue_iocb_wait(phba, LPFC_ELS_RING, cmdiocbq,
|
||||
rspiocbq, (phba->fc_ratov * 2) +
|
||||
LPFC_DRVR_TIMEOUT);
|
||||
@@ -5273,11 +5273,11 @@ lpfc_menlo_cmd(struct bsg_job *job)
|
||||
cmd->ulpClass = CLASS3;
|
||||
cmd->ulpOwner = OWN_CHIP;
|
||||
cmd->ulpLe = 1; /* Limited Edition */
|
||||
cmdiocbq->cmd_flag |= LPFC_IO_LIBDFC;
|
||||
cmdiocbq->iocb_flag |= LPFC_IO_LIBDFC;
|
||||
cmdiocbq->vport = phba->pport;
|
||||
/* We want the firmware to timeout before we do */
|
||||
cmd->ulpTimeout = MENLO_TIMEOUT - 5;
|
||||
cmdiocbq->cmd_cmpl = lpfc_bsg_menlo_cmd_cmp;
|
||||
cmdiocbq->iocb_cmpl = lpfc_bsg_menlo_cmd_cmp;
|
||||
cmdiocbq->context1 = dd_data;
|
||||
cmdiocbq->context2 = cmp;
|
||||
cmdiocbq->context3 = bmp;
|
||||
@@ -6001,7 +6001,7 @@ lpfc_bsg_timeout(struct bsg_job *job)
|
||||
|
||||
spin_lock_irqsave(&phba->hbalock, flags);
|
||||
/* make sure the I/O abort window is still open */
|
||||
if (!(cmdiocb->cmd_flag & LPFC_IO_CMD_OUTSTANDING)) {
|
||||
if (!(cmdiocb->iocb_flag & LPFC_IO_CMD_OUTSTANDING)) {
|
||||
spin_unlock_irqrestore(&phba->hbalock, flags);
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
@@ -129,7 +129,6 @@ void lpfc_disc_list_loopmap(struct lpfc_vport *);
|
||||
void lpfc_disc_start(struct lpfc_vport *);
|
||||
void lpfc_cleanup_discovery_resources(struct lpfc_vport *);
|
||||
void lpfc_cleanup(struct lpfc_vport *);
|
||||
void lpfc_prep_embed_io(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_ncmd);
|
||||
void lpfc_disc_timeout(struct timer_list *);
|
||||
|
||||
int lpfc_unregister_fcf_prep(struct lpfc_hba *);
|
||||
@@ -211,7 +210,7 @@ int lpfc_config_port_post(struct lpfc_hba *);
|
||||
int lpfc_hba_down_prep(struct lpfc_hba *);
|
||||
int lpfc_hba_down_post(struct lpfc_hba *);
|
||||
void lpfc_hba_init(struct lpfc_hba *, uint32_t *);
|
||||
int lpfc_sli3_post_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, int cnt);
|
||||
int lpfc_post_buffer(struct lpfc_hba *, struct lpfc_sli_ring *, int);
|
||||
void lpfc_decode_firmware_rev(struct lpfc_hba *, char *, int);
|
||||
int lpfc_online(struct lpfc_hba *);
|
||||
void lpfc_unblock_mgmt_io(struct lpfc_hba *);
|
||||
|
||||
@@ -239,7 +239,7 @@ lpfc_ct_reject_event(struct lpfc_nodelist *ndlp,
|
||||
cmdiocbq->context1 = lpfc_nlp_get(ndlp);
|
||||
cmdiocbq->context2 = (uint8_t *)mp;
|
||||
cmdiocbq->context3 = (uint8_t *)bmp;
|
||||
cmdiocbq->cmd_cmpl = lpfc_ct_unsol_cmpl;
|
||||
cmdiocbq->iocb_cmpl = lpfc_ct_unsol_cmpl;
|
||||
icmd->ulpContext = rx_id; /* Xri / rx_id */
|
||||
icmd->unsli3.rcvsli3.ox_id = ox_id;
|
||||
icmd->un.ulpWord[3] =
|
||||
@@ -370,7 +370,7 @@ lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
|
||||
/* Not enough posted buffers; Try posting more buffers */
|
||||
phba->fc_stat.NoRcvBuf++;
|
||||
if (!(phba->sli3_options & LPFC_SLI3_HBQ_ENABLED))
|
||||
lpfc_sli3_post_buffer(phba, pring, 2);
|
||||
lpfc_post_buffer(phba, pring, 2);
|
||||
return;
|
||||
}
|
||||
|
||||
@@ -447,7 +447,7 @@ lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
|
||||
lpfc_ct_unsol_buffer(phba, iocbq, mp, size);
|
||||
lpfc_in_buf_free(phba, mp);
|
||||
}
|
||||
lpfc_sli3_post_buffer(phba, pring, i);
|
||||
lpfc_post_buffer(phba, pring, i);
|
||||
}
|
||||
list_del(&head);
|
||||
}
|
||||
@@ -652,7 +652,7 @@ lpfc_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
|
||||
"Data: x%x x%x\n",
|
||||
ndlp->nlp_DID, icmd->ulpIoTag,
|
||||
vport->port_state);
|
||||
geniocb->cmd_cmpl = cmpl;
|
||||
geniocb->iocb_cmpl = cmpl;
|
||||
geniocb->drvrTimeout = icmd->ulpTimeout + LPFC_DRVR_TIMEOUT;
|
||||
geniocb->vport = vport;
|
||||
geniocb->retry = retry;
|
||||
|
||||
@@ -192,23 +192,23 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
|
||||
(elscmd == ELS_CMD_LOGO)))
|
||||
switch (elscmd) {
|
||||
case ELS_CMD_FLOGI:
|
||||
elsiocb->cmd_flag |=
|
||||
elsiocb->iocb_flag |=
|
||||
((LPFC_ELS_ID_FLOGI << LPFC_FIP_ELS_ID_SHIFT)
|
||||
& LPFC_FIP_ELS_ID_MASK);
|
||||
break;
|
||||
case ELS_CMD_FDISC:
|
||||
elsiocb->cmd_flag |=
|
||||
elsiocb->iocb_flag |=
|
||||
((LPFC_ELS_ID_FDISC << LPFC_FIP_ELS_ID_SHIFT)
|
||||
& LPFC_FIP_ELS_ID_MASK);
|
||||
break;
|
||||
case ELS_CMD_LOGO:
|
||||
elsiocb->cmd_flag |=
|
||||
elsiocb->iocb_flag |=
|
||||
((LPFC_ELS_ID_LOGO << LPFC_FIP_ELS_ID_SHIFT)
|
||||
& LPFC_FIP_ELS_ID_MASK);
|
||||
break;
|
||||
}
|
||||
else
|
||||
elsiocb->cmd_flag &= ~LPFC_FIP_ELS_ID_MASK;
|
||||
elsiocb->iocb_flag &= ~LPFC_FIP_ELS_ID_MASK;
|
||||
|
||||
icmd = &elsiocb->iocb;
|
||||
|
||||
@@ -1252,10 +1252,10 @@ lpfc_cmpl_els_link_down(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
|
||||
"6445 ELS completes after LINK_DOWN: "
|
||||
" Status %x/%x cmd x%x flg x%x\n",
|
||||
irsp->ulpStatus, irsp->un.ulpWord[4], cmd,
|
||||
cmdiocb->cmd_flag);
|
||||
cmdiocb->iocb_flag);
|
||||
|
||||
if (cmdiocb->cmd_flag & LPFC_IO_FABRIC) {
|
||||
cmdiocb->cmd_flag &= ~LPFC_IO_FABRIC;
|
||||
if (cmdiocb->iocb_flag & LPFC_IO_FABRIC) {
|
||||
cmdiocb->iocb_flag &= ~LPFC_IO_FABRIC;
|
||||
atomic_dec(&phba->fabric_iocb_count);
|
||||
}
|
||||
lpfc_els_free_iocb(phba, cmdiocb);
|
||||
@@ -1370,7 +1370,7 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
|
||||
phba->fc_ratov = tmo;
|
||||
|
||||
phba->fc_stat.elsXmitFLOGI++;
|
||||
elsiocb->cmd_cmpl = lpfc_cmpl_els_flogi;
|
||||
elsiocb->iocb_cmpl = lpfc_cmpl_els_flogi;
|
||||
|
||||
lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
|
||||
"Issue FLOGI: opt:x%x",
|
||||
@@ -1463,7 +1463,7 @@ lpfc_els_abort_flogi(struct lpfc_hba *phba)
|
||||
if (ndlp && ndlp->nlp_DID == Fabric_DID) {
|
||||
if ((phba->pport->fc_flag & FC_PT2PT) &&
|
||||
!(phba->pport->fc_flag & FC_PT2PT_PLOGI))
|
||||
iocb->fabric_cmd_cmpl =
|
||||
iocb->fabric_iocb_cmpl =
|
||||
lpfc_ignore_els_cmpl;
|
||||
lpfc_sli_issue_abort_iotag(phba, pring, iocb,
|
||||
NULL);
|
||||
@@ -2226,7 +2226,7 @@ lpfc_issue_els_plogi(struct lpfc_vport *vport, uint32_t did, uint8_t retry)
|
||||
}
|
||||
|
||||
phba->fc_stat.elsXmitPLOGI++;
|
||||
elsiocb->cmd_cmpl = lpfc_cmpl_els_plogi;
|
||||
elsiocb->iocb_cmpl = lpfc_cmpl_els_plogi;
|
||||
|
||||
lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
|
||||
"Issue PLOGI: did:x%x refcnt %d",
|
||||
@@ -2478,7 +2478,7 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 /* For FCP support */
 npr->prliType = PRLI_FCP_TYPE;
 npr->initiatorFunc = 1;
-elsiocb->cmd_flag |= LPFC_PRLI_FCP_REQ;
+elsiocb->iocb_flag |= LPFC_PRLI_FCP_REQ;

 /* Remove FCP type - processed. */
 local_nlp_type &= ~NLP_FC4_FCP;
@@ -2512,14 +2512,14 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,

 npr_nvme->word1 = cpu_to_be32(npr_nvme->word1);
 npr_nvme->word4 = cpu_to_be32(npr_nvme->word4);
-elsiocb->cmd_flag |= LPFC_PRLI_NVME_REQ;
+elsiocb->iocb_flag |= LPFC_PRLI_NVME_REQ;

 /* Remove NVME type - processed. */
 local_nlp_type &= ~NLP_FC4_NVME;
 }

 phba->fc_stat.elsXmitPRLI++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_prli;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_prli;
 spin_lock_irq(&ndlp->lock);
 ndlp->nlp_flag |= NLP_PRLI_SND;

@@ -2842,7 +2842,7 @@ lpfc_issue_els_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 ap->DID = be32_to_cpu(vport->fc_myDID);

 phba->fc_stat.elsXmitADISC++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_adisc;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_adisc;
 spin_lock_irq(&ndlp->lock);
 ndlp->nlp_flag |= NLP_ADISC_SND;
 spin_unlock_irq(&ndlp->lock);
@@ -3065,7 +3065,7 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 memcpy(pcmd, &vport->fc_portname, sizeof(struct lpfc_name));

 phba->fc_stat.elsXmitLOGO++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_logo;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_logo;
 spin_lock_irq(&ndlp->lock);
 ndlp->nlp_flag |= NLP_LOGO_SND;
 ndlp->nlp_flag &= ~NLP_ISSUE_LOGO;
@@ -3417,7 +3417,7 @@ lpfc_issue_els_scr(struct lpfc_vport *vport, uint8_t retry)
 ndlp->nlp_DID, 0, 0);

 phba->fc_stat.elsXmitSCR++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_disc_cmd;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_disc_cmd;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
 lpfc_els_free_iocb(phba, elsiocb);
@@ -3514,7 +3514,7 @@ lpfc_issue_els_rscn(struct lpfc_vport *vport, uint8_t retry)
 event->portid.rscn_fid[2] = nportid & 0x000000FF;

 phba->fc_stat.elsXmitRSCN++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_cmd;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_cmd;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
 lpfc_els_free_iocb(phba, elsiocb);
@@ -3613,7 +3613,7 @@ lpfc_issue_els_farpr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
 ndlp->nlp_DID, 0, 0);

 phba->fc_stat.elsXmitFARPR++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_cmd;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_cmd;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
 lpfc_els_free_iocb(phba, elsiocb);
@@ -3704,7 +3704,7 @@ lpfc_issue_els_rdf(struct lpfc_vport *vport, uint8_t retry)
 phba->cgn_reg_fpin);

 phba->cgn_fpin_frequency = LPFC_FPIN_INIT_FREQ;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_disc_cmd;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_disc_cmd;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
 lpfc_els_free_iocb(phba, elsiocb);
@@ -4154,7 +4154,7 @@ lpfc_issue_els_edc(struct lpfc_vport *vport, uint8_t retry)
 ndlp->nlp_DID, phba->cgn_reg_signal,
 phba->cgn_reg_fpin);

-elsiocb->cmd_cmpl = lpfc_cmpl_els_disc_cmd;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_disc_cmd;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
 lpfc_els_free_iocb(phba, elsiocb);
@@ -4968,12 +4968,12 @@ lpfc_els_free_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *elsiocb)

 /* context2 = cmd, context2->next = rsp, context3 = bpl */
 if (elsiocb->context2) {
-if (elsiocb->cmd_flag & LPFC_DELAY_MEM_FREE) {
+if (elsiocb->iocb_flag & LPFC_DELAY_MEM_FREE) {
 /* Firmware could still be in progress of DMAing
 * payload, so don't free data buffer till after
 * a hbeat.
 */
-elsiocb->cmd_flag &= ~LPFC_DELAY_MEM_FREE;
+elsiocb->iocb_flag &= ~LPFC_DELAY_MEM_FREE;
 buf_ptr = elsiocb->context2;
 elsiocb->context2 = NULL;
 if (buf_ptr) {
@@ -5480,9 +5480,9 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag,
 ndlp->nlp_flag & NLP_REG_LOGIN_SEND))
 ndlp->nlp_flag &= ~NLP_LOGO_ACC;
 spin_unlock_irq(&ndlp->lock);
-elsiocb->cmd_cmpl = lpfc_cmpl_els_logo_acc;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_logo_acc;
 } else {
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 }

 phba->fc_stat.elsXmitACC++;
@@ -5577,7 +5577,7 @@ lpfc_els_rsp_reject(struct lpfc_vport *vport, uint32_t rejectError,
 ndlp->nlp_DID, ndlp->nlp_flag, rejectError);

 phba->fc_stat.elsXmitLSRJT++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
 lpfc_els_free_iocb(phba, elsiocb);
@@ -5657,7 +5657,7 @@ lpfc_issue_els_edc_rsp(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
 "Issue EDC ACC: did:x%x flg:x%x refcnt %d",
 ndlp->nlp_DID, ndlp->nlp_flag,
 kref_read(&ndlp->kref));
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;

 phba->fc_stat.elsXmitACC++;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
@@ -5750,7 +5750,7 @@ lpfc_els_rsp_adisc_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb,
 ndlp->nlp_DID, ndlp->nlp_flag, kref_read(&ndlp->kref));

 phba->fc_stat.elsXmitACC++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
 lpfc_els_free_iocb(phba, elsiocb);
@@ -5924,7 +5924,7 @@ lpfc_els_rsp_prli_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb,
 ndlp->nlp_DID, ndlp->nlp_flag, kref_read(&ndlp->kref));

 phba->fc_stat.elsXmitACC++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
 lpfc_els_free_iocb(phba, elsiocb);
@@ -6025,7 +6025,7 @@ lpfc_els_rsp_rnid_acc(struct lpfc_vport *vport, uint8_t format,
 ndlp->nlp_DID, ndlp->nlp_flag, kref_read(&ndlp->kref));

 phba->fc_stat.elsXmitACC++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
 lpfc_els_free_iocb(phba, elsiocb);
@@ -6139,7 +6139,7 @@ lpfc_els_rsp_echo_acc(struct lpfc_vport *vport, uint8_t *data,
 ndlp->nlp_DID, ndlp->nlp_flag, kref_read(&ndlp->kref));

 phba->fc_stat.elsXmitACC++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
 lpfc_els_free_iocb(phba, elsiocb);
@@ -6803,7 +6803,7 @@ lpfc_els_rdp_cmpl(struct lpfc_hba *phba, struct lpfc_rdp_context *rdp_context,
 rdp_context->page_a0, vport);

 rdp_res->length = cpu_to_be32(len - 8);
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;

 /* Now that we know the true size of the payload, update the BPL */
 bpl = (struct ulp_bde64 *)
@@ -6844,7 +6844,7 @@ error:
 stat->un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;

 phba->fc_stat.elsXmitLSRJT++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
 lpfc_els_free_iocb(phba, elsiocb);
@@ -7066,7 +7066,7 @@ lpfc_els_lcb_rsp(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 lcb_res->capability = lcb_context->capability;
 lcb_res->lcb_frequency = lcb_context->frequency;
 lcb_res->lcb_duration = lcb_context->duration;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 phba->fc_stat.elsXmitACC++;

 elsiocb->context1 = lpfc_nlp_get(ndlp);
@@ -7105,7 +7105,7 @@ error:
 if (shdr_add_status == ADD_STATUS_OPERATION_ALREADY_ACTIVE)
 stat->un.b.lsRjtRsnCodeExp = LSEXP_CMD_IN_PROGRESS;

-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 phba->fc_stat.elsXmitLSRJT++;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
@@ -8172,7 +8172,7 @@ lpfc_els_rsp_rls_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 elsiocb->iotag, elsiocb->iocb.ulpContext,
 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
 ndlp->nlp_rpi);
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 phba->fc_stat.elsXmitACC++;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
@@ -8324,7 +8324,7 @@ lpfc_els_rcv_rtv(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
 ndlp->nlp_rpi,
 rtv_rsp->ratov, rtv_rsp->edtov, rtv_rsp->qtov);
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 phba->fc_stat.elsXmitACC++;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
@@ -8401,7 +8401,7 @@ lpfc_issue_els_rrq(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 "Issue RRQ: did:x%x",
 did, rrq->xritag, rrq->rxid);
 elsiocb->context_un.rrq = rrq;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rrq;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rrq;

 lpfc_nlp_get(ndlp);
 elsiocb->context1 = ndlp;
@@ -8507,7 +8507,7 @@ lpfc_els_rsp_rpl_acc(struct lpfc_vport *vport, uint16_t cmdsize,
 elsiocb->iotag, elsiocb->iocb.ulpContext,
 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
 ndlp->nlp_rpi);
-elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 phba->fc_stat.elsXmitACC++;
 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
@@ -8947,7 +8947,7 @@ lpfc_els_timeout_handler(struct lpfc_vport *vport)
 list_for_each_entry_safe(piocb, tmp_iocb, &pring->txcmplq, list) {
 cmd = &piocb->iocb;

-if ((piocb->cmd_flag & LPFC_IO_LIBDFC) != 0 ||
+if ((piocb->iocb_flag & LPFC_IO_LIBDFC) != 0 ||
 piocb->iocb.ulpCommand == CMD_ABORT_XRI_CN ||
 piocb->iocb.ulpCommand == CMD_CLOSE_XRI_CN)
 continue;
@@ -9060,13 +9060,13 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)

 /* First we need to issue aborts to outstanding cmds on txcmpl */
 list_for_each_entry_safe(piocb, tmp_iocb, &pring->txcmplq, list) {
-if (piocb->cmd_flag & LPFC_IO_LIBDFC)
+if (piocb->iocb_flag & LPFC_IO_LIBDFC)
 continue;

 if (piocb->vport != vport)
 continue;

-if (piocb->cmd_flag & LPFC_DRIVER_ABORTED)
+if (piocb->iocb_flag & LPFC_DRIVER_ABORTED)
 continue;

 /* On the ELS ring we can have ELS_REQUESTs or
@@ -9084,7 +9084,7 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
 * and avoid any retry logic.
 */
 if (phba->link_state == LPFC_LINK_DOWN)
-piocb->cmd_cmpl = lpfc_cmpl_els_link_down;
+piocb->iocb_cmpl = lpfc_cmpl_els_link_down;
 }
 if (cmd->ulpCommand == CMD_GEN_REQUEST64_CR)
 list_add_tail(&piocb->dlist, &abort_list);
@@ -9119,8 +9119,9 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
 list_for_each_entry_safe(piocb, tmp_iocb, &pring->txq, list) {
 cmd = &piocb->iocb;

-if (piocb->cmd_flag & LPFC_IO_LIBDFC)
+if (piocb->iocb_flag & LPFC_IO_LIBDFC) {
 continue;
+}

 /* Do not flush out the QUE_RING and ABORT/CLOSE iocbs */
 if (cmd->ulpCommand == CMD_QUE_RING_BUF_CN ||
@@ -9765,7 +9766,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 payload_len = elsiocb->iocb.unsli3.rcvsli3.acc_len;
 cmd = *payload;
 if ((phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) == 0)
-lpfc_sli3_post_buffer(phba, pring, 1);
+lpfc_post_buffer(phba, pring, 1);

 did = icmd->un.rcvels.remoteID;
 if (icmd->ulpStatus) {
@@ -10238,7 +10239,7 @@ lpfc_els_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 phba->fc_stat.NoRcvBuf++;
 /* Not enough posted buffers; Try posting more buffers */
 if (!(phba->sli3_options & LPFC_SLI3_HBQ_ENABLED))
-lpfc_sli3_post_buffer(phba, pring, 0);
+lpfc_post_buffer(phba, pring, 0);
 return;
 }

@@ -10874,7 +10875,7 @@ lpfc_issue_els_fdisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 lpfc_set_disctmo(vport);

 phba->fc_stat.elsXmitFDISC++;
-elsiocb->cmd_cmpl = lpfc_cmpl_els_fdisc;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_fdisc;

 lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
 "Issue FDISC: did:x%x",
@@ -10998,7 +10999,7 @@ lpfc_issue_els_npiv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 "Issue LOGO npiv did:x%x flg:x%x",
 ndlp->nlp_DID, ndlp->nlp_flag, 0);

-elsiocb->cmd_cmpl = lpfc_cmpl_els_npiv_logo;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_npiv_logo;
 spin_lock_irq(&ndlp->lock);
 ndlp->nlp_flag |= NLP_LOGO_SND;
 spin_unlock_irq(&ndlp->lock);
@@ -11083,9 +11084,9 @@ repeat:
 }
 spin_unlock_irqrestore(&phba->hbalock, iflags);
 if (iocb) {
-iocb->fabric_cmd_cmpl = iocb->cmd_cmpl;
-iocb->cmd_cmpl = lpfc_cmpl_fabric_iocb;
-iocb->cmd_flag |= LPFC_IO_FABRIC;
+iocb->fabric_iocb_cmpl = iocb->iocb_cmpl;
+iocb->iocb_cmpl = lpfc_cmpl_fabric_iocb;
+iocb->iocb_flag |= LPFC_IO_FABRIC;

 lpfc_debugfs_disc_trc(iocb->vport, LPFC_DISC_TRC_ELS_CMD,
 "Fabric sched1: ste:x%x",
@@ -11094,13 +11095,13 @@ repeat:
 ret = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, iocb, 0);

 if (ret == IOCB_ERROR) {
-iocb->cmd_cmpl = iocb->fabric_cmd_cmpl;
-iocb->fabric_cmd_cmpl = NULL;
-iocb->cmd_flag &= ~LPFC_IO_FABRIC;
+iocb->iocb_cmpl = iocb->fabric_iocb_cmpl;
+iocb->fabric_iocb_cmpl = NULL;
+iocb->iocb_flag &= ~LPFC_IO_FABRIC;
 cmd = &iocb->iocb;
 cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
 cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
-iocb->cmd_cmpl(phba, iocb, iocb);
+iocb->iocb_cmpl(phba, iocb, iocb);

 atomic_dec(&phba->fabric_iocb_count);
 goto repeat;
@@ -11156,8 +11157,8 @@ lpfc_block_fabric_iocbs(struct lpfc_hba *phba)
 * @rspiocb: pointer to lpfc response iocb data structure.
 *
 * This routine is the callback function that is put to the fabric iocb's
-* callback function pointer (iocb->cmd_cmpl). The original iocb's callback
-* function pointer has been stored in iocb->fabric_cmd_cmpl. This callback
+* callback function pointer (iocb->iocb_cmpl). The original iocb's callback
+* function pointer has been stored in iocb->fabric_iocb_cmpl. This callback
 * function first restores and invokes the original iocb's callback function
 * and then invokes the lpfc_resume_fabric_iocbs() routine to issue the next
 * fabric bound iocb from the driver internal fabric iocb list onto the wire.
@@ -11168,7 +11169,7 @@ lpfc_cmpl_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 {
 struct ls_rjt stat;

-WARN_ON((cmdiocb->cmd_flag & LPFC_IO_FABRIC) != LPFC_IO_FABRIC);
+BUG_ON((cmdiocb->iocb_flag & LPFC_IO_FABRIC) != LPFC_IO_FABRIC);

 switch (rspiocb->iocb.ulpStatus) {
 case IOSTAT_NPORT_RJT:
@@ -11194,10 +11195,10 @@ lpfc_cmpl_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,

 BUG_ON(atomic_read(&phba->fabric_iocb_count) == 0);

-cmdiocb->cmd_cmpl = cmdiocb->fabric_cmd_cmpl;
-cmdiocb->fabric_cmd_cmpl = NULL;
-cmdiocb->cmd_flag &= ~LPFC_IO_FABRIC;
-cmdiocb->cmd_cmpl(phba, cmdiocb, rspiocb);
+cmdiocb->iocb_cmpl = cmdiocb->fabric_iocb_cmpl;
+cmdiocb->fabric_iocb_cmpl = NULL;
+cmdiocb->iocb_flag &= ~LPFC_IO_FABRIC;
+cmdiocb->iocb_cmpl(phba, cmdiocb, rspiocb);

 atomic_dec(&phba->fabric_iocb_count);
 if (!test_bit(FABRIC_COMANDS_BLOCKED, &phba->bit_flags)) {
@@ -11248,9 +11249,9 @@ lpfc_issue_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *iocb)
 atomic_inc(&phba->fabric_iocb_count);
 spin_unlock_irqrestore(&phba->hbalock, iflags);
 if (ready) {
-iocb->fabric_cmd_cmpl = iocb->cmd_cmpl;
-iocb->cmd_cmpl = lpfc_cmpl_fabric_iocb;
-iocb->cmd_flag |= LPFC_IO_FABRIC;
+iocb->fabric_iocb_cmpl = iocb->iocb_cmpl;
+iocb->iocb_cmpl = lpfc_cmpl_fabric_iocb;
+iocb->iocb_flag |= LPFC_IO_FABRIC;

 lpfc_debugfs_disc_trc(iocb->vport, LPFC_DISC_TRC_ELS_CMD,
 "Fabric sched2: ste:x%x",
@@ -11259,9 +11260,9 @@ lpfc_issue_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *iocb)
 ret = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, iocb, 0);

 if (ret == IOCB_ERROR) {
-iocb->cmd_cmpl = iocb->fabric_cmd_cmpl;
-iocb->fabric_cmd_cmpl = NULL;
-iocb->cmd_flag &= ~LPFC_IO_FABRIC;
+iocb->iocb_cmpl = iocb->fabric_iocb_cmpl;
+iocb->fabric_iocb_cmpl = NULL;
+iocb->iocb_flag &= ~LPFC_IO_FABRIC;
 atomic_dec(&phba->fabric_iocb_count);
 }
 } else {
@@ -11654,7 +11655,7 @@ int lpfc_issue_els_qfpa(struct lpfc_vport *vport)
 *((u32 *)(pcmd)) = ELS_CMD_QFPA;
 pcmd += 4;

-elsiocb->cmd_cmpl = lpfc_cmpl_els_qfpa;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_qfpa;

 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
@@ -11737,7 +11738,7 @@ lpfc_vmid_uvem(struct lpfc_vport *vport,
 }
 inst_desc->word6 = cpu_to_be32(inst_desc->word6);

-elsiocb->cmd_cmpl = lpfc_cmpl_els_uvem;
+elsiocb->iocb_cmpl = lpfc_cmpl_els_uvem;

 elsiocb->context1 = lpfc_nlp_get(ndlp);
 if (!elsiocb->context1) {
@@ -60,13 +60,6 @@
 ((ptr)->name##_WORD = ((((value) & name##_MASK) << name##_SHIFT) | \
 ((ptr)->name##_WORD & ~(name##_MASK << name##_SHIFT))))

-#define get_wqe_reqtag(x) (((x)->wqe.words[9] >> 0) & 0xFFFF)
-
-#define get_job_ulpword(x, y) ((x)->iocb.un.ulpWord[y])
-
-#define set_job_ulpstatus(x, y) bf_set(lpfc_wcqe_c_status, &(x)->wcqe_cmpl, y)
-#define set_job_ulpword4(x, y) ((&(x)->wcqe_cmpl)->parameter = y)
-
 struct dma_address {
 uint32_t addr_lo;
 uint32_t addr_hi;
@@ -982,7 +982,7 @@ lpfc_hba_clean_txcmplq(struct lpfc_hba *phba)
 spin_lock_irq(&pring->ring_lock);
 list_for_each_entry_safe(piocb, next_iocb,
 &pring->txcmplq, list)
-piocb->cmd_flag &= ~LPFC_IO_ON_TXCMPLQ;
+piocb->iocb_flag &= ~LPFC_IO_ON_TXCMPLQ;
 list_splice_init(&pring->txcmplq, &completions);
 pring->txcmplq_cnt = 0;
 spin_unlock_irq(&pring->ring_lock);
@@ -2643,7 +2643,7 @@ lpfc_get_hba_model_desc(struct lpfc_hba *phba, uint8_t *mdp, uint8_t *descp)
 }

 /**
-* lpfc_sli3_post_buffer - Post IOCB(s) with DMA buffer descriptor(s) to a IOCB ring
+* lpfc_post_buffer - Post IOCB(s) with DMA buffer descriptor(s) to a IOCB ring
 * @phba: pointer to lpfc hba data structure.
 * @pring: pointer to a IOCB ring.
 * @cnt: the number of IOCBs to be posted to the IOCB ring.
@@ -2655,7 +2655,7 @@ lpfc_get_hba_model_desc(struct lpfc_hba *phba, uint8_t *mdp, uint8_t *descp)
 * The number of IOCBs NOT able to be posted to the IOCB ring.
 **/
 int
-lpfc_sli3_post_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, int cnt)
+lpfc_post_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, int cnt)
 {
 IOCB_t *icmd;
 struct lpfc_iocbq *iocb;
@@ -2761,7 +2761,7 @@ lpfc_post_rcv_buf(struct lpfc_hba *phba)
 struct lpfc_sli *psli = &phba->sli;

 /* Ring 0, ELS / CT buffers */
-lpfc_sli3_post_buffer(phba, &psli->sli3_ring[LPFC_ELS_RING], LPFC_BUF_RING0);
+lpfc_post_buffer(phba, &psli->sli3_ring[LPFC_ELS_RING], LPFC_BUF_RING0);
 /* Ring 2 - FCP no buffers needed */

 return 0;
@@ -4215,7 +4215,8 @@ lpfc_io_buf_replenish(struct lpfc_hba *phba, struct list_head *cbuf)
 qp = &phba->sli4_hba.hdwq[idx];
 lpfc_cmd->hdwq_no = idx;
 lpfc_cmd->hdwq = qp;
-lpfc_cmd->cur_iocbq.cmd_cmpl = NULL;
+lpfc_cmd->cur_iocbq.wqe_cmpl = NULL;
+lpfc_cmd->cur_iocbq.iocb_cmpl = NULL;
 spin_lock(&qp->io_buf_list_put_lock);
 list_add_tail(&lpfc_cmd->list,
 &qp->lpfc_io_buf_list_put);
@@ -11968,7 +11969,7 @@ lpfc_sli_enable_msi(struct lpfc_hba *phba)
 rc = pci_enable_msi(phba->pcidev);
 if (!rc)
 lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
-"0012 PCI enable MSI mode success.\n");
+"0462 PCI enable MSI mode success.\n");
 else {
 lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
 "0471 PCI enable MSI mode failed (%d)\n", rc);
@@ -2139,9 +2139,9 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 npr = NULL;
 nvpr = NULL;
 temp_ptr = lpfc_check_elscmpl_iocb(phba, cmdiocb, rspiocb);
-if (cmdiocb->cmd_flag & LPFC_PRLI_FCP_REQ)
+if (cmdiocb->iocb_flag & LPFC_PRLI_FCP_REQ)
 npr = (PRLI *) temp_ptr;
-else if (cmdiocb->cmd_flag & LPFC_PRLI_NVME_REQ)
+else if (cmdiocb->iocb_flag & LPFC_PRLI_NVME_REQ)
 nvpr = (struct lpfc_nvme_prli *) temp_ptr;

 irsp = &rspiocb->iocb;
@@ -352,12 +352,11 @@ __lpfc_nvme_ls_req_cmp(struct lpfc_hba *phba, struct lpfc_vport *vport,

 static void
 lpfc_nvme_ls_req_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe)
+struct lpfc_wcqe_complete *wcqe)
 {
 struct lpfc_vport *vport = cmdwqe->vport;
 struct lpfc_nvme_lport *lport;
 uint32_t status;
-struct lpfc_wcqe_complete *wcqe = &rspwqe->wcqe_cmpl;

 status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK;

@@ -381,7 +380,7 @@ lpfc_nvme_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
 struct lpfc_dmabuf *inp,
 struct nvmefc_ls_req *pnvme_lsreq,
 void (*cmpl)(struct lpfc_hba *, struct lpfc_iocbq *,
-struct lpfc_iocbq *),
+struct lpfc_wcqe_complete *),
 struct lpfc_nodelist *ndlp, uint32_t num_entry,
 uint32_t tmo, uint8_t retry)
 {
@@ -402,7 +401,7 @@ lpfc_nvme_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
 memset(wqe, 0, sizeof(union lpfc_wqe));

 genwqe->context3 = (uint8_t *)bmp;
-genwqe->cmd_flag |= LPFC_IO_NVME_LS;
+genwqe->iocb_flag |= LPFC_IO_NVME_LS;

 /* Save for completion so we can release these resources */
 genwqe->context1 = lpfc_nlp_get(ndlp);
@@ -433,7 +432,7 @@ lpfc_nvme_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
 first_len = xmit_len;
 }

-genwqe->num_bdes = num_entry;
+genwqe->rsvd2 = num_entry;
 genwqe->hba_wqidx = 0;

 /* Words 0 - 2 */
@@ -484,7 +483,8 @@ lpfc_nvme_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,


 /* Issue GEN REQ WQE for NPORT <did> */
-genwqe->cmd_cmpl = cmpl;
+genwqe->wqe_cmpl = cmpl;
+genwqe->iocb_cmpl = NULL;
 genwqe->drvrTimeout = tmo + LPFC_DRVR_TIMEOUT;
 genwqe->vport = vport;
 genwqe->retry = retry;
@@ -534,7 +534,7 @@ __lpfc_nvme_ls_req(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 struct nvmefc_ls_req *pnvme_lsreq,
 void (*gen_req_cmp)(struct lpfc_hba *phba,
 struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe))
+struct lpfc_wcqe_complete *wcqe))
 {
 struct lpfc_dmabuf *bmp;
 struct ulp_bde64 *bpl;
@@ -722,7 +722,7 @@ __lpfc_nvme_ls_abort(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 spin_lock(&pring->ring_lock);
 list_for_each_entry_safe(wqe, next_wqe, &pring->txcmplq, list) {
 if (wqe->context2 == pnvme_lsreq) {
-wqe->cmd_flag |= LPFC_DRIVER_ABORTED;
+wqe->iocb_flag |= LPFC_DRIVER_ABORTED;
 foundit = true;
 break;
 }
@@ -906,7 +906,7 @@ lpfc_nvme_adj_fcp_sgls(struct lpfc_vport *vport,


 /*
-* lpfc_nvme_io_cmd_cmpl - Complete an NVME-over-FCP IO
+* lpfc_nvme_io_cmd_wqe_cmpl - Complete an NVME-over-FCP IO
 *
 * Driver registers this routine as it io request handler. This
 * routine issues an fcp WQE with data from the @lpfc_nvme_fcpreq
@@ -917,12 +917,11 @@ lpfc_nvme_adj_fcp_sgls(struct lpfc_vport *vport,
 * TODO: What are the failure codes.
 **/
 static void
-lpfc_nvme_io_cmd_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
-struct lpfc_iocbq *pwqeOut)
+lpfc_nvme_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
+struct lpfc_wcqe_complete *wcqe)
 {
 struct lpfc_io_buf *lpfc_ncmd =
 (struct lpfc_io_buf *)pwqeIn->context1;
-struct lpfc_wcqe_complete *wcqe = &pwqeOut->wcqe_cmpl;
 struct lpfc_vport *vport = pwqeIn->vport;
 struct nvmefc_fcp_req *nCmd;
 struct nvme_fc_ersp_iu *ep;
@@ -1874,7 +1873,7 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport,
 }

 /* Don't abort IOs no longer on the pending queue. */
-if (!(nvmereq_wqe->cmd_flag & LPFC_IO_ON_TXCMPLQ)) {
+if (!(nvmereq_wqe->iocb_flag & LPFC_IO_ON_TXCMPLQ)) {
 lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
 "6142 NVME IO req x%px not queued - skipping "
 "abort req xri x%x\n",
@@ -1888,7 +1887,7 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport,
 nvmereq_wqe->hba_wqidx, pnvme_rport->port_id);

 /* Outstanding abort is in progress */
-if (nvmereq_wqe->cmd_flag & LPFC_DRIVER_ABORTED) {
+if (nvmereq_wqe->iocb_flag & LPFC_DRIVER_ABORTED) {
 lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
 "6144 Outstanding NVME I/O Abort Request "
 "still pending on nvme_fcreq x%px, "
@@ -1983,8 +1982,8 @@ lpfc_get_nvme_buf(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
 /* Setup key fields in buffer that may have been changed
 * if other protocols used this buffer.
 */
-pwqeq->cmd_flag = LPFC_IO_NVME;
-pwqeq->cmd_cmpl = lpfc_nvme_io_cmd_cmpl;
+pwqeq->iocb_flag = LPFC_IO_NVME;
+pwqeq->wqe_cmpl = lpfc_nvme_io_cmd_wqe_cmpl;
 lpfc_ncmd->start_time = jiffies;
 lpfc_ncmd->flags = 0;

@@ -2750,7 +2749,6 @@ lpfc_nvme_cancel_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
 if (phba->sli.sli_flag & LPFC_SLI_ACTIVE)
 bf_set(lpfc_wcqe_c_xb, wcqep, 1);

-memcpy(&pwqeIn->wcqe_cmpl, wcqep, sizeof(*wcqep));
-(pwqeIn->cmd_cmpl)(phba, pwqeIn, pwqeIn);
+(pwqeIn->wqe_cmpl)(phba, pwqeIn, wcqep);
 #endif
 }
@@ -234,7 +234,7 @@ int __lpfc_nvme_ls_req(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 struct nvmefc_ls_req *pnvme_lsreq,
 void (*gen_req_cmp)(struct lpfc_hba *phba,
 struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe));
+struct lpfc_wcqe_complete *wcqe));
 void __lpfc_nvme_ls_req_cmp(struct lpfc_hba *phba, struct lpfc_vport *vport,
 struct lpfc_iocbq *cmdwqe, struct lpfc_wcqe_complete *wcqe);
 int __lpfc_nvme_ls_abort(struct lpfc_vport *vport,
@@ -248,6 +248,6 @@ int __lpfc_nvme_xmt_ls_rsp(struct lpfc_async_xchg_ctx *axchg,
 struct nvmefc_ls_rsp *ls_rsp,
 void (*xmt_ls_rsp_cmp)(struct lpfc_hba *phba,
 struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe));
+struct lpfc_wcqe_complete *wcqe));
 void __lpfc_nvme_xmt_ls_rsp_cmp(struct lpfc_hba *phba,
-struct lpfc_iocbq *cmdwqe, struct lpfc_iocbq *rspwqe);
+struct lpfc_iocbq *cmdwqe, struct lpfc_wcqe_complete *wcqe);
@@ -285,7 +285,7 @@ lpfc_nvmet_defer_release(struct lpfc_hba *phba,
 * transmission of an NVME LS response.
 * @phba: Pointer to HBA context object.
 * @cmdwqe: Pointer to driver command WQE object.
-* @rspwqe: Pointer to driver response WQE object.
+* @wcqe: Pointer to driver response CQE object.
 *
 * The function is called from SLI ring event handler with no
 * lock held. The function frees memory resources used for the command
@@ -293,10 +293,9 @@ lpfc_nvmet_defer_release(struct lpfc_hba *phba,
 **/
 void
 __lpfc_nvme_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe)
+struct lpfc_wcqe_complete *wcqe)
 {
 struct lpfc_async_xchg_ctx *axchg = cmdwqe->context2;
-struct lpfc_wcqe_complete *wcqe = &rspwqe->wcqe_cmpl;
 struct nvmefc_ls_rsp *ls_rsp = &axchg->ls_rsp;
 uint32_t status, result;

@@ -332,7 +331,7 @@ __lpfc_nvme_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 * lpfc_nvmet_xmt_ls_rsp_cmp - Completion handler for LS Response
 * @phba: Pointer to HBA context object.
 * @cmdwqe: Pointer to driver command WQE object.
-* @rspwqe: Pointer to driver response WQE object.
+* @wcqe: Pointer to driver response CQE object.
 *
 * The function is called from SLI ring event handler with no
 * lock held. This function is the completion handler for NVME LS commands
@@ -341,11 +340,10 @@ __lpfc_nvme_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 **/
 static void
 lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe)
+struct lpfc_wcqe_complete *wcqe)
 {
 struct lpfc_nvmet_tgtport *tgtp;
 uint32_t status, result;
-struct lpfc_wcqe_complete *wcqe = &rspwqe->wcqe_cmpl;

 if (!phba->targetport)
 goto finish;
@@ -367,7 +365,7 @@ lpfc_nvmet_xmt_ls_rsp_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 }

 finish:
-__lpfc_nvme_xmt_ls_rsp_cmp(phba, cmdwqe, rspwqe);
+__lpfc_nvme_xmt_ls_rsp_cmp(phba, cmdwqe, wcqe);
 }

 /**
@@ -709,7 +707,7 @@ out:
 * lpfc_nvmet_xmt_fcp_op_cmp - Completion handler for FCP Response
 * @phba: Pointer to HBA context object.
 * @cmdwqe: Pointer to driver command WQE object.
-* @rspwqe: Pointer to driver response WQE object.
+* @wcqe: Pointer to driver response CQE object.
 *
 * The function is called from SLI ring event handler with no
 * lock held. This function is the completion handler for NVME FCP commands
@@ -717,13 +715,12 @@ out:
 **/
 static void
 lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe)
+struct lpfc_wcqe_complete *wcqe)
 {
 struct lpfc_nvmet_tgtport *tgtp;
 struct nvmefc_tgt_fcp_req *rsp;
 struct lpfc_async_xchg_ctx *ctxp;
 uint32_t status, result, op, start_clean, logerr;
-struct lpfc_wcqe_complete *wcqe = &rspwqe->wcqe_cmpl;
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 int id;
 #endif
@@ -820,7 +817,7 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 /* lpfc_nvmet_xmt_fcp_release() will recycle the context */
 } else {
 ctxp->entry_cnt++;
-start_clean = offsetof(struct lpfc_iocbq, cmd_flag);
+start_clean = offsetof(struct lpfc_iocbq, iocb_flag);
 memset(((char *)cmdwqe) + start_clean, 0,
 (sizeof(struct lpfc_iocbq) - start_clean));
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
@@ -865,7 +862,7 @@ __lpfc_nvme_xmt_ls_rsp(struct lpfc_async_xchg_ctx *axchg,
 struct nvmefc_ls_rsp *ls_rsp,
 void (*xmt_ls_rsp_cmp)(struct lpfc_hba *phba,
 struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe))
+struct lpfc_wcqe_complete *wcqe))
 {
 struct lpfc_hba *phba = axchg->phba;
 struct hbq_dmabuf *nvmebuf = (struct hbq_dmabuf *)axchg->rqb_buffer;
@@ -901,7 +898,7 @@ __lpfc_nvme_xmt_ls_rsp(struct lpfc_async_xchg_ctx *axchg,
 }

 /* Save numBdes for bpl2sgl */
-nvmewqeq->num_bdes = 1;
+nvmewqeq->rsvd2 = 1;
 nvmewqeq->hba_wqidx = 0;
 nvmewqeq->context3 = &dmabuf;
 dmabuf.virt = &bpl;
@@ -916,7 +913,8 @@ __lpfc_nvme_xmt_ls_rsp(struct lpfc_async_xchg_ctx *axchg,
 * be referenced after it returns back to this routine.
 */

-nvmewqeq->cmd_cmpl = xmt_ls_rsp_cmp;
+nvmewqeq->wqe_cmpl = xmt_ls_rsp_cmp;
+nvmewqeq->iocb_cmpl = NULL;
 nvmewqeq->context2 = axchg;

 lpfc_nvmeio_data(phba, "NVMEx LS RSP: xri x%x wqidx x%x len x%x\n",
@@ -1074,9 +1072,10 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
 goto aerr;
 }

-nvmewqeq->cmd_cmpl = lpfc_nvmet_xmt_fcp_op_cmp;
+nvmewqeq->wqe_cmpl = lpfc_nvmet_xmt_fcp_op_cmp;
+nvmewqeq->iocb_cmpl = NULL;
 nvmewqeq->context2 = ctxp;
-nvmewqeq->cmd_flag |= LPFC_IO_NVMET;
+nvmewqeq->iocb_flag |= LPFC_IO_NVMET;
 ctxp->wqeq->hba_wqidx = rsp->hwqid;

 lpfc_nvmeio_data(phba, "NVMET FCP CMND: xri x%x op x%x len x%x\n",
@@ -1276,7 +1275,7 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
 * lpfc_nvmet_ls_req_cmp - completion handler for a nvme ls request
 * @phba: Pointer to HBA context object
 * @cmdwqe: Pointer to driver command WQE object.
-* @rspwqe: Pointer to driver response WQE object.
+* @wcqe: Pointer to driver response CQE object.
 *
 * This function is the completion handler for NVME LS requests.
 * The function updates any states and statistics, then calls the
@@ -1284,9 +1283,8 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
 **/
 static void
 lpfc_nvmet_ls_req_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe)
+struct lpfc_wcqe_complete *wcqe)
 {
-struct lpfc_wcqe_complete *wcqe = &rspwqe->wcqe_cmpl;
 __lpfc_nvme_ls_req_cmp(phba, cmdwqe->vport, cmdwqe, wcqe);
 }

@@ -1583,7 +1581,7 @@ lpfc_nvmet_setup_io_context(struct lpfc_hba *phba)
 "6406 Ran out of NVMET iocb/WQEs\n");
 return -ENOMEM;
 }
-ctx_buf->iocbq->cmd_flag = LPFC_IO_NVMET;
+ctx_buf->iocbq->iocb_flag = LPFC_IO_NVMET;
 nvmewqe = ctx_buf->iocbq;
 wqe = &nvmewqe->wqe;

@@ -2029,10 +2027,8 @@ lpfc_nvmet_wqfull_flush(struct lpfc_hba *phba, struct lpfc_queue *wq,
 list_del(&nvmewqeq->list);
 spin_unlock_irqrestore(&pring->ring_lock,
 iflags);
-memcpy(&nvmewqeq->wcqe_cmpl, wcqep,
-sizeof(*wcqep));
 lpfc_nvmet_xmt_fcp_op_cmp(phba, nvmewqeq,
-nvmewqeq);
+wcqep);
 return;
 }
 continue;
@@ -2040,8 +2036,7 @@ lpfc_nvmet_wqfull_flush(struct lpfc_hba *phba, struct lpfc_queue *wq,
 /* Flush all IOs */
 list_del(&nvmewqeq->list);
 spin_unlock_irqrestore(&pring->ring_lock, iflags);
-memcpy(&nvmewqeq->wcqe_cmpl, wcqep, sizeof(*wcqep));
-lpfc_nvmet_xmt_fcp_op_cmp(phba, nvmewqeq, nvmewqeq);
+lpfc_nvmet_xmt_fcp_op_cmp(phba, nvmewqeq, wcqep);
 spin_lock_irqsave(&pring->ring_lock, iflags);
 }
 }
@@ -2681,7 +2676,7 @@ lpfc_nvmet_prep_ls_wqe(struct lpfc_hba *phba,
 nvmewqe->retry = 1;
 nvmewqe->vport = phba->pport;
 nvmewqe->drvrTimeout = (phba->fc_ratov * 3) + LPFC_DRVR_TIMEOUT;
-nvmewqe->cmd_flag |= LPFC_IO_NVME_LS;
+nvmewqe->iocb_flag |= LPFC_IO_NVME_LS;

 /* Xmit NVMET response to remote NPORT <did> */
 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
@@ -3038,7 +3033,7 @@ lpfc_nvmet_prep_fcp_wqe(struct lpfc_hba *phba,
 * lpfc_nvmet_sol_fcp_abort_cmp - Completion handler for ABTS
 * @phba: Pointer to HBA context object.
 * @cmdwqe: Pointer to driver command WQE object.
-* @rspwqe: Pointer to driver response WQE object.
+* @wcqe: Pointer to driver response CQE object.
 *
 * The function is called from SLI ring event handler with no
 * lock held. This function is the completion handler for NVME ABTS for FCP cmds
@@ -3046,14 +3041,13 @@ lpfc_nvmet_prep_fcp_wqe(struct lpfc_hba *phba,
 **/
 static void
 lpfc_nvmet_sol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe)
+struct lpfc_wcqe_complete *wcqe)
 {
 struct lpfc_async_xchg_ctx *ctxp;
 struct lpfc_nvmet_tgtport *tgtp;
 uint32_t result;
 unsigned long flags;
 bool released = false;
-struct lpfc_wcqe_complete *wcqe = &rspwqe->wcqe_cmpl;

 ctxp = cmdwqe->context2;
 result = wcqe->parameter;
@@ -3108,7 +3102,7 @@ lpfc_nvmet_sol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 * lpfc_nvmet_unsol_fcp_abort_cmp - Completion handler for ABTS
 * @phba: Pointer to HBA context object.
 * @cmdwqe: Pointer to driver command WQE object.
-* @rspwqe: Pointer to driver response WQE object.
+* @wcqe: Pointer to driver response CQE object.
 *
 * The function is called from SLI ring event handler with no
 * lock held. This function is the completion handler for NVME ABTS for FCP cmds
@@ -3116,14 +3110,13 @@ lpfc_nvmet_sol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 **/
 static void
 lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe)
+struct lpfc_wcqe_complete *wcqe)
 {
 struct lpfc_async_xchg_ctx *ctxp;
 struct lpfc_nvmet_tgtport *tgtp;
 unsigned long flags;
 uint32_t result;
 bool released = false;
-struct lpfc_wcqe_complete *wcqe = &rspwqe->wcqe_cmpl;

 ctxp = cmdwqe->context2;
 result = wcqe->parameter;
@@ -3190,7 +3183,7 @@ lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 * lpfc_nvmet_xmt_ls_abort_cmp - Completion handler for ABTS
 * @phba: Pointer to HBA context object.
 * @cmdwqe: Pointer to driver command WQE object.
-* @rspwqe: Pointer to driver response WQE object.
+* @wcqe: Pointer to driver response CQE object.
 *
 * The function is called from SLI ring event handler with no
 * lock held. This function is the completion handler for NVME ABTS for LS cmds
@@ -3198,12 +3191,11 @@ lpfc_nvmet_unsol_fcp_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 **/
 static void
 lpfc_nvmet_xmt_ls_abort_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
-struct lpfc_iocbq *rspwqe)
+struct lpfc_wcqe_complete *wcqe)
 {
 struct lpfc_async_xchg_ctx *ctxp;
 struct lpfc_nvmet_tgtport *tgtp;
 uint32_t result;
-struct lpfc_wcqe_complete *wcqe = &rspwqe->wcqe_cmpl;

 ctxp = cmdwqe->context2;
 result = wcqe->parameter;
@@ -3327,7 +3319,7 @@ lpfc_nvmet_unsol_issue_abort(struct lpfc_hba *phba,
 abts_wqeq->context1 = ndlp;
 abts_wqeq->context2 = ctxp;
 abts_wqeq->context3 = NULL;
-abts_wqeq->num_bdes = 0;
+abts_wqeq->rsvd2 = 0;
 /* hba_wqidx should already be setup from command we are aborting */
 abts_wqeq->iocb.ulpCommand = CMD_XMIT_SEQUENCE64_CR;
 abts_wqeq->iocb.ulpLe = 1;
@@ -3456,7 +3448,7 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 }

 /* Outstanding abort is in progress */
-if (abts_wqeq->cmd_flag & LPFC_DRIVER_ABORTED) {
+if (abts_wqeq->iocb_flag & LPFC_DRIVER_ABORTED) {
 spin_unlock_irqrestore(&phba->hbalock, flags);
 atomic_inc(&tgtp->xmt_abort_rsp_error);
 lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
@@ -3471,14 +3463,15 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 }

 /* Ready - mark outstanding as aborted by driver. */
-abts_wqeq->cmd_flag |= LPFC_DRIVER_ABORTED;
+abts_wqeq->iocb_flag |= LPFC_DRIVER_ABORTED;

 lpfc_nvmet_prep_abort_wqe(abts_wqeq, ctxp->wqeq->sli4_xritag, opt);

 /* ABTS WQE must go to the same WQ as the WQE to be aborted */
 abts_wqeq->hba_wqidx = ctxp->wqeq->hba_wqidx;
-abts_wqeq->cmd_cmpl = lpfc_nvmet_sol_fcp_abort_cmp;
-abts_wqeq->cmd_flag |= LPFC_IO_NVME;
+abts_wqeq->wqe_cmpl = lpfc_nvmet_sol_fcp_abort_cmp;
+abts_wqeq->iocb_cmpl = NULL;
+abts_wqeq->iocb_flag |= LPFC_IO_NVME;
 abts_wqeq->context2 = ctxp;
 abts_wqeq->vport = phba->pport;
 if (!ctxp->hdwq)
@@ -3535,8 +3528,9 @@ lpfc_nvmet_unsol_fcp_issue_abort(struct lpfc_hba *phba,

 spin_lock_irqsave(&phba->hbalock, flags);
 abts_wqeq = ctxp->wqeq;
-abts_wqeq->cmd_cmpl = lpfc_nvmet_unsol_fcp_abort_cmp;
-abts_wqeq->cmd_flag |= LPFC_IO_NVMET;
+abts_wqeq->wqe_cmpl = lpfc_nvmet_unsol_fcp_abort_cmp;
+abts_wqeq->iocb_cmpl = NULL;
+abts_wqeq->iocb_flag |= LPFC_IO_NVMET;
 if (!ctxp->hdwq)
 ctxp->hdwq = &phba->sli4_hba.hdwq[abts_wqeq->hba_wqidx];

@@ -3620,8 +3614,9 @@ lpfc_nvme_unsol_ls_issue_abort(struct lpfc_hba *phba,
 }

 spin_lock_irqsave(&phba->hbalock, flags);
-abts_wqeq->cmd_cmpl = lpfc_nvmet_xmt_ls_abort_cmp;
-abts_wqeq->cmd_flag |= LPFC_IO_NVME_LS;
+abts_wqeq->wqe_cmpl = lpfc_nvmet_xmt_ls_abort_cmp;
+abts_wqeq->iocb_cmpl = NULL;
+abts_wqeq->iocb_flag |= LPFC_IO_NVME_LS;
 rc = lpfc_sli4_issue_wqe(phba, ctxp->hdwq, abts_wqeq);
 spin_unlock_irqrestore(&phba->hbalock, flags);
 if (rc == WQE_SUCCESS) {
@@ -362,7 +362,7 @@ lpfc_new_scsi_buf_s3(struct lpfc_vport *vport, int num_to_alloc)
 kfree(psb);
 break;
 }
-psb->cur_iocbq.cmd_flag |= LPFC_IO_FCP;
+psb->cur_iocbq.iocb_flag |= LPFC_IO_FCP;

 psb->fcp_cmnd = psb->data;
 psb->fcp_rsp = psb->data + sizeof(struct fcp_cmnd);
@@ -468,7 +468,7 @@ lpfc_sli4_vport_delete_fcp_xri_aborted(struct lpfc_vport *vport)
 spin_lock(&qp->abts_io_buf_list_lock);
 list_for_each_entry_safe(psb, next_psb,
 &qp->lpfc_abts_io_buf_list, list) {
-if (psb->cur_iocbq.cmd_flag & LPFC_IO_NVME)
+if (psb->cur_iocbq.iocb_flag & LPFC_IO_NVME)
 continue;

 if (psb->rdata && psb->rdata->pnode &&
@@ -524,7 +524,7 @@ lpfc_sli4_io_xri_aborted(struct lpfc_hba *phba,
 list_del_init(&psb->list);
 psb->flags &= ~LPFC_SBUF_XBUSY;
 psb->status = IOSTAT_SUCCESS;
-if (psb->cur_iocbq.cmd_flag & LPFC_IO_NVME) {
+if (psb->cur_iocbq.iocb_flag & LPFC_IO_NVME) {
 qp->abts_nvme_io_bufs--;
 spin_unlock(&qp->abts_io_buf_list_lock);
 spin_unlock_irqrestore(&phba->hbalock, iflag);
@@ -571,7 +571,7 @@ lpfc_sli4_io_xri_aborted(struct lpfc_hba *phba,
 * for command completion wake up the thread.
 */
 spin_lock_irqsave(&psb->buf_lock, iflag);
-psb->cur_iocbq.cmd_flag &=
+psb->cur_iocbq.iocb_flag &=
 ~LPFC_DRIVER_ABORTED;
 if (psb->waitq)
 wake_up(psb->waitq);
@@ -593,8 +593,8 @@ lpfc_sli4_io_xri_aborted(struct lpfc_hba *phba,
 for (i = 1; i <= phba->sli.last_iotag; i++) {
 iocbq = phba->sli.iocbq_lookup[i];

-if (!(iocbq->cmd_flag & LPFC_IO_FCP) ||
-(iocbq->cmd_flag & LPFC_IO_LIBDFC))
+if (!(iocbq->iocb_flag & LPFC_IO_FCP) ||
+(iocbq->iocb_flag & LPFC_IO_LIBDFC))
 continue;
 if (iocbq->sli4_xritag != xri)
 continue;
@@ -695,7 +695,7 @@ lpfc_get_scsi_buf_s4(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
 /* Setup key fields in buffer that may have been changed
 * if other protocols used this buffer.
 */
-lpfc_cmd->cur_iocbq.cmd_flag = LPFC_IO_FCP;
+lpfc_cmd->cur_iocbq.iocb_flag = LPFC_IO_FCP;
 lpfc_cmd->prot_seg_cnt = 0;
 lpfc_cmd->seg_cnt = 0;
 lpfc_cmd->timeout = 0;
@@ -783,7 +783,7 @@ lpfc_release_scsi_buf_s3(struct lpfc_hba *phba, struct lpfc_io_buf *psb)

 spin_lock_irqsave(&phba->scsi_buf_list_put_lock, iflag);
 psb->pCmd = NULL;
-psb->cur_iocbq.cmd_flag = LPFC_IO_FCP;
+psb->cur_iocbq.iocb_flag = LPFC_IO_FCP;
 list_add_tail(&psb->list, &phba->lpfc_scsi_buf_list_put);
 spin_unlock_irqrestore(&phba->scsi_buf_list_put_lock, iflag);
 }
@@ -931,7 +931,7 @@ lpfc_scsi_prep_dma_buf_s3(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd)
 physaddr = sg_dma_address(sgel);
 if (phba->sli_rev == 3 &&
 !(phba->sli3_options & LPFC_SLI3_BG_ENABLED) &&
-!(iocbq->cmd_flag & DSS_SECURITY_OP) &&
+!(iocbq->iocb_flag & DSS_SECURITY_OP) &&
 nseg <= LPFC_EXT_DATA_BDE_COUNT) {
 data_bde->tus.f.bdeFlags = BUFF_TYPE_BDE_64;
 data_bde->tus.f.bdeSize = sg_dma_len(sgel);
@@ -959,7 +959,7 @@ lpfc_scsi_prep_dma_buf_s3(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd)
 */
 if (phba->sli_rev == 3 &&
 !(phba->sli3_options & LPFC_SLI3_BG_ENABLED) &&
-!(iocbq->cmd_flag & DSS_SECURITY_OP)) {
+!(iocbq->iocb_flag & DSS_SECURITY_OP)) {
 if (num_bde > LPFC_EXT_DATA_BDE_COUNT) {
 /*
 * The extended IOCB format can only fit 3 BDE or a BPL.
@@ -2942,59 +2942,155 @@ out:
 * -1 - Internal error (bad profile, ...etc)
 */
 static int
-lpfc_parse_bg_err(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd,
-struct lpfc_iocbq *pIocbOut)
+lpfc_sli4_parse_bg_err(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd,
+struct lpfc_wcqe_complete *wcqe)
 {
 struct scsi_cmnd *cmd = lpfc_cmd->pCmd;
-struct sli3_bg_fields *bgf;
 int ret = 0;
-struct lpfc_wcqe_complete *wcqe;
-u32 status;
+u32 status = bf_get(lpfc_wcqe_c_status, wcqe);
 u32 bghm = 0;
 u32 bgstat = 0;
 u64 failing_sector = 0;

-if (phba->sli_rev == LPFC_SLI_REV4) {
-wcqe = &pIocbOut->wcqe_cmpl;
-status = bf_get(lpfc_wcqe_c_status, wcqe);
-if (status == CQE_STATUS_DI_ERROR) {
-if (bf_get(lpfc_wcqe_c_bg_ge, wcqe)) /* Guard Check failed */
-bgstat |= BGS_GUARD_ERR_MASK;
-if (bf_get(lpfc_wcqe_c_bg_ae, wcqe)) /* AppTag Check failed */
-bgstat |= BGS_APPTAG_ERR_MASK;
-if (bf_get(lpfc_wcqe_c_bg_re, wcqe)) /* RefTag Check failed */
-bgstat |= BGS_REFTAG_ERR_MASK;
-
+if (status == CQE_STATUS_DI_ERROR) {
+/* Guard Check failed */
+if (bf_get(lpfc_wcqe_c_bg_ge, wcqe))
+bgstat |= BGS_GUARD_ERR_MASK;
+
+/* AppTag Check failed */
+if (bf_get(lpfc_wcqe_c_bg_ae, wcqe))
+bgstat |= BGS_APPTAG_ERR_MASK;
+
+/* RefTag Check failed */
+if (bf_get(lpfc_wcqe_c_bg_re, wcqe))
+bgstat |= BGS_REFTAG_ERR_MASK;
+
-/* Check to see if there was any good data before the
-* error
-*/
-if (bf_get(lpfc_wcqe_c_bg_tdpv, wcqe)) {
-bgstat |= BGS_HI_WATER_MARK_PRESENT_MASK;
-bghm = wcqe->total_data_placed;
-}
-
-/*
-* Set ALL the error bits to indicate we don't know what
-* type of error it is.
-*/
-if (!bgstat)
-bgstat |= (BGS_REFTAG_ERR_MASK |
-BGS_APPTAG_ERR_MASK |
-BGS_GUARD_ERR_MASK);
+/* Check to see if there was any good data before the error */
+if (bf_get(lpfc_wcqe_c_bg_tdpv, wcqe)) {
+bgstat |= BGS_HI_WATER_MARK_PRESENT_MASK;
+bghm = wcqe->total_data_placed;
+}
+
-} else {
-bgf = &pIocbOut->iocb.unsli3.sli3_bg;
-bghm = bgf->bghm;
-bgstat = bgf->bgstat;
+/*
+* Set ALL the error bits to indicate we don't know what
+* type of error it is.
+*/
+if (!bgstat)
+bgstat |= (BGS_REFTAG_ERR_MASK | BGS_APPTAG_ERR_MASK |
+BGS_GUARD_ERR_MASK);
 }

+if (lpfc_bgs_get_guard_err(bgstat)) {
+ret = 1;
+
+scsi_build_sense(cmd, 1, ILLEGAL_REQUEST, 0x10, 0x1);
+set_host_byte(cmd, DID_ABORT);
+phba->bg_guard_err_cnt++;
+lpfc_printf_log(phba, KERN_WARNING, LOG_FCP | LOG_BG,
+"9059 BLKGRD: Guard Tag error in cmd"
+" 0x%x lba 0x%llx blk cnt 0x%x "
+"bgstat=x%x bghm=x%x\n", cmd->cmnd[0],
+(unsigned long long)scsi_get_lba(cmd),
+scsi_logical_block_count(cmd), bgstat, bghm);
+}
+
+if (lpfc_bgs_get_reftag_err(bgstat)) {
+ret = 1;
+
+scsi_build_sense(cmd, 1, ILLEGAL_REQUEST, 0x10, 0x3);
+set_host_byte(cmd, DID_ABORT);
+
+phba->bg_reftag_err_cnt++;
+lpfc_printf_log(phba, KERN_WARNING, LOG_FCP | LOG_BG,
+"9060 BLKGRD: Ref Tag error in cmd"
+" 0x%x lba 0x%llx blk cnt 0x%x "
+"bgstat=x%x bghm=x%x\n", cmd->cmnd[0],
+(unsigned long long)scsi_get_lba(cmd),
+scsi_logical_block_count(cmd), bgstat, bghm);
+}
+
+if (lpfc_bgs_get_apptag_err(bgstat)) {
+ret = 1;
+
+scsi_build_sense(cmd, 1, ILLEGAL_REQUEST, 0x10, 0x2);
+set_host_byte(cmd, DID_ABORT);
+
+phba->bg_apptag_err_cnt++;
+lpfc_printf_log(phba, KERN_WARNING, LOG_FCP | LOG_BG,
+"9062 BLKGRD: App Tag error in cmd"
+" 0x%x lba 0x%llx blk cnt 0x%x "
+"bgstat=x%x bghm=x%x\n", cmd->cmnd[0],
+(unsigned long long)scsi_get_lba(cmd),
+scsi_logical_block_count(cmd), bgstat, bghm);
+}
+
+if (lpfc_bgs_get_hi_water_mark_present(bgstat)) {
+/*
+* setup sense data descriptor 0 per SPC-4 as an information
+* field, and put the failing LBA in it.
+* This code assumes there was also a guard/app/ref tag error
+* indication.
+*/
+cmd->sense_buffer[7] = 0xc;   /* Additional sense length */
+cmd->sense_buffer[8] = 0;     /* Information descriptor type */
+cmd->sense_buffer[9] = 0xa;   /* Additional descriptor length */
+cmd->sense_buffer[10] = 0x80; /* Validity bit */
+
+/* bghm is a "on the wire" FC frame based count */
+switch (scsi_get_prot_op(cmd)) {
+case SCSI_PROT_READ_INSERT:
+case SCSI_PROT_WRITE_STRIP:
+bghm /= cmd->device->sector_size;
+break;
+case SCSI_PROT_READ_STRIP:
+case SCSI_PROT_WRITE_INSERT:
+case SCSI_PROT_READ_PASS:
+case SCSI_PROT_WRITE_PASS:
+bghm /= (cmd->device->sector_size +
+sizeof(struct scsi_dif_tuple));
+break;
+}
+
+failing_sector = scsi_get_lba(cmd);
+failing_sector += bghm;
+
+/* Descriptor Information */
+put_unaligned_be64(failing_sector, &cmd->sense_buffer[12]);
+}
+
+if (!ret) {
+/* No error was reported - problem in FW? */
+lpfc_printf_log(phba, KERN_WARNING, LOG_FCP | LOG_BG,
+"9068 BLKGRD: Unknown error in cmd"
+" 0x%x lba 0x%llx blk cnt 0x%x "
+"bgstat=x%x bghm=x%x\n", cmd->cmnd[0],
+(unsigned long long)scsi_get_lba(cmd),
+scsi_logical_block_count(cmd), bgstat, bghm);
+
+/* Calculate what type of error it was */
+lpfc_calc_bg_err(phba, lpfc_cmd);
+}
+return ret;
+}
+
+/*
+* This function checks for BlockGuard errors detected by
+* the HBA. In case of errors, the ASC/ASCQ fields in the
+* sense buffer will be set accordingly, paired with
+* ILLEGAL_REQUEST to signal to the kernel that the HBA
+* detected corruption.
+*
+* Returns:
+* 0 - No error found
+* 1 - BlockGuard error found
+* -1 - Internal error (bad profile, ...etc)
+*/
+static int
+lpfc_parse_bg_err(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd,
+struct lpfc_iocbq *pIocbOut)
+{
+struct scsi_cmnd *cmd = lpfc_cmd->pCmd;
+struct sli3_bg_fields *bgf = &pIocbOut->iocb.unsli3.sli3_bg;
+int ret = 0;
+uint32_t bghm = bgf->bghm;
+uint32_t bgstat = bgf->bgstat;
+uint64_t failing_sector = 0;

 if (lpfc_bgs_get_invalid_prof(bgstat)) {
 cmd->result = DID_ERROR << 16;
 lpfc_printf_log(phba, KERN_WARNING, LOG_FCP | LOG_BG,
@@ -3021,6 +3117,7 @@ lpfc_parse_bg_err(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd,
|
||||
|
||||
if (lpfc_bgs_get_guard_err(bgstat)) {
|
||||
ret = 1;
|
||||
|
||||
scsi_build_sense(cmd, 1, ILLEGAL_REQUEST, 0x10, 0x1);
|
||||
set_host_byte(cmd, DID_ABORT);
|
||||
phba->bg_guard_err_cnt++;
|
||||
@@ -3034,8 +3131,10 @@ lpfc_parse_bg_err(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd,
|
||||
|
||||
if (lpfc_bgs_get_reftag_err(bgstat)) {
|
||||
ret = 1;
|
||||
|
||||
scsi_build_sense(cmd, 1, ILLEGAL_REQUEST, 0x10, 0x3);
|
||||
set_host_byte(cmd, DID_ABORT);
|
||||
|
||||
phba->bg_reftag_err_cnt++;
|
||||
lpfc_printf_log(phba, KERN_WARNING, LOG_FCP | LOG_BG,
|
||||
"9056 BLKGRD: Ref Tag error in cmd "
|
||||
@@ -3047,8 +3146,10 @@ lpfc_parse_bg_err(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd,
|
||||
|
||||
if (lpfc_bgs_get_apptag_err(bgstat)) {
|
||||
ret = 1;
|
||||
|
||||
scsi_build_sense(cmd, 1, ILLEGAL_REQUEST, 0x10, 0x2);
|
||||
set_host_byte(cmd, DID_ABORT);
|
||||
|
||||
phba->bg_apptag_err_cnt++;
|
||||
lpfc_printf_log(phba, KERN_WARNING, LOG_FCP | LOG_BG,
|
||||
"9061 BLKGRD: App Tag error in cmd "
|
||||
@@ -3333,7 +3434,7 @@ lpfc_scsi_prep_dma_buf_s4(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd)
|
||||
*/
|
||||
if ((phba->cfg_fof) && ((struct lpfc_device_data *)
|
||||
scsi_cmnd->device->hostdata)->oas_enabled) {
|
||||
lpfc_cmd->cur_iocbq.cmd_flag |= (LPFC_IO_OAS | LPFC_IO_FOF);
|
||||
lpfc_cmd->cur_iocbq.iocb_flag |= (LPFC_IO_OAS | LPFC_IO_FOF);
|
||||
lpfc_cmd->cur_iocbq.priority = ((struct lpfc_device_data *)
|
||||
scsi_cmnd->device->hostdata)->priority;
|
||||
|
||||
@@ -3490,15 +3591,15 @@ lpfc_bg_scsi_prep_dma_buf_s4(struct lpfc_hba *phba,
|
||||
switch (scsi_get_prot_op(scsi_cmnd)) {
|
||||
case SCSI_PROT_WRITE_STRIP:
|
||||
case SCSI_PROT_READ_STRIP:
|
||||
lpfc_cmd->cur_iocbq.cmd_flag |= LPFC_IO_DIF_STRIP;
|
||||
lpfc_cmd->cur_iocbq.iocb_flag |= LPFC_IO_DIF_STRIP;
|
||||
break;
|
||||
case SCSI_PROT_WRITE_INSERT:
|
||||
case SCSI_PROT_READ_INSERT:
|
||||
lpfc_cmd->cur_iocbq.cmd_flag |= LPFC_IO_DIF_INSERT;
|
||||
lpfc_cmd->cur_iocbq.iocb_flag |= LPFC_IO_DIF_INSERT;
|
||||
break;
|
||||
case SCSI_PROT_WRITE_PASS:
|
||||
case SCSI_PROT_READ_PASS:
|
||||
lpfc_cmd->cur_iocbq.cmd_flag |= LPFC_IO_DIF_PASS;
|
||||
lpfc_cmd->cur_iocbq.iocb_flag |= LPFC_IO_DIF_PASS;
|
||||
break;
|
||||
}
|
||||
|
||||
@@ -3529,7 +3630,7 @@ lpfc_bg_scsi_prep_dma_buf_s4(struct lpfc_hba *phba,
|
||||
*/
|
||||
if ((phba->cfg_fof) && ((struct lpfc_device_data *)
|
||||
scsi_cmnd->device->hostdata)->oas_enabled) {
|
||||
lpfc_cmd->cur_iocbq.cmd_flag |= (LPFC_IO_OAS | LPFC_IO_FOF);
|
||||
lpfc_cmd->cur_iocbq.iocb_flag |= (LPFC_IO_OAS | LPFC_IO_FOF);
|
||||
|
||||
/* Word 10 */
|
||||
bf_set(wqe_oas, &wqe->generic.wqe_com, 1);
|
||||
@@ -3539,14 +3640,14 @@ lpfc_bg_scsi_prep_dma_buf_s4(struct lpfc_hba *phba,
|
||||
}
|
||||
|
||||
/* Word 7. DIF Flags */
|
||||
if (lpfc_cmd->cur_iocbq.cmd_flag & LPFC_IO_DIF_PASS)
|
||||
if (lpfc_cmd->cur_iocbq.iocb_flag & LPFC_IO_DIF_PASS)
|
||||
bf_set(wqe_dif, &wqe->generic.wqe_com, LPFC_WQE_DIF_PASSTHRU);
|
||||
else if (lpfc_cmd->cur_iocbq.cmd_flag & LPFC_IO_DIF_STRIP)
|
||||
else if (lpfc_cmd->cur_iocbq.iocb_flag & LPFC_IO_DIF_STRIP)
bf_set(wqe_dif, &wqe->generic.wqe_com, LPFC_WQE_DIF_STRIP);
else if (lpfc_cmd->cur_iocbq.cmd_flag & LPFC_IO_DIF_INSERT)
else if (lpfc_cmd->cur_iocbq.iocb_flag & LPFC_IO_DIF_INSERT)
bf_set(wqe_dif, &wqe->generic.wqe_com, LPFC_WQE_DIF_INSERT);

lpfc_cmd->cur_iocbq.cmd_flag &= ~(LPFC_IO_DIF_PASS |
lpfc_cmd->cur_iocbq.iocb_flag &= ~(LPFC_IO_DIF_PASS |
LPFC_IO_DIF_STRIP | LPFC_IO_DIF_INSERT);

return 0;
@@ -4071,7 +4172,7 @@ lpfc_handle_fcp_err(struct lpfc_vport *vport, struct lpfc_io_buf *lpfc_cmd,
* lpfc_fcp_io_cmd_wqe_cmpl - Complete a FCP IO
* @phba: The hba for which this call is being executed.
* @pwqeIn: The command WQE for the scsi cmnd.
* @pwqeOut: Pointer to driver response WQE object.
* @wcqe: Pointer to driver response CQE object.
*
* This routine assigns scsi command result by looking into response WQE
* status field appropriately. This routine handles QUEUE FULL condition as
@@ -4079,11 +4180,10 @@ lpfc_handle_fcp_err(struct lpfc_vport *vport, struct lpfc_io_buf *lpfc_cmd,
**/
static void
lpfc_fcp_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
struct lpfc_iocbq *pwqeOut)
struct lpfc_wcqe_complete *wcqe)
{
struct lpfc_io_buf *lpfc_cmd =
(struct lpfc_io_buf *)pwqeIn->context1;
struct lpfc_wcqe_complete *wcqe = &pwqeOut->wcqe_cmpl;
struct lpfc_vport *vport = pwqeIn->vport;
struct lpfc_rport_data *rdata;
struct lpfc_nodelist *ndlp;
@@ -4093,6 +4193,7 @@ lpfc_fcp_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
struct Scsi_Host *shost;
u32 logit = LOG_FCP;
u32 status, idx;
unsigned long iflags = 0;
u32 lat;
u8 wait_xb_clr = 0;

@@ -4107,16 +4208,30 @@ lpfc_fcp_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
rdata = lpfc_cmd->rdata;
ndlp = rdata->pnode;

if (bf_get(lpfc_wcqe_c_xb, wcqe)) {
/* TOREMOVE - currently this flag is checked during
* the release of lpfc_iocbq. Remove once we move
* to lpfc_wqe_job construct.
*
* This needs to be done outside buf_lock
*/
spin_lock_irqsave(&phba->hbalock, iflags);
lpfc_cmd->cur_iocbq.iocb_flag |= LPFC_EXCHANGE_BUSY;
spin_unlock_irqrestore(&phba->hbalock, iflags);
}

/* Guard against abort handler being called at same time */
spin_lock(&lpfc_cmd->buf_lock);

/* Sanity check on return of outstanding command */
cmd = lpfc_cmd->pCmd;
if (!cmd) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"9042 I/O completion: Not an active IO\n");
spin_unlock(&lpfc_cmd->buf_lock);
lpfc_release_scsi_buf(phba, lpfc_cmd);
return;
}
/* Guard against abort handler being called at same time */
spin_lock(&lpfc_cmd->buf_lock);
idx = lpfc_cmd->cur_iocbq.hba_wqidx;
if (phba->sli4_hba.hdwq)
phba->sli4_hba.hdwq[idx].scsi_cstat.io_cmpls++;
@@ -4290,14 +4405,12 @@ lpfc_fcp_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
* This is a response for a BG enabled
* cmd. Parse BG error
*/
lpfc_parse_bg_err(phba, lpfc_cmd, pwqeOut);
lpfc_sli4_parse_bg_err(phba, lpfc_cmd,
wcqe);
break;
} else {
lpfc_printf_vlog(vport, KERN_WARNING,
LOG_BG,
"9040 non-zero BGSTAT "
"on unprotected cmd\n");
}
lpfc_printf_vlog(vport, KERN_WARNING, LOG_BG,
"9040 non-zero BGSTAT on unprotected cmd\n");
}
lpfc_printf_vlog(vport, KERN_WARNING, logit,
"9036 Local Reject FCP cmd x%x failed"
@@ -4394,7 +4507,7 @@ lpfc_fcp_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
* wake up the thread.
*/
spin_lock(&lpfc_cmd->buf_lock);
lpfc_cmd->cur_iocbq.cmd_flag &= ~LPFC_DRIVER_ABORTED;
lpfc_cmd->cur_iocbq.iocb_flag &= ~LPFC_DRIVER_ABORTED;
if (lpfc_cmd->waitq)
wake_up(lpfc_cmd->waitq);
spin_unlock(&lpfc_cmd->buf_lock);
@@ -4454,7 +4567,7 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
lpfc_cmd->status = pIocbOut->iocb.ulpStatus;
/* pick up SLI4 exchange busy status from HBA */
lpfc_cmd->flags &= ~LPFC_SBUF_XBUSY;
if (pIocbOut->cmd_flag & LPFC_EXCHANGE_BUSY)
if (pIocbOut->iocb_flag & LPFC_EXCHANGE_BUSY)
lpfc_cmd->flags |= LPFC_SBUF_XBUSY;

#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
@@ -4663,7 +4776,7 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
* wake up the thread.
*/
spin_lock(&lpfc_cmd->buf_lock);
lpfc_cmd->cur_iocbq.cmd_flag &= ~LPFC_DRIVER_ABORTED;
lpfc_cmd->cur_iocbq.iocb_flag &= ~LPFC_DRIVER_ABORTED;
if (lpfc_cmd->waitq)
wake_up(lpfc_cmd->waitq);
spin_unlock(&lpfc_cmd->buf_lock);
@@ -4741,8 +4854,8 @@ static int lpfc_scsi_prep_cmnd_buf_s3(struct lpfc_vport *vport,

piocbq->iocb.ulpClass = (pnode->nlp_fcp_info & 0x0f);
piocbq->context1 = lpfc_cmd;
if (!piocbq->cmd_cmpl)
piocbq->cmd_cmpl = lpfc_scsi_cmd_iocb_cmpl;
if (!piocbq->iocb_cmpl)
piocbq->iocb_cmpl = lpfc_scsi_cmd_iocb_cmpl;
piocbq->iocb.ulpTimeout = tmo;
piocbq->vport = vport;
return 0;
@@ -4855,7 +4968,7 @@ static int lpfc_scsi_prep_cmnd_buf_s4(struct lpfc_vport *vport,
pwqeq->vport = vport;
pwqeq->context1 = lpfc_cmd;
pwqeq->hba_wqidx = lpfc_cmd->hdwq_no;
pwqeq->cmd_cmpl = lpfc_fcp_io_cmd_wqe_cmpl;
pwqeq->wqe_cmpl = lpfc_fcp_io_cmd_wqe_cmpl;

return 0;
}
@@ -4902,7 +5015,7 @@ lpfc_scsi_prep_cmnd(struct lpfc_vport *vport, struct lpfc_io_buf *lpfc_cmd,
}

/**
* lpfc_scsi_prep_task_mgmt_cmd_s3 - Convert SLI3 scsi TM cmd to FCP info unit
* lpfc_scsi_prep_task_mgmt_cmd - Convert SLI3 scsi TM cmd to FCP info unit
* @vport: The virtual port for which this call is being executed.
* @lpfc_cmd: Pointer to lpfc_io_buf data structure.
* @lun: Logical unit number.
@@ -4916,9 +5029,10 @@ lpfc_scsi_prep_cmnd(struct lpfc_vport *vport, struct lpfc_io_buf *lpfc_cmd,
* 1 - Success
**/
static int
lpfc_scsi_prep_task_mgmt_cmd_s3(struct lpfc_vport *vport,
struct lpfc_io_buf *lpfc_cmd,
u64 lun, u8 task_mgmt_cmd)
lpfc_scsi_prep_task_mgmt_cmd(struct lpfc_vport *vport,
struct lpfc_io_buf *lpfc_cmd,
uint64_t lun,
uint8_t task_mgmt_cmd)
{
struct lpfc_iocbq *piocbq;
IOCB_t *piocb;
@@ -4939,10 +5053,15 @@ lpfc_scsi_prep_task_mgmt_cmd_s3(struct lpfc_vport *vport,
memset(fcp_cmnd, 0, sizeof(struct fcp_cmnd));
int_to_scsilun(lun, &fcp_cmnd->fcp_lun);
fcp_cmnd->fcpCntl2 = task_mgmt_cmd;
if (!(vport->phba->sli3_options & LPFC_SLI3_BG_ENABLED))
if (vport->phba->sli_rev == 3 &&
!(vport->phba->sli3_options & LPFC_SLI3_BG_ENABLED))
lpfc_fcpcmd_to_iocb(piocb->unsli3.fcp_ext.icd, fcp_cmnd);
piocb->ulpCommand = CMD_FCP_ICMND64_CR;
piocb->ulpContext = ndlp->nlp_rpi;
if (vport->phba->sli_rev == LPFC_SLI_REV4) {
piocb->ulpContext =
vport->phba->sli4_hba.rpi_ids[ndlp->nlp_rpi];
}
piocb->ulpFCP2Rcvy = (ndlp->nlp_fcp_info & NLP_FCP_2_DEVICE) ? 1 : 0;
piocb->ulpClass = (ndlp->nlp_fcp_info & 0x0f);
piocb->ulpPU = 0;
@@ -4958,79 +5077,8 @@ lpfc_scsi_prep_task_mgmt_cmd_s3(struct lpfc_vport *vport,
} else
piocb->ulpTimeout = lpfc_cmd->timeout;

return 1;
}

/**
* lpfc_scsi_prep_task_mgmt_cmd_s4 - Convert SLI4 scsi TM cmd to FCP info unit
* @vport: The virtual port for which this call is being executed.
* @lpfc_cmd: Pointer to lpfc_io_buf data structure.
* @lun: Logical unit number.
* @task_mgmt_cmd: SCSI task management command.
*
* This routine creates FCP information unit corresponding to @task_mgmt_cmd
* for device with SLI-4 interface spec.
*
* Return codes:
* 0 - Error
* 1 - Success
**/
static int
lpfc_scsi_prep_task_mgmt_cmd_s4(struct lpfc_vport *vport,
struct lpfc_io_buf *lpfc_cmd,
u64 lun, u8 task_mgmt_cmd)
{
struct lpfc_iocbq *pwqeq = &lpfc_cmd->cur_iocbq;
union lpfc_wqe128 *wqe = &pwqeq->wqe;
struct fcp_cmnd *fcp_cmnd;
struct lpfc_rport_data *rdata = lpfc_cmd->rdata;
struct lpfc_nodelist *ndlp = rdata->pnode;

if (!ndlp || ndlp->nlp_state != NLP_STE_MAPPED_NODE)
return 0;

pwqeq->vport = vport;
/* Initialize 64 bytes only */
memset(wqe, 0, sizeof(union lpfc_wqe128));

/* From the icmnd template, initialize words 4 - 11 */
memcpy(&wqe->words[4], &lpfc_icmnd_cmd_template.words[4],
sizeof(uint32_t) * 8);

fcp_cmnd = lpfc_cmd->fcp_cmnd;
/* Clear out any old data in the FCP command area */
memset(fcp_cmnd, 0, sizeof(struct fcp_cmnd));
int_to_scsilun(lun, &fcp_cmnd->fcp_lun);
fcp_cmnd->fcpCntl3 = 0;
fcp_cmnd->fcpCntl2 = task_mgmt_cmd;

bf_set(payload_offset_len, &wqe->fcp_icmd,
sizeof(struct fcp_cmnd) + sizeof(struct fcp_rsp));
bf_set(cmd_buff_len, &wqe->fcp_icmd, 0);
bf_set(wqe_ctxt_tag, &wqe->generic.wqe_com, /* ulpContext */
vport->phba->sli4_hba.rpi_ids[ndlp->nlp_rpi]);
bf_set(wqe_erp, &wqe->fcp_icmd.wqe_com,
((ndlp->nlp_fcp_info & NLP_FCP_2_DEVICE) ? 1 : 0));
bf_set(wqe_class, &wqe->fcp_icmd.wqe_com,
(ndlp->nlp_fcp_info & 0x0f));

/* ulpTimeout is only one byte */
if (lpfc_cmd->timeout > 0xff) {
/*
* Do not timeout the command at the firmware level.
* The driver will provide the timeout mechanism.
*/
bf_set(wqe_tmo, &wqe->fcp_icmd.wqe_com, 0);
} else {
bf_set(wqe_tmo, &wqe->fcp_icmd.wqe_com, lpfc_cmd->timeout);
}

lpfc_prep_embed_io(vport->phba, lpfc_cmd);
bf_set(wqe_xri_tag, &wqe->generic.wqe_com, pwqeq->sli4_xritag);
wqe->generic.wqe_com.abort_tag = pwqeq->iotag;
bf_set(wqe_reqtag, &wqe->generic.wqe_com, pwqeq->iotag);

lpfc_sli4_set_rsp_sgl_last(vport->phba, lpfc_cmd);
if (vport->phba->sli_rev == LPFC_SLI_REV4)
lpfc_sli4_set_rsp_sgl_last(vport->phba, lpfc_cmd);

return 1;
}
@@ -5057,8 +5105,6 @@ lpfc_scsi_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
phba->lpfc_release_scsi_buf = lpfc_release_scsi_buf_s3;
phba->lpfc_get_scsi_buf = lpfc_get_scsi_buf_s3;
phba->lpfc_scsi_prep_cmnd_buf = lpfc_scsi_prep_cmnd_buf_s3;
phba->lpfc_scsi_prep_task_mgmt_cmd =
lpfc_scsi_prep_task_mgmt_cmd_s3;
break;
case LPFC_PCI_DEV_OC:
phba->lpfc_scsi_prep_dma_buf = lpfc_scsi_prep_dma_buf_s4;
@@ -5066,8 +5112,6 @@ lpfc_scsi_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
phba->lpfc_release_scsi_buf = lpfc_release_scsi_buf_s4;
phba->lpfc_get_scsi_buf = lpfc_get_scsi_buf_s4;
phba->lpfc_scsi_prep_cmnd_buf = lpfc_scsi_prep_cmnd_buf_s4;
phba->lpfc_scsi_prep_task_mgmt_cmd =
lpfc_scsi_prep_task_mgmt_cmd_s4;
break;
default:
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
@@ -5546,7 +5590,6 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
{
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
struct lpfc_hba *phba = vport->phba;
struct lpfc_iocbq *cur_iocbq = NULL;
struct lpfc_rport_data *rdata;
struct lpfc_nodelist *ndlp;
struct lpfc_io_buf *lpfc_cmd;
@@ -5640,7 +5683,6 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
}
lpfc_cmd->rx_cmd_start = start;

cur_iocbq = &lpfc_cmd->cur_iocbq;
/*
* Store the midlayer's command structure for the completion phase
* and complete the command initialization.
@@ -5648,7 +5690,7 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
lpfc_cmd->pCmd = cmnd;
lpfc_cmd->rdata = rdata;
lpfc_cmd->ndlp = ndlp;
cur_iocbq->cmd_cmpl = NULL;
lpfc_cmd->cur_iocbq.iocb_cmpl = NULL;
cmnd->host_scribble = (unsigned char *)lpfc_cmd;

err = lpfc_scsi_prep_cmnd(vport, lpfc_cmd, ndlp);
@@ -5690,6 +5732,7 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
goto out_host_busy_free_buf;
}


/* check the necessary and sufficient condition to support VMID */
if (lpfc_is_vmid_enabled(phba) &&
(ndlp->vmid_support ||
@@ -5702,9 +5745,9 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
if (uuid) {
err = lpfc_vmid_get_appid(vport, uuid, cmnd,
(union lpfc_vmid_io_tag *)
&cur_iocbq->vmid_tag);
&lpfc_cmd->cur_iocbq.vmid_tag);
if (!err)
cur_iocbq->cmd_flag |= LPFC_IO_VMID;
lpfc_cmd->cur_iocbq.iocb_flag |= LPFC_IO_VMID;
}
}

@@ -5713,7 +5756,8 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
this_cpu_inc(phba->sli4_hba.c_stat->xmt_io);
#endif
/* Issue I/O to adapter */
err = lpfc_sli_issue_fcp_io(phba, LPFC_FCP_RING, cur_iocbq,
err = lpfc_sli_issue_fcp_io(phba, LPFC_FCP_RING,
&lpfc_cmd->cur_iocbq,
SLI_IOCB_RET_IOCB);
#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
if (start) {
@@ -5726,25 +5770,25 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
#endif
if (err) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
"3376 FCP could not issue iocb err %x "
"FCP cmd x%x <%d/%llu> "
"sid: x%x did: x%x oxid: x%x "
"Data: x%x x%x x%x x%x\n",
err, cmnd->cmnd[0],
cmnd->device ? cmnd->device->id : 0xffff,
cmnd->device ? cmnd->device->lun : (u64)-1,
vport->fc_myDID, ndlp->nlp_DID,
phba->sli_rev == LPFC_SLI_REV4 ?
cur_iocbq->sli4_xritag : 0xffff,
phba->sli_rev == LPFC_SLI_REV4 ?
phba->sli4_hba.rpi_ids[ndlp->nlp_rpi] :
cur_iocbq->iocb.ulpContext,
cur_iocbq->iotag,
phba->sli_rev == LPFC_SLI_REV4 ?
bf_get(wqe_tmo,
&cur_iocbq->wqe.generic.wqe_com) :
cur_iocbq->iocb.ulpTimeout,
(uint32_t)(scsi_cmd_to_rq(cmnd)->timeout / 1000));
"3376 FCP could not issue IOCB err %x "
"FCP cmd x%x <%d/%llu> "
"sid: x%x did: x%x oxid: x%x "
"Data: x%x x%x x%x x%x\n",
err, cmnd->cmnd[0],
cmnd->device ? cmnd->device->id : 0xffff,
cmnd->device ? cmnd->device->lun : (u64)-1,
vport->fc_myDID, ndlp->nlp_DID,
phba->sli_rev == LPFC_SLI_REV4 ?
lpfc_cmd->cur_iocbq.sli4_xritag : 0xffff,
phba->sli_rev == LPFC_SLI_REV4 ?
phba->sli4_hba.rpi_ids[ndlp->nlp_rpi] :
lpfc_cmd->cur_iocbq.iocb.ulpContext,
lpfc_cmd->cur_iocbq.iotag,
phba->sli_rev == LPFC_SLI_REV4 ?
bf_get(wqe_tmo,
&lpfc_cmd->cur_iocbq.wqe.generic.wqe_com) :
lpfc_cmd->cur_iocbq.iocb.ulpTimeout,
(uint32_t)(scsi_cmd_to_rq(cmnd)->timeout / 1000));

goto out_host_busy_free_buf;
}
@@ -5890,7 +5934,7 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
spin_lock(&pring_s4->ring_lock);
}
/* the command is in process of being cancelled */
if (!(iocb->cmd_flag & LPFC_IO_ON_TXCMPLQ)) {
if (!(iocb->iocb_flag & LPFC_IO_ON_TXCMPLQ)) {
lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
"3169 SCSI Layer abort requested I/O has been "
"cancelled by LLD.\n");
@@ -5913,7 +5957,7 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
BUG_ON(iocb->context1 != lpfc_cmd);

/* abort issued in recovery is still in progress */
if (iocb->cmd_flag & LPFC_DRIVER_ABORTED) {
if (iocb->iocb_flag & LPFC_DRIVER_ABORTED) {
lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
"3389 SCSI Layer I/O Abort Request is pending\n");
if (phba->sli_rev == LPFC_SLI_REV4)
@@ -5954,7 +5998,7 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)

wait_for_cmpl:
/*
* cmd_flag is set to LPFC_DRIVER_ABORTED before we wait
* iocb_flag is set to LPFC_DRIVER_ABORTED before we wait
* for abort to complete.
*/
wait_event_timeout(waitq,
@@ -6121,7 +6165,7 @@ lpfc_send_taskmgmt(struct lpfc_vport *vport, struct scsi_cmnd *cmnd,
return FAILED;
pnode = rdata->pnode;

lpfc_cmd = lpfc_get_scsi_buf(phba, rdata->pnode, NULL);
lpfc_cmd = lpfc_get_scsi_buf(phba, pnode, NULL);
if (lpfc_cmd == NULL)
return FAILED;
lpfc_cmd->timeout = phba->cfg_task_mgmt_tmo;
@@ -6129,8 +6173,8 @@ lpfc_send_taskmgmt(struct lpfc_vport *vport, struct scsi_cmnd *cmnd,
lpfc_cmd->pCmd = cmnd;
lpfc_cmd->ndlp = pnode;

status = phba->lpfc_scsi_prep_task_mgmt_cmd(vport, lpfc_cmd, lun_id,
task_mgmt_cmd);
status = lpfc_scsi_prep_task_mgmt_cmd(vport, lpfc_cmd, lun_id,
task_mgmt_cmd);
if (!status) {
lpfc_release_scsi_buf(phba, lpfc_cmd);
return FAILED;
@@ -6142,41 +6186,38 @@ lpfc_send_taskmgmt(struct lpfc_vport *vport, struct scsi_cmnd *cmnd,
lpfc_release_scsi_buf(phba, lpfc_cmd);
return FAILED;
}
iocbq->cmd_cmpl = lpfc_tskmgmt_def_cmpl;
iocbq->vport = vport;
iocbq->iocb_cmpl = lpfc_tskmgmt_def_cmpl;

lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
"0702 Issue %s to TGT %d LUN %llu "
"rpi x%x nlp_flag x%x Data: x%x x%x\n",
lpfc_taskmgmt_name(task_mgmt_cmd), tgt_id, lun_id,
pnode->nlp_rpi, pnode->nlp_flag, iocbq->sli4_xritag,
iocbq->cmd_flag);
iocbq->iocb_flag);

status = lpfc_sli_issue_iocb_wait(phba, LPFC_FCP_RING,
iocbq, iocbqrsp, lpfc_cmd->timeout);
if ((status != IOCB_SUCCESS) ||
(get_job_ulpstatus(phba, iocbqrsp) != IOSTAT_SUCCESS)) {
(iocbqrsp->iocb.ulpStatus != IOSTAT_SUCCESS)) {
if (status != IOCB_SUCCESS ||
get_job_ulpstatus(phba, iocbqrsp) != IOSTAT_FCP_RSP_ERROR)
iocbqrsp->iocb.ulpStatus != IOSTAT_FCP_RSP_ERROR)
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"0727 TMF %s to TGT %d LUN %llu "
"failed (%d, %d) cmd_flag x%x\n",
"failed (%d, %d) iocb_flag x%x\n",
lpfc_taskmgmt_name(task_mgmt_cmd),
tgt_id, lun_id,
get_job_ulpstatus(phba, iocbqrsp),
get_job_word4(phba, iocbqrsp),
iocbq->cmd_flag);
iocbqrsp->iocb.ulpStatus,
iocbqrsp->iocb.un.ulpWord[4],
iocbq->iocb_flag);
/* if ulpStatus != IOCB_SUCCESS, then status == IOCB_SUCCESS */
if (status == IOCB_SUCCESS) {
if (get_job_ulpstatus(phba, iocbqrsp) ==
IOSTAT_FCP_RSP_ERROR)
if (iocbqrsp->iocb.ulpStatus == IOSTAT_FCP_RSP_ERROR)
/* Something in the FCP_RSP was invalid.
* Check conditions */
ret = lpfc_check_fcp_rsp(vport, lpfc_cmd);
else
ret = FAILED;
} else if ((status == IOCB_TIMEDOUT) ||
(status == IOCB_ABORTED)) {
} else if (status == IOCB_TIMEDOUT) {
ret = TIMEOUT_ERROR;
} else {
ret = FAILED;
@@ -6186,7 +6227,7 @@ lpfc_send_taskmgmt(struct lpfc_vport *vport, struct scsi_cmnd *cmnd,

lpfc_sli_release_iocbq(phba, iocbqrsp);

if (status != IOCB_TIMEDOUT)
if (ret != TIMEOUT_ERROR)
lpfc_release_scsi_buf(phba, lpfc_cmd);

return ret;

File diff suppressed because it is too large
@@ -35,7 +35,7 @@ typedef enum _lpfc_ctx_cmd {
LPFC_CTX_HOST
} lpfc_ctx_cmd;

union lpfc_vmid_tag {
union lpfc_vmid_iocb_tag {
uint32_t app_id;
uint8_t cs_ctl_vmid;
struct lpfc_vmid_context *vmid_context; /* UVEM context information */
@@ -69,18 +69,16 @@ struct lpfc_iocbq {
uint16_t sli4_xritag; /* pre-assigned XRI, (OXID) tag. */
uint16_t hba_wqidx; /* index to HBA work queue */
struct lpfc_cq_event cq_event;
struct lpfc_wcqe_complete wcqe_cmpl; /* WQE cmpl */
uint64_t isr_timestamp;

union lpfc_wqe128 wqe; /* SLI-4 */
IOCB_t iocb; /* SLI-3 */
struct lpfc_wcqe_complete wcqe_cmpl; /* WQE cmpl */

uint8_t num_bdes;
uint8_t abort_bls; /* ABTS by initiator or responder */

uint8_t rsvd2;
uint8_t priority; /* OAS priority */
uint8_t retry; /* retry counter for IOCB cmd - if needed */
u32 cmd_flag;
uint32_t iocb_flag;
#define LPFC_IO_LIBDFC 1 /* libdfc iocb */
#define LPFC_IO_WAKE 2 /* Synchronous I/O completed */
#define LPFC_IO_WAKE_TMO LPFC_IO_WAKE /* Synchronous I/O timed out */
@@ -125,13 +123,15 @@ struct lpfc_iocbq {
struct lpfc_node_rrq *rrq;
} context_un;

union lpfc_vmid_tag vmid_tag;
void (*fabric_cmd_cmpl)(struct lpfc_hba *phba, struct lpfc_iocbq *cmd,
struct lpfc_iocbq *rsp);
void (*wait_cmd_cmpl)(struct lpfc_hba *phba, struct lpfc_iocbq *cmd,
struct lpfc_iocbq *rsp);
void (*cmd_cmpl)(struct lpfc_hba *phba, struct lpfc_iocbq *cmd,
struct lpfc_iocbq *rsp);
union lpfc_vmid_iocb_tag vmid_tag;
void (*fabric_iocb_cmpl)(struct lpfc_hba *, struct lpfc_iocbq *,
struct lpfc_iocbq *);
void (*wait_iocb_cmpl)(struct lpfc_hba *, struct lpfc_iocbq *,
struct lpfc_iocbq *);
void (*iocb_cmpl)(struct lpfc_hba *, struct lpfc_iocbq *,
struct lpfc_iocbq *);
void (*wqe_cmpl)(struct lpfc_hba *, struct lpfc_iocbq *,
struct lpfc_wcqe_complete *);
};

#define SLI_IOCB_RET_IOCB 1 /* Return IOCB if cmd ring full */

@@ -3318,11 +3318,34 @@ struct fc_function_template qla2xxx_transport_vport_functions = {
.bsg_timeout = qla24xx_bsg_timeout,
};

static uint
qla2x00_get_host_supported_speeds(scsi_qla_host_t *vha, uint speeds)
{
uint supported_speeds = FC_PORTSPEED_UNKNOWN;

if (speeds & FDMI_PORT_SPEED_64GB)
supported_speeds |= FC_PORTSPEED_64GBIT;
if (speeds & FDMI_PORT_SPEED_32GB)
supported_speeds |= FC_PORTSPEED_32GBIT;
if (speeds & FDMI_PORT_SPEED_16GB)
supported_speeds |= FC_PORTSPEED_16GBIT;
if (speeds & FDMI_PORT_SPEED_8GB)
supported_speeds |= FC_PORTSPEED_8GBIT;
if (speeds & FDMI_PORT_SPEED_4GB)
supported_speeds |= FC_PORTSPEED_4GBIT;
if (speeds & FDMI_PORT_SPEED_2GB)
supported_speeds |= FC_PORTSPEED_2GBIT;
if (speeds & FDMI_PORT_SPEED_1GB)
supported_speeds |= FC_PORTSPEED_1GBIT;

return supported_speeds;
}

void
qla2x00_init_host_attr(scsi_qla_host_t *vha)
{
struct qla_hw_data *ha = vha->hw;
u32 speeds = FC_PORTSPEED_UNKNOWN;
u32 speeds = 0, fdmi_speed = 0;

fc_host_dev_loss_tmo(vha->host) = ha->port_down_retry_count;
fc_host_node_name(vha->host) = wwn_to_u64(vha->node_name);
@@ -3332,7 +3355,8 @@ qla2x00_init_host_attr(scsi_qla_host_t *vha)
fc_host_max_npiv_vports(vha->host) = ha->max_npiv_vports;
fc_host_npiv_vports_inuse(vha->host) = ha->cur_vport_count;

speeds = qla25xx_fdmi_port_speed_capability(ha);
fdmi_speed = qla25xx_fdmi_port_speed_capability(ha);
speeds = qla2x00_get_host_supported_speeds(vha, fdmi_speed);

fc_host_supported_speeds(vha->host) = speeds;
}

@@ -1072,6 +1072,7 @@ static blk_status_t sd_setup_write_same_cmnd(struct scsi_cmnd *cmd)
struct bio *bio = rq->bio;
u64 lba = sectors_to_logical(sdp, blk_rq_pos(rq));
u32 nr_blocks = sectors_to_logical(sdp, blk_rq_sectors(rq));
unsigned int nr_bytes = blk_rq_bytes(rq);
blk_status_t ret;

if (sdkp->device->no_write_same)
@@ -1108,7 +1109,7 @@ static blk_status_t sd_setup_write_same_cmnd(struct scsi_cmnd *cmd)
*/
rq->__data_len = sdp->sector_size;
ret = scsi_alloc_sgtables(cmd);
rq->__data_len = blk_rq_bytes(rq);
rq->__data_len = nr_bytes;

return ret;
}

@@ -962,8 +962,8 @@ sh_css_set_black_frame(struct ia_css_stream *stream,
params->fpn_config.data = NULL;
}
if (!params->fpn_config.data) {
params->fpn_config.data = kvmalloc(height * width *
sizeof(short), GFP_KERNEL);
params->fpn_config.data = kvmalloc(array3_size(height, width, sizeof(short)),
GFP_KERNEL);
if (!params->fpn_config.data) {
IA_CSS_ERROR("out of memory");
IA_CSS_LEAVE_ERR_PRIVATE(-ENOMEM);

@@ -342,6 +342,9 @@ static void omap8250_restore_regs(struct uart_8250_port *up)
omap8250_update_mdr1(up, priv);

up->port.ops->set_mctrl(&up->port, up->port.mctrl);

if (up->port.rs485.flags & SER_RS485_ENABLED)
serial8250_em485_stop_tx(up);
}

/*

@@ -1734,7 +1734,6 @@ static int pci_fintek_init(struct pci_dev *dev)
resource_size_t bar_data[3];
u8 config_base;
struct serial_private *priv = pci_get_drvdata(dev);
struct uart_8250_port *port;

if (!(pci_resource_flags(dev, 5) & IORESOURCE_IO) ||
!(pci_resource_flags(dev, 4) & IORESOURCE_IO) ||
@@ -1781,13 +1780,7 @@ static int pci_fintek_init(struct pci_dev *dev)

pci_write_config_byte(dev, config_base + 0x06, dev->irq);

if (priv) {
/* re-apply RS232/485 mode when
* pciserial_resume_ports()
*/
port = serial8250_get_port(priv->line[i]);
pci_fintek_rs485_config(&port->port, NULL);
} else {
if (!priv) {
/* First init without port data
* force init to RS232 Mode
*/

@@ -600,7 +600,7 @@ EXPORT_SYMBOL_GPL(serial8250_rpm_put);
static int serial8250_em485_init(struct uart_8250_port *p)
{
if (p->em485)
return 0;
goto deassert_rts;

p->em485 = kmalloc(sizeof(struct uart_8250_em485), GFP_ATOMIC);
if (!p->em485)
@@ -616,7 +616,9 @@ static int serial8250_em485_init(struct uart_8250_port *p)
p->em485->active_timer = NULL;
p->em485->tx_stopped = true;

p->rs485_stop_tx(p);
deassert_rts:
if (p->em485->tx_stopped)
p->rs485_stop_tx(p);

return 0;
}
@@ -2033,6 +2035,9 @@ EXPORT_SYMBOL_GPL(serial8250_do_set_mctrl);

static void serial8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
{
if (port->rs485.flags & SER_RS485_ENABLED)
return;

if (port->set_mctrl)
port->set_mctrl(port, mctrl);
else
@@ -3188,9 +3193,6 @@ static void serial8250_config_port(struct uart_port *port, int flags)
if (flags & UART_CONFIG_TYPE)
autoconfig(up);

if (port->rs485.flags & SER_RS485_ENABLED)
port->rs485_config(port, &port->rs485);

/* if access method is AU, it is a 16550 with a quirk */
if (port->type == PORT_16550A && port->iotype == UPIO_AU)
up->bugs |= UART_BUG_NOMSR;

@@ -2733,10 +2733,6 @@ static int lpuart_probe(struct platform_device *pdev)
if (ret)
goto failed_reset;

ret = uart_add_one_port(&lpuart_reg, &sport->port);
if (ret)
goto failed_attach_port;

ret = uart_get_rs485_mode(&sport->port);
if (ret)
goto failed_get_rs485;
@@ -2748,7 +2744,9 @@ static int lpuart_probe(struct platform_device *pdev)
sport->port.rs485.delay_rts_after_send)
dev_err(&pdev->dev, "driver doesn't support RTS delays\n");

sport->port.rs485_config(&sport->port, &sport->port.rs485);
ret = uart_add_one_port(&lpuart_reg, &sport->port);
if (ret)
goto failed_attach_port;

ret = devm_request_irq(&pdev->dev, sport->port.irq, handler, 0,
DRIVER_NAME, sport);

@@ -380,8 +380,7 @@ static void imx_uart_rts_active(struct imx_port *sport, u32 *ucr2)
{
*ucr2 &= ~(UCR2_CTSC | UCR2_CTS);

sport->port.mctrl |= TIOCM_RTS;
mctrl_gpio_set(sport->gpios, sport->port.mctrl);
mctrl_gpio_set(sport->gpios, sport->port.mctrl | TIOCM_RTS);
}

/* called with port.lock taken and irqs caller dependent */
@@ -390,8 +389,7 @@ static void imx_uart_rts_inactive(struct imx_port *sport, u32 *ucr2)
*ucr2 &= ~UCR2_CTSC;
*ucr2 |= UCR2_CTS;

sport->port.mctrl &= ~TIOCM_RTS;
mctrl_gpio_set(sport->gpios, sport->port.mctrl);
mctrl_gpio_set(sport->gpios, sport->port.mctrl & ~TIOCM_RTS);
}

static void start_hrtimer_ms(struct hrtimer *hrt, unsigned long msec)
@@ -2318,8 +2316,6 @@ static int imx_uart_probe(struct platform_device *pdev)
dev_err(&pdev->dev,
"low-active RTS not possible when receiver is off, enabling receiver\n");

imx_uart_rs485_config(&sport->port, &sport->port.rs485);

/* Disable interrupts before requesting them */
ucr1 = imx_uart_readl(sport, UCR1);
ucr1 &= ~(UCR1_ADEN | UCR1_TRDYEN | UCR1_IDEN | UCR1_RRDYEN | UCR1_RTSDEN);

@@ -42,6 +42,11 @@ static struct lock_class_key port_lock_key;

#define HIGH_BITS_OFFSET ((sizeof(long)-sizeof(int))*8)

/*
* Max time with active RTS before/after data is sent.
*/
#define RS485_MAX_RTS_DELAY 100 /* msecs */

static void uart_change_speed(struct tty_struct *tty, struct uart_state *state,
struct ktermios *old_termios);
static void uart_wait_until_sent(struct tty_struct *tty, int timeout);
@@ -144,15 +149,10 @@ uart_update_mctrl(struct uart_port *port, unsigned int set, unsigned int clear)
unsigned long flags;
unsigned int old;

if (port->rs485.flags & SER_RS485_ENABLED) {
set &= ~TIOCM_RTS;
clear &= ~TIOCM_RTS;
}

spin_lock_irqsave(&port->lock, flags);
old = port->mctrl;
port->mctrl = (old & ~clear) | set;
if (old != port->mctrl)
if (old != port->mctrl && !(port->rs485.flags & SER_RS485_ENABLED))
port->ops->set_mctrl(port, port->mctrl);
spin_unlock_irqrestore(&port->lock, flags);
}
@@ -1299,8 +1299,41 @@ static int uart_set_rs485_config(struct uart_port *port,
if (copy_from_user(&rs485, rs485_user, sizeof(*rs485_user)))
return -EFAULT;

/* pick sane settings if the user hasn't */
if (!(rs485.flags & SER_RS485_RTS_ON_SEND) ==
!(rs485.flags & SER_RS485_RTS_AFTER_SEND)) {
dev_warn_ratelimited(port->dev,
"%s (%d): invalid RTS setting, using RTS_ON_SEND instead\n",
port->name, port->line);
rs485.flags |= SER_RS485_RTS_ON_SEND;
rs485.flags &= ~SER_RS485_RTS_AFTER_SEND;
}

if (rs485.delay_rts_before_send > RS485_MAX_RTS_DELAY) {
rs485.delay_rts_before_send = RS485_MAX_RTS_DELAY;
dev_warn_ratelimited(port->dev,
"%s (%d): RTS delay before sending clamped to %u ms\n",
port->name, port->line, rs485.delay_rts_before_send);
}

if (rs485.delay_rts_after_send > RS485_MAX_RTS_DELAY) {
rs485.delay_rts_after_send = RS485_MAX_RTS_DELAY;
dev_warn_ratelimited(port->dev,
"%s (%d): RTS delay after sending clamped to %u ms\n",
port->name, port->line, rs485.delay_rts_after_send);
}
/* Return clean padding area to userspace */
memset(rs485.padding, 0, sizeof(rs485.padding));

spin_lock_irqsave(&port->lock, flags);
ret = port->rs485_config(port, &rs485);
if (!ret) {
port->rs485 = rs485;

/* Reset RTS and other mctrl lines when disabling RS485 */
if (!(rs485.flags & SER_RS485_ENABLED))
port->ops->set_mctrl(port, port->mctrl);
}
spin_unlock_irqrestore(&port->lock, flags);
if (ret)
return ret;
@@ -2273,7 +2306,8 @@ int uart_resume_port(struct uart_driver *drv, struct uart_port *uport)

uart_change_pm(state, UART_PM_STATE_ON);
spin_lock_irq(&uport->lock);
ops->set_mctrl(uport, 0);
if (!(uport->rs485.flags & SER_RS485_ENABLED))
ops->set_mctrl(uport, 0);
spin_unlock_irq(&uport->lock);
if (console_suspend_enabled || !uart_console(uport)) {
/* Protected by port mutex for now */
@@ -2284,7 +2318,10 @@ int uart_resume_port(struct uart_driver *drv, struct uart_port *uport)
if (tty)
uart_change_speed(tty, state, NULL);
spin_lock_irq(&uport->lock);
ops->set_mctrl(uport, uport->mctrl);
if (!(uport->rs485.flags & SER_RS485_ENABLED))
ops->set_mctrl(uport, uport->mctrl);
else
uport->rs485_config(uport, &uport->rs485);
ops->start_tx(uport);
spin_unlock_irq(&uport->lock);
tty_port_set_initialized(port, 1);
@@ -2390,10 +2427,10 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
*/
spin_lock_irqsave(&port->lock, flags);
port->mctrl &= TIOCM_DTR;
if (port->rs485.flags & SER_RS485_ENABLED &&
!(port->rs485.flags & SER_RS485_RTS_AFTER_SEND))
port->mctrl |= TIOCM_RTS;
port->ops->set_mctrl(port, port->mctrl);
if (!(port->rs485.flags & SER_RS485_ENABLED))
port->ops->set_mctrl(port, port->mctrl);
else
port->rs485_config(port, &port->rs485);
spin_unlock_irqrestore(&port->lock, flags);

/*

@@ -388,6 +388,15 @@ static const struct usb_device_id usb_quirk_list[] = {
/* Kingston DataTraveler 3.0 */
{ USB_DEVICE(0x0951, 0x1666), .driver_info = USB_QUIRK_NO_LPM },

/* NVIDIA Jetson devices in Force Recovery mode */
{ USB_DEVICE(0x0955, 0x7018), .driver_info = USB_QUIRK_RESET_RESUME },
{ USB_DEVICE(0x0955, 0x7019), .driver_info = USB_QUIRK_RESET_RESUME },
{ USB_DEVICE(0x0955, 0x7418), .driver_info = USB_QUIRK_RESET_RESUME },
{ USB_DEVICE(0x0955, 0x7721), .driver_info = USB_QUIRK_RESET_RESUME },
{ USB_DEVICE(0x0955, 0x7c18), .driver_info = USB_QUIRK_RESET_RESUME },
{ USB_DEVICE(0x0955, 0x7e19), .driver_info = USB_QUIRK_RESET_RESUME },
{ USB_DEVICE(0x0955, 0x7f21), .driver_info = USB_QUIRK_RESET_RESUME },

/* X-Rite/Gretag-Macbeth Eye-One Pro display colorimeter */
{ USB_DEVICE(0x0971, 0x2000), .driver_info = USB_QUIRK_NO_SET_INTF },


@@ -1261,8 +1261,8 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS;
}

/* always enable Interrupt on Missed ISOC */
trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
if (!no_interrupt && !chain)
trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
break;

case USB_ENDPOINT_XFER_BULK:
@@ -3158,6 +3158,10 @@ static int dwc3_gadget_ep_reclaim_completed_trb(struct dwc3_ep *dep,
if (event->status & DEPEVT_STATUS_SHORT && !chain)
return 1;

if ((trb->ctrl & DWC3_TRB_CTRL_ISP_IMI) &&
DWC3_TRB_SIZE_TRBSTS(trb->size) == DWC3_TRBSTS_MISSED_ISOC)
return 1;

if ((trb->ctrl & DWC3_TRB_CTRL_IOC) ||
(trb->ctrl & DWC3_TRB_CTRL_LST))
return 1;

@@ -313,6 +313,7 @@ int uvcg_queue_enable(struct uvc_video_queue *queue, int enable)

queue->sequence = 0;
queue->buf_used = 0;
queue->flags &= ~UVC_QUEUE_DROP_INCOMPLETE;
} else {
ret = vb2_streamoff(&queue->queue, queue->queue.type);
if (ret < 0)
@@ -338,10 +339,11 @@ int uvcg_queue_enable(struct uvc_video_queue *queue, int enable)
void uvcg_complete_buffer(struct uvc_video_queue *queue,
struct uvc_buffer *buf)
{
if ((queue->flags & UVC_QUEUE_DROP_INCOMPLETE) &&
buf->length != buf->bytesused) {
buf->state = UVC_BUF_STATE_QUEUED;
if (queue->flags & UVC_QUEUE_DROP_INCOMPLETE) {
queue->flags &= ~UVC_QUEUE_DROP_INCOMPLETE;
buf->state = UVC_BUF_STATE_ERROR;
vb2_set_plane_payload(&buf->buf.vb2_buf, 0, 0);
vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_ERROR);
return;
}


@@ -59,6 +59,7 @@ uvc_video_encode_bulk(struct usb_request *req, struct uvc_video *video,
struct uvc_buffer *buf)
{
void *mem = req->buf;
struct uvc_request *ureq = req->context;
int len = video->req_size;
int ret;

@@ -84,13 +85,14 @@ uvc_video_encode_bulk(struct usb_request *req, struct uvc_video *video,
video->queue.buf_used = 0;
buf->state = UVC_BUF_STATE_DONE;
list_del(&buf->queue);
uvcg_complete_buffer(&video->queue, buf);
video->fid ^= UVC_STREAM_FID;
ureq->last_buf = buf;

video->payload_size = 0;
}

if (video->payload_size == video->max_payload_size ||
video->queue.flags & UVC_QUEUE_DROP_INCOMPLETE ||
buf->bytesused == video->queue.buf_used)
video->payload_size = 0;
}
@@ -126,10 +128,10 @@ uvc_video_encode_isoc_sg(struct usb_request *req, struct uvc_video *video,
sg = sg_next(sg);

for_each_sg(sg, iter, ureq->sgt.nents - 1, i) {
if (!len || !buf->sg || !sg_dma_len(buf->sg))
if (!len || !buf->sg || !buf->sg->length)
break;

sg_left = sg_dma_len(buf->sg) - buf->offset;
sg_left = buf->sg->length - buf->offset;
part = min_t(unsigned int, len, sg_left);

sg_set_page(iter, sg_page(buf->sg), part, buf->offset);
@@ -151,7 +153,8 @@ uvc_video_encode_isoc_sg(struct usb_request *req, struct uvc_video *video,
req->length -= len;
video->queue.buf_used += req->length - header_len;

if (buf->bytesused == video->queue.buf_used || !buf->sg) {
if (buf->bytesused == video->queue.buf_used || !buf->sg ||
video->queue.flags & UVC_QUEUE_DROP_INCOMPLETE) {
video->queue.buf_used = 0;
buf->state = UVC_BUF_STATE_DONE;
buf->offset = 0;
@@ -166,6 +169,7 @@ uvc_video_encode_isoc(struct usb_request *req, struct uvc_video *video,
struct uvc_buffer *buf)
{
void *mem = req->buf;
struct uvc_request *ureq = req->context;
int len = video->req_size;
int ret;

@@ -180,12 +184,13 @@ uvc_video_encode_isoc(struct usb_request *req, struct uvc_video *video,

req->length = video->req_size - len;

if (buf->bytesused == video->queue.buf_used) {
if (buf->bytesused == video->queue.buf_used ||
video->queue.flags & UVC_QUEUE_DROP_INCOMPLETE) {
video->queue.buf_used = 0;
buf->state = UVC_BUF_STATE_DONE;
list_del(&buf->queue);
uvcg_complete_buffer(&video->queue, buf);
video->fid ^= UVC_STREAM_FID;
ureq->last_buf = buf;
}
}

@@ -222,6 +227,11 @@ uvc_video_complete(struct usb_ep *ep, struct usb_request *req)
case 0:
break;

case -EXDEV:
uvcg_dbg(&video->uvc->func, "VS request missed xfer.\n");
queue->flags |= UVC_QUEUE_DROP_INCOMPLETE;
break;

case -ESHUTDOWN: /* disconnect from host. */
uvcg_dbg(&video->uvc->func, "VS request cancelled.\n");
uvcg_queue_cancel(queue, 1);

@@ -151,6 +151,7 @@ static void bdc_uspc_disconnected(struct bdc *bdc, bool reinit)
bdc->delayed_status = false;
bdc->reinit = reinit;
bdc->test_mode = false;
usb_gadget_set_state(&bdc->gadget, USB_STATE_NOTATTACHED);
}

/* TNotify wkaeup timer */

@@ -900,15 +900,19 @@ void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id)
if (dev->eps[i].stream_info)
xhci_free_stream_info(xhci,
dev->eps[i].stream_info);
/* Endpoints on the TT/root port lists should have been removed
* when usb_disable_device() was called for the device.
* We can't drop them anyway, because the udev might have gone
* away by this point, and we can't tell what speed it was.
/*
* Endpoints are normally deleted from the bandwidth list when
* endpoints are dropped, before device is freed.
* If host is dying or being removed then endpoints aren't
* dropped cleanly, so delete the endpoint from list here.
* Only applicable for hosts with software bandwidth checking.
*/
if (!list_empty(&dev->eps[i].bw_endpoint_list))
xhci_warn(xhci, "Slot %u endpoint %u "
"not removed from BW list!\n",
slot_id, i);

if (!list_empty(&dev->eps[i].bw_endpoint_list)) {
list_del_init(&dev->eps[i].bw_endpoint_list);
xhci_dbg(xhci, "Slot %u endpoint %u not removed from BW list!\n",
slot_id, i);
}
}
/* If this is a hub, free the TT(s) from the TT list */
xhci_free_tt_info(xhci, dev, slot_id);

Some files were not shown because too many files have changed in this diff