Merge 4.14.200 into android-4.14-stable
Changes in 4.14.200
    af_key: pfkey_dump needs parameter validation
    phy: qcom-qmp: Use correct values for ipq8074 PCIe Gen2 PHY init
    KVM: fix memory leak in kvm_io_bus_unregister_dev()
    kprobes: fix kill kprobe which has been marked as gone
    mm/thp: fix __split_huge_pmd_locked() for migration PMD
    RDMA/ucma: ucma_context reference leak in error path
    hdlc_ppp: add range checks in ppp_cp_parse_cr()
    ip: fix tos reflection in ack and reset packets
    net: ipv6: fix kconfig dependency warning for IPV6_SEG6_HMAC
    tipc: fix shutdown() of connection oriented socket
    tipc: use skb_unshare() instead in tipc_buf_append()
    bnxt_en: Protect bnxt_set_eee() and bnxt_set_pauseparam() with mutex.
    net: phy: Avoid NPD upon phy_detach() when driver is unbound
    net: add __must_check to skb_put_padto()
    ipv4: Update exception handling for multipath routes via same device
    geneve: add transport ports in route lookup for geneve
    serial: 8250: Avoid error message on reprobe
    mm: fix double page fault on arm64 if PTE_AF is cleared
    scsi: aacraid: fix illegal IO beyond last LBA
    m68k: q40: Fix info-leak in rtc_ioctl
    gma/gma500: fix a memory disclosure bug due to uninitialized bytes
    ASoC: kirkwood: fix IRQ error handling
    media: smiapp: Fix error handling at NVM reading
    arch/x86/lib/usercopy_64.c: fix __copy_user_flushcache() cache writeback
    x86/ioapic: Unbreak check_timer()
    ALSA: usb-audio: Add delay quirk for H570e USB headsets
    ALSA: hda/realtek - Couldn't detect Mic if booting with headset plugged
    PM / devfreq: tegra30: Fix integer overflow on CPU's freq max out
    scsi: fnic: fix use after free
    clk/ti/adpll: allocate room for terminating null
    mtd: cfi_cmdset_0002: don't free cfi->cfiq in error path of cfi_amdstd_setup()
    mfd: mfd-core: Protect against NULL call-back function pointer
    tracing: Adding NULL checks for trace_array descriptor pointer
    bcache: fix a lost wake-up problem caused by mca_cannibalize_lock
    RDMA/i40iw: Fix potential use after free
    xfs: fix attr leaf header freemap.size underflow
    RDMA/iw_cgxb4: Fix an error handling path in 'c4iw_connect()'
    mmc: core: Fix size overflow for mmc partitions
    gfs2: clean up iopen glock mess in gfs2_create_inode
    debugfs: Fix !DEBUG_FS debugfs_create_automount
    CIFS: Properly process SMB3 lease breaks
    kernel/sys.c: avoid copying possible padding bytes in copy_to_user
    neigh_stat_seq_next() should increase position index
    rt_cpu_seq_next should increase position index
    seqlock: Require WRITE_ONCE surrounding raw_seqcount_barrier
    media: ti-vpe: cal: Restrict DMA to avoid memory corruption
    ACPI: EC: Reference count query handlers under lock
    dmaengine: zynqmp_dma: fix burst length configuration
    powerpc/eeh: Only dump stack once if an MMIO loop is detected
    tracing: Set kernel_stack's caller size properly
    ar5523: Add USB ID of SMCWUSBT-G2 wireless adapter
    selftests/ftrace: fix glob selftest
    tools/power/x86/intel_pstate_tracer: changes for python 3 compatibility
    Bluetooth: Fix refcount use-after-free issue
    mm: pagewalk: fix termination condition in walk_pte_range()
    Bluetooth: prefetch channel before killing sock
    KVM: fix overflow of zero page refcount with ksm running
    ALSA: hda: Clear RIRB status before reading WP
    skbuff: fix a data race in skb_queue_len()
    audit: CONFIG_CHANGE don't log internal bookkeeping as an event
    selinux: sel_avc_get_stat_idx should increase position index
    scsi: lpfc: Fix RQ buffer leakage when no IOCBs available
    scsi: lpfc: Fix coverity errors in fmdi attribute handling
    drm/omap: fix possible object reference leak
    perf test: Fix test trace+probe_vfs_getname.sh on s390
    RDMA/rxe: Fix configuration of atomic queue pair attributes
    KVM: x86: fix incorrect comparison in trace event
    media: staging/imx: Missing assignment in imx_media_capture_device_register()
    x86/pkeys: Add check for pkey "overflow"
    bpf: Remove recursion prevention from rcu free callback
    dmaengine: tegra-apb: Prevent race conditions on channel's freeing
    media: go7007: Fix URB type for interrupt handling
    Bluetooth: guard against controllers sending zero'd events
    timekeeping: Prevent 32bit truncation in scale64_check_overflow()
    ext4: fix a data race at inode->i_disksize
    mm: avoid data corruption on CoW fault into PFN-mapped VMA
    drm/amdgpu: increase atombios cmd timeout
    ath10k: use kzalloc to read for ath10k_sdio_hif_diag_read
    scsi: aacraid: Disabling TM path and only processing IOP reset
    Bluetooth: L2CAP: handle l2cap config request during open state
    media: tda10071: fix unsigned sign extension overflow
    xfs: don't ever return a stale pointer from __xfs_dir3_free_read
    tpm: ibmvtpm: Wait for buffer to be set before proceeding
    rtc: ds1374: fix possible race condition
    tracing: Use address-of operator on section symbols
    serial: 8250_port: Don't service RX FIFO if throttled
    serial: 8250_omap: Fix sleeping function called from invalid context during probe
    serial: 8250: 8250_omap: Terminate DMA before pushing data on RX timeout
    perf cpumap: Fix snprintf overflow check
    cpufreq: powernv: Fix frame-size-overflow in powernv_cpufreq_work_fn
    tools: gpio-hammer: Avoid potential overflow in main
    RDMA/rxe: Set sys_image_guid to be aligned with HW IB devices
    SUNRPC: Fix a potential buffer overflow in 'svc_print_xprts()'
    svcrdma: Fix leak of transport addresses
    ubifs: Fix out-of-bounds memory access caused by abnormal value of node_len
    ALSA: usb-audio: Fix case when USB MIDI interface has more than one extra endpoint descriptor
    NFS: Fix races nfs_page_group_destroy() vs nfs_destroy_unlinked_subrequests()
    mm/kmemleak.c: use address-of operator on section symbols
    mm/filemap.c: clear page error before actual read
    mm/vmscan.c: fix data races using kswapd_classzone_idx
    mm/mmap.c: initialize align_offset explicitly for vm_unmapped_area
    scsi: qedi: Fix termination timeouts in session logout
    serial: uartps: Wait for tx_empty in console setup
    KVM: Remove CREATE_IRQCHIP/SET_PIT2 race
    bdev: Reduce time holding bd_mutex in sync in blkdev_close()
    drivers: char: tlclk.c: Avoid data race between init and interrupt handler
    staging:r8188eu: avoid skb_clone for amsdu to msdu conversion
    sparc64: vcc: Fix error return code in vcc_probe()
    arm64: cpufeature: Relax checks for AArch32 support at EL[0-2]
    dt-bindings: sound: wm8994: Correct required supplies based on actual implementaion
    atm: fix a memory leak of vcc->user_back
    power: supply: max17040: Correct voltage reading
    phy: samsung: s5pv210-usb2: Add delay after reset
    Bluetooth: Handle Inquiry Cancel error after Inquiry Complete
    USB: EHCI: ehci-mv: fix error handling in mv_ehci_probe()
    tty: serial: samsung: Correct clock selection logic
    ALSA: hda: Fix potential race in unsol event handler
    powerpc/traps: Make unrecoverable NMIs die instead of panic
    fuse: don't check refcount after stealing page
    USB: EHCI: ehci-mv: fix less than zero comparison of an unsigned int
    arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register
    e1000: Do not perform reset in reset_task if we are already down
    drm/nouveau/debugfs: fix runtime pm imbalance on error
    printk: handle blank console arguments passed in.
    usb: dwc3: Increase timeout for CmdAct cleared by device controller
    btrfs: don't force read-only after error in drop snapshot
    vfio/pci: fix memory leaks of eventfd ctx
    perf util: Fix memory leak of prefix_if_not_in
    perf kcore_copy: Fix module map when there are no modules loaded
    mtd: rawnand: omap_elm: Fix runtime PM imbalance on error
    ceph: fix potential race in ceph_check_caps
    mm/swap_state: fix a data race in swapin_nr_pages
    rapidio: avoid data race between file operation callbacks and mport_cdev_add().
    mtd: parser: cmdline: Support MTD names containing one or more colons
    x86/speculation/mds: Mark mds_user_clear_cpu_buffers() __always_inline
    vfio/pci: Clear error and request eventfd ctx after releasing
    cifs: Fix double add page to memcg when cifs_readpages
    scsi: libfc: Handling of extra kref
    scsi: libfc: Skip additional kref updating work event
    selftests/x86/syscall_nt: Clear weird flags after each test
    vfio/pci: fix racy on error and request eventfd ctx
    btrfs: qgroup: fix data leak caused by race between writeback and truncate
    s390/init: add missing __init annotations
    i2c: core: Call i2c_acpi_install_space_handler() before i2c_acpi_register_devices()
    objtool: Fix noreturn detection for ignored functions
    ieee802154: fix one possible memleak in ca8210_dev_com_init
    ieee802154/adf7242: check status of adf7242_read_reg
    clocksource/drivers/h8300_timer8: Fix wrong return value in h8300_8timer_init()
    mwifiex: Increase AES key storage size to 256 bits
    batman-adv: bla: fix type misuse for backbone_gw hash indexing
    atm: eni: fix the missed pci_disable_device() for eni_init_one()
    batman-adv: mcast/TT: fix wrongly dropped or rerouted packets
    mac802154: tx: fix use-after-free
    drm/vc4/vc4_hdmi: fill ASoC card owner
    net: qed: RDMA personality shouldn't fail VF load
    batman-adv: Add missing include for in_interrupt()
    batman-adv: mcast: fix duplicate mcast packets in BLA backbone from mesh
    ALSA: asihpi: fix iounmap in error handler
    MIPS: Add the missing 'CPU_1074K' into __get_cpu_type()
    s390/dasd: Fix zero write for FBA devices
    kprobes: Fix to check probe enabled before disarm_kprobe_ftrace()
    mm, THP, swap: fix allocating cluster for swapfile by mistake
    lib/string.c: implement stpcpy
    ata: define AC_ERR_OK
    ata: make qc_prep return ata_completion_errors
    ata: sata_mv, avoid trigerrable BUG_ON
    Linux 4.14.200

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I3d3049dca196c46cb6b2a66d60a5a6a3a099efbb
@@ -14,9 +14,15 @@ Required properties:
 
 - #gpio-cells : Must be 2. The first cell is the pin number and the
   second cell is used to specify optional parameters (currently unused).
 
-- AVDD2-supply, DBVDD1-supply, DBVDD2-supply, DBVDD3-supply, CPVDD-supply,
-  SPKVDD1-supply, SPKVDD2-supply : power supplies for the device, as covered
-  in Documentation/devicetree/bindings/regulator/regulator.txt
+- power supplies for the device, as covered in
+  Documentation/devicetree/bindings/regulator/regulator.txt, depending
+  on compatible:
+  - for wlf,wm1811 and wlf,wm8958:
+    AVDD1-supply, AVDD2-supply, DBVDD1-supply, DBVDD2-supply, DBVDD3-supply,
+    DCVDD-supply, CPVDD-supply, SPKVDD1-supply, SPKVDD2-supply
+  - for wlf,wm8994:
+    AVDD1-supply, AVDD2-supply, DBVDD-supply, DCVDD-supply, CPVDD-supply,
+    SPKVDD1-supply, SPKVDD2-supply
 
 Optional properties:
@@ -68,11 +74,11 @@ codec: wm8994@1a {
 
         lineout1-se;
 
+        AVDD1-supply = <&regulator>;
         AVDD2-supply = <&regulator>;
         CPVDD-supply = <&regulator>;
-        DBVDD1-supply = <&regulator>;
-        DBVDD2-supply = <&regulator>;
-        DBVDD3-supply = <&regulator>;
+        DBVDD-supply = <&regulator>;
+        DCVDD-supply = <&regulator>;
         SPKVDD1-supply = <&regulator>;
         SPKVDD2-supply = <&regulator>;
 };
@@ -251,7 +251,7 @@ High-level taskfile hooks
 
 ::
 
-    void (*qc_prep) (struct ata_queued_cmd *qc);
+    enum ata_completion_errors (*qc_prep) (struct ata_queued_cmd *qc);
     int (*qc_issue) (struct ata_queued_cmd *qc);
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 14
-SUBLEVEL = 199
+SUBLEVEL = 200
 EXTRAVERSION =
 NAME = Petit Gorille
@@ -141,11 +141,10 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
 	/* Linux doesn't care about the EL3 */
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
 	ARM64_FTR_END,
 };
@@ -278,7 +277,7 @@ static const struct arm64_ftr_bits ftr_id_pfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_dfr0[] = {
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0),
+	/* [31:28] TraceFilt */
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 24, 4, 0xf),	/* PerfMon */
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 20, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 0),
@@ -632,9 +631,6 @@ void update_cpu_features(int cpu,
 	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR2_EL1, cpu,
 				      info->reg_id_aa64mmfr2, boot->reg_id_aa64mmfr2);
 
-	/*
-	 * EL3 is not our concern.
-	 */
 	taint |= check_update_ftr_reg(SYS_ID_AA64PFR0_EL1, cpu,
 				      info->reg_id_aa64pfr0, boot->reg_id_aa64pfr0);
 	taint |= check_update_ftr_reg(SYS_ID_AA64PFR1_EL1, cpu,
@@ -303,6 +303,7 @@ static int q40_get_rtc_pll(struct rtc_pll_info *pll)
 {
 	int tmp = Q40_RTC_CTRL;
 
+	pll->pll_ctrl = 0;
 	pll->pll_value = tmp & Q40_RTC_PLL_MASK;
 	if (tmp & Q40_RTC_PLL_SIGN)
 		pll->pll_value = -pll->pll_value;
@@ -47,6 +47,7 @@ static inline int __pure __get_cpu_type(const int cpu_type)
 	case CPU_34K:
 	case CPU_1004K:
 	case CPU_74K:
+	case CPU_1074K:
 	case CPU_M14KC:
 	case CPU_M14KEC:
 	case CPU_INTERAPTIV:
@@ -506,7 +506,7 @@ int eeh_dev_check_failure(struct eeh_dev *edev)
 		rc = 1;
 		if (pe->state & EEH_PE_ISOLATED) {
 			pe->check_count++;
-			if (pe->check_count % EEH_MAX_FAILS == 0) {
+			if (pe->check_count == EEH_MAX_FAILS) {
 				dn = pci_device_to_OF_node(dev);
 				if (dn)
 					location = of_get_property(dn, "ibm,loc-code",
@@ -357,11 +357,11 @@ out:
 #ifdef CONFIG_PPC_BOOK3S_64
 	BUG_ON(get_paca()->in_nmi == 0);
 	if (get_paca()->in_nmi > 1)
-		nmi_panic(regs, "Unrecoverable nested System Reset");
+		die("Unrecoverable nested System Reset", regs, SIGABRT);
 #endif
 	/* Must die if the interrupt is not recoverable */
 	if (!(regs->msr & MSR_RI))
-		nmi_panic(regs, "Unrecoverable System Reset");
+		die("Unrecoverable System Reset", regs, SIGABRT);
 
 	if (!nested)
 		nmi_exit();
@@ -701,7 +701,7 @@ void machine_check_exception(struct pt_regs *regs)
 
 	/* Must die if the interrupt is not recoverable */
 	if (!(regs->msr & MSR_RI))
-		nmi_panic(regs, "Unrecoverable Machine check");
+		die("Unrecoverable Machine check", regs, SIGBUS);
 
 	return;
@@ -540,7 +540,7 @@ static struct notifier_block kdump_mem_nb = {
 /*
  * Make sure that the area behind memory_end is protected
  */
-static void reserve_memory_end(void)
+static void __init reserve_memory_end(void)
 {
 #ifdef CONFIG_CRASH_DUMP
 	if (ipl_info.type == IPL_TYPE_FCP_DUMP &&
@@ -558,7 +558,7 @@ static void reserve_memory_end(void)
 /*
  * Make sure that oldmem, where the dump is stored, is protected
  */
-static void reserve_oldmem(void)
+static void __init reserve_oldmem(void)
 {
 #ifdef CONFIG_CRASH_DUMP
 	if (OLDMEM_BASE)
@@ -570,7 +570,7 @@ static void reserve_oldmem(void)
 /*
  * Make sure that oldmem, where the dump is stored, is protected
  */
-static void remove_oldmem(void)
+static void __init remove_oldmem(void)
 {
 #ifdef CONFIG_CRASH_DUMP
 	if (OLDMEM_BASE)
@@ -330,7 +330,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
  * combination with microcode which triggers a CPU buffer flush when the
  * instruction is executed.
  */
-static inline void mds_clear_cpu_buffers(void)
+static __always_inline void mds_clear_cpu_buffers(void)
 {
 	static const u16 ds = __KERNEL_DS;
 
@@ -351,7 +351,7 @@ static inline void mds_clear_cpu_buffers(void)
  *
  * Clear CPU buffers if the corresponding static key is enabled
  */
-static inline void mds_user_clear_cpu_buffers(void)
+static __always_inline void mds_user_clear_cpu_buffers(void)
 {
 	if (static_branch_likely(&mds_user_clear))
 		mds_clear_cpu_buffers();
@@ -4,6 +4,11 @@
 
 #define ARCH_DEFAULT_PKEY	0
 
+/*
+ * If more than 16 keys are ever supported, a thorough audit
+ * will be necessary to ensure that the types that store key
+ * numbers and masks have sufficient capacity.
+ */
 #define arch_max_pkey() (boot_cpu_has(X86_FEATURE_OSPKE) ? 16 : 1)
 
 extern int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
@@ -2160,6 +2160,7 @@ static inline void __init check_timer(void)
 	legacy_pic->init(0);
 	legacy_pic->make_irq(0);
 	apic_write(APIC_LVT0, APIC_DM_EXTINT);
+	legacy_pic->unmask(0);
 
 	unlock_ExtINT_logic();
@@ -907,8 +907,6 @@ const void *get_xsave_field_ptr(int xsave_state)
 
 #ifdef CONFIG_ARCH_HAS_PKEYS
 
-#define NR_VALID_PKRU_BITS (CONFIG_NR_PROTECTION_KEYS * 2)
-#define PKRU_VALID_MASK (NR_VALID_PKRU_BITS - 1)
 /*
  * This will go out and modify PKRU register to set the access
  * rights for @pkey to @init_val.
@@ -927,6 +925,13 @@ int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
 	if (!boot_cpu_has(X86_FEATURE_OSPKE))
 		return -EINVAL;
 
+	/*
+	 * This code should only be called with valid 'pkey'
+	 * values originating from in-kernel users. Complain
+	 * if a bad value is observed.
+	 */
+	WARN_ON_ONCE(pkey >= arch_max_pkey());
+
 	/* Set the bits we need in PKRU: */
 	if (init_val & PKEY_DISABLE_ACCESS)
 		new_pkru_bits |= PKRU_AD_BIT;
@@ -339,7 +339,7 @@ TRACE_EVENT(
 		/* These depend on page entry type, so compute them now. */
 		__field(bool, r)
 		__field(bool, x)
-		__field(u8, u)
+		__field(signed char, u)
 	),
 
 	TP_fast_assign(
@@ -4370,10 +4370,13 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		r = -EFAULT;
 		if (copy_from_user(&u.ps, argp, sizeof u.ps))
 			goto out;
+		mutex_lock(&kvm->lock);
 		r = -ENXIO;
 		if (!kvm->arch.vpit)
-			goto out;
+			goto set_pit_out;
 		r = kvm_vm_ioctl_set_pit(kvm, &u.ps);
+set_pit_out:
+		mutex_unlock(&kvm->lock);
 		break;
 	}
 	case KVM_GET_PIT2: {
@@ -4393,10 +4396,13 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		r = -EFAULT;
 		if (copy_from_user(&u.ps2, argp, sizeof(u.ps2)))
 			goto out;
+		mutex_lock(&kvm->lock);
 		r = -ENXIO;
 		if (!kvm->arch.vpit)
-			goto out;
+			goto set_pit2_out;
 		r = kvm_vm_ioctl_set_pit2(kvm, &u.ps2);
+set_pit2_out:
+		mutex_unlock(&kvm->lock);
 		break;
 	}
 	case KVM_REINJECT_CONTROL: {
@@ -118,7 +118,7 @@ long __copy_user_flushcache(void *dst, const void __user *src, unsigned size)
 	 */
 	if (size < 8) {
 		if (!IS_ALIGNED(dest, 4) || size != 4)
-			clean_cache_range(dst, 1);
+			clean_cache_range(dst, size);
 	} else {
 		if (!IS_ALIGNED(dest, 8)) {
 			dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
@@ -1062,29 +1062,21 @@ void acpi_ec_unblock_transactions(void)
 /* --------------------------------------------------------------------------
                                Event Management
    -------------------------------------------------------------------------- */
 static struct acpi_ec_query_handler *
-acpi_ec_get_query_handler(struct acpi_ec_query_handler *handler)
-{
-	if (handler)
-		kref_get(&handler->kref);
-	return handler;
-}
-
-static struct acpi_ec_query_handler *
 acpi_ec_get_query_handler_by_value(struct acpi_ec *ec, u8 value)
 {
 	struct acpi_ec_query_handler *handler;
-	bool found = false;
 
 	mutex_lock(&ec->mutex);
 	list_for_each_entry(handler, &ec->list, node) {
 		if (value == handler->query_bit) {
-			found = true;
-			break;
+			kref_get(&handler->kref);
+			mutex_unlock(&ec->mutex);
+			return handler;
 		}
 	}
 	mutex_unlock(&ec->mutex);
-	return found ? acpi_ec_get_query_handler(handler) : NULL;
+	return NULL;
 }
 
 static void acpi_ec_query_handler_release(struct kref *kref)
@@ -72,7 +72,7 @@ struct acard_sg {
 	__le32 size;	/* bit 31 (EOT) max==0x10000 (64k) */
 };
 
-static void acard_ahci_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors acard_ahci_qc_prep(struct ata_queued_cmd *qc);
 static bool acard_ahci_qc_fill_rtf(struct ata_queued_cmd *qc);
 static int acard_ahci_port_start(struct ata_port *ap);
 static int acard_ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
@@ -257,7 +257,7 @@ static unsigned int acard_ahci_fill_sg(struct ata_queued_cmd *qc, void *cmd_tbl)
 	return si;
 }
 
-static void acard_ahci_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors acard_ahci_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
 	struct ahci_port_priv *pp = ap->private_data;
@@ -295,6 +295,8 @@ static void acard_ahci_qc_prep(struct ata_queued_cmd *qc)
 		opts |= AHCI_CMD_ATAPI | AHCI_CMD_PREFETCH;
 
 	ahci_fill_cmd_slot(pp, qc->tag, opts);
+
+	return AC_ERR_OK;
 }
 
 static bool acard_ahci_qc_fill_rtf(struct ata_queued_cmd *qc)
@@ -73,7 +73,7 @@ static int ahci_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val);
 static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc);
 static int ahci_port_start(struct ata_port *ap);
 static void ahci_port_stop(struct ata_port *ap);
-static void ahci_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors ahci_qc_prep(struct ata_queued_cmd *qc);
 static int ahci_pmp_qc_defer(struct ata_queued_cmd *qc);
 static void ahci_freeze(struct ata_port *ap);
 static void ahci_thaw(struct ata_port *ap);
@@ -1626,7 +1626,7 @@ static int ahci_pmp_qc_defer(struct ata_queued_cmd *qc)
 		return sata_pmp_qc_defer_cmd_switch(qc);
 }
 
-static void ahci_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors ahci_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
 	struct ahci_port_priv *pp = ap->private_data;
@@ -1662,6 +1662,8 @@ static void ahci_qc_prep(struct ata_queued_cmd *qc)
 		opts |= AHCI_CMD_ATAPI | AHCI_CMD_PREFETCH;
 
 	ahci_fill_cmd_slot(pp, qc->tag, opts);
+
+	return AC_ERR_OK;
 }
 
 static void ahci_fbs_dec_intr(struct ata_port *ap)
@@ -4986,7 +4986,10 @@ int ata_std_qc_defer(struct ata_queued_cmd *qc)
 	return ATA_DEFER_LINK;
 }
 
-void ata_noop_qc_prep(struct ata_queued_cmd *qc) { }
+enum ata_completion_errors ata_noop_qc_prep(struct ata_queued_cmd *qc)
+{
+	return AC_ERR_OK;
+}
 
 /**
  *	ata_sg_init - Associate command with scatter-gather table.
@@ -5439,7 +5442,9 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
 		return;
 	}
 
-	ap->ops->qc_prep(qc);
+	qc->err_mask |= ap->ops->qc_prep(qc);
+	if (unlikely(qc->err_mask))
+		goto err;
 	trace_ata_qc_issue(qc);
 	qc->err_mask |= ap->ops->qc_issue(qc);
 	if (unlikely(qc->err_mask))
@@ -2725,12 +2725,14 @@ static void ata_bmdma_fill_sg_dumb(struct ata_queued_cmd *qc)
  *	LOCKING:
  *	spin_lock_irqsave(host lock)
  */
-void ata_bmdma_qc_prep(struct ata_queued_cmd *qc)
+enum ata_completion_errors ata_bmdma_qc_prep(struct ata_queued_cmd *qc)
 {
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 
 	ata_bmdma_fill_sg(qc);
+
+	return AC_ERR_OK;
 }
 EXPORT_SYMBOL_GPL(ata_bmdma_qc_prep);
 
@@ -2743,12 +2745,14 @@ EXPORT_SYMBOL_GPL(ata_bmdma_qc_prep);
  *	LOCKING:
  *	spin_lock_irqsave(host lock)
  */
-void ata_bmdma_dumb_qc_prep(struct ata_queued_cmd *qc)
+enum ata_completion_errors ata_bmdma_dumb_qc_prep(struct ata_queued_cmd *qc)
 {
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 
 	ata_bmdma_fill_sg_dumb(qc);
+
+	return AC_ERR_OK;
 }
 EXPORT_SYMBOL_GPL(ata_bmdma_dumb_qc_prep);
@@ -507,7 +507,7 @@ static int pata_macio_cable_detect(struct ata_port *ap)
 	return ATA_CBL_PATA40;
 }
 
-static void pata_macio_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors pata_macio_qc_prep(struct ata_queued_cmd *qc)
 {
 	unsigned int write = (qc->tf.flags & ATA_TFLAG_WRITE);
 	struct ata_port *ap = qc->ap;
@@ -520,7 +520,7 @@ static void pata_macio_qc_prep(struct ata_queued_cmd *qc)
 		   __func__, qc, qc->flags, write, qc->dev->devno);
 
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 
 	table = (struct dbdma_cmd *) priv->dma_table_cpu;
@@ -565,6 +565,8 @@ static void pata_macio_qc_prep(struct ata_queued_cmd *qc)
 	table->command = cpu_to_le16(DBDMA_STOP);
 
 	dev_dbgdma(priv->dev, "%s: %d DMA list entries\n", __func__, pi);
+
+	return AC_ERR_OK;
 }
@@ -59,25 +59,27 @@ static void pxa_ata_dma_irq(void *d)
 /*
  * Prepare taskfile for submission.
  */
-static void pxa_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors pxa_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct pata_pxa_data *pd = qc->ap->private_data;
 	struct dma_async_tx_descriptor *tx;
 	enum dma_transfer_direction dir;
 
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 
 	dir = (qc->dma_dir == DMA_TO_DEVICE ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM);
 	tx = dmaengine_prep_slave_sg(pd->dma_chan, qc->sg, qc->n_elem, dir,
 				     DMA_PREP_INTERRUPT);
 	if (!tx) {
 		ata_dev_err(qc->dev, "prep_slave_sg() failed\n");
-		return;
+		return AC_ERR_OK;
 	}
 	tx->callback = pxa_ata_dma_irq;
 	tx->callback_param = pd;
 	pd->dma_cookie = dmaengine_submit(tx);
+
+	return AC_ERR_OK;
 }
 
 /*
@@ -132,7 +132,7 @@ static int adma_ata_init_one(struct pci_dev *pdev,
 				const struct pci_device_id *ent);
 static int adma_port_start(struct ata_port *ap);
 static void adma_port_stop(struct ata_port *ap);
-static void adma_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors adma_qc_prep(struct ata_queued_cmd *qc);
 static unsigned int adma_qc_issue(struct ata_queued_cmd *qc);
 static int adma_check_atapi_dma(struct ata_queued_cmd *qc);
 static void adma_freeze(struct ata_port *ap);
@@ -311,7 +311,7 @@ static int adma_fill_sg(struct ata_queued_cmd *qc)
 	return i;
 }
 
-static void adma_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors adma_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct adma_port_priv *pp = qc->ap->private_data;
 	u8 *buf = pp->pkt;
@@ -322,7 +322,7 @@ static void adma_qc_prep(struct ata_queued_cmd *qc)
 
 	adma_enter_reg_mode(qc->ap);
 	if (qc->tf.protocol != ATA_PROT_DMA)
-		return;
+		return AC_ERR_OK;
 
 	buf[i++] = 0;	/* Response flags */
 	buf[i++] = 0;	/* reserved */
@@ -387,6 +387,7 @@ static void adma_qc_prep(struct ata_queued_cmd *qc)
 			printk("%s\n", obuf);
 	}
 #endif
+	return AC_ERR_OK;
 }
 
 static inline void adma_packet_start(struct ata_queued_cmd *qc)
@@ -513,7 +513,7 @@ static unsigned int sata_fsl_fill_sg(struct ata_queued_cmd *qc, void *cmd_desc,
 	return num_prde;
 }
 
-static void sata_fsl_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors sata_fsl_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
 	struct sata_fsl_port_priv *pp = ap->private_data;
@@ -559,6 +559,8 @@ static void sata_fsl_qc_prep(struct ata_queued_cmd *qc)
 
 	VPRINTK("SATA FSL : xx_qc_prep, di = 0x%x, ttl = %d, num_prde = %d\n",
 		desc_info, ttl_dwords, num_prde);
+
+	return AC_ERR_OK;
 }
 
 static unsigned int sata_fsl_qc_issue(struct ata_queued_cmd *qc)
@@ -472,7 +472,7 @@ static void inic_fill_sg(struct inic_prd *prd, struct ata_queued_cmd *qc)
 	prd[-1].flags |= PRD_END;
 }
 
-static void inic_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors inic_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct inic_port_priv *pp = qc->ap->private_data;
 	struct inic_pkt *pkt = pp->pkt;
@@ -532,6 +532,8 @@ static void inic_qc_prep(struct ata_queued_cmd *qc)
 	inic_fill_sg(prd, qc);
 
 	pp->cpb_tbl[0] = pp->pkt_dma;
+
+	return AC_ERR_OK;
 }
 
 static unsigned int inic_qc_issue(struct ata_queued_cmd *qc)
@@ -605,8 +605,8 @@ static int mv5_scr_write(struct ata_link *link, unsigned int sc_reg_in, u32 val)
 static int mv_port_start(struct ata_port *ap);
 static void mv_port_stop(struct ata_port *ap);
 static int mv_qc_defer(struct ata_queued_cmd *qc);
-static void mv_qc_prep(struct ata_queued_cmd *qc);
-static void mv_qc_prep_iie(struct ata_queued_cmd *qc);
+static enum ata_completion_errors mv_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors mv_qc_prep_iie(struct ata_queued_cmd *qc);
 static unsigned int mv_qc_issue(struct ata_queued_cmd *qc);
 static int mv_hardreset(struct ata_link *link, unsigned int *class,
 			unsigned long deadline);
@@ -2044,7 +2044,7 @@ static void mv_rw_multi_errata_sata24(struct ata_queued_cmd *qc)
  * LOCKING:
  *	Inherited from caller.
  */
-static void mv_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors mv_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
 	struct mv_port_priv *pp = ap->private_data;
@@ -2056,15 +2056,15 @@ static void mv_qc_prep(struct ata_queued_cmd *qc)
 	switch (tf->protocol) {
 	case ATA_PROT_DMA:
 		if (tf->command == ATA_CMD_DSM)
-			return;
+			return AC_ERR_OK;
 		/* fall-thru */
 	case ATA_PROT_NCQ:
 		break;	/* continue below */
 	case ATA_PROT_PIO:
 		mv_rw_multi_errata_sata24(qc);
-		return;
+		return AC_ERR_OK;
 	default:
-		return;
+		return AC_ERR_OK;
 	}
 
 	/* Fill in command request block
@@ -2111,12 +2111,10 @@ static void mv_qc_prep(struct ata_queued_cmd *qc)
	 * non-NCQ mode are: [RW] STREAM DMA and W DMA FUA EXT, none
	 * of which are defined/used by Linux.  If we get here, this
	 * driver needs work.
-	 *
-	 * FIXME: modify libata to give qc_prep a return value and
-	 * return error here.
	 */
-	BUG_ON(tf->command);
-	break;
+	ata_port_err(ap, "%s: unsupported command: %.2x\n", __func__,
+			tf->command);
+	return AC_ERR_INVALID;
 	}
 	mv_crqb_pack_cmd(cw++, tf->nsect, ATA_REG_NSECT, 0);
 	mv_crqb_pack_cmd(cw++, tf->hob_lbal, ATA_REG_LBAL, 0);
@@ -2129,8 +2127,10 @@ static void mv_qc_prep(struct ata_queued_cmd *qc)
 	mv_crqb_pack_cmd(cw++, tf->command, ATA_REG_CMD, 1);	/* last */
 
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 	mv_fill_sg(qc);
+
+	return AC_ERR_OK;
 }
 
 /**
@@ -2145,7 +2145,7 @@ static void mv_qc_prep(struct ata_queued_cmd *qc)
  * LOCKING:
  *	Inherited from caller.
  */
-static void mv_qc_prep_iie(struct ata_queued_cmd *qc)
+static enum ata_completion_errors mv_qc_prep_iie(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
 	struct mv_port_priv *pp = ap->private_data;
@@ -2156,9 +2156,9 @@ static void mv_qc_prep_iie(struct ata_queued_cmd *qc)
 
 	if ((tf->protocol != ATA_PROT_DMA) &&
 	    (tf->protocol != ATA_PROT_NCQ))
-		return;
+		return AC_ERR_OK;
 	if (tf->command == ATA_CMD_DSM)
-		return;	/* use bmdma for this */
+		return AC_ERR_OK;	/* use bmdma for this */
 
 	/* Fill in Gen IIE command request block */
 	if (!(tf->flags & ATA_TFLAG_WRITE))
@@ -2199,8 +2199,10 @@ static void mv_qc_prep_iie(struct ata_queued_cmd *qc)
 		);
 
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 	mv_fill_sg(qc);
+
+	return AC_ERR_OK;
 }
 
 /**

@@ -313,7 +313,7 @@ static void nv_ck804_freeze(struct ata_port *ap);
 static void nv_ck804_thaw(struct ata_port *ap);
 static int nv_adma_slave_config(struct scsi_device *sdev);
 static int nv_adma_check_atapi_dma(struct ata_queued_cmd *qc);
-static void nv_adma_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors nv_adma_qc_prep(struct ata_queued_cmd *qc);
 static unsigned int nv_adma_qc_issue(struct ata_queued_cmd *qc);
 static irqreturn_t nv_adma_interrupt(int irq, void *dev_instance);
 static void nv_adma_irq_clear(struct ata_port *ap);
@@ -335,7 +335,7 @@ static void nv_mcp55_freeze(struct ata_port *ap);
 static void nv_swncq_error_handler(struct ata_port *ap);
 static int nv_swncq_slave_config(struct scsi_device *sdev);
 static int nv_swncq_port_start(struct ata_port *ap);
-static void nv_swncq_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors nv_swncq_qc_prep(struct ata_queued_cmd *qc);
 static void nv_swncq_fill_sg(struct ata_queued_cmd *qc);
 static unsigned int nv_swncq_qc_issue(struct ata_queued_cmd *qc);
 static void nv_swncq_irq_clear(struct ata_port *ap, u16 fis);
@@ -1382,7 +1382,7 @@ static int nv_adma_use_reg_mode(struct ata_queued_cmd *qc)
 	return 1;
 }
 
-static void nv_adma_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors nv_adma_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct nv_adma_port_priv *pp = qc->ap->private_data;
 	struct nv_adma_cpb *cpb = &pp->cpb[qc->tag];
@@ -1394,7 +1394,7 @@ static void nv_adma_qc_prep(struct ata_queued_cmd *qc)
 		   (qc->flags & ATA_QCFLAG_DMAMAP));
 		nv_adma_register_mode(qc->ap);
 		ata_bmdma_qc_prep(qc);
-		return;
+		return AC_ERR_OK;
 	}
 
 	cpb->resp_flags = NV_CPB_RESP_DONE;
@@ -1426,6 +1426,8 @@ static void nv_adma_qc_prep(struct ata_queued_cmd *qc)
 	cpb->ctl_flags = ctl_flags;
 	wmb();
 	cpb->resp_flags = 0;
+
+	return AC_ERR_OK;
 }
 
 static unsigned int nv_adma_qc_issue(struct ata_queued_cmd *qc)
@@ -1989,17 +1991,19 @@ static int nv_swncq_port_start(struct ata_port *ap)
 	return 0;
 }
 
-static void nv_swncq_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors nv_swncq_qc_prep(struct ata_queued_cmd *qc)
 {
 	if (qc->tf.protocol != ATA_PROT_NCQ) {
 		ata_bmdma_qc_prep(qc);
-		return;
+		return AC_ERR_OK;
 	}
 
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 
 	nv_swncq_fill_sg(qc);
+
+	return AC_ERR_OK;
 }
 
 static void nv_swncq_fill_sg(struct ata_queued_cmd *qc)

@@ -155,7 +155,7 @@ static int pdc_sata_scr_write(struct ata_link *link, unsigned int sc_reg, u32 va
 static int pdc_ata_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
 static int pdc_common_port_start(struct ata_port *ap);
 static int pdc_sata_port_start(struct ata_port *ap);
-static void pdc_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors pdc_qc_prep(struct ata_queued_cmd *qc);
 static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
 static void pdc_exec_command_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
 static int pdc_check_atapi_dma(struct ata_queued_cmd *qc);
@@ -649,7 +649,7 @@ static void pdc_fill_sg(struct ata_queued_cmd *qc)
 	prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
 }
 
-static void pdc_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors pdc_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct pdc_port_priv *pp = qc->ap->private_data;
 	unsigned int i;
@@ -681,6 +681,8 @@ static void pdc_qc_prep(struct ata_queued_cmd *qc)
 	default:
 		break;
 	}
+
+	return AC_ERR_OK;
 }
 
 static int pdc_is_sataii_tx4(unsigned long flags)

@@ -116,7 +116,7 @@ static int qs_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val);
 static int qs_ata_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
 static int qs_port_start(struct ata_port *ap);
 static void qs_host_stop(struct ata_host *host);
-static void qs_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors qs_qc_prep(struct ata_queued_cmd *qc);
 static unsigned int qs_qc_issue(struct ata_queued_cmd *qc);
 static int qs_check_atapi_dma(struct ata_queued_cmd *qc);
 static void qs_freeze(struct ata_port *ap);
@@ -276,7 +276,7 @@ static unsigned int qs_fill_sg(struct ata_queued_cmd *qc)
 	return si;
 }
 
-static void qs_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors qs_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct qs_port_priv *pp = qc->ap->private_data;
 	u8 dflags = QS_DF_PORD, *buf = pp->pkt;
@@ -288,7 +288,7 @@ static void qs_qc_prep(struct ata_queued_cmd *qc)
 
 	qs_enter_reg_mode(qc->ap);
 	if (qc->tf.protocol != ATA_PROT_DMA)
-		return;
+		return AC_ERR_OK;
 
 	nelem = qs_fill_sg(qc);
 
@@ -311,6 +311,8 @@ static void qs_qc_prep(struct ata_queued_cmd *qc)
 
 	/* frame information structure (FIS) */
 	ata_tf_to_fis(&qc->tf, 0, 1, &buf[32]);
+
+	return AC_ERR_OK;
 }
 
 static inline void qs_packet_start(struct ata_queued_cmd *qc)

@@ -551,12 +551,14 @@ static void sata_rcar_bmdma_fill_sg(struct ata_queued_cmd *qc)
 	prd[si - 1].addr |= cpu_to_le32(SATA_RCAR_DTEND);
 }
 
-static void sata_rcar_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors sata_rcar_qc_prep(struct ata_queued_cmd *qc)
 {
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 
 	sata_rcar_bmdma_fill_sg(qc);
+
+	return AC_ERR_OK;
 }
 
 static void sata_rcar_bmdma_setup(struct ata_queued_cmd *qc)

@@ -119,7 +119,7 @@ static void sil_dev_config(struct ata_device *dev);
 static int sil_scr_read(struct ata_link *link, unsigned int sc_reg, u32 *val);
 static int sil_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val);
 static int sil_set_mode(struct ata_link *link, struct ata_device **r_failed);
-static void sil_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors sil_qc_prep(struct ata_queued_cmd *qc);
 static void sil_bmdma_setup(struct ata_queued_cmd *qc);
 static void sil_bmdma_start(struct ata_queued_cmd *qc);
 static void sil_bmdma_stop(struct ata_queued_cmd *qc);
@@ -333,12 +333,14 @@ static void sil_fill_sg(struct ata_queued_cmd *qc)
 	last_prd->flags_len |= cpu_to_le32(ATA_PRD_EOT);
 }
 
-static void sil_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors sil_qc_prep(struct ata_queued_cmd *qc)
 {
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 
 	sil_fill_sg(qc);
+
+	return AC_ERR_OK;
 }
 
 static unsigned char sil_get_device_cache_line(struct pci_dev *pdev)

@@ -336,7 +336,7 @@ static void sil24_dev_config(struct ata_device *dev);
 static int sil24_scr_read(struct ata_link *link, unsigned sc_reg, u32 *val);
 static int sil24_scr_write(struct ata_link *link, unsigned sc_reg, u32 val);
 static int sil24_qc_defer(struct ata_queued_cmd *qc);
-static void sil24_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors sil24_qc_prep(struct ata_queued_cmd *qc);
 static unsigned int sil24_qc_issue(struct ata_queued_cmd *qc);
 static bool sil24_qc_fill_rtf(struct ata_queued_cmd *qc);
 static void sil24_pmp_attach(struct ata_port *ap);
@@ -840,7 +840,7 @@ static int sil24_qc_defer(struct ata_queued_cmd *qc)
 	return ata_std_qc_defer(qc);
 }
 
-static void sil24_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors sil24_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
 	struct sil24_port_priv *pp = ap->private_data;
@@ -884,6 +884,8 @@ static void sil24_qc_prep(struct ata_queued_cmd *qc)
 
 	if (qc->flags & ATA_QCFLAG_DMAMAP)
 		sil24_fill_sg(qc, sge);
+
+	return AC_ERR_OK;
 }
 
 static unsigned int sil24_qc_issue(struct ata_queued_cmd *qc)

@@ -218,7 +218,7 @@ static void pdc_error_handler(struct ata_port *ap);
 static void pdc_freeze(struct ata_port *ap);
 static void pdc_thaw(struct ata_port *ap);
 static int pdc_port_start(struct ata_port *ap);
-static void pdc20621_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors pdc20621_qc_prep(struct ata_queued_cmd *qc);
 static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
 static void pdc_exec_command_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
 static unsigned int pdc20621_dimm_init(struct ata_host *host);
@@ -546,7 +546,7 @@ static void pdc20621_nodata_prep(struct ata_queued_cmd *qc)
 	VPRINTK("ata pkt buf ofs %u, mmio copied\n", i);
 }
 
-static void pdc20621_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors pdc20621_qc_prep(struct ata_queued_cmd *qc)
 {
 	switch (qc->tf.protocol) {
 	case ATA_PROT_DMA:
@@ -558,6 +558,8 @@ static void pdc20621_qc_prep(struct ata_queued_cmd *qc)
 	default:
 		break;
 	}
+
+	return AC_ERR_OK;
 }
 
 static void __pdc20621_push_hdma(struct ata_queued_cmd *qc,

@@ -2243,7 +2243,7 @@ static int eni_init_one(struct pci_dev *pci_dev,
 
 	rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
 	if (rc < 0)
-		goto out;
+		goto err_disable;
 
 	rc = -ENOMEM;
 	eni_dev = kmalloc(sizeof(struct eni_dev), GFP_KERNEL);

@@ -777,18 +777,22 @@ static int __init tlclk_init(void)
 {
 	int ret;
 
-	ret = register_chrdev(tlclk_major, "telco_clock", &tlclk_fops);
-	if (ret < 0) {
-		printk(KERN_ERR "tlclk: can't get major %d.\n", tlclk_major);
-		return ret;
-	}
-	tlclk_major = ret;
+	telclk_interrupt = (inb(TLCLK_REG7) & 0x0f);
 
 	alarm_events = kzalloc( sizeof(struct tlclk_alarms), GFP_KERNEL);
 	if (!alarm_events) {
 		ret = -ENOMEM;
 		goto out1;
 	}
+
+	ret = register_chrdev(tlclk_major, "telco_clock", &tlclk_fops);
+	if (ret < 0) {
+		printk(KERN_ERR "tlclk: can't get major %d.\n", tlclk_major);
+		kfree(alarm_events);
+		return ret;
+	}
+	tlclk_major = ret;
+
 	/* Read telecom clock IRQ number (Set by BIOS) */
 	if (!request_region(TLCLK_BASE, 8, "telco_clock")) {
 		printk(KERN_ERR "tlclk: request_region 0x%X failed.\n",
@@ -796,7 +800,6 @@ static int __init tlclk_init(void)
 		ret = -EBUSY;
 		goto out2;
 	}
-	telclk_interrupt = (inb(TLCLK_REG7) & 0x0f);
 
 	if (0x0F == telclk_interrupt ) {	/* not MCPBL0010 ? */
 		printk(KERN_ERR "telclk_interrupt = 0x%x non-mcpbl0010 hw.\n",
@@ -837,8 +840,8 @@ out3:
 	release_region(TLCLK_BASE, 8);
 out2:
 	kfree(alarm_events);
-out1:
 	unregister_chrdev(tlclk_major, "telco_clock");
+out1:
 	return ret;
 }

@@ -588,6 +588,7 @@ static irqreturn_t ibmvtpm_interrupt(int irq, void *vtpm_instance)
 	 */
 	while ((crq = ibmvtpm_crq_get_next(ibmvtpm)) != NULL) {
 		ibmvtpm_crq_process(crq, ibmvtpm);
+		wake_up_interruptible(&ibmvtpm->crq_queue.wq);
 		crq->valid = 0;
 		smp_wmb();
 	}
@@ -635,6 +636,7 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
 	}
 
 	crq_q->num_entry = CRQ_RES_BUF_SIZE / sizeof(*crq_q->crq_addr);
+	init_waitqueue_head(&crq_q->wq);
 	ibmvtpm->crq_dma_handle = dma_map_single(dev, crq_q->crq_addr,
 						 CRQ_RES_BUF_SIZE,
 						 DMA_BIDIRECTIONAL);
@@ -687,6 +689,13 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
 	if (rc)
 		goto init_irq_cleanup;
 
+	if (!wait_event_timeout(ibmvtpm->crq_queue.wq,
+				ibmvtpm->rtce_buf != NULL,
+				HZ)) {
+		dev_err(dev, "CRQ response timed out\n");
+		goto init_irq_cleanup;
+	}
+
 	return tpm_chip_register(chip);
 init_irq_cleanup:
 	do {

@@ -31,6 +31,7 @@ struct ibmvtpm_crq_queue {
 	struct ibmvtpm_crq *crq_addr;
 	u32 index;
 	u32 num_entry;
+	wait_queue_head_t wq;
 };
 
 struct ibmvtpm_dev {

@@ -193,15 +193,8 @@ static const char *ti_adpll_clk_get_name(struct ti_adpll_data *d,
 		if (err)
 			return NULL;
 	} else {
-		const char *base_name = "adpll";
-		char *buf;
-
-		buf = devm_kzalloc(d->dev, 8 + 1 + strlen(base_name) + 1 +
-				    strlen(postfix), GFP_KERNEL);
-		if (!buf)
-			return NULL;
-		sprintf(buf, "%08lx.%s.%s", d->pa, base_name, postfix);
-		name = buf;
+		name = devm_kasprintf(d->dev, GFP_KERNEL, "%08lx.adpll.%s",
+				      d->pa, postfix);
 	}
 
 	return name;

@@ -169,7 +169,7 @@ static int __init h8300_8timer_init(struct device_node *node)
 		return PTR_ERR(clk);
 	}
 
-	ret = ENXIO;
+	ret = -ENXIO;
 	base = of_iomap(node, 0);
 	if (!base) {
 		pr_err("failed to map registers for clockevent\n");

@@ -864,6 +864,7 @@ static struct notifier_block powernv_cpufreq_reboot_nb = {
 void powernv_cpufreq_work_fn(struct work_struct *work)
 {
 	struct chip *chip = container_of(work, struct chip, throttle);
+	struct cpufreq_policy *policy;
 	unsigned int cpu;
 	cpumask_t mask;
 
@@ -878,12 +879,14 @@ void powernv_cpufreq_work_fn(struct work_struct *work)
 	chip->restore = false;
 	for_each_cpu(cpu, &mask) {
 		int index;
-		struct cpufreq_policy policy;
 
-		cpufreq_get_policy(&policy, cpu);
-		index = cpufreq_table_find_index_c(&policy, policy.cur);
-		powernv_cpufreq_target_index(&policy, index);
-		cpumask_andnot(&mask, &mask, policy.cpus);
+		policy = cpufreq_cpu_get(cpu);
+		if (!policy)
+			continue;
+		index = cpufreq_table_find_index_c(policy, policy->cur);
+		powernv_cpufreq_target_index(policy, index);
+		cpumask_andnot(&mask, &mask, policy->cpus);
+		cpufreq_cpu_put(policy);
 	}
 out:
 	put_online_cpus();

@@ -79,6 +79,8 @@
 
 #define KHZ			1000
 
+#define KHZ_MAX			(ULONG_MAX / KHZ)
+
 /* Assume that the bus is saturated if the utilization is 25% */
 #define BUS_SATURATION_RATIO	25
 
@@ -179,7 +181,7 @@ struct tegra_actmon_emc_ratio {
 };
 
 static struct tegra_actmon_emc_ratio actmon_emc_ratios[] = {
-	{ 1400000, ULONG_MAX },
+	{ 1400000, KHZ_MAX },
 	{ 1200000,  750000 },
 	{ 1100000,  600000 },
 	{ 1000000,  500000 },

@@ -1208,8 +1208,7 @@ static void tegra_dma_free_chan_resources(struct dma_chan *dc)
 
 	dev_dbg(tdc2dev(tdc), "Freeing channel %d\n", tdc->id);
 
-	if (tdc->busy)
-		tegra_dma_terminate_all(dc);
+	tegra_dma_terminate_all(dc);
 
 	spin_lock_irqsave(&tdc->lock, flags);
 	list_splice_init(&tdc->pending_sg_req, &sg_req_list);

@@ -125,10 +125,12 @@
 /* Max transfer size per descriptor */
 #define ZYNQMP_DMA_MAX_TRANS_LEN	0x40000000
 
+/* Max burst lengths */
+#define ZYNQMP_DMA_MAX_DST_BURST_LEN	32768U
+#define ZYNQMP_DMA_MAX_SRC_BURST_LEN	32768U
+
 /* Reset values for data attributes */
 #define ZYNQMP_DMA_AXCACHE_VAL		0xF
-#define ZYNQMP_DMA_ARLEN_RST_VAL	0xF
-#define ZYNQMP_DMA_AWLEN_RST_VAL	0xF
 
 #define ZYNQMP_DMA_SRC_ISSUE_RST_VAL	0x1F
 
@@ -527,17 +529,19 @@ static void zynqmp_dma_handle_ovfl_int(struct zynqmp_dma_chan *chan, u32 status)
 
 static void zynqmp_dma_config(struct zynqmp_dma_chan *chan)
 {
-	u32 val;
+	u32 val, burst_val;
 
 	val = readl(chan->regs + ZYNQMP_DMA_CTRL0);
 	val |= ZYNQMP_DMA_POINT_TYPE_SG;
 	writel(val, chan->regs + ZYNQMP_DMA_CTRL0);
 
 	val = readl(chan->regs + ZYNQMP_DMA_DATA_ATTR);
+	burst_val = __ilog2_u32(chan->src_burst_len);
 	val = (val & ~ZYNQMP_DMA_ARLEN) |
-		(chan->src_burst_len << ZYNQMP_DMA_ARLEN_OFST);
+		((burst_val << ZYNQMP_DMA_ARLEN_OFST) & ZYNQMP_DMA_ARLEN);
+	burst_val = __ilog2_u32(chan->dst_burst_len);
 	val = (val & ~ZYNQMP_DMA_AWLEN) |
-		(chan->dst_burst_len << ZYNQMP_DMA_AWLEN_OFST);
+		((burst_val << ZYNQMP_DMA_AWLEN_OFST) & ZYNQMP_DMA_AWLEN);
 	writel(val, chan->regs + ZYNQMP_DMA_DATA_ATTR);
 }
 
@@ -551,8 +555,10 @@ static int zynqmp_dma_device_config(struct dma_chan *dchan,
 {
 	struct zynqmp_dma_chan *chan = to_chan(dchan);
 
-	chan->src_burst_len = config->src_maxburst;
-	chan->dst_burst_len = config->dst_maxburst;
+	chan->src_burst_len = clamp(config->src_maxburst, 1U,
+				    ZYNQMP_DMA_MAX_SRC_BURST_LEN);
+	chan->dst_burst_len = clamp(config->dst_maxburst, 1U,
+				    ZYNQMP_DMA_MAX_DST_BURST_LEN);
 
 	return 0;
 }
@@ -873,8 +879,8 @@ static int zynqmp_dma_chan_probe(struct zynqmp_dma_device *zdev,
 		return PTR_ERR(chan->regs);
 
 	chan->bus_width = ZYNQMP_DMA_BUS_WIDTH_64;
-	chan->dst_burst_len = ZYNQMP_DMA_AWLEN_RST_VAL;
-	chan->src_burst_len = ZYNQMP_DMA_ARLEN_RST_VAL;
+	chan->dst_burst_len = ZYNQMP_DMA_MAX_DST_BURST_LEN;
+	chan->src_burst_len = ZYNQMP_DMA_MAX_SRC_BURST_LEN;
 	err = of_property_read_u32(node, "xlnx,bus-width", &chan->bus_width);
 	if (err < 0) {
 		dev_err(&pdev->dev, "missing xlnx,bus-width property\n");

@@ -742,8 +742,8 @@ static void atom_op_jump(atom_exec_context *ctx, int *ptr, int arg)
 		cjiffies = jiffies;
 		if (time_after(cjiffies, ctx->last_jump_jiffies)) {
 			cjiffies -= ctx->last_jump_jiffies;
-			if ((jiffies_to_msecs(cjiffies) > 5000)) {
-				DRM_ERROR("atombios stuck in loop for more than 5secs aborting\n");
+			if ((jiffies_to_msecs(cjiffies) > 10000)) {
+				DRM_ERROR("atombios stuck in loop for more than 10secs aborting\n");
 				ctx->abort = true;
 			}
 		} else {

@@ -415,6 +415,8 @@ static bool cdv_intel_find_dp_pll(const struct gma_limit_t *limit,
 	struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
 	struct gma_clock_t clock;
 
+	memset(&clock, 0, sizeof(clock));
+
 	switch (refclk) {
 	case 27000:
 		if (target < 200000) {

@@ -161,8 +161,11 @@ nouveau_debugfs_pstate_set(struct file *file, const char __user *ubuf,
 	}
 
 	ret = pm_runtime_get_sync(drm->dev);
-	if (ret < 0 && ret != -EACCES)
+	if (ret < 0 && ret != -EACCES) {
+		pm_runtime_put_autosuspend(drm->dev);
 		return ret;
+	}
 
 	ret = nvif_mthd(ctrl, NVIF_CONTROL_PSTATE_USER, &args, sizeof(args));
 	pm_runtime_put_autosuspend(drm->dev);
 	if (ret < 0)

@@ -193,7 +193,7 @@ static int __init omapdss_boot_init(void)
 	dss = of_find_matching_node(NULL, omapdss_of_match);
 
 	if (dss == NULL || !of_device_is_available(dss))
-		return 0;
+		goto put_node;
 
 	omapdss_walk_device(dss, true);
 
@@ -218,6 +218,8 @@ static int __init omapdss_boot_init(void)
 		kfree(n);
 	}
 
+put_node:
+	of_node_put(dss);
 	return 0;
 }

@@ -1115,6 +1115,7 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *hdmi)
 	card->num_links = 1;
 	card->name = "vc4-hdmi";
 	card->dev = dev;
+	card->owner = THIS_MODULE;
 
 	/*
 	 * Be careful, snd_soc_register_card() calls dev_set_drvdata() and

@@ -1280,8 +1280,8 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
 
 	/* create pre-declared device nodes */
 	of_i2c_register_devices(adap);
-	i2c_acpi_register_devices(adap);
 	i2c_acpi_install_space_handler(adap);
+	i2c_acpi_register_devices(adap);
 
 	if (adap->nr < __i2c_first_dynamic_bus_num)
 		i2c_scan_static_board_info(adap);

@@ -1315,13 +1315,13 @@ static ssize_t ucma_set_option(struct ucma_file *file, const char __user *inbuf,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
+	if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE))
+		return -EINVAL;
+
 	ctx = ucma_get_ctx(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
-	if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE))
-		return -EINVAL;
-
 	optval = memdup_user((void __user *) (unsigned long) cmd.optval,
 			     cmd.optlen);
 	if (IS_ERR(optval)) {

@@ -3265,7 +3265,7 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 		if (raddr->sin_addr.s_addr == htonl(INADDR_ANY)) {
 			err = pick_local_ipaddrs(dev, cm_id);
 			if (err)
-				goto fail2;
+				goto fail3;
 		}
 
 		/* find a route */
@@ -3287,7 +3287,7 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 		if (ipv6_addr_type(&raddr6->sin6_addr) == IPV6_ADDR_ANY) {
 			err = pick_local_ip6addrs(dev, cm_id);
 			if (err)
-				goto fail2;
+				goto fail3;
 		}
 
 		/* find a route */

@@ -2052,9 +2052,9 @@ static int i40iw_addr_resolve_neigh_ipv6(struct i40iw_device *iwdev,
 	dst = i40iw_get_dst_ipv6(&src_addr, &dst_addr);
 	if (!dst || dst->error) {
 		if (dst) {
-			dst_release(dst);
 			i40iw_pr_err("ip6_route_output returned dst->error = %d\n",
 				     dst->error);
+			dst_release(dst);
 		}
 		return rc;
 	}

@@ -126,6 +126,8 @@ static int rxe_init_device_param(struct rxe_dev *rxe)
 	rxe->attr.max_fast_reg_page_list_len	= RXE_MAX_FMR_PAGE_LIST_LEN;
 	rxe->attr.max_pkeys			= RXE_MAX_PKEYS;
 	rxe->attr.local_ca_ack_delay		= RXE_LOCAL_CA_ACK_DELAY;
+	addrconf_addr_eui48((unsigned char *)&rxe->attr.sys_image_guid,
+			rxe->ndev->dev_addr);
 
 	rxe->max_ucontext			= RXE_MAX_UCONTEXT;

@@ -593,15 +593,16 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask,
 	struct ib_gid_attr sgid_attr;
 
 	if (mask & IB_QP_MAX_QP_RD_ATOMIC) {
-		int max_rd_atomic = __roundup_pow_of_two(attr->max_rd_atomic);
+		int max_rd_atomic = attr->max_rd_atomic ?
+			roundup_pow_of_two(attr->max_rd_atomic) : 0;
 
 		qp->attr.max_rd_atomic = max_rd_atomic;
 		atomic_set(&qp->req.rd_atomic, max_rd_atomic);
 	}
 
 	if (mask & IB_QP_MAX_DEST_RD_ATOMIC) {
-		int max_dest_rd_atomic =
-			__roundup_pow_of_two(attr->max_dest_rd_atomic);
+		int max_dest_rd_atomic = attr->max_dest_rd_atomic ?
+			roundup_pow_of_two(attr->max_dest_rd_atomic) : 0;
 
 		qp->attr.max_dest_rd_atomic = max_dest_rd_atomic;

@@ -548,6 +548,7 @@ struct cache_set {
 	 */
 	wait_queue_head_t	btree_cache_wait;
 	struct task_struct	*btree_cache_alloc_lock;
+	spinlock_t		btree_cannibalize_lock;
 
 	/*
 	 * When we free a btree node, we increment the gen of the bucket the

@@ -840,15 +840,17 @@ out:
 
 static int mca_cannibalize_lock(struct cache_set *c, struct btree_op *op)
 {
-	struct task_struct *old;
-
-	old = cmpxchg(&c->btree_cache_alloc_lock, NULL, current);
-	if (old && old != current) {
+	spin_lock(&c->btree_cannibalize_lock);
+	if (likely(c->btree_cache_alloc_lock == NULL)) {
+		c->btree_cache_alloc_lock = current;
+	} else if (c->btree_cache_alloc_lock != current) {
 		if (op)
 			prepare_to_wait(&c->btree_cache_wait, &op->wait,
 					TASK_UNINTERRUPTIBLE);
+		spin_unlock(&c->btree_cannibalize_lock);
 		return -EINTR;
 	}
+	spin_unlock(&c->btree_cannibalize_lock);
 
 	return 0;
 }
@@ -883,10 +885,12 @@ static struct btree *mca_cannibalize(struct cache_set *c, struct btree_op *op,
  */
 static void bch_cannibalize_unlock(struct cache_set *c)
 {
+	spin_lock(&c->btree_cannibalize_lock);
 	if (c->btree_cache_alloc_lock == current) {
 		c->btree_cache_alloc_lock = NULL;
 		wake_up(&c->btree_cache_wait);
 	}
+	spin_unlock(&c->btree_cannibalize_lock);
 }
 
 static struct btree *mca_alloc(struct cache_set *c, struct btree_op *op,

@@ -1510,6 +1510,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 	sema_init(&c->sb_write_mutex, 1);
 	mutex_init(&c->bucket_lock);
 	init_waitqueue_head(&c->btree_cache_wait);
+	spin_lock_init(&c->btree_cannibalize_lock);
 	init_waitqueue_head(&c->bucket_wait);
 	init_waitqueue_head(&c->gc_wait);
 	sema_init(&c->uuid_write_mutex, 1);

@@ -483,10 +483,11 @@ static int tda10071_read_status(struct dvb_frontend *fe, enum fe_status *status)
 			goto error;
 
 		if (dev->delivery_system == SYS_DVBS) {
-			dev->dvbv3_ber = buf[0] << 24 | buf[1] << 16 |
-					 buf[2] << 8 | buf[3] << 0;
-			dev->post_bit_error += buf[0] << 24 | buf[1] << 16 |
-					       buf[2] << 8 | buf[3] << 0;
+			u32 bit_error = buf[0] << 24 | buf[1] << 16 |
+					buf[2] << 8 | buf[3] << 0;
+
+			dev->dvbv3_ber = bit_error;
+			dev->post_bit_error += bit_error;
 			c->post_bit_error.stat[0].scale = FE_SCALE_COUNTER;
 			c->post_bit_error.stat[0].uvalue = dev->post_bit_error;
 			dev->block_error += buf[4] << 8 | buf[5] << 0;

@@ -2338,11 +2338,12 @@ smiapp_sysfs_nvm_read(struct device *dev, struct device_attribute *attr,
 	if (rval < 0) {
 		if (rval != -EBUSY && rval != -EAGAIN)
 			pm_runtime_set_active(&client->dev);
-		pm_runtime_put(&client->dev);
+		pm_runtime_put_noidle(&client->dev);
 		return -ENODEV;
 	}
 
 	if (smiapp_read_nvm(sensor, sensor->nvm)) {
+		pm_runtime_put(&client->dev);
 		dev_err(&client->dev, "nvm read failed\n");
 		return -ENODEV;
 	}

@@ -687,12 +687,13 @@ static void pix_proc_config(struct cal_ctx *ctx)
 }
 
 static void cal_wr_dma_config(struct cal_ctx *ctx,
-			      unsigned int width)
+			      unsigned int width, unsigned int height)
 {
 	u32 val;
 
 	val = reg_read(ctx->dev, CAL_WR_DMA_CTRL(ctx->csi2_port));
 	set_field(&val, ctx->csi2_port, CAL_WR_DMA_CTRL_CPORT_MASK);
+	set_field(&val, height, CAL_WR_DMA_CTRL_YSIZE_MASK);
 	set_field(&val, CAL_WR_DMA_CTRL_DTAG_PIX_DAT,
 		  CAL_WR_DMA_CTRL_DTAG_MASK);
 	set_field(&val, CAL_WR_DMA_CTRL_MODE_CONST,
@@ -1318,7 +1319,8 @@ static int cal_start_streaming(struct vb2_queue *vq, unsigned int count)
 	csi2_lane_config(ctx);
 	csi2_ctx_config(ctx);
 	pix_proc_config(ctx);
-	cal_wr_dma_config(ctx, ctx->v_fmt.fmt.pix.bytesperline);
+	cal_wr_dma_config(ctx, ctx->v_fmt.fmt.pix.bytesperline,
+			  ctx->v_fmt.fmt.pix.height);
 	cal_wr_dma_addr(ctx, addr);
 	csi2_ppi_enable(ctx);
 

@@ -1052,6 +1052,7 @@ static int go7007_usb_probe(struct usb_interface *intf,
 	struct go7007_usb *usb;
 	const struct go7007_usb_board *board;
 	struct usb_device *usbdev = interface_to_usbdev(intf);
+	struct usb_host_endpoint *ep;
 	unsigned num_i2c_devs;
 	char *name;
 	int video_pipe, i, v_urb_len;
@@ -1147,7 +1148,8 @@ static int go7007_usb_probe(struct usb_interface *intf,
 	if (usb->intr_urb->transfer_buffer == NULL)
 		goto allocfail;
 
-	if (go->board_id == GO7007_BOARDID_SENSORAY_2250)
+	ep = usb->usbdev->ep_in[4];
+	if (usb_endpoint_type(&ep->desc) == USB_ENDPOINT_XFER_BULK)
 		usb_fill_bulk_urb(usb->intr_urb, usb->usbdev,
 			usb_rcvbulkpipe(usb->usbdev, 4),
 			usb->intr_urb->transfer_buffer, 2*sizeof(u16),

@@ -32,6 +32,11 @@ int mfd_cell_enable(struct platform_device *pdev)
 	const struct mfd_cell *cell = mfd_get_cell(pdev);
 	int err = 0;
 
+	if (!cell->enable) {
+		dev_dbg(&pdev->dev, "No .enable() call-back registered\n");
+		return 0;
+	}
+
 	/* only call enable hook if the cell wasn't previously enabled */
 	if (atomic_inc_return(cell->usage_count) == 1)
 		err = cell->enable(pdev);
@@ -49,6 +54,11 @@ int mfd_cell_disable(struct platform_device *pdev)
 	const struct mfd_cell *cell = mfd_get_cell(pdev);
 	int err = 0;
 
+	if (!cell->disable) {
+		dev_dbg(&pdev->dev, "No .disable() call-back registered\n");
+		return 0;
+	}
+
 	/* only disable if no other clients are using it */
 	if (atomic_dec_return(cell->usage_count) == 0)
 		err = cell->disable(pdev);

@@ -300,7 +300,7 @@ static void mmc_manage_enhanced_area(struct mmc_card *card, u8 *ext_csd)
 	}
 }
 
-static void mmc_part_add(struct mmc_card *card, unsigned int size,
+static void mmc_part_add(struct mmc_card *card, u64 size,
 			 unsigned int part_cfg, char *name, int idx, bool ro,
 			 int area_type)
 {
@@ -316,7 +316,7 @@ static void mmc_manage_gp_partitions(struct mmc_card *card, u8 *ext_csd)
 {
 	int idx;
 	u8 hc_erase_grp_sz, hc_wp_grp_sz;
-	unsigned int part_size;
+	u64 part_size;
 
 	/*
 	 * General purpose partition feature support --
@@ -346,8 +346,7 @@ static void mmc_manage_gp_partitions(struct mmc_card *card, u8 *ext_csd)
 				(ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3 + 1]
 				<< 8) +
 				ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3];
-			part_size *= (size_t)(hc_erase_grp_sz *
-				hc_wp_grp_sz);
+			part_size *= (hc_erase_grp_sz * hc_wp_grp_sz);
 			mmc_part_add(card, part_size << 19,
 				EXT_CSD_PART_CONFIG_ACC_GP0 + idx,
 				"gp%d", idx, false,
@@ -365,7 +364,7 @@ static void mmc_manage_gp_partitions(struct mmc_card *card, u8 *ext_csd)
 static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)
 {
 	int err = 0, idx;
-	unsigned int part_size;
+	u64 part_size;
 	struct device_node *np;
 	bool broken_hpi = false;
 

@@ -726,7 +726,6 @@ static struct mtd_info *cfi_amdstd_setup(struct mtd_info *mtd)
 	kfree(mtd->eraseregions);
 	kfree(mtd);
 	kfree(cfi->cmdset_priv);
-	kfree(cfi->cfiq);
 	return NULL;
 }
 
@@ -228,12 +228,29 @@ static int mtdpart_setup_real(char *s)
 		struct cmdline_mtd_partition *this_mtd;
 		struct mtd_partition *parts;
 		int mtd_id_len, num_parts;
-		char *p, *mtd_id;
+		char *p, *mtd_id, *semicol;
+
+		/*
+		 * Replace the first ';' by a NULL char so strrchr can work
+		 * properly.
+		 */
+		semicol = strchr(s, ';');
+		if (semicol)
+			*semicol = '\0';
 
 		mtd_id = s;
 
-		/* fetch <mtd-id> */
-		p = strchr(s, ':');
+		/*
+		 * fetch <mtd-id>. We use strrchr to ignore all ':' that could
+		 * be present in the MTD name, only the last one is interpreted
+		 * as an <mtd-id>/<part-definition> separator.
+		 */
+		p = strrchr(s, ':');
+
+		/* Restore the ';' now. */
+		if (semicol)
+			*semicol = ';';
+
 		if (!p) {
 			pr_err("no mtd-id\n");
 			return -EINVAL;

@@ -421,6 +421,7 @@ static int elm_probe(struct platform_device *pdev)
 	pm_runtime_enable(&pdev->dev);
 	if (pm_runtime_get_sync(&pdev->dev) < 0) {
 		ret = -EINVAL;
+		pm_runtime_put_sync(&pdev->dev);
 		pm_runtime_disable(&pdev->dev);
 		dev_err(&pdev->dev, "can't enable clock\n");
 		return ret;

@@ -1264,9 +1264,12 @@ static int bnxt_set_pauseparam(struct net_device *dev,
 	if (!BNXT_SINGLE_PF(bp))
 		return -EOPNOTSUPP;
 
+	mutex_lock(&bp->link_lock);
 	if (epause->autoneg) {
-		if (!(link_info->autoneg & BNXT_AUTONEG_SPEED))
-			return -EINVAL;
+		if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
+			rc = -EINVAL;
+			goto pause_exit;
+		}
 
 		link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
 		if (bp->hwrm_spec_code >= 0x10201)
@@ -1287,11 +1290,11 @@ static int bnxt_set_pauseparam(struct net_device *dev,
 	if (epause->tx_pause)
 		link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_TX;
 
-	if (netif_running(dev)) {
-		mutex_lock(&bp->link_lock);
+	if (netif_running(dev))
 		rc = bnxt_hwrm_set_pause(bp);
-		mutex_unlock(&bp->link_lock);
-	}
 
+pause_exit:
+	mutex_unlock(&bp->link_lock);
 	return rc;
 }
 
@@ -1977,8 +1980,7 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
 	struct bnxt *bp = netdev_priv(dev);
 	struct ethtool_eee *eee = &bp->eee;
 	struct bnxt_link_info *link_info = &bp->link_info;
-	u32 advertising =
-		_bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
+	u32 advertising;
 	int rc = 0;
 
 	if (!BNXT_SINGLE_PF(bp))
@@ -1987,19 +1989,23 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
 	if (!(bp->flags & BNXT_FLAG_EEE_CAP))
 		return -EOPNOTSUPP;
 
+	mutex_lock(&bp->link_lock);
+	advertising = _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
 	if (!edata->eee_enabled)
 		goto eee_ok;
 
 	if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
 		netdev_warn(dev, "EEE requires autoneg\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		goto eee_exit;
 	}
 	if (edata->tx_lpi_enabled) {
 		if (bp->lpi_tmr_hi && (edata->tx_lpi_timer > bp->lpi_tmr_hi ||
 				       edata->tx_lpi_timer < bp->lpi_tmr_lo)) {
 			netdev_warn(dev, "Valid LPI timer range is %d and %d microsecs\n",
 				    bp->lpi_tmr_lo, bp->lpi_tmr_hi);
-			return -EINVAL;
+			rc = -EINVAL;
+			goto eee_exit;
 		} else if (!bp->lpi_tmr_hi) {
 			edata->tx_lpi_timer = eee->tx_lpi_timer;
 		}
@@ -2009,7 +2015,8 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
 	} else if (edata->advertised & ~advertising) {
 		netdev_warn(dev, "EEE advertised %x must be a subset of autoneg advertised speeds %x\n",
 			    edata->advertised, advertising);
-		return -EINVAL;
+		rc = -EINVAL;
+		goto eee_exit;
 	}
 
 	eee->advertised = edata->advertised;
@@ -2021,6 +2028,8 @@ eee_ok:
 	if (netif_running(dev))
 		rc = bnxt_hwrm_set_link_setting(bp, false, true);
 
+eee_exit:
+	mutex_unlock(&bp->link_lock);
 	return rc;
 }
 
@@ -567,8 +567,13 @@ void e1000_reinit_locked(struct e1000_adapter *adapter)
 	WARN_ON(in_interrupt());
 	while (test_and_set_bit(__E1000_RESETTING, &adapter->flags))
 		msleep(1);
-	e1000_down(adapter);
-	e1000_up(adapter);
+
+	/* only run the task if not already down */
+	if (!test_bit(__E1000_DOWN, &adapter->flags)) {
+		e1000_down(adapter);
+		e1000_up(adapter);
+	}
+
 	clear_bit(__E1000_RESETTING, &adapter->flags);
 }
 
@@ -1458,10 +1463,15 @@ int e1000_close(struct net_device *netdev)
 	struct e1000_hw *hw = &adapter->hw;
 	int count = E1000_CHECK_RESET_COUNT;
 
-	while (test_bit(__E1000_RESETTING, &adapter->flags) && count--)
+	while (test_and_set_bit(__E1000_RESETTING, &adapter->flags) && count--)
 		usleep_range(10000, 20000);
 
-	WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags));
+	WARN_ON(count < 0);
+
+	/* signal that we're down so that the reset task will no longer run */
+	set_bit(__E1000_DOWN, &adapter->flags);
+	clear_bit(__E1000_RESETTING, &adapter->flags);
 
 	e1000_down(adapter);
 	e1000_power_down_phy(adapter);
 	e1000_free_irq(adapter);

@@ -96,6 +96,7 @@ static int qed_sp_vf_start(struct qed_hwfn *p_hwfn, struct qed_vf_info *p_vf)
 		p_ramrod->personality = PERSONALITY_ETH;
 		break;
 	case QED_PCI_ETH_ROCE:
+	case QED_PCI_ETH_IWARP:
 		p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
 		break;
 	default:

@@ -716,7 +716,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
 				       struct net_device *dev,
 				       struct geneve_sock *gs4,
 				       struct flowi4 *fl4,
-				       const struct ip_tunnel_info *info)
+				       const struct ip_tunnel_info *info,
+				       __be16 dport, __be16 sport)
 {
 	bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
 	struct geneve_dev *geneve = netdev_priv(dev);
@@ -732,6 +733,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
 	fl4->flowi4_proto = IPPROTO_UDP;
 	fl4->daddr = info->key.u.ipv4.dst;
 	fl4->saddr = info->key.u.ipv4.src;
+	fl4->fl4_dport = dport;
+	fl4->fl4_sport = sport;
 
 	tos = info->key.tos;
 	if ((tos == 1) && !geneve->collect_md) {
@@ -766,7 +769,8 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
 					   struct net_device *dev,
 					   struct geneve_sock *gs6,
 					   struct flowi6 *fl6,
-					   const struct ip_tunnel_info *info)
+					   const struct ip_tunnel_info *info,
+					   __be16 dport, __be16 sport)
 {
 	bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
 	struct geneve_dev *geneve = netdev_priv(dev);
@@ -782,6 +786,9 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
 	fl6->flowi6_proto = IPPROTO_UDP;
 	fl6->daddr = info->key.u.ipv6.dst;
 	fl6->saddr = info->key.u.ipv6.src;
+	fl6->fl6_dport = dport;
+	fl6->fl6_sport = sport;
+
 	prio = info->key.tos;
 	if ((prio == 1) && !geneve->collect_md) {
 		prio = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
@@ -828,7 +835,9 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 	__be16 df;
 	int err;
 
-	rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
+	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+	rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+			      geneve->info.key.tp_dst, sport);
 	if (IS_ERR(rt))
 		return PTR_ERR(rt);
 
@@ -839,7 +848,6 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 		skb_dst_update_pmtu(skb, mtu);
 	}
 
-	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
 	if (geneve->collect_md) {
 		tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
 		ttl = key->ttl;
@@ -874,7 +882,9 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 	__be16 sport;
 	int err;
 
-	dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
+	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+	dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
+				geneve->info.key.tp_dst, sport);
 	if (IS_ERR(dst))
 		return PTR_ERR(dst);
 
@@ -885,7 +895,6 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 		skb_dst_update_pmtu(skb, mtu);
 	}
 
-	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
 	if (geneve->collect_md) {
 		prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
 		ttl = key->ttl;
@@ -963,13 +972,18 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
 {
 	struct ip_tunnel_info *info = skb_tunnel_info(skb);
 	struct geneve_dev *geneve = netdev_priv(dev);
+	__be16 sport;
 
 	if (ip_tunnel_info_af(info) == AF_INET) {
 		struct rtable *rt;
 		struct flowi4 fl4;
-		struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);
 
-		rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
+		struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);
+		sport = udp_flow_src_port(geneve->net, skb,
+					  1, USHRT_MAX, true);
+
+		rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+				      geneve->info.key.tp_dst, sport);
 		if (IS_ERR(rt))
 			return PTR_ERR(rt);
 
@@ -979,9 +993,13 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
 	} else if (ip_tunnel_info_af(info) == AF_INET6) {
 		struct dst_entry *dst;
 		struct flowi6 fl6;
-		struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);
 
-		dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
+		struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);
+		sport = udp_flow_src_port(geneve->net, skb,
+					  1, USHRT_MAX, true);
+
+		dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
+					geneve->info.key.tp_dst, sport);
 		if (IS_ERR(dst))
 			return PTR_ERR(dst);
 
@@ -992,8 +1010,7 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
 		return -EINVAL;
 	}
 
-	info->key.tp_src = udp_flow_src_port(geneve->net, skb,
-					     1, USHRT_MAX, true);
+	info->key.tp_src = sport;
 	info->key.tp_dst = geneve->info.key.tp_dst;
 	return 0;
 }

@@ -834,7 +834,9 @@ static int adf7242_rx(struct adf7242_local *lp)
 	int ret;
 	u8 lqi, len_u8, *data;
 
-	adf7242_read_reg(lp, 0, &len_u8);
+	ret = adf7242_read_reg(lp, 0, &len_u8);
+	if (ret)
+		return ret;
 
 	len = len_u8;
 
@@ -2924,6 +2924,7 @@ static int ca8210_dev_com_init(struct ca8210_priv *priv)
 	);
 	if (!priv->irq_workqueue) {
 		dev_crit(&priv->spi->dev, "alloc of irq_workqueue failed!\n");
+		destroy_workqueue(priv->mlme_workqueue);
 		return -ENOMEM;
 	}
 
@@ -1121,7 +1121,8 @@ void phy_detach(struct phy_device *phydev)
 
 	phy_led_triggers_unregister(phydev);
 
-	module_put(phydev->mdio.dev.driver->owner);
+	if (phydev->mdio.dev.driver)
+		module_put(phydev->mdio.dev.driver->owner);
 
 	/* If the device had no specific driver before (i.e. - it
 	 * was using the generic driver), we unbind the device

@@ -386,11 +386,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
 	}
 
 	for (opt = data; len; len -= opt[1], opt += opt[1]) {
-		if (len < 2 || len < opt[1]) {
-			dev->stats.rx_errors++;
-			kfree(out);
-			return; /* bad packet, drop silently */
-		}
+		if (len < 2 || opt[1] < 2 || len < opt[1])
+			goto err_out;
 
 		if (pid == PID_LCP)
 			switch (opt[0]) {
@@ -398,6 +395,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
 				continue; /* MRU always OK and > 1500 bytes? */
 
 			case LCP_OPTION_ACCM: /* async control character map */
+				if (opt[1] < sizeof(valid_accm))
+					goto err_out;
 				if (!memcmp(opt, valid_accm,
 					    sizeof(valid_accm)))
 					continue;
@@ -409,6 +408,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
 				}
 				break;
 			case LCP_OPTION_MAGIC:
+				if (len < 6)
+					goto err_out;
 				if (opt[1] != 6 || (!opt[2] && !opt[3] &&
 						    !opt[4] && !opt[5]))
 					break; /* reject invalid magic number */
@@ -427,6 +428,11 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
 		ppp_cp_event(dev, pid, RCR_GOOD, CP_CONF_ACK, id, req_len, data);
 
 	kfree(out);
+	return;
+
+err_out:
+	dev->stats.rx_errors++;
+	kfree(out);
 }
 
 static int ppp_rx(struct sk_buff *skb)

@@ -1771,6 +1771,8 @@ static const struct usb_device_id ar5523_id_table[] = {
 	AR5523_DEVICE_UX(0x0846, 0x4300),	/* Netgear / WG111U */
 	AR5523_DEVICE_UG(0x0846, 0x4250),	/* Netgear / WG111T */
 	AR5523_DEVICE_UG(0x0846, 0x5f00),	/* Netgear / WPN111 */
+	AR5523_DEVICE_UG(0x083a, 0x4506),	/* SMC / EZ Connect
+						   SMCWUSBT-G2 */
 	AR5523_DEVICE_UG(0x157e, 0x3006),	/* Umedia / AR5523_1 */
 	AR5523_DEVICE_UX(0x157e, 0x3205),	/* Umedia / AR5523_2 */
 	AR5523_DEVICE_UG(0x157e, 0x3006),	/* Umedia / TEW444UBEU */

@@ -1564,23 +1564,33 @@ static int ath10k_sdio_hif_diag_read(struct ath10k *ar, u32 address, void *buf,
 				     size_t buf_len)
 {
 	int ret;
+	void *mem;
+
+	mem = kzalloc(buf_len, GFP_KERNEL);
+	if (!mem)
+		return -ENOMEM;
 
 	/* set window register to start read cycle */
 	ret = ath10k_sdio_write32(ar, MBOX_WINDOW_READ_ADDR_ADDRESS, address);
 	if (ret) {
 		ath10k_warn(ar, "failed to set mbox window read address: %d", ret);
-		return ret;
+		goto out;
 	}
 
 	/* read the data */
-	ret = ath10k_sdio_read(ar, MBOX_WINDOW_DATA_ADDRESS, buf, buf_len);
+	ret = ath10k_sdio_read(ar, MBOX_WINDOW_DATA_ADDRESS, mem, buf_len);
 	if (ret) {
 		ath10k_warn(ar, "failed to read from mbox window data address: %d\n",
 			    ret);
-		return ret;
+		goto out;
 	}
 
-	return 0;
+	memcpy(buf, mem, buf_len);
+
+out:
+	kfree(mem);
+
+	return ret;
 }
 
 static int ath10k_sdio_hif_diag_read32(struct ath10k *ar, u32 address,

@@ -938,7 +938,7 @@ struct mwifiex_tkip_param {
 struct mwifiex_aes_param {
 	u8 pn[WPA_PN_SIZE];
 	__le16 key_len;
-	u8 key[WLAN_KEY_LEN_CCMP];
+	u8 key[WLAN_KEY_LEN_CCMP_256];
 } __packed;
 
 struct mwifiex_wapi_param {
@@ -624,7 +624,7 @@ static int mwifiex_ret_802_11_key_material_v2(struct mwifiex_private *priv,
 	key_v2 = &resp->params.key_material_v2;
 
 	len = le16_to_cpu(key_v2->key_param_set.key_params.aes.key_len);
-	if (len > WLAN_KEY_LEN_CCMP)
+	if (len > sizeof(key_v2->key_param_set.key_params.aes.key))
 		return -EINVAL;
 
 	if (le16_to_cpu(key_v2->action) == HostCmd_ACT_GEN_SET) {
@@ -640,7 +640,7 @@ static int mwifiex_ret_802_11_key_material_v2(struct mwifiex_private *priv,
 		return 0;
 
 	memset(priv->aes_key_v2.key_param_set.key_params.aes.key, 0,
-	       WLAN_KEY_LEN_CCMP);
+	       sizeof(key_v2->key_param_set.key_params.aes.key));
 	priv->aes_key_v2.key_param_set.key_params.aes.key_len =
 		cpu_to_le16(len);
 	memcpy(priv->aes_key_v2.key_param_set.key_params.aes.key,

@@ -102,6 +102,8 @@
 #define QSERDES_COM_CORECLK_DIV_MODE1			0x1bc
 
 /* QMP PHY TX registers */
+#define QSERDES_TX_EMP_POST1_LVL			0x018
+#define QSERDES_TX_SLEW_CNTL				0x040
 #define QSERDES_TX_RES_CODE_LANE_OFFSET			0x054
 #define QSERDES_TX_DEBUG_BUS_SEL			0x064
 #define QSERDES_TX_HIGHZ_TRANSCEIVEREN_BIAS_DRVR_EN	0x068
@@ -394,8 +396,8 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_serdes_tbl[] = {
	QMP_PHY_INIT_CFG(QSERDES_COM_BG_TRIM, 0xf),
 	QMP_PHY_INIT_CFG(QSERDES_COM_LOCK_CMP_EN, 0x1),
 	QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_MAP, 0x0),
-	QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_TIMER1, 0x1f),
-	QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_TIMER2, 0x3f),
+	QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_TIMER1, 0xff),
+	QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_TIMER2, 0x1f),
 	QMP_PHY_INIT_CFG(QSERDES_COM_CMN_CONFIG, 0x6),
 	QMP_PHY_INIT_CFG(QSERDES_COM_PLL_IVCO, 0xf),
 	QMP_PHY_INIT_CFG(QSERDES_COM_HSCLK_SEL, 0x0),
@@ -421,7 +423,6 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_serdes_tbl[] = {
 	QMP_PHY_INIT_CFG(QSERDES_COM_INTEGLOOP_GAIN1_MODE0, 0x0),
 	QMP_PHY_INIT_CFG(QSERDES_COM_INTEGLOOP_GAIN0_MODE0, 0x80),
 	QMP_PHY_INIT_CFG(QSERDES_COM_BIAS_EN_CTRL_BY_PSM, 0x1),
-	QMP_PHY_INIT_CFG(QSERDES_COM_VCO_TUNE_CTRL, 0xa),
 	QMP_PHY_INIT_CFG(QSERDES_COM_SSC_EN_CENTER, 0x1),
 	QMP_PHY_INIT_CFG(QSERDES_COM_SSC_PER1, 0x31),
 	QMP_PHY_INIT_CFG(QSERDES_COM_SSC_PER2, 0x1),
@@ -430,7 +431,6 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_serdes_tbl[] = {
 	QMP_PHY_INIT_CFG(QSERDES_COM_SSC_STEP_SIZE1, 0x2f),
 	QMP_PHY_INIT_CFG(QSERDES_COM_SSC_STEP_SIZE2, 0x19),
 	QMP_PHY_INIT_CFG(QSERDES_COM_CLK_EP_DIV, 0x19),
-	QMP_PHY_INIT_CFG(QSERDES_RX_SIGDET_CNTRL, 0x7),
 };
 
 static const struct qmp_phy_init_tbl ipq8074_pcie_tx_tbl[] = {
@@ -438,6 +438,8 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_tx_tbl[] = {
 	QMP_PHY_INIT_CFG(QSERDES_TX_LANE_MODE, 0x6),
 	QMP_PHY_INIT_CFG(QSERDES_TX_RES_CODE_LANE_OFFSET, 0x2),
 	QMP_PHY_INIT_CFG(QSERDES_TX_RCV_DETECT_LVL_2, 0x12),
+	QMP_PHY_INIT_CFG(QSERDES_TX_EMP_POST1_LVL, 0x36),
+	QMP_PHY_INIT_CFG(QSERDES_TX_SLEW_CNTL, 0x0a),
 };
 
 static const struct qmp_phy_init_tbl ipq8074_pcie_rx_tbl[] = {
@@ -448,7 +450,6 @@ static const struct qmp_phy_init_tbl ipq8074_pcie_rx_tbl[] = {
 	QMP_PHY_INIT_CFG(QSERDES_RX_RX_EQU_ADAPTOR_CNTRL4, 0xdb),
 	QMP_PHY_INIT_CFG(QSERDES_RX_UCDR_SO_SATURATION_AND_ENABLE, 0x4b),
 	QMP_PHY_INIT_CFG(QSERDES_RX_UCDR_SO_GAIN, 0x4),
-	QMP_PHY_INIT_CFG(QSERDES_RX_UCDR_SO_GAIN_HALF, 0x4),
 };
 
 static const struct qmp_phy_init_tbl ipq8074_pcie_pcs_tbl[] = {
@@ -665,6 +666,9 @@ static const struct qmp_phy_cfg msm8996_usb3phy_cfg = {
 	.mask_pcs_ready		= PHYSTATUS,
 };
 
+static const char * const ipq8074_pciephy_clk_l[] = {
+	"aux", "cfg_ahb",
+};
 /* list of resets */
 static const char * const ipq8074_pciephy_reset_l[] = {
 	"phy", "common",
@@ -682,8 +686,8 @@ static const struct qmp_phy_cfg ipq8074_pciephy_cfg = {
 	.rx_tbl_num		= ARRAY_SIZE(ipq8074_pcie_rx_tbl),
 	.pcs_tbl		= ipq8074_pcie_pcs_tbl,
 	.pcs_tbl_num		= ARRAY_SIZE(ipq8074_pcie_pcs_tbl),
-	.clk_list		= NULL,
-	.num_clks		= 0,
+	.clk_list		= ipq8074_pciephy_clk_l,
+	.num_clks		= ARRAY_SIZE(ipq8074_pciephy_clk_l),
 	.reset_list		= ipq8074_pciephy_reset_l,
 	.num_resets		= ARRAY_SIZE(ipq8074_pciephy_reset_l),
 	.vreg_list		= NULL,

@@ -142,6 +142,10 @@ static void s5pv210_phy_pwr(struct samsung_usb2_phy_instance *inst, bool on)
 		udelay(10);
 		rst &= ~rstbits;
 		writel(rst, drv->reg_phy + S5PV210_UPHYRST);
+		/* The following delay is necessary for the reset sequence to be
+		 * completed
+		 */
+		udelay(80);
 	} else {
 		pwr = readl(drv->reg_phy + S5PV210_UPHYPWR);
 		pwr |= phypwr;

@@ -109,7 +109,7 @@ static void max17040_get_vcell(struct i2c_client *client)
 
 	vcell = max17040_read_reg(client, MAX17040_VCELL);
 
-	chip->vcell = vcell;
+	chip->vcell = (vcell >> 4) * 1250;
 }
 
 static void max17040_get_soc(struct i2c_client *client)

@@ -2464,13 +2464,6 @@ static struct mport_dev *mport_cdev_add(struct rio_mport *mport)
 	cdev_init(&md->cdev, &mport_fops);
 	md->cdev.owner = THIS_MODULE;
 
-	ret = cdev_device_add(&md->cdev, &md->dev);
-	if (ret) {
-		rmcd_error("Failed to register mport %d (err=%d)",
-			   mport->id, ret);
-		goto err_cdev;
-	}
-
 	INIT_LIST_HEAD(&md->doorbells);
 	spin_lock_init(&md->db_lock);
 	INIT_LIST_HEAD(&md->portwrites);
@@ -2490,6 +2483,13 @@ static struct mport_dev *mport_cdev_add(struct rio_mport *mport)
 #else
 	md->properties.transfer_mode |= RIO_TRANSFER_MODE_TRANSFER;
 #endif
+
+	ret = cdev_device_add(&md->cdev, &md->dev);
+	if (ret) {
+		rmcd_error("Failed to register mport %d (err=%d)",
+			   mport->id, ret);
+		goto err_cdev;
+	}
 	ret = rio_query_mport(mport, &attr);
 	if (!ret) {
 		md->properties.flags = attr.flags;

@@ -620,6 +620,10 @@ static int ds1374_probe(struct i2c_client *client,
 	if (!ds1374)
 		return -ENOMEM;
 
+	ds1374->rtc = devm_rtc_allocate_device(&client->dev);
+	if (IS_ERR(ds1374->rtc))
+		return PTR_ERR(ds1374->rtc);
+
 	ds1374->client = client;
 	i2c_set_clientdata(client, ds1374);
 
@@ -641,12 +645,11 @@ static int ds1374_probe(struct i2c_client *client,
 		device_set_wakeup_capable(&client->dev, 1);
 	}
 
-	ds1374->rtc = devm_rtc_device_register(&client->dev, client->name,
-					       &ds1374_rtc_ops, THIS_MODULE);
-	if (IS_ERR(ds1374->rtc)) {
-		dev_err(&client->dev, "unable to register the class device\n");
-		return PTR_ERR(ds1374->rtc);
-	}
+	ds1374->rtc->ops = &ds1374_rtc_ops;
+
+	ret = rtc_register_device(ds1374->rtc);
+	if (ret)
+		return ret;
 
 #ifdef CONFIG_RTC_DRV_DS1374_WDT
 	save_client = client;

@@ -39,6 +39,7 @@
 MODULE_LICENSE("GPL");
 
 static struct dasd_discipline dasd_fba_discipline;
+static void *dasd_fba_zero_page;
 
 struct dasd_fba_private {
 	struct dasd_fba_characteristics rdc_data;
@@ -269,7 +270,7 @@ static void ccw_write_zero(struct ccw1 *ccw, int count)
 	ccw->cmd_code = DASD_FBA_CCW_WRITE;
 	ccw->flags |= CCW_FLAG_SLI;
 	ccw->count = count;
-	ccw->cda = (__u32) (addr_t) page_to_phys(ZERO_PAGE(0));
+	ccw->cda = (__u32) (addr_t) dasd_fba_zero_page;
 }
 
 /*
@@ -808,6 +809,11 @@ dasd_fba_init(void)
 	int ret;
 
 	ASCEBC(dasd_fba_discipline.ebcname, 4);
+
+	dasd_fba_zero_page = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA);
+	if (!dasd_fba_zero_page)
+		return -ENOMEM;
+
 	ret = ccw_driver_register(&dasd_fba_driver);
 	if (!ret)
 		wait_for_device_probe();
@@ -819,6 +825,7 @@ static void __exit
 dasd_fba_cleanup(void)
 {
 	ccw_driver_unregister(&dasd_fba_driver);
+	free_page((unsigned long)dasd_fba_zero_page);
 }
 
 module_init(dasd_fba_init);

@@ -2322,13 +2322,13 @@ static int aac_read(struct scsi_cmnd * scsicmd)
 		scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 |
 			SAM_STAT_CHECK_CONDITION;
 		set_sense(&dev->fsa_dev[cid].sense_data,
-			  HARDWARE_ERROR, SENCODE_INTERNAL_TARGET_FAILURE,
+			  ILLEGAL_REQUEST, SENCODE_LBA_OUT_OF_RANGE,
 			  ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0);
 		memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
 		       min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data),
 			     SCSI_SENSE_BUFFERSIZE));
 		scsicmd->scsi_done(scsicmd);
-		return 1;
+		return 0;
 	}
 
 	dprintk((KERN_DEBUG "aac_read[cpu %d]: lba = %llu, t = %ld.\n",
@@ -2414,13 +2414,13 @@ static int aac_write(struct scsi_cmnd * scsicmd)
 		scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 |
 			SAM_STAT_CHECK_CONDITION;
 		set_sense(&dev->fsa_dev[cid].sense_data,
-			  HARDWARE_ERROR, SENCODE_INTERNAL_TARGET_FAILURE,
+			  ILLEGAL_REQUEST, SENCODE_LBA_OUT_OF_RANGE,
 			  ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0);
 		memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
 		       min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data),
 			     SCSI_SENSE_BUFFERSIZE));
 		scsicmd->scsi_done(scsicmd);
-		return 1;
+		return 0;
 	}
 
 	dprintk((KERN_DEBUG "aac_write[cpu %d]: lba = %llu, t = %ld.\n",
@@ -771,7 +771,7 @@ int aac_hba_send(u8 command, struct fib *fibptr, fib_callback callback,
 		hbacmd->request_id =
 			cpu_to_le32((((u32)(fibptr - dev->fibs)) << 2) + 1);
 		fibptr->flags |= FIB_CONTEXT_FLAG_SCSI_CMD;
-	} else if (command != HBA_IU_TYPE_SCSI_TM_REQ)
+	} else
 		return -EINVAL;

@@ -736,7 +736,11 @@ static int aac_eh_abort(struct scsi_cmnd* cmd)
 		status = aac_hba_send(HBA_IU_TYPE_SCSI_TM_REQ, fib,
 				      (fib_callback) aac_hba_callback,
 				      (void *) cmd);
+		if (status != -EINPROGRESS) {
+			aac_fib_complete(fib);
+			aac_fib_free(fib);
+			return ret;
+		}
 		/* Wait up to 15 secs for completion */
 		for (count = 0; count < 15; ++count) {
 			if (cmd->SCp.sent_command) {
@@ -915,11 +919,11 @@ static int aac_eh_dev_reset(struct scsi_cmnd *cmd)
 
 	info = &aac->hba_map[bus][cid];
 
-	if (info->devtype != AAC_DEVTYPE_NATIVE_RAW &&
-	    info->reset_state > 0)
+	if (!(info->devtype == AAC_DEVTYPE_NATIVE_RAW &&
+	      !(info->reset_state > 0)))
 		return FAILED;
 
-	pr_err("%s: Host adapter reset request. SCSI hang ?\n",
+	pr_err("%s: Host device reset request. SCSI hang ?\n",
 	       AAC_DRIVERNAME);
 
 	fib = aac_fib_alloc(aac);
@@ -934,7 +938,12 @@ static int aac_eh_dev_reset(struct scsi_cmnd *cmd)
 	status = aac_hba_send(command, fib,
 			      (fib_callback) aac_tmf_callback,
 			      (void *) info);
+	if (status != -EINPROGRESS) {
+		info->reset_state = 0;
+		aac_fib_complete(fib);
+		aac_fib_free(fib);
+		return ret;
+	}
 
 	/* Wait up to 15 seconds for completion */
 	for (count = 0; count < 15; ++count) {
 		if (info->reset_state == 0) {
@@ -973,11 +982,11 @@ static int aac_eh_target_reset(struct scsi_cmnd *cmd)
 
 	info = &aac->hba_map[bus][cid];
 
-	if (info->devtype != AAC_DEVTYPE_NATIVE_RAW &&
-	    info->reset_state > 0)
+	if (!(info->devtype == AAC_DEVTYPE_NATIVE_RAW &&
+	      !(info->reset_state > 0)))
 		return FAILED;
 
-	pr_err("%s: Host adapter reset request. SCSI hang ?\n",
+	pr_err("%s: Host target reset request. SCSI hang ?\n",
 	       AAC_DRIVERNAME);
 
 	fib = aac_fib_alloc(aac);
@@ -994,6 +1003,13 @@ static int aac_eh_target_reset(struct scsi_cmnd *cmd)
 			      (fib_callback) aac_tmf_callback,
 			      (void *) info);
 
+	if (status != -EINPROGRESS) {
+		info->reset_state = 0;
+		aac_fib_complete(fib);
+		aac_fib_free(fib);
+		return ret;
+	}
+
 	/* Wait up to 15 seconds for completion */
 	for (count = 0; count < 15; ++count) {
 		if (info->reset_state <= 0) {
@@ -1046,7 +1062,7 @@ static int aac_eh_bus_reset(struct scsi_cmnd* cmd)
 		}
 	}
 
-	pr_err("%s: Host adapter reset request. SCSI hang ?\n", AAC_DRIVERNAME);
+	pr_err("%s: Host bus reset request. SCSI hang ?\n", AAC_DRIVERNAME);
 
 	/*
 	 * Check the health of the controller

@@ -1034,7 +1034,8 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,
 	atomic64_inc(&fnic_stats->io_stats.io_completions);
 
 
-	io_duration_time = jiffies_to_msecs(jiffies) - jiffies_to_msecs(io_req->start_time);
+	io_duration_time = jiffies_to_msecs(jiffies) -
+						jiffies_to_msecs(start_time);
 
 	if(io_duration_time <= 10)
 		atomic64_inc(&fnic_stats->io_stats.io_btw_0_to_10_msec);

@@ -145,8 +145,10 @@ struct fc_rport_priv *fc_rport_create(struct fc_lport *lport, u32 port_id)
 	size_t rport_priv_size = sizeof(*rdata);
 
 	rdata = fc_rport_lookup(lport, port_id);
-	if (rdata)
+	if (rdata) {
+		kref_put(&rdata->kref, fc_rport_destroy);
 		return rdata;
+	}
 
 	if (lport->rport_priv_size > 0)
 		rport_priv_size = lport->rport_priv_size;
@@ -493,10 +495,11 @@ static void fc_rport_enter_delete(struct fc_rport_priv *rdata,
 
 	fc_rport_state_enter(rdata, RPORT_ST_DELETE);
 
-	kref_get(&rdata->kref);
-	if (rdata->event == RPORT_EV_NONE &&
-	    !queue_work(rport_event_queue, &rdata->event_work))
-		kref_put(&rdata->kref, fc_rport_destroy);
+	if (rdata->event == RPORT_EV_NONE) {
+		kref_get(&rdata->kref);
+		if (!queue_work(rport_event_queue, &rdata->event_work))
+			kref_put(&rdata->kref, fc_rport_destroy);
+	}
 
 	rdata->event = event;
 }

@@ -1714,8 +1714,8 @@ lpfc_fdmi_hba_attr_wwnn(struct lpfc_vport *vport, struct lpfc_fdmi_attr_def *ad)
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, sizeof(struct lpfc_name));
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	memcpy(&ae->un.AttrWWN, &vport->fc_sparam.nodeName,
 	       sizeof(struct lpfc_name));
@@ -1731,8 +1731,8 @@ lpfc_fdmi_hba_attr_manufacturer(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	/* This string MUST be consistent with other FC platforms
 	 * supported by Broadcom.
@@ -1756,8 +1756,8 @@ lpfc_fdmi_hba_attr_sn(struct lpfc_vport *vport, struct lpfc_fdmi_attr_def *ad)
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	strncpy(ae->un.AttrString, phba->SerialNumber,
 		sizeof(ae->un.AttrString));
@@ -1778,8 +1778,8 @@ lpfc_fdmi_hba_attr_model(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	strncpy(ae->un.AttrString, phba->ModelName,
 		sizeof(ae->un.AttrString));
@@ -1799,8 +1799,8 @@ lpfc_fdmi_hba_attr_description(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	strncpy(ae->un.AttrString, phba->ModelDesc,
 		sizeof(ae->un.AttrString));
@@ -1822,8 +1822,8 @@ lpfc_fdmi_hba_attr_hdw_ver(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t i, j, incr, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	/* Convert JEDEC ID to ascii for hardware version */
 	incr = vp->rev.biuRev;
@@ -1852,8 +1852,8 @@ lpfc_fdmi_hba_attr_drvr_ver(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	strncpy(ae->un.AttrString, lpfc_release_version,
 		sizeof(ae->un.AttrString));
@@ -1874,8 +1874,8 @@ lpfc_fdmi_hba_attr_rom_ver(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	if (phba->sli_rev == LPFC_SLI_REV4)
 		lpfc_decode_firmware_rev(phba, ae->un.AttrString, 1);
@@ -1899,8 +1899,8 @@ lpfc_fdmi_hba_attr_fmw_ver(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	lpfc_decode_firmware_rev(phba, ae->un.AttrString, 1);
 	len = strnlen(ae->un.AttrString,
@@ -1919,8 +1919,8 @@ lpfc_fdmi_hba_attr_os_ver(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	snprintf(ae->un.AttrString, sizeof(ae->un.AttrString), "%s %s %s",
 		 init_utsname()->sysname,
@@ -1942,7 +1942,7 @@ lpfc_fdmi_hba_attr_ct_len(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 
 	ae->un.AttrInt = cpu_to_be32(LPFC_MAX_CT_SIZE);
 	size = FOURBYTES + sizeof(uint32_t);
@@ -1958,8 +1958,8 @@ lpfc_fdmi_hba_attr_symbolic_name(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	len = lpfc_vport_symbolic_node_name(vport,
 				ae->un.AttrString, 256);
@@ -1977,7 +1977,7 @@ lpfc_fdmi_hba_attr_vendor_info(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 
 	/* Nothing is defined for this currently */
 	ae->un.AttrInt = cpu_to_be32(0);
@@ -1994,7 +1994,7 @@ lpfc_fdmi_hba_attr_num_ports(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 
 	/* Each driver instance corresponds to a single port */
 	ae->un.AttrInt = cpu_to_be32(1);
@@ -2011,8 +2011,8 @@ lpfc_fdmi_hba_attr_fabric_wwnn(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, sizeof(struct lpfc_name));
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	memcpy(&ae->un.AttrWWN, &vport->fabric_nodename,
 	       sizeof(struct lpfc_name));
@@ -2030,8 +2030,8 @@ lpfc_fdmi_hba_attr_bios_ver(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	lpfc_decode_firmware_rev(phba, ae->un.AttrString, 1);
 	len = strnlen(ae->un.AttrString,
@@ -2050,7 +2050,7 @@ lpfc_fdmi_hba_attr_bios_state(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 
 	/* Driver doesn't have access to this information */
 	ae->un.AttrInt = cpu_to_be32(0);
@@ -2050,7 +2050,7 @@ lpfc_fdmi_hba_attr_bios_state(struct lpfc_vport *vport,
|
||||
struct lpfc_fdmi_attr_entry *ae;
|
||||
uint32_t size;
|
||||
|
||||
ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
|
||||
ae = &ad->AttrValue;
|
||||
|
||||
/* Driver doesn't have access to this information */
|
||||
ae->un.AttrInt = cpu_to_be32(0);
|
||||
@@ -2067,8 +2067,8 @@ lpfc_fdmi_hba_attr_vendor_id(struct lpfc_vport *vport,
|
||||
struct lpfc_fdmi_attr_entry *ae;
|
||||
uint32_t len, size;
|
||||
|
||||
ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
|
||||
memset(ae, 0, 256);
|
||||
ae = &ad->AttrValue;
|
||||
memset(ae, 0, sizeof(*ae));
|
||||
|
||||
strncpy(ae->un.AttrString, "EMULEX",
|
||||
sizeof(ae->un.AttrString));
|
||||
@@ -2089,8 +2089,8 @@ lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport,
|
||||
struct lpfc_fdmi_attr_entry *ae;
|
||||
uint32_t size;
|
||||
|
||||
ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
|
||||
memset(ae, 0, 32);
|
||||
ae = &ad->AttrValue;
|
||||
memset(ae, 0, sizeof(*ae));
|
||||
|
||||
ae->un.AttrTypes[3] = 0x02; /* Type 0x1 - ELS */
|
||||
ae->un.AttrTypes[2] = 0x01; /* Type 0x8 - FCP */
|
||||
@@ -2111,7 +2111,7 @@ lpfc_fdmi_port_attr_support_speed(struct lpfc_vport *vport,
|
||||
struct lpfc_fdmi_attr_entry *ae;
|
||||
uint32_t size;
|
||||
|
||||
ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
|
||||
ae = &ad->AttrValue;
|
||||
|
||||
ae->un.AttrInt = 0;
|
||||
if (!(phba->hba_flag & HBA_FCOE_MODE)) {
|
||||
@@ -2161,7 +2161,7 @@ lpfc_fdmi_port_attr_speed(struct lpfc_vport *vport,
|
||||
struct lpfc_fdmi_attr_entry *ae;
|
||||
uint32_t size;
|
||||
|
||||
ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
|
||||
ae = &ad->AttrValue;
|
||||
|
||||
if (!(phba->hba_flag & HBA_FCOE_MODE)) {
|
||||
switch (phba->fc_linkspeed) {
|
||||
@@ -2225,7 +2225,7 @@ lpfc_fdmi_port_attr_max_frame(struct lpfc_vport *vport,
|
||||
struct lpfc_fdmi_attr_entry *ae;
|
||||
uint32_t size;
|
||||
|
||||
ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
|
||||
ae = &ad->AttrValue;
|
||||
|
||||
hsp = (struct serv_parm *)&vport->fc_sparam;
|
||||
ae->un.AttrInt = (((uint32_t) hsp->cmn.bbRcvSizeMsb) << 8) |
|
||||
@@ -2245,8 +2245,8 @@ lpfc_fdmi_port_attr_os_devname(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	snprintf(ae->un.AttrString, sizeof(ae->un.AttrString),
 		 "/sys/class/scsi_host/host%d", shost->host_no);
@@ -2266,8 +2266,8 @@ lpfc_fdmi_port_attr_host_name(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	snprintf(ae->un.AttrString, sizeof(ae->un.AttrString), "%s",
 		 init_utsname()->nodename);
@@ -2287,8 +2287,8 @@ lpfc_fdmi_port_attr_wwnn(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, sizeof(struct lpfc_name));
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	memcpy(&ae->un.AttrWWN, &vport->fc_sparam.nodeName,
 	       sizeof(struct lpfc_name));
@@ -2305,8 +2305,8 @@ lpfc_fdmi_port_attr_wwpn(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, sizeof(struct lpfc_name));
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	memcpy(&ae->un.AttrWWN, &vport->fc_sparam.portName,
 	       sizeof(struct lpfc_name));
@@ -2323,8 +2323,8 @@ lpfc_fdmi_port_attr_symbolic_name(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	len = lpfc_vport_symbolic_port_name(vport, ae->un.AttrString, 256);
 	len += (len & 3) ? (4 - (len & 3)) : 4;
@@ -2342,7 +2342,7 @@ lpfc_fdmi_port_attr_port_type(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 	if (phba->fc_topology == LPFC_TOPOLOGY_LOOP)
 		ae->un.AttrInt = cpu_to_be32(LPFC_FDMI_PORTTYPE_NLPORT);
 	else
@@ -2360,7 +2360,7 @@ lpfc_fdmi_port_attr_class(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 	ae->un.AttrInt = cpu_to_be32(FC_COS_CLASS2 | FC_COS_CLASS3);
 	size = FOURBYTES + sizeof(uint32_t);
 	ad->AttrLen = cpu_to_be16(size);
@@ -2375,8 +2375,8 @@ lpfc_fdmi_port_attr_fabric_wwpn(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, sizeof(struct lpfc_name));
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	memcpy(&ae->un.AttrWWN, &vport->fabric_portname,
 	       sizeof(struct lpfc_name));
@@ -2393,8 +2393,8 @@ lpfc_fdmi_port_attr_active_fc4type(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 32);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	ae->un.AttrTypes[3] = 0x02; /* Type 0x1 - ELS */
 	ae->un.AttrTypes[2] = 0x01; /* Type 0x8 - FCP */
@@ -2414,7 +2414,7 @@ lpfc_fdmi_port_attr_port_state(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 	/* Link Up - operational */
 	ae->un.AttrInt = cpu_to_be32(LPFC_FDMI_PORTSTATE_ONLINE);
 	size = FOURBYTES + sizeof(uint32_t);
@@ -2430,7 +2430,7 @@ lpfc_fdmi_port_attr_num_disc(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 	vport->fdmi_num_disc = lpfc_find_map_node(vport);
 	ae->un.AttrInt = cpu_to_be32(vport->fdmi_num_disc);
 	size = FOURBYTES + sizeof(uint32_t);
@@ -2446,7 +2446,7 @@ lpfc_fdmi_port_attr_nportid(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 	ae->un.AttrInt = cpu_to_be32(vport->fc_myDID);
 	size = FOURBYTES + sizeof(uint32_t);
 	ad->AttrLen = cpu_to_be16(size);
@@ -2461,8 +2461,8 @@ lpfc_fdmi_smart_attr_service(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	strncpy(ae->un.AttrString, "Smart SAN Initiator",
 		sizeof(ae->un.AttrString));
@@ -2482,8 +2482,8 @@ lpfc_fdmi_smart_attr_guid(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	memcpy(&ae->un.AttrString, &vport->fc_sparam.nodeName,
 	       sizeof(struct lpfc_name));
@@ -2503,8 +2503,8 @@ lpfc_fdmi_smart_attr_version(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	strncpy(ae->un.AttrString, "Smart SAN Version 2.0",
 		sizeof(ae->un.AttrString));
@@ -2525,8 +2525,8 @@ lpfc_fdmi_smart_attr_model(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t len, size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
-	memset(ae, 0, 256);
+	ae = &ad->AttrValue;
+	memset(ae, 0, sizeof(*ae));
 
 	strncpy(ae->un.AttrString, phba->ModelName,
 		sizeof(ae->un.AttrString));
@@ -2545,7 +2545,7 @@ lpfc_fdmi_smart_attr_port_info(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 
 	/* SRIOV (type 3) is not supported */
 	if (vport->vpi)
@@ -2565,7 +2565,7 @@ lpfc_fdmi_smart_attr_qos(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 	ae->un.AttrInt = cpu_to_be32(0);
 	size = FOURBYTES + sizeof(uint32_t);
 	ad->AttrLen = cpu_to_be16(size);
@@ -2580,7 +2580,7 @@ lpfc_fdmi_smart_attr_security(struct lpfc_vport *vport,
 	struct lpfc_fdmi_attr_entry *ae;
 	uint32_t size;
 
-	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	ae = &ad->AttrValue;
 	ae->un.AttrInt = cpu_to_be32(1);
 	size = FOURBYTES + sizeof(uint32_t);
 	ad->AttrLen = cpu_to_be16(size);
@@ -2728,7 +2728,8 @@ lpfc_fdmi_cmd(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		/* Registered Port List */
 		/* One entry (port) per adapter */
 		rh->rpl.EntryCnt = cpu_to_be32(1);
-		memcpy(&rh->rpl.pe, &phba->pport->fc_sparam.portName,
+		memcpy(&rh->rpl.pe.PortName,
+		       &phba->pport->fc_sparam.portName,
 		       sizeof(struct lpfc_name));
 
 		/* point to the HBA attribute block */
@@ -1326,25 +1326,8 @@ struct fc_rdp_res_frame {
 /* lpfc_sli_ct_request defines the CT_IU preamble for FDMI commands */
 #define SLI_CT_FDMI_Subtypes     0x10	/* Management Service Subtype */
 
-/*
- * Registered Port List Format
- */
-struct lpfc_fdmi_reg_port_list {
-	uint32_t EntryCnt;
-	uint32_t pe;		/* Variable-length array */
-};
-
-
 /* Definitions for HBA / Port attribute entries */
 
-struct lpfc_fdmi_attr_def { /* Defined in TLV format */
-	/* Structure is in Big Endian format */
-	uint32_t AttrType:16;
-	uint32_t AttrLen:16;
-	uint32_t AttrValue;	/* Marks start of Value (ATTRIBUTE_ENTRY) */
-};
-
-
 /* Attribute Entry */
 struct lpfc_fdmi_attr_entry {
 	union {
@@ -1355,7 +1338,13 @@ struct lpfc_fdmi_attr_entry {
 	} un;
 };
 
-#define LPFC_FDMI_MAX_AE_SIZE	sizeof(struct lpfc_fdmi_attr_entry)
+struct lpfc_fdmi_attr_def { /* Defined in TLV format */
+	/* Structure is in Big Endian format */
+	uint32_t AttrType:16;
+	uint32_t AttrLen:16;
+	/* Marks start of Value (ATTRIBUTE_ENTRY) */
+	struct lpfc_fdmi_attr_entry AttrValue;
+} __packed;
 
 /*
  * HBA Attribute Block
@@ -1379,13 +1368,20 @@ struct lpfc_fdmi_hba_ident {
 	struct lpfc_name PortName;
 };
 
+/*
+ * Registered Port List Format
+ */
+struct lpfc_fdmi_reg_port_list {
+	uint32_t EntryCnt;
+	struct lpfc_fdmi_port_entry pe;
+} __packed;
+
 /*
  * Register HBA(RHBA)
  */
 struct lpfc_fdmi_reg_hba {
 	struct lpfc_fdmi_hba_ident hi;
-	struct lpfc_fdmi_reg_port_list rpl;	/* variable-length array */
-/* struct lpfc_fdmi_attr_block ab; */
+	struct lpfc_fdmi_reg_port_list rpl;
 };
 
 /*
@@ -17038,6 +17038,10 @@ lpfc_prep_seq(struct lpfc_vport *vport, struct hbq_dmabuf *seq_dmabuf)
 			list_add_tail(&iocbq->list, &first_iocbq->list);
 		}
 	}
+	/* Free the sequence's header buffer */
+	if (!first_iocbq)
+		lpfc_in_buf_free(vport->phba, &seq_dmabuf->dbuf);
+
 	return first_iocbq;
 }
 
@@ -1072,6 +1072,9 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
 			break;
 	}
 
+	if (!abrt_conn)
+		wait_delay += qedi->pf_params.iscsi_pf_params.two_msl_timer;
+
 	qedi_ep->state = EP_STATE_DISCONN_START;
 	ret = qedi_ops->destroy_conn(qedi->cdev, qedi_ep->handle, abrt_conn);
 	if (ret) {
@@ -685,7 +685,7 @@ int imx_media_capture_device_register(struct imx_media_video_dev *vdev)
 	/* setup default format */
 	fmt_src.pad = priv->src_sd_pad;
 	fmt_src.which = V4L2_SUBDEV_FORMAT_ACTIVE;
-	v4l2_subdev_call(sd, pad, get_fmt, NULL, &fmt_src);
+	ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &fmt_src);
 	if (ret) {
 		v4l2_err(sd, "failed to get src_sd format\n");
 		goto unreg;
@@ -1541,21 +1541,14 @@ static int amsdu_to_msdu(struct adapter *padapter, struct recv_frame *prframe)
 
 		/* Allocate new skb for releasing to upper layer */
 		sub_skb = dev_alloc_skb(nSubframe_Length + 12);
-		if (sub_skb) {
-			skb_reserve(sub_skb, 12);
-			skb_put_data(sub_skb, pdata, nSubframe_Length);
-		} else {
-			sub_skb = skb_clone(prframe->pkt, GFP_ATOMIC);
-			if (sub_skb) {
-				sub_skb->data = pdata;
-				sub_skb->len = nSubframe_Length;
-				skb_set_tail_pointer(sub_skb, nSubframe_Length);
-			} else {
-				DBG_88E("skb_clone() Fail!!! , nr_subframes=%d\n", nr_subframes);
-				break;
-			}
-		}
+		if (!sub_skb) {
+			DBG_88E("dev_alloc_skb() Fail!!! , nr_subframes=%d\n", nr_subframes);
+			break;
+		}
+
+		skb_reserve(sub_skb, 12);
+		skb_put_data(sub_skb, pdata, nSubframe_Length);
 
 		subframes[nr_subframes++] = sub_skb;
 
 		if (nr_subframes >= MAX_SUBFRAME_COUNT) {
@@ -1065,8 +1065,10 @@ int serial8250_register_8250_port(struct uart_8250_port *up)
 			serial8250_apply_quirks(uart);
 			ret = uart_add_one_port(&serial8250_reg,
 						&uart->port);
-			if (ret == 0)
-				ret = uart->port.line;
+			if (ret)
+				goto err;
+
+			ret = uart->port.line;
 		} else {
 			dev_info(uart->port.dev,
 				 "skipping CIR port at 0x%lx / 0x%llx, IRQ %d\n",
@@ -1091,6 +1093,11 @@ int serial8250_register_8250_port(struct uart_8250_port *up)
 	mutex_unlock(&serial_mutex);
 
 	return ret;
+
+err:
+	uart->port.dev = NULL;
+	mutex_unlock(&serial_mutex);
+	return ret;
 }
 EXPORT_SYMBOL(serial8250_register_8250_port);
 
@@ -773,7 +773,10 @@ static void __dma_rx_do_complete(struct uart_8250_port *p)
 	dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
 
 	count = dma->rx_size - state.residue;
-
+	if (count < dma->rx_size)
+		dmaengine_terminate_async(dma->rxchan);
+	if (!count)
+		goto unlock;
 	ret = tty_insert_flip_string(tty_port, dma->rx_buf, count);
 
 	p->port.icount.rx += ret;
@@ -833,7 +836,6 @@ static void omap_8250_rx_dma_flush(struct uart_8250_port *p)
 	spin_unlock_irqrestore(&priv->rx_dma_lock, flags);
 
 	__dma_rx_do_complete(p);
-	dmaengine_terminate_all(dma->rxchan);
 }
 
 static int omap_8250_rx_dma(struct uart_8250_port *p)
@@ -1216,11 +1218,11 @@ static int omap8250_probe(struct platform_device *pdev)
 	spin_lock_init(&priv->rx_dma_lock);
 
 	device_init_wakeup(&pdev->dev, true);
+	pm_runtime_enable(&pdev->dev);
 	pm_runtime_use_autosuspend(&pdev->dev);
 	pm_runtime_set_autosuspend_delay(&pdev->dev, -1);
 
 	pm_runtime_irq_safe(&pdev->dev);
-	pm_runtime_enable(&pdev->dev);
 
 	pm_runtime_get_sync(&pdev->dev);
 
@@ -1865,6 +1865,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
 	unsigned char status;
 	unsigned long flags;
 	struct uart_8250_port *up = up_to_u8250p(port);
+	bool skip_rx = false;
 
 	if (iir & UART_IIR_NO_INT)
 		return 0;
@@ -1873,7 +1874,20 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
 
 	status = serial_port_in(port, UART_LSR);
 
-	if (status & (UART_LSR_DR | UART_LSR_BI)) {
+	/*
+	 * If port is stopped and there are no error conditions in the
+	 * FIFO, then don't drain the FIFO, as this may lead to TTY buffer
+	 * overflow. Not servicing, RX FIFO would trigger auto HW flow
+	 * control when FIFO occupancy reaches preset threshold, thus
+	 * halting RX. This only works when auto HW flow control is
+	 * available.
+	 */
+	if (!(status & (UART_LSR_FIFOE | UART_LSR_BRK_ERROR_BITS)) &&
+	    (port->status & (UPSTAT_AUTOCTS | UPSTAT_AUTORTS)) &&
+	    !(port->read_status_mask & UART_LSR_DR))
+		skip_rx = true;
+
+	if (status & (UART_LSR_DR | UART_LSR_BI) && !skip_rx) {
 		if (!up->dma || handle_rx_dma(up, iir))
 			status = serial8250_rx_chars(up, status);
 	}
@@ -1165,14 +1165,14 @@ static unsigned int s3c24xx_serial_getclk(struct s3c24xx_uart_port *ourport,
 	struct s3c24xx_uart_info *info = ourport->info;
 	struct clk *clk;
 	unsigned long rate;
-	unsigned int cnt, baud, quot, clk_sel, best_quot = 0;
+	unsigned int cnt, baud, quot, best_quot = 0;
 	char clkname[MAX_CLK_NAME_LENGTH];
 	int calc_deviation, deviation = (1 << 30) - 1;
 
-	clk_sel = (ourport->cfg->clk_sel) ? ourport->cfg->clk_sel :
-			ourport->info->def_clk_sel;
 	for (cnt = 0; cnt < info->num_clks; cnt++) {
-		if (!(clk_sel & (1 << cnt)))
+		/* Keep selected clock if provided */
+		if (ourport->cfg->clk_sel &&
+		    !(ourport->cfg->clk_sel & (1 << cnt)))
 			continue;
 
 		sprintf(clkname, "clk_uart_baud%d", cnt);