Merge 4.9.214 into android-4.9-q
Changes in 4.9.214
media: iguanair: fix endpoint sanity check
x86/cpu: Update cached HLE state on write to TSX_CTRL_CPUID_CLEAR
sparc32: fix struct ipc64_perm type definition
ASoC: qcom: Fix of-node refcount unbalance to link->codec_of_node
cls_rsvp: fix rsvp_policy
gtp: use __GFP_NOWARN to avoid memalloc warning
net: hsr: fix possible NULL deref in hsr_handle_frame()
net_sched: fix an OOB access in cls_tcindex
rxrpc: Fix insufficient receive notification generation
rxrpc: Fix NULL pointer deref due to call->conn being cleared on disconnect
tcp: clear tp->total_retrans in tcp_disconnect()
tcp: clear tp->delivered in tcp_disconnect()
tcp: clear tp->data_segs{in|out} in tcp_disconnect()
tcp: clear tp->segs_{in|out} in tcp_disconnect()
media: uvcvideo: Avoid cyclic entity chains due to malformed USB descriptors
mfd: dln2: More sanity checking for endpoints
brcmfmac: Fix memory leak in brcmf_usbdev_qinit
usb: gadget: legacy: set max_speed to super-speed
usb: gadget: f_ncm: Use atomic_t to track in-flight request
usb: gadget: f_ecm: Use atomic_t to track in-flight request
ALSA: dummy: Fix PCM format loop in proc output
media/v4l2-core: set pages dirty upon releasing DMA buffers
media: v4l2-rect.h: fix v4l2_rect_map_inside() top/left adjustments
lib/test_kasan.c: fix memory leak in kmalloc_oob_krealloc_more()
powerpc/pseries: Advance pfn if section is not present in lmb_is_removable()
mmc: spi: Toggle SPI polarity, do not hardcode it
PCI: keystone: Fix link training retries initiation
ubifs: Change gfp flags in page allocation for bulk read
ubifs: Fix deadlock in concurrent bulk-read and writepage
crypto: api - Check spawn->alg under lock in crypto_drop_spawn
scsi: qla2xxx: Fix mtcp dump collection failure
power: supply: ltc2941-battery-gauge: fix use-after-free
of: Add OF_DMA_DEFAULT_COHERENT & select it on powerpc
dm space map common: fix to ensure new block isn't already in use
crypto: pcrypt - Do not clear MAY_SLEEP flag in original request
crypto: atmel-aes - Fix counter overflow in CTR mode
crypto: api - Fix race condition in crypto_spawn_alg
crypto: picoxcell - adjust the position of tasklet_init and fix missed tasklet_kill
btrfs: set trans->dirty in btrfs_commit_transaction
ARM: tegra: Enable PLLP bypass during Tegra124 LP1
mwifiex: fix unbalanced locking in mwifiex_process_country_ie()
sunrpc: expiry_time should be seconds not timeval
KVM: x86: Refactor prefix decoding to prevent Spectre-v1/L1TF attacks
KVM: x86: Protect DR-based index computations from Spectre-v1/L1TF attacks
KVM: x86: Protect kvm_lapic_reg_write() from Spectre-v1/L1TF attacks
KVM: x86: Protect kvm_hv_msr_[get|set]_crash_data() from Spectre-v1/L1TF attacks
KVM: x86: Protect ioapic_write_indirect() from Spectre-v1/L1TF attacks
KVM: x86: Protect MSR-based index computations in pmu.h from Spectre-v1/L1TF attacks
KVM: x86: Protect ioapic_read_indirect() from Spectre-v1/L1TF attacks
KVM: x86: Protect MSR-based index computations from Spectre-v1/L1TF attacks in x86.c
KVM: x86: Protect x86_decode_insn from Spectre-v1/L1TF attacks
KVM: x86: Protect MSR-based index computations in fixed_msr_to_seg_unit() from Spectre-v1/L1TF attacks
KVM: PPC: Book3S HV: Uninit vCPU if vcore creation fails
KVM: PPC: Book3S PR: Free shared page if mmu initialization fails
KVM: x86: Free wbinvd_dirty_mask if vCPU creation fails
clk: tegra: Mark fuse clock as critical
scsi: qla2xxx: Fix the endianness of the qla82xx_get_fw_size() return type
scsi: csiostor: Adjust indentation in csio_device_reset
scsi: qla4xxx: Adjust indentation in qla4xxx_mem_free
ext2: Adjust indentation in ext2_fill_super
powerpc/44x: Adjust indentation in ibm4xx_denali_fixup_memsize
NFC: pn544: Adjust indentation in pn544_hci_check_presence
ppp: Adjust indentation into ppp_async_input
net: smc911x: Adjust indentation in smc911x_phy_configure
net: tulip: Adjust indentation in {dmfe, uli526x}_init_module
IB/mlx5: Fix outstanding_pi index for GSI qps
nfsd: fix delay timer on 32-bit architectures
nfsd: fix jiffies/time_t mixup in LRU list
ubi: fastmap: Fix inverted logic in seen selfcheck
ubi: Fix an error pointer dereference in error handling code
mfd: da9062: Fix watchdog compatible string
mfd: rn5t618: Mark ADC control register volatile
net: systemport: Avoid RBUF stuck in Wake-on-LAN mode
bonding/alb: properly access headers in bond_alb_xmit()
NFS: switch back to ->iterate()
NFS: Fix memory leaks and corruption in readdir
NFS: Fix bool initialization/comparison
NFS: Directory page cache pages need to be locked when read
ext4: fix deadlock allocating crypto bounce page from mempool
Btrfs: fix assertion failure on fsync with NO_HOLES enabled
btrfs: use bool argument in free_root_pointers()
btrfs: remove trivial locking wrappers of tree mod log
Btrfs: fix race between adding and putting tree mod seq elements and nodes
drm: atmel-hlcdc: enable clock before configuring timing engine
KVM: x86: Protect pmu_intel.c from Spectre-v1/L1TF attacks
btrfs: flush write bio if we loop in extent_write_cache_pages
KVM: x86/mmu: Apply max PA check for MMIO sptes to 32-bit KVM
KVM: VMX: Add non-canonical check on writes to RTIT address MSRs
KVM: nVMX: vmread should not set rflags to specify success in case of #PF
cifs: fail i/o on soft mounts if sessionsetup errors out
clocksource: Prevent double add_timer_on() for watchdog_timer
perf/core: Fix mlock accounting in perf_mmap()
rxrpc: Fix service call disconnection
ASoC: pcm: update FE/BE trigger order based on the command
RDMA/netlink: Do not always generate an ACK for some netlink operations
scsi: ufs: Fix ufshcd_probe_hba() return value in case ufshcd_scsi_add_wlus() fails
PCI: Don't disable bridge BARs when assigning bus resources
nfs: NFS_SWAP should depend on SWAP
NFSv4: try lease recovery on NFS4ERR_EXPIRED
rtc: hym8563: Return -EINVAL if the time is known to be invalid
rtc: cmos: Stop using shared IRQ
ARC: [plat-axs10x]: Add missing multicast filter number to GMAC node
ARM: dts: at91: sama5d3: fix maximum peripheral clock rates
ARM: dts: at91: sama5d3: define clock rate range for tcb1
tools/power/acpi: fix compilation error
powerpc/pseries: Allow not having ibm,hypertas-functions::hcall-multi-tce for DDW
pinctrl: sh-pfc: r8a7778: Fix duplicate SDSELF_B and SD1_CLK_B
scsi: megaraid_sas: Do not initiate OCR if controller is not in ready state
dm: fix potential for q->make_request_fn NULL pointer
mwifiex: Fix possible buffer overflows in mwifiex_ret_wmm_get_status()
mwifiex: Fix possible buffer overflows in mwifiex_cmd_append_vsie_tlv()
libertas: don't exit from lbs_ibss_join_existing() with RCU read lock held
libertas: make lbs_ibss_join_existing() return error code on rates overflow
Linux 4.9.214
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ieac421e286126a690fd2c206ea7b4dde45e4a475
@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 9
-SUBLEVEL = 213
+SUBLEVEL = 214
 EXTRAVERSION =
 NAME = Roaring Lionus
@@ -63,6 +63,7 @@
 			interrupt-names = "macirq";
 			phy-mode = "rgmii";
 			snps,pbl = < 32 >;
+			snps,multicast-filter-bins = <256>;
 			clocks = <&apbclk>;
 			clock-names = "stmmaceth";
 			max-speed = <100>;
@@ -1109,49 +1109,49 @@
 			usart0_clk: usart0_clk {
 				#clock-cells = <0>;
 				reg = <12>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};

 			usart1_clk: usart1_clk {
 				#clock-cells = <0>;
 				reg = <13>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};

 			usart2_clk: usart2_clk {
 				#clock-cells = <0>;
 				reg = <14>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};

 			usart3_clk: usart3_clk {
 				#clock-cells = <0>;
 				reg = <15>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};

 			uart0_clk: uart0_clk {
 				#clock-cells = <0>;
 				reg = <16>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};

 			twi0_clk: twi0_clk {
 				reg = <18>;
 				#clock-cells = <0>;
-				atmel,clk-output-range = <0 16625000>;
+				atmel,clk-output-range = <0 41500000>;
 			};

 			twi1_clk: twi1_clk {
 				#clock-cells = <0>;
 				reg = <19>;
-				atmel,clk-output-range = <0 16625000>;
+				atmel,clk-output-range = <0 41500000>;
 			};

 			twi2_clk: twi2_clk {
 				#clock-cells = <0>;
 				reg = <20>;
-				atmel,clk-output-range = <0 16625000>;
+				atmel,clk-output-range = <0 41500000>;
 			};

 			mci0_clk: mci0_clk {
@@ -1167,19 +1167,19 @@
 			spi0_clk: spi0_clk {
 				#clock-cells = <0>;
 				reg = <24>;
-				atmel,clk-output-range = <0 133000000>;
+				atmel,clk-output-range = <0 166000000>;
 			};

 			spi1_clk: spi1_clk {
 				#clock-cells = <0>;
 				reg = <25>;
-				atmel,clk-output-range = <0 133000000>;
+				atmel,clk-output-range = <0 166000000>;
 			};

 			tcb0_clk: tcb0_clk {
 				#clock-cells = <0>;
 				reg = <26>;
-				atmel,clk-output-range = <0 133000000>;
+				atmel,clk-output-range = <0 166000000>;
 			};

 			pwm_clk: pwm_clk {
@@ -1190,7 +1190,7 @@
 			adc_clk: adc_clk {
 				#clock-cells = <0>;
 				reg = <29>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};

 			dma0_clk: dma0_clk {
@@ -1221,13 +1221,13 @@
 			ssc0_clk: ssc0_clk {
 				#clock-cells = <0>;
 				reg = <38>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};

 			ssc1_clk: ssc1_clk {
 				#clock-cells = <0>;
 				reg = <39>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};

 			sha_clk: sha_clk {
@@ -37,13 +37,13 @@
 			can0_clk: can0_clk {
 				#clock-cells = <0>;
 				reg = <40>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};

 			can1_clk: can1_clk {
 				#clock-cells = <0>;
 				reg = <41>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};
 		};
 	};
@@ -23,6 +23,7 @@
 			tcb1_clk: tcb1_clk {
 				#clock-cells = <0>;
 				reg = <27>;
+				atmel,clk-output-range = <0 166000000>;
 			};
 		};
 	};
@@ -42,13 +42,13 @@
 			uart0_clk: uart0_clk {
 				#clock-cells = <0>;
 				reg = <16>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};

 			uart1_clk: uart1_clk {
 				#clock-cells = <0>;
 				reg = <17>;
-				atmel,clk-output-range = <0 66000000>;
+				atmel,clk-output-range = <0 83000000>;
 			};
 		};
 	};
@@ -382,6 +382,14 @@ _pll_m_c_x_done:
 	pll_locked r1, r0, CLK_RESET_PLLC_BASE
 	pll_locked r1, r0, CLK_RESET_PLLX_BASE

+	tegra_get_soc_id TEGRA_APB_MISC_BASE, r1
+	cmp	r1, #TEGRA30
+	beq	1f
+	ldr	r1, [r0, #CLK_RESET_PLLP_BASE]
+	bic	r1, r1, #(1<<31)	@ disable PllP bypass
+	str	r1, [r0, #CLK_RESET_PLLP_BASE]
+1:
+
 	mov32	r7, TEGRA_TMRUS_BASE
 	ldr	r1, [r7]
 	add	r1, r1, #LOCK_DELAY
@@ -641,7 +649,10 @@ tegra30_switch_cpu_to_clk32k:
 	str	r0, [r4, #PMC_PLLP_WB0_OVERRIDE]

 	/* disable PLLP, PLLA, PLLC and PLLX */
+	tegra_get_soc_id TEGRA_APB_MISC_BASE, r1
+	cmp	r1, #TEGRA30
 	ldr	r0, [r5, #CLK_RESET_PLLP_BASE]
+	orrne	r0, r0, #(1 << 31)	@ enable PllP bypass on fast cluster
 	bic	r0, r0, #(1 << 30)
 	str	r0, [r5, #CLK_RESET_PLLP_BASE]
 	ldr	r0, [r5, #CLK_RESET_PLLA_BASE]
@@ -85,6 +85,7 @@ config PPC
 	select BINFMT_ELF
 	select ARCH_HAS_ELF_RANDOMIZE
 	select OF
+	select OF_DMA_DEFAULT_COHERENT		if !NOT_COHERENT_CACHE
 	select OF_EARLY_FLATTREE
 	select OF_RESERVED_MEM
 	select HAVE_FTRACE_MCOUNT_RECORD
@@ -232,7 +232,7 @@ void ibm4xx_denali_fixup_memsize(void)
 		dpath = 8; /* 64 bits */

 	/* get address pins (rows) */
-       val = SDRAM0_READ(DDR0_42);
+	val = SDRAM0_READ(DDR0_42);

 	row = DDR_GET_VAL(val, DDR_APIN, DDR_APIN_SHIFT);
 	if (row > max_row)
@@ -1766,7 +1766,7 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_hv(struct kvm *kvm,
 	mutex_unlock(&kvm->lock);

 	if (!vcore)
-		goto free_vcpu;
+		goto uninit_vcpu;

 	spin_lock(&vcore->lock);
 	++vcore->num_threads;
@@ -1782,6 +1782,8 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_hv(struct kvm *kvm,

 	return vcpu;

+uninit_vcpu:
+	kvm_vcpu_uninit(vcpu);
 free_vcpu:
 	kmem_cache_free(kvm_vcpu_cache, vcpu);
 out:
@@ -1482,10 +1482,12 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_pr(struct kvm *kvm,

 	err = kvmppc_mmu_init(vcpu);
 	if (err < 0)
-		goto uninit_vcpu;
+		goto free_shared_page;

 	return vcpu;

+free_shared_page:
+	free_page((unsigned long)vcpu->arch.shared);
 uninit_vcpu:
 	kvm_vcpu_uninit(vcpu);
 free_shadow_vcpu:
@@ -398,8 +398,10 @@ static bool lmb_is_removable(struct of_drconf_cell *lmb)

 	for (i = 0; i < scns_per_block; i++) {
 		pfn = PFN_DOWN(phys_addr);
-		if (!pfn_present(pfn))
+		if (!pfn_present(pfn)) {
+			phys_addr += MIN_MEMORY_BLOCK_SIZE;
 			continue;
+		}

 		rc &= is_mem_section_removable(pfn, PAGES_PER_SECTION);
 		phys_addr += MIN_MEMORY_BLOCK_SIZE;
@@ -167,10 +167,10 @@ static unsigned long tce_get_pseries(struct iommu_table *tbl, long index)
 	return be64_to_cpu(*tcep);
 }

-static void tce_free_pSeriesLP(struct iommu_table*, long, long);
+static void tce_free_pSeriesLP(unsigned long liobn, long, long);
 static void tce_freemulti_pSeriesLP(struct iommu_table*, long, long);

-static int tce_build_pSeriesLP(struct iommu_table *tbl, long tcenum,
+static int tce_build_pSeriesLP(unsigned long liobn, long tcenum, long tceshift,
 			       long npages, unsigned long uaddr,
 			       enum dma_data_direction direction,
 			       unsigned long attrs)
@@ -181,25 +181,25 @@ static int tce_build_pSeriesLP(struct iommu_table *tbl, long tcenum,
 	int ret = 0;
 	long tcenum_start = tcenum, npages_start = npages;

-	rpn = __pa(uaddr) >> TCE_SHIFT;
+	rpn = __pa(uaddr) >> tceshift;
 	proto_tce = TCE_PCI_READ;
 	if (direction != DMA_TO_DEVICE)
 		proto_tce |= TCE_PCI_WRITE;

 	while (npages--) {
-		tce = proto_tce | (rpn & TCE_RPN_MASK) << TCE_RPN_SHIFT;
-		rc = plpar_tce_put((u64)tbl->it_index, (u64)tcenum << 12, tce);
+		tce = proto_tce | (rpn & TCE_RPN_MASK) << tceshift;
+		rc = plpar_tce_put((u64)liobn, (u64)tcenum << tceshift, tce);

 		if (unlikely(rc == H_NOT_ENOUGH_RESOURCES)) {
 			ret = (int)rc;
-			tce_free_pSeriesLP(tbl, tcenum_start,
+			tce_free_pSeriesLP(liobn, tcenum_start,
 			                   (npages_start - (npages + 1)));
 			break;
 		}

 		if (rc && printk_ratelimit()) {
 			printk("tce_build_pSeriesLP: plpar_tce_put failed. rc=%lld\n", rc);
-			printk("\tindex   = 0x%llx\n", (u64)tbl->it_index);
+			printk("\tindex   = 0x%llx\n", (u64)liobn);
 			printk("\ttcenum  = 0x%llx\n", (u64)tcenum);
 			printk("\ttce val = 0x%llx\n", tce );
 			dump_stack();
@@ -228,7 +228,8 @@ static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum,
 	unsigned long flags;

 	if ((npages == 1) || !firmware_has_feature(FW_FEATURE_MULTITCE)) {
-		return tce_build_pSeriesLP(tbl, tcenum, npages, uaddr,
+		return tce_build_pSeriesLP(tbl->it_index, tcenum,
+		                           tbl->it_page_shift, npages, uaddr,
 		                           direction, attrs);
 	}
@@ -244,8 +245,9 @@ static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum,
 		/* If allocation fails, fall back to the loop implementation */
 		if (!tcep) {
 			local_irq_restore(flags);
-			return tce_build_pSeriesLP(tbl, tcenum, npages, uaddr,
-			                           direction, attrs);
+			return tce_build_pSeriesLP(tbl->it_index, tcenum,
+			                           tbl->it_page_shift,
+			                           npages, uaddr, direction, attrs);
 		}
 		__this_cpu_write(tce_page, tcep);
 	}
@@ -296,16 +298,16 @@ static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum,
 	return ret;
 }

-static void tce_free_pSeriesLP(struct iommu_table *tbl, long tcenum, long npages)
+static void tce_free_pSeriesLP(unsigned long liobn, long tcenum, long npages)
 {
 	u64 rc;

 	while (npages--) {
-		rc = plpar_tce_put((u64)tbl->it_index, (u64)tcenum << 12, 0);
+		rc = plpar_tce_put((u64)liobn, (u64)tcenum << 12, 0);

 		if (rc && printk_ratelimit()) {
 			printk("tce_free_pSeriesLP: plpar_tce_put failed. rc=%lld\n", rc);
-			printk("\tindex   = 0x%llx\n", (u64)tbl->it_index);
+			printk("\tindex   = 0x%llx\n", (u64)liobn);
 			printk("\ttcenum  = 0x%llx\n", (u64)tcenum);
 			dump_stack();
 		}
@@ -320,7 +322,7 @@ static void tce_freemulti_pSeriesLP(struct iommu_table *tbl, long tcenum, long n
 	u64 rc;

 	if (!firmware_has_feature(FW_FEATURE_MULTITCE))
-		return tce_free_pSeriesLP(tbl, tcenum, npages);
+		return tce_free_pSeriesLP(tbl->it_index, tcenum, npages);

 	rc = plpar_tce_stuff((u64)tbl->it_index, (u64)tcenum << 12, 0, npages);
@@ -435,6 +437,19 @@ static int tce_setrange_multi_pSeriesLP(unsigned long start_pfn,
 	u64 rc = 0;
 	long l, limit;

+	if (!firmware_has_feature(FW_FEATURE_MULTITCE)) {
+		unsigned long tceshift = be32_to_cpu(maprange->tce_shift);
+		unsigned long dmastart = (start_pfn << PAGE_SHIFT) +
+				be64_to_cpu(maprange->dma_base);
+		unsigned long tcenum = dmastart >> tceshift;
+		unsigned long npages = num_pfn << PAGE_SHIFT >> tceshift;
+		void *uaddr = __va(start_pfn << PAGE_SHIFT);
+
+		return tce_build_pSeriesLP(be32_to_cpu(maprange->liobn),
+				tcenum, tceshift, npages, (unsigned long) uaddr,
+				DMA_BIDIRECTIONAL, 0);
+	}
+
 	local_irq_disable();	/* to protect tcep and the page behind it */
 	tcep = __this_cpu_read(tce_page);
@@ -14,19 +14,19 @@

 struct ipc64_perm
 {
-	__kernel_key_t	key;
-	__kernel_uid_t	uid;
-	__kernel_gid_t	gid;
-	__kernel_uid_t	cuid;
-	__kernel_gid_t	cgid;
+	__kernel_key_t		key;
+	__kernel_uid32_t	uid;
+	__kernel_gid32_t	gid;
+	__kernel_uid32_t	cuid;
+	__kernel_gid32_t	cgid;
 #ifndef __arch64__
-	unsigned short	__pad0;
+	unsigned short		__pad0;
 #endif
-	__kernel_mode_t	mode;
-	unsigned short	__pad1;
-	unsigned short	seq;
-	unsigned long long __unused1;
-	unsigned long long __unused2;
+	__kernel_mode_t		mode;
+	unsigned short		__pad1;
+	unsigned short		seq;
+	unsigned long long	__unused1;
+	unsigned long long	__unused2;
 };

 #endif /* __SPARC_IPCBUF_H */
@@ -115,11 +115,12 @@ void __init tsx_init(void)
 		tsx_disable();

 		/*
-		 * tsx_disable() will change the state of the
-		 * RTM CPUID bit.  Clear it here since it is now
-		 * expected to be not set.
+		 * tsx_disable() will change the state of the RTM and HLE CPUID
+		 * bits. Clear them here since they are now expected to be not
+		 * set.
 		 */
 		setup_clear_cpu_cap(X86_FEATURE_RTM);
+		setup_clear_cpu_cap(X86_FEATURE_HLE);
 	} else if (tsx_ctrl_state == TSX_CTRL_ENABLE) {

 		/*
@@ -131,10 +132,10 @@ void __init tsx_init(void)
 		tsx_enable();

 		/*
-		 * tsx_enable() will change the state of the
-		 * RTM CPUID bit.  Force it here since it is now
-		 * expected to be set.
+		 * tsx_enable() will change the state of the RTM and HLE CPUID
+		 * bits. Force them here since they are now expected to be set.
 		 */
 		setup_force_cpu_cap(X86_FEATURE_RTM);
+		setup_force_cpu_cap(X86_FEATURE_HLE);
 	}
 }
@@ -21,6 +21,7 @@
  */

 #include <linux/kvm_host.h>
+#include <linux/nospec.h>
 #include "kvm_cache_regs.h"
 #include <asm/kvm_emulate.h>
 #include <linux/stringify.h>
@@ -5053,16 +5054,28 @@ int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len)
 			ctxt->ad_bytes = def_ad_bytes ^ 6;
 			break;
 		case 0x26:	/* ES override */
+			has_seg_override = true;
+			ctxt->seg_override = VCPU_SREG_ES;
+			break;
 		case 0x2e:	/* CS override */
+			has_seg_override = true;
+			ctxt->seg_override = VCPU_SREG_CS;
+			break;
 		case 0x36:	/* SS override */
+			has_seg_override = true;
+			ctxt->seg_override = VCPU_SREG_SS;
+			break;
 		case 0x3e:	/* DS override */
 			has_seg_override = true;
-			ctxt->seg_override = (ctxt->b >> 3) & 3;
+			ctxt->seg_override = VCPU_SREG_DS;
 			break;
 		case 0x64:	/* FS override */
+			has_seg_override = true;
+			ctxt->seg_override = VCPU_SREG_FS;
+			break;
 		case 0x65:	/* GS override */
 			has_seg_override = true;
-			ctxt->seg_override = ctxt->b & 7;
+			ctxt->seg_override = VCPU_SREG_GS;
 			break;
 		case 0x40 ... 0x4f: /* REX */
 			if (mode != X86EMUL_MODE_PROT64)
@@ -5146,10 +5159,15 @@ done_prefixes:
 			}
 			break;
 		case Escape:
-			if (ctxt->modrm > 0xbf)
-				opcode = opcode.u.esc->high[ctxt->modrm - 0xc0];
-			else
+			if (ctxt->modrm > 0xbf) {
+				size_t size = ARRAY_SIZE(opcode.u.esc->high);
+				u32 index = array_index_nospec(
+					ctxt->modrm - 0xc0, size);
+
+				opcode = opcode.u.esc->high[index];
+			} else {
 				opcode = opcode.u.esc->op[(ctxt->modrm >> 3) & 7];
+			}
 			break;
 		case InstrDual:
 			if ((ctxt->modrm >> 6) == 3)
@@ -28,6 +28,7 @@

 #include <linux/kvm_host.h>
 #include <linux/highmem.h>
+#include <linux/nospec.h>
 #include <asm/apicdef.h>
 #include <trace/events/kvm.h>
@@ -719,11 +720,12 @@ static int kvm_hv_msr_get_crash_data(struct kvm_vcpu *vcpu,
 			     u32 index, u64 *pdata)
 {
 	struct kvm_hv *hv = &vcpu->kvm->arch.hyperv;
+	size_t size = ARRAY_SIZE(hv->hv_crash_param);

-	if (WARN_ON_ONCE(index >= ARRAY_SIZE(hv->hv_crash_param)))
+	if (WARN_ON_ONCE(index >= size))
 		return -EINVAL;

-	*pdata = hv->hv_crash_param[index];
+	*pdata = hv->hv_crash_param[array_index_nospec(index, size)];
 	return 0;
 }
@@ -762,11 +764,12 @@ static int kvm_hv_msr_set_crash_data(struct kvm_vcpu *vcpu,
 			     u32 index, u64 data)
 {
 	struct kvm_hv *hv = &vcpu->kvm->arch.hyperv;
+	size_t size = ARRAY_SIZE(hv->hv_crash_param);

-	if (WARN_ON_ONCE(index >= ARRAY_SIZE(hv->hv_crash_param)))
+	if (WARN_ON_ONCE(index >= size))
 		return -EINVAL;

-	hv->hv_crash_param[index] = data;
+	hv->hv_crash_param[array_index_nospec(index, size)] = data;
 	return 0;
 }
@@ -36,6 +36,7 @@
 #include <linux/io.h>
 #include <linux/slab.h>
 #include <linux/export.h>
+#include <linux/nospec.h>
 #include <asm/processor.h>
 #include <asm/page.h>
 #include <asm/current.h>
@@ -73,13 +74,14 @@ static unsigned long ioapic_read_indirect(struct kvm_ioapic *ioapic,
 	default:
 		{
 		u32 redir_index = (ioapic->ioregsel - 0x10) >> 1;
-		u64 redir_content;
+		u64 redir_content = ~0ULL;

-		if (redir_index < IOAPIC_NUM_PINS)
-			redir_content =
-				ioapic->redirtbl[redir_index].bits;
-		else
-			redir_content = ~0ULL;
+		if (redir_index < IOAPIC_NUM_PINS) {
+			u32 index = array_index_nospec(
+				redir_index, IOAPIC_NUM_PINS);
+
+			redir_content = ioapic->redirtbl[index].bits;
+		}

 		result = (ioapic->ioregsel & 0x1) ?
 			(redir_content >> 32) & 0xffffffff :
@@ -299,6 +301,7 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
 		ioapic_debug("change redir index %x val %x\n", index, val);
 		if (index >= IOAPIC_NUM_PINS)
 			return;
+		index = array_index_nospec(index, IOAPIC_NUM_PINS);
 		e = &ioapic->redirtbl[index];
 		mask_before = e->fields.mask;
 		/* Preserve read-only fields */
@@ -28,6 +28,7 @@
 #include <linux/export.h>
 #include <linux/math64.h>
 #include <linux/slab.h>
+#include <linux/nospec.h>
 #include <asm/processor.h>
 #include <asm/msr.h>
 #include <asm/page.h>
@@ -1587,15 +1588,20 @@ int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
 	case APIC_LVTTHMR:
 	case APIC_LVTPC:
 	case APIC_LVT1:
-	case APIC_LVTERR:
+	case APIC_LVTERR: {
 		/* TODO: Check vector */
+		size_t size;
+		u32 index;
+
 		if (!kvm_apic_sw_enabled(apic))
 			val |= APIC_LVT_MASKED;
-
-		val &= apic_lvt_mask[(reg - APIC_LVTT) >> 4];
+		size = ARRAY_SIZE(apic_lvt_mask);
+		index = array_index_nospec(
+				(reg - APIC_LVTT) >> 4, size);
+		val &= apic_lvt_mask[index];
 		kvm_lapic_set_reg(apic, reg, val);
-
 		break;
+	}

 	case APIC_LVTT:
 		if (!kvm_apic_sw_enabled(apic))
@@ -17,6 +17,7 @@
  */

 #include <linux/kvm_host.h>
+#include <linux/nospec.h>
 #include <asm/mtrr.h>

 #include "cpuid.h"
@@ -202,11 +203,15 @@ static bool fixed_msr_to_seg_unit(u32 msr, int *seg, int *unit)
 		break;
 	case MSR_MTRRfix16K_80000 ... MSR_MTRRfix16K_A0000:
 		*seg = 1;
-		*unit = msr - MSR_MTRRfix16K_80000;
+		*unit = array_index_nospec(
+			msr - MSR_MTRRfix16K_80000,
+			MSR_MTRRfix16K_A0000 - MSR_MTRRfix16K_80000 + 1);
 		break;
 	case MSR_MTRRfix4K_C0000 ... MSR_MTRRfix4K_F8000:
 		*seg = 2;
-		*unit = msr - MSR_MTRRfix4K_C0000;
+		*unit = array_index_nospec(
+			msr - MSR_MTRRfix4K_C0000,
+			MSR_MTRRfix4K_F8000 - MSR_MTRRfix4K_C0000 + 1);
 		break;
 	default:
 		return false;
@@ -1,6 +1,8 @@
 #ifndef __KVM_X86_PMU_H
 #define __KVM_X86_PMU_H

+#include <linux/nospec.h>
+
 #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu)
 #define pmu_to_vcpu(pmu)  (container_of((pmu), struct kvm_vcpu, arch.pmu))
 #define pmc_to_pmu(pmc)   (&(pmc)->vcpu->arch.pmu)
@@ -80,8 +82,12 @@ static inline bool pmc_is_enabled(struct kvm_pmc *pmc)
 static inline struct kvm_pmc *get_gp_pmc(struct kvm_pmu *pmu, u32 msr,
 					 u32 base)
 {
-	if (msr >= base && msr < base + pmu->nr_arch_gp_counters)
-		return &pmu->gp_counters[msr - base];
+	if (msr >= base && msr < base + pmu->nr_arch_gp_counters) {
+		u32 index = array_index_nospec(msr - base,
+					       pmu->nr_arch_gp_counters);
+
+		return &pmu->gp_counters[index];
+	}

 	return NULL;
 }
@@ -91,8 +97,12 @@ static inline struct kvm_pmc *get_fixed_pmc(struct kvm_pmu *pmu, u32 msr)
 {
 	int base = MSR_CORE_PERF_FIXED_CTR0;

-	if (msr >= base && msr < base + pmu->nr_arch_fixed_counters)
-		return &pmu->fixed_counters[msr - base];
+	if (msr >= base && msr < base + pmu->nr_arch_fixed_counters) {
+		u32 index = array_index_nospec(msr - base,
+					       pmu->nr_arch_fixed_counters);
+
+		return &pmu->fixed_counters[index];
+	}

 	return NULL;
 }
@@ -87,10 +87,14 @@ static unsigned intel_find_arch_event(struct kvm_pmu *pmu,

 static unsigned intel_find_fixed_event(int idx)
 {
-	if (idx >= ARRAY_SIZE(fixed_pmc_events))
+	u32 event;
+	size_t size = ARRAY_SIZE(fixed_pmc_events);
+
+	if (idx >= size)
 		return PERF_COUNT_HW_MAX;

-	return intel_arch_events[fixed_pmc_events[idx]].event_type;
+	event = fixed_pmc_events[array_index_nospec(idx, size)];
+	return intel_arch_events[event].event_type;
 }

 /* check if a PMC is enabled by comparing it with globl_ctrl bits. */
@@ -131,15 +135,19 @@ static struct kvm_pmc *intel_msr_idx_to_pmc(struct kvm_vcpu *vcpu,
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	bool fixed = idx & (1u << 30);
 	struct kvm_pmc *counters;
+	unsigned int num_counters;

 	idx &= ~(3u << 30);
-	if (!fixed && idx >= pmu->nr_arch_gp_counters)
-		return NULL;
-	if (fixed && idx >= pmu->nr_arch_fixed_counters)
-		return NULL;
-	counters = fixed ? pmu->fixed_counters : pmu->gp_counters;
+	if (fixed) {
+		counters = pmu->fixed_counters;
+		num_counters = pmu->nr_arch_fixed_counters;
+	} else {
+		counters = pmu->gp_counters;
+		num_counters = pmu->nr_arch_gp_counters;
+	}
+	if (idx >= num_counters)
+		return NULL;

-	return &counters[idx];
+	return &counters[array_index_nospec(idx, num_counters)];
 }

 static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
@@ -7653,8 +7653,10 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 		/* _system ok, as nested_vmx_check_permission verified cpl=0 */
 		if (kvm_write_guest_virt_system(vcpu, gva, &field_value,
 						(is_long_mode(vcpu) ? 8 : 4),
-						&e))
+						&e)) {
 			kvm_inject_page_fault(vcpu, &e);
+			return 1;
+		}
 	}

 	nested_vmx_succeed(vcpu);
arch/x86/kvm/vmx/vmx.c: diff suppressed because it is too large (8033 lines)
@@ -54,6 +54,7 @@
 #include <linux/pvclock_gtod.h>
 #include <linux/kvm_irqfd.h>
 #include <linux/irqbypass.h>
+#include <linux/nospec.h>
 #include <trace/events/kvm.h>

 #include <asm/debugreg.h>
@@ -889,9 +890,11 @@ static u64 kvm_dr6_fixed(struct kvm_vcpu *vcpu)

 static int __kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val)
 {
+	size_t size = ARRAY_SIZE(vcpu->arch.db);
+
 	switch (dr) {
 	case 0 ... 3:
-		vcpu->arch.db[dr] = val;
+		vcpu->arch.db[array_index_nospec(dr, size)] = val;
 		if (!(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP))
 			vcpu->arch.eff_db[dr] = val;
 		break;
@@ -928,9 +931,11 @@ EXPORT_SYMBOL_GPL(kvm_set_dr);

 int kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val)
 {
+	size_t size = ARRAY_SIZE(vcpu->arch.db);
+
 	switch (dr) {
 	case 0 ... 3:
-		*val = vcpu->arch.db[dr];
+		*val = vcpu->arch.db[array_index_nospec(dr, size)];
 		break;
 	case 4:
 		/* fall through */
@@ -2125,7 +2130,10 @@ static int set_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 	default:
 		if (msr >= MSR_IA32_MC0_CTL &&
 		    msr < MSR_IA32_MCx_CTL(bank_num)) {
-			u32 offset = msr - MSR_IA32_MC0_CTL;
+			u32 offset = array_index_nospec(
+				msr - MSR_IA32_MC0_CTL,
+				MSR_IA32_MCx_CTL(bank_num) - MSR_IA32_MC0_CTL);
+
 			/* only 0 or all 1s can be written to IA32_MCi_CTL
 			 * some Linux kernels though clear bit 10 in bank 4 to
 			 * workaround a BIOS/GART TBL issue on AMD K8s, ignore
@@ -2493,7 +2501,10 @@ static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
 	default:
 		if (msr >= MSR_IA32_MC0_CTL &&
 		    msr < MSR_IA32_MCx_CTL(bank_num)) {
-			u32 offset = msr - MSR_IA32_MC0_CTL;
+			u32 offset = array_index_nospec(
+				msr - MSR_IA32_MC0_CTL,
+				MSR_IA32_MCx_CTL(bank_num) - MSR_IA32_MC0_CTL);
+
 			data = vcpu->arch.mce_banks[offset];
 			break;
 		}
@@ -6121,14 +6132,12 @@ static void kvm_set_mmio_spte_mask(void)
 	/* Set the present bit. */
 	mask |= 1ull;

-#ifdef CONFIG_X86_64
 	/*
 	 * If reserved bit is not supported, clear the present bit to disable
 	 * mmio page fault.
 	 */
 	if (maxphyaddr == 52)
 		mask &= ~1ull;
-#endif

 	kvm_mmu_set_mmio_spte_mask(mask);
 }
@@ -7798,7 +7807,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	kvm_mmu_unload(vcpu);
 	vcpu_put(vcpu);

-	kvm_x86_ops->vcpu_free(vcpu);
+	kvm_arch_vcpu_free(vcpu);
 }

 void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
@@ -652,11 +652,9 @@ EXPORT_SYMBOL_GPL(crypto_grab_spawn);

 void crypto_drop_spawn(struct crypto_spawn *spawn)
 {
-	if (!spawn->alg)
-		return;
-
 	down_write(&crypto_alg_sem);
-	list_del(&spawn->list);
+	if (spawn->alg)
+		list_del(&spawn->list);
 	up_write(&crypto_alg_sem);
 }
 EXPORT_SYMBOL_GPL(crypto_drop_spawn);
@@ -664,22 +662,16 @@ EXPORT_SYMBOL_GPL(crypto_drop_spawn);
 static struct crypto_alg *crypto_spawn_alg(struct crypto_spawn *spawn)
 {
 	struct crypto_alg *alg;
-	struct crypto_alg *alg2;

 	down_read(&crypto_alg_sem);
 	alg = spawn->alg;
-	alg2 = alg;
-	if (alg2)
-		alg2 = crypto_mod_get(alg2);
+	if (alg && !crypto_mod_get(alg)) {
+		alg->cra_flags |= CRYPTO_ALG_DYING;
+		alg = NULL;
+	}
 	up_read(&crypto_alg_sem);

-	if (!alg2) {
-		if (alg)
-			crypto_shoot_alg(alg);
-		return ERR_PTR(-EAGAIN);
-	}
-
-	return alg;
+	return alg ?: ERR_PTR(-EAGAIN);
 }

 struct crypto_tfm *crypto_spawn_tfm(struct crypto_spawn *spawn, u32 type,
@@ -356,13 +356,12 @@ static unsigned int crypto_ctxsize(struct crypto_alg *alg, u32 type, u32 mask)
 	return len;
 }

-void crypto_shoot_alg(struct crypto_alg *alg)
+static void crypto_shoot_alg(struct crypto_alg *alg)
 {
 	down_write(&crypto_alg_sem);
 	alg->cra_flags |= CRYPTO_ALG_DYING;
 	up_write(&crypto_alg_sem);
 }
-EXPORT_SYMBOL_GPL(crypto_shoot_alg);

 struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
 				      u32 mask)
@@ -87,7 +87,6 @@ void crypto_alg_tested(const char *name, int err);
 void crypto_remove_spawns(struct crypto_alg *alg, struct list_head *list,
 			  struct crypto_alg *nalg);
 void crypto_remove_final(struct list_head *list);
-void crypto_shoot_alg(struct crypto_alg *alg);
 struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
 				      u32 mask);
 void *crypto_create_tfm(struct crypto_alg *alg,
@@ -130,7 +130,6 @@ static void pcrypt_aead_done(struct crypto_async_request *areq, int err)
 	struct padata_priv *padata = pcrypt_request_padata(preq);

 	padata->info = err;
-	req->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;

 	padata_do_serial(padata);
 }
@@ -797,7 +797,11 @@ static struct tegra_periph_init_data gate_clks[] = {
 	GATE("vcp", "clk_m", 29, 0, tegra_clk_vcp, 0),
 	GATE("apbdma", "clk_m", 34, 0, tegra_clk_apbdma, 0),
 	GATE("kbc", "clk_32k", 36, TEGRA_PERIPH_ON_APB | TEGRA_PERIPH_NO_RESET, tegra_clk_kbc, 0),
-	GATE("fuse", "clk_m", 39, TEGRA_PERIPH_ON_APB, tegra_clk_fuse, 0),
+	/*
+	 * Critical for RAM re-repair operation, which must occur on resume
+	 * from LP1 system suspend and as part of CCPLEX cluster switching.
+	 */
+	GATE("fuse", "clk_m", 39, TEGRA_PERIPH_ON_APB, tegra_clk_fuse, CLK_IS_CRITICAL),
 	GATE("fuse_burn", "clk_m", 39, TEGRA_PERIPH_ON_APB, tegra_clk_fuse_burn, 0),
 	GATE("kfuse", "clk_m", 40, TEGRA_PERIPH_ON_APB, tegra_clk_kfuse, 0),
 	GATE("apbif", "clk_m", 107, TEGRA_PERIPH_ON_APB, tegra_clk_apbif, 0),
@@ -87,7 +87,6 @@
 struct atmel_aes_caps {
 	bool	has_dualbuff;
 	bool	has_cfb64;
-	bool	has_ctr32;
 	bool	has_gcm;
 	u32	max_burst_size;
 };
@@ -923,8 +922,9 @@ static int atmel_aes_ctr_transfer(struct atmel_aes_dev *dd)
 	struct atmel_aes_ctr_ctx *ctx = atmel_aes_ctr_ctx_cast(dd->ctx);
 	struct ablkcipher_request *req = ablkcipher_request_cast(dd->areq);
 	struct scatterlist *src, *dst;
-	u32 ctr, blocks;
 	size_t datalen;
+	u32 ctr;
+	u16 blocks, start, end;
 	bool use_dma, fragmented = false;

 	/* Check for transfer completion. */
@@ -936,27 +936,17 @@ static int atmel_aes_ctr_transfer(struct atmel_aes_dev *dd)
 	datalen = req->nbytes - ctx->offset;
 	blocks = DIV_ROUND_UP(datalen, AES_BLOCK_SIZE);
 	ctr = be32_to_cpu(ctx->iv[3]);
-	if (dd->caps.has_ctr32) {
-		/* Check 32bit counter overflow. */
-		u32 start = ctr;
-		u32 end = start + blocks - 1;
-
-		if (end < start) {
-			ctr |= 0xffffffff;
-			datalen = AES_BLOCK_SIZE * -start;
-			fragmented = true;
-		}
-	} else {
-		/* Check 16bit counter overflow. */
-		u16 start = ctr & 0xffff;
-		u16 end = start + (u16)blocks - 1;

-		if (blocks >> 16 || end < start) {
-			ctr |= 0xffff;
-			datalen = AES_BLOCK_SIZE * (0x10000-start);
-			fragmented = true;
-		}
-	}
+	/* Check 16bit counter overflow. */
+	start = ctr & 0xffff;
+	end = start + blocks - 1;
+
+	if (blocks >> 16 || end < start) {
+		ctr |= 0xffff;
+		datalen = AES_BLOCK_SIZE * (0x10000 - start);
+		fragmented = true;
+	}

 	use_dma = (datalen >= ATMEL_AES_DMA_THRESHOLD);

 	/* Jump to offset. */
@@ -1926,7 +1916,6 @@ static void atmel_aes_get_cap(struct atmel_aes_dev *dd)
 {
 	dd->caps.has_dualbuff = 0;
 	dd->caps.has_cfb64 = 0;
-	dd->caps.has_ctr32 = 0;
 	dd->caps.has_gcm = 0;
 	dd->caps.max_burst_size = 1;

@@ -1935,14 +1924,12 @@ static void atmel_aes_get_cap(struct atmel_aes_dev *dd)
 	case 0x500:
 		dd->caps.has_dualbuff = 1;
 		dd->caps.has_cfb64 = 1;
-		dd->caps.has_ctr32 = 1;
 		dd->caps.has_gcm = 1;
 		dd->caps.max_burst_size = 4;
 		break;
 	case 0x200:
 		dd->caps.has_dualbuff = 1;
 		dd->caps.has_cfb64 = 1;
-		dd->caps.has_ctr32 = 1;
 		dd->caps.has_gcm = 1;
 		dd->caps.max_burst_size = 4;
 		break;
@@ -1632,6 +1632,11 @@ static bool spacc_is_compatible(struct platform_device *pdev,
 	return false;
 }

+static void spacc_tasklet_kill(void *data)
+{
+	tasklet_kill(data);
+}
+
 static int spacc_probe(struct platform_device *pdev)
 {
 	int i, err, ret = -EINVAL;
@@ -1674,6 +1679,14 @@ static int spacc_probe(struct platform_device *pdev)
 		return -ENXIO;
 	}

+	tasklet_init(&engine->complete, spacc_spacc_complete,
+		     (unsigned long)engine);
+
+	ret = devm_add_action(&pdev->dev, spacc_tasklet_kill,
+			      &engine->complete);
+	if (ret)
+		return ret;
+
 	if (devm_request_irq(&pdev->dev, irq->start, spacc_spacc_irq, 0,
 			     engine->name, engine)) {
 		dev_err(engine->dev, "failed to request IRQ\n");
@@ -1736,8 +1749,6 @@ static int spacc_probe(struct platform_device *pdev)
 	INIT_LIST_HEAD(&engine->completed);
 	INIT_LIST_HEAD(&engine->in_progress);
 	engine->in_flight = 0;
-	tasklet_init(&engine->complete, spacc_spacc_complete,
-		     (unsigned long)engine);

 	platform_set_drvdata(pdev, engine);
@@ -81,7 +81,11 @@ static void atmel_hlcdc_crtc_mode_set_nofb(struct drm_crtc *c)
 	struct videomode vm;
 	unsigned long prate;
 	unsigned int cfg;
-	int div;
+	int div, ret;
+
+	ret = clk_prepare_enable(crtc->dc->hlcdc->sys_clk);
+	if (ret)
+		return;

 	vm.vfront_porch = adj->crtc_vsync_start - adj->crtc_vdisplay;
 	vm.vback_porch = adj->crtc_vtotal - adj->crtc_vsync_end;
@@ -140,6 +144,8 @@ static void atmel_hlcdc_crtc_mode_set_nofb(struct drm_crtc *c)
 			   ATMEL_HLCDC_VSPSU | ATMEL_HLCDC_VSPHO |
 			   ATMEL_HLCDC_GUARDTIME_MASK | ATMEL_HLCDC_MODE_MASK,
 			   cfg);
+
+	clk_disable_unprepare(crtc->dc->hlcdc->sys_clk);
 }

 static bool atmel_hlcdc_crtc_mode_fixup(struct drm_crtc *c,
@@ -141,7 +141,7 @@ int ib_nl_handle_ip_res_resp(struct sk_buff *skb,
 	if (ib_nl_is_good_ip_resp(nlh))
 		ib_nl_process_good_ip_rsep(nlh);

-	return skb->len;
+	return 0;
 }

 static int ib_nl_ip_send_msg(struct rdma_dev_addr *dev_addr,

@@ -848,7 +848,7 @@ int ib_nl_handle_set_timeout(struct sk_buff *skb,
 	}

settimeout_out:
-	return skb->len;
+	return 0;
 }

 static inline int ib_nl_is_good_resolve_resp(const struct nlmsghdr *nlh)
@@ -920,7 +920,7 @@ int ib_nl_handle_resolve_resp(struct sk_buff *skb,
 	}

resp_out:
-	return skb->len;
+	return 0;
 }

 static void free_sm_ah(struct kref *kref)
@@ -507,8 +507,7 @@ int mlx5_ib_gsi_post_send(struct ib_qp *qp, struct ib_send_wr *wr,
 		ret = ib_post_send(tx_qp, &cur_wr.wr, bad_wr);
 		if (ret) {
 			/* Undo the effect of adding the outstanding wr */
-			gsi->outstanding_pi = (gsi->outstanding_pi - 1) %
-					      gsi->cap.max_send_wr;
+			gsi->outstanding_pi--;
 			goto err;
 		}
 		spin_unlock_irqrestore(&gsi->lock, flags);
@@ -1457,7 +1457,6 @@ void dm_init_md_queue(struct mapped_device *md)
 	 * - must do so here (in alloc_dev callchain) before queue is used
 	 */
 	md->queue->queuedata = md;
-	md->queue->backing_dev_info.congested_data = md;
 }

 void dm_init_normal_md_queue(struct mapped_device *md)
@@ -1468,6 +1467,7 @@ void dm_init_normal_md_queue(struct mapped_device *md)
 	/*
 	 * Initialize aspects of queue that aren't relevant for blk-mq
 	 */
+	md->queue->backing_dev_info.congested_data = md;
 	md->queue->backing_dev_info.congested_fn = dm_any_congested;
 	blk_queue_bounce_limit(md->queue, BLK_BOUNCE_ANY);
 }
@@ -1555,6 +1555,12 @@ static struct mapped_device *alloc_dev(int minor)
 		goto bad;

 	dm_init_md_queue(md);
+	/*
+	 * default to bio-based required ->make_request_fn until DM
+	 * table is loaded and md->type established. If request-based
+	 * table is loaded: blk-mq will override accordingly.
+	 */
+	blk_queue_make_request(md->queue, dm_make_request);

 	md->disk = alloc_disk_node(1, numa_node_id);
 	if (!md->disk)
@@ -1853,7 +1859,6 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
 	case DM_TYPE_BIO_BASED:
 	case DM_TYPE_DAX_BIO_BASED:
 		dm_init_normal_md_queue(md);
-		blk_queue_make_request(md->queue, dm_make_request);
 		/*
 		 * DM handles splitting bios as needed. Free the bio_split bioset
 		 * since it won't be used (saves 1 process per bio-based DM device).
@@ -382,6 +382,33 @@ int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
 	return -ENOSPC;
 }

+int sm_ll_find_common_free_block(struct ll_disk *old_ll, struct ll_disk *new_ll,
+	                         dm_block_t begin, dm_block_t end, dm_block_t *b)
+{
+	int r;
+	uint32_t count;
+
+	do {
+		r = sm_ll_find_free_block(new_ll, begin, new_ll->nr_blocks, b);
+		if (r)
+			break;
+
+		/* double check this block wasn't used in the old transaction */
+		if (*b >= old_ll->nr_blocks)
+			count = 0;
+		else {
+			r = sm_ll_lookup(old_ll, *b, &count);
+			if (r)
+				break;
+
+			if (count)
+				begin = *b + 1;
+		}
+	} while (count);
+
+	return r;
+}
+
 static int sm_ll_mutate(struct ll_disk *ll, dm_block_t b,
 			int (*mutator)(void *context, uint32_t old, uint32_t *new),
 			void *context, enum allocation_event *ev)
@@ -109,6 +109,8 @@ int sm_ll_lookup_bitmap(struct ll_disk *ll, dm_block_t b, uint32_t *result);
 int sm_ll_lookup(struct ll_disk *ll, dm_block_t b, uint32_t *result);
 int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
 			  dm_block_t end, dm_block_t *result);
+int sm_ll_find_common_free_block(struct ll_disk *old_ll, struct ll_disk *new_ll,
+				 dm_block_t begin, dm_block_t end, dm_block_t *result);
 int sm_ll_insert(struct ll_disk *ll, dm_block_t b, uint32_t ref_count, enum allocation_event *ev);
 int sm_ll_inc(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev);
 int sm_ll_dec(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev);
@@ -167,8 +167,10 @@ static int sm_disk_new_block(struct dm_space_map *sm, dm_block_t *b)
 	enum allocation_event ev;
 	struct sm_disk *smd = container_of(sm, struct sm_disk, sm);

-	/* FIXME: we should loop round a couple of times */
-	r = sm_ll_find_free_block(&smd->old_ll, smd->begin, smd->old_ll.nr_blocks, b);
+	/*
+	 * Any block we allocate has to be free in both the old and current ll.
+	 */
+	r = sm_ll_find_common_free_block(&smd->old_ll, &smd->ll, smd->begin, smd->ll.nr_blocks, b);
 	if (r)
 		return r;
@@ -447,7 +447,10 @@ static int sm_metadata_new_block_(struct dm_space_map *sm, dm_block_t *b)
 	enum allocation_event ev;
 	struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);

-	r = sm_ll_find_free_block(&smm->old_ll, smm->begin, smm->old_ll.nr_blocks, b);
+	/*
+	 * Any block we allocate has to be free in both the old and current ll.
+	 */
+	r = sm_ll_find_common_free_block(&smm->old_ll, &smm->ll, smm->begin, smm->ll.nr_blocks, b);
 	if (r)
 		return r;
@@ -430,7 +430,7 @@ static int iguanair_probe(struct usb_interface *intf,
 	int ret, pipein, pipeout;
 	struct usb_host_interface *idesc;

-	idesc = intf->altsetting;
+	idesc = intf->cur_altsetting;
 	if (idesc->desc.bNumEndpoints < 2)
 		return -ENODEV;
@@ -1411,6 +1411,11 @@ static int uvc_scan_chain_forward(struct uvc_video_chain *chain,
 			break;
 		if (forward == prev)
 			continue;
+		if (forward->chain.next || forward->chain.prev) {
+			uvc_trace(UVC_TRACE_DESCR, "Found reference to "
+				"entity %d already in chain.\n", forward->id);
+			return -EINVAL;
+		}

 		switch (UVC_ENTITY_TYPE(forward)) {
 		case UVC_VC_EXTENSION_UNIT:
@@ -1492,6 +1497,13 @@ static int uvc_scan_chain_backward(struct uvc_video_chain *chain,
 			return -1;
 		}

+		if (term->chain.next || term->chain.prev) {
+			uvc_trace(UVC_TRACE_DESCR, "Found reference to "
+				"entity %d already in chain.\n",
+				term->id);
+			return -EINVAL;
+		}
+
 		if (uvc_trace_param & UVC_TRACE_PROBE)
 			printk(" %d", term->id);
@@ -352,8 +352,11 @@ int videobuf_dma_free(struct videobuf_dmabuf *dma)
 	BUG_ON(dma->sglen);

 	if (dma->pages) {
-		for (i = 0; i < dma->nr_pages; i++)
+		for (i = 0; i < dma->nr_pages; i++) {
+			if (dma->direction == DMA_FROM_DEVICE)
+				set_page_dirty_lock(dma->pages[i]);
 			put_page(dma->pages[i]);
+		}
 		kfree(dma->pages);
 		dma->pages = NULL;
 	}
@@ -142,7 +142,7 @@ static const struct mfd_cell da9062_devs[] = {
 		.name = "da9062-watchdog",
 		.num_resources = ARRAY_SIZE(da9062_wdt_resources),
 		.resources = da9062_wdt_resources,
-		.of_compatible = "dlg,da9062-wdt",
+		.of_compatible = "dlg,da9062-watchdog",
 	},
 	{
 		.name = "da9062-thermal",
@@ -729,6 +729,8 @@ static int dln2_probe(struct usb_interface *interface,
 		      const struct usb_device_id *usb_id)
 {
 	struct usb_host_interface *hostif = interface->cur_altsetting;
+	struct usb_endpoint_descriptor *epin;
+	struct usb_endpoint_descriptor *epout;
 	struct device *dev = &interface->dev;
 	struct dln2_dev *dln2;
 	int ret;
@@ -738,12 +740,19 @@ static int dln2_probe(struct usb_interface *interface,
 	    hostif->desc.bNumEndpoints < 2)
 		return -ENODEV;

+	epin = &hostif->endpoint[0].desc;
+	epout = &hostif->endpoint[1].desc;
+	if (!usb_endpoint_is_bulk_out(epout))
+		return -ENODEV;
+	if (!usb_endpoint_is_bulk_in(epin))
+		return -ENODEV;
+
 	dln2 = kzalloc(sizeof(*dln2), GFP_KERNEL);
 	if (!dln2)
 		return -ENOMEM;

-	dln2->ep_out = hostif->endpoint[0].desc.bEndpointAddress;
-	dln2->ep_in = hostif->endpoint[1].desc.bEndpointAddress;
+	dln2->ep_out = epout->bEndpointAddress;
+	dln2->ep_in = epin->bEndpointAddress;
 	dln2->usb_dev = usb_get_dev(interface_to_usbdev(interface));
 	dln2->interface = interface;
 	usb_set_intfdata(interface, dln2);
@@ -32,6 +32,7 @@ static bool rn5t618_volatile_reg(struct device *dev, unsigned int reg)
 	case RN5T618_WATCHDOGCNT:
 	case RN5T618_DCIRQ:
 	case RN5T618_ILIMDATAH ... RN5T618_AIN0DATAL:
+	case RN5T618_ADCCNT3:
 	case RN5T618_IR_ADC1 ... RN5T618_IR_ADC3:
 	case RN5T618_IR_GPR:
 	case RN5T618_IR_GPF:
@@ -1157,17 +1157,22 @@ static void mmc_spi_initsequence(struct mmc_spi_host *host)
 	 * SPI protocol.  Another is that when chipselect is released while
 	 * the card returns BUSY status, the clock must issue several cycles
 	 * with chipselect high before the card will stop driving its output.
+	 *
+	 * SPI_CS_HIGH means "asserted" here.  In some cases like when using
+	 * GPIOs for chip select, SPI_CS_HIGH is set but this will be logically
+	 * inverted by gpiolib, so if we want to ascertain to drive it high
+	 * we should toggle the default with an XOR as we do here.
 	 */
-	host->spi->mode |= SPI_CS_HIGH;
+	host->spi->mode ^= SPI_CS_HIGH;
 	if (spi_setup(host->spi) != 0) {
 		/* Just warn; most cards work without it. */
 		dev_warn(&host->spi->dev,
 				"can't change chip-select polarity\n");
-		host->spi->mode &= ~SPI_CS_HIGH;
+		host->spi->mode ^= SPI_CS_HIGH;
 	} else {
 		mmc_spi_readbytes(host, 18);

-		host->spi->mode &= ~SPI_CS_HIGH;
+		host->spi->mode ^= SPI_CS_HIGH;
 		if (spi_setup(host->spi) != 0) {
 			/* Wot, we can't get the same setup we had before? */
 			dev_err(&host->spi->dev,
@@ -73,7 +73,7 @@ static int self_check_seen(struct ubi_device *ubi, unsigned long *seen)
 		return 0;

 	for (pnum = 0; pnum < ubi->peb_count; pnum++) {
-		if (test_bit(pnum, seen) && ubi->lookuptbl[pnum]) {
+		if (!test_bit(pnum, seen) && ubi->lookuptbl[pnum]) {
 			ubi_err(ubi, "self-check failed for PEB %d, fastmap didn't see it", pnum);
 			ret = -EINVAL;
 		}
@@ -1127,7 +1127,7 @@ static int ubi_write_fastmap(struct ubi_device *ubi,
 	struct rb_node *tmp_rb;
 	int ret, i, j, free_peb_count, used_peb_count, vol_count;
 	int scrub_peb_count, erase_peb_count;
-	unsigned long *seen_pebs = NULL;
+	unsigned long *seen_pebs;

 	fm_raw = ubi->fm_buf;
 	memset(ubi->fm_buf, 0, ubi->fm_size);
@@ -1141,7 +1141,7 @@ static int ubi_write_fastmap(struct ubi_device *ubi,
 	dvbuf = new_fm_vbuf(ubi, UBI_FM_DATA_VOLUME_ID);
 	if (!dvbuf) {
 		ret = -ENOMEM;
-		goto out_kfree;
+		goto out_free_avbuf;
 	}

 	avhdr = ubi_get_vid_hdr(avbuf);
@@ -1150,7 +1150,7 @@ static int ubi_write_fastmap(struct ubi_device *ubi,
 	seen_pebs = init_seen(ubi);
 	if (IS_ERR(seen_pebs)) {
 		ret = PTR_ERR(seen_pebs);
-		goto out_kfree;
+		goto out_free_dvbuf;
 	}

 	spin_lock(&ubi->volumes_lock);
@@ -1318,7 +1318,7 @@ static int ubi_write_fastmap(struct ubi_device *ubi,
 	ret = ubi_io_write_vid_hdr(ubi, new_fm->e[0]->pnum, avbuf);
 	if (ret) {
 		ubi_err(ubi, "unable to write vid_hdr to fastmap SB!");
-		goto out_kfree;
+		goto out_free_seen;
 	}

 	for (i = 0; i < new_fm->used_blocks; i++) {
@@ -1340,7 +1340,7 @@ static int ubi_write_fastmap(struct ubi_device *ubi,
 		if (ret) {
 			ubi_err(ubi, "unable to write vid_hdr to PEB %i!",
 				new_fm->e[i]->pnum);
-			goto out_kfree;
+			goto out_free_seen;
 		}
 	}

@@ -1350,7 +1350,7 @@ static int ubi_write_fastmap(struct ubi_device *ubi,
 		if (ret) {
 			ubi_err(ubi, "unable to write fastmap to PEB %i!",
 				new_fm->e[i]->pnum);
-			goto out_kfree;
+			goto out_free_seen;
 		}
 	}

@@ -1360,10 +1360,13 @@ static int ubi_write_fastmap(struct ubi_device *ubi,
 	ret = self_check_seen(ubi, seen_pebs);
 	dbg_bld("fastmap written!");

-out_kfree:
-	ubi_free_vid_buf(avbuf);
-	ubi_free_vid_buf(dvbuf);
+out_free_seen:
 	free_seen(seen_pebs);
+out_free_dvbuf:
+	ubi_free_vid_buf(dvbuf);
+out_free_avbuf:
+	ubi_free_vid_buf(avbuf);

 out:
 	return ret;
 }
@@ -1371,26 +1371,31 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
 	bool do_tx_balance = true;
 	u32 hash_index = 0;
 	const u8 *hash_start = NULL;
-	struct ipv6hdr *ip6hdr;

 	skb_reset_mac_header(skb);
 	eth_data = eth_hdr(skb);

 	switch (ntohs(skb->protocol)) {
 	case ETH_P_IP: {
-		const struct iphdr *iph = ip_hdr(skb);
+		const struct iphdr *iph;

 		if (ether_addr_equal_64bits(eth_data->h_dest, mac_bcast) ||
-		    (iph->daddr == ip_bcast) ||
-		    (iph->protocol == IPPROTO_IGMP)) {
+		    !pskb_network_may_pull(skb, sizeof(*iph))) {
+			do_tx_balance = false;
+			break;
+		}
+		iph = ip_hdr(skb);
+		if (iph->daddr == ip_bcast || iph->protocol == IPPROTO_IGMP) {
 			do_tx_balance = false;
 			break;
 		}
 		hash_start = (char *)&(iph->daddr);
 		hash_size = sizeof(iph->daddr);
+		break;
 	}
-		break;
-	case ETH_P_IPV6:
+	case ETH_P_IPV6: {
+		const struct ipv6hdr *ip6hdr;
+
 		/* IPv6 doesn't really use broadcast mac address, but leave
 		 * that here just in case.
 		 */
@@ -1407,7 +1412,11 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
 			break;
 		}

+		if (!pskb_network_may_pull(skb, sizeof(*ip6hdr))) {
+			do_tx_balance = false;
+			break;
+		}
-		/* Additianally, DAD probes should not be tx-balanced as that
+		/* Additionally, DAD probes should not be tx-balanced as that
 		 * will lead to false positives for duplicate addresses and
 		 * prevent address configuration from working.
 		 */
@@ -1417,17 +1426,26 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
 			break;
 		}

-		hash_start = (char *)&(ipv6_hdr(skb)->daddr);
-		hash_size = sizeof(ipv6_hdr(skb)->daddr);
+		hash_start = (char *)&ip6hdr->daddr;
+		hash_size = sizeof(ip6hdr->daddr);
 		break;
-	case ETH_P_IPX:
-		if (ipx_hdr(skb)->ipx_checksum != IPX_NO_CHECKSUM) {
+	}
+	case ETH_P_IPX: {
+		const struct ipxhdr *ipxhdr;
+
+		if (pskb_network_may_pull(skb, sizeof(*ipxhdr))) {
+			do_tx_balance = false;
+			break;
+		}
+		ipxhdr = (struct ipxhdr *)skb_network_header(skb);
+
+		if (ipxhdr->ipx_checksum != IPX_NO_CHECKSUM) {
 			/* something is wrong with this packet */
 			do_tx_balance = false;
 			break;
 		}

-		if (ipx_hdr(skb)->ipx_type != IPX_TYPE_NCP) {
+		if (ipxhdr->ipx_type != IPX_TYPE_NCP) {
 			/* The only protocol worth balancing in
 			 * this family since it has an "ARP" like
 			 * mechanism
@@ -1436,9 +1454,11 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
 			break;
 		}

+		eth_data = eth_hdr(skb);
 		hash_start = (char *)eth_data->h_dest;
 		hash_size = ETH_ALEN;
 		break;
+	}
 	case ETH_P_ARP:
 		do_tx_balance = false;
 		if (bond_info->rlb_enabled)
@@ -1983,6 +1983,9 @@ static int bcm_sysport_resume(struct device *d)

 	umac_reset(priv);

+	/* Disable the UniMAC RX/TX */
+	umac_enable_set(priv, CMD_RX_EN | CMD_TX_EN, 0);
+
 	/* We may have been suspended and never received a WOL event that
 	 * would turn off MPD detection, take care of that now
 	 */
@@ -2225,15 +2225,16 @@ static int __init dmfe_init_module(void)
 	if (cr6set)
 		dmfe_cr6_user_set = cr6set;

-	switch(mode) {
-	case DMFE_10MHF:
+	switch (mode) {
+	case DMFE_10MHF:
 	case DMFE_100MHF:
 	case DMFE_10MFD:
 	case DMFE_100MFD:
 	case DMFE_1M_HPNA:
 		dmfe_media_mode = mode;
 		break;
-	default:dmfe_media_mode = DMFE_AUTO;
+	default:
+		dmfe_media_mode = DMFE_AUTO;
 		break;
 	}
@@ -1813,8 +1813,8 @@ static int __init uli526x_init_module(void)
 	if (cr6set)
 		uli526x_cr6_user_set = cr6set;

-        switch (mode) {
-        case ULI526X_10MHF:
+	switch (mode) {
+	case ULI526X_10MHF:
 	case ULI526X_100MHF:
 	case ULI526X_10MFD:
 	case ULI526X_100MFD:
@@ -948,7 +948,7 @@ static void smc911x_phy_configure(struct work_struct *work)
 	if (lp->ctl_rspeed != 100)
 		my_ad_caps &= ~(ADVERTISE_100BASE4|ADVERTISE_100FULL|ADVERTISE_100HALF);

-	 if (!lp->ctl_rfduplx)
+	if (!lp->ctl_rfduplx)
 		my_ad_caps &= ~(ADVERTISE_100FULL|ADVERTISE_10FULL);

 	/* Update our Auto-Neg Advertisement Register */
@@ -784,11 +784,13 @@ static int gtp_hashtable_new(struct gtp_dev *gtp, int hsize)
 {
 	int i;

-	gtp->addr_hash = kmalloc(sizeof(struct hlist_head) * hsize, GFP_KERNEL);
+	gtp->addr_hash = kmalloc(sizeof(struct hlist_head) * hsize,
+				 GFP_KERNEL | __GFP_NOWARN);
 	if (gtp->addr_hash == NULL)
 		return -ENOMEM;

-	gtp->tid_hash = kmalloc(sizeof(struct hlist_head) * hsize, GFP_KERNEL);
+	gtp->tid_hash = kmalloc(sizeof(struct hlist_head) * hsize,
+				GFP_KERNEL | __GFP_NOWARN);
 	if (gtp->tid_hash == NULL)
 		goto err1;
@@ -878,15 +878,15 @@ ppp_async_input(struct asyncppp *ap, const unsigned char *buf,
 			skb = dev_alloc_skb(ap->mru + PPP_HDRLEN + 2);
 			if (!skb)
 				goto nomem;
-				ap->rpkt = skb;
-			}
-			if (skb->len == 0) {
-				/* Try to get the payload 4-byte aligned.
-				 * This should match the
-				 * PPP_ALLSTATIONS/PPP_UI/compressed tests in
-				 * process_input_packet, but we do not have
-				 * enough chars here to test buf[1] and buf[2].
-				 */
+			ap->rpkt = skb;
+		}
+		if (skb->len == 0) {
+			/* Try to get the payload 4-byte aligned.
+			 * This should match the
+			 * PPP_ALLSTATIONS/PPP_UI/compressed tests in
+			 * process_input_packet, but we do not have
+			 * enough chars here to test buf[1] and buf[2].
+			 */
 			if (buf[0] != PPP_ALLSTATIONS)
 				skb_reserve(skb, 2 + (buf[0] & 1));
 		}
@@ -438,6 +438,7 @@ fail:
 		usb_free_urb(req->urb);
 		list_del(q->next);
 	}
+	kfree(reqs);
 	return NULL;

 }
@@ -1859,6 +1859,8 @@ static int lbs_ibss_join_existing(struct lbs_private *priv,
 	rates_max = rates_eid[1];
 	if (rates_max > MAX_RATES) {
 		lbs_deb_join("invalid rates");
+		rcu_read_unlock();
 		ret = -EINVAL;
 		goto out;
 	}
 	rates = cmd.bss.rates;
@@ -2878,6 +2878,13 @@ mwifiex_cmd_append_vsie_tlv(struct mwifiex_private *priv,
 			vs_param_set->header.len =
 				cpu_to_le16((((u16) priv->vs_ie[id].ie[1])
 				& 0x00FF) + 2);
+			if (le16_to_cpu(vs_param_set->header.len) >
+				MWIFIEX_MAX_VSIE_LEN) {
+				mwifiex_dbg(priv->adapter, ERROR,
+					    "Invalid param length!\n");
+				break;
+			}
+
 			memcpy(vs_param_set->ie, priv->vs_ie[id].ie,
 			       le16_to_cpu(vs_param_set->header.len));
 			*buffer += le16_to_cpu(vs_param_set->header.len) +
@@ -274,6 +274,7 @@ static int mwifiex_process_country_ie(struct mwifiex_private *priv,

 	if (country_ie_len >
 	    (IEEE80211_COUNTRY_STRING_LEN + MWIFIEX_MAX_TRIPLET_802_11D)) {
+		rcu_read_unlock();
 		mwifiex_dbg(priv->adapter, ERROR,
 			    "11D: country_ie_len overflow!, deauth AP\n");
 		return -EINVAL;
@@ -980,6 +980,10 @@ int mwifiex_ret_wmm_get_status(struct mwifiex_private *priv,
 			    "WMM Parameter Set Count: %d\n",
 			    wmm_param_ie->qos_info_bitmap & mask);

+			if (wmm_param_ie->vend_hdr.len + 2 >
+				sizeof(struct ieee_types_wmm_parameter))
+				break;
+
 			memcpy((u8 *) &priv->curr_bss_params.bss_descriptor.
 			       wmm_ie, wmm_param_ie,
 			       wmm_param_ie->vend_hdr.len + 2);
@@ -704,7 +704,7 @@ static int pn544_hci_check_presence(struct nfc_hci_dev *hdev,
 		    target->nfcid1_len != 10)
 			return -EOPNOTSUPP;

-	return nfc_hci_send_cmd(hdev, NFC_HCI_RF_READER_A_GATE,
+		return nfc_hci_send_cmd(hdev, NFC_HCI_RF_READER_A_GATE,
 				     PN544_RF_READER_CMD_ACTIVATE_NEXT,
 				     target->nfcid1, target->nfcid1_len, NULL);
 	} else if (target->supported_protocols & (NFC_PROTO_JEWEL_MASK |
@@ -112,4 +112,8 @@ config OF_OVERLAY
 config OF_NUMA
 	bool

+config OF_DMA_DEFAULT_COHERENT
+	# arches should select this if DMA is coherent by default for OF devices
+	bool
+
 endif # OF
@@ -896,12 +896,16 @@ EXPORT_SYMBOL_GPL(of_dma_get_range);
 * @np:	device node
 *
 * It returns true if "dma-coherent" property was found
- * for this device in DT.
+ * for this device in the DT, or if DMA is coherent by
+ * default for OF devices on the current platform.
 */
 bool of_dma_is_coherent(struct device_node *np)
 {
 	struct device_node *node = of_node_get(np);

+	if (IS_ENABLED(CONFIG_OF_DMA_DEFAULT_COHERENT))
+		return true;
+
 	while (node) {
 		if (of_property_read_bool(node, "dma-coherent")) {
 			of_node_put(node);
@@ -502,7 +502,7 @@ void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie)
 	/* Disable Link training */
 	val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
 	val &= ~LTSSM_EN_VAL;
-	ks_dw_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val);
+	ks_dw_app_writel(ks_pcie, CMD_STATUS, val);

 	/* Initiate Link Training */
 	val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
@@ -1833,12 +1833,18 @@ again:
 	/* restore size and flags */
 	list_for_each_entry(fail_res, &fail_head, list) {
 		struct resource *res = fail_res->res;
+		int idx;
 
 		res->start = fail_res->start;
 		res->end = fail_res->end;
 		res->flags = fail_res->flags;
-		if (fail_res->dev->subordinate)
-			res->flags = 0;
+
+		if (pci_is_bridge(fail_res->dev)) {
+			idx = res - &fail_res->dev->resource[0];
+			if (idx >= PCI_BRIDGE_RESOURCES &&
+			    idx <= PCI_BRIDGE_RESOURCE_END)
+				res->flags = 0;
+		}
 	}
 	free_list(&fail_head);

@@ -1904,12 +1910,18 @@ again:
 	/* restore size and flags */
 	list_for_each_entry(fail_res, &fail_head, list) {
 		struct resource *res = fail_res->res;
+		int idx;
 
 		res->start = fail_res->start;
 		res->end = fail_res->end;
 		res->flags = fail_res->flags;
-		if (fail_res->dev->subordinate)
-			res->flags = 0;
+
+		if (pci_is_bridge(fail_res->dev)) {
+			idx = res - &fail_res->dev->resource[0];
+			if (idx >= PCI_BRIDGE_RESOURCES &&
+			    idx <= PCI_BRIDGE_RESOURCE_END)
+				res->flags = 0;
+		}
 	}
 	free_list(&fail_head);
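The bridge-window test in the hunks above relies on pointer arithmetic: the index of a `struct resource` within the device's `resource[]` array tells whether it is a bridge window. A standalone sketch of that check, with made-up index bounds standing in for `PCI_BRIDGE_RESOURCES` / `PCI_BRIDGE_RESOURCE_END`:

```c
#include <stddef.h>
#include <stdbool.h>
#include <assert.h>

#define NUM_RES          17
#define BRIDGE_RES_FIRST 13   /* stand-in for PCI_BRIDGE_RESOURCES */
#define BRIDGE_RES_LAST  16   /* stand-in for PCI_BRIDGE_RESOURCE_END */

struct fake_dev {
	int resource[NUM_RES];    /* element type is irrelevant to the math */
};

/* True when res points at one of dev's bridge-window slots. */
static bool is_bridge_window(const struct fake_dev *dev, const int *res)
{
	ptrdiff_t idx = res - &dev->resource[0];
	return idx >= BRIDGE_RES_FIRST && idx <= BRIDGE_RES_LAST;
}
```

The fix matters because the old `dev->subordinate` test also zeroed the flags of a bridge's ordinary BARs, not just its windows; restricting the index range leaves device BARs intact.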
@@ -2324,7 +2324,7 @@ static const struct pinmux_cfg_reg pinmux_config_regs[] = {
 		FN_ATAG0_A, 0, FN_REMOCON_B, 0,
 		/* IP0_11_8 [4] */
 		FN_SD1_DAT2_A, FN_MMC_D2, 0, FN_BS,
-		FN_ATADIR0_A, 0, FN_SDSELF_B, 0,
+		FN_ATADIR0_A, 0, FN_SDSELF_A, 0,
 		FN_PWM4_B, 0, 0, 0,
 		0, 0, 0, 0,
 		/* IP0_7_5 [3] */
@@ -2366,7 +2366,7 @@ static const struct pinmux_cfg_reg pinmux_config_regs[] = {
 		FN_TS_SDAT0_A, 0, 0, 0,
 		0, 0, 0, 0,
 		/* IP1_10_8 [3] */
-		FN_SD1_CLK_B, FN_MMC_D6, 0, FN_A24,
+		FN_SD1_CD_A, FN_MMC_D6, 0, FN_A24,
 		FN_DREQ1_A, 0, FN_HRX0_B, FN_TS_SPSYNC0_A,
 		/* IP1_7_5 [3] */
 		FN_A23, FN_HTX0_B, FN_TX2_B, FN_DACK2_A,
@@ -364,7 +364,7 @@ static int ltc294x_i2c_remove(struct i2c_client *client)
 {
 	struct ltc294x_info *info = i2c_get_clientdata(client);
 
-	cancel_delayed_work(&info->work);
+	cancel_delayed_work_sync(&info->work);
 	power_supply_unregister(info->supply);
 	return 0;
 }
@@ -730,7 +730,7 @@ cmos_do_probe(struct device *dev, struct resource *ports, int rtc_irq)
 		rtc_cmos_int_handler = cmos_interrupt;
 
 		retval = request_irq(rtc_irq, rtc_cmos_int_handler,
-				IRQF_SHARED, dev_name(&cmos_rtc.rtc->dev),
+				0, dev_name(&cmos_rtc.rtc->dev),
 				cmos_rtc.rtc);
 		if (retval < 0) {
 			dev_dbg(dev, "IRQ %d is already in use\n", rtc_irq);
@@ -105,7 +105,7 @@ static int hym8563_rtc_read_time(struct device *dev, struct rtc_time *tm)
 
 	if (!hym8563->valid) {
 		dev_warn(&client->dev, "no valid clock/calendar values available\n");
-		return -EPERM;
+		return -EINVAL;
 	}
 
 	ret = i2c_smbus_read_i2c_block_data(client, HYM8563_SEC, 7, buf);
@@ -1383,7 +1383,7 @@ csio_device_reset(struct device *dev,
 		return -EINVAL;
 
 	/* Delete NPIV lnodes */
-	 csio_lnodes_exit(hw, 1);
+	csio_lnodes_exit(hw, 1);
 
 	/* Block upper IOs */
 	csio_lnodes_block_request(hw);
@@ -3978,7 +3978,8 @@ dcmd_timeout_ocr_possible(struct megasas_instance *instance) {
 	if (!instance->ctrl_context)
 		return KILL_ADAPTER;
 	else if (instance->unload ||
-		test_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags))
+		test_bit(MEGASAS_FUSION_OCR_NOT_POSSIBLE,
+			 &instance->reset_flags))
 		return IGNORE_TIMEOUT;
 	else
 		return INITIATE_OCR;

@@ -3438,6 +3438,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
 	if (instance->requestorId && !instance->skip_heartbeat_timer_del)
 		del_timer_sync(&instance->sriov_heartbeat_timer);
 	set_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags);
+	set_bit(MEGASAS_FUSION_OCR_NOT_POSSIBLE, &instance->reset_flags);
 	atomic_set(&instance->adprecovery, MEGASAS_ADPRESET_SM_POLLING);
 	instance->instancet->disable_intr(instance);
 	msleep(1000);

@@ -3594,7 +3595,7 @@ fail_kill_adapter:
 		atomic_set(&instance->adprecovery, MEGASAS_HBA_OPERATIONAL);
 	}
 out:
-	clear_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags);
+	clear_bit(MEGASAS_FUSION_OCR_NOT_POSSIBLE, &instance->reset_flags);
 	mutex_unlock(&instance->reset_mutex);
 	return retval;
 }

@@ -93,6 +93,7 @@ enum MR_RAID_FLAGS_IO_SUB_TYPE {
 
 #define MEGASAS_FP_CMD_LEN	16
 #define MEGASAS_FUSION_IN_RESET 0
+#define MEGASAS_FUSION_OCR_NOT_POSSIBLE 1
 #define THRESHOLD_REPLY_COUNT 50
 #define JBOD_MAPS_COUNT	2
@@ -5723,9 +5723,8 @@ qla2x00_dump_mctp_data(scsi_qla_host_t *vha, dma_addr_t req_dma, uint32_t addr,
 	mcp->mb[7] = LSW(MSD(req_dma));
 	mcp->mb[8] = MSW(addr);
-	/* Setting RAM ID to valid */
-	mcp->mb[10] |= BIT_7;
-	/* For MCTP RAM ID is 0x40 */
-	mcp->mb[10] |= 0x40;
+	/* Setting RAM ID to valid */
+	/* For MCTP RAM ID is 0x40 */
+	mcp->mb[10] = BIT_7 | 0x40;
 
 	mcp->out_mb |= MBX_10|MBX_8|MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|
 	    MBX_0;
@@ -1600,8 +1600,7 @@ qla82xx_get_bootld_offset(struct qla_hw_data *ha)
 	return (u8 *)&ha->hablob->fw->data[offset];
 }
 
-static __le32
-qla82xx_get_fw_size(struct qla_hw_data *ha)
+static u32 qla82xx_get_fw_size(struct qla_hw_data *ha)
 {
 	struct qla82xx_uri_data_desc *uri_desc = NULL;
 
@@ -1612,7 +1611,7 @@ qla82xx_get_fw_size(struct qla_hw_data *ha)
 		return cpu_to_le32(uri_desc->size);
 	}
 
-	return cpu_to_le32(*(u32 *)&ha->hablob->fw->data[FW_SIZE_OFFSET]);
+	return get_unaligned_le32(&ha->hablob->fw->data[FW_SIZE_OFFSET]);
 }
 
 static u8 *
@@ -1803,7 +1802,7 @@ qla82xx_fw_load_from_blob(struct qla_hw_data *ha)
 	}
 
 	flashaddr = FLASH_ADDR_START;
-	size = (__force u32)qla82xx_get_fw_size(ha) / 8;
+	size = qla82xx_get_fw_size(ha) / 8;
 	ptr64 = (u64 *)qla82xx_get_fw_offs(ha);
 
 	for (i = 0; i < size; i++) {
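`get_unaligned_le32()` in the hunk above replaces a cast-and-dereference that was both an unaligned access and endian-unsafe. Outside the kernel, the same operation is usually written as explicit byte assembly, which is correct on any host endianness and any alignment; a sketch:

```c
#include <stdint.h>
#include <assert.h>

/* Portable equivalent of the kernel's get_unaligned_le32(). */
static uint32_t load_le32(const void *p)
{
	const uint8_t *b = (const uint8_t *)p;

	/* Assemble little-endian: byte 0 is least significant. */
	return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
	       ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}
```

Reading through `*(u32 *)` instead would fault on strict-alignment CPUs when the firmware blob offset is odd, and would return byte-swapped data on big-endian hosts.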
@@ -4150,7 +4150,7 @@ static void qla4xxx_mem_free(struct scsi_qla_host *ha)
 		dma_free_coherent(&ha->pdev->dev, ha->queues_len, ha->queues,
 				  ha->queues_dma);
 
-	 if (ha->fw_dump)
+	if (ha->fw_dump)
 		vfree(ha->fw_dump);
 
 	ha->queues_len = 0;
@@ -5376,7 +5376,8 @@ static int ufshcd_probe_hba(struct ufs_hba *hba)
 			ufshcd_init_icc_levels(hba);
 
 		/* Add required well known logical units to scsi mid layer */
-		if (ufshcd_scsi_add_wlus(hba))
+		ret = ufshcd_scsi_add_wlus(hba);
+		if (ret)
 			goto out;
 
 		scsi_scan_host(hba->host);
@@ -56,6 +56,7 @@ struct f_ecm {
 	struct usb_ep			*notify;
 	struct usb_request		*notify_req;
 	u8				notify_state;
+	atomic_t			notify_count;
 	bool				is_open;
 
 	/* FIXME is_open needs some irq-ish locking
@@ -384,7 +385,7 @@ static void ecm_do_notify(struct f_ecm *ecm)
 	int				status;
 
 	/* notification already in flight? */
-	if (!req)
+	if (atomic_read(&ecm->notify_count))
 		return;
 
 	event = req->buf;
@@ -424,10 +425,10 @@ static void ecm_do_notify(struct f_ecm *ecm)
 	event->bmRequestType = 0xA1;
 	event->wIndex = cpu_to_le16(ecm->ctrl_id);
 
-	ecm->notify_req = NULL;
+	atomic_inc(&ecm->notify_count);
 	status = usb_ep_queue(ecm->notify, req, GFP_ATOMIC);
 	if (status < 0) {
-		ecm->notify_req = req;
+		atomic_dec(&ecm->notify_count);
 		DBG(cdev, "notify --> %d\n", status);
 	}
 }
@@ -452,17 +453,19 @@ static void ecm_notify_complete(struct usb_ep *ep, struct usb_request *req)
 	switch (req->status) {
 	case 0:
 		/* no fault */
+		atomic_dec(&ecm->notify_count);
 		break;
 	case -ECONNRESET:
 	case -ESHUTDOWN:
+		atomic_set(&ecm->notify_count, 0);
 		ecm->notify_state = ECM_NOTIFY_NONE;
 		break;
 	default:
 		DBG(cdev, "event %02x --> %d\n",
 			event->bNotificationType, req->status);
+		atomic_dec(&ecm->notify_count);
 		break;
 	}
-	ecm->notify_req = req;
 	ecm_do_notify(ecm);
 }
@@ -909,6 +912,11 @@ static void ecm_unbind(struct usb_configuration *c, struct usb_function *f)
 
 	usb_free_all_descriptors(f);
 
+	if (atomic_read(&ecm->notify_count)) {
+		usb_ep_dequeue(ecm->notify, ecm->notify_req);
+		atomic_set(&ecm->notify_count, 0);
+	}
+
 	kfree(ecm->notify_req->buf);
 	usb_ep_free_request(ecm->notify, ecm->notify_req);
 }
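The ECM (and the matching NCM) change replaces the "is `notify_req` non-NULL?" test with a counter, because juggling the request pointer between submit and completion paths is racy. A user-space sketch of the same gate using C11 atomics (the names here are illustrative, not the gadget API; like the driver, it assumes submissions themselves are serialized, so only the read-then-increment pattern is mirrored):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

static atomic_int notify_count;  /* zero-initialized: nothing in flight */

/* Returns true if the caller won the right to submit a notification. */
static bool try_begin_notify(void)
{
	if (atomic_load(&notify_count) > 0)
		return false;            /* one already in flight */
	atomic_fetch_add(&notify_count, 1);
	return true;
}

/* Called from the completion path (or when the submit itself failed). */
static void end_notify(void)
{
	atomic_fetch_sub(&notify_count, 1);
}
```

The counter also lets teardown (`ecm_unbind` above) detect an in-flight request and dequeue it before freeing its buffer, which is the use-after-free the patch is closing.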
@@ -57,6 +57,7 @@ struct f_ncm {
 	struct usb_ep			*notify;
 	struct usb_request		*notify_req;
 	u8				notify_state;
+	atomic_t			notify_count;
 	bool				is_open;
 
 	const struct ndp_parser_opts	*parser_opts;
@@ -552,7 +553,7 @@ static void ncm_do_notify(struct f_ncm *ncm)
 	int				status;
 
 	/* notification already in flight? */
-	if (!req)
+	if (atomic_read(&ncm->notify_count))
 		return;
 
 	event = req->buf;
@@ -592,7 +593,8 @@ static void ncm_do_notify(struct f_ncm *ncm)
 	event->bmRequestType = 0xA1;
 	event->wIndex = cpu_to_le16(ncm->ctrl_id);
 
-	ncm->notify_req = NULL;
+	atomic_inc(&ncm->notify_count);
+
 	/*
 	 * In double buffering if there is a space in FIFO,
 	 * completion callback can be called right after the call,
@@ -602,7 +604,7 @@ static void ncm_do_notify(struct f_ncm *ncm)
 	status = usb_ep_queue(ncm->notify, req, GFP_ATOMIC);
 	spin_lock(&ncm->lock);
 	if (status < 0) {
-		ncm->notify_req = req;
+		atomic_dec(&ncm->notify_count);
 		DBG(cdev, "notify --> %d\n", status);
 	}
 }
@@ -637,17 +639,19 @@ static void ncm_notify_complete(struct usb_ep *ep, struct usb_request *req)
 	case 0:
 		VDBG(cdev, "Notification %02x sent\n",
 		     event->bNotificationType);
+		atomic_dec(&ncm->notify_count);
 		break;
 	case -ECONNRESET:
 	case -ESHUTDOWN:
+		atomic_set(&ncm->notify_count, 0);
 		ncm->notify_state = NCM_NOTIFY_NONE;
 		break;
 	default:
 		DBG(cdev, "event %02x --> %d\n",
 			event->bNotificationType, req->status);
+		atomic_dec(&ncm->notify_count);
 		break;
 	}
-	ncm->notify_req = req;
 	ncm_do_notify(ncm);
 	spin_unlock(&ncm->lock);
 }
@@ -1639,6 +1643,11 @@ static void ncm_unbind(struct usb_configuration *c, struct usb_function *f)
 	ncm_string_defs[0].id = 0;
 	usb_free_all_descriptors(f);
 
+	if (atomic_read(&ncm->notify_count)) {
+		usb_ep_dequeue(ncm->notify, ncm->notify_req);
+		atomic_set(&ncm->notify_count, 0);
+	}
+
 	kfree(ncm->notify_req->buf);
 	usb_ep_free_request(ncm->notify, ncm->notify_req);
 }
@@ -229,7 +229,7 @@ static struct usb_composite_driver cdc_driver = {
 	.name		= "g_cdc",
 	.dev		= &device_desc,
 	.strings	= dev_strings,
-	.max_speed	= USB_SPEED_HIGH,
+	.max_speed	= USB_SPEED_SUPER,
 	.bind		= cdc_bind,
 	.unbind		= cdc_unbind,
 };

@@ -153,7 +153,7 @@ static struct usb_composite_driver gfs_driver = {
 	.name		= DRIVER_NAME,
 	.dev		= &gfs_dev_desc,
 	.strings	= gfs_dev_strings,
-	.max_speed	= USB_SPEED_HIGH,
+	.max_speed	= USB_SPEED_SUPER,
 	.bind		= gfs_bind,
 	.unbind		= gfs_unbind,
 };

@@ -486,7 +486,7 @@ static struct usb_composite_driver multi_driver = {
 	.name		= "g_multi",
 	.dev		= &device_desc,
 	.strings	= dev_strings,
-	.max_speed	= USB_SPEED_HIGH,
+	.max_speed	= USB_SPEED_SUPER,
 	.bind		= multi_bind,
 	.unbind		= multi_unbind,
 	.needs_serial	= 1,

@@ -203,7 +203,7 @@ static struct usb_composite_driver ncm_driver = {
 	.name		= "g_ncm",
 	.dev		= &device_desc,
 	.strings	= dev_strings,
-	.max_speed	= USB_SPEED_HIGH,
+	.max_speed	= USB_SPEED_SUPER,
 	.bind		= gncm_bind,
 	.unbind		= gncm_unbind,
 };
@@ -331,26 +331,6 @@ struct tree_mod_elem {
 	struct tree_mod_root old_root;
 };
 
-static inline void tree_mod_log_read_lock(struct btrfs_fs_info *fs_info)
-{
-	read_lock(&fs_info->tree_mod_log_lock);
-}
-
-static inline void tree_mod_log_read_unlock(struct btrfs_fs_info *fs_info)
-{
-	read_unlock(&fs_info->tree_mod_log_lock);
-}
-
-static inline void tree_mod_log_write_lock(struct btrfs_fs_info *fs_info)
-{
-	write_lock(&fs_info->tree_mod_log_lock);
-}
-
-static inline void tree_mod_log_write_unlock(struct btrfs_fs_info *fs_info)
-{
-	write_unlock(&fs_info->tree_mod_log_lock);
-}
-
 /*
  * Pull a new tree mod seq number for our operation.
  */
@@ -370,14 +350,12 @@ static inline u64 btrfs_inc_tree_mod_seq(struct btrfs_fs_info *fs_info)
 u64 btrfs_get_tree_mod_seq(struct btrfs_fs_info *fs_info,
 			   struct seq_list *elem)
 {
-	tree_mod_log_write_lock(fs_info);
-	spin_lock(&fs_info->tree_mod_seq_lock);
+	write_lock(&fs_info->tree_mod_log_lock);
 	if (!elem->seq) {
 		elem->seq = btrfs_inc_tree_mod_seq(fs_info);
 		list_add_tail(&elem->list, &fs_info->tree_mod_seq_list);
 	}
-	spin_unlock(&fs_info->tree_mod_seq_lock);
-	tree_mod_log_write_unlock(fs_info);
+	write_unlock(&fs_info->tree_mod_log_lock);
 
 	return elem->seq;
 }
@@ -396,7 +374,7 @@ void btrfs_put_tree_mod_seq(struct btrfs_fs_info *fs_info,
 	if (!seq_putting)
 		return;
 
-	spin_lock(&fs_info->tree_mod_seq_lock);
+	write_lock(&fs_info->tree_mod_log_lock);
 	list_del(&elem->list);
 	elem->seq = 0;
 
@@ -407,19 +385,17 @@ void btrfs_put_tree_mod_seq(struct btrfs_fs_info *fs_info,
 				 * blocker with lower sequence number exists, we
 				 * cannot remove anything from the log
 				 */
-				spin_unlock(&fs_info->tree_mod_seq_lock);
+				write_unlock(&fs_info->tree_mod_log_lock);
 				return;
 			}
 			min_seq = cur_elem->seq;
 		}
 	}
-	spin_unlock(&fs_info->tree_mod_seq_lock);
 
 	/*
 	 * anything that's lower than the lowest existing (read: blocked)
 	 * sequence number can be removed from the tree.
 	 */
-	tree_mod_log_write_lock(fs_info);
 	tm_root = &fs_info->tree_mod_log;
 	for (node = rb_first(tm_root); node; node = next) {
 		next = rb_next(node);
@@ -429,7 +405,7 @@ void btrfs_put_tree_mod_seq(struct btrfs_fs_info *fs_info,
 		rb_erase(node, tm_root);
 		kfree(tm);
 	}
-	tree_mod_log_write_unlock(fs_info);
+	write_unlock(&fs_info->tree_mod_log_lock);
 }
 
 /*
@@ -440,7 +416,7 @@ void btrfs_put_tree_mod_seq(struct btrfs_fs_info *fs_info,
  * for root replace operations, or the logical address of the affected
  * block for all other operations.
  *
- * Note: must be called with write lock (tree_mod_log_write_lock).
+ * Note: must be called with write lock for fs_info::tree_mod_log_lock.
  */
 static noinline int
 __tree_mod_log_insert(struct btrfs_fs_info *fs_info, struct tree_mod_elem *tm)
@@ -480,7 +456,7 @@ __tree_mod_log_insert(struct btrfs_fs_info *fs_info, struct tree_mod_elem *tm)
  * Determines if logging can be omitted. Returns 1 if it can. Otherwise, it
  * returns zero with the tree_mod_log_lock acquired. The caller must hold
  * this until all tree mod log insertions are recorded in the rb tree and then
- * call tree_mod_log_write_unlock() to release.
+ * write unlock fs_info::tree_mod_log_lock.
  */
 static inline int tree_mod_dont_log(struct btrfs_fs_info *fs_info,
 				    struct extent_buffer *eb) {
@@ -490,9 +466,9 @@ static inline int tree_mod_dont_log(struct btrfs_fs_info *fs_info,
 	if (eb && btrfs_header_level(eb) == 0)
 		return 1;
 
-	tree_mod_log_write_lock(fs_info);
+	write_lock(&fs_info->tree_mod_log_lock);
 	if (list_empty(&(fs_info)->tree_mod_seq_list)) {
-		tree_mod_log_write_unlock(fs_info);
+		write_unlock(&fs_info->tree_mod_log_lock);
 		return 1;
 	}
 
@@ -556,7 +532,7 @@ tree_mod_log_insert_key(struct btrfs_fs_info *fs_info,
 	}
 
 	ret = __tree_mod_log_insert(fs_info, tm);
-	tree_mod_log_write_unlock(fs_info);
+	write_unlock(&eb->fs_info->tree_mod_log_lock);
 	if (ret)
 		kfree(tm);
 
@@ -620,7 +596,7 @@ tree_mod_log_insert_move(struct btrfs_fs_info *fs_info,
 	ret = __tree_mod_log_insert(fs_info, tm);
 	if (ret)
 		goto free_tms;
-	tree_mod_log_write_unlock(fs_info);
+	write_unlock(&eb->fs_info->tree_mod_log_lock);
 	kfree(tm_list);
 
 	return 0;
@@ -631,7 +607,7 @@ free_tms:
 		kfree(tm_list[i]);
 	}
 	if (locked)
-		tree_mod_log_write_unlock(fs_info);
+		write_unlock(&eb->fs_info->tree_mod_log_lock);
 	kfree(tm_list);
 	kfree(tm);
 
@@ -712,7 +688,7 @@ tree_mod_log_insert_root(struct btrfs_fs_info *fs_info,
 	if (!ret)
 		ret = __tree_mod_log_insert(fs_info, tm);
 
-	tree_mod_log_write_unlock(fs_info);
+	write_unlock(&fs_info->tree_mod_log_lock);
 	if (ret)
 		goto free_tms;
 	kfree(tm_list);
@@ -739,7 +715,7 @@ __tree_mod_log_search(struct btrfs_fs_info *fs_info, u64 start, u64 min_seq,
 	struct tree_mod_elem *cur = NULL;
 	struct tree_mod_elem *found = NULL;
 
-	tree_mod_log_read_lock(fs_info);
+	read_lock(&fs_info->tree_mod_log_lock);
 	tm_root = &fs_info->tree_mod_log;
 	node = tm_root->rb_node;
 	while (node) {
@@ -767,7 +743,7 @@ __tree_mod_log_search(struct btrfs_fs_info *fs_info, u64 start, u64 min_seq,
 			break;
 		}
 	}
-	tree_mod_log_read_unlock(fs_info);
+	read_unlock(&fs_info->tree_mod_log_lock);
 
 	return found;
 }
@@ -848,7 +824,7 @@ tree_mod_log_eb_copy(struct btrfs_fs_info *fs_info, struct extent_buffer *dst,
 			goto free_tms;
 	}
 
-	tree_mod_log_write_unlock(fs_info);
+	write_unlock(&fs_info->tree_mod_log_lock);
 	kfree(tm_list);
 
 	return 0;
@@ -860,7 +836,7 @@ free_tms:
 			kfree(tm_list[i]);
 	}
 	if (locked)
-		tree_mod_log_write_unlock(fs_info);
+		write_unlock(&fs_info->tree_mod_log_lock);
 	kfree(tm_list);
 
 	return ret;
@@ -920,7 +896,7 @@ tree_mod_log_free_eb(struct btrfs_fs_info *fs_info, struct extent_buffer *eb)
 		goto free_tms;
 
 	ret = __tree_mod_log_free_eb(fs_info, tm_list, nritems);
-	tree_mod_log_write_unlock(fs_info);
+	write_unlock(&eb->fs_info->tree_mod_log_lock);
 	if (ret)
 		goto free_tms;
 	kfree(tm_list);
@@ -1271,7 +1247,7 @@ __tree_mod_log_rewind(struct btrfs_fs_info *fs_info, struct extent_buffer *eb,
 	unsigned long p_size = sizeof(struct btrfs_key_ptr);
 
 	n = btrfs_header_nritems(eb);
-	tree_mod_log_read_lock(fs_info);
+	read_lock(&fs_info->tree_mod_log_lock);
 	while (tm && tm->seq >= time_seq) {
 		/*
 		 * all the operations are recorded with the operator used for
@@ -1326,7 +1302,7 @@ __tree_mod_log_rewind(struct btrfs_fs_info *fs_info, struct extent_buffer *eb,
 		if (tm->logical != first_tm->logical)
 			break;
 	}
-	tree_mod_log_read_unlock(fs_info);
+	read_unlock(&fs_info->tree_mod_log_lock);
 	btrfs_set_header_nritems(eb, n);
 }
@@ -851,14 +851,12 @@ struct btrfs_fs_info {
 	struct list_head delayed_iputs;
 	struct mutex cleaner_delayed_iput_mutex;
 
-	/* this protects tree_mod_seq_list */
-	spinlock_t tree_mod_seq_lock;
 	atomic64_t tree_mod_seq;
-	struct list_head tree_mod_seq_list;
 
-	/* this protects tree_mod_log */
+	/* this protects tree_mod_log and tree_mod_seq_list */
 	rwlock_t tree_mod_log_lock;
 	struct rb_root tree_mod_log;
+	struct list_head tree_mod_seq_list;
 
 	atomic_t nr_async_submits;
 	atomic_t async_submit_draining;
@@ -279,7 +279,7 @@ void btrfs_merge_delayed_refs(struct btrfs_trans_handle *trans,
 	if (head->is_data)
 		return;
 
-	spin_lock(&fs_info->tree_mod_seq_lock);
+	read_lock(&fs_info->tree_mod_log_lock);
 	if (!list_empty(&fs_info->tree_mod_seq_list)) {
 		struct seq_list *elem;
 
@@ -287,7 +287,7 @@ void btrfs_merge_delayed_refs(struct btrfs_trans_handle *trans,
 					struct seq_list, list);
 		seq = elem->seq;
 	}
-	spin_unlock(&fs_info->tree_mod_seq_lock);
+	read_unlock(&fs_info->tree_mod_log_lock);
 
 	ref = list_first_entry(&head->ref_list, struct btrfs_delayed_ref_node,
 			       list);
@@ -315,7 +315,7 @@ int btrfs_check_delayed_seq(struct btrfs_fs_info *fs_info,
 	struct seq_list *elem;
 	int ret = 0;
 
-	spin_lock(&fs_info->tree_mod_seq_lock);
+	read_lock(&fs_info->tree_mod_log_lock);
 	if (!list_empty(&fs_info->tree_mod_seq_list)) {
 		elem = list_first_entry(&fs_info->tree_mod_seq_list,
 					struct seq_list, list);
@@ -329,7 +329,7 @@ int btrfs_check_delayed_seq(struct btrfs_fs_info *fs_info,
 		}
 	}
 
-	spin_unlock(&fs_info->tree_mod_seq_lock);
+	read_unlock(&fs_info->tree_mod_log_lock);
 	return ret;
 }
@@ -2104,7 +2104,7 @@ static void free_root_extent_buffers(struct btrfs_root *root)
 }
 
 /* helper to cleanup tree roots */
-static void free_root_pointers(struct btrfs_fs_info *info, int chunk_root)
+static void free_root_pointers(struct btrfs_fs_info *info, bool free_chunk_root)
 {
 	free_root_extent_buffers(info->tree_root);
 
@@ -2113,7 +2113,7 @@ static void free_root_pointers(struct btrfs_fs_info *info, int chunk_root)
 	free_root_extent_buffers(info->csum_root);
 	free_root_extent_buffers(info->quota_root);
 	free_root_extent_buffers(info->uuid_root);
-	if (chunk_root)
+	if (free_chunk_root)
 		free_root_extent_buffers(info->chunk_root);
 	free_root_extent_buffers(info->free_space_root);
 }
@@ -2519,7 +2519,6 @@ int open_ctree(struct super_block *sb,
 	spin_lock_init(&fs_info->delayed_iput_lock);
 	spin_lock_init(&fs_info->defrag_inodes_lock);
 	spin_lock_init(&fs_info->free_chunk_lock);
-	spin_lock_init(&fs_info->tree_mod_seq_lock);
 	spin_lock_init(&fs_info->super_lock);
 	spin_lock_init(&fs_info->qgroup_op_lock);
 	spin_lock_init(&fs_info->buffer_lock);
@@ -3136,7 +3135,7 @@ fail_block_groups:
 	btrfs_free_block_groups(fs_info);
 
 fail_tree_roots:
-	free_root_pointers(fs_info, 1);
+	free_root_pointers(fs_info, true);
 	invalidate_inode_pages2(fs_info->btree_inode->i_mapping);
 
 fail_sb_buffer:
@@ -3165,7 +3164,7 @@ recovery_tree_root:
 	if (!btrfs_test_opt(tree_root->fs_info, USEBACKUPROOT))
 		goto fail_tree_roots;
 
-	free_root_pointers(fs_info, 0);
+	free_root_pointers(fs_info, false);
 
 	/* don't use the log in recovery mode, it won't be valid */
 	btrfs_set_super_log_root(disk_super, 0);
@@ -3862,7 +3861,7 @@ void close_ctree(struct btrfs_root *root)
 	btrfs_stop_all_workers(fs_info);
 
 	clear_bit(BTRFS_FS_OPEN, &fs_info->flags);
-	free_root_pointers(fs_info, 1);
+	free_root_pointers(fs_info, true);
 
 	iput(fs_info->btree_inode);
@@ -4049,6 +4049,14 @@ retry:
 		 */
 		scanned = 1;
 		index = 0;
+
+		/*
+		 * If we're looping we could run into a page that is locked by a
+		 * writer and that writer could be waiting on writeback for a
+		 * page in our current bio, and thus deadlock, so flush the
+		 * write bio here.
+		 */
+		flush_write_bio(data);
 		goto retry;
 	}
@@ -112,7 +112,6 @@ struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(void)
 	spin_lock_init(&fs_info->qgroup_op_lock);
 	spin_lock_init(&fs_info->super_lock);
 	spin_lock_init(&fs_info->fs_roots_radix_lock);
-	spin_lock_init(&fs_info->tree_mod_seq_lock);
 	mutex_init(&fs_info->qgroup_ioctl_lock);
 	mutex_init(&fs_info->qgroup_rescan_lock);
 	rwlock_init(&fs_info->tree_mod_log_lock);
@@ -1917,6 +1917,14 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans,
 	struct btrfs_transaction *prev_trans = NULL;
 	int ret;
 
+	/*
+	 * Some places just start a transaction to commit it. We need to make
+	 * sure that if this commit fails that the abort code actually marks the
+	 * transaction as failed, so set trans->dirty to make the abort code do
+	 * the right thing.
+	 */
+	trans->dirty = true;
+
 	/* Stop the commit early if ->aborted is set */
 	if (unlikely(ACCESS_ONCE(cur_trans->aborted))) {
 		ret = cur_trans->aborted;
@@ -4443,13 +4443,8 @@ static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans,
 				      struct btrfs_file_extent_item);
 
 		if (btrfs_file_extent_type(leaf, extent) ==
-		    BTRFS_FILE_EXTENT_INLINE) {
-			len = btrfs_file_extent_inline_len(leaf,
-							   path->slots[0],
-							   extent);
-			ASSERT(len == i_size);
+		    BTRFS_FILE_EXTENT_INLINE)
 			return 0;
-		}
 
 		len = btrfs_file_extent_num_bytes(leaf, extent);
 		/* Last extent goes beyond i_size, no need to log a hole. */
@@ -247,9 +247,14 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon)
 	 */
 	mutex_lock(&tcon->ses->session_mutex);
 	rc = cifs_negotiate_protocol(0, tcon->ses);
-	if (!rc && tcon->ses->need_reconnect)
+	if (!rc && tcon->ses->need_reconnect) {
 		rc = cifs_setup_session(0, tcon->ses, nls_codepage);
+		if ((rc == -EACCES) && !tcon->retry) {
+			rc = -EHOSTDOWN;
+			mutex_unlock(&tcon->ses->session_mutex);
+			goto failed;
+		}
+	}
 	if (rc || !tcon->need_reconnect) {
 		mutex_unlock(&tcon->ses->session_mutex);
 		goto out;
@@ -291,6 +296,7 @@ out:
 	case SMB2_SET_INFO:
 		rc = -EAGAIN;
 	}
+failed:
 	unload_nls(nls_codepage);
 	return rc;
 }
@@ -1047,9 +1047,9 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
 
 	if (EXT2_BLOCKS_PER_GROUP(sb) == 0)
 		goto cantfind_ext2;
-	sbi->s_groups_count = ((le32_to_cpu(es->s_blocks_count) -
-				le32_to_cpu(es->s_first_data_block) - 1)
-					/ EXT2_BLOCKS_PER_GROUP(sb)) + 1;
+	sbi->s_groups_count = ((le32_to_cpu(es->s_blocks_count) -
+				le32_to_cpu(es->s_first_data_block) - 1)
+				/ EXT2_BLOCKS_PER_GROUP(sb)) + 1;
 	db_count = (sbi->s_groups_count + EXT2_DESC_PER_BLOCK(sb) - 1) /
 		   EXT2_DESC_PER_BLOCK(sb);
 	sbi->s_group_desc = kmalloc (db_count * sizeof (struct buffer_head *), GFP_KERNEL);
@@ -468,17 +468,26 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 	    nr_to_submit) {
 		gfp_t gfp_flags = GFP_NOFS;
 
+		/*
+		 * Since bounce page allocation uses a mempool, we can only use
+		 * a waiting mask (i.e. request guaranteed allocation) on the
+		 * first page of the bio.  Otherwise it can deadlock.
+		 */
+		if (io->io_bio)
+			gfp_flags = GFP_NOWAIT | __GFP_NOWARN;
 	retry_encrypt:
 		data_page = fscrypt_encrypt_page(inode, page, PAGE_SIZE, 0,
 						page->index, gfp_flags);
 		if (IS_ERR(data_page)) {
 			ret = PTR_ERR(data_page);
-			if (ret == -ENOMEM && wbc->sync_mode == WB_SYNC_ALL) {
-				if (io->io_bio) {
+			if (ret == -ENOMEM &&
+			    (io->io_bio || wbc->sync_mode == WB_SYNC_ALL)) {
+				gfp_flags = GFP_NOFS;
+				if (io->io_bio)
 					ext4_io_submit(io);
-					congestion_wait(BLK_RW_ASYNC, HZ/50);
-				}
-				gfp_flags |= __GFP_NOFAIL;
+				else
+					gfp_flags |= __GFP_NOFAIL;
+				congestion_wait(BLK_RW_ASYNC, HZ/50);
 				goto retry_encrypt;
 			}
 			data_page = NULL;
@@ -89,7 +89,7 @@ config NFS_V4
 config NFS_SWAP
 	bool "Provide swap over NFS support"
 	default n
-	depends on NFS_FS
+	depends on NFS_FS && SWAP
 	select SUNRPC_SWAP
 	help
 	  This option enables swapon to work on files located on NFS mounts.
@@ -419,7 +419,7 @@ static bool referring_call_exists(struct nfs_client *clp,
 				  uint32_t nrclists,
 				  struct referring_call_list *rclists)
 {
-	bool status = 0;
+	bool status = false;
 	int i, j;
 	struct nfs4_session *session;
 	struct nfs4_slot_table *tbl;
fs/nfs/dir.c
@@ -57,7 +57,7 @@ static void nfs_readdir_clear_array(struct page*);
|
||||
const struct file_operations nfs_dir_operations = {
|
||||
.llseek = nfs_llseek_dir,
|
||||
.read = generic_read_dir,
|
||||
.iterate_shared = nfs_readdir,
|
||||
.iterate = nfs_readdir,
|
||||
.open = nfs_opendir,
|
||||
.release = nfs_closedir,
|
||||
.fsync = nfs_fsync_dir,
|
||||
@@ -145,7 +145,6 @@ struct nfs_cache_array_entry {
|
||||
};
|
||||
|
||||
struct nfs_cache_array {
|
||||
atomic_t refcount;
|
||||
int size;
|
||||
int eof_index;
|
||||
u64 last_cookie;
|
||||
@@ -170,6 +169,17 @@ typedef struct {
|
||||
unsigned int eof:1;
|
||||
} nfs_readdir_descriptor_t;
|
||||
|
||||
static
|
||||
void nfs_readdir_init_array(struct page *page)
|
||||
{
|
||||
struct nfs_cache_array *array;
|
||||
|
||||
array = kmap_atomic(page);
|
||||
memset(array, 0, sizeof(struct nfs_cache_array));
|
||||
array->eof_index = -1;
|
||||
kunmap_atomic(array);
|
||||
}
|
||||
|
||||
/*
|
||||
* The caller is responsible for calling nfs_readdir_release_array(page)
|
||||
*/
|
||||
@@ -201,20 +211,12 @@ void nfs_readdir_clear_array(struct page *page)
|
||||
int i;
|
||||
|
||||
array = kmap_atomic(page);
|
||||
if (atomic_dec_and_test(&array->refcount))
|
||||
for (i = 0; i < array->size; i++)
|
||||
kfree(array->array[i].string.name);
|
||||
for (i = 0; i < array->size; i++)
|
||||
kfree(array->array[i].string.name);
|
||||
array->size = 0;
|
||||
kunmap_atomic(array);
|
||||
}
|
||||
|
||||
static bool grab_page(struct page *page)
|
||||
{
|
||||
struct nfs_cache_array *array = kmap_atomic(page);
|
||||
bool res = atomic_inc_not_zero(&array->refcount);
|
||||
kunmap_atomic(array);
|
||||
return res;
|
||||
}
|
||||
|
||||
/*
|
||||
* the caller is responsible for freeing qstr.name
|
||||
* when called by nfs_readdir_add_to_array, the strings will be freed in
|
||||
@@ -287,7 +289,7 @@ int nfs_readdir_search_for_pos(struct nfs_cache_array *array, nfs_readdir_descri
|
||||
desc->cache_entry_index = index;
|
||||
return 0;
|
||||
out_eof:
|
||||
desc->eof = 1;
|
||||
desc->eof = true;
|
||||
return -EBADCOOKIE;
|
||||
}
|
||||
|
||||
@@ -341,7 +343,7 @@ int nfs_readdir_search_for_cookie(struct nfs_cache_array *array, nfs_readdir_des
|
||||
if (array->eof_index >= 0) {
|
||||
status = -EBADCOOKIE;
|
||||
if (*desc->dir_cookie == array->last_cookie)
|
||||
desc->eof = 1;
|
||||
desc->eof = true;
|
||||
}
|
||||
out:
|
||||
return status;
|
||||
@@ -653,6 +655,8 @@ int nfs_readdir_xdr_to_array(nfs_readdir_descriptor_t *desc, struct page *page,
 	int status = -ENOMEM;
 	unsigned int array_size = ARRAY_SIZE(pages);
 
+	nfs_readdir_init_array(page);
+
 	entry.prev_cookie = 0;
 	entry.cookie = desc->last_cookie;
 	entry.eof = 0;
@@ -673,9 +677,8 @@ int nfs_readdir_xdr_to_array(nfs_readdir_descriptor_t *desc, struct page *page,
 		status = PTR_ERR(array);
 		goto out_label_free;
 	}
-	memset(array, 0, sizeof(struct nfs_cache_array));
-	atomic_set(&array->refcount, 1);
-	array->eof_index = -1;
+
+	array = kmap(page);
 
 	status = nfs_readdir_alloc_pages(pages, array_size);
 	if (status < 0)
@@ -730,6 +733,7 @@ int nfs_readdir_filler(nfs_readdir_descriptor_t *desc, struct page* page)
 	unlock_page(page);
 	return 0;
 error:
+	nfs_readdir_clear_array(page);
 	unlock_page(page);
 	return ret;
 }
@@ -737,7 +741,6 @@ int nfs_readdir_filler(nfs_readdir_descriptor_t *desc, struct page* page)
 static
 void cache_page_release(nfs_readdir_descriptor_t *desc)
 {
-	nfs_readdir_clear_array(desc->page);
 	put_page(desc->page);
 	desc->page = NULL;
 }
@@ -745,33 +748,34 @@ void cache_page_release(nfs_readdir_descriptor_t *desc)
 static
 struct page *get_cache_page(nfs_readdir_descriptor_t *desc)
 {
-	struct page *page;
-
-	for (;;) {
-		page = read_cache_page(desc->file->f_mapping,
+	return read_cache_page(desc->file->f_mapping,
 			desc->page_index, (filler_t *)nfs_readdir_filler, desc);
-		if (IS_ERR(page) || grab_page(page))
-			break;
-		put_page(page);
-	}
-	return page;
 }
 
 /*
  * Returns 0 if desc->dir_cookie was found on page desc->page_index
+ * and locks the page to prevent removal from the page cache.
  */
 static
-int find_cache_page(nfs_readdir_descriptor_t *desc)
+int find_and_lock_cache_page(nfs_readdir_descriptor_t *desc)
 {
 	int res;
 
 	desc->page = get_cache_page(desc);
 	if (IS_ERR(desc->page))
 		return PTR_ERR(desc->page);
-
-	res = nfs_readdir_search_array(desc);
+	res = lock_page_killable(desc->page);
 	if (res != 0)
-		cache_page_release(desc);
+		goto error;
+	res = -EAGAIN;
+	if (desc->page->mapping != NULL) {
+		res = nfs_readdir_search_array(desc);
+		if (res == 0)
+			return 0;
+	}
+	unlock_page(desc->page);
+error:
+	cache_page_release(desc);
 	return res;
 }
 
@@ -786,7 +790,7 @@ int readdir_search_pagecache(nfs_readdir_descriptor_t *desc)
 		desc->last_cookie = 0;
 	}
 	do {
-		res = find_cache_page(desc);
+		res = find_and_lock_cache_page(desc);
 	} while (res == -EAGAIN);
 	return res;
 }
@@ -815,7 +819,7 @@ int nfs_do_filldir(nfs_readdir_descriptor_t *desc)
 		ent = &array->array[i];
 		if (!dir_emit(desc->ctx, ent->string.name, ent->string.len,
 		    nfs_compat_user_ino64(ent->ino), ent->d_type)) {
-			desc->eof = 1;
+			desc->eof = true;
 			break;
 		}
 		desc->ctx->pos++;
@@ -827,11 +831,10 @@ int nfs_do_filldir(nfs_readdir_descriptor_t *desc)
 			ctx->duped = 1;
 	}
 	if (array->eof_index >= 0)
-		desc->eof = 1;
+		desc->eof = true;
 
 	nfs_readdir_release_array(desc->page);
 out:
-	cache_page_release(desc);
 	dfprintk(DIRCACHE, "NFS: nfs_do_filldir() filling ended @ cookie %Lu; returning = %d\n",
 			(unsigned long long)*desc->dir_cookie, res);
 	return res;
@@ -877,13 +880,13 @@ int uncached_readdir(nfs_readdir_descriptor_t *desc)
 
 	status = nfs_do_filldir(desc);
 
+out_release:
+	nfs_readdir_clear_array(desc->page);
+	cache_page_release(desc);
 out:
 	dfprintk(DIRCACHE, "NFS: %s: returns %d\n",
 			__func__, status);
 	return status;
-out_release:
-	cache_page_release(desc);
-	goto out;
 }
 
 /* The file offset position represents the dirent entry number. A
@@ -928,7 +931,7 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx)
 	if (res == -EBADCOOKIE) {
 		res = 0;
 		/* This means either end of directory */
-		if (*desc->dir_cookie && desc->eof == 0) {
+		if (*desc->dir_cookie && !desc->eof) {
 			/* Or that the server has 'lost' a cookie */
 			res = uncached_readdir(desc);
 			if (res == 0)
@@ -948,6 +951,8 @@ static int nfs_readdir(struct file *file, struct dir_context *ctx)
 			break;
 
 		res = nfs_do_filldir(desc);
+		unlock_page(desc->page);
+		cache_page_release(desc);
 		if (res < 0)
 			break;
 	} while (!desc->eof);
@@ -960,11 +965,13 @@ out:
 
 static loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int whence)
 {
+	struct inode *inode = file_inode(filp);
 	struct nfs_open_dir_context *dir_ctx = filp->private_data;
 
 	dfprintk(FILE, "NFS: llseek dir(%pD2, %lld, %d)\n",
 			filp, offset, whence);
 
+	inode_lock(inode);
 	switch (whence) {
 	case 1:
 		offset += filp->f_pos;
@@ -972,13 +979,16 @@ static loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int whence)
 		if (offset >= 0)
 			break;
 	default:
-		return -EINVAL;
+		offset = -EINVAL;
+		goto out;
 	}
 	if (offset != filp->f_pos) {
 		filp->f_pos = offset;
 		dir_ctx->dir_cookie = 0;
 		dir_ctx->duped = 0;
 	}
+out:
+	inode_unlock(inode);
 	return offset;
 }
 
@@ -847,7 +847,7 @@ nfs4_find_client_sessionid(struct net *net, const struct sockaddr *addr,
 
 	spin_lock(&nn->nfs_client_lock);
 	list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
-		if (nfs4_cb_match_client(addr, clp, minorversion) == false)
+		if (!nfs4_cb_match_client(addr, clp, minorversion))
 			continue;
 
 		if (!nfs4_has_session(clp))
@@ -2916,6 +2916,11 @@ static struct nfs4_state *nfs4_do_open(struct inode *dir,
 			exception.retry = 1;
 			continue;
 		}
+		if (status == -NFS4ERR_EXPIRED) {
+			nfs4_schedule_lease_recovery(server->nfs_client);
+			exception.retry = 1;
+			continue;
+		}
 		if (status == -EAGAIN) {
 			/* We must have found a delegation */
 			exception.retry = 1;