Merge v4.14.194 into q
* tag 'v4.14.194' of https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux:
  Linux 4.14.194
  dm cache: remove all obsolete writethrough-specific code
  dm cache: submit writethrough writes in parallel to origin and cache
  dm cache: pass cache structure to mode functions
  genirq/affinity: Make affinity setting if activated opt-in
  genirq/affinity: Handle affinity setting on inactive interrupts correctly
  khugepaged: retract_page_tables() remember to test exit
  sh: landisk: Add missing initialization of sh_io_port_base
  tools build feature: Quote CC and CXX for their arguments
  perf bench mem: Always memset source before memcpy
  ALSA: echoaudio: Fix potential Oops in snd_echo_resume()
  mfd: dln2: Run event handler loop under spinlock
  test_kmod: avoid potential double free in trigger_config_run_type()
  fs/ufs: avoid potential u32 multiplication overflow
  nfs: Fix getxattr kernel panic and memory overflow
  net: qcom/emac: add missed clk_disable_unprepare in error path of emac_clks_phase1_init
  drm/vmwgfx: Fix two list_for_each loop exit tests
  drm/vmwgfx: Use correct vmw_legacy_display_unit pointer
  Input: sentelic - fix error return when fsp_reg_write fails
  i2c: rcar: avoid race when unregistering slave
  tools build feature: Use CC and CXX from parent
  pwm: bcm-iproc: handle clk_get_rate() return
  clk: clk-atlas6: fix return value check in atlas6_clk_init()
  i2c: rcar: slave: only send STOP event when we have been addressed
  iommu/vt-d: Enforce PASID devTLB field mask
  iommu/omap: Check for failure of a call to omap_iommu_dump_ctx
  dm rq: don't call blk_mq_queue_stopped() in dm_stop_queue()
  gpu: ipu-v3: image-convert: Combine rotate/no-rotate irq handlers
  USB: serial: ftdi_sio: clean up receive processing
  USB: serial: ftdi_sio: make process-packet buffer unsigned
  RDMA/ipoib: Return void from ipoib_ib_dev_stop()
  mfd: arizona: Ensure 32k clock is put on driver unbind and error
  drm/imx: imx-ldb: Disable both channels for split mode in enc->disable()
  perf intel-pt: Fix FUP packet state
  pseries: Fix 64 bit logical memory block panic
  watchdog: f71808e_wdt: clear watchdog timeout occurred flag
  watchdog: f71808e_wdt: remove use of wrong watchdog_info option
  watchdog: f71808e_wdt: indicate WDIOF_CARDRESET support in watchdog_info.options
  tracing: Use trace_sched_process_free() instead of exit() for pid tracing
  tracing/hwlat: Honor the tracing_cpumask
  kprobes: Fix NULL pointer dereference at kprobe_ftrace_handler
  ftrace: Setup correct FTRACE_FL_REGS flags for module
  ocfs2: change slot number type s16 to u16
  ext2: fix missing percpu_counter_inc
  MIPS: CPU#0 is not hotpluggable
  mac80211: fix misplaced while instead of if
  bcache: allocate meta data pages as compound pages
  md/raid5: Fix Force reconstruct-write io stuck in degraded raid5
  net/compat: Add missing sock updates for SCM_RIGHTS
  net: stmmac: dwmac1000: provide multicast filter fallback
  net: ethernet: stmmac: Disable hardware multicast filter
  powerpc: Fix circular dependency between percpu.h and mmu.h
  xtensa: fix xtensa_pmu_setup prototype
  iio: dac: ad5592r: fix unbalanced mutex unlocks in ad5592r_read_raw()
  dt-bindings: iio: io-channel-mux: Fix compatible string in example code
  btrfs: fix memory leaks after failure to lookup checksums during inode logging
  btrfs: only search for left_info if there is no right_info in try_merge_free_space
  btrfs: don't allocate anonymous block device for user invisible roots
  PCI: hotplug: ACPI: Fix context refcounting in acpiphp_grab_context()
  smb3: warn on confusing error scenario with sec=krb5
  net: initialize fastreuse on inet_inherit_port
  xen/balloon: make the balloon wait interruptible
  xen/balloon: fix accounting in alloc_xenballooned_pages error path
  irqdomain/treewide: Free firmware node after domain removal
  ARM: 8992/1: Fix unwind_frame for clang-built kernels
  parisc: mask out enable and reserved bits from sba imask
  parisc: Implement __smp_store_release and __smp_load_acquire barriers
  mtd: rawnand: qcom: avoid write to unavailable register
  spi: spidev: Align buffers for DMA
  9p: Fix memory leak in v9fs_mount
  ALSA: usb-audio: work around streaming quirk for MacroSilicon MS2109
  fs/minix: reject too-large maximum file size
  fs/minix: don't allow getting deleted inodes
  fs/minix: check return value of sb_getblk()
  bitfield.h: don't compile-time validate _val in FIELD_FIT
  crypto: cpt - don't sleep of CRYPTO_TFM_REQ_MAY_SLEEP was not specified
  crypto: ccp - Fix use of merged scatterlists
  crypto: qat - fix double free in qat_uclo_create_batch_init_list
  ALSA: usb-audio: add quirk for Pioneer DDJ-RB
  ALSA: usb-audio: fix overeager device match for MacroSilicon MS2109
  ALSA: usb-audio: Creative USB X-Fi Pro SB1095 volume knob support
  USB: serial: cp210x: enable usb generic throttle/unthrottle
  USB: serial: cp210x: re-enable auto-RTS on open
  net: Set fput_needed iff FDPUT_FPUT is set
  net: refactor bind_bucket fastreuse into helper
  net/nfc/rawsock.c: add CAP_NET_RAW check.
  drivers/net/wan/lapbether: Added needed_headroom and a skb->len check
  af_packet: TPACKET_V3: fix fill status rwlock imbalance
  crypto: aesni - add compatibility with IAS
  x86/fsgsbase/64: Fix NULL deref in 86_fsgsbase_read_task
  pinctrl-single: fix pcs_parse_pinconf() return value
  dlm: Fix kobject memleak
  fsl/fman: fix eth hash table allocation
  fsl/fman: check dereferencing null pointer
  fsl/fman: fix unreachable code
  fsl/fman: fix dereference null return value
  fsl/fman: use 32-bit unsigned integer
  net: spider_net: Fix the size used in a 'dma_free_coherent()' call
  liquidio: Fix wrong return value in cn23xx_get_pf_num()
  net: ethernet: aquantia: Fix wrong return value
  tools, build: Propagate build failures from tools/build/Makefile.build
  wl1251: fix always return 0 error
  s390/qeth: don't process empty bridge port events
  selftests/powerpc: Fix online CPU selection
  PCI: Release IVRS table in AMD ACS quirk
  selftests/powerpc: Fix CPU affinity for child process
  Bluetooth: hci_serdev: Only unregister device if it was registered
  power: supply: check if calc_soc succeeded in pm860x_init_battery
  Smack: prevent underflow in smk_set_cipso()
  Smack: fix another vsscanf out of bounds
  net: dsa: mv88e6xxx: MV88E6097 does not support jumbo configuration
  scsi: mesh: Fix panic after host or bus reset
  usb: dwc2: Fix error path in gadget registration
  MIPS: OCTEON: add missing put_device() call in dwc3_octeon_device_init()
  coresight: tmc: Fix TMC mode read in tmc_read_unprepare_etb()
  thermal: ti-soc-thermal: Fix reversed condition in ti_thermal_expose_sensor()
  USB: serial: iuu_phoenix: fix led-activity helpers
  drm/imx: tve: fix regulator_disable error path
  PCI/ASPM: Add missing newline in sysfs 'policy'
  staging: rtl8192u: fix a dubious looking mask before a shift
  powerpc/vdso: Fix vdso cpu truncation
  mwifiex: Prevent memory corruption handling keys
  scsi: scsi_debug: Add check for sdebug_max_queue during module init
  drm/bridge: sil_sii8620: initialize return of sii8620_readb
  drm: panel: simple: Fix bpc for LG LB070WV8 panel
  leds: core: Flush scheduled work for system suspend
  PCI: Fix pci_cfg_wait queue locking problem
  xfs: fix reflink quota reservation accounting error
  media: exynos4-is: Add missed check for pinctrl_lookup_state()
  media: firewire: Using uninitialized values in node_probe()
  ipvs: allow connection reuse for unconfirmed conntrack
  scsi: eesox: Fix different dev_id between request_irq() and free_irq()
  scsi: powertec: Fix different dev_id between request_irq() and free_irq()
  drm/radeon: fix array out-of-bounds read and write issues
  cxl: Fix kobject memleak
  drm/mipi: use dcs write for mipi_dsi_dcs_set_tear_scanline
  scsi: cumana_2: Fix different dev_id between request_irq() and free_irq()
  ASoC: Intel: bxt_rt298: add missing .owner field
  media: omap3isp: Add missed v4l2_ctrl_handler_free() for preview_init_entities()
  leds: lm355x: avoid enum conversion warning
  drm/arm: fix unintentional integer overflow on left shift
  iio: improve IIO_CONCENTRATION channel type description
  video: pxafb: Fix the function used to balance a 'dma_alloc_coherent()' call
  console: newport_con: fix an issue about leak related system resources
  video: fbdev: sm712fb: fix an issue about iounmap for a wrong address
  agp/intel: Fix a memory leak on module initialisation failure
  ACPICA: Do not increment operation_region reference counts for field units
  bcache: fix super block seq numbers comparision in register_cache_set()
  dyndbg: fix a BUG_ON in ddebug_describe_flags
  usb: bdc: Halt controller on suspend
  bdc: Fix bug causing crash after multiple disconnects
  usb: gadget: net2280: fix memory leak on probe error handling paths
  gpu: host1x: debug: Fix multiple channels emitting messages simultaneously
  iwlegacy: Check the return value of pcie_capability_read_*()
  brcmfmac: set state of hanger slot to FREE when flushing PSQ
  brcmfmac: To fix Bss Info flag definition Bug
  mm/mmap.c: Add cond_resched() for exit_mmap() CPU stalls
  irqchip/irq-mtk-sysirq: Replace spinlock with raw_spinlock
  drm/debugfs: fix plain echo to connector "force" attribute
  drm/nouveau: fix multiple instances of reference count leaks
  arm64: dts: hisilicon: hikey: fixes to comply with adi,adv7533 DT binding
  md-cluster: fix wild pointer of unlock_all_bitmaps()
  video: fbdev: neofb: fix memory leak in neo_scan_monitor()
  drm/radeon: Fix reference count leaks caused by pm_runtime_get_sync
  fs/btrfs: Add cond_resched() for try_release_extent_mapping() stalls
  Bluetooth: add a mutex lock to avoid UAF in do_enale_set
  drm/tilcdc: fix leak & null ref in panel_connector_get_modes
  ARM: socfpga: PM: add missing put_device() call in socfpga_setup_ocram_self_refresh()
  spi: lantiq: fix: Rx overflow error in full duplex mode
  ARM: at91: pm: add missing put_device() call in at91_pm_sram_init()
  platform/x86: intel-vbtn: Fix return value check in check_acpi_dev()
  platform/x86: intel-hid: Fix return value check in check_acpi_dev()
  m68k: mac: Fix IOP status/control register writes
  m68k: mac: Don't send IOP message until channel is idle
  arm64: dts: exynos: Fix silent hang after boot on Espresso
  arm64: dts: qcom: msm8916: Replace invalid bias-pull-none property
  EDAC: Fix reference count leaks
  arm64: dts: rockchip: fix rk3399-puma gmac reset gpio
  arm64: dts: rockchip: fix rk3399-puma vcc5v0-host gpio
  sched: correct SD_flags returned by tl->sd_flags()
  x86/mce/inject: Fix a wrong assignment of i_mce.status
  cgroup: add missing skcd->no_refcnt check in cgroup_sk_clone()
  HID: input: Fix devices that return multiple bytes in battery report
  tracepoint: Mark __tracepoint_string's __used
  Smack: fix use-after-free in smk_write_relabel_self()
  rxrpc: Fix race between recvmsg and sendmsg on immediate call failure
  usb: hso: check for return value in hso_serial_common_create()
  selftests/net: relax cpu affinity requirement in msg_zerocopy test
  Revert "vxlan: fix tos value before xmit"
  openvswitch: Prevent kernel-infoleak in ovs_ct_put_key()
  net: gre: recompute gre csum for sctp over gre tunnels
  hv_netvsc: do not use VF device if link is down
  net: lan78xx: replace bogus endpoint lookup
  vxlan: Ensure FDB dump is performed under RCU
  net: ethernet: mtk_eth_soc: fix MTU warnings
  ipv6: fix memory leaks on IPV6_ADDRFORM path
  ipv4: Silence suspicious RCU usage warning
  xattr: break delegations in {set,remove}xattr
  Drivers: hv: vmbus: Ignore CHANNELMSG_TL_CONNECT_RESULT(23)
  tools lib traceevent: Fix memory leak in process_dynamic_array_len
  atm: fix atm_dev refcnt leaks in atmtcp_remove_persistent
  igb: reinit_locked() should be called with rtnl_lock
  cfg80211: check vendor command doit pointer before use
  i2c: slave: add sanity check when unregistering
  i2c: slave: improve sanity check when registering
  drm/nouveau/fbcon: zero-initialise the mode_cmd2 structure
  drm/nouveau/fbcon: fix module unload when fbcon init has failed for some reason
  net/9p: validate fds in p9_fd_open
  leds: 88pm860x: fix use-after-free on unbind
  leds: lm3533: fix use-after-free on unbind
  leds: da903x: fix use-after-free on unbind
  leds: wm831x-status: fix use-after-free on unbind
  mtd: properly check all write ioctls for permissions
  vgacon: Fix for missing check in scrollback handling
  binder: Prevent context manager from incrementing ref 0
  omapfb: dss: Fix max fclk divider for omap36xx
  Bluetooth: Prevent out-of-bounds read in hci_inquiry_result_with_rssi_evt()
  Bluetooth: Prevent out-of-bounds read in hci_inquiry_result_evt()
  Bluetooth: Fix slab-out-of-bounds read in hci_extended_inquiry_result_evt()
  staging: android: ashmem: Fix lockdep warning for write operation
  ALSA: seq: oss: Serialize ioctls
  usb: xhci: Fix ASMedia ASM1142 DMA addressing
  usb: xhci: define IDs for various ASMedia host controllers
  USB: iowarrior: fix up report size handling for some devices
  net/mlx5e: Don't support phys switch id if not in switchdev mode
  USB: serial: qcserial: add EM7305 QDL product ID

Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>

Conflicts:
	drivers/hwtracing/coresight/coresight-tmc-etf.c
	net/ipv4/inet_connection_sock.c
@@ -1524,7 +1524,8 @@ What: /sys/bus/iio/devices/iio:deviceX/in_concentrationX_voc_raw
 KernelVersion:	4.3
 Contact:	linux-iio@vger.kernel.org
 Description:
-		Raw (unscaled no offset etc.) percentage reading of a substance.
+		Raw (unscaled no offset etc.) reading of a substance. Units
+		after application of scale and offset are percents.

-What:		/sys/bus/iio/devices/iio:deviceX/in_resistance_raw
+What:		/sys/bus/iio/devices/iio:deviceX/in_resistanceX_raw

@@ -21,7 +21,7 @@ controller state. The mux controller state is described in

 Example:
	mux: mux-controller {
-		compatible = "mux-gpio";
+		compatible = "gpio-mux";
		#mux-control-cells = <0>;

		mux-gpios = <&pioA 0 GPIO_ACTIVE_HIGH>,

 Makefile | 2 +-
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 14
-SUBLEVEL = 193
+SUBLEVEL = 194
 EXTRAVERSION =
 NAME = Petit Gorille

@@ -20,6 +20,19 @@
 * A simple function epilogue looks like this:
 *		ldm	sp, {fp, sp, pc}
 *
+ * When compiled with clang, pc and sp are not pushed. A simple function
+ * prologue looks like this when built with clang:
+ *
+ *		stmdb	{..., fp, lr}
+ *		add	fp, sp, #x
+ *		sub	sp, sp, #y
+ *
+ * A simple function epilogue looks like this when built with clang:
+ *
+ *		sub	sp, fp, #x
+ *		ldm	{..., fp, pc}
+ *
+ *
 * Note that with framepointer enabled, even the leaf functions have the same
 * prologue and epilogue, therefore we can ignore the LR value in this case.
 */
@@ -32,6 +45,16 @@ int notrace unwind_frame(struct stackframe *frame)
	low = frame->sp;
	high = ALIGN(low, THREAD_SIZE);

+#ifdef CONFIG_CC_IS_CLANG
+	/* check current frame pointer is within bounds */
+	if (fp < low + 4 || fp > high - 4)
+		return -EINVAL;
+
+	frame->sp = frame->fp;
+	frame->fp = *(unsigned long *)(fp);
+	frame->pc = frame->lr;
+	frame->lr = *(unsigned long *)(fp + 4);
+#else
	/* check current frame pointer is within bounds */
	if (fp < low + 12 || fp > high - 4)
		return -EINVAL;
@@ -40,6 +63,7 @@ int notrace unwind_frame(struct stackframe *frame)
	frame->fp = *(unsigned long *)(fp - 12);
	frame->sp = *(unsigned long *)(fp - 8);
	frame->pc = *(unsigned long *)(fp - 4);
+#endif

	return 0;
 }

@@ -456,13 +456,13 @@ static void __init at91_pm_sram_init(void)
	sram_pool = gen_pool_get(&pdev->dev, NULL);
	if (!sram_pool) {
		pr_warn("%s: sram pool unavailable!\n", __func__);
-		return;
+		goto out_put_device;
	}

	sram_base = gen_pool_alloc(sram_pool, at91_pm_suspend_in_sram_sz);
	if (!sram_base) {
		pr_warn("%s: unable to alloc sram!\n", __func__);
-		return;
+		goto out_put_device;
	}

	sram_pbase = gen_pool_virt_to_phys(sram_pool, sram_base);
@@ -470,12 +470,17 @@ static void __init at91_pm_sram_init(void)
		at91_pm_suspend_in_sram_sz, false);
	if (!at91_suspend_sram_fn) {
		pr_warn("SRAM: Could not map\n");
-		return;
+		goto out_put_device;
	}

	/* Copy the pm suspend handler to SRAM */
	at91_suspend_sram_fn = fncpy(at91_suspend_sram_fn,
			&at91_pm_suspend_in_sram, at91_pm_suspend_in_sram_sz);
+	return;
+
+out_put_device:
+	put_device(&pdev->dev);
+	return;
 }

 static void __init at91_pm_backup_init(void)

@@ -60,14 +60,14 @@ static int socfpga_setup_ocram_self_refresh(void)
	if (!ocram_pool) {
		pr_warn("%s: ocram pool unavailable!\n", __func__);
		ret = -ENODEV;
-		goto put_node;
+		goto put_device;
	}

	ocram_base = gen_pool_alloc(ocram_pool, socfpga_sdram_self_refresh_sz);
	if (!ocram_base) {
		pr_warn("%s: unable to alloc ocram!\n", __func__);
		ret = -ENOMEM;
-		goto put_node;
+		goto put_device;
	}

	ocram_pbase = gen_pool_virt_to_phys(ocram_pool, ocram_base);
@@ -78,7 +78,7 @@ static int socfpga_setup_ocram_self_refresh(void)
	if (!suspend_ocram_base) {
		pr_warn("%s: __arm_ioremap_exec failed!\n", __func__);
		ret = -ENOMEM;
-		goto put_node;
+		goto put_device;
	}

	/* Copy the code that puts DDR in self refresh to ocram */
@@ -92,6 +92,8 @@ static int socfpga_setup_ocram_self_refresh(void)
	if (!socfpga_sdram_self_refresh_in_ocram)
		ret = -EFAULT;

+put_device:
+	put_device(&pdev->dev);
 put_node:
	of_node_put(np);

@@ -155,6 +155,7 @@
				regulator-min-microvolt = <700000>;
				regulator-max-microvolt = <1150000>;
				regulator-enable-ramp-delay = <125>;
+				regulator-always-on;
			};

			ldo8_reg: LDO8 {

@@ -210,6 +210,17 @@
		status = "ok";
		compatible = "adi,adv7533";
		reg = <0x39>;
		adi,dsi-lanes = <4>;
+
+		ports {
+			#address-cells = <1>;
+			#size-cells = <0>;
+			port@0 {
+				reg = <0>;
+			};
+			port@1 {
+				reg = <1>;
+			};
+		};
	};
 };

@@ -513,7 +513,7 @@
		reg = <0x39>;
		interrupt-parent = <&gpio1>;
		interrupts = <1 2>;
-		pd-gpio = <&gpio0 4 0>;
+		pd-gpios = <&gpio0 4 0>;
		adi,dsi-lanes = <4>;
		#sound-dai-cells = <0>;

@@ -542,7 +542,7 @@
			pins = "gpio63", "gpio64", "gpio65", "gpio66",
			       "gpio67", "gpio68";
			drive-strength = <8>;
-			bias-pull-none;
+			bias-disable;
		};
	};
	cdc_pdm_lines_sus: pdm_lines_off {
@@ -571,7 +571,7 @@
			pins = "gpio113", "gpio114", "gpio115",
			       "gpio116";
			drive-strength = <8>;
-			bias-pull-none;
+			bias-disable;
		};
	};
@@ -599,7 +599,7 @@
		pinconf {
			pins = "gpio110";
			drive-strength = <8>;
-			bias-pull-none;
+			bias-disable;
		};
	};
@@ -625,7 +625,7 @@
		pinconf {
			pins = "gpio116";
			drive-strength = <8>;
-			bias-pull-none;
+			bias-disable;
		};
	};
	ext_mclk_tlmm_lines_sus: mclk_lines_off {
@@ -653,7 +653,7 @@
			pins = "gpio112", "gpio117", "gpio118",
			       "gpio119";
			drive-strength = <8>;
-			bias-pull-none;
+			bias-disable;
		};
	};
	ext_sec_tlmm_lines_sus: tlmm_lines_off {

@@ -138,7 +138,7 @@

	vcc5v0_host: vcc5v0-host-regulator {
		compatible = "regulator-fixed";
-		gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_HIGH>;
+		gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;
		enable-active-low;
		pinctrl-names = "default";
		pinctrl-0 = <&vcc5v0_host_en>;
@@ -193,7 +193,7 @@
	phy-mode = "rgmii";
	pinctrl-names = "default";
	pinctrl-0 = <&rgmii_pins>;
-	snps,reset-gpio = <&gpio3 RK_PC0 GPIO_ACTIVE_HIGH>;
+	snps,reset-gpio = <&gpio3 RK_PC0 GPIO_ACTIVE_LOW>;
	snps,reset-active-low;
	snps,reset-delays-us = <0 10000 50000>;
	tx_delay = <0x10>;

@@ -183,7 +183,7 @@ static __inline__ void iop_writeb(volatile struct mac_iop *iop, __u16 addr, __u8

 static __inline__ void iop_stop(volatile struct mac_iop *iop)
 {
-	iop->status_ctrl &= ~IOP_RUN;
+	iop->status_ctrl = IOP_AUTOINC;
 }

 static __inline__ void iop_start(volatile struct mac_iop *iop)
@@ -191,14 +191,9 @@ static __inline__ void iop_start(volatile struct mac_iop *iop)
	iop->status_ctrl = IOP_RUN | IOP_AUTOINC;
 }

-static __inline__ void iop_bypass(volatile struct mac_iop *iop)
-{
-	iop->status_ctrl |= IOP_BYPASS;
-}
-
 static __inline__ void iop_interrupt(volatile struct mac_iop *iop)
 {
-	iop->status_ctrl |= IOP_IRQ;
+	iop->status_ctrl = IOP_IRQ | IOP_RUN | IOP_AUTOINC;
 }

 static int iop_alive(volatile struct mac_iop *iop)
@@ -244,7 +239,6 @@ void __init iop_preinit(void)
		} else {
			iop_base[IOP_NUM_SCC] = (struct mac_iop *) SCC_IOP_BASE_QUADRA;
		}
-		iop_base[IOP_NUM_SCC]->status_ctrl = 0x87;
		iop_scc_present = 1;
	} else {
		iop_base[IOP_NUM_SCC] = NULL;
@@ -256,7 +250,7 @@ void __init iop_preinit(void)
		} else {
			iop_base[IOP_NUM_ISM] = (struct mac_iop *) ISM_IOP_BASE_QUADRA;
		}
-		iop_base[IOP_NUM_ISM]->status_ctrl = 0;
+		iop_stop(iop_base[IOP_NUM_ISM]);
		iop_ism_present = 1;
	} else {
		iop_base[IOP_NUM_ISM] = NULL;
@@ -416,7 +410,8 @@ static void iop_handle_send(uint iop_num, uint chan)
	msg->status = IOP_MSGSTATUS_UNUSED;
	msg = msg->next;
	iop_send_queue[iop_num][chan] = msg;
-	if (msg) iop_do_send(msg);
+	if (msg && iop_readb(iop, IOP_ADDR_SEND_STATE + chan) == IOP_MSG_IDLE)
+		iop_do_send(msg);
 }

 /*
@@ -490,16 +485,12 @@ int iop_send_message(uint iop_num, uint chan, void *privdata,

	if (!(q = iop_send_queue[iop_num][chan])) {
		iop_send_queue[iop_num][chan] = msg;
+		iop_do_send(msg);
	} else {
		while (q->next) q = q->next;
		q->next = msg;
	}

-	if (iop_readb(iop_base[iop_num],
-		      IOP_ADDR_SEND_STATE + chan) == IOP_MSG_IDLE) {
-		iop_do_send(msg);
-	}
-
	return 0;
 }

@@ -517,6 +517,7 @@ static int __init dwc3_octeon_device_init(void)

			res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
			if (res == NULL) {
+				put_device(&pdev->dev);
				dev_err(&pdev->dev, "No memory resources\n");
				return -ENXIO;
			}
@@ -528,8 +529,10 @@ static int __init dwc3_octeon_device_init(void)
			 * know the difference.
			 */
			base = devm_ioremap_resource(&pdev->dev, res);
-			if (IS_ERR(base))
+			if (IS_ERR(base)) {
+				put_device(&pdev->dev);
				return PTR_ERR(base);
+			}

			mutex_lock(&dwc3_octeon_clocks_mutex);
			dwc3_octeon_clocks_start(&pdev->dev, (u64)base);

@@ -20,7 +20,7 @@ static int __init topology_init(void)
	for_each_present_cpu(i) {
		struct cpu *c = &per_cpu(cpu_devices, i);

-		c->hotpluggable = 1;
+		c->hotpluggable = !!i;
		ret = register_cpu(c, i);
		if (ret)
			printk(KERN_WARNING "topology_init: register_cpu %d "
@@ -26,6 +26,67 @@
 #define __smp_rmb()	mb()
 #define __smp_wmb()	mb()

+#define __smp_store_release(p, v)					\
+do {									\
+	typeof(p) __p = (p);						\
+	union { typeof(*p) __val; char __c[1]; } __u =			\
+		{ .__val = (__force typeof(*p)) (v) };			\
+	compiletime_assert_atomic_type(*p);				\
+	switch (sizeof(*p)) {						\
+	case 1:								\
+		asm volatile("stb,ma %0,0(%1)"				\
+				: : "r"(*(__u8 *)__u.__c), "r"(__p)	\
+				: "memory");				\
+		break;							\
+	case 2:								\
+		asm volatile("sth,ma %0,0(%1)"				\
+				: : "r"(*(__u16 *)__u.__c), "r"(__p)	\
+				: "memory");				\
+		break;							\
+	case 4:								\
+		asm volatile("stw,ma %0,0(%1)"				\
+				: : "r"(*(__u32 *)__u.__c), "r"(__p)	\
+				: "memory");				\
+		break;							\
+	case 8:								\
+		if (IS_ENABLED(CONFIG_64BIT))				\
+			asm volatile("std,ma %0,0(%1)"			\
+				: : "r"(*(__u64 *)__u.__c), "r"(__p)	\
+				: "memory");				\
+		break;							\
+	}								\
+} while (0)
+
+#define __smp_load_acquire(p)						\
+({									\
+	union { typeof(*p) __val; char __c[1]; } __u;			\
+	typeof(p) __p = (p);						\
+	compiletime_assert_atomic_type(*p);				\
+	switch (sizeof(*p)) {						\
+	case 1:								\
+		asm volatile("ldb,ma 0(%1),%0"				\
+				: "=r"(*(__u8 *)__u.__c) : "r"(__p)	\
+				: "memory");				\
+		break;							\
+	case 2:								\
+		asm volatile("ldh,ma 0(%1),%0"				\
+				: "=r"(*(__u16 *)__u.__c) : "r"(__p)	\
+				: "memory");				\
+		break;							\
+	case 4:								\
+		asm volatile("ldw,ma 0(%1),%0"				\
+				: "=r"(*(__u32 *)__u.__c) : "r"(__p)	\
+				: "memory");				\
+		break;							\
+	case 8:								\
+		if (IS_ENABLED(CONFIG_64BIT))				\
+			asm volatile("ldd,ma 0(%1),%0"			\
+				: "=r"(*(__u64 *)__u.__c) : "r"(__p)	\
+				: "memory");				\
+		break;							\
+	}								\
+	__u.__val;							\
+})
 #include <asm-generic/barrier.h>

 #endif /* !__ASSEMBLY__ */
@@ -10,8 +10,6 @@

 #ifdef CONFIG_SMP

-#include <asm/paca.h>
-
 #define __my_cpu_offset local_paca->data_offset

 #endif /* CONFIG_SMP */
@@ -19,4 +17,6 @@

 #include <asm-generic/percpu.h>

+#include <asm/paca.h>
+
 #endif /* _ASM_POWERPC_PERCPU_H_ */

@@ -704,7 +704,7 @@ int vdso_getcpu_init(void)
	node = cpu_to_node(cpu);
	WARN_ON_ONCE(node > 0xffff);

-	val = (cpu & 0xfff) | ((node & 0xffff) << 16);
+	val = (cpu & 0xffff) | ((node & 0xffff) << 16);
	mtspr(SPRN_SPRG_VDSO_WRITE, val);
	get_paca()->sprg_vdso = val;

@@ -30,7 +30,7 @@ static bool rtas_hp_event;
 unsigned long pseries_memory_block_size(void)
 {
	struct device_node *np;
-	unsigned int memblock_size = MIN_MEMORY_BLOCK_SIZE;
+	u64 memblock_size = MIN_MEMORY_BLOCK_SIZE;
	struct resource r;

	np = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");

@@ -85,6 +85,9 @@ device_initcall(landisk_devices_setup);

 static void __init landisk_setup(char **cmdline_p)
 {
+	/* I/O port identity mapping */
+	__set_io_port_base(0);
+
	/* LED ON */
	__raw_writeb(__raw_readb(PA_LED) | 0x03, PA_LED);

@@ -127,10 +127,6 @@ ddq_add_8:

 /* generate a unique variable for ddq_add_x */

-.macro setddq n
-	var_ddq_add = ddq_add_\n
-.endm
-
 /* generate a unique variable for xmm register */
 .macro setxdata n
	var_xdata = %xmm\n
@@ -140,9 +136,7 @@ ddq_add_8:

 .macro club name, id
 .altmacro
-	.if \name == DDQ_DATA
-		setddq %\id
-	.elseif \name == XDATA
+	.if \name == XDATA
		setxdata %\id
	.endif
 .noaltmacro
@@ -165,9 +159,8 @@ ddq_add_8:

	.set i, 1
	.rept (by - 1)
-		club DDQ_DATA, i
		club XDATA, i
-		vpaddq	var_ddq_add(%rip), xcounter, var_xdata
+		vpaddq	(ddq_add_1 + 16 * (i - 1))(%rip), xcounter, var_xdata
		vptest	ddq_low_msk(%rip), var_xdata
		jnz 1f
		vpaddq	ddq_high_add_1(%rip), var_xdata, var_xdata
@@ -180,8 +173,7 @@ ddq_add_8:
	vmovdqa	1*16(p_keys), xkeyA

	vpxor	xkey0, xdata0, xdata0
-	club DDQ_DATA, by
-	vpaddq	var_ddq_add(%rip), xcounter, xcounter
+	vpaddq	(ddq_add_1 + 16 * (by - 1))(%rip), xcounter, xcounter
	vptest	ddq_low_msk(%rip), xcounter
	jnz 1f
	vpaddq	ddq_high_add_1(%rip), xcounter, xcounter

@@ -2252,8 +2252,13 @@ static int mp_irqdomain_create(int ioapic)

 static void ioapic_destroy_irqdomain(int idx)
 {
+	struct ioapic_domain_cfg *cfg = &ioapics[idx].irqdomain_cfg;
+	struct fwnode_handle *fn = ioapics[idx].irqdomain->fwnode;
+
	if (ioapics[idx].irqdomain) {
		irq_domain_remove(ioapics[idx].irqdomain);
+		if (!cfg->dev)
+			irq_domain_free_fwnode(fn);
		ioapics[idx].irqdomain = NULL;
	}
 }

@@ -368,6 +368,10 @@ static int x86_vector_alloc_irqs(struct irq_domain *domain, unsigned int virq,
		irq_data->chip = &lapic_controller;
		irq_data->chip_data = data;
		irq_data->hwirq = virq + i;
+
+		/* Don't invoke affinity setter on deactivated interrupts */
+		irqd_set_affinity_on_activate(irq_data);
+
		err = assign_irq_vector_policy(virq + i, node, data, info,
					       irq_data);
		if (err) {

@@ -518,7 +518,7 @@ static void do_inject(void)
	 */
	if (inj_type == DFR_INT_INJ) {
		i_mce.status |= MCI_STATUS_DEFERRED;
-		i_mce.status |= (i_mce.status & ~MCI_STATUS_UC);
+		i_mce.status &= ~MCI_STATUS_UC;
	}

	/*

@@ -374,7 +374,7 @@ static unsigned long task_seg_base(struct task_struct *task,
		 */
		mutex_lock(&task->mm->context.lock);
		ldt = task->mm->context.ldt;
-		if (unlikely(idx >= ldt->nr_entries))
+		if (unlikely(!ldt || idx >= ldt->nr_entries))
			base = 0;
		else
			base = get_desc_base(ldt->entries + idx);

@@ -404,7 +404,7 @@ static struct pmu xtensa_pmu = {
	.read = xtensa_pmu_read,
 };

-static int xtensa_pmu_setup(int cpu)
+static int xtensa_pmu_setup(unsigned int cpu)
 {
	unsigned i;

@@ -507,10 +507,6 @@ acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info)
		    (u8)access_byte_width;
		}
	}
-	/* An additional reference for the container */
-
-	acpi_ut_add_reference(obj_desc->field.region_obj);
-
	ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
			  "RegionField: BitOff %X, Off %X, Gran %X, Region %p\n",
			  obj_desc->field.start_field_bit_offset,
@@ -593,11 +593,6 @@ acpi_ut_update_object_reference(union acpi_operand_object *object, u16 action)
		next_object = object->buffer_field.buffer_obj;
		break;

-	case ACPI_TYPE_LOCAL_REGION_FIELD:
-
-		next_object = object->field.region_obj;
-		break;
-
	case ACPI_TYPE_LOCAL_BANK_FIELD:

		next_object = object->bank_field.bank_obj;
@@ -638,6 +633,7 @@ acpi_ut_update_object_reference(union acpi_operand_object *object, u16 action)
		}
		break;

+	case ACPI_TYPE_LOCAL_REGION_FIELD:
	case ACPI_TYPE_REGION:
	default:

@@ -3082,6 +3082,12 @@ static void binder_transaction(struct binder_proc *proc,
			goto err_dead_binder;
		}
		e->to_node = target_node->debug_id;
+		if (WARN_ON(proc == target_proc)) {
+			return_error = BR_FAILED_REPLY;
+			return_error_param = -EINVAL;
+			return_error_line = __LINE__;
+			goto err_invalid_target_handle;
+		}
		if (security_binder_transaction(proc->tsk,
						target_proc->tsk) < 0) {
			return_error = BR_FAILED_REPLY;
@@ -3660,10 +3666,17 @@ static int binder_thread_write(struct binder_proc *proc,
				struct binder_node *ctx_mgr_node;
				mutex_lock(&context->context_mgr_node_lock);
				ctx_mgr_node = context->binder_context_mgr_node;
-				if (ctx_mgr_node)
+				if (ctx_mgr_node) {
+					if (ctx_mgr_node->proc == proc) {
+						binder_user_error("%d:%d context manager tried to acquire desc 0\n",
+								  proc->pid, thread->pid);
+						mutex_unlock(&context->context_mgr_node_lock);
+						return -EINVAL;
+					}
					ret = binder_inc_ref_for_node(
							proc, ctx_mgr_node,
							strong, NULL, &rdata);
+				}
				mutex_unlock(&context->context_mgr_node_lock);
			}
			if (ret)

@@ -432,9 +432,15 @@ static int atmtcp_remove_persistent(int itf)
		return -EMEDIUMTYPE;
	}
	dev_data = PRIV(dev);
-	if (!dev_data->persist) return 0;
+	if (!dev_data->persist) {
+		atm_dev_put(dev);
+		return 0;
+	}
	dev_data->persist = 0;
-	if (PRIV(dev)->vcc) return 0;
+	if (PRIV(dev)->vcc) {
+		atm_dev_put(dev);
+		return 0;
+	}
	kfree(dev_data);
	atm_dev_put(dev);
	atm_dev_deregister(dev);
@@ -361,7 +361,8 @@ void hci_uart_unregister_device(struct hci_uart *hu)
	struct hci_dev *hdev = hu->hdev;

	clear_bit(HCI_UART_PROTO_READY, &hu->flags);
-	hci_unregister_dev(hdev);
+	if (test_bit(HCI_UART_REGISTERED, &hu->flags))
+		hci_unregister_dev(hdev);
	hci_free_dev(hdev);

	cancel_work_sync(&hu->write_work);
@@ -304,8 +304,10 @@ static int intel_gtt_setup_scratch_page(void)
	if (intel_private.needs_dmar) {
		dma_addr = pci_map_page(intel_private.pcidev, page, 0,
					PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
-		if (pci_dma_mapping_error(intel_private.pcidev, dma_addr))
+		if (pci_dma_mapping_error(intel_private.pcidev, dma_addr)) {
+			__free_page(page);
			return -EINVAL;
+		}

		intel_private.scratch_page_dma = dma_addr;
	} else
@@ -136,7 +136,7 @@ static void __init atlas6_clk_init(struct device_node *np)

	for (i = pll1; i < maxclk; i++) {
		atlas6_clks[i] = clk_register(NULL, atlas6_clk_hw_array[i]);
-		BUG_ON(!atlas6_clks[i]);
+		BUG_ON(IS_ERR(atlas6_clks[i]));
	}
	clk_register_clkdev(atlas6_clks[cpu], NULL, "cpu");
	clk_register_clkdev(atlas6_clks[io], NULL, "io");
@@ -205,6 +205,7 @@ static inline int cvm_enc_dec(struct ablkcipher_request *req, u32 enc)
	int status;

	memset(req_info, 0, sizeof(struct cpt_request_info));
+	req_info->may_sleep = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) != 0;
	memset(fctx, 0, sizeof(struct fc_context));
	create_input_list(req, enc, enc_iv_len);
	create_output_list(req, enc_iv_len);
@@ -136,7 +136,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,

	/* Setup gather (input) components */
	g_sz_bytes = ((req->incnt + 3) / 4) * sizeof(struct sglist_component);
-	info->gather_components = kzalloc(g_sz_bytes, GFP_KERNEL);
+	info->gather_components = kzalloc(g_sz_bytes, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
	if (!info->gather_components) {
		ret = -ENOMEM;
		goto scatter_gather_clean;
@@ -153,7 +153,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,

	/* Setup scatter (output) components */
	s_sz_bytes = ((req->outcnt + 3) / 4) * sizeof(struct sglist_component);
-	info->scatter_components = kzalloc(s_sz_bytes, GFP_KERNEL);
+	info->scatter_components = kzalloc(s_sz_bytes, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
	if (!info->scatter_components) {
		ret = -ENOMEM;
		goto scatter_gather_clean;
@@ -170,7 +170,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,

	/* Create and initialize DPTR */
	info->dlen = g_sz_bytes + s_sz_bytes + SG_LIST_HDR_SIZE;
-	info->in_buffer = kzalloc(info->dlen, GFP_KERNEL);
+	info->in_buffer = kzalloc(info->dlen, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
	if (!info->in_buffer) {
		ret = -ENOMEM;
		goto scatter_gather_clean;
@@ -198,7 +198,7 @@ static inline int setup_sgio_list(struct cpt_vf *cptvf,
	}

	/* Create and initialize RPTR */
-	info->out_buffer = kzalloc(COMPLETION_CODE_SIZE, GFP_KERNEL);
+	info->out_buffer = kzalloc(COMPLETION_CODE_SIZE, req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
	if (!info->out_buffer) {
		ret = -ENOMEM;
		goto scatter_gather_clean;
@@ -434,7 +434,7 @@ int process_request(struct cpt_vf *cptvf, struct cpt_request_info *req)
	struct cpt_vq_command vq_cmd;
	union cpt_inst_s cptinst;

-	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	info = kzalloc(sizeof(*info), req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
	if (unlikely(!info)) {
		dev_err(&pdev->dev, "Unable to allocate memory for info_buffer\n");
		return -ENOMEM;
@@ -456,7 +456,7 @@ int process_request(struct cpt_vf *cptvf, struct cpt_request_info *req)
	 * Get buffer for union cpt_res_s response
	 * structure and its physical address
	 */
-	info->completion_addr = kzalloc(sizeof(union cpt_res_s), GFP_KERNEL);
+	info->completion_addr = kzalloc(sizeof(union cpt_res_s), req->may_sleep ? GFP_KERNEL : GFP_ATOMIC);
	if (unlikely(!info->completion_addr)) {
		dev_err(&pdev->dev, "Unable to allocate memory for completion_addr\n");
		ret = -ENOMEM;
@@ -65,6 +65,8 @@ struct cpt_request_info {
	union ctrl_info ctrl; /* User control information */
	struct cptvf_request req; /* Request Information (Core specific) */

+	bool may_sleep;
+
	struct buf_ptr in[MAX_BUF_CNT];
	struct buf_ptr out[MAX_BUF_CNT];
@@ -471,6 +471,7 @@ struct ccp_sg_workarea {
	unsigned int sg_used;

	struct scatterlist *dma_sg;
+	struct scatterlist *dma_sg_head;
	struct device *dma_dev;
	unsigned int dma_count;
	enum dma_data_direction dma_dir;
@@ -67,7 +67,7 @@ static u32 ccp_gen_jobid(struct ccp_device *ccp)
static void ccp_sg_free(struct ccp_sg_workarea *wa)
{
	if (wa->dma_count)
-		dma_unmap_sg(wa->dma_dev, wa->dma_sg, wa->nents, wa->dma_dir);
+		dma_unmap_sg(wa->dma_dev, wa->dma_sg_head, wa->nents, wa->dma_dir);

	wa->dma_count = 0;
}
@@ -96,6 +96,7 @@ static int ccp_init_sg_workarea(struct ccp_sg_workarea *wa, struct device *dev,
		return 0;

	wa->dma_sg = sg;
+	wa->dma_sg_head = sg;
	wa->dma_dev = dev;
	wa->dma_dir = dma_dir;
	wa->dma_count = dma_map_sg(dev, sg, wa->nents, dma_dir);
@@ -108,14 +109,28 @@ static int ccp_init_sg_workarea(struct ccp_sg_workarea *wa, struct device *dev,
static void ccp_update_sg_workarea(struct ccp_sg_workarea *wa, unsigned int len)
{
	unsigned int nbytes = min_t(u64, len, wa->bytes_left);
+	unsigned int sg_combined_len = 0;

	if (!wa->sg)
		return;

	wa->sg_used += nbytes;
	wa->bytes_left -= nbytes;
-	if (wa->sg_used == wa->sg->length) {
-		wa->sg = sg_next(wa->sg);
+	if (wa->sg_used == sg_dma_len(wa->dma_sg)) {
+		/* Advance to the next DMA scatterlist entry */
+		wa->dma_sg = sg_next(wa->dma_sg);
+
+		/* In the case that the DMA mapped scatterlist has entries
+		 * that have been merged, the non-DMA mapped scatterlist
+		 * must be advanced multiple times for each merged entry.
+		 * This ensures that the current non-DMA mapped entry
+		 * corresponds to the current DMA mapped entry.
+		 */
+		do {
+			sg_combined_len += wa->sg->length;
+			wa->sg = sg_next(wa->sg);
+		} while (wa->sg_used > sg_combined_len);
+
		wa->sg_used = 0;
	}
}
@@ -304,7 +319,7 @@ static unsigned int ccp_queue_buf(struct ccp_data *data, unsigned int from)
	/* Update the structures and generate the count */
	buf_count = 0;
	while (sg_wa->bytes_left && (buf_count < dm_wa->length)) {
-		nbytes = min(sg_wa->sg->length - sg_wa->sg_used,
+		nbytes = min(sg_dma_len(sg_wa->dma_sg) - sg_wa->sg_used,
			     dm_wa->length - buf_count);
		nbytes = min_t(u64, sg_wa->bytes_left, nbytes);
@@ -336,11 +351,11 @@ static void ccp_prepare_data(struct ccp_data *src, struct ccp_data *dst,
	 * and destination. The resulting len values will always be <= UINT_MAX
	 * because the dma length is an unsigned int.
	 */
-	sg_src_len = sg_dma_len(src->sg_wa.sg) - src->sg_wa.sg_used;
+	sg_src_len = sg_dma_len(src->sg_wa.dma_sg) - src->sg_wa.sg_used;
	sg_src_len = min_t(u64, src->sg_wa.bytes_left, sg_src_len);

	if (dst) {
-		sg_dst_len = sg_dma_len(dst->sg_wa.sg) - dst->sg_wa.sg_used;
+		sg_dst_len = sg_dma_len(dst->sg_wa.dma_sg) - dst->sg_wa.sg_used;
		sg_dst_len = min_t(u64, src->sg_wa.bytes_left, sg_dst_len);
		op_len = min(sg_src_len, sg_dst_len);
	} else {
@@ -370,7 +385,7 @@ static void ccp_prepare_data(struct ccp_data *src, struct ccp_data *dst,
		/* Enough data in the sg element, but we need to
		 * adjust for any previously copied data
		 */
-		op->src.u.dma.address = sg_dma_address(src->sg_wa.sg);
+		op->src.u.dma.address = sg_dma_address(src->sg_wa.dma_sg);
		op->src.u.dma.offset = src->sg_wa.sg_used;
		op->src.u.dma.length = op_len & ~(block_size - 1);
@@ -391,7 +406,7 @@ static void ccp_prepare_data(struct ccp_data *src, struct ccp_data *dst,
		/* Enough room in the sg element, but we need to
		 * adjust for any previously used area
		 */
-		op->dst.u.dma.address = sg_dma_address(dst->sg_wa.sg);
+		op->dst.u.dma.address = sg_dma_address(dst->sg_wa.dma_sg);
		op->dst.u.dma.offset = dst->sg_wa.sg_used;
		op->dst.u.dma.length = op->src.u.dma.length;
	}
@@ -2034,7 +2049,7 @@ ccp_run_passthru_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
	dst.sg_wa.sg_used = 0;
	for (i = 1; i <= src.sg_wa.dma_count; i++) {
		if (!dst.sg_wa.sg ||
-		    (dst.sg_wa.sg->length < src.sg_wa.sg->length)) {
+		    (sg_dma_len(dst.sg_wa.sg) < sg_dma_len(src.sg_wa.sg))) {
			ret = -EINVAL;
			goto e_dst;
		}
@@ -2060,8 +2075,8 @@ ccp_run_passthru_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
			goto e_dst;
		}

-		dst.sg_wa.sg_used += src.sg_wa.sg->length;
-		if (dst.sg_wa.sg_used == dst.sg_wa.sg->length) {
+		dst.sg_wa.sg_used += sg_dma_len(src.sg_wa.sg);
+		if (dst.sg_wa.sg_used == sg_dma_len(dst.sg_wa.sg)) {
			dst.sg_wa.sg = sg_next(dst.sg_wa.sg);
			dst.sg_wa.sg_used = 0;
		}
@@ -332,13 +332,18 @@ static int qat_uclo_create_batch_init_list(struct icp_qat_fw_loader_handle
	}
	return 0;
out_err:
+	/* Do not free the list head unless we allocated it. */
+	tail_old = tail_old->next;
+	if (flag) {
+		kfree(*init_tab_base);
+		*init_tab_base = NULL;
+	}
+
	while (tail_old) {
		mem_init = tail_old->next;
		kfree(tail_old);
		tail_old = mem_init;
	}
-	if (flag)
-		kfree(*init_tab_base);
	return -ENOMEM;
}
@@ -301,6 +301,7 @@ int edac_device_register_sysfs_main_kobj(struct edac_device_ctl_info *edac_dev)

	/* Error exit stack */
err_kobj_reg:
+	kobject_put(&edac_dev->kobj);
	module_put(edac_dev->owner);

err_out:
@@ -386,7 +386,7 @@ static int edac_pci_main_kobj_setup(void)

	/* Error unwind statck */
kobject_init_and_add_fail:
-	kfree(edac_pci_top_main_kobj);
+	kobject_put(edac_pci_top_main_kobj);

kzalloc_fail:
	module_put(THIS_MODULE);
@@ -369,7 +369,7 @@ int malidp_de_planes_init(struct drm_device *drm)
	const struct malidp_hw_regmap *map = &malidp->dev->map;
	struct malidp_plane *plane = NULL;
	enum drm_plane_type plane_type;
-	unsigned long crtcs = 1 << drm->mode_config.num_crtc;
+	unsigned long crtcs = BIT(drm->mode_config.num_crtc);
	unsigned long flags = DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_180 |
			      DRM_MODE_ROTATE_270 | DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y;
	u32 *formats;
@@ -167,7 +167,7 @@ static void sii8620_read_buf(struct sii8620 *ctx, u16 addr, u8 *buf, int len)

static u8 sii8620_readb(struct sii8620 *ctx, u16 addr)
{
-	u8 ret;
+	u8 ret = 0;

	sii8620_read_buf(ctx, addr, &ret, 1);
	return ret;
@@ -250,13 +250,13 @@ static ssize_t connector_write(struct file *file, const char __user *ubuf,

	buf[len] = '\0';

-	if (!strcmp(buf, "on"))
+	if (sysfs_streq(buf, "on"))
		connector->force = DRM_FORCE_ON;
-	else if (!strcmp(buf, "digital"))
+	else if (sysfs_streq(buf, "digital"))
		connector->force = DRM_FORCE_ON_DIGITAL;
-	else if (!strcmp(buf, "off"))
+	else if (sysfs_streq(buf, "off"))
		connector->force = DRM_FORCE_OFF;
-	else if (!strcmp(buf, "unspecified"))
+	else if (sysfs_streq(buf, "unspecified"))
		connector->force = DRM_FORCE_UNSPECIFIED;
	else
		return -EINVAL;
@@ -1032,11 +1032,11 @@ EXPORT_SYMBOL(mipi_dsi_dcs_set_pixel_format);
 */
int mipi_dsi_dcs_set_tear_scanline(struct mipi_dsi_device *dsi, u16 scanline)
{
-	u8 payload[3] = { MIPI_DCS_SET_TEAR_SCANLINE, scanline >> 8,
-			  scanline & 0xff };
+	u8 payload[2] = { scanline >> 8, scanline & 0xff };
	ssize_t err;

-	err = mipi_dsi_generic_write(dsi, payload, sizeof(payload));
+	err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_TEAR_SCANLINE, payload,
+				 sizeof(payload));
	if (err < 0)
		return err;
@@ -311,18 +311,19 @@ static void imx_ldb_encoder_disable(struct drm_encoder *encoder)
{
	struct imx_ldb_channel *imx_ldb_ch = enc_to_imx_ldb_ch(encoder);
	struct imx_ldb *ldb = imx_ldb_ch->ldb;
+	int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN;
	int mux, ret;

	drm_panel_disable(imx_ldb_ch->panel);

-	if (imx_ldb_ch == &ldb->channel[0])
+	if (imx_ldb_ch == &ldb->channel[0] || dual)
		ldb->ldb_ctrl &= ~LDB_CH0_MODE_EN_MASK;
-	else if (imx_ldb_ch == &ldb->channel[1])
+	if (imx_ldb_ch == &ldb->channel[1] || dual)
		ldb->ldb_ctrl &= ~LDB_CH1_MODE_EN_MASK;

	regmap_write(ldb->regmap, IOMUXC_GPR2, ldb->ldb_ctrl);

-	if (ldb->ldb_ctrl & LDB_SPLIT_MODE_EN) {
+	if (dual) {
		clk_disable_unprepare(ldb->clk[0]);
		clk_disable_unprepare(ldb->clk[1]);
	}
@@ -498,6 +498,13 @@ static int imx_tve_register(struct drm_device *drm, struct imx_tve *tve)
	return 0;
}

+static void imx_tve_disable_regulator(void *data)
+{
+	struct imx_tve *tve = data;
+
+	regulator_disable(tve->dac_reg);
+}
+
static bool imx_tve_readable_reg(struct device *dev, unsigned int reg)
{
	return (reg % 4 == 0) && (reg <= 0xdc);
@@ -622,6 +629,9 @@ static int imx_tve_bind(struct device *dev, struct device *master, void *data)
		ret = regulator_enable(tve->dac_reg);
		if (ret)
			return ret;
+		ret = devm_add_action_or_reset(dev, imx_tve_disable_regulator, tve);
+		if (ret)
+			return ret;
	}

	tve->clk = devm_clk_get(dev, "tve");
@@ -668,18 +678,8 @@ static int imx_tve_bind(struct device *dev, struct device *master, void *data)
	return 0;
}

-static void imx_tve_unbind(struct device *dev, struct device *master,
-	void *data)
-{
-	struct imx_tve *tve = dev_get_drvdata(dev);
-
-	if (!IS_ERR(tve->dac_reg))
-		regulator_disable(tve->dac_reg);
-}
-
static const struct component_ops imx_tve_ops = {
	.bind	= imx_tve_bind,
-	.unbind	= imx_tve_unbind,
};

static int imx_tve_probe(struct platform_device *pdev)
@@ -840,8 +840,10 @@ nouveau_drm_open(struct drm_device *dev, struct drm_file *fpriv)

	/* need to bring up power immediately if opening device */
	ret = pm_runtime_get_sync(dev->dev);
-	if (ret < 0 && ret != -EACCES)
+	if (ret < 0 && ret != -EACCES) {
+		pm_runtime_put_autosuspend(dev->dev);
		return ret;
+	}

	get_task_comm(tmpname, current);
	snprintf(name, sizeof(name), "%s[%d]", tmpname, pid_nr(fpriv->pid));
@@ -930,8 +932,10 @@ nouveau_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
	long ret;

	ret = pm_runtime_get_sync(dev->dev);
-	if (ret < 0 && ret != -EACCES)
+	if (ret < 0 && ret != -EACCES) {
+		pm_runtime_put_autosuspend(dev->dev);
		return ret;
+	}

	switch (_IOC_NR(cmd) - DRM_COMMAND_BASE) {
	case DRM_NOUVEAU_NVIF:
@@ -310,7 +310,7 @@ nouveau_fbcon_create(struct drm_fb_helper *helper,
	struct nouveau_framebuffer *fb;
	struct nouveau_channel *chan;
	struct nouveau_bo *nvbo;
-	struct drm_mode_fb_cmd2 mode_cmd;
+	struct drm_mode_fb_cmd2 mode_cmd = {};
	int ret;

	mode_cmd.width = sizes->surface_width;
@@ -543,6 +543,7 @@ fini:
	drm_fb_helper_fini(&fbcon->helper);
free:
	kfree(fbcon);
+	drm->fbcon = NULL;
	return ret;
}
@@ -42,8 +42,10 @@ nouveau_gem_object_del(struct drm_gem_object *gem)
	int ret;

	ret = pm_runtime_get_sync(dev);
-	if (WARN_ON(ret < 0 && ret != -EACCES))
+	if (WARN_ON(ret < 0 && ret != -EACCES)) {
+		pm_runtime_put_autosuspend(dev);
		return;
+	}

	if (gem->import_attach)
		drm_prime_gem_destroy(gem, nvbo->bo.sg);
@@ -1253,7 +1253,7 @@ static const struct drm_display_mode lg_lb070wv8_mode = {
static const struct panel_desc lg_lb070wv8 = {
	.modes = &lg_lb070wv8_mode,
	.num_modes = 1,
-	.bpc = 16,
+	.bpc = 8,
	.size = {
		.width = 151,
		.height = 91,
@@ -4342,7 +4342,7 @@ static int ci_set_mc_special_registers(struct radeon_device *rdev,
					table->mc_reg_table_entry[k].mc_data[j] |= 0x100;
			}
			j++;
-			if (j > SMU7_DISCRETE_MC_REGISTER_ARRAY_SIZE)
+			if (j >= SMU7_DISCRETE_MC_REGISTER_ARRAY_SIZE)
				return -EINVAL;

			if (!pi->mem_gddr5) {
@@ -627,8 +627,10 @@ radeon_crtc_set_config(struct drm_mode_set *set,
	dev = set->crtc->dev;

	ret = pm_runtime_get_sync(dev->dev);
-	if (ret < 0)
+	if (ret < 0) {
+		pm_runtime_put_autosuspend(dev->dev);
		return ret;
+	}

	ret = drm_crtc_helper_set_config(set, ctx);
@@ -496,8 +496,10 @@ long radeon_drm_ioctl(struct file *filp,
	long ret;
	dev = file_priv->minor->dev;
	ret = pm_runtime_get_sync(dev->dev);
-	if (ret < 0)
+	if (ret < 0) {
+		pm_runtime_put_autosuspend(dev->dev);
		return ret;
+	}

	ret = drm_ioctl(filp, cmd, arg);
@@ -659,8 +659,10 @@ int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
	file_priv->driver_priv = NULL;

	r = pm_runtime_get_sync(dev->dev);
-	if (r < 0)
+	if (r < 0) {
+		pm_runtime_put_autosuspend(dev->dev);
		return r;
+	}

	/* new gpu have virtual address space support */
	if (rdev->family >= CHIP_CAYMAN) {
@@ -152,12 +152,16 @@ static int panel_connector_get_modes(struct drm_connector *connector)
	int i;

	for (i = 0; i < timings->num_timings; i++) {
-		struct drm_display_mode *mode = drm_mode_create(dev);
+		struct drm_display_mode *mode;
		struct videomode vm;

		if (videomode_from_timings(timings, &vm, i))
			break;

+		mode = drm_mode_create(dev);
+		if (!mode)
+			break;
+
		drm_display_mode_from_videomode(&vm, mode);

		mode->type = DRM_MODE_TYPE_DRIVER;
@@ -2707,7 +2707,7 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv,
		++i;
	}

-	if (i != unit) {
+	if (&con->head == &dev_priv->dev->mode_config.connector_list) {
		DRM_ERROR("Could not find initial display unit.\n");
		return -EINVAL;
	}
@@ -2729,13 +2729,13 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv,
			break;
	}

-	if (mode->type & DRM_MODE_TYPE_PREFERRED)
-		*p_mode = mode;
-	else {
+	if (&mode->head == &con->modes) {
		WARN_ONCE(true, "Could not find initial preferred mode.\n");
		*p_mode = list_first_entry(&con->modes,
					   struct drm_display_mode,
					   head);
+	} else {
+		*p_mode = mode;
	}

	return 0;
@@ -79,7 +79,7 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv)
	struct vmw_legacy_display_unit *entry;
	struct drm_framebuffer *fb = NULL;
	struct drm_crtc *crtc = NULL;
-	int i = 0;
+	int i;

	/* If there is no display topology the host just assumes
	 * that the guest will set the same layout as the host.
@@ -90,12 +90,11 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv)
		crtc = &entry->base.crtc;
		w = max(w, crtc->x + crtc->mode.hdisplay);
		h = max(h, crtc->y + crtc->mode.vdisplay);
-		i++;
	}

	if (crtc == NULL)
		return 0;
-	fb = entry->base.crtc.primary->state->fb;
+	fb = crtc->primary->state->fb;

	return vmw_kms_write_svga(dev_priv, w, h, fb->pitches[0],
				  fb->format->cpp[0] * 8,
@@ -25,6 +25,8 @@
#include "debug.h"
#include "channel.h"

+static DEFINE_MUTEX(debug_lock);
+
unsigned int host1x_debug_trace_cmdbuf;

static pid_t host1x_debug_force_timeout_pid;
@@ -49,12 +51,14 @@ static int show_channel(struct host1x_channel *ch, void *data, bool show_fifo)
	struct output *o = data;

	mutex_lock(&ch->cdma.lock);
+	mutex_lock(&debug_lock);

	if (show_fifo)
		host1x_hw_show_channel_fifo(m, ch, o);

	host1x_hw_show_channel_cdma(m, ch, o);

+	mutex_unlock(&debug_lock);
	mutex_unlock(&ch->cdma.lock);

	return 0;
@@ -992,38 +992,7 @@ done:
	return IRQ_WAKE_THREAD;
}

-static irqreturn_t norotate_irq(int irq, void *data)
-{
-	struct ipu_image_convert_chan *chan = data;
-	struct ipu_image_convert_ctx *ctx;
-	struct ipu_image_convert_run *run;
-	unsigned long flags;
-	irqreturn_t ret;
-
-	spin_lock_irqsave(&chan->irqlock, flags);
-
-	/* get current run and its context */
-	run = chan->current_run;
-	if (!run) {
-		ret = IRQ_NONE;
-		goto out;
-	}
-
-	ctx = run->ctx;
-
-	if (ipu_rot_mode_is_irt(ctx->rot_mode)) {
-		/* this is a rotation operation, just ignore */
-		spin_unlock_irqrestore(&chan->irqlock, flags);
-		return IRQ_HANDLED;
-	}
-
-	ret = do_irq(run);
-out:
-	spin_unlock_irqrestore(&chan->irqlock, flags);
-	return ret;
-}
-
-static irqreturn_t rotate_irq(int irq, void *data)
+static irqreturn_t eof_irq(int irq, void *data)
{
	struct ipu_image_convert_chan *chan = data;
	struct ipu_image_convert_priv *priv = chan->priv;
@@ -1043,11 +1012,24 @@ static irqreturn_t eof_irq(int irq, void *data)

	ctx = run->ctx;

-	if (!ipu_rot_mode_is_irt(ctx->rot_mode)) {
-		/* this was NOT a rotation operation, shouldn't happen */
-		dev_err(priv->ipu->dev, "Unexpected rotation interrupt\n");
-		spin_unlock_irqrestore(&chan->irqlock, flags);
-		return IRQ_HANDLED;
+	if (irq == chan->out_eof_irq) {
+		if (ipu_rot_mode_is_irt(ctx->rot_mode)) {
+			/* this is a rotation op, just ignore */
+			ret = IRQ_HANDLED;
+			goto out;
+		}
+	} else if (irq == chan->rot_out_eof_irq) {
+		if (!ipu_rot_mode_is_irt(ctx->rot_mode)) {
+			/* this was NOT a rotation op, shouldn't happen */
+			dev_err(priv->ipu->dev,
+				"Unexpected rotation interrupt\n");
+			ret = IRQ_HANDLED;
+			goto out;
+		}
+	} else {
+		dev_err(priv->ipu->dev, "Received unknown irq %d\n", irq);
+		ret = IRQ_NONE;
+		goto out;
	}

	ret = do_irq(run);
@@ -1142,7 +1124,7 @@ static int get_ipu_resources(struct ipu_image_convert_chan *chan)
						  chan->out_chan,
						  IPU_IRQ_EOF);

-	ret = request_threaded_irq(chan->out_eof_irq, norotate_irq, do_bh,
+	ret = request_threaded_irq(chan->out_eof_irq, eof_irq, do_bh,
				   0, "ipu-ic", chan);
	if (ret < 0) {
		dev_err(priv->ipu->dev, "could not acquire irq %d\n",
@@ -1155,7 +1137,7 @@ static int get_ipu_resources(struct ipu_image_convert_chan *chan)
						      chan->rotation_out_chan,
						      IPU_IRQ_EOF);

-	ret = request_threaded_irq(chan->rot_out_eof_irq, rotate_irq, do_bh,
+	ret = request_threaded_irq(chan->rot_out_eof_irq, eof_irq, do_bh,
				   0, "ipu-ic", chan);
	if (ret < 0) {
		dev_err(priv->ipu->dev, "could not acquire irq %d\n",
@@ -362,13 +362,13 @@ static int hidinput_query_battery_capacity(struct hid_device *dev)
	u8 *buf;
	int ret;

-	buf = kmalloc(2, GFP_KERNEL);
+	buf = kmalloc(4, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

-	ret = hid_hw_raw_request(dev, dev->battery_report_id, buf, 2,
+	ret = hid_hw_raw_request(dev, dev->battery_report_id, buf, 4,
				 dev->battery_report_type, HID_REQ_GET_REPORT);
-	if (ret != 2) {
+	if (ret < 2) {
		kfree(buf);
		return -ENODATA;
	}
@@ -1228,6 +1228,8 @@ channel_message_table[CHANNELMSG_COUNT] = {
	{ CHANNELMSG_19,			0, NULL },
	{ CHANNELMSG_20,			0, NULL },
	{ CHANNELMSG_TL_CONNECT_REQUEST,	0, NULL },
+	{ CHANNELMSG_22,			0, NULL },
+	{ CHANNELMSG_TL_CONNECT_RESULT,		0, NULL },
};

/*
@@ -1239,23 +1241,14 @@ void vmbus_onmessage(void *context)
{
	struct hv_message *msg = context;
	struct vmbus_channel_message_header *hdr;
-	int size;

	hdr = (struct vmbus_channel_message_header *)msg->u.payload;
-	size = msg->header.payload_size;
-
-	if (hdr->msgtype >= CHANNELMSG_COUNT) {
-		pr_err("Received invalid channel message type %d size %d\n",
-			   hdr->msgtype, size);
-		print_hex_dump_bytes("", DUMP_PREFIX_NONE,
-				     (unsigned char *)msg->u.payload, size);
-		return;
-	}

-	if (channel_message_table[hdr->msgtype].message_handler)
-		channel_message_table[hdr->msgtype].message_handler(hdr);
-	else
-		pr_err("Unhandled channel message type %d\n", hdr->msgtype);
+	/*
+	 * vmbus_on_msg_dpc() makes sure the hdr->msgtype here can not go
+	 * out of bound and the message_handler pointer can not be NULL.
+	 */
+	channel_message_table[hdr->msgtype].message_handler(hdr);
}

/*
@@ -890,6 +890,10 @@ void vmbus_on_msg_dpc(unsigned long data)
	}

	entry = &channel_message_table[hdr->msgtype];
+
+	if (!entry->message_handler)
+		goto msg_handled;
+
	if (entry->handler_type == VMHT_BLOCKING) {
		ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
		if (ctx == NULL)
@@ -673,6 +673,12 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)

	/* Re-enable the TMC if need be */
	if (drvdata->mode == CS_MODE_SYSFS) {
+		/* There is no point in reading a TMC in HW FIFO mode */
+		mode = readl_relaxed(drvdata->base + TMC_MODE);
+		if (mode != TMC_MODE_CIRCULAR_BUFFER) {
+			spin_unlock_irqrestore(&drvdata->spinlock, flags);
+			return -EINVAL;
+		}
		/*
		 * The trace run will continue with the same allocated trace
		 * buffer. As such zero-out the buffer so that we don't end
@@ -538,13 +538,14 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
			rcar_i2c_write(priv, ICSIER, SDR | SSR | SAR);
		}

-		rcar_i2c_write(priv, ICSSR, ~SAR & 0xff);
+		/* Clear SSR, too, because of old STOPs to other clients than us */
+		rcar_i2c_write(priv, ICSSR, ~(SAR | SSR) & 0xff);
	}

	/* master sent stop */
	if (ssr_filtered & SSR) {
		i2c_slave_event(priv->slave, I2C_SLAVE_STOP, &value);
-		rcar_i2c_write(priv, ICSIER, SAR | SSR);
+		rcar_i2c_write(priv, ICSIER, SAR);
		rcar_i2c_write(priv, ICSSR, ~SSR & 0xff);
	}

@@ -802,7 +803,7 @@ static int rcar_reg_slave(struct i2c_client *slave)
	priv->slave = slave;
	rcar_i2c_write(priv, ICSAR, slave->addr);
	rcar_i2c_write(priv, ICSSR, 0);
-	rcar_i2c_write(priv, ICSIER, SAR | SSR);
+	rcar_i2c_write(priv, ICSIER, SAR);
	rcar_i2c_write(priv, ICSCR, SIE | SDBS);

	return 0;
@@ -814,12 +815,14 @@ static int rcar_unreg_slave(struct i2c_client *slave)

	WARN_ON(!priv->slave);

-	/* disable irqs and ensure none is running before clearing ptr */
+	/* ensure no irq is running before clearing ptr */
+	disable_irq(priv->irq);
	rcar_i2c_write(priv, ICSIER, 0);
-	rcar_i2c_write(priv, ICSCR, 0);
	rcar_i2c_write(priv, ICSSR, 0);
+	enable_irq(priv->irq);
+	rcar_i2c_write(priv, ICSCR, SDBS);
	rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */

-	synchronize_irq(priv->irq);
	priv->slave = NULL;

	pm_runtime_put(rcar_i2c_priv_to_dev(priv));
@@ -22,10 +22,8 @@ int i2c_slave_register(struct i2c_client *client, i2c_slave_cb_t slave_cb)
{
	int ret;

-	if (!client || !slave_cb) {
-		WARN(1, "insufficient data\n");
+	if (WARN(IS_ERR_OR_NULL(client) || !slave_cb, "insufficient data\n"))
		return -EINVAL;
-	}

	if (!(client->flags & I2C_CLIENT_SLAVE))
		dev_warn(&client->dev, "%s: client slave flag not set. You might see address collisions\n",
@@ -64,6 +62,9 @@ int i2c_slave_unregister(struct i2c_client *client)
{
	int ret;

+	if (IS_ERR_OR_NULL(client))
+		return -EINVAL;
+
	if (!client->adapter->algo->unreg_slave) {
		dev_err(&client->dev, "%s: not supported by adapter\n", __func__);
		return -EOPNOTSUPP;
@@ -417,7 +417,7 @@ static int ad5592r_read_raw(struct iio_dev *iio_dev,
		s64 tmp = *val * (3767897513LL / 25LL);
		*val = div_s64_rem(tmp, 1000000000LL, val2);

-		ret = IIO_VAL_INT_PLUS_MICRO;
+		return IIO_VAL_INT_PLUS_MICRO;
	} else {
		int mult;

@@ -448,7 +448,7 @@ static int ad5592r_read_raw(struct iio_dev *iio_dev,
		ret = IIO_VAL_INT;
		break;
	default:
-		ret = -EINVAL;
+		return -EINVAL;
	}

unlock:
@@ -509,7 +509,7 @@ void ipoib_ib_dev_cleanup(struct net_device *dev);

int ipoib_ib_dev_open_default(struct net_device *dev);
int ipoib_ib_dev_open(struct net_device *dev);
-int ipoib_ib_dev_stop(struct net_device *dev);
+void ipoib_ib_dev_stop(struct net_device *dev);
void ipoib_ib_dev_up(struct net_device *dev);
void ipoib_ib_dev_down(struct net_device *dev);
int ipoib_ib_dev_stop_default(struct net_device *dev);
@@ -809,7 +809,7 @@ timeout:
	return 0;
}

-int ipoib_ib_dev_stop(struct net_device *dev)
+void ipoib_ib_dev_stop(struct net_device *dev)
{
	struct ipoib_dev_priv *priv = ipoib_priv(dev);

@@ -817,8 +817,6 @@ void ipoib_ib_dev_stop(struct net_device *dev)

	clear_bit(IPOIB_FLAG_INITIALIZED, &priv->flags);
	ipoib_flush_ah(dev);
-
-	return 0;
}

void ipoib_ib_tx_timer_func(unsigned long ctx)
@@ -454,7 +454,7 @@ static ssize_t fsp_attr_set_setreg(struct psmouse *psmouse, void *data,
 
 	fsp_reg_write_enable(psmouse, false);
 
-	return count;
+	return retval;
 }
 
 PSMOUSE_DEFINE_WO_ATTR(setreg, S_IWUSR, NULL, fsp_attr_set_setreg);
@@ -601,13 +601,21 @@ out_free_table:
 
 static void intel_teardown_irq_remapping(struct intel_iommu *iommu)
 {
+	struct fwnode_handle *fn;
+
 	if (iommu && iommu->ir_table) {
 		if (iommu->ir_msi_domain) {
+			fn = iommu->ir_msi_domain->fwnode;
+
 			irq_domain_remove(iommu->ir_msi_domain);
+			irq_domain_free_fwnode(fn);
 			iommu->ir_msi_domain = NULL;
 		}
 		if (iommu->ir_domain) {
+			fn = iommu->ir_domain->fwnode;
+
 			irq_domain_remove(iommu->ir_domain);
+			irq_domain_free_fwnode(fn);
 			iommu->ir_domain = NULL;
 		}
 		free_pages((unsigned long)iommu->ir_table->base,
@@ -101,8 +101,11 @@ static ssize_t debug_read_regs(struct file *file, char __user *userbuf,
 	mutex_lock(&iommu_debug_lock);
 
 	bytes = omap_iommu_dump_ctx(obj, p, count);
+	if (bytes < 0)
+		goto err;
 	bytes = simple_read_from_buffer(userbuf, count, ppos, buf, bytes);
 
+err:
 	mutex_unlock(&iommu_debug_lock);
 	kfree(buf);
@@ -2199,6 +2199,7 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
 {
 	msi_alloc_info_t *info = args;
 	struct its_device *its_dev = info->scratchpad[0].ptr;
+	struct irq_data *irqd;
 	irq_hw_number_t hwirq;
 	int err;
 	int i;
@@ -2214,7 +2215,9 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
 
 		irq_domain_set_hwirq_and_chip(domain, virq + i,
 					      hwirq + i, &its_irq_chip, its_dev);
-		irqd_set_single_target(irq_desc_get_irq_data(irq_to_desc(virq + i)));
+		irqd = irq_get_irq_data(virq + i);
+		irqd_set_single_target(irqd);
+		irqd_set_affinity_on_activate(irqd);
 		pr_debug("ID:%d pID:%d vID:%d\n",
 			 (int)(hwirq + i - its_dev->event_map.lpi_base),
 			 (int)(hwirq + i), virq + i);
@@ -23,7 +23,7 @@
 #include <linux/spinlock.h>
 
 struct mtk_sysirq_chip_data {
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	u32 nr_intpol_bases;
 	void __iomem **intpol_bases;
 	u32 *intpol_words;
@@ -45,7 +45,7 @@ static int mtk_sysirq_set_type(struct irq_data *data, unsigned int type)
 	reg_index = chip_data->which_word[hwirq];
 	offset = hwirq & 0x1f;
 
-	spin_lock_irqsave(&chip_data->lock, flags);
+	raw_spin_lock_irqsave(&chip_data->lock, flags);
 	value = readl_relaxed(base + reg_index * 4);
 	if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_EDGE_FALLING) {
 		if (type == IRQ_TYPE_LEVEL_LOW)
@@ -61,7 +61,7 @@ static int mtk_sysirq_set_type(struct irq_data *data, unsigned int type)
 
 	data = data->parent_data;
 	ret = data->chip->irq_set_type(data, type);
-	spin_unlock_irqrestore(&chip_data->lock, flags);
+	raw_spin_unlock_irqrestore(&chip_data->lock, flags);
 	return ret;
 }
 
@@ -220,7 +220,7 @@ static int __init mtk_sysirq_of_init(struct device_node *node,
 		ret = -ENOMEM;
 		goto out_free_which_word;
 	}
-	spin_lock_init(&chip_data->lock);
+	raw_spin_lock_init(&chip_data->lock);
 
 	return 0;
@@ -173,6 +173,7 @@ void led_classdev_suspend(struct led_classdev *led_cdev)
 {
 	led_cdev->flags |= LED_SUSPENDED;
 	led_set_brightness_nopm(led_cdev, 0);
+	flush_work(&led_cdev->set_brightness_work);
 }
 EXPORT_SYMBOL_GPL(led_classdev_suspend);
 
@@ -207,21 +207,33 @@ static int pm860x_led_probe(struct platform_device *pdev)
 	data->cdev.brightness_set_blocking = pm860x_led_set;
 	mutex_init(&data->lock);
 
-	ret = devm_led_classdev_register(chip->dev, &data->cdev);
+	ret = led_classdev_register(chip->dev, &data->cdev);
 	if (ret < 0) {
 		dev_err(&pdev->dev, "Failed to register LED: %d\n", ret);
 		return ret;
 	}
 	pm860x_led_set(&data->cdev, 0);
+
+	platform_set_drvdata(pdev, data);
+
 	return 0;
 }
 
+static int pm860x_led_remove(struct platform_device *pdev)
+{
+	struct pm860x_led *data = platform_get_drvdata(pdev);
+
+	led_classdev_unregister(&data->cdev);
+
+	return 0;
+}
+
 static struct platform_driver pm860x_led_driver = {
 	.driver	= {
 		.name	= "88pm860x-led",
 	},
 	.probe	= pm860x_led_probe,
+	.remove	= pm860x_led_remove,
 };
 
 module_platform_driver(pm860x_led_driver);
@@ -113,12 +113,23 @@ static int da903x_led_probe(struct platform_device *pdev)
 	led->flags = pdata->flags;
 	led->master = pdev->dev.parent;
 
-	ret = devm_led_classdev_register(led->master, &led->cdev);
+	ret = led_classdev_register(led->master, &led->cdev);
 	if (ret) {
 		dev_err(&pdev->dev, "failed to register LED %d\n", id);
 		return ret;
 	}
+
+	platform_set_drvdata(pdev, led);
 
 	return 0;
 }
 
+static int da903x_led_remove(struct platform_device *pdev)
+{
+	struct da903x_led *led = platform_get_drvdata(pdev);
+
+	led_classdev_unregister(&led->cdev);
+
+	return 0;
+}
+
@@ -127,6 +138,7 @@ static struct platform_driver da903x_led_driver = {
 		.name	= "da903x-led",
 	},
 	.probe		= da903x_led_probe,
+	.remove		= da903x_led_remove,
 };
 
 module_platform_driver(da903x_led_driver);
@@ -698,7 +698,7 @@ static int lm3533_led_probe(struct platform_device *pdev)
 
 	platform_set_drvdata(pdev, led);
 
-	ret = devm_led_classdev_register(pdev->dev.parent, &led->cdev);
+	ret = led_classdev_register(pdev->dev.parent, &led->cdev);
 	if (ret) {
 		dev_err(&pdev->dev, "failed to register LED %d\n", pdev->id);
 		return ret;
@@ -708,13 +708,18 @@ static int lm3533_led_probe(struct platform_device *pdev)
 
 	ret = lm3533_led_setup(led, pdata);
 	if (ret)
-		return ret;
+		goto err_deregister;
 
 	ret = lm3533_ctrlbank_enable(&led->cb);
 	if (ret)
-		return ret;
+		goto err_deregister;
 
 	return 0;
+
+err_deregister:
+	led_classdev_unregister(&led->cdev);
+
+	return ret;
 }
 
 static int lm3533_led_remove(struct platform_device *pdev)
@@ -724,6 +729,7 @@ static int lm3533_led_remove(struct platform_device *pdev)
 	dev_dbg(&pdev->dev, "%s\n", __func__);
 
 	lm3533_ctrlbank_disable(&led->cb);
+	led_classdev_unregister(&led->cdev);
 
 	return 0;
 }
@@ -168,18 +168,19 @@ static int lm355x_chip_init(struct lm355x_chip_data *chip)
 	/* input and output pins configuration */
 	switch (chip->type) {
 	case CHIP_LM3554:
-		reg_val = pdata->pin_tx2 | pdata->ntc_pin;
+		reg_val = (u32)pdata->pin_tx2 | (u32)pdata->ntc_pin;
 		ret = regmap_update_bits(chip->regmap, 0xE0, 0x28, reg_val);
 		if (ret < 0)
 			goto out;
-		reg_val = pdata->pass_mode;
+		reg_val = (u32)pdata->pass_mode;
 		ret = regmap_update_bits(chip->regmap, 0xA0, 0x04, reg_val);
 		if (ret < 0)
 			goto out;
 		break;
 
 	case CHIP_LM3556:
-		reg_val = pdata->pin_tx2 | pdata->ntc_pin | pdata->pass_mode;
+		reg_val = (u32)pdata->pin_tx2 | (u32)pdata->ntc_pin |
+			  (u32)pdata->pass_mode;
 		ret = regmap_update_bits(chip->regmap, 0x0A, 0xC4, reg_val);
 		if (ret < 0)
 			goto out;
@@ -283,12 +283,23 @@ static int wm831x_status_probe(struct platform_device *pdev)
 	drvdata->cdev.blink_set = wm831x_status_blink_set;
 	drvdata->cdev.groups = wm831x_status_groups;
 
-	ret = devm_led_classdev_register(wm831x->dev, &drvdata->cdev);
+	ret = led_classdev_register(wm831x->dev, &drvdata->cdev);
 	if (ret < 0) {
 		dev_err(&pdev->dev, "Failed to register LED: %d\n", ret);
 		return ret;
 	}
 
+	platform_set_drvdata(pdev, drvdata);
+
 	return 0;
 }
 
+static int wm831x_status_remove(struct platform_device *pdev)
+{
+	struct wm831x_status *drvdata = platform_get_drvdata(pdev);
+
+	led_classdev_unregister(&drvdata->cdev);
+
+	return 0;
+}
+
@@ -297,6 +308,7 @@ static struct platform_driver wm831x_status_driver = {
 		.name = "wm831x-status",
 	},
 	.probe = wm831x_status_probe,
+	.remove = wm831x_status_remove,
 };
 
 module_platform_driver(wm831x_status_driver);
@@ -319,7 +319,7 @@ int bch_btree_keys_alloc(struct btree_keys *b, unsigned page_order, gfp_t gfp)
 
 	b->page_order = page_order;
 
-	t->data = (void *) __get_free_pages(gfp, b->page_order);
+	t->data = (void *) __get_free_pages(__GFP_COMP|gfp, b->page_order);
 	if (!t->data)
 		goto err;
 
@@ -794,7 +794,7 @@ int bch_btree_cache_alloc(struct cache_set *c)
 	mutex_init(&c->verify_lock);
 
 	c->verify_ondisk = (void *)
-		__get_free_pages(GFP_KERNEL, ilog2(bucket_pages(c)));
+		__get_free_pages(GFP_KERNEL|__GFP_COMP, ilog2(bucket_pages(c)));
 
 	c->verify_data = mca_bucket_alloc(c, &ZERO_KEY, GFP_KERNEL);
 
@@ -838,8 +838,8 @@ int bch_journal_alloc(struct cache_set *c)
 	j->w[1].c = c;
 
 	if (!(init_fifo(&j->pin, JOURNAL_PIN, GFP_KERNEL)) ||
-	    !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)) ||
-	    !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)))
+	    !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP, JSET_BITS)) ||
+	    !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP, JSET_BITS)))
 		return -ENOMEM;
 
 	return 0;
@@ -1468,7 +1468,7 @@ void bch_cache_set_unregister(struct cache_set *c)
 }
 
 #define alloc_bucket_pages(gfp, c)			\
-	((void *) __get_free_pages(__GFP_ZERO|gfp, ilog2(bucket_pages(c))))
+	((void *) __get_free_pages(__GFP_ZERO|__GFP_COMP|gfp, ilog2(bucket_pages(c))))
 
 struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 {
@@ -1780,7 +1780,14 @@ found:
 	    sysfs_create_link(&c->kobj, &ca->kobj, buf))
 		goto err;
 
-	if (ca->sb.seq > c->sb.seq) {
+	/*
+	 * A special case is both ca->sb.seq and c->sb.seq are 0,
+	 * such condition happens on a new created cache device whose
+	 * super block is never flushed yet. In this case c->sb.version
+	 * and other members should be updated too, otherwise we will
+	 * have a mistaken super block version in cache set.
+	 */
+	if (ca->sb.seq > c->sb.seq || c->sb.seq == 0) {
 		c->sb.version = ca->sb.version;
 		memcpy(c->sb.set_uuid, ca->sb.set_uuid, 16);
 		c->sb.flags = ca->sb.flags;
@@ -410,7 +410,6 @@ struct cache {
 	spinlock_t lock;
 	struct list_head deferred_cells;
 	struct bio_list deferred_bios;
-	struct bio_list deferred_writethrough_bios;
 	sector_t migration_threshold;
 	wait_queue_head_t migration_wait;
 	atomic_t nr_allocated_migrations;
@@ -446,10 +445,10 @@ struct cache {
 	struct dm_kcopyd_client *copier;
 	struct workqueue_struct *wq;
 	struct work_struct deferred_bio_worker;
-	struct work_struct deferred_writethrough_worker;
 	struct work_struct migration_worker;
 	struct delayed_work waker;
 	struct dm_bio_prison_v2 *prison;
+	struct bio_set *bs;
 
 	mempool_t *migration_pool;
 
@@ -490,15 +489,6 @@ struct per_bio_data {
 	struct dm_bio_prison_cell_v2 *cell;
 	struct dm_hook_info hook_info;
 	sector_t len;
-
-	/*
-	 * writethrough fields.  These MUST remain at the end of this
-	 * structure and the 'cache' member must be the first as it
-	 * is used to determine the offset of the writethrough fields.
-	 */
-	struct cache *cache;
-	dm_cblock_t cblock;
-	struct dm_bio_details bio_details;
 };
 
 struct dm_cache_migration {
@@ -515,19 +505,19 @@ struct dm_cache_migration {
 
 /*----------------------------------------------------------------*/
 
-static bool writethrough_mode(struct cache_features *f)
+static bool writethrough_mode(struct cache *cache)
 {
-	return f->io_mode == CM_IO_WRITETHROUGH;
+	return cache->features.io_mode == CM_IO_WRITETHROUGH;
 }
 
-static bool writeback_mode(struct cache_features *f)
+static bool writeback_mode(struct cache *cache)
 {
-	return f->io_mode == CM_IO_WRITEBACK;
+	return cache->features.io_mode == CM_IO_WRITEBACK;
 }
 
-static inline bool passthrough_mode(struct cache_features *f)
+static inline bool passthrough_mode(struct cache *cache)
 {
-	return unlikely(f->io_mode == CM_IO_PASSTHROUGH);
+	return unlikely(cache->features.io_mode == CM_IO_PASSTHROUGH);
 }
 
 /*----------------------------------------------------------------*/
@@ -537,14 +527,9 @@ static void wake_deferred_bio_worker(struct cache *cache)
 	queue_work(cache->wq, &cache->deferred_bio_worker);
 }
 
-static void wake_deferred_writethrough_worker(struct cache *cache)
-{
-	queue_work(cache->wq, &cache->deferred_writethrough_worker);
-}
-
 static void wake_migration_worker(struct cache *cache)
 {
-	if (passthrough_mode(&cache->features))
+	if (passthrough_mode(cache))
 		return;
 
 	queue_work(cache->wq, &cache->migration_worker);
@@ -618,15 +603,9 @@ static unsigned lock_level(struct bio *bio)
  * Per bio data
  *--------------------------------------------------------------*/
 
-/*
- * If using writeback, leave out struct per_bio_data's writethrough fields.
- */
-#define PB_DATA_SIZE_WB (offsetof(struct per_bio_data, cache))
-#define PB_DATA_SIZE_WT (sizeof(struct per_bio_data))
-
 static size_t get_per_bio_data_size(struct cache *cache)
 {
-	return writethrough_mode(&cache->features) ? PB_DATA_SIZE_WT : PB_DATA_SIZE_WB;
+	return sizeof(struct per_bio_data);
 }
 
 static struct per_bio_data *get_per_bio_data(struct bio *bio, size_t data_size)
@@ -868,16 +847,23 @@ static void check_if_tick_bio_needed(struct cache *cache, struct bio *bio)
 	spin_unlock_irqrestore(&cache->lock, flags);
 }
 
-static void remap_to_origin_clear_discard(struct cache *cache, struct bio *bio,
-					  dm_oblock_t oblock)
+static void __remap_to_origin_clear_discard(struct cache *cache, struct bio *bio,
+					    dm_oblock_t oblock, bool bio_has_pbd)
 {
-	// FIXME: this is called way too much.
-	check_if_tick_bio_needed(cache, bio);
+	if (bio_has_pbd)
+		check_if_tick_bio_needed(cache, bio);
 	remap_to_origin(cache, bio);
 	if (bio_data_dir(bio) == WRITE)
 		clear_discard(cache, oblock_to_dblock(cache, oblock));
 }
 
+static void remap_to_origin_clear_discard(struct cache *cache, struct bio *bio,
+					  dm_oblock_t oblock)
+{
+	// FIXME: check_if_tick_bio_needed() is called way too much through this interface
+	__remap_to_origin_clear_discard(cache, bio, oblock, true);
+}
+
 static void remap_to_cache_dirty(struct cache *cache, struct bio *bio,
 				 dm_oblock_t oblock, dm_cblock_t cblock)
 {
@@ -937,57 +923,26 @@ static void issue_op(struct bio *bio, void *context)
 	accounted_request(cache, bio);
 }
 
-static void defer_writethrough_bio(struct cache *cache, struct bio *bio)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&cache->lock, flags);
-	bio_list_add(&cache->deferred_writethrough_bios, bio);
-	spin_unlock_irqrestore(&cache->lock, flags);
-
-	wake_deferred_writethrough_worker(cache);
-}
-
-static void writethrough_endio(struct bio *bio)
-{
-	struct per_bio_data *pb = get_per_bio_data(bio, PB_DATA_SIZE_WT);
-
-	dm_unhook_bio(&pb->hook_info, bio);
-
-	if (bio->bi_status) {
-		bio_endio(bio);
-		return;
-	}
-
-	dm_bio_restore(&pb->bio_details, bio);
-	remap_to_cache(pb->cache, bio, pb->cblock);
-
-	/*
-	 * We can't issue this bio directly, since we're in interrupt
-	 * context.  So it gets put on a bio list for processing by the
-	 * worker thread.
-	 */
-	defer_writethrough_bio(pb->cache, bio);
-}
-
 /*
- * FIXME: send in parallel, huge latency as is.
  * When running in writethrough mode we need to send writes to clean blocks
- * to both the cache and origin devices.  In future we'd like to clone the
- * bio and send them in parallel, but for now we're doing them in
- * series as this is easier.
+ * to both the cache and origin devices.  Clone the bio and send them in parallel.
  */
-static void remap_to_origin_then_cache(struct cache *cache, struct bio *bio,
-				       dm_oblock_t oblock, dm_cblock_t cblock)
+static void remap_to_origin_and_cache(struct cache *cache, struct bio *bio,
+				      dm_oblock_t oblock, dm_cblock_t cblock)
 {
-	struct per_bio_data *pb = get_per_bio_data(bio, PB_DATA_SIZE_WT);
+	struct bio *origin_bio = bio_clone_fast(bio, GFP_NOIO, cache->bs);
 
-	pb->cache = cache;
-	pb->cblock = cblock;
-	dm_hook_bio(&pb->hook_info, bio, writethrough_endio, NULL);
-	dm_bio_record(&pb->bio_details, bio);
+	BUG_ON(!origin_bio);
 
-	remap_to_origin_clear_discard(pb->cache, bio, oblock);
+	bio_chain(origin_bio, bio);
+	/*
+	 * Passing false to __remap_to_origin_clear_discard() skips
+	 * all code that might use per_bio_data (since clone doesn't have it)
+	 */
+	__remap_to_origin_clear_discard(cache, origin_bio, oblock, false);
+	submit_bio(origin_bio);
+
 	remap_to_cache(cache, bio, cblock);
 }
 /*----------------------------------------------------------------
@@ -1209,7 +1164,7 @@ static bool bio_writes_complete_block(struct cache *cache, struct bio *bio)
 
 static bool optimisable_bio(struct cache *cache, struct bio *bio, dm_oblock_t block)
 {
-	return writeback_mode(&cache->features) &&
+	return writeback_mode(cache) &&
 		(is_discarded_oblock(cache, block) || bio_writes_complete_block(cache, bio));
 }
 
@@ -1862,7 +1817,7 @@ static int map_bio(struct cache *cache, struct bio *bio, dm_oblock_t block,
 	 * Passthrough always maps to the origin, invalidating any
 	 * cache blocks that are written to.
 	 */
-	if (passthrough_mode(&cache->features)) {
+	if (passthrough_mode(cache)) {
 		if (bio_data_dir(bio) == WRITE) {
 			bio_drop_shared_lock(cache, bio);
 			atomic_inc(&cache->stats.demotion);
@@ -1871,9 +1826,9 @@ static int map_bio(struct cache *cache, struct bio *bio, dm_oblock_t block,
 		remap_to_origin_clear_discard(cache, bio, block);
 
 	} else {
-		if (bio_data_dir(bio) == WRITE && writethrough_mode(&cache->features) &&
+		if (bio_data_dir(bio) == WRITE && writethrough_mode(cache) &&
 		    !is_dirty(cache, cblock)) {
-			remap_to_origin_then_cache(cache, bio, block, cblock);
+			remap_to_origin_and_cache(cache, bio, block, cblock);
 			accounted_begin(cache, bio);
 		} else
 			remap_to_cache_dirty(cache, bio, block, cblock);
@@ -2003,28 +1958,6 @@ static void process_deferred_bios(struct work_struct *ws)
 	schedule_commit(&cache->committer);
 }
 
-static void process_deferred_writethrough_bios(struct work_struct *ws)
-{
-	struct cache *cache = container_of(ws, struct cache, deferred_writethrough_worker);
-
-	unsigned long flags;
-	struct bio_list bios;
-	struct bio *bio;
-
-	bio_list_init(&bios);
-
-	spin_lock_irqsave(&cache->lock, flags);
-	bio_list_merge(&bios, &cache->deferred_writethrough_bios);
-	bio_list_init(&cache->deferred_writethrough_bios);
-	spin_unlock_irqrestore(&cache->lock, flags);
-
-	/*
-	 * These bios have already been through accounted_begin()
-	 */
-	while ((bio = bio_list_pop(&bios)))
-		generic_make_request(bio);
-}
-
 /*----------------------------------------------------------------
  * Main worker loop
  *--------------------------------------------------------------*/
@@ -2132,6 +2065,9 @@ static void destroy(struct cache *cache)
 		kfree(cache->ctr_args[i]);
 	kfree(cache->ctr_args);
 
+	if (cache->bs)
+		bioset_free(cache->bs);
+
 	kfree(cache);
 }
 
@@ -2589,6 +2525,13 @@ static int cache_create(struct cache_args *ca, struct cache **result)
 	cache->features = ca->features;
 	ti->per_io_data_size = get_per_bio_data_size(cache);
 
+	if (writethrough_mode(cache)) {
+		/* Create bioset for writethrough bios issued to origin */
+		cache->bs = bioset_create(BIO_POOL_SIZE, 0, 0);
+		if (!cache->bs)
+			goto bad;
+	}
+
 	cache->callbacks.congested_fn = cache_is_congested;
 	dm_table_add_target_callbacks(ti->table, &cache->callbacks);
 
@@ -2649,7 +2592,7 @@ static int cache_create(struct cache_args *ca, struct cache **result)
 		goto bad;
 	}
 
-	if (passthrough_mode(&cache->features)) {
+	if (passthrough_mode(cache)) {
 		bool all_clean;
 
 		r = dm_cache_metadata_all_clean(cache->cmd, &all_clean);
@@ -2670,7 +2613,6 @@ static int cache_create(struct cache_args *ca, struct cache **result)
 	spin_lock_init(&cache->lock);
 	INIT_LIST_HEAD(&cache->deferred_cells);
 	bio_list_init(&cache->deferred_bios);
-	bio_list_init(&cache->deferred_writethrough_bios);
 	atomic_set(&cache->nr_allocated_migrations, 0);
 	atomic_set(&cache->nr_io_migrations, 0);
 	init_waitqueue_head(&cache->migration_wait);
@@ -2709,8 +2651,6 @@ static int cache_create(struct cache_args *ca, struct cache **result)
 		goto bad;
 	}
 	INIT_WORK(&cache->deferred_bio_worker, process_deferred_bios);
-	INIT_WORK(&cache->deferred_writethrough_worker,
-		  process_deferred_writethrough_bios);
 	INIT_WORK(&cache->migration_worker, check_migrations);
 	INIT_DELAYED_WORK(&cache->waker, do_waker);
 
@@ -3279,13 +3219,13 @@ static void cache_status(struct dm_target *ti, status_type_t type,
 		else
 			DMEMIT("1 ");
 
-		if (writethrough_mode(&cache->features))
+		if (writethrough_mode(cache))
 			DMEMIT("writethrough ");
 
-		else if (passthrough_mode(&cache->features))
+		else if (passthrough_mode(cache))
 			DMEMIT("passthrough ");
 
-		else if (writeback_mode(&cache->features))
+		else if (writeback_mode(cache))
 			DMEMIT("writeback ");
 
 		else {
@@ -3451,7 +3391,7 @@ static int process_invalidate_cblocks_message(struct cache *cache, unsigned coun
 	unsigned i;
 	struct cblock_range range;
 
-	if (!passthrough_mode(&cache->features)) {
+	if (!passthrough_mode(cache)) {
 		DMERR("%s: cache has to be in passthrough mode for invalidation",
 		      cache_device_name(cache));
 		return -EPERM;
@@ -95,9 +95,6 @@ static void dm_old_stop_queue(struct request_queue *q)
 
 static void dm_mq_stop_queue(struct request_queue *q)
 {
-	if (blk_mq_queue_stopped(q))
-		return;
-
 	blk_mq_quiesce_queue(q);
 }
 
@@ -1423,6 +1423,7 @@ static void unlock_all_bitmaps(struct mddev *mddev)
 			}
 		}
 		kfree(cinfo->other_bitmap_lockres);
+		cinfo->other_bitmap_lockres = NULL;
 	}
 }
 
@@ -3593,6 +3593,7 @@ static int need_this_block(struct stripe_head *sh, struct stripe_head_state *s,
 	 * is missing/faulty, then we need to read everything we can.
 	 */
 	if (sh->raid_conf->level != 6 &&
+	    sh->raid_conf->rmw_level != PARITY_DISABLE_RMW &&
 	    sh->sector < sh->raid_conf->mddev->recovery_cp)
 		/* reconstruct-write isn't being forced */
 		return 0;
@@ -4829,7 +4830,7 @@ static void handle_stripe(struct stripe_head *sh)
 	 * or to load a block that is being partially written.
 	 */
 	if (s.to_read || s.non_overwrite
-	    || (conf->level == 6 && s.to_write && s.failed)
+	    || (s.to_write && s.failed)
	    || (s.syncing && (s.uptodate + s.compute < disks))
 	    || s.replacing
 	    || s.expanding)
@@ -271,6 +271,8 @@ static int node_probe(struct fw_unit *unit, const struct ieee1394_device_id *id)
 
 	name_len = fw_csr_string(unit->directory, CSR_MODEL,
 				 name, sizeof(name));
+	if (name_len < 0)
+		return name_len;
 	for (i = ARRAY_SIZE(model_names); --i; )
 		if (strlen(model_names[i]) <= name_len &&
 		    strncmp(name, model_names[i], name_len) == 0)
@@ -1258,6 +1258,9 @@ static int fimc_md_get_pinctrl(struct fimc_md *fmd)
 
 	pctl->state_idle = pinctrl_lookup_state(pctl->pinctrl,
 					PINCTRL_STATE_IDLE);
+	if (IS_ERR(pctl->state_idle))
+		return PTR_ERR(pctl->state_idle);
+
 	return 0;
 }
 
@@ -2290,7 +2290,7 @@ static int preview_init_entities(struct isp_prev_device *prev)
 	me->ops = &preview_media_ops;
 	ret = media_entity_pads_init(me, PREV_PADS_NUM, pads);
 	if (ret < 0)
-		return ret;
+		goto error_handler_free;
 
 	preview_init_formats(sd, NULL);
 
@@ -2323,6 +2323,8 @@ error_video_out:
 	omap3isp_video_cleanup(&prev->video_in);
 error_video_in:
 	media_entity_cleanup(&prev->subdev.entity);
+error_handler_free:
+	v4l2_ctrl_handler_free(&prev->ctrls);
 	return ret;
 }
@@ -1528,6 +1528,15 @@ err_irq:
 	arizona_irq_exit(arizona);
 err_pm:
 	pm_runtime_disable(arizona->dev);
+
+	switch (arizona->pdata.clk32k_src) {
+	case ARIZONA_32KZ_MCLK1:
+	case ARIZONA_32KZ_MCLK2:
+		arizona_clk32k_disable(arizona);
+		break;
+	default:
+		break;
+	}
 err_reset:
 	arizona_enable_reset(arizona);
 	regulator_disable(arizona->dcvdd);
@@ -1550,6 +1559,15 @@ int arizona_dev_exit(struct arizona *arizona)
 	regulator_disable(arizona->dcvdd);
 	regulator_put(arizona->dcvdd);
 
+	switch (arizona->pdata.clk32k_src) {
+	case ARIZONA_32KZ_MCLK1:
+	case ARIZONA_32KZ_MCLK2:
+		arizona_clk32k_disable(arizona);
+		break;
+	default:
+		break;
+	}
+
 	mfd_remove_devices(arizona->dev);
 	arizona_free_irq(arizona, ARIZONA_IRQ_UNDERCLOCKED, arizona);
 	arizona_free_irq(arizona, ARIZONA_IRQ_OVERCLOCKED, arizona);
@@ -294,7 +294,11 @@ static void dln2_rx(struct urb *urb)
 	len = urb->actual_length - sizeof(struct dln2_header);
 
 	if (handle == DLN2_HANDLE_EVENT) {
+		unsigned long flags;
+
+		spin_lock_irqsave(&dln2->event_cb_lock, flags);
 		dln2_run_event_callbacks(dln2, id, echo, data, len);
+		spin_unlock_irqrestore(&dln2->event_cb_lock, flags);
 	} else {
 		/* URB will be re-submitted in _dln2_transfer (free_rx_slot) */
 		if (dln2_transfer_complete(dln2, urb, handle, echo))
@@ -606,7 +606,7 @@ static struct afu_config_record *cxl_sysfs_afu_new_cr(struct cxl_afu *afu, int c
 	rc = kobject_init_and_add(&cr->kobj, &afu_config_record_type,
 				  &afu->dev.kobj, "cr%i", cr->cr);
 	if (rc)
-		goto err;
+		goto err1;
 
 	rc = sysfs_create_bin_file(&cr->kobj, &cr->config_attr);
 	if (rc)
@@ -373,9 +373,6 @@ static int mtdchar_writeoob(struct file *file, struct mtd_info *mtd,
 	uint32_t retlen;
 	int ret = 0;
 
-	if (!(file->f_mode & FMODE_WRITE))
-		return -EPERM;
-
 	if (length > 4096)
 		return -EINVAL;
 
@@ -682,6 +679,48 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
 			return -EFAULT;
 	}
 
+	/*
+	 * Check the file mode to require "dangerous" commands to have write
+	 * permissions.
+	 */
+	switch (cmd) {
+	/* "safe" commands */
+	case MEMGETREGIONCOUNT:
+	case MEMGETREGIONINFO:
+	case MEMGETINFO:
+	case MEMREADOOB:
+	case MEMREADOOB64:
+	case MEMLOCK:
+	case MEMUNLOCK:
+	case MEMISLOCKED:
+	case MEMGETOOBSEL:
+	case MEMGETBADBLOCK:
+	case MEMSETBADBLOCK:
+	case OTPSELECT:
+	case OTPGETREGIONCOUNT:
+	case OTPGETREGIONINFO:
+	case OTPLOCK:
+	case ECCGETLAYOUT:
+	case ECCGETSTATS:
+	case MTDFILEMODE:
+	case BLKPG:
+	case BLKRRPART:
+		break;
+
+	/* "dangerous" commands */
+	case MEMERASE:
+	case MEMERASE64:
+	case MEMWRITEOOB:
+	case MEMWRITEOOB64:
+	case MEMWRITE:
+		if (!(file->f_mode & FMODE_WRITE))
+			return -EPERM;
+		break;
+
+	default:
+		return -ENOTTY;
+	}
+
 	switch (cmd) {
 	case MEMGETREGIONCOUNT:
 		if (copy_to_user(argp, &(mtd->numeraseregions), sizeof(int)))
@@ -729,9 +768,6 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
 	{
 		struct erase_info *erase;
 
-		if(!(file->f_mode & FMODE_WRITE))
-			return -EPERM;
-
 		erase=kzalloc(sizeof(struct erase_info),GFP_KERNEL);
 		if (!erase)
 			ret = -ENOMEM;
@@ -1055,9 +1091,6 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
 		ret = 0;
 		break;
 	}
-
-	default:
-		ret = -ENOTTY;
 	}
 
 	return ret;
@@ -1101,6 +1134,11 @@ static long mtdchar_compat_ioctl(struct file *file, unsigned int cmd,
 		struct mtd_oob_buf32 buf;
 		struct mtd_oob_buf32 __user *buf_user = argp;
 
+		if (!(file->f_mode & FMODE_WRITE)) {
+			ret = -EPERM;
+			break;
+		}
+
 		if (copy_from_user(&buf, argp, sizeof(buf)))
 			ret = -EFAULT;
 		else
@@ -435,11 +435,13 @@ struct qcom_nand_host {
  * among different NAND controllers.
  * @ecc_modes - ecc mode for NAND
  * @is_bam - whether NAND controller is using BAM
+ * @is_qpic - whether NAND CTRL is part of qpic IP
  * @dev_cmd_reg_start - NAND_DEV_CMD_* registers starting offset
  */
 struct qcom_nandc_props {
 	u32 ecc_modes;
 	bool is_bam;
+	bool is_qpic;
 	u32 dev_cmd_reg_start;
 };
 
@@ -2508,7 +2510,8 @@ static int qcom_nandc_setup(struct qcom_nand_controller *nandc)
 	u32 nand_ctrl;
 
 	/* kill onenand */
-	nandc_write(nandc, SFLASHC_BURST_CFG, 0);
+	if (!nandc->props->is_qpic)
+		nandc_write(nandc, SFLASHC_BURST_CFG, 0);
 	nandc_write(nandc, dev_cmd_reg_addr(nandc, NAND_DEV_CMD_VLD),
 		    NAND_DEV_CMD_VLD_VAL);
 
@@ -2779,12 +2782,14 @@ static const struct qcom_nandc_props ipq806x_nandc_props = {
 static const struct qcom_nandc_props ipq4019_nandc_props = {
 	.ecc_modes = (ECC_BCH_4BIT | ECC_BCH_8BIT),
 	.is_bam = true,
+	.is_qpic = true,
 	.dev_cmd_reg_start = 0x0,
 };
 
 static const struct qcom_nandc_props ipq8074_nandc_props = {
 	.ecc_modes = (ECC_BCH_4BIT | ECC_BCH_8BIT),
 	.is_bam = true,
+	.is_qpic = true,
 	.dev_cmd_reg_start = 0x7000,
 };
@@ -2450,7 +2450,6 @@ static const struct mv88e6xxx_ops mv88e6097_ops = {
|
||||
.port_set_frame_mode = mv88e6351_port_set_frame_mode,
|
||||
.port_set_egress_floods = mv88e6352_port_set_egress_floods,
|
||||
.port_set_ether_type = mv88e6351_port_set_ether_type,
|
||||
.port_set_jumbo_size = mv88e6165_port_set_jumbo_size,
|
||||
.port_egress_rate_limiting = mv88e6095_port_egress_rate_limiting,
|
||||
.port_pause_limit = mv88e6097_port_pause_limit,
|
||||
.port_disable_learn_limit = mv88e6xxx_port_disable_learn_limit,
|
||||
|
||||
@@ -746,7 +746,7 @@ static int hw_atl_a0_hw_multicast_list_set(struct aq_hw_s *self,
|
||||
int err = 0;
|
||||
|
||||
if (count > (HW_ATL_A0_MAC_MAX - HW_ATL_A0_MAC_MIN)) {
|
||||
err = EBADRQC;
|
||||
err = -EBADRQC;
|
||||
goto err_exit;
|
||||
}
|
||||
for (self->aq_nic_cfg->mc_list_count = 0U;
|
||||
|
||||
@@ -1167,7 +1167,7 @@ static int cn23xx_get_pf_num(struct octeon_device *oct)
|
||||
oct->pf_num = ((fdl_bit >> CN23XX_PCIE_SRIOV_FDL_BIT_POS) &
|
||||
CN23XX_PCIE_SRIOV_FDL_MASK);
|
||||
} else {
|
||||
ret = EINVAL;
|
||||
ret = -EINVAL;
|
||||
|
||||
/* Under some virtual environments, extended PCI regs are
|
||||
* inaccessible, in which case the above read will have failed.
|
||||
|
||||
@@ -1396,8 +1396,7 @@ static void enable_time_stamp(struct fman *fman)
|
||||
{
|
||||
struct fman_fpm_regs __iomem *fpm_rg = fman->fpm_regs;
|
||||
u16 fm_clk_freq = fman->state->fm_clk_freq;
|
||||
u32 tmp, intgr, ts_freq;
|
||||
u64 frac;
|
||||
u32 tmp, intgr, ts_freq, frac;
|
||||
|
||||
ts_freq = (u32)(1 << fman->state->count1_micro_bit);
|
||||
/* configure timestamp so that bit 8 will count 1 microsecond
|
||||
|
||||
@@ -1159,7 +1159,7 @@ int dtsec_del_hash_mac_address(struct fman_mac *dtsec, enet_addr_t *eth_addr)
|
||||
list_for_each(pos,
|
||||
&dtsec->multicast_addr_hash->lsts[bucket]) {
|
||||
hash_entry = ETH_HASH_ENTRY_OBJ(pos);
|
||||
if (hash_entry->addr == addr) {
|
||||
if (hash_entry && hash_entry->addr == addr) {
|
||||
list_del_init(&hash_entry->node);
|
||||
kfree(hash_entry);
|
||||
break;
|
||||
@@ -1172,7 +1172,7 @@ int dtsec_del_hash_mac_address(struct fman_mac *dtsec, enet_addr_t *eth_addr)
|
||||
list_for_each(pos,
|
||||
&dtsec->unicast_addr_hash->lsts[bucket]) {
|
||||
hash_entry = ETH_HASH_ENTRY_OBJ(pos);
|
||||
if (hash_entry->addr == addr) {
|
||||
if (hash_entry && hash_entry->addr == addr) {
|
||||
list_del_init(&hash_entry->node);
|
||||
kfree(hash_entry);
|
||||
break;
|
||||
|
||||