Merge 4.9.94 into android-4.9
Changes in 4.9.94

    qed: Fix overriding of supported autoneg value.
    cfg80211: make RATE_INFO_BW_20 the default
    md/raid5: make use of spin_lock_irq over local_irq_disable + spin_lock
    rtc: snvs: fix an incorrect check of return value
    x86/asm: Don't use RBP as a temporary register in csum_partial_copy_generic()
    x86/mm/kaslr: Use the _ASM_MUL macro for multiplication to work around Clang incompatibility
    ovl: persistent inode numbers for upper hardlinks
    NFSv4.1: RECLAIM_COMPLETE must handle NFS4ERR_CONN_NOT_BOUND_TO_SESSION
    x86/boot: Declare error() as noreturn
    IB/srpt: Fix abort handling
    IB/srpt: Avoid that aborting a command triggers a kernel warning
    af_key: Fix slab-out-of-bounds in pfkey_compile_policy.
    mac80211: bail out from prep_connection() if a reconfig is ongoing
    bna: Avoid reading past end of buffer
    qlge: Avoid reading past end of buffer
    ubi: fastmap: Fix slab corruption
    ipmi_ssif: unlock on allocation failure
    net: cdc_ncm: Fix TX zero padding
    net: ethernet: ti: cpsw: adjust cpsw fifos depth for fullduplex flow control
    lockd: fix lockd shutdown race
    drivers/misc/vmw_vmci/vmci_queue_pair.c: fix a couple integer overflow tests
    pidns: disable pid allocation if pid_ns_prepare_proc() is failed in alloc_pid()
    s390: move _text symbol to address higher than zero
    net/mlx4_en: Avoid adding steering rules with invalid ring
    qed: Correct doorbell configuration for !4Kb pages
    NFSv4.1: Work around a Linux server bug...
    CIFS: silence lockdep splat in cifs_relock_file()
    perf/callchain: Force USER_DS when invoking perf_callchain_user()
    blk-mq: NVMe 512B/4K+T10 DIF/DIX format returns I/O error on dd with split op
    net: qca_spi: Fix alignment issues in rx path
    netxen_nic: set rcode to the return status from the call to netxen_issue_cmd
    mdio: mux: Correct mdio_mux_init error path issues
    Input: elan_i2c - check if device is there before really probing
    Input: elantech - force relative mode on a certain module
    KVM: PPC: Book3S PR: Check copy_to/from_user return values
    irqchip/mbigen: Fix the clear register offset calculation
    vmxnet3: ensure that adapter is in proper state during force_close
    mm, vmstat: Remove spurious WARN() during zoneinfo print
    SMB2: Fix share type handling
    bus: brcmstb_gisb: Use register offsets with writes too
    bus: brcmstb_gisb: correct support for 64-bit address output
    PowerCap: Fix an error code in powercap_register_zone()
    iio: pressure: zpa2326: report interrupted case as failure
    ARM: dts: imx53-qsrb: Pulldown PMIC IRQ pin
    staging: wlan-ng: prism2mgmt.c: fixed a double endian conversion before calling hfa384x_drvr_setconfig16, also fixes relative sparse warning
    clk: renesas: rcar-gen2: Fix PLL0 on R-Car V2H and E2
    x86/tsc: Provide 'tsc=unstable' boot parameter
    powerpc/modules: If mprofile-kernel is enabled add it to vermagic
    ARM: dts: imx6qdl-wandboard: Fix audio channel swap
    i2c: mux: reg: put away the parent i2c adapter on probe failure
    arm64: perf: Ignore exclude_hv when kernel is running in HYP
    mdio: mux: fix device_node_continue.cocci warnings
    ipv6: avoid dad-failures for addresses with NODAD
    async_tx: Fix DMA_PREP_FENCE usage in do_async_gen_syndrome()
    KVM: arm: Restore banked registers and physical timer access on hyp_panic()
    KVM: arm64: Restore host physical timer access on hyp_panic()
    usb: dwc3: keystone: check return value
    btrfs: fix incorrect error return ret being passed to mapping_set_error
    ata: libahci: properly propagate return value of platform_get_irq()
    ipmr: vrf: Find VIFs using the actual device
    uio: fix incorrect memory leak cleanup
    neighbour: update neigh timestamps iff update is effective
    arp: honour gratuitous ARP _replies_
    ARM: dts: rockchip: fix rk322x i2s1 pinctrl error
    usb: chipidea: properly handle host or gadget initialization failure
    pxa_camera: fix module remove codepath for v4l2 clock
    USB: ene_usb6250: fix first command execution
    net: x25: fix one potential use-after-free issue
    USB: ene_usb6250: fix SCSI residue overwriting
    serial: 8250: omap: Disable DMA for console UART
    serial: sh-sci: Fix race condition causing garbage during shutdown
    net/wan/fsl_ucc_hdlc: fix unitialized variable warnings
    net/wan/fsl_ucc_hdlc: fix incorrect memory allocation
    fsl/qe: add bit description for SYNL register for GUMR
    sh_eth: Use platform device for printing before register_netdev()
    mlxsw: spectrum: Avoid possible NULL pointer dereference
    scsi: csiostor: fix use after free in csio_hw_use_fwconfig()
    powerpc/mm: Fix virt_addr_valid() etc. on 64-bit hash
    ath5k: fix memory leak on buf on failed eeprom read
    selftests/powerpc: Fix TM resched DSCR test with some compilers
    xfrm: fix state migration copy replay sequence numbers
    ASoC: simple-card: fix mic jack initialization
    iio: hi8435: avoid garbage event at first enable
    iio: hi8435: cleanup reset gpio
    iio: light: rpr0521 poweroff for probe fails
    ext4: handle the rest of ext4_mb_load_buddy() ENOMEM errors
    md-cluster: fix potential lock issue in add_new_disk
    ARM: davinci: da8xx: Create DSP device only when assigned memory
    ray_cs: Avoid reading past end of buffer
    net/wan/fsl_ucc_hdlc: fix muram allocation error
    leds: pca955x: Correct I2C Functionality
    perf/core: Fix error handling in perf_event_alloc()
    sched/numa: Use down_read_trylock() for the mmap_sem
    gpio: crystalcove: Do not write regular gpio registers for virtual GPIOs
    net/mlx5: Tolerate irq_set_affinity_hint() failures
    selinux: do not check open permission on sockets
    block: fix an error code in add_partition()
    mlx5: fix bug reading rss_hash_type from CQE
    net: ieee802154: fix net_device reference release too early
    libceph: NULL deref on crush_decode() error path
    perf report: Fix off-by-one for non-activation frames
    netfilter: ctnetlink: fix incorrect nf_ct_put during hash resize
    pNFS/flexfiles: missing error code in ff_layout_alloc_lseg()
    ASoC: rsnd: SSI PIO adjust to 24bit mode
    scsi: bnx2fc: fix race condition in bnx2fc_get_host_stats()
    fix race in drivers/char/random.c:get_reg()
    ext4: fix off-by-one on max nr_pages in ext4_find_unwritten_pgoff()
    ARM64: PCI: Fix struct acpi_pci_root_ops allocation failure path
    tcp: better validation of received ack sequences
    net: move somaxconn init from sysctl code
    Input: elan_i2c - clear INT before resetting controller
    bonding: Don't update slave->link until ready to commit
    cpuhotplug: Link lock stacks for hotplug callbacks
    PCI/msi: fix the pci_alloc_irq_vectors_affinity stub
    KVM: X86: Fix preempt the preemption timer cancel
    KVM: nVMX: Fix handling of lmsw instruction
    net: llc: add lock_sock in llc_ui_bind to avoid a race condition
    drm/msm: Take the mutex before calling msm_gem_new_impl
    i40iw: Fix sequence number for the first partial FPDU
    i40iw: Correct Q1/XF object count equation
    ARM: dts: ls1021a: add "fsl,ls1021a-esdhc" compatible string to esdhc node
    thermal: power_allocator: fix one race condition issue for thermal_instances list
    perf probe: Add warning message if there is unexpected event name
    l2tp: fix missing print session offset info
    rds; Reset rs->rs_bound_addr in rds_add_bound() failure path
    ACPI / video: Default lcd_only to true on Win8-ready and newer machines
    net/mlx4_en: Change default QoS settings
    VFS: close race between getcwd() and d_move()
    PM / devfreq: Fix potential NULL pointer dereference in governor_store
    hwmon: (ina2xx) Make calibration register value fixed
    media: videobuf2-core: don't go out of the buffer range
    ASoC: Intel: Skylake: Disable clock gating during firmware and library download
    ASoC: Intel: cht_bsw_rt5645: Analog Mic support
    scsi: libiscsi: Allow sd_shutdown on bad transport
    scsi: mpt3sas: Proper handling of set/clear of "ATA command pending" flag.
    irqchip/gic-v3: Fix the driver probe() fail due to disabled GICC entry
    ACPI: EC: Fix debugfs_create_*() usage
    mac80211: Fix setting TX power on monitor interfaces
    vfb: fix video mode and line_length being set when loaded
    gpio: label descriptors using the device name
    IB/rdmavt: Allocate CQ memory on the correct node
    blk-mq: fix race between updating nr_hw_queues and switching io sched
    backlight: tdo24m: Fix the SPI CS between transfers
    pinctrl: baytrail: Enable glitch filter for GPIOs used as interrupts
    ASoC: Intel: sst: Fix the return value of 'sst_send_byte_stream_mrfld()'
    rt2x00: do not pause queue unconditionally on error path
    wl1251: check return from call to wl1251_acx_arp_ip_filter
    hdlcdrv: Fix divide by zero in hdlcdrv_ioctl
    x86/efi: Disable runtime services on kexec kernel if booted with efi=old_map
    netfilter: conntrack: don't call iter for non-confirmed conntracks
    HID: i2c: Call acpi_device_fix_up_power for ACPI-enumerated devices
    ovl: filter trusted xattr for non-admin
    powerpc/[booke|4xx]: Don't clobber TCR[WP] when setting TCR[DIE]
    dmaengine: imx-sdma: Handle return value of clk_prepare_enable
    backlight: Report error on failure
    arm64: futex: Fix undefined behaviour with FUTEX_OP_OPARG_SHIFT usage
    net/mlx5: avoid build warning for uniprocessor
    cxgb4: FW upgrade fixes
    cxgb4: Fix netdev_features flag
    rtc: m41t80: fix SQW dividers override when setting a date
    i40evf: fix merge error in older patch
    rtc: opal: Handle disabled TPO in opal_get_tpo_time()
    rtc: interface: Validate alarm-time before handling rollover
    SUNRPC: ensure correct error is reported by xs_tcp_setup_socket()
    net: freescale: fix potential null pointer dereference
    clk: at91: fix clk-generated parenting
    drm/sun4i: Ignore the generic connectors for components
    dt-bindings: display: sun4i: Add allwinner,tcon-channel property
    mtd: nand: gpmi: Fix gpmi_nand_init() error path
    mtd: nand: check ecc->total sanity in nand_scan_tail
    KVM: SVM: do not zero out segment attributes if segment is unusable or not present
    clk: scpi: fix return type of __scpi_dvfs_round_rate
    clk: Fix __set_clk_rates error print-string
    powerpc/spufs: Fix coredump of SPU contexts
    drm/amdkfd: NULL dereference involving create_process()
    ath10k: add BMI parameters to fix calibration from DT/pre-cal
    perf trace: Add mmap alias for s390
    qlcnic: Fix a sleep-in-atomic bug in qlcnic_82xx_hw_write_wx_2M and qlcnic_82xx_hw_read_wx_2M
    arm64: kernel: restrict /dev/mem read() calls to linear region
    mISDN: Fix a sleep-in-atomic bug
    net: phy: micrel: Restore led_mode and clk_sel on resume
    RDMA/iw_cxgb4: Avoid touch after free error in ARP failure handlers
    RDMA/hfi1: fix array termination by appending NULL to attr array
    drm/omap: fix tiled buffer stride calculations
    powerpc/8xx: fix mpc8xx_get_irq() return on no irq
    cxgb4: fix incorrect cim_la output for T6
    Fix serial console on SNI RM400 machines
    bio-integrity: Do not allocate integrity context for bio w/o data
    ip6_tunnel: fix traffic class routing for tunnels
    skbuff: return -EMSGSIZE in skb_to_sgvec to prevent overflow
    macsec: check return value of skb_to_sgvec always
    sit: reload iphdr in ipip6_rcv
    net/mlx4: Fix the check in attaching steering rules
    net/mlx4: Check if Granular QoS per VF has been enabled before updating QP qos_vport
    perf header: Set proper module name when build-id event found
    perf report: Ensure the perf DSO mapping matches what libdw sees
    iwlwifi: mvm: fix firmware debug restart recording
    watchdog: f71808e_wdt: Add F71868 support
    iwlwifi: mvm: Fix command queue number on d0i3 flow
    iwlwifi: tt: move ucode_loaded check under mutex
    iwlwifi: pcie: only use d0i3 in suspend/resume if system_pm is set to d0i3
    iwlwifi: fix min API version for 7265D, 3168, 8000 and 8265
    tags: honor COMPILED_SOURCE with apart output directory
    ARM: dts: qcom: ipq4019: fix i2c_0 node
    e1000e: fix race condition around skb_tstamp_tx()
    igb: fix race condition with PTP_TX_IN_PROGRESS bits
    cxl: Unlock on error in probe
    cx25840: fix unchecked return values
    mceusb: sporadic RX truncation corruption fix
    net: phy: avoid genphy_aneg_done() for PHYs without clause 22 support
    ARM: imx: Add MXC_CPU_IMX6ULL and cpu_is_imx6ull
    nvme-pci: fix multiple ctrl removal scheduling
    nvme: fix hang in remove path
    KVM: nVMX: Update vmcs12->guest_linear_address on nested VM-exit
    e1000e: Undo e1000e_pm_freeze if __e1000_shutdown fails
    perf/core: Correct event creation with PERF_FORMAT_GROUP
    sched/deadline: Use the revised wakeup rule for suspending constrained dl tasks
    MIPS: mm: fixed mappings: correct initialisation
    MIPS: mm: adjust PKMAP location
    MIPS: kprobes: flush_insn_slot should flush only if probe initialised
    ARM: dts: armadillo800eva: Split LCD mux and gpio
    Fix loop device flush before configure v3
    net: emac: fix reset timeout with AR8035 phy
    perf tools: Decompress kernel module when reading DSO data
    perf tests: Decompress kernel module before objdump
    skbuff: only inherit relevant tx_flags
    xen: avoid type warning in xchg_xen_ulong
    X.509: Fix error code in x509_cert_parse()
    pinctrl: meson-gxbb: remove non-existing pin GPIOX_22
    coresight: Fix reference count for software sources
    coresight: tmc: Configure DMA mask appropriately
    stmmac: fix ptp header for GMAC3 hw timestamp
    geneve: add missing rx stats accounting
    crypto: omap-sham - buffer handling fixes for hashing later
    crypto: omap-sham - fix closing of hash with separate finalize call
    bnx2x: Allow vfs to disable txvlan offload
    sctp: fix recursive locking warning in sctp_do_peeloff
    net: fec: Add a fec_enet_clear_ethtool_stats() stub for CONFIG_M5272
    sparc64: ldc abort during vds iso boot
    iio: magnetometer: st_magn_spi: fix spi_device_id table
    net: ena: fix rare uncompleted admin command false alarm
    net: ena: fix race condition between submit and completion admin command
    net: ena: add missing return when ena_com_get_io_handlers() fails
    net: ena: add missing unmap bars on device removal
    net: ena: disable admin msix while working in polling mode
    clk: meson: meson8b: add compatibles for Meson8 and Meson8m2
    Bluetooth: Send HCI Set Event Mask Page 2 command only when needed
    cpuidle: dt: Add missing 'of_node_put()'
    ACPICA: OSL: Add support to exclude stdarg.h
    ACPICA: Events: Add runtime stub support for event APIs
    ACPICA: Disassembler: Abort on an invalid/unknown AML opcode
    s390/dasd: fix hanging safe offline
    vxlan: dont migrate permanent fdb entries during learn
    hsr: fix incorrect warning
    selftests: kselftest_harness: Fix compile warning
    drm/vc4: Fix resource leak in 'vc4_get_hang_state_ioctl()' in error handling path
    bcache: stop writeback thread after detaching
    bcache: segregate flash only volume write streams
    scsi: libsas: fix memory leak in sas_smp_get_phy_events()
    scsi: libsas: fix error when getting phy events
    scsi: libsas: initialize sas_phy status according to response of DISCOVER
    blk-mq: fix kernel oops in blk_mq_tag_idle()
    tty: n_gsm: Allow ADM response in addition to UA for control dlci
    EDAC, mv64x60: Fix an error handling path
    cxgb4vf: Fix SGE FL buffer initialization logic for 64K pages
    sdhci: Advertise 2.0v supply on SDIO host controller
    Input: goodix - disable IRQs while suspended
    mtd: mtd_oobtest: Handle bitflips during reads
    perf tools: Fix copyfile_offset update of output offset
    ipsec: check return value of skb_to_sgvec always
    rxrpc: check return value of skb_to_sgvec always
    virtio_net: check return value of skb_to_sgvec always
    virtio_net: check return value of skb_to_sgvec in one more location
    random: use lockless method of accessing and updating f->reg_idx
    clk: at91: fix clk-generated compilation
    arp: fix arp_filter on l3slave devices
    ipv6: the entire IPv6 header chain must fit the first fragment
    net: fix possible out-of-bound read in skb_network_protocol()
    net/ipv6: Fix route leaking between VRFs
    net/ipv6: Increment OUTxxx counters after netfilter hook
    netlink: make sure nladdr has correct size in netlink_connect()
    net/sched: fix NULL dereference in the error path of tcf_bpf_init()
    pptp: remove a buggy dst release in pptp_connect()
    r8169: fix setting driver_data after register_netdev
    sctp: do not leak kernel memory to user space
    sctp: sctp_sockaddr_af must check minimal addr length for AF_INET6
    sky2: Increase D3 delay to sky2 stops working after suspend
    vhost: correctly remove wait queue during poll failure
    vlan: also check phy_driver ts_info for vlan's real device
    bonding: fix the err path for dev hwaddr sync in bond_enslave
    bonding: move dev_mc_sync after master_upper_dev_link in bond_enslave
    bonding: process the err returned by dev_set_allmulti properly in bond_enslave
    net: fool proof dev_valid_name()
    ip_tunnel: better validate user provided tunnel names
    ipv6: sit: better validate user provided tunnel names
    ip6_gre: better validate user provided tunnel names
    ip6_tunnel: better validate user provided tunnel names
    vti6: better validate user provided tunnel names
    net/mlx5e: Sync netdev vxlan ports at open
    net/sched: fix NULL dereference in the error path of tunnel_key_init()
    net/sched: fix NULL dereference on the error path of tcf_skbmod_init()
    net/mlx4_en: Fix mixed PFC and Global pause user control requests
    vhost: validate log when IOTLB is enabled
    route: check sysctl_fib_multipath_use_neigh earlier than hash
    team: move dev_mc_sync after master_upper_dev_link in team_port_add
    vhost_net: add missing lock nesting notation
    net/mlx4_core: Fix memory leak while delete slave's resources
    strparser: Fix sign of err codes
    net sched actions: fix dumping which requires several messages to user space
    vrf: Fix use after free and double free in vrf_finish_output
    Revert "xhci: plat: Register shutdown for xhci_plat"
    Linux 4.9.94

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -1,11 +1,14 @@
-* Amlogic Meson8b Clock and Reset Unit
+* Amlogic Meson8, Meson8b and Meson8m2 Clock and Reset Unit
 
-The Amlogic Meson8b clock controller generates and supplies clock to various
-controllers within the SoC.
+The Amlogic Meson8 / Meson8b / Meson8m2 clock controller generates and
+supplies clock to various controllers within the SoC.
 
 Required Properties:
 
-- compatible: should be "amlogic,meson8b-clkc"
+- compatible: must be one of:
+	- "amlogic,meson8-clkc" for Meson8 (S802) SoCs
+	- "amlogic,meson8b-clkc" for Meson8b (S805) SoCs
+	- "amlogic,meson8m2-clkc" for Meson8m2 (S812) SoCs
 - reg: it must be composed by two tuples:
 	0) physical base address of the xtal register and length of memory
 	   mapped region.
@@ -47,10 +47,13 @@ Required properties:
   Documentation/devicetree/bindings/media/video-interfaces.txt. The
   first port should be the input endpoint, the second one the output
 
-  The output should have two endpoints. The first is the block
-  connected to the TCON channel 0 (usually a panel or a bridge), the
-  second the block connected to the TCON channel 1 (usually the TV
-  encoder)
+  The output may have multiple endpoints. The TCON has two channels,
+  usually with the first channel being used for the panels interfaces
+  (RGB, LVDS, etc.), and the second being used for the outputs that
+  require another controller (TV Encoder, HDMI, etc.). The endpoints
+  will take an extra property, allwinner,tcon-channel, to specify the
+  channel the endpoint is associated to. If that property is not
+  present, the endpoint number will be used as the channel number.
 
 On SoCs other than the A33, there is one more clock required:
   - 'tcon-ch1': The clock driving the TCON channel 1
Makefile
@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 9
-SUBLEVEL = 93
+SUBLEVEL = 94
 EXTRAVERSION =
 NAME = Roaring Lionus
@@ -23,7 +23,7 @@
 	imx53-qsrb {
 		pinctrl_pmic: pmicgrp {
 			fsl,pins = <
-				MX53_PAD_CSI0_DAT5__GPIO5_23	0x1e4 /* IRQ */
+				MX53_PAD_CSI0_DAT5__GPIO5_23	0x1c4 /* IRQ */
 			>;
 		};
 	};
@@ -88,6 +88,7 @@
 		clocks = <&clks IMX6QDL_CLK_CKO>;
 		VDDA-supply = <&reg_2p5v>;
 		VDDIO-supply = <&reg_3p3v>;
+		lrclk-strength = <3>;
 	};
 };
@@ -146,7 +146,7 @@
 		};
 
 		esdhc: esdhc@1560000 {
-			compatible = "fsl,esdhc";
+			compatible = "fsl,ls1021a-esdhc", "fsl,esdhc";
 			reg = <0x0 0x1560000 0x0 0x10000>;
 			interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
 			clock-frequency = <0>;
@@ -154,10 +154,10 @@
 
 		i2c_0: i2c@78b7000 {
 			compatible = "qcom,i2c-qup-v2.2.1";
-			reg = <0x78b7000 0x6000>;
+			reg = <0x78b7000 0x600>;
 			interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&gcc GCC_BLSP1_AHB_CLK>,
-				 <&gcc GCC_BLSP1_QUP2_I2C_APPS_CLK>;
+				 <&gcc GCC_BLSP1_QUP1_I2C_APPS_CLK>;
 			clock-names = "iface", "core";
 			#address-cells = <1>;
 			#size-cells = <0>;
@@ -266,7 +266,9 @@
 	lcd0_pins: lcd0 {
 		groups = "lcd0_data24_0", "lcd0_lclk_1", "lcd0_sync";
 		function = "lcd0";
+	};
+
+	lcd0_mux {
 		/* DBGMD/LCDC0/FSIA MUX */
 		gpio-hog;
 		gpios = <176 0>;
@@ -617,9 +617,9 @@
 				<0 12 RK_FUNC_1 &pcfg_pull_none>,
 				<0 13 RK_FUNC_1 &pcfg_pull_none>,
 				<0 14 RK_FUNC_1 &pcfg_pull_none>,
-				<1 2 RK_FUNC_1 &pcfg_pull_none>,
-				<1 4 RK_FUNC_1 &pcfg_pull_none>,
-				<1 5 RK_FUNC_1 &pcfg_pull_none>;
+				<1 2 RK_FUNC_2 &pcfg_pull_none>,
+				<1 4 RK_FUNC_2 &pcfg_pull_none>,
+				<1 5 RK_FUNC_2 &pcfg_pull_none>;
 			};
 		};
@@ -16,7 +16,7 @@ static inline int xen_irqs_disabled(struct pt_regs *regs)
 	return raw_irqs_disabled_flags(regs->ARM_cpsr);
 }
 
-#define xchg_xen_ulong(ptr, val) atomic64_xchg(container_of((ptr),	\
+#define xchg_xen_ulong(ptr, val) atomic64_xchg(container_of((long long*)(ptr),\
 							    atomic64_t,	\
 							    counter), (val))
@@ -237,8 +237,10 @@ void __hyp_text __noreturn __hyp_panic(int cause)
 
 		vcpu = (struct kvm_vcpu *)read_sysreg(HTPIDR);
 		host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+		__timer_save_state(vcpu);
 		__deactivate_traps(vcpu);
 		__deactivate_vm(vcpu);
+		__banked_restore_state(host_ctxt);
 		__sysreg_restore_state(host_ctxt);
 	}
@@ -821,6 +821,8 @@ static struct platform_device da8xx_dsp = {
 	.resource	= da8xx_rproc_resources,
 };
 
+static bool rproc_mem_inited __initdata;
+
 #if IS_ENABLED(CONFIG_DA8XX_REMOTEPROC)
 
 static phys_addr_t rproc_base __initdata;
@@ -859,6 +861,8 @@ void __init da8xx_rproc_reserve_cma(void)
 	ret = dma_declare_contiguous(&da8xx_dsp.dev, rproc_size, rproc_base, 0);
 	if (ret)
 		pr_err("%s: dma_declare_contiguous failed %d\n", __func__, ret);
+	else
+		rproc_mem_inited = true;
 }
 
 #else
@@ -873,6 +877,12 @@ int __init da8xx_register_rproc(void)
 {
 	int ret;
 
+	if (!rproc_mem_inited) {
+		pr_warn("%s: memory not reserved for DSP, not registering DSP device\n",
+			__func__);
+		return -ENOMEM;
+	}
+
 	ret = platform_device_register(&da8xx_dsp);
 	if (ret)
 		pr_err("%s: can't register DSP device: %d\n", __func__, ret);
@@ -131,6 +131,9 @@ struct device * __init imx_soc_device_init(void)
 	case MXC_CPU_IMX6UL:
 		soc_id = "i.MX6UL";
 		break;
+	case MXC_CPU_IMX6ULL:
+		soc_id = "i.MX6ULL";
+		break;
 	case MXC_CPU_IMX7D:
 		soc_id = "i.MX7D";
 		break;
@@ -39,6 +39,7 @@
 #define MXC_CPU_IMX6SX		0x62
 #define MXC_CPU_IMX6Q		0x63
 #define MXC_CPU_IMX6UL		0x64
+#define MXC_CPU_IMX6ULL		0x65
 #define MXC_CPU_IMX7D		0x72
 
 #define IMX_DDR_TYPE_LPDDR2	1
@@ -73,6 +74,11 @@ static inline bool cpu_is_imx6ul(void)
 	return __mxc_cpu_type == MXC_CPU_IMX6UL;
 }
 
+static inline bool cpu_is_imx6ull(void)
+{
+	return __mxc_cpu_type == MXC_CPU_IMX6ULL;
+}
+
 static inline bool cpu_is_imx6q(void)
 {
 	return __mxc_cpu_type == MXC_CPU_IMX6Q;
@@ -48,16 +48,16 @@ do { \
 } while (0)
 
 static inline int
-futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
+futex_atomic_op_inuser(unsigned int encoded_op, u32 __user *uaddr)
 {
 	int op = (encoded_op >> 28) & 7;
 	int cmp = (encoded_op >> 24) & 15;
-	int oparg = (encoded_op << 8) >> 20;
-	int cmparg = (encoded_op << 20) >> 20;
+	int oparg = (int)(encoded_op << 8) >> 20;
+	int cmparg = (int)(encoded_op << 20) >> 20;
 	int oldval = 0, ret, tmp;
 
 	if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
-		oparg = 1 << oparg;
+		oparg = 1U << (oparg & 0x1f);
 
 	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
 		return -EFAULT;
@@ -175,8 +175,10 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
 		return NULL;
 
 	root_ops = kzalloc_node(sizeof(*root_ops), GFP_KERNEL, node);
-	if (!root_ops)
+	if (!root_ops) {
+		kfree(ri);
 		return NULL;
+	}
 
 	ri->cfg = pci_acpi_setup_ecam_mapping(root);
 	if (!ri->cfg) {
@@ -871,15 +871,24 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 
 	if (attr->exclude_idle)
 		return -EPERM;
-	if (is_kernel_in_hyp_mode() &&
-	    attr->exclude_kernel != attr->exclude_hv)
-		return -EINVAL;
+
+	/*
+	 * If we're running in hyp mode, then we *are* the hypervisor.
+	 * Therefore we ignore exclude_hv in this configuration, since
+	 * there's no hypervisor to sample anyway. This is consistent
+	 * with other architectures (x86 and Power).
+	 */
+	if (is_kernel_in_hyp_mode()) {
+		if (!attr->exclude_kernel)
+			config_base |= ARMV8_PMU_INCLUDE_EL2;
+	} else {
+		if (attr->exclude_kernel)
+			config_base |= ARMV8_PMU_EXCLUDE_EL1;
+		if (!attr->exclude_hv)
+			config_base |= ARMV8_PMU_INCLUDE_EL2;
+	}
 	if (attr->exclude_user)
 		config_base |= ARMV8_PMU_EXCLUDE_EL0;
-	if (!is_kernel_in_hyp_mode() && attr->exclude_kernel)
-		config_base |= ARMV8_PMU_EXCLUDE_EL1;
-	if (!attr->exclude_hv)
-		config_base |= ARMV8_PMU_INCLUDE_EL2;
 
 	/*
 	 * Install the filter into config_base as this is used to
@@ -404,6 +404,7 @@ void __hyp_text __noreturn __hyp_panic(void)
 
 		vcpu = (struct kvm_vcpu *)read_sysreg(tpidr_el2);
 		host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+		__timer_save_state(vcpu);
 		__deactivate_traps(vcpu);
 		__deactivate_vm(vcpu);
 		__sysreg_restore_host_state(host_ctxt);
@@ -18,6 +18,7 @@
 
 #include <linux/elf.h>
 #include <linux/fs.h>
+#include <linux/memblock.h>
 #include <linux/mm.h>
 #include <linux/mman.h>
 #include <linux/export.h>
@@ -102,12 +103,18 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
  */
 int valid_phys_addr_range(phys_addr_t addr, size_t size)
 {
-	if (addr < PHYS_OFFSET)
-		return 0;
-	if (addr + size > __pa(high_memory - 1) + 1)
-		return 0;
-
-	return 1;
+	/*
+	 * Check whether addr is covered by a memory region without the
+	 * MEMBLOCK_NOMAP attribute, and whether that region covers the
+	 * entire range. In theory, this could lead to false negatives
+	 * if the range is covered by distinct but adjacent memory regions
+	 * that only differ in other attributes. However, few of such
+	 * attributes have been defined, and it is debatable whether it
+	 * follows that /dev/mem read() calls should be able traverse
+	 * such boundaries.
+	 */
+	return memblock_is_region_memory(addr, size) &&
+	       memblock_is_map_memory(addr);
 }
 
 /*
@@ -40,7 +40,8 @@ typedef union mips_instruction kprobe_opcode_t;
 
 #define flush_insn_slot(p)						\
 do {									\
-	flush_icache_range((unsigned long)p->addr,			\
+	if (p->addr)							\
+		flush_icache_range((unsigned long)p->addr,		\
 			   (unsigned long)p->addr +			\
 			   (MAX_INSN_SIZE * sizeof(kprobe_opcode_t)));	\
 } while (0)
@@ -18,6 +18,10 @@
 
 #include <asm-generic/pgtable-nopmd.h>
 
+#ifdef CONFIG_HIGHMEM
+#include <asm/highmem.h>
+#endif
+
 extern int temp_tlb_entry;
 
 /*
@@ -61,7 +65,8 @@ extern int add_temporary_entry(unsigned long entrylo0, unsigned long entrylo1,
 
 #define VMALLOC_START	  MAP_BASE
 
-#define PKMAP_BASE		(0xfe000000UL)
+#define PKMAP_END	((FIXADDR_START) & ~((LAST_PKMAP << PAGE_SHIFT)-1))
+#define PKMAP_BASE	(PKMAP_END - PAGE_SIZE * LAST_PKMAP)
 
 #ifdef CONFIG_HIGHMEM
 # define VMALLOC_END	(PKMAP_BASE-2*PAGE_SIZE)
@@ -51,15 +51,15 @@ void __init pagetable_init(void)
 	/*
 	 * Fixed mappings:
 	 */
-	vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
-	fixrange_init(vaddr, vaddr + FIXADDR_SIZE, pgd_base);
+	vaddr = __fix_to_virt(__end_of_fixed_addresses - 1);
+	fixrange_init(vaddr & PMD_MASK, vaddr + FIXADDR_SIZE, pgd_base);
 
 #ifdef CONFIG_HIGHMEM
 	/*
 	 * Permanent kmaps:
 	 */
 	vaddr = PKMAP_BASE;
-	fixrange_init(vaddr, vaddr + PAGE_SIZE*LAST_PKMAP, pgd_base);
+	fixrange_init(vaddr & PMD_MASK, vaddr + PAGE_SIZE*LAST_PKMAP, pgd_base);
 
 	pgd = swapper_pg_dir + __pgd_offset(vaddr);
 	pud = pud_offset(pgd, vaddr);
@@ -14,6 +14,10 @@
 
 #include <asm-generic/module.h>
 
+#ifdef CC_USING_MPROFILE_KERNEL
+#define MODULE_ARCH_VERMAGIC	"mprofile-kernel"
+#endif
+
 #ifndef __powerpc64__
 /*
  * Thanks to Paul M for explaining this.
@@ -132,7 +132,19 @@ extern long long virt_phys_offset;
 #define virt_to_pfn(kaddr)	(__pa(kaddr) >> PAGE_SHIFT)
 #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
-#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
+
+#ifdef CONFIG_PPC_BOOK3S_64
+/*
+ * On hash the vmalloc and other regions alias to the kernel region when passed
+ * through __pa(), which virt_to_pfn() uses. That means virt_addr_valid() can
+ * return true for some vmalloc addresses, which is incorrect. So explicitly
+ * check that the address is in the kernel region.
+ */
+#define virt_addr_valid(kaddr)	(REGION_ID(kaddr) == KERNEL_REGION_ID && \
+				 pfn_valid(virt_to_pfn(kaddr)))
+#else
+#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
+#endif
 
 /*
  * On Book-E parts we need __va to parse the device tree and we can't
@@ -719,12 +719,20 @@ static int __init get_freq(char *name, int cells, unsigned long *val)
 static void start_cpu_decrementer(void)
 {
 #if defined(CONFIG_BOOKE) || defined(CONFIG_40x)
+	unsigned int tcr;
+
 	/* Clear any pending timer interrupts */
 	mtspr(SPRN_TSR, TSR_ENW | TSR_WIS | TSR_DIS | TSR_FIS);
 
-	/* Enable decrementer interrupt */
-	mtspr(SPRN_TCR, TCR_DIE);
-#endif /* defined(CONFIG_BOOKE) || defined(CONFIG_40x) */
+	tcr = mfspr(SPRN_TCR);
+	/*
+	 * The watchdog may have already been enabled by u-boot. So leave
+	 * TRC[WP] (Watchdog Period) alone.
+	 */
+	tcr &= TCR_WP_MASK;	/* Clear all bits except for TCR[WP] */
+	tcr |= TCR_DIE;		/* Enable decrementer */
+	mtspr(SPRN_TCR, tcr);
+#endif
 }
 
 void __init generic_calibrate_decr(void)
@@ -50,7 +50,9 @@ static int kvmppc_h_pr_enter(struct kvm_vcpu *vcpu)
 	pteg_addr = get_pteg_addr(vcpu, pte_index);
 
 	mutex_lock(&vcpu->kvm->arch.hpt_mutex);
-	copy_from_user(pteg, (void __user *)pteg_addr, sizeof(pteg));
+	ret = H_FUNCTION;
+	if (copy_from_user(pteg, (void __user *)pteg_addr, sizeof(pteg)))
+		goto done;
 	hpte = pteg;
 
 	ret = H_PTEG_FULL;
@@ -71,7 +73,9 @@ static int kvmppc_h_pr_enter(struct kvm_vcpu *vcpu)
 	hpte[0] = cpu_to_be64(kvmppc_get_gpr(vcpu, 6));
 	hpte[1] = cpu_to_be64(kvmppc_get_gpr(vcpu, 7));
 	pteg_addr += i * HPTE_SIZE;
-	copy_to_user((void __user *)pteg_addr, hpte, HPTE_SIZE);
+	ret = H_FUNCTION;
+	if (copy_to_user((void __user *)pteg_addr, hpte, HPTE_SIZE))
+		goto done;
 	kvmppc_set_gpr(vcpu, 4, pte_index | i);
 	ret = H_SUCCESS;
@@ -93,7 +97,9 @@ static int kvmppc_h_pr_remove(struct kvm_vcpu *vcpu)
 
 	pteg = get_pteg_addr(vcpu, pte_index);
 	mutex_lock(&vcpu->kvm->arch.hpt_mutex);
-	copy_from_user(pte, (void __user *)pteg, sizeof(pte));
+	ret = H_FUNCTION;
+	if (copy_from_user(pte, (void __user *)pteg, sizeof(pte)))
+		goto done;
 	pte[0] = be64_to_cpu((__force __be64)pte[0]);
 	pte[1] = be64_to_cpu((__force __be64)pte[1]);
@@ -103,7 +109,9 @@ static int kvmppc_h_pr_remove(struct kvm_vcpu *vcpu)
 	    ((flags & H_ANDCOND) && (pte[0] & avpn) != 0))
 		goto done;
 
-	copy_to_user((void __user *)pteg, &v, sizeof(v));
+	ret = H_FUNCTION;
+	if (copy_to_user((void __user *)pteg, &v, sizeof(v)))
+		goto done;
 
 	rb = compute_tlbie_rb(pte[0], pte[1], pte_index);
 	vcpu->arch.mmu.tlbie(vcpu, rb, rb & 1 ? true : false);
@@ -171,7 +179,10 @@ static int kvmppc_h_pr_bulk_remove(struct kvm_vcpu *vcpu)
 		}
 
 		pteg = get_pteg_addr(vcpu, tsh & H_BULK_REMOVE_PTEX);
-		copy_from_user(pte, (void __user *)pteg, sizeof(pte));
+		if (copy_from_user(pte, (void __user *)pteg, sizeof(pte))) {
+			ret = H_FUNCTION;
+			break;
+		}
 		pte[0] = be64_to_cpu((__force __be64)pte[0]);
 		pte[1] = be64_to_cpu((__force __be64)pte[1]);
@@ -184,7 +195,10 @@ static int kvmppc_h_pr_bulk_remove(struct kvm_vcpu *vcpu)
 			tsh |= H_BULK_REMOVE_NOT_FOUND;
 		} else {
 			/* Splat the pteg in (userland) hpt */
-			copy_to_user((void __user *)pteg, &v, sizeof(v));
+			if (copy_to_user((void __user *)pteg, &v, sizeof(v))) {
+				ret = H_FUNCTION;
+				break;
+			}
 
 			rb = compute_tlbie_rb(pte[0], pte[1],
 					      tsh & H_BULK_REMOVE_PTEX);
@@ -211,7 +225,9 @@ static int kvmppc_h_pr_protect(struct kvm_vcpu *vcpu)
 
 	pteg = get_pteg_addr(vcpu, pte_index);
 	mutex_lock(&vcpu->kvm->arch.hpt_mutex);
-	copy_from_user(pte, (void __user *)pteg, sizeof(pte));
+	ret = H_FUNCTION;
+	if (copy_from_user(pte, (void __user *)pteg, sizeof(pte)))
+		goto done;
 	pte[0] = be64_to_cpu((__force __be64)pte[0]);
 	pte[1] = be64_to_cpu((__force __be64)pte[1]);
@@ -234,7 +250,9 @@ static int kvmppc_h_pr_protect(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu.tlbie(vcpu, rb, rb & 1 ? true : false);
 	pte[0] = (__force u64)cpu_to_be64(pte[0]);
 	pte[1] = (__force u64)cpu_to_be64(pte[1]);
-	copy_to_user((void __user *)pteg, pte, sizeof(pte));
+	ret = H_FUNCTION;
+	if (copy_to_user((void __user *)pteg, pte, sizeof(pte)))
+		goto done;
 	ret = H_SUCCESS;
 
 done:
@@ -175,6 +175,8 @@ static int spufs_arch_write_note(struct spu_context *ctx, int i,
 	skip = roundup(cprm->pos - total + sz, 4) - cprm->pos;
 	if (!dump_skip(cprm, skip))
 		goto Eio;
+
+	rc = 0;
 out:
 	free_page((unsigned long)buf);
 	return rc;
@@ -79,7 +79,7 @@ unsigned int mpc8xx_get_irq(void)
 	irq = in_be32(&siu_reg->sc_sivec) >> 26;
 
 	if (irq == PIC_VEC_SPURRIOUS)
-		irq = 0;
+		return 0;
 
 	return irq_linear_revmap(mpc8xx_pic_host, irq);
 
@@ -31,8 +31,14 @@ SECTIONS
 {
 	. = 0x00000000;
 	.text : {
-	_text = .;		/* Text and read-only data */
+		/* Text and read-only data */
 		HEAD_TEXT
+		/*
+		 * E.g. perf doesn't like symbols starting at address zero,
+		 * therefore skip the initial PSW and channel program located
+		 * at address zero and let _text start at 0x200.
+		 */
+		_text = 0x200;
 		TEXT_TEXT
 		SCHED_TEXT
 		CPUIDLE_TEXT
@@ -1733,9 +1733,14 @@ static int read_nonraw(struct ldc_channel *lp, void *buf, unsigned int size)
 
 		lp->rcv_nxt = p->seqid;
 
+		/*
+		 * If this is a control-only packet, there is nothing
+		 * else to do but advance the rx queue since the packet
+		 * was already processed above.
+		 */
 		if (!(p->type & LDC_DATA)) {
 			new = rx_advance(lp, new);
-			goto no_data;
+			break;
 		}
 		if (p->stype & (LDC_ACK | LDC_NACK)) {
 			err = data_ack_nack(lp, p);
@@ -1,7 +1,9 @@
 #ifndef BOOT_COMPRESSED_ERROR_H
 #define BOOT_COMPRESSED_ERROR_H
 
+#include <linux/compiler.h>
+
 void warn(char *m);
-void error(char *m);
+void error(char *m) __noreturn;
 
 #endif /* BOOT_COMPRESSED_ERROR_H */
@@ -366,6 +366,8 @@ static int __init tsc_setup(char *str)
 		tsc_clocksource_reliable = 1;
 	if (!strncmp(str, "noirqtime", 9))
 		no_sched_irq_time = 1;
+	if (!strcmp(str, "unstable"))
+		mark_tsc_unstable("boot parameter");
 	return 1;
 }
@@ -1363,8 +1363,10 @@ EXPORT_SYMBOL_GPL(kvm_lapic_hv_timer_in_use);
 
 static void cancel_hv_tscdeadline(struct kvm_lapic *apic)
 {
+	preempt_disable();
 	kvm_x86_ops->cancel_hv_timer(apic->vcpu);
 	apic->lapic_timer.hv_timer_in_use = false;
+	preempt_enable();
 }
 
 void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
@@ -1879,6 +1879,7 @@ static void svm_get_segment(struct kvm_vcpu *vcpu,
 		 */
 		if (var->unusable)
 			var->db = 0;
+		/* This is symmetric with svm_set_segment() */
 		var->dpl = to_svm(vcpu)->vmcb->save.cpl;
 		break;
 	}
@@ -2024,18 +2025,14 @@ static void svm_set_segment(struct kvm_vcpu *vcpu,
 	s->base = var->base;
 	s->limit = var->limit;
 	s->selector = var->selector;
-	if (var->unusable)
-		s->attrib = 0;
-	else {
-		s->attrib = (var->type & SVM_SELECTOR_TYPE_MASK);
-		s->attrib |= (var->s & 1) << SVM_SELECTOR_S_SHIFT;
-		s->attrib |= (var->dpl & 3) << SVM_SELECTOR_DPL_SHIFT;
-		s->attrib |= (var->present & 1) << SVM_SELECTOR_P_SHIFT;
-		s->attrib |= (var->avl & 1) << SVM_SELECTOR_AVL_SHIFT;
-		s->attrib |= (var->l & 1) << SVM_SELECTOR_L_SHIFT;
-		s->attrib |= (var->db & 1) << SVM_SELECTOR_DB_SHIFT;
-		s->attrib |= (var->g & 1) << SVM_SELECTOR_G_SHIFT;
-	}
+	s->attrib = (var->type & SVM_SELECTOR_TYPE_MASK);
+	s->attrib |= (var->s & 1) << SVM_SELECTOR_S_SHIFT;
+	s->attrib |= (var->dpl & 3) << SVM_SELECTOR_DPL_SHIFT;
+	s->attrib |= ((var->present & 1) && !var->unusable) << SVM_SELECTOR_P_SHIFT;
+	s->attrib |= (var->avl & 1) << SVM_SELECTOR_AVL_SHIFT;
+	s->attrib |= (var->l & 1) << SVM_SELECTOR_L_SHIFT;
+	s->attrib |= (var->db & 1) << SVM_SELECTOR_DB_SHIFT;
+	s->attrib |= (var->g & 1) << SVM_SELECTOR_G_SHIFT;
 
 	/*
 	 * This is always accurate, except if SYSRET returned to a segment
@@ -2044,7 +2041,8 @@ static void svm_set_segment(struct kvm_vcpu *vcpu,
 	 * would entail passing the CPL to userspace and back.
 	 */
 	if (seg == VCPU_SREG_SS)
-		svm->vmcb->save.cpl = (s->attrib >> SVM_SELECTOR_DPL_SHIFT) & 3;
+		/* This is symmetric with svm_get_segment() */
+		svm->vmcb->save.cpl = (var->dpl & 3);
 
 	mark_dirty(svm->vmcb, VMCB_SEG);
 }
@@ -7924,11 +7924,13 @@ static bool nested_vmx_exit_handled_cr(struct kvm_vcpu *vcpu,
 {
 	unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
 	int cr = exit_qualification & 15;
-	int reg = (exit_qualification >> 8) & 15;
-	unsigned long val = kvm_register_readl(vcpu, reg);
+	int reg;
+	unsigned long val;
 
 	switch ((exit_qualification >> 4) & 3) {
 	case 0: /* mov to cr */
+		reg = (exit_qualification >> 8) & 15;
+		val = kvm_register_readl(vcpu, reg);
 		switch (cr) {
 		case 0:
 			if (vmcs12->cr0_guest_host_mask &
@@ -7983,6 +7985,7 @@ static bool nested_vmx_exit_handled_cr(struct kvm_vcpu *vcpu,
 		 * lmsw can change bits 1..3 of cr0, and only set bit 0 of
 		 * cr0. Other attempted changes are ignored, with no exit.
 		 */
+		val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
 		if (vmcs12->cr0_guest_host_mask & 0xe &
 		    (val ^ vmcs12->cr0_read_shadow))
 			return true;
@@ -10660,8 +10663,7 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		vmcs12->guest_pdptr3 = vmcs_read64(GUEST_PDPTR3);
 	}
 
-	if (nested_cpu_has_ept(vmcs12))
-		vmcs12->guest_linear_address = vmcs_readl(GUEST_LINEAR_ADDRESS);
+	vmcs12->guest_linear_address = vmcs_readl(GUEST_LINEAR_ADDRESS);
 
 	if (nested_cpu_has_vid(vmcs12))
 		vmcs12->guest_intr_status = vmcs_read16(GUEST_INTR_STATUS);
@@ -55,7 +55,7 @@ ENTRY(csum_partial_copy_generic)
 	movq  %r12, 3*8(%rsp)
 	movq  %r14, 4*8(%rsp)
 	movq  %r13, 5*8(%rsp)
-	movq  %rbp, 6*8(%rsp)
+	movq  %r15, 6*8(%rsp)
 
 	movq  %r8, (%rsp)
 	movq  %r9, 1*8(%rsp)
@@ -74,7 +74,7 @@ ENTRY(csum_partial_copy_generic)
 	/* main loop. clear in 64 byte blocks */
 	/* r9: zero, r8: temp2, rbx: temp1, rax: sum, rcx: saved length */
 	/* r11:	temp3, rdx: temp4, r12 loopcnt */
-	/* r10:	temp5, rbp: temp6, r14 temp7, r13 temp8 */
+	/* r10:	temp5, r15: temp6, r14 temp7, r13 temp8 */
 	.p2align 4
 .Lloop:
 	source
@@ -89,7 +89,7 @@ ENTRY(csum_partial_copy_generic)
 	source
 	movq  32(%rdi), %r10
 	source
-	movq  40(%rdi), %rbp
+	movq  40(%rdi), %r15
 	source
 	movq  48(%rdi), %r14
 	source
@@ -103,7 +103,7 @@ ENTRY(csum_partial_copy_generic)
 	adcq  %r11, %rax
 	adcq  %rdx, %rax
 	adcq  %r10, %rax
-	adcq  %rbp, %rax
+	adcq  %r15, %rax
 	adcq  %r14, %rax
 	adcq  %r13, %rax
 
@@ -121,7 +121,7 @@ ENTRY(csum_partial_copy_generic)
 	dest
 	movq %r10, 32(%rsi)
 	dest
-	movq %rbp, 40(%rsi)
+	movq %r15, 40(%rsi)
 	dest
 	movq %r14, 48(%rsi)
 	dest
@@ -203,7 +203,7 @@ ENTRY(csum_partial_copy_generic)
 	movq 3*8(%rsp), %r12
 	movq 4*8(%rsp), %r14
 	movq 5*8(%rsp), %r13
-	movq 6*8(%rsp), %rbp
+	movq 6*8(%rsp), %r15
 	addq $7*8, %rsp
 	ret
@@ -832,9 +832,11 @@ static void __init kexec_enter_virtual_mode(void)
 
 	/*
 	 * We don't do virtual mode, since we don't do runtime services, on
-	 * non-native EFI
+	 * non-native EFI. With efi=old_map, we don't do runtime services in
+	 * kexec kernel because in the initial boot something else might
+	 * have been mapped at these virtual addresses.
 	 */
-	if (!efi_is_native()) {
+	if (!efi_is_native() || efi_enabled(EFI_OLD_MEMMAP)) {
 		efi_memmap_unmap();
 		clear_bit(EFI_RUNTIME_SERVICES, &efi.flags);
 		return;
@@ -175,6 +175,9 @@ bool bio_integrity_enabled(struct bio *bio)
 	if (!bio_is_rw(bio))
 		return false;
 
+	if (!bio_sectors(bio))
+		return false;
+
 	/* Already protected? */
 	if (bio_integrity(bio))
 		return false;
@@ -1265,13 +1265,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 
 	blk_queue_bounce(q, &bio);
 
+	blk_queue_split(q, &bio, q->bio_split);
+
 	if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
 		bio_io_error(bio);
 		return BLK_QC_T_NONE;
 	}
 
-	blk_queue_split(q, &bio, q->bio_split);
-
 	if (!is_flush_fua && !blk_queue_nomerges(q) &&
 	    blk_attempt_plug_merge(q, bio, &request_count, &same_queue_rq))
 		return BLK_QC_T_NONE;
@@ -1592,7 +1592,8 @@ static void blk_mq_exit_hctx(struct request_queue *q,
 {
 	unsigned flush_start_tag = set->queue_depth;
 
-	blk_mq_tag_idle(hctx);
+	if (blk_mq_hw_queue_mapped(hctx))
+		blk_mq_tag_idle(hctx);
 
 	if (set->ops->exit_request)
 		set->ops->exit_request(set->driver_data,
@@ -1907,6 +1908,9 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
 	struct blk_mq_hw_ctx **hctxs = q->queue_hw_ctx;
 
 	blk_mq_sysfs_unregister(q);
+
+	/* protect against switching io scheduler */
+	mutex_lock(&q->sysfs_lock);
 	for (i = 0; i < set->nr_hw_queues; i++) {
 		int node;
@@ -1956,6 +1960,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
 		}
 	}
 	q->nr_hw_queues = i;
+	mutex_unlock(&q->sysfs_lock);
 	blk_mq_sysfs_register(q);
 }
@@ -321,8 +321,10 @@ struct hd_struct *add_partition(struct gendisk *disk, int partno,
 
 	if (info) {
 		struct partition_meta_info *pinfo = alloc_part_info(disk);
-		if (!pinfo)
+		if (!pinfo) {
+			err = -ENOMEM;
 			goto out_free_stats;
+		}
 		memcpy(pinfo, info, sizeof(*info));
 		p->info = pinfo;
 	}
@@ -102,6 +102,7 @@ struct x509_certificate *x509_cert_parse(const void *data, size_t datalen)
 		}
 	}
 
+	ret = -ENOMEM;
 	cert->pub->key = kmemdup(ctx->key, ctx->key_size, GFP_KERNEL);
 	if (!cert->pub->key)
 		goto error_decode;
@@ -62,9 +62,6 @@ do_async_gen_syndrome(struct dma_chan *chan,
 	dma_addr_t dma_dest[2];
 	int src_off = 0;
 
-	if (submit->flags & ASYNC_TX_FENCE)
-		dma_flags |= DMA_PREP_FENCE;
-
 	while (src_cnt > 0) {
 		submit->flags = flags_orig;
 		pq_src_cnt = min(src_cnt, dma_maxpq(dma, dma_flags));
@@ -83,6 +80,8 @@ do_async_gen_syndrome(struct dma_chan *chan,
 			if (cb_fn_orig)
 				dma_flags |= DMA_PREP_INTERRUPT;
 		}
+		if (submit->flags & ASYNC_TX_FENCE)
+			dma_flags |= DMA_PREP_FENCE;
 
 		/* Drivers force forward progress in case they can not provide
 		 * a descriptor
@@ -87,8 +87,8 @@ MODULE_PARM_DESC(report_key_events,
 static bool device_id_scheme = false;
 module_param(device_id_scheme, bool, 0444);
 
-static bool only_lcd = false;
-module_param(only_lcd, bool, 0444);
+static int only_lcd = -1;
+module_param(only_lcd, int, 0444);
 
 static int register_count;
 static DEFINE_MUTEX(register_count_mutex);
@@ -2082,6 +2082,16 @@ int acpi_video_register(void)
 		goto leave;
 	}
 
+	/*
+	 * We're seeing a lot of bogus backlight interfaces on newer machines
+	 * without a LCD such as desktops, servers and HDMI sticks. Checking
+	 * the lcd flag fixes this, so enable this on any machines which are
+	 * win8 ready (where we also prefer the native backlight driver, so
+	 * normally the acpi_video code should not register there anyways).
+	 */
+	if (only_lcd == -1)
+		only_lcd = acpi_osi_is_win8();
+
 	dmi_check_system(video_dmi_table);
 
 	ret = acpi_bus_register_driver(&acpi_video_bus);
@@ -180,6 +180,12 @@ acpi_status acpi_enable_event(u32 event, u32 flags)
 
 	ACPI_FUNCTION_TRACE(acpi_enable_event);
 
+	/* If Hardware Reduced flag is set, there are no fixed events */
+
+	if (acpi_gbl_reduced_hardware) {
+		return_ACPI_STATUS(AE_OK);
+	}
+
 	/* Decode the Fixed Event */
 
 	if (event > ACPI_EVENT_MAX) {
@@ -237,6 +243,12 @@ acpi_status acpi_disable_event(u32 event, u32 flags)
 
 	ACPI_FUNCTION_TRACE(acpi_disable_event);
 
+	/* If Hardware Reduced flag is set, there are no fixed events */
+
+	if (acpi_gbl_reduced_hardware) {
+		return_ACPI_STATUS(AE_OK);
+	}
+
 	/* Decode the Fixed Event */
 
 	if (event > ACPI_EVENT_MAX) {
@@ -290,6 +302,12 @@ acpi_status acpi_clear_event(u32 event)
 
 	ACPI_FUNCTION_TRACE(acpi_clear_event);
 
+	/* If Hardware Reduced flag is set, there are no fixed events */
+
+	if (acpi_gbl_reduced_hardware) {
+		return_ACPI_STATUS(AE_OK);
+	}
+
 	/* Decode the Fixed Event */
 
 	if (event > ACPI_EVENT_MAX) {
@@ -121,6 +121,9 @@ static acpi_status acpi_ps_get_aml_opcode(struct acpi_walk_state *walk_state)
 			      (u32)(aml_offset +
 				    sizeof(struct acpi_table_header)));
 
+		ACPI_ERROR((AE_INFO,
+			    "Aborting disassembly, AML byte code is corrupt"));
+
 		/* Dump the context surrounding the invalid opcode */
 
 		acpi_ut_dump_buffer(((u8 *)walk_state->parser_state.
@@ -129,6 +132,14 @@ static acpi_status acpi_ps_get_aml_opcode(struct acpi_walk_state *walk_state)
 				     sizeof(struct acpi_table_header) -
 				     16));
 		acpi_os_printf(" */\n");
+
+		/*
+		 * Just abort the disassembly, cannot continue because the
+		 * parser is essentially lost. The disassembler can then
+		 * randomly fail because an ill-constructed parse tree
+		 * can result.
+		 */
+		return_ACPI_STATUS(AE_AML_BAD_OPCODE);
 #endif
 	}
@@ -293,6 +304,9 @@ acpi_ps_create_op(struct acpi_walk_state *walk_state,
 	if (status == AE_CTRL_PARSE_CONTINUE) {
 		return_ACPI_STATUS(AE_CTRL_PARSE_CONTINUE);
 	}
+	if (ACPI_FAILURE(status)) {
+		return_ACPI_STATUS(status);
+	}
 
 	/* Create Op structure and append to parent's argument list */
@@ -1518,7 +1518,7 @@ static int acpi_ec_setup(struct acpi_ec *ec, bool handle_events)
 	}
 
 	acpi_handle_info(ec->handle,
-			 "GPE=0x%lx, EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n",
+			 "GPE=0x%x, EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n",
 			 ec->gpe, ec->command_addr, ec->data_addr);
 	return ret;
 }
@@ -128,7 +128,7 @@ static int acpi_ec_add_debugfs(struct acpi_ec *ec, unsigned int ec_device_count)
 		return -ENOMEM;
 	}
 
-	if (!debugfs_create_x32("gpe", 0444, dev_dir, (u32 *)&first_ec->gpe))
+	if (!debugfs_create_x32("gpe", 0444, dev_dir, &first_ec->gpe))
 		goto error;
 	if (!debugfs_create_bool("use_global_lock", 0444, dev_dir,
 				 &first_ec->global_lock))
@@ -158,7 +158,7 @@ static inline void acpi_early_processor_osc(void) {}
    -------------------------------------------------------------------------- */
 struct acpi_ec {
 	acpi_handle handle;
-	unsigned long gpe;
+	u32 gpe;
 	unsigned long command_addr;
 	unsigned long data_addr;
 	bool global_lock;
@@ -514,8 +514,9 @@ int ahci_platform_init_host(struct platform_device *pdev,
 
 	irq = platform_get_irq(pdev, 0);
 	if (irq <= 0) {
-		dev_err(dev, "no irq\n");
-		return -EINVAL;
+		if (irq != -EPROBE_DEFER)
+			dev_err(dev, "no irq\n");
+		return irq;
 	}
 
 	hpriv->irq = irq;
@@ -612,6 +612,9 @@ static int loop_switch(struct loop_device *lo, struct file *file)
  */
 static int loop_flush(struct loop_device *lo)
 {
+	/* loop not yet configured, no running thread, nothing to flush */
+	if (lo->lo_state != Lo_bound)
+		return 0;
 	return loop_switch(lo, NULL);
 }
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Broadcom Corporation
+ * Copyright (C) 2014-2017 Broadcom
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
@@ -37,8 +37,6 @@
 #define  ARB_ERR_CAP_CLEAR		(1 << 0)
 #define  ARB_ERR_CAP_STATUS_TIMEOUT	(1 << 12)
 #define  ARB_ERR_CAP_STATUS_TEA	(1 << 11)
-#define  ARB_ERR_CAP_STATUS_BS_SHIFT	(1 << 2)
-#define  ARB_ERR_CAP_STATUS_BS_MASK	0x3c
 #define  ARB_ERR_CAP_STATUS_WRITE	(1 << 1)
 #define  ARB_ERR_CAP_STATUS_VALID	(1 << 0)
@@ -47,7 +45,6 @@ enum {
 	ARB_ERR_CAP_CLR,
 	ARB_ERR_CAP_HI_ADDR,
 	ARB_ERR_CAP_ADDR,
-	ARB_ERR_CAP_DATA,
 	ARB_ERR_CAP_STATUS,
 	ARB_ERR_CAP_MASTER,
 };
@@ -57,7 +54,6 @@ static const int gisb_offsets_bcm7038[] = {
 	[ARB_ERR_CAP_CLR]	= 0x0c4,
 	[ARB_ERR_CAP_HI_ADDR]	= -1,
 	[ARB_ERR_CAP_ADDR]	= 0x0c8,
-	[ARB_ERR_CAP_DATA]	= 0x0cc,
 	[ARB_ERR_CAP_STATUS]	= 0x0d0,
 	[ARB_ERR_CAP_MASTER]	= -1,
 };
@@ -67,7 +63,6 @@ static const int gisb_offsets_bcm7400[] = {
 	[ARB_ERR_CAP_CLR]	= 0x0c8,
 	[ARB_ERR_CAP_HI_ADDR]	= -1,
 	[ARB_ERR_CAP_ADDR]	= 0x0cc,
-	[ARB_ERR_CAP_DATA]	= 0x0d0,
 	[ARB_ERR_CAP_STATUS]	= 0x0d4,
 	[ARB_ERR_CAP_MASTER]	= 0x0d8,
 };
@@ -77,7 +72,6 @@ static const int gisb_offsets_bcm7435[] = {
 	[ARB_ERR_CAP_CLR]	= 0x168,
 	[ARB_ERR_CAP_HI_ADDR]	= -1,
 	[ARB_ERR_CAP_ADDR]	= 0x16c,
-	[ARB_ERR_CAP_DATA]	= 0x170,
 	[ARB_ERR_CAP_STATUS]	= 0x174,
 	[ARB_ERR_CAP_MASTER]	= 0x178,
 };
@@ -87,7 +81,6 @@ static const int gisb_offsets_bcm7445[] = {
 	[ARB_ERR_CAP_CLR]	= 0x7e4,
 	[ARB_ERR_CAP_HI_ADDR]	= 0x7e8,
 	[ARB_ERR_CAP_ADDR]	= 0x7ec,
-	[ARB_ERR_CAP_DATA]	= 0x7f0,
 	[ARB_ERR_CAP_STATUS]	= 0x7f4,
 	[ARB_ERR_CAP_MASTER]	= 0x7f8,
 };
@@ -109,9 +102,13 @@ static u32 gisb_read(struct brcmstb_gisb_arb_device *gdev, int reg)
 {
 	int offset = gdev->gisb_offsets[reg];
 
-	/* return 1 if the hardware doesn't have ARB_ERR_CAP_MASTER */
-	if (offset == -1)
-		return 1;
+	if (offset < 0) {
+		/* return 1 if the hardware doesn't have ARB_ERR_CAP_MASTER */
+		if (reg == ARB_ERR_CAP_MASTER)
+			return 1;
+		else
+			return 0;
+	}
 
 	if (gdev->big_endian)
 		return ioread32be(gdev->base + offset);
@@ -119,6 +116,16 @@ static u32 gisb_read(struct brcmstb_gisb_arb_device *gdev, int reg)
 		return ioread32(gdev->base + offset);
 }
 
+static u64 gisb_read_address(struct brcmstb_gisb_arb_device *gdev)
+{
+	u64 value;
+
+	value = gisb_read(gdev, ARB_ERR_CAP_ADDR);
+	value |= (u64)gisb_read(gdev, ARB_ERR_CAP_HI_ADDR) << 32;
+
+	return value;
+}
+
 static void gisb_write(struct brcmstb_gisb_arb_device *gdev, u32 val, int reg)
 {
 	int offset = gdev->gisb_offsets[reg];
@@ -127,9 +134,9 @@ static void gisb_write(struct brcmstb_gisb_arb_device *gdev, u32 val, int reg)
 		return;
 
 	if (gdev->big_endian)
-		iowrite32be(val, gdev->base + reg);
+		iowrite32be(val, gdev->base + offset);
 	else
-		iowrite32(val, gdev->base + reg);
+		iowrite32(val, gdev->base + offset);
 }
 
 static ssize_t gisb_arb_get_timeout(struct device *dev,
@@ -185,7 +192,7 @@ static int brcmstb_gisb_arb_decode_addr(struct brcmstb_gisb_arb_device *gdev,
 					const char *reason)
 {
 	u32 cap_status;
-	unsigned long arb_addr;
+	u64 arb_addr;
 	u32 master;
 	const char *m_name;
 	char m_fmt[11];
@@ -197,10 +204,7 @@ static int brcmstb_gisb_arb_decode_addr(struct brcmstb_gisb_arb_device *gdev,
 		return 1;
 
 	/* Read the address and master */
-	arb_addr = gisb_read(gdev, ARB_ERR_CAP_ADDR) & 0xffffffff;
-#if (IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT))
-	arb_addr |= (u64)gisb_read(gdev, ARB_ERR_CAP_HI_ADDR) << 32;
-#endif
+	arb_addr = gisb_read_address(gdev);
 	master = gisb_read(gdev, ARB_ERR_CAP_MASTER);
 
 	m_name = brcmstb_gisb_master_to_str(gdev, master);
@@ -209,7 +213,7 @@ static int brcmstb_gisb_arb_decode_addr(struct brcmstb_gisb_arb_device *gdev,
 		m_name = m_fmt;
 	}
 
-	pr_crit("%s: %s at 0x%lx [%c %s], core: %s\n",
+	pr_crit("%s: %s at 0x%llx [%c %s], core: %s\n",
 		__func__, reason, arb_addr,
 		cap_status & ARB_ERR_CAP_STATUS_WRITE ? 'W' : 'R',
 		cap_status & ARB_ERR_CAP_STATUS_TIMEOUT ? "timeout" : "",
@@ -409,6 +409,7 @@ static void start_event_fetch(struct ssif_info *ssif_info, unsigned long *flags)
 	msg = ipmi_alloc_smi_msg();
 	if (!msg) {
 		ssif_info->ssif_state = SSIF_NORMAL;
+		ipmi_ssif_unlock_cond(ssif_info, flags);
 		return;
 	}
@@ -431,6 +432,7 @@ static void start_recv_msg_fetch(struct ssif_info *ssif_info,
 	msg = ipmi_alloc_smi_msg();
 	if (!msg) {
 		ssif_info->ssif_state = SSIF_NORMAL;
+		ipmi_ssif_unlock_cond(ssif_info, flags);
 		return;
 	}
@@ -1115,12 +1115,16 @@ static void add_interrupt_bench(cycles_t start)
 static __u32 get_reg(struct fast_pool *f, struct pt_regs *regs)
 {
 	__u32 *ptr = (__u32 *) regs;
+	unsigned int idx;
 
 	if (regs == NULL)
 		return 0;
-	if (f->reg_idx >= sizeof(struct pt_regs) / sizeof(__u32))
-		f->reg_idx = 0;
-	return *(ptr + f->reg_idx++);
+	idx = READ_ONCE(f->reg_idx);
+	if (idx >= sizeof(struct pt_regs) / sizeof(__u32))
+		idx = 0;
+	ptr += idx++;
+	WRITE_ONCE(f->reg_idx, idx);
+	return *ptr;
 }
 
 void add_interrupt_randomness(int irq, int irq_flags)
@@ -260,13 +260,13 @@ at91_clk_register_generated(struct regmap *regmap, spinlock_t *lock,
 	gck->lock = lock;
 	gck->range = *range;
 
+	clk_generated_startup(gck);
 	hw = &gck->hw;
 	ret = clk_hw_register(NULL, &gck->hw);
 	if (ret) {
 		kfree(gck);
 		hw = ERR_PTR(ret);
-	} else
-		clk_generated_startup(gck);
+	}
 
 	return hw;
 }
@@ -106,7 +106,7 @@ static int __set_clk_rates(struct device_node *node, bool clk_supplier)
 
 			rc = clk_set_rate(clk, rate);
 			if (rc < 0)
-				pr_err("clk: couldn't set %s clk rate to %d (%d), current rate: %ld\n",
+				pr_err("clk: couldn't set %s clk rate to %u (%d), current rate: %lu\n",
 				       __clk_get_name(clk), rate, rc,
 				       clk_get_rate(clk));
 			clk_put(clk);
@@ -71,15 +71,15 @@ static const struct clk_ops scpi_clk_ops = {
 };
 
 /* find closest match to given frequency in OPP table */
-static int __scpi_dvfs_round_rate(struct scpi_clk *clk, unsigned long rate)
+static long __scpi_dvfs_round_rate(struct scpi_clk *clk, unsigned long rate)
 {
 	int idx;
-	u32 fmin = 0, fmax = ~0, ftmp;
+	unsigned long fmin = 0, fmax = ~0, ftmp;
 	const struct scpi_opp *opp = clk->info->opps;
 
 	for (idx = 0; idx < clk->info->count; idx++, opp++) {
 		ftmp = opp->freq;
-		if (ftmp >= (u32)rate) {
+		if (ftmp >= rate) {
 			if (ftmp <= fmax)
 				fmax = ftmp;
 			break;
@@ -7,9 +7,9 @@ config COMMON_CLK_MESON8B
 	bool
 	depends on COMMON_CLK_AMLOGIC
 	help
-	  Support for the clock controller on AmLogic S805 devices, aka
-	  meson8b. Say Y if you want peripherals and CPU frequency scaling to
-	  work.
+	  Support for the clock controller on AmLogic S802 (Meson8),
+	  S805 (Meson8b) and S812 (Meson8m2) devices. Say Y if you
+	  want peripherals and CPU frequency scaling to work.
 
 config COMMON_CLK_GXBB
 	bool
@@ -1,5 +1,6 @@
 /*
- * AmLogic S805 / Meson8b Clock Controller Driver
+ * AmLogic S802 (Meson8) / S805 (Meson8b) / S812 (Meson8m2) Clock Controller
+ * Driver
 *
 * Copyright (c) 2015 Endless Mobile, Inc.
 * Author: Carlo Caione <carlo@endlessm.com>
@@ -661,7 +662,9 @@ iounmap:
 }
 
 static const struct of_device_id meson8b_clkc_match_table[] = {
+	{ .compatible = "amlogic,meson8-clkc" },
 	{ .compatible = "amlogic,meson8b-clkc" },
+	{ .compatible = "amlogic,meson8m2-clkc" },
 	{ }
 };
@@ -271,11 +271,14 @@ struct cpg_pll_config {
 	unsigned int extal_div;
 	unsigned int pll1_mult;
 	unsigned int pll3_mult;
+	unsigned int pll0_mult;		/* For R-Car V2H and E2 only */
 };
 
 static const struct cpg_pll_config cpg_pll_configs[8] __initconst = {
-	{ 1, 208, 106 }, { 1, 208, 88 }, { 1, 156, 80 }, { 1, 156, 66 },
-	{ 2, 240, 122 }, { 2, 240, 102 }, { 2, 208, 106 }, { 2, 208, 88 },
+	{ 1, 208, 106, 200 }, { 1, 208, 88, 200 },
+	{ 1, 156, 80, 150 }, { 1, 156, 66, 150 },
+	{ 2, 240, 122, 230 }, { 2, 240, 102, 230 },
+	{ 2, 208, 106, 200 }, { 2, 208, 88, 200 },
 };
 
 /* SDHI divisors */
@@ -297,6 +300,12 @@ static const struct clk_div_table cpg_sd01_div_table[] = {
 
 static u32 cpg_mode __initdata;
 
+static const char * const pll0_mult_match[] = {
+	"renesas,r8a7792-cpg-clocks",
+	"renesas,r8a7794-cpg-clocks",
+	NULL
+};
+
 static struct clk * __init
 rcar_gen2_cpg_register_clock(struct device_node *np, struct rcar_gen2_cpg *cpg,
 			     const struct cpg_pll_config *config,
@@ -317,9 +326,15 @@ rcar_gen2_cpg_register_clock(struct device_node *np, struct rcar_gen2_cpg *cpg,
 		 * clock implementation and we currently have no need to change
 		 * the multiplier value.
 		 */
-		u32 value = clk_readl(cpg->reg + CPG_PLL0CR);
+		if (of_device_compatible_match(np, pll0_mult_match)) {
+			/* R-Car V2H and E2 do not have PLL0CR */
+			mult = config->pll0_mult;
+			div = 3;
+		} else {
+			u32 value = clk_readl(cpg->reg + CPG_PLL0CR);
+			mult = ((value >> 24) & ((1 << 7) - 1)) + 1;
+		}
 		parent_name = "main";
-		mult = ((value >> 24) & ((1 << 7) - 1)) + 1;
 	} else if (!strcmp(name, "pll1")) {
 		parent_name = "main";
 		mult = config->pll1_mult / 2;
@@ -174,8 +174,10 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
 		if (!state_node)
 			break;
 
-		if (!of_device_is_available(state_node))
+		if (!of_device_is_available(state_node)) {
+			of_node_put(state_node);
 			continue;
+		}
 
 		if (!idle_state_valid(state_node, i, cpumask)) {
 			pr_warn("%s idle state not valid, bailing out\n",
@@ -750,7 +750,10 @@ static int omap_sham_align_sgs(struct scatterlist *sg,
|
||||
if (final)
|
||||
new_len = DIV_ROUND_UP(new_len, bs) * bs;
|
||||
else
|
||||
new_len = new_len / bs * bs;
|
||||
new_len = (new_len - 1) / bs * bs;
|
||||
|
||||
if (nbytes != new_len)
|
||||
list_ok = false;
|
||||
|
||||
while (nbytes > 0 && sg_tmp) {
|
||||
n++;
|
||||
@@ -846,6 +849,8 @@ static int omap_sham_prepare_request(struct ahash_request *req, bool update)
|
||||
xmit_len = DIV_ROUND_UP(xmit_len, bs) * bs;
|
||||
else
|
||||
xmit_len = xmit_len / bs * bs;
|
||||
} else if (!final) {
|
||||
xmit_len -= bs;
|
||||
}
|
||||
|
||||
hash_later = rctx->total - xmit_len;
|
||||
@@ -873,14 +878,21 @@ static int omap_sham_prepare_request(struct ahash_request *req, bool update)
 	}
 
 	if (hash_later) {
-		if (req->nbytes) {
-			scatterwalk_map_and_copy(rctx->buffer, req->src,
-						 req->nbytes - hash_later,
-						 hash_later, 0);
-		} else {
+		int offset = 0;
+
+		if (hash_later > req->nbytes) {
 			memcpy(rctx->buffer, rctx->buffer + xmit_len,
-			       hash_later);
+			       hash_later - req->nbytes);
+			offset = hash_later - req->nbytes;
 		}
+
+		if (req->nbytes) {
+			scatterwalk_map_and_copy(rctx->buffer + offset,
+						 req->src,
+						 offset + req->nbytes -
+						 hash_later, hash_later, 0);
+		}
+
 		rctx->bufcnt = hash_later;
 	} else {
 		rctx->bufcnt = 0;

@@ -1130,7 +1142,7 @@ retry:
 	ctx = ahash_request_ctx(req);
 
 	err = omap_sham_prepare_request(req, ctx->op == OP_UPDATE);
-	if (err)
+	if (err || !ctx->total)
 		goto err1;
 
 	dev_dbg(dd->dev, "handling new req, op: %lu, nbytes: %d\n",

@@ -1189,11 +1201,10 @@ static int omap_sham_update(struct ahash_request *req)
 	if (!req->nbytes)
 		return 0;
 
-	if (ctx->total + req->nbytes < ctx->buflen) {
+	if (ctx->bufcnt + req->nbytes <= ctx->buflen) {
 		scatterwalk_map_and_copy(ctx->buffer + ctx->bufcnt, req->src,
 					 0, req->nbytes, 0);
 		ctx->bufcnt += req->nbytes;
-		ctx->total += req->nbytes;
 		return 0;
 	}

@@ -943,7 +943,8 @@ static ssize_t governor_store(struct device *dev, struct device_attribute *attr,
 	if (df->governor == governor) {
 		ret = 0;
 		goto out;
-	} else if (df->governor->immutable || governor->immutable) {
+	} else if ((df->governor && df->governor->immutable) ||
+		   governor->immutable) {
 		ret = -EINVAL;
 		goto out;
 	}

@@ -1755,19 +1755,26 @@ static int sdma_probe(struct platform_device *pdev)
 	if (IS_ERR(sdma->clk_ahb))
 		return PTR_ERR(sdma->clk_ahb);
 
-	clk_prepare(sdma->clk_ipg);
-	clk_prepare(sdma->clk_ahb);
+	ret = clk_prepare(sdma->clk_ipg);
+	if (ret)
+		return ret;
+
+	ret = clk_prepare(sdma->clk_ahb);
+	if (ret)
+		goto err_clk;
 
 	ret = devm_request_irq(&pdev->dev, irq, sdma_int_handler, 0, "sdma",
 			       sdma);
 	if (ret)
-		return ret;
+		goto err_irq;
 
 	sdma->irq = irq;
 
 	sdma->script_addrs = kzalloc(sizeof(*sdma->script_addrs), GFP_KERNEL);
-	if (!sdma->script_addrs)
-		return -ENOMEM;
+	if (!sdma->script_addrs) {
+		ret = -ENOMEM;
+		goto err_irq;
+	}
 
 	/* initially no scripts available */
 	saddr_arr = (s32 *)sdma->script_addrs;

@@ -1882,6 +1889,10 @@ err_register:
 	dma_async_device_unregister(&sdma->dma_device);
 err_init:
 	kfree(sdma->script_addrs);
+err_irq:
+	clk_unprepare(sdma->clk_ahb);
+err_clk:
+	clk_unprepare(sdma->clk_ipg);
 	return ret;
 }

@@ -1893,6 +1904,8 @@ static int sdma_remove(struct platform_device *pdev)
 	devm_free_irq(&pdev->dev, sdma->irq, sdma);
 	dma_async_device_unregister(&sdma->dma_device);
 	kfree(sdma->script_addrs);
+	clk_unprepare(sdma->clk_ahb);
+	clk_unprepare(sdma->clk_ipg);
 	/* Kill the tasklet */
 	for (i = 0; i < MAX_DMA_CHANNELS; i++) {
 		struct sdma_channel *sdmac = &sdma->channel[i];

@@ -759,7 +759,7 @@ static int mv64x60_mc_err_probe(struct platform_device *pdev)
 		/* Non-ECC RAM? */
 		printk(KERN_WARNING "%s: No ECC DIMMs discovered\n", __func__);
 		res = -ENODEV;
-		goto err2;
+		goto err;
 	}
 
 	edac_dbg(3, "init mci\n");

@@ -90,8 +90,18 @@ static inline int to_reg(int gpio, enum ctrl_register reg_type)
 {
 	int reg;
 
-	if (gpio == 94)
-		return GPIOPANELCTL;
+	if (gpio >= CRYSTALCOVE_GPIO_NUM) {
+		/*
+		 * Virtual GPIO called from ACPI, for now we only support
+		 * the panel ctl.
+		 */
+		switch (gpio) {
+		case 0x5e:
+			return GPIOPANELCTL;
+		default:
+			return -EOPNOTSUPP;
+		}
+	}
 
 	if (reg_type == CTRL_IN) {
 		if (gpio < 8)

@@ -130,36 +140,36 @@ static void crystalcove_update_irq_ctrl(struct crystalcove_gpio *cg, int gpio)
 static int crystalcove_gpio_dir_in(struct gpio_chip *chip, unsigned gpio)
 {
 	struct crystalcove_gpio *cg = gpiochip_get_data(chip);
+	int reg = to_reg(gpio, CTRL_OUT);
 
-	if (gpio > CRYSTALCOVE_VGPIO_NUM)
+	if (reg < 0)
 		return 0;
 
-	return regmap_write(cg->regmap, to_reg(gpio, CTRL_OUT),
-			    CTLO_INPUT_SET);
+	return regmap_write(cg->regmap, reg, CTLO_INPUT_SET);
 }
 
 static int crystalcove_gpio_dir_out(struct gpio_chip *chip, unsigned gpio,
 				    int value)
 {
 	struct crystalcove_gpio *cg = gpiochip_get_data(chip);
+	int reg = to_reg(gpio, CTRL_OUT);
 
-	if (gpio > CRYSTALCOVE_VGPIO_NUM)
+	if (reg < 0)
 		return 0;
 
-	return regmap_write(cg->regmap, to_reg(gpio, CTRL_OUT),
-			    CTLO_OUTPUT_SET | value);
+	return regmap_write(cg->regmap, reg, CTLO_OUTPUT_SET | value);
 }
 
 static int crystalcove_gpio_get(struct gpio_chip *chip, unsigned gpio)
 {
 	struct crystalcove_gpio *cg = gpiochip_get_data(chip);
-	int ret;
 	unsigned int val;
+	int ret, reg = to_reg(gpio, CTRL_IN);
 
-	if (gpio > CRYSTALCOVE_VGPIO_NUM)
+	if (reg < 0)
 		return 0;
 
-	ret = regmap_read(cg->regmap, to_reg(gpio, CTRL_IN), &val);
+	ret = regmap_read(cg->regmap, reg, &val);
 	if (ret)
 		return ret;

@@ -170,14 +180,15 @@ static void crystalcove_gpio_set(struct gpio_chip *chip,
 				 unsigned gpio, int value)
 {
 	struct crystalcove_gpio *cg = gpiochip_get_data(chip);
+	int reg = to_reg(gpio, CTRL_OUT);
 
-	if (gpio > CRYSTALCOVE_VGPIO_NUM)
+	if (reg < 0)
 		return;
 
 	if (value)
-		regmap_update_bits(cg->regmap, to_reg(gpio, CTRL_OUT), 1, 1);
+		regmap_update_bits(cg->regmap, reg, 1, 1);
 	else
-		regmap_update_bits(cg->regmap, to_reg(gpio, CTRL_OUT), 1, 0);
+		regmap_update_bits(cg->regmap, reg, 1, 0);
 }
 
 static int crystalcove_irq_type(struct irq_data *data, unsigned type)

@@ -185,6 +196,9 @@ static int crystalcove_irq_type(struct irq_data *data, unsigned type)
 	struct crystalcove_gpio *cg =
 		gpiochip_get_data(irq_data_get_irq_chip_data(data));
 
+	if (data->hwirq >= CRYSTALCOVE_GPIO_NUM)
+		return 0;
+
 	switch (type) {
 	case IRQ_TYPE_NONE:
 		cg->intcnt_value = CTLI_INTCNT_DIS;

@@ -235,8 +249,10 @@ static void crystalcove_irq_unmask(struct irq_data *data)
 	struct crystalcove_gpio *cg =
 		gpiochip_get_data(irq_data_get_irq_chip_data(data));
 
-	cg->set_irq_mask = false;
-	cg->update |= UPDATE_IRQ_MASK;
+	if (data->hwirq < CRYSTALCOVE_GPIO_NUM) {
+		cg->set_irq_mask = false;
+		cg->update |= UPDATE_IRQ_MASK;
+	}
 }
 
 static void crystalcove_irq_mask(struct irq_data *data)

@@ -244,8 +260,10 @@ static void crystalcove_irq_mask(struct irq_data *data)
 	struct crystalcove_gpio *cg =
 		gpiochip_get_data(irq_data_get_irq_chip_data(data));
 
-	cg->set_irq_mask = true;
-	cg->update |= UPDATE_IRQ_MASK;
+	if (data->hwirq < CRYSTALCOVE_GPIO_NUM) {
+		cg->set_irq_mask = true;
+		cg->update |= UPDATE_IRQ_MASK;
+	}
 }
 
 static struct irq_chip crystalcove_irqchip = {

@@ -3231,7 +3231,8 @@ struct gpio_desc *__must_check gpiod_get_index(struct device *dev,
 		return desc;
 	}
 
-	status = gpiod_request(desc, con_id);
+	/* If a connection label was passed use that, else use the device name as label */
+	status = gpiod_request(desc, con_id ? con_id : dev_name(dev));
 	if (status < 0)
 		return ERR_PTR(status);

@@ -317,7 +317,8 @@ static struct kfd_process *create_process(const struct task_struct *thread)
 
 	/* init process apertures*/
 	process->is_32bit_user_mode = in_compat_syscall();
-	if (kfd_init_apertures(process) != 0)
+	err = kfd_init_apertures(process);
+	if (err != 0)
 		goto err_init_apretures;
 
 	return process;

@@ -770,6 +770,8 @@ static int msm_gem_new_impl(struct drm_device *dev,
 	unsigned sz;
 	bool use_vram = false;
 
+	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
+
 	switch (flags & MSM_BO_CACHE_MASK) {
 	case MSM_BO_UNCACHED:
 	case MSM_BO_CACHED:
@@ -863,7 +865,11 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 
 	size = PAGE_ALIGN(dmabuf->size);
 
+	/* Take mutex so we can modify the inactive list in msm_gem_new_impl */
+	mutex_lock(&dev->struct_mutex);
 	ret = msm_gem_new_impl(dev, size, MSM_BO_WC, dmabuf->resv, &obj);
+	mutex_unlock(&dev->struct_mutex);
+
 	if (ret)
 		goto fail;

@@ -195,7 +195,7 @@ static void evict_entry(struct drm_gem_object *obj,
 	size_t size = PAGE_SIZE * n;
 	loff_t off = mmap_offset(obj) +
 			(entry->obj_pgoff << PAGE_SHIFT);
-	const int m = 1 + ((omap_obj->width << fmt) / PAGE_SIZE);
+	const int m = DIV_ROUND_UP(omap_obj->width << fmt, PAGE_SIZE);
 
 	if (m > 1) {
 		int i;
@@ -442,7 +442,7 @@ static int fault_2d(struct drm_gem_object *obj,
 	 * into account in some of the math, so figure out virtual stride
 	 * in pages
 	 */
-	const int m = 1 + ((omap_obj->width << fmt) / PAGE_SIZE);
+	const int m = DIV_ROUND_UP(omap_obj->width << fmt, PAGE_SIZE);
 
 	/* We don't use vmf->pgoff since that has the fake offset: */
 	pgoff = ((unsigned long)vmf->virtual_address -

@@ -212,6 +212,11 @@ static const struct component_master_ops sun4i_drv_master_ops = {
 	.unbind = sun4i_drv_unbind,
 };
 
+static bool sun4i_drv_node_is_connector(struct device_node *node)
+{
+	return of_device_is_compatible(node, "hdmi-connector");
+}
+
 static bool sun4i_drv_node_is_frontend(struct device_node *node)
 {
 	return of_device_is_compatible(node, "allwinner,sun5i-a13-display-frontend") ||
@@ -252,6 +257,13 @@ static int sun4i_drv_add_endpoints(struct device *dev,
 	    !of_device_is_available(node))
 		return 0;
 
+	/*
+	 * The connectors will be the last nodes in our pipeline, we
+	 * can just bail out.
+	 */
+	if (sun4i_drv_node_is_connector(node))
+		return 0;
+
 	if (!sun4i_drv_node_is_frontend(node)) {
 		/* Add current component */
 		DRM_DEBUG_DRIVER("Adding component %s\n",

@@ -110,8 +110,8 @@ vc4_get_hang_state_ioctl(struct drm_device *dev, void *data,
 						  &handle);
 
 		if (ret) {
-			state->bo_count = i - 1;
-			goto err;
+			state->bo_count = i;
+			goto err_delete_handle;
 		}
 		bo_state[i].handle = handle;
 		bo_state[i].paddr = vc4_bo->base.paddr;
@@ -123,13 +123,16 @@ vc4_get_hang_state_ioctl(struct drm_device *dev, void *data,
 			 state->bo_count * sizeof(*bo_state)))
 		ret = -EFAULT;
 
-	kfree(bo_state);
+err_delete_handle:
+	if (ret) {
+		for (i = 0; i < state->bo_count; i++)
+			drm_gem_handle_delete(file_priv, bo_state[i].handle);
+	}
 
 err_free:
-
 	vc4_free_hang_state(dev, kernel_state);
+	kfree(bo_state);
 
 err:
 	return ret;
 }

@@ -968,6 +968,15 @@ static int i2c_hid_acpi_pdata(struct i2c_client *client,
 	return ret < 0 && ret != -ENXIO ? ret : 0;
 }
 
+static void i2c_hid_acpi_fix_up_power(struct device *dev)
+{
+	acpi_handle handle = ACPI_HANDLE(dev);
+	struct acpi_device *adev;
+
+	if (handle && acpi_bus_get_device(handle, &adev) == 0)
+		acpi_device_fix_up_power(adev);
+}
+
 static const struct acpi_device_id i2c_hid_acpi_match[] = {
 	{"ACPI0C50", 0 },
 	{"PNP0C50", 0 },
@@ -980,6 +989,8 @@ static inline int i2c_hid_acpi_pdata(struct i2c_client *client,
 {
 	return -ENODEV;
 }
 
+static inline void i2c_hid_acpi_fix_up_power(struct device *dev) {}
+
 #endif
 
 #ifdef CONFIG_OF
@@ -1082,6 +1093,8 @@ static int i2c_hid_probe(struct i2c_client *client,
 	if (ret < 0)
 		goto err;
 
+	i2c_hid_acpi_fix_up_power(&client->dev);
+
 	pm_runtime_get_noresume(&client->dev);
 	pm_runtime_set_active(&client->dev);
 	pm_runtime_enable(&client->dev);

@@ -94,18 +94,20 @@ enum ina2xx_ids { ina219, ina226 };
 
 struct ina2xx_config {
 	u16 config_default;
-	int calibration_factor;
+	int calibration_value;
 	int registers;
 	int shunt_div;
 	int bus_voltage_shift;
 	int bus_voltage_lsb;	/* uV */
-	int power_lsb;		/* uW */
+	int power_lsb_factor;
 };
 
 struct ina2xx_data {
 	const struct ina2xx_config *config;
 
 	long rshunt;
+	long current_lsb_uA;
+	long power_lsb_uW;
 	struct mutex config_lock;
 	struct regmap *regmap;
 
@@ -115,21 +117,21 @@ struct ina2xx_data {
 static const struct ina2xx_config ina2xx_config[] = {
 	[ina219] = {
 		.config_default = INA219_CONFIG_DEFAULT,
-		.calibration_factor = 40960000,
+		.calibration_value = 4096,
 		.registers = INA219_REGISTERS,
 		.shunt_div = 100,
 		.bus_voltage_shift = 3,
 		.bus_voltage_lsb = 4000,
-		.power_lsb = 20000,
+		.power_lsb_factor = 20,
 	},
 	[ina226] = {
 		.config_default = INA226_CONFIG_DEFAULT,
-		.calibration_factor = 5120000,
+		.calibration_value = 2048,
 		.registers = INA226_REGISTERS,
 		.shunt_div = 400,
 		.bus_voltage_shift = 0,
 		.bus_voltage_lsb = 1250,
-		.power_lsb = 25000,
+		.power_lsb_factor = 25,
 	},
 };

@@ -168,12 +170,16 @@ static u16 ina226_interval_to_reg(int interval)
 	return INA226_SHIFT_AVG(avg_bits);
 }
 
+/*
+ * Calibration register is set to the best value, which eliminates
+ * truncation errors on calculating current register in hardware.
+ * According to datasheet (eq. 3) the best values are 2048 for
+ * ina226 and 4096 for ina219. They are hardcoded as calibration_value.
+ */
 static int ina2xx_calibrate(struct ina2xx_data *data)
 {
-	u16 val = DIV_ROUND_CLOSEST(data->config->calibration_factor,
-				    data->rshunt);
-
-	return regmap_write(data->regmap, INA2XX_CALIBRATION, val);
+	return regmap_write(data->regmap, INA2XX_CALIBRATION,
+			    data->config->calibration_value);
 }
 
 /*
@@ -186,10 +192,6 @@ static int ina2xx_init(struct ina2xx_data *data)
 	if (ret < 0)
 		return ret;
 
-	/*
-	 * Set current LSB to 1mA, shunt is in uOhms
-	 * (equation 13 in datasheet).
-	 */
 	return ina2xx_calibrate(data);
 }

@@ -267,15 +269,15 @@ static int ina2xx_get_value(struct ina2xx_data *data, u8 reg,
 		val = DIV_ROUND_CLOSEST(val, 1000);
 		break;
 	case INA2XX_POWER:
-		val = regval * data->config->power_lsb;
+		val = regval * data->power_lsb_uW;
 		break;
 	case INA2XX_CURRENT:
-		/* signed register, LSB=1mA (selected), in mA */
-		val = (s16)regval;
+		/* signed register, result in mA */
+		val = regval * data->current_lsb_uA;
+		val = DIV_ROUND_CLOSEST(val, 1000);
 		break;
 	case INA2XX_CALIBRATION:
-		val = DIV_ROUND_CLOSEST(data->config->calibration_factor,
-					regval);
+		val = regval;
 		break;
 	default:
 		/* programmer goofed */

@@ -303,9 +305,32 @@ static ssize_t ina2xx_show_value(struct device *dev,
 		       ina2xx_get_value(data, attr->index, regval));
 }
 
-static ssize_t ina2xx_set_shunt(struct device *dev,
-				struct device_attribute *da,
-				const char *buf, size_t count)
+/*
+ * In order to keep calibration register value fixed, the product
+ * of current_lsb and shunt_resistor should also be fixed and equal
+ * to shunt_voltage_lsb = 1 / shunt_div multiplied by 10^9 in order
+ * to keep the scale.
+ */
+static int ina2xx_set_shunt(struct ina2xx_data *data, long val)
+{
+	unsigned int dividend = DIV_ROUND_CLOSEST(1000000000,
+						  data->config->shunt_div);
+	if (val <= 0 || val > dividend)
+		return -EINVAL;
+
+	mutex_lock(&data->config_lock);
+	data->rshunt = val;
+	data->current_lsb_uA = DIV_ROUND_CLOSEST(dividend, val);
+	data->power_lsb_uW = data->config->power_lsb_factor *
+			     data->current_lsb_uA;
+	mutex_unlock(&data->config_lock);
+
+	return 0;
+}
+
+static ssize_t ina2xx_store_shunt(struct device *dev,
+				  struct device_attribute *da,
+				  const char *buf, size_t count)
 {
 	unsigned long val;
 	int status;
@@ -315,18 +340,9 @@ static ssize_t ina2xx_set_shunt(struct device *dev,
 	if (status < 0)
 		return status;
 
-	if (val == 0 ||
-	    /* Values greater than the calibration factor make no sense. */
-	    val > data->config->calibration_factor)
-		return -EINVAL;
-
-	mutex_lock(&data->config_lock);
-	data->rshunt = val;
-	status = ina2xx_calibrate(data);
-	mutex_unlock(&data->config_lock);
+	status = ina2xx_set_shunt(data, val);
 	if (status < 0)
 		return status;
 
 	return count;
 }

@@ -386,7 +402,7 @@ static SENSOR_DEVICE_ATTR(power1_input, S_IRUGO, ina2xx_show_value, NULL,
 
 /* shunt resistance */
 static SENSOR_DEVICE_ATTR(shunt_resistor, S_IRUGO | S_IWUSR,
-			  ina2xx_show_value, ina2xx_set_shunt,
+			  ina2xx_show_value, ina2xx_store_shunt,
			  INA2XX_CALIBRATION);
 
 /* update interval (ina226 only) */
@@ -441,10 +457,7 @@ static int ina2xx_probe(struct i2c_client *client,
 		val = INA2XX_RSHUNT_DEFAULT;
 	}
 
-	if (val <= 0 || val > data->config->calibration_factor)
-		return -ENODEV;
-
-	data->rshunt = val;
+	ina2xx_set_shunt(data, val);
 
 	ina2xx_regmap_config.max_register = data->config->registers;
 
@@ -362,6 +362,13 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
 		desc.type = CORESIGHT_DEV_TYPE_SINK;
 		desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
 		desc.ops = &tmc_etr_cs_ops;
+		/*
+		 * ETR configuration uses a 40-bit AXI master in place of
+		 * the embedded SRAM of ETB/ETF.
+		 */
+		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
+		if (ret)
+			goto out;
 	} else {
 		desc.type = CORESIGHT_DEV_TYPE_LINKSINK;
 		desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_FIFO;

@@ -550,6 +550,9 @@ int coresight_enable(struct coresight_device *csdev)
 	int cpu, ret = 0;
 	struct coresight_device *sink;
 	struct list_head *path;
+	enum coresight_dev_subtype_source subtype;
+
+	subtype = csdev->subtype.source_subtype;
 
 	mutex_lock(&coresight_mutex);
 
@@ -557,8 +560,16 @@ int coresight_enable(struct coresight_device *csdev)
 	if (ret)
 		goto out;
 
-	if (csdev->enable)
+	if (csdev->enable) {
+		/*
+		 * There could be multiple applications driving the software
+		 * source. So keep the refcount for each such user when the
+		 * source is already enabled.
+		 */
+		if (subtype == CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE)
+			atomic_inc(csdev->refcnt);
 		goto out;
+	}
 
 	/*
 	 * Search for a valid sink for this session but don't reset the
@@ -585,7 +596,7 @@ int coresight_enable(struct coresight_device *csdev)
 	if (ret)
 		goto err_source;
 
-	switch (csdev->subtype.source_subtype) {
+	switch (subtype) {
 	case CORESIGHT_DEV_SUBTYPE_SOURCE_PROC:
 		/*
 		 * When working from sysFS it is important to keep track

@@ -196,20 +196,25 @@ static int i2c_mux_reg_probe(struct platform_device *pdev)
 		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 		mux->data.reg_size = resource_size(res);
 		mux->data.reg = devm_ioremap_resource(&pdev->dev, res);
-		if (IS_ERR(mux->data.reg))
-			return PTR_ERR(mux->data.reg);
+		if (IS_ERR(mux->data.reg)) {
+			ret = PTR_ERR(mux->data.reg);
+			goto err_put_parent;
+		}
 	}
 
 	if (mux->data.reg_size != 4 && mux->data.reg_size != 2 &&
 	    mux->data.reg_size != 1) {
 		dev_err(&pdev->dev, "Invalid register size\n");
-		return -EINVAL;
+		ret = -EINVAL;
+		goto err_put_parent;
 	}
 
 	muxc = i2c_mux_alloc(parent, &pdev->dev, mux->data.n_values, 0, 0,
 			     i2c_mux_reg_select, NULL);
-	if (!muxc)
-		return -ENOMEM;
+	if (!muxc) {
+		ret = -ENOMEM;
+		goto err_put_parent;
+	}
 	muxc->priv = mux;
 
 	platform_set_drvdata(pdev, muxc);
@@ -235,6 +240,8 @@ static int i2c_mux_reg_probe(struct platform_device *pdev)
 
 add_adapter_failed:
 	i2c_mux_del_adapters(muxc);
+err_put_parent:
+	i2c_put_adapter(parent);
 
 	return ret;
 }

@@ -121,10 +121,21 @@ static int hi8435_write_event_config(struct iio_dev *idev,
 				     enum iio_event_direction dir, int state)
 {
 	struct hi8435_priv *priv = iio_priv(idev);
+	int ret;
+	u32 tmp;
+
+	if (state) {
+		ret = hi8435_readl(priv, HI8435_SO31_0_REG, &tmp);
+		if (ret < 0)
+			return ret;
+		if (tmp & BIT(chan->channel))
+			priv->event_prev_val |= BIT(chan->channel);
+		else
+			priv->event_prev_val &= ~BIT(chan->channel);
 
-	priv->event_scan_mask &= ~BIT(chan->channel);
-	if (state)
 		priv->event_scan_mask |= BIT(chan->channel);
-	else
+	} else
 		priv->event_scan_mask &= ~BIT(chan->channel);
 
 	return 0;
 }

@@ -442,13 +453,15 @@ static int hi8435_probe(struct spi_device *spi)
 	priv->spi = spi;
 
 	reset_gpio = devm_gpiod_get(&spi->dev, NULL, GPIOD_OUT_LOW);
-	if (IS_ERR(reset_gpio)) {
-		/* chip s/w reset if h/w reset failed */
+	if (!IS_ERR(reset_gpio)) {
+		/* need >=100ns low pulse to reset chip */
+		gpiod_set_raw_value_cansleep(reset_gpio, 0);
+		udelay(1);
+		gpiod_set_raw_value_cansleep(reset_gpio, 1);
+	} else {
+		/* s/w reset chip if h/w reset is not available */
 		hi8435_writeb(priv, HI8435_CTRL_REG, HI8435_CTRL_SRST);
 		hi8435_writeb(priv, HI8435_CTRL_REG, 0);
-	} else {
-		udelay(5);
-		gpiod_set_value(reset_gpio, 1);
 	}
 
 	spi_set_drvdata(spi, idev);

@@ -510,13 +510,26 @@ static int rpr0521_probe(struct i2c_client *client,
 
 	ret = pm_runtime_set_active(&client->dev);
 	if (ret < 0)
-		return ret;
+		goto err_poweroff;
 
 	pm_runtime_enable(&client->dev);
 	pm_runtime_set_autosuspend_delay(&client->dev, RPR0521_SLEEP_DELAY_MS);
 	pm_runtime_use_autosuspend(&client->dev);
 
-	return iio_device_register(indio_dev);
+	ret = iio_device_register(indio_dev);
+	if (ret)
+		goto err_pm_disable;
+
+	return 0;
+
+err_pm_disable:
+	pm_runtime_disable(&client->dev);
+	pm_runtime_set_suspended(&client->dev);
+	pm_runtime_put_noidle(&client->dev);
+err_poweroff:
+	rpr0521_poweroff(data);
+
+	return ret;
 }
 
 static int rpr0521_remove(struct i2c_client *client)

@@ -48,8 +48,6 @@ static int st_magn_spi_remove(struct spi_device *spi)
 }
 
 static const struct spi_device_id st_magn_id_table[] = {
-	{ LSM303DLHC_MAGN_DEV_NAME },
-	{ LSM303DLM_MAGN_DEV_NAME },
 	{ LIS3MDL_MAGN_DEV_NAME },
 	{ LSM303AGR_MAGN_DEV_NAME },
 	{},

@@ -871,12 +871,13 @@ static int zpa2326_wait_oneshot_completion(const struct iio_dev *indio_dev,
 {
 	int ret;
 	unsigned int val;
+	long timeout;
 
 	zpa2326_dbg(indio_dev, "waiting for one shot completion interrupt");
 
-	ret = wait_for_completion_interruptible_timeout(
+	timeout = wait_for_completion_interruptible_timeout(
 		&private->data_ready, ZPA2326_CONVERSION_JIFFIES);
-	if (ret > 0)
+	if (timeout > 0)
 		/*
 		 * Interrupt handler completed before timeout: return operation
 		 * status.
@@ -886,13 +887,16 @@ static int zpa2326_wait_oneshot_completion(const struct iio_dev *indio_dev,
 	/* Clear all interrupts just to be sure. */
 	regmap_read(private->regmap, ZPA2326_INT_SOURCE_REG, &val);
 
-	if (!ret)
+	if (!timeout) {
+		/* Timed out. */
+		zpa2326_warn(indio_dev, "no one shot interrupt occurred (%ld)",
+			     timeout);
 		ret = -ETIME;
-
-	if (ret != -ERESTARTSYS)
-		zpa2326_warn(indio_dev, "no one shot interrupt occurred (%d)",
-			     ret);
+	} else if (timeout < 0) {
+		zpa2326_warn(indio_dev,
+			     "wait for one shot interrupt cancelled");
+		ret = -ERESTARTSYS;
+	}
 
 	return ret;
 }

@@ -488,6 +488,7 @@ static int _put_ep_safe(struct c4iw_dev *dev, struct sk_buff *skb)
 
 	ep = *((struct c4iw_ep **)(skb->cb + 2 * sizeof(void *)));
 	release_ep_resources(ep);
+	kfree_skb(skb);
 	return 0;
 }
 
@@ -498,6 +499,7 @@ static int _put_pass_ep_safe(struct c4iw_dev *dev, struct sk_buff *skb)
 	ep = *((struct c4iw_ep **)(skb->cb + 2 * sizeof(void *)));
 	c4iw_put_ep(&ep->parent_ep->com);
 	release_ep_resources(ep);
+	kfree_skb(skb);
 	return 0;
 }
 
@@ -569,11 +571,13 @@ static void abort_arp_failure(void *handle, struct sk_buff *skb)
 
 	PDBG("%s rdev %p\n", __func__, rdev);
 	req->cmd = CPL_ABORT_NO_RST;
+	skb_get(skb);
 	ret = c4iw_ofld_send(rdev, skb);
 	if (ret) {
 		__state_set(&ep->com, DEAD);
 		queue_arp_failure_cpl(ep, skb, FAKE_CPL_PUT_EP_SAFE);
-	}
+	} else
+		kfree_skb(skb);
 }
 
 static int send_flowc(struct c4iw_ep *ep)

@@ -196,7 +196,8 @@ static const struct sysfs_ops port_cc_sysfs_ops = {
 };
 
 static struct attribute *port_cc_default_attributes[] = {
-	&cc_prescan_attr.attr
+	&cc_prescan_attr.attr,
+	NULL
 };
 
 static struct kobj_type port_cc_ktype = {

@@ -3644,8 +3644,10 @@ enum i40iw_status_code i40iw_config_fpm_values(struct i40iw_sc_dev *dev, u32 qp_
 	hmc_info->hmc_obj[I40IW_HMC_IW_APBVT_ENTRY].cnt = 1;
 	hmc_info->hmc_obj[I40IW_HMC_IW_MR].cnt = mrwanted;
 
-	hmc_info->hmc_obj[I40IW_HMC_IW_XF].cnt = I40IW_MAX_WQ_ENTRIES * qpwanted;
-	hmc_info->hmc_obj[I40IW_HMC_IW_Q1].cnt = 4 * I40IW_MAX_IRD_SIZE * qpwanted;
+	hmc_info->hmc_obj[I40IW_HMC_IW_XF].cnt =
+		roundup_pow_of_two(I40IW_MAX_WQ_ENTRIES * qpwanted);
+	hmc_info->hmc_obj[I40IW_HMC_IW_Q1].cnt =
+		roundup_pow_of_two(2 * I40IW_MAX_IRD_SIZE * qpwanted);
 	hmc_info->hmc_obj[I40IW_HMC_IW_XFFL].cnt =
 		hmc_info->hmc_obj[I40IW_HMC_IW_XF].cnt / hmc_fpm_misc->xf_block_size;
 	hmc_info->hmc_obj[I40IW_HMC_IW_Q1FL].cnt =

@@ -86,6 +86,7 @@
 #define RDMA_OPCODE_MASK 0x0f
 #define RDMA_READ_REQ_OPCODE 1
 #define Q2_BAD_FRAME_OFFSET 72
+#define Q2_FPSN_OFFSET 64
 #define CQE_MAJOR_DRV 0x8000
 
 #define I40IW_TERM_SENT 0x01

@@ -1320,7 +1320,7 @@ static void i40iw_ieq_handle_exception(struct i40iw_puda_rsrc *ieq,
 	u32 *hw_host_ctx = (u32 *)qp->hw_host_ctx;
 	u32 rcv_wnd = hw_host_ctx[23];
 	/* first partial seq # in q2 */
-	u32 fps = qp->q2_buf[16];
+	u32 fps = *(u32 *)(qp->q2_buf + Q2_FPSN_OFFSET);
 	struct list_head *rxlist = &pfpdu->rxlist;
 	struct list_head *plist;

@@ -197,7 +197,7 @@ struct ib_cq *rvt_create_cq(struct ib_device *ibdev,
 		return ERR_PTR(-EINVAL);
 
 	/* Allocate the completion queue structure. */
-	cq = kzalloc(sizeof(*cq), GFP_KERNEL);
+	cq = kzalloc_node(sizeof(*cq), GFP_KERNEL, rdi->dparms.node);
 	if (!cq)
 		return ERR_PTR(-ENOMEM);
 
@@ -213,7 +213,9 @@ struct ib_cq *rvt_create_cq(struct ib_device *ibdev,
 		sz += sizeof(struct ib_uverbs_wc) * (entries + 1);
 	else
 		sz += sizeof(struct ib_wc) * (entries + 1);
-	wc = vmalloc_user(sz);
+	wc = udata ?
+		vmalloc_user(sz) :
+		vzalloc_node(sz, rdi->dparms.node);
 	if (!wc) {
 		ret = ERR_PTR(-ENOMEM);
 		goto bail_cq;
@@ -368,7 +370,9 @@ int rvt_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata)
 		sz += sizeof(struct ib_uverbs_wc) * (cqe + 1);
 	else
 		sz += sizeof(struct ib_wc) * (cqe + 1);
-	wc = vmalloc_user(sz);
+	wc = udata ?
+		vmalloc_user(sz) :
+		vzalloc_node(sz, rdi->dparms.node);
 	if (!wc)
 		return -ENOMEM;

@@ -2292,12 +2292,8 @@ static void srpt_queue_response(struct se_cmd *cmd)
 	}
 	spin_unlock_irqrestore(&ioctx->spinlock, flags);
 
-	if (unlikely(transport_check_aborted_status(&ioctx->cmd, false)
-		     || WARN_ON_ONCE(state == SRPT_STATE_CMD_RSP_SENT))) {
-		atomic_inc(&ch->req_lim_delta);
-		srpt_abort_cmd(ioctx);
+	if (unlikely(WARN_ON_ONCE(state == SRPT_STATE_CMD_RSP_SENT)))
 		return;
-	}
 
 	/* For read commands, transfer the data to the initiator. */
 	if (ioctx->cmd.data_direction == DMA_FROM_DEVICE &&

@@ -2670,7 +2666,8 @@ static void srpt_release_cmd(struct se_cmd *se_cmd)
 	struct srpt_rdma_ch *ch = ioctx->ch;
 	unsigned long flags;
 
-	WARN_ON(ioctx->state != SRPT_STATE_DONE);
+	WARN_ON_ONCE(ioctx->state != SRPT_STATE_DONE &&
+		     !(ioctx->cmd.transport_state & CMD_T_ABORTED));
 
 	if (ioctx->n_rw_ctx) {
 		srpt_free_rw_ctxs(ch, ioctx);

@@ -1082,6 +1082,13 @@ static int elan_probe(struct i2c_client *client,
 		return error;
 	}
 
+	/* Make sure there is something at this address */
+	error = i2c_smbus_read_byte(client);
+	if (error < 0) {
+		dev_dbg(&client->dev, "nothing at this address: %d\n", error);
+		return -ENXIO;
+	}
+
 	/* Initialize the touchpad. */
 	error = elan_initialize(data);
 	if (error)

@@ -557,7 +557,14 @@ static int elan_i2c_finish_fw_update(struct i2c_client *client,
 	long ret;
 	int error;
 	int len;
-	u8 buffer[ETP_I2C_INF_LENGTH];
+	u8 buffer[ETP_I2C_REPORT_LEN];
+
+	len = i2c_master_recv(client, buffer, ETP_I2C_REPORT_LEN);
+	if (len != ETP_I2C_REPORT_LEN) {
+		error = len < 0 ? len : -EIO;
+		dev_warn(dev, "failed to read I2C data after FW WDT reset: %d (%d)\n",
+			error, len);
+	}
 
 	reinit_completion(completion);
 	enable_irq(client->irq);

@@ -1711,6 +1711,17 @@ int elantech_init(struct psmouse *psmouse)
 			 etd->samples[0], etd->samples[1], etd->samples[2]);
 	}
 
+	if (etd->samples[1] == 0x74 && etd->hw_version == 0x03) {
+		/*
+		 * This module has a bug which makes absolute mode
+		 * unusable, so let's abort so we'll be using standard
+		 * PS/2 protocol.
+		 */
+		psmouse_info(psmouse,
+			     "absolute mode broken, forcing standard PS/2 protocol\n");
+		goto init_fail;
+	}
+
 	if (elantech_set_absolute_mode(psmouse)) {
 		psmouse_err(psmouse,
 			    "failed to put touchpad into absolute mode.\n");

@@ -778,8 +778,10 @@ static int __maybe_unused goodix_suspend(struct device *dev)
 	int error;
 
 	/* We need gpio pins to suspend/resume */
-	if (!ts->gpiod_int || !ts->gpiod_rst)
+	if (!ts->gpiod_int || !ts->gpiod_rst) {
+		disable_irq(client->irq);
 		return 0;
+	}
 
 	wait_for_completion(&ts->firmware_loading_complete);
@@ -819,8 +821,10 @@ static int __maybe_unused goodix_resume(struct device *dev)
 	struct goodix_ts_data *ts = i2c_get_clientdata(client);
 	int error;
 
-	if (!ts->gpiod_int || !ts->gpiod_rst)
+	if (!ts->gpiod_int || !ts->gpiod_rst) {
+		enable_irq(client->irq);
 		return 0;
+	}
 
 	/*
 	 * Exit sleep mode by outputting HIGH level to INT pin
@@ -1250,6 +1250,10 @@ gic_acpi_parse_madt_gicc(struct acpi_subtable_header *header,
 	u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
 	void __iomem *redist_base;
 
+	/* GICC entry which has !ACPI_MADT_ENABLED is not unusable so skip */
+	if (!(gicc->flags & ACPI_MADT_ENABLED))
+		return 0;
+
 	redist_base = ioremap(gicc->gicr_base_address, size);
 	if (!redist_base)
 		return -ENOMEM;
@@ -1299,6 +1303,13 @@ static int __init gic_acpi_match_gicc(struct acpi_subtable_header *header,
 	if ((gicc->flags & ACPI_MADT_ENABLED) && gicc->gicr_base_address)
 		return 0;
 
+	/*
+	 * It's perfectly valid firmware can pass disabled GICC entry, driver
+	 * should not treat as errors, skip the entry instead of probe fail.
+	 */
+	if (!(gicc->flags & ACPI_MADT_ENABLED))
+		return 0;
+
 	return -ENODEV;
 }
@@ -105,10 +105,7 @@ static inline void get_mbigen_type_reg(irq_hw_number_t hwirq,
 static inline void get_mbigen_clear_reg(irq_hw_number_t hwirq,
 					u32 *mask, u32 *addr)
 {
-	unsigned int ofst;
-
-	hwirq -= RESERVED_IRQ_PER_MBIGEN_CHIP;
-	ofst = hwirq / 32 * 4;
+	unsigned int ofst = (hwirq / 32) * 4;
 
 	*mask = 1 << (hwirq % 32);
 	*addr = ofst + REG_MBIGEN_CLEAR_OFFSET;
@@ -72,7 +72,7 @@ send_socklist(struct mISDN_sock_list *sl, struct sk_buff *skb)
 		if (sk->sk_state != MISDN_BOUND)
 			continue;
 		if (!cskb)
-			cskb = skb_copy(skb, GFP_KERNEL);
+			cskb = skb_copy(skb, GFP_ATOMIC);
 		if (!cskb) {
 			printk(KERN_WARNING "%s no skb\n", __func__);
 			break;
@@ -266,7 +266,7 @@ static int pca955x_probe(struct i2c_client *client,
 		 "slave address 0x%02x\n",
 		 id->name, chip->bits, client->addr);
 
-	if (!i2c_check_functionality(adapter, I2C_FUNC_I2C))
+	if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA))
 		return -EIO;
 
 	if (pdata) {
@@ -512,15 +512,21 @@ struct open_bucket {
 
 /*
  * We keep multiple buckets open for writes, and try to segregate different
- * write streams for better cache utilization: first we look for a bucket where
- * the last write to it was sequential with the current write, and failing that
- * we look for a bucket that was last used by the same task.
+ * write streams for better cache utilization: first we try to segregate flash
+ * only volume write streams from cached devices, secondly we look for a bucket
+ * where the last write to it was sequential with the current write, and
+ * failing that we look for a bucket that was last used by the same task.
  *
  * The ideas is if you've got multiple tasks pulling data into the cache at the
  * same time, you'll get better cache utilization if you try to segregate their
  * data and preserve locality.
  *
- * For example, say you've starting Firefox at the same time you're copying a
+ * For example, dirty sectors of flash only volume is not reclaimable, if their
+ * dirty sectors mixed with dirty sectors of cached device, such buckets will
+ * be marked as dirty and won't be reclaimed, though the dirty data of cached
+ * device have been written back to backend device.
+ *
+ * And say you've starting Firefox at the same time you're copying a
  * bunch of files. Firefox will likely end up being fairly hot and stay in the
  * cache awhile, but the data you copied might not be; if you wrote all that
  * data to the same buckets it'd get invalidated at the same time.
@@ -537,7 +543,10 @@ static struct open_bucket *pick_data_bucket(struct cache_set *c,
 	struct open_bucket *ret, *ret_task = NULL;
 
 	list_for_each_entry_reverse(ret, &c->data_buckets, list)
-		if (!bkey_cmp(&ret->key, search))
+		if (UUID_FLASH_ONLY(&c->uuids[KEY_INODE(&ret->key)]) !=
+		    UUID_FLASH_ONLY(&c->uuids[KEY_INODE(search)]))
+			continue;
+		else if (!bkey_cmp(&ret->key, search))
 			goto found;
 		else if (ret->last_write_point == write_point)
 			ret_task = ret;
@@ -892,6 +892,12 @@ static void cached_dev_detach_finish(struct work_struct *w)
 
 	mutex_lock(&bch_register_lock);
 
+	cancel_delayed_work_sync(&dc->writeback_rate_update);
+	if (!IS_ERR_OR_NULL(dc->writeback_thread)) {
+		kthread_stop(dc->writeback_thread);
+		dc->writeback_thread = NULL;
+	}
+
 	memset(&dc->sb.set_uuid, 0, 16);
 	SET_BDEV_STATE(&dc->sb, BDEV_STATE_NONE);
@@ -1122,8 +1122,10 @@ static int add_new_disk(struct mddev *mddev, struct md_rdev *rdev)
 	cmsg.raid_slot = cpu_to_le32(rdev->desc_nr);
 	lock_comm(cinfo);
 	ret = __sendmsg(cinfo, &cmsg);
-	if (ret)
+	if (ret) {
+		unlock_comm(cinfo);
 		return ret;
+	}
 	cinfo->no_new_dev_lockres->flags |= DLM_LKF_NOQUEUE;
 	ret = dlm_lock_sync(cinfo->no_new_dev_lockres, DLM_LOCK_EX);
 	cinfo->no_new_dev_lockres->flags &= ~DLM_LKF_NOQUEUE;
@@ -110,8 +110,7 @@ static inline void unlock_device_hash_lock(struct r5conf *conf, int hash)
 static inline void lock_all_device_hash_locks_irq(struct r5conf *conf)
 {
 	int i;
-	local_irq_disable();
-	spin_lock(conf->hash_locks);
+	spin_lock_irq(conf->hash_locks);
 	for (i = 1; i < NR_STRIPE_HASH_LOCKS; i++)
 		spin_lock_nest_lock(conf->hash_locks + i, conf->hash_locks);
 	spin_lock(&conf->device_lock);
@@ -121,9 +120,9 @@ static inline void unlock_all_device_hash_locks_irq(struct r5conf *conf)
 {
 	int i;
 	spin_unlock(&conf->device_lock);
-	for (i = NR_STRIPE_HASH_LOCKS; i; i--)
-		spin_unlock(conf->hash_locks + i - 1);
-	local_irq_enable();
+	for (i = NR_STRIPE_HASH_LOCKS - 1; i; i--)
+		spin_unlock(conf->hash_locks + i);
+	spin_unlock_irq(conf->hash_locks);
 }
 
 /* bio's attached to a stripe+device for I/O are linked together in bi_sector
@@ -732,12 +731,11 @@ static bool is_full_stripe_write(struct stripe_head *sh)
 
 static void lock_two_stripes(struct stripe_head *sh1, struct stripe_head *sh2)
 {
-	local_irq_disable();
 	if (sh1 > sh2) {
-		spin_lock(&sh2->stripe_lock);
+		spin_lock_irq(&sh2->stripe_lock);
 		spin_lock_nested(&sh1->stripe_lock, 1);
 	} else {
-		spin_lock(&sh1->stripe_lock);
+		spin_lock_irq(&sh1->stripe_lock);
 		spin_lock_nested(&sh2->stripe_lock, 1);
 	}
 }
@@ -745,8 +743,7 @@ static void lock_two_stripes(struct stripe_head *sh1, struct stripe_head *sh2)
 static void unlock_two_stripes(struct stripe_head *sh1, struct stripe_head *sh2)
 {
 	spin_unlock(&sh1->stripe_lock);
-	spin_unlock(&sh2->stripe_lock);
-	local_irq_enable();
+	spin_unlock_irq(&sh2->stripe_lock);
 }
 
 /* Only freshly new full stripe normal write stripe can be added to a batch list */
@@ -420,11 +420,13 @@ static void cx25840_initialize(struct i2c_client *client)
 	INIT_WORK(&state->fw_work, cx25840_work_handler);
 	init_waitqueue_head(&state->fw_wait);
 	q = create_singlethread_workqueue("cx25840_fw");
-	prepare_to_wait(&state->fw_wait, &wait, TASK_UNINTERRUPTIBLE);
-	queue_work(q, &state->fw_work);
-	schedule();
-	finish_wait(&state->fw_wait, &wait);
-	destroy_workqueue(q);
+	if (q) {
+		prepare_to_wait(&state->fw_wait, &wait, TASK_UNINTERRUPTIBLE);
+		queue_work(q, &state->fw_work);
+		schedule();
+		finish_wait(&state->fw_wait, &wait);
+		destroy_workqueue(q);
+	}
 
 	/* 6. */
 	cx25840_write(client, 0x115, 0x8c);
@@ -634,11 +636,13 @@ static void cx23885_initialize(struct i2c_client *client)
 	INIT_WORK(&state->fw_work, cx25840_work_handler);
 	init_waitqueue_head(&state->fw_wait);
 	q = create_singlethread_workqueue("cx25840_fw");
-	prepare_to_wait(&state->fw_wait, &wait, TASK_UNINTERRUPTIBLE);
-	queue_work(q, &state->fw_work);
-	schedule();
-	finish_wait(&state->fw_wait, &wait);
-	destroy_workqueue(q);
+	if (q) {
+		prepare_to_wait(&state->fw_wait, &wait, TASK_UNINTERRUPTIBLE);
+		queue_work(q, &state->fw_work);
+		schedule();
+		finish_wait(&state->fw_wait, &wait);
+		destroy_workqueue(q);
+	}
 
 	/* Call the cx23888 specific std setup func, we no longer rely on
 	 * the generic cx24840 func.
@@ -752,11 +756,13 @@ static void cx231xx_initialize(struct i2c_client *client)
 	INIT_WORK(&state->fw_work, cx25840_work_handler);
 	init_waitqueue_head(&state->fw_wait);
 	q = create_singlethread_workqueue("cx25840_fw");
-	prepare_to_wait(&state->fw_wait, &wait, TASK_UNINTERRUPTIBLE);
-	queue_work(q, &state->fw_work);
-	schedule();
-	finish_wait(&state->fw_wait, &wait);
-	destroy_workqueue(q);
+	if (q) {
+		prepare_to_wait(&state->fw_wait, &wait, TASK_UNINTERRUPTIBLE);
+		queue_work(q, &state->fw_work);
+		schedule();
+		finish_wait(&state->fw_wait, &wait);
+		destroy_workqueue(q);
+	}
 
 	cx25840_std_setup(client);