Merge 5.4.165 into android11-5.4-lts
Changes in 5.4.165
serial: tegra: Change lower tolerance baud rate limit for tegra20 and tegra30
ntfs: fix ntfs_test_inode and ntfs_init_locked_inode function type
HID: quirks: Add quirk for the Microsoft Surface 3 type-cover
HID: google: add eel USB id
HID: add hid_is_usb() function to make it simpler for USB detection
HID: add USB_HID dependancy to hid-prodikeys
HID: add USB_HID dependancy to hid-chicony
HID: add USB_HID dependancy on some USB HID drivers
HID: bigbenff: prevent null pointer dereference
HID: wacom: fix problems when device is not a valid USB device
HID: check for valid USB device for many HID drivers
can: kvaser_usb: get CAN clock frequency from device
can: kvaser_pciefd: kvaser_pciefd_rx_error_frame(): increase correct stats->{rx,tx}_errors counter
can: sja1000: fix use after free in ems_pcmcia_add_card()
nfc: fix potential NULL pointer deref in nfc_genl_dump_ses_done
selftests: netfilter: add a vrf+conntrack testcase
vrf: don't run conntrack on vrf with !dflt qdisc
bpf: Fix the off-by-two error in range markings
ice: ignore dropped packets during init
bonding: make tx_rebalance_counter an atomic
nfp: Fix memory leak in nfp_cpp_area_cache_add()
seg6: fix the iif in the IPv6 socket control block
udp: using datalen to cap max gso segments
iavf: restore MSI state on reset
iavf: Fix reporting when setting descriptor count
IB/hfi1: Correct guard on eager buffer deallocation
mm: bdi: initialize bdi_min_ratio when bdi is unregistered
ALSA: ctl: Fix copy of updated id with element read/write
ALSA: hda/realtek - Add headset Mic support for Lenovo ALC897 platform
ALSA: pcm: oss: Fix negative period/buffer sizes
ALSA: pcm: oss: Limit the period size to 16MB
ALSA: pcm: oss: Handle missing errors in snd_pcm_oss_change_params*()
btrfs: clear extent buffer uptodate when we fail to write it
btrfs: replace the BUG_ON in btrfs_del_root_ref with proper error handling
nfsd: Fix nsfd startup race (again)
tracefs: Have new files inherit the ownership of their parent
clk: qcom: regmap-mux: fix parent clock lookup
drm/syncobj: Deal with signalled fences in drm_syncobj_find_fence.
can: pch_can: pch_can_rx_normal: fix use after free
can: m_can: Disable and ignore ELO interrupt
x86/sme: Explicitly map new EFI memmap table as encrypted
libata: add horkage for ASMedia 1092
wait: add wake_up_pollfree()
binder: use wake_up_pollfree()
signalfd: use wake_up_pollfree()
aio: keep poll requests on waitqueue until completed
aio: fix use-after-free due to missing POLLFREE handling
tracefs: Set all files to the same group ownership as the mount option
block: fix ioprio_get(IOPRIO_WHO_PGRP) vs setuid(2)
qede: validate non LSO skb length
ASoC: qdsp6: q6routing: Fix return value from msm_routing_put_audio_mixer
i40e: Fix failed opcode appearing if handling messages from VF
i40e: Fix pre-set max number of queues for VF
mtd: rawnand: fsmc: Take instruction delay into account
mtd: rawnand: fsmc: Fix timing computation
dt-bindings: net: Reintroduce PHY no lane swap binding
tools build: Remove needless libpython-version feature check that breaks test-all fast path
net: cdc_ncm: Allow for dwNtbOutMaxSize to be unset or zero
net: altera: set a couple error code in probe()
net: fec: only clear interrupt of handling queue in fec_enet_rx_queue()
net, neigh: clear whole pneigh_entry at alloc time
net/qla3xxx: fix an error code in ql_adapter_up()
selftests/fib_tests: Rework fib_rp_filter_test()
USB: gadget: detect too-big endpoint 0 requests
USB: gadget: zero allocate endpoint 0 buffers
usb: core: config: fix validation of wMaxPacketValue entries
xhci: Remove CONFIG_USB_DEFAULT_PERSIST to prevent xHCI from runtime suspending
usb: core: config: using bit mask instead of individual bits
xhci: avoid race between disable slot command and host runtime suspend
iio: trigger: Fix reference counting
iio: trigger: stm32-timer: fix MODULE_ALIAS
iio: stk3310: Don't return error code in interrupt handler
iio: mma8452: Fix trigger reference couting
iio: ltr501: Don't return error code in trigger handler
iio: kxsd9: Don't return error code in trigger handler
iio: itg3200: Call iio_trigger_notify_done() on error
iio: dln2-adc: Fix lockdep complaint
iio: dln2: Check return value of devm_iio_trigger_register()
iio: at91-sama5d2: Fix incorrect sign extension
iio: adc: axp20x_adc: fix charging current reporting on AXP22x
iio: ad7768-1: Call iio_trigger_notify_done() on error
iio: accel: kxcjk-1013: Fix possible memory leak in probe and remove
irqchip/armada-370-xp: Fix return value of armada_370_xp_msi_alloc()
irqchip/armada-370-xp: Fix support for Multi-MSI interrupts
irqchip/irq-gic-v3-its.c: Force synchronisation when issuing INVALL
irqchip: nvic: Fix offset for Interrupt Priority Offsets
misc: fastrpc: fix improper packet size calculation
bpf: Add selftests to cover packet access corner cases
Linux 5.4.165
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I756efb854dc947509cf712a292eab0bf72f32694
@@ -87,6 +87,14 @@ properties:
       compensate for the board being designed with the lanes
       swapped.
 
+  enet-phy-lane-no-swap:
+    $ref: /schemas/types.yaml#/definitions/flag
+    description:
+      If set, indicates that PHY will disable swap of the
+      TX/RX lanes. This property allows the PHY to work correcly after
+      e.g. wrong bootstrap configuration caused by issues in PCB
+      layout design.
+
   eee-broken-100tx:
     $ref: /schemas/types.yaml#/definitions/flag
     description:
 Makefile | 2 +-

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 4
-SUBLEVEL = 164
+SUBLEVEL = 165
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus
@@ -1992,6 +1992,7 @@ config EFI
	depends on ACPI
	select UCS2_STRING
	select EFI_RUNTIME_WRAPPERS
+	select ARCH_USE_MEMREMAP_PROT
	---help---
	  This enables the kernel to use EFI runtime services that are
	  available (such as the EFI variable services).
@@ -279,7 +279,8 @@ void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size)
 		return;
 	}

-	new = early_memremap(new_phys, new_size);
+	new = early_memremap_prot(new_phys, new_size,
+				  pgprot_val(pgprot_encrypted(FIXMAP_PAGE_NORMAL)));
 	if (!new) {
 		pr_err("Failed to map new boot services memmap\n");
 		return;
@@ -207,6 +207,7 @@ SYSCALL_DEFINE2(ioprio_get, int, which, int, who)
 			pgrp = task_pgrp(current);
 		else
 			pgrp = find_vpid(who);
+		read_lock(&tasklist_lock);
 		do_each_pid_thread(pgrp, PIDTYPE_PGID, p) {
 			tmpio = get_task_ioprio(p);
 			if (tmpio < 0)
@@ -216,6 +217,8 @@ SYSCALL_DEFINE2(ioprio_get, int, which, int, who)
 			else
 				ret = ioprio_best(ret, tmpio);
 		} while_each_pid_thread(pgrp, PIDTYPE_PGID, p);
+		read_unlock(&tasklist_lock);
+
 		break;
 	case IOPRIO_WHO_USER:
 		uid = make_kuid(current_user_ns(), who);
@@ -4544,23 +4544,20 @@ static int binder_thread_release(struct binder_proc *proc,
 	__release(&t->lock);

 	/*
-	 * If this thread used poll, make sure we remove the waitqueue
-	 * from any epoll data structures holding it with POLLFREE.
-	 * waitqueue_active() is safe to use here because we're holding
-	 * the inner lock.
+	 * If this thread used poll, make sure we remove the waitqueue from any
+	 * poll data structures holding it.
 	 */
-	if ((thread->looper & BINDER_LOOPER_STATE_POLL) &&
-	    waitqueue_active(&thread->wait)) {
-		wake_up_poll(&thread->wait, EPOLLHUP | POLLFREE);
-	}
+	if (thread->looper & BINDER_LOOPER_STATE_POLL)
+		wake_up_pollfree(&thread->wait);

 	binder_inner_proc_unlock(thread->proc);

 	/*
-	 * This is needed to avoid races between wake_up_poll() above and
-	 * and ep_remove_waitqueue() called for other reasons (eg the epoll file
-	 * descriptor being closed); ep_remove_waitqueue() holds an RCU read
-	 * lock, so we can be sure it's done after calling synchronize_rcu().
+	 * This is needed to avoid races between wake_up_pollfree() above and
+	 * someone else removing the last entry from the queue for other reasons
+	 * (e.g. ep_remove_wait_queue() being called due to an epoll file
+	 * descriptor being closed). Such other users hold an RCU read lock, so
+	 * we can be sure they're done after we call synchronize_rcu().
 	 */
 	if (thread->looper & BINDER_LOOPER_STATE_POLL)
 		synchronize_rcu();
@@ -4437,6 +4437,8 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
	{ "VRFDFC22048UCHC-TE*", NULL,	ATA_HORKAGE_NODMA },
	/* Odd clown on sil3726/4726 PMPs */
	{ "Config  Disk",	NULL,	ATA_HORKAGE_DISABLE },
+	/* Similar story with ASMedia 1092 */
+	{ "ASMT109x- Config",	NULL,	ATA_HORKAGE_DISABLE },

	/* Weird ATAPI devices */
	{ "TORiSAN DVD-ROM DRD-N216", NULL,	ATA_HORKAGE_MAX_SEC_128 },
@@ -28,7 +28,7 @@ static u8 mux_get_parent(struct clk_hw *hw)
 	val &= mask;

 	if (mux->parent_map)
-		return qcom_find_src_index(hw, mux->parent_map, val);
+		return qcom_find_cfg_index(hw, mux->parent_map, val);

 	return val;
 }
@@ -69,6 +69,18 @@ int qcom_find_src_index(struct clk_hw *hw, const struct parent_map *map, u8 src)
 }
 EXPORT_SYMBOL_GPL(qcom_find_src_index);

+int qcom_find_cfg_index(struct clk_hw *hw, const struct parent_map *map, u8 cfg)
+{
+	int i, num_parents = clk_hw_get_num_parents(hw);
+
+	for (i = 0; i < num_parents; i++)
+		if (cfg == map[i].cfg)
+			return i;
+
+	return -ENOENT;
+}
+EXPORT_SYMBOL_GPL(qcom_find_cfg_index);
+
 struct regmap *
 qcom_cc_map(struct platform_device *pdev, const struct qcom_cc_desc *desc)
 {
@@ -49,6 +49,8 @@ extern void
 qcom_pll_set_fsm_mode(struct regmap *m, u32 reg, u8 bias_count, u8 lock_count);
 extern int qcom_find_src_index(struct clk_hw *hw, const struct parent_map *map,
			       u8 src);
+extern int qcom_find_cfg_index(struct clk_hw *hw, const struct parent_map *map,
+			       u8 cfg);

 extern int qcom_cc_register_board_clk(struct device *dev, const char *path,
				      const char *name, unsigned long rate);
@@ -329,8 +329,17 @@ int drm_syncobj_find_fence(struct drm_file *file_private,

 	if (*fence) {
 		ret = dma_fence_chain_find_seqno(fence, point);
-		if (!ret)
+		if (!ret) {
+			/* If the requested seqno is already signaled
+			 * drm_syncobj_find_fence may return a NULL
+			 * fence. To make sure the recipient gets
+			 * signalled, use a new fence instead.
+			 */
+			if (!*fence)
+				*fence = dma_fence_get_stub();
+
 			goto out;
+		}
 		dma_fence_put(*fence);
 	} else {
 		ret = -EINVAL;
@@ -206,14 +206,14 @@ config HID_CHERRY

 config HID_CHICONY
	tristate "Chicony devices"
-	depends on HID
+	depends on USB_HID
	default !EXPERT
	---help---
	Support for Chicony Tactical pad and special keys on Chicony keyboards.

 config HID_CORSAIR
	tristate "Corsair devices"
-	depends on HID && USB && LEDS_CLASS
+	depends on USB_HID && LEDS_CLASS
	---help---
	Support for Corsair devices that are not fully compliant with the
	HID standard.
@@ -244,7 +244,7 @@ config HID_MACALLY

 config HID_PRODIKEYS
	tristate "Prodikeys PC-MIDI Keyboard support"
-	depends on HID && SND
+	depends on USB_HID && SND
	select SND_RAWMIDI
	---help---
	Support for Prodikeys PC-MIDI Keyboard device support.
@@ -524,7 +524,7 @@ config HID_LENOVO

 config HID_LOGITECH
	tristate "Logitech devices"
-	depends on HID
+	depends on USB_HID
	default !EXPERT
	---help---
	Support for Logitech devices that are not fully compliant with HID standard.
@@ -900,7 +900,7 @@ config HID_SAITEK

 config HID_SAMSUNG
	tristate "Samsung InfraRed remote control or keyboards"
-	depends on HID
+	depends on USB_HID
	---help---
	Support for Samsung InfraRed remote control or keyboards.
@@ -849,7 +849,7 @@ static int asus_probe(struct hid_device *hdev, const struct hid_device_id *id)
 	if (drvdata->quirks & QUIRK_IS_MULTITOUCH)
 		drvdata->tp = &asus_i2c_tp;

-	if (drvdata->quirks & QUIRK_T100_KEYBOARD) {
+	if ((drvdata->quirks & QUIRK_T100_KEYBOARD) && hid_is_usb(hdev)) {
 		struct usb_interface *intf = to_usb_interface(hdev->dev.parent);

 		if (intf->altsetting->desc.bInterfaceNumber == T100_TPAD_INTF) {
@@ -191,7 +191,7 @@ static void bigben_worker(struct work_struct *work)
 		struct bigben_device, worker);
 	struct hid_field *report_field = bigben->report->field[0];

-	if (bigben->removed)
+	if (bigben->removed || !report_field)
 		return;

 	if (bigben->work_led) {
@@ -58,8 +58,12 @@ static int ch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
 static __u8 *ch_switch12_report_fixup(struct hid_device *hdev, __u8 *rdesc,
 		unsigned int *rsize)
 {
-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
+	struct usb_interface *intf;
+
+	if (!hid_is_usb(hdev))
+		return rdesc;

+	intf = to_usb_interface(hdev->dev.parent);
 	if (intf->cur_altsetting->desc.bInterfaceNumber == 1) {
 		/* Change usage maximum and logical maximum from 0x7fff to
 		 * 0x2fff, so they don't exceed HID_MAX_USAGES */
@@ -553,7 +553,12 @@ static int corsair_probe(struct hid_device *dev, const struct hid_device_id *id)
 	int ret;
 	unsigned long quirks = id->driver_data;
 	struct corsair_drvdata *drvdata;
-	struct usb_interface *usbif = to_usb_interface(dev->dev.parent);
+	struct usb_interface *usbif;
+
+	if (!hid_is_usb(dev))
+		return -EINVAL;
+
+	usbif = to_usb_interface(dev->dev.parent);

 	drvdata = devm_kzalloc(&dev->dev, sizeof(struct corsair_drvdata),
			       GFP_KERNEL);
@@ -50,7 +50,7 @@ struct elan_drvdata {

 static int is_not_elan_touchpad(struct hid_device *hdev)
 {
-	if (hdev->bus == BUS_USB) {
+	if (hid_is_usb(hdev)) {
 		struct usb_interface *intf = to_usb_interface(hdev->dev.parent);

 		return (intf->altsetting->desc.bInterfaceNumber !=
@@ -229,6 +229,9 @@ static int elo_probe(struct hid_device *hdev, const struct hid_device_id *id)
 	struct elo_priv *priv;
 	int ret;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv)
 		return -ENOMEM;
@@ -469,6 +469,8 @@ static int hammer_probe(struct hid_device *hdev,
 static const struct hid_device_id hammer_devices[] = {
	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_DON) },
+	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_EEL) },
	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) },
	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
@@ -140,12 +140,17 @@ static int holtek_kbd_input_event(struct input_dev *dev, unsigned int type,
 static int holtek_kbd_probe(struct hid_device *hdev,
 		const struct hid_device_id *id)
 {
-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
-	int ret = hid_parse(hdev);
+	struct usb_interface *intf;
+	int ret;
+
+	if (!hid_is_usb(hdev))
+		return -EINVAL;

+	ret = hid_parse(hdev);
 	if (!ret)
 		ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT);

+	intf = to_usb_interface(hdev->dev.parent);
 	if (!ret && intf->cur_altsetting->desc.bInterfaceNumber == 1) {
 		struct hid_input *hidinput;
 		list_for_each_entry(hidinput, &hdev->inputs, list) {
@@ -62,6 +62,14 @@ static __u8 *holtek_mouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,
 	return rdesc;
 }

+static int holtek_mouse_probe(struct hid_device *hdev,
+			      const struct hid_device_id *id)
+{
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+	return 0;
+}
+
 static const struct hid_device_id holtek_mouse_devices[] = {
	{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT,
			USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) },
@@ -83,6 +91,7 @@ static struct hid_driver holtek_mouse_driver = {
	.name = "holtek_mouse",
	.id_table = holtek_mouse_devices,
	.report_fixup = holtek_mouse_report_fixup,
+	.probe = holtek_mouse_probe,
 };

 module_hid_driver(holtek_mouse_driver);
@@ -489,6 +489,7 @@
 #define USB_DEVICE_ID_GOOGLE_MAGNEMITE	0x503d
 #define USB_DEVICE_ID_GOOGLE_MOONBALL	0x5044
 #define USB_DEVICE_ID_GOOGLE_DON	0x5050
+#define USB_DEVICE_ID_GOOGLE_EEL	0x5057

 #define USB_VENDOR_ID_GOTOP		0x08f2
 #define USB_DEVICE_ID_SUPER_Q2		0x007f
@@ -858,6 +859,7 @@
 #define USB_DEVICE_ID_MS_TOUCH_COVER_2	0x07a7
 #define USB_DEVICE_ID_MS_TYPE_COVER_2	0x07a9
 #define USB_DEVICE_ID_MS_POWER_COVER	0x07da
+#define USB_DEVICE_ID_MS_SURFACE3_COVER	0x07de
 #define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER	0x02fd
 #define USB_DEVICE_ID_MS_PIXART_MOUSE	0x00cb
 #define SPI_DEVICE_ID_MS_SURFACE_D6_0	0x0c1d
@@ -769,12 +769,18 @@ static int lg_raw_event(struct hid_device *hdev, struct hid_report *report,

 static int lg_probe(struct hid_device *hdev, const struct hid_device_id *id)
 {
-	struct usb_interface *iface = to_usb_interface(hdev->dev.parent);
-	__u8 iface_num = iface->cur_altsetting->desc.bInterfaceNumber;
+	struct usb_interface *iface;
+	__u8 iface_num;
 	unsigned int connect_mask = HID_CONNECT_DEFAULT;
 	struct lg_drv_data *drv_data;
 	int ret;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
+	iface = to_usb_interface(hdev->dev.parent);
+	iface_num = iface->cur_altsetting->desc.bInterfaceNumber;
+
 	/* G29 only work with the 1st interface */
 	if ((hdev->product == USB_DEVICE_ID_LOGITECH_G29_WHEEL) &&
 	    (iface_num != 0)) {
@@ -1686,7 +1686,7 @@ static int logi_dj_probe(struct hid_device *hdev,
 	case recvr_type_27mhz:		no_dj_interfaces = 2; break;
 	case recvr_type_bluetooth:	no_dj_interfaces = 2; break;
 	}
-	if (hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
+	if (hid_is_usb(hdev)) {
 		intf = to_usb_interface(hdev->dev.parent);
 		if (intf && intf->altsetting->desc.bInterfaceNumber >=
							no_dj_interfaces) {
@@ -798,12 +798,18 @@ static int pk_raw_event(struct hid_device *hdev, struct hid_report *report,
 static int pk_probe(struct hid_device *hdev, const struct hid_device_id *id)
 {
 	int ret;
-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
-	unsigned short ifnum = intf->cur_altsetting->desc.bInterfaceNumber;
+	struct usb_interface *intf;
+	unsigned short ifnum;
 	unsigned long quirks = id->driver_data;
 	struct pk_device *pk;
 	struct pcmidi_snd *pm = NULL;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
+	intf = to_usb_interface(hdev->dev.parent);
+	ifnum = intf->cur_altsetting->desc.bInterfaceNumber;
+
 	pk = kzalloc(sizeof(*pk), GFP_KERNEL);
 	if (pk == NULL) {
 		hid_err(hdev, "can't alloc descriptor\n");
@@ -124,6 +124,7 @@ static const struct hid_device_id hid_quirks[] = {
	{ HID_USB_DEVICE(USB_VENDOR_ID_MCS, USB_DEVICE_ID_MCS_GAMEPADBLOCK), HID_QUIRK_MULTI_INPUT },
	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PIXART_MOUSE), HID_QUIRK_ALWAYS_POLL },
	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER), HID_QUIRK_NO_INIT_REPORTS },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE3_COVER), HID_QUIRK_NO_INIT_REPORTS },
	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE_PRO_2), HID_QUIRK_NO_INIT_REPORTS },
	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TOUCH_COVER_2), HID_QUIRK_NO_INIT_REPORTS },
	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_2), HID_QUIRK_NO_INIT_REPORTS },
@@ -344,6 +344,9 @@ static int arvo_probe(struct hid_device *hdev,
 {
 	int retval;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
@@ -324,6 +324,9 @@ static int isku_probe(struct hid_device *hdev,
 {
 	int retval;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
@@ -749,6 +749,9 @@ static int kone_probe(struct hid_device *hdev, const struct hid_device_id *id)
 {
 	int retval;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
@@ -431,6 +431,9 @@ static int koneplus_probe(struct hid_device *hdev,
 {
 	int retval;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
@@ -133,6 +133,9 @@ static int konepure_probe(struct hid_device *hdev,
 {
 	int retval;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
@@ -501,6 +501,9 @@ static int kovaplus_probe(struct hid_device *hdev,
 {
 	int retval;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
@@ -160,6 +160,9 @@ static int lua_probe(struct hid_device *hdev,
 {
 	int retval;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
@@ -449,6 +449,9 @@ static int pyra_probe(struct hid_device *hdev, const struct hid_device_id *id)
 {
 	int retval;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
@@ -141,6 +141,9 @@ static int ryos_probe(struct hid_device *hdev,
 {
 	int retval;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
@@ -113,6 +113,9 @@ static int savu_probe(struct hid_device *hdev,
 {
 	int retval;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
@@ -152,6 +152,9 @@ static int samsung_probe(struct hid_device *hdev,
 	int ret;
 	unsigned int cmask = HID_CONNECT_DEFAULT;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	ret = hid_parse(hdev);
 	if (ret) {
 		hid_err(hdev, "parse failed\n");
@@ -290,7 +290,7 @@ static int u2fzero_probe(struct hid_device *hdev,
 	unsigned int minor;
 	int ret;

-	if (!hid_is_using_ll_driver(hdev, &usb_hid_driver))
+	if (!hid_is_usb(hdev))
 		return -EINVAL;

 	dev = devm_kzalloc(&hdev->dev, sizeof(*dev), GFP_KERNEL);
@@ -164,6 +164,9 @@ static int uclogic_probe(struct hid_device *hdev,
 	struct uclogic_drvdata *drvdata = NULL;
 	bool params_initialized = false;

+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	/*
 	 * libinput requires the pad interface to be on a different node
 	 * than the pen, so use QUIRK_MULTI_INPUT for all tablets.
@@ -841,8 +841,7 @@ int uclogic_params_init(struct uclogic_params *params,
 	struct uclogic_params p = {0, };

 	/* Check arguments */
-	if (params == NULL || hdev == NULL ||
-	    !hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
+	if (params == NULL || hdev == NULL || !hid_is_usb(hdev)) {
 		rc = -EINVAL;
 		goto cleanup;
 	}
@@ -726,7 +726,7 @@ static void wacom_retrieve_hid_descriptor(struct hid_device *hdev,
 	 * Skip the query for this type and modify defaults based on
 	 * interface number.
 	 */
-	if (features->type == WIRELESS) {
+	if (features->type == WIRELESS && intf) {
 		if (intf->cur_altsetting->desc.bInterfaceNumber == 0)
 			features->device_type = WACOM_DEVICETYPE_WL_MONITOR;
 		else
@@ -2217,7 +2217,7 @@ static void wacom_update_name(struct wacom *wacom, const char *suffix)
 	if ((features->type == HID_GENERIC) && !strcmp("Wacom HID", features->name)) {
 		char *product_name = wacom->hdev->name;

-		if (hid_is_using_ll_driver(wacom->hdev, &usb_hid_driver)) {
+		if (hid_is_usb(wacom->hdev)) {
 			struct usb_interface *intf = to_usb_interface(wacom->hdev->dev.parent);
 			struct usb_device *dev = interface_to_usbdev(intf);
 			product_name = dev->product;
@@ -2448,6 +2448,9 @@ static void wacom_wireless_work(struct work_struct *work)

 	wacom_destroy_battery(wacom);

+	if (!usbdev)
+		return;
+
 	/* Stylus interface */
 	hdev1 = usb_get_intfdata(usbdev->config->interface[1]);
 	wacom1 = hid_get_drvdata(hdev1);
@@ -2727,8 +2730,6 @@ static void wacom_mode_change_work(struct work_struct *work)
 static int wacom_probe(struct hid_device *hdev,
 		const struct hid_device_id *id)
 {
-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
-	struct usb_device *dev = interface_to_usbdev(intf);
 	struct wacom *wacom;
 	struct wacom_wac *wacom_wac;
 	struct wacom_features *features;
@@ -2763,8 +2764,14 @@ static int wacom_probe(struct hid_device *hdev,
 	wacom_wac->hid_data.inputmode = -1;
 	wacom_wac->mode_report = -1;

-	wacom->usbdev = dev;
-	wacom->intf = intf;
+	if (hid_is_usb(hdev)) {
+		struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
+		struct usb_device *dev = interface_to_usbdev(intf);
+
+		wacom->usbdev = dev;
+		wacom->intf = intf;
+	}
+
 	mutex_init(&wacom->lock);
 	INIT_DELAYED_WORK(&wacom->init_work, wacom_init_work);
 	INIT_WORK(&wacom->wireless_work, wacom_wireless_work);
@@ -1415,8 +1415,7 @@ static int kxcjk1013_probe(struct i2c_client *client,
 	return 0;

 err_buffer_cleanup:
-	if (data->dready_trig)
-		iio_triggered_buffer_cleanup(indio_dev);
+	iio_triggered_buffer_cleanup(indio_dev);
 err_trigger_unregister:
 	if (data->dready_trig)
 		iio_trigger_unregister(data->dready_trig);
@@ -1439,8 +1438,8 @@ static int kxcjk1013_remove(struct i2c_client *client)
 	pm_runtime_set_suspended(&client->dev);
 	pm_runtime_put_noidle(&client->dev);

+	iio_triggered_buffer_cleanup(indio_dev);
 	if (data->dready_trig) {
-		iio_triggered_buffer_cleanup(indio_dev);
 		iio_trigger_unregister(data->dready_trig);
 		iio_trigger_unregister(data->motion_trig);
 	}
@@ -224,14 +224,14 @@ static irqreturn_t kxsd9_trigger_handler(int irq, void *p)
			       hw_values.chan,
			       sizeof(hw_values.chan));
 	if (ret) {
-		dev_err(st->dev,
-			"error reading data\n");
-		return ret;
+		dev_err(st->dev, "error reading data: %d\n", ret);
+		goto out;
 	}

 	iio_push_to_buffers_with_timestamp(indio_dev,
					   &hw_values,
					   iio_get_time_ns(indio_dev));
+out:
 	iio_trigger_notify_done(indio_dev->trig);

 	return IRQ_HANDLED;
@@ -1473,7 +1473,7 @@ static int mma8452_trigger_setup(struct iio_dev *indio_dev)
 	if (ret)
 		return ret;

-	indio_dev->trig = trig;
+	indio_dev->trig = iio_trigger_get(trig);

 	return 0;
 }
@@ -470,8 +470,8 @@ static irqreturn_t ad7768_trigger_handler(int irq, void *p)
 	iio_push_to_buffers_with_timestamp(indio_dev, &st->data.scan,
					   iio_get_time_ns(indio_dev));

-	iio_trigger_notify_done(indio_dev->trig);
 err_unlock:
+	iio_trigger_notify_done(indio_dev->trig);
 	mutex_unlock(&st->lock);

 	return IRQ_HANDLED;
@@ -1369,7 +1369,8 @@ static int at91_adc_read_info_raw(struct iio_dev *indio_dev,
 		*val = st->conversion_value;
 		ret = at91_adc_adjust_val_osr(st, val);
 		if (chan->scan_type.sign == 's')
-			*val = sign_extend32(*val, 11);
+			*val = sign_extend32(*val,
+					     chan->scan_type.realbits - 1);
 		st->conversion_done = false;
 	}
@@ -251,19 +251,8 @@ static int axp22x_adc_raw(struct iio_dev *indio_dev,
			  struct iio_chan_spec const *chan, int *val)
 {
 	struct axp20x_adc_iio *info = iio_priv(indio_dev);
-	int size;
-
-	/*
-	 * N.B.: Unlike the Chinese datasheets tell, the charging current is
-	 * stored on 12 bits, not 13 bits. Only discharging current is on 13
-	 * bits.
-	 */
-	if (chan->type == IIO_CURRENT && chan->channel == AXP22X_BATT_DISCHRG_I)
-		size = 13;
-	else
-		size = 12;
-
-	*val = axp20x_read_variable_width(info->regmap, chan->address, size);
+	*val = axp20x_read_variable_width(info->regmap, chan->address, 12);
 	if (*val < 0)
 		return *val;
@@ -386,9 +375,8 @@ static int axp22x_adc_scale(struct iio_chan_spec const *chan, int *val,
 		return IIO_VAL_INT_PLUS_MICRO;

 	case IIO_CURRENT:
-		*val = 0;
-		*val2 = 500000;
-		return IIO_VAL_INT_PLUS_MICRO;
+		*val = 1;
+		return IIO_VAL_INT;

 	case IIO_TEMP:
 		*val = 100;
@@ -248,7 +248,6 @@ static int dln2_adc_set_chan_period(struct dln2_adc *dln2,
 static int dln2_adc_read(struct dln2_adc *dln2, unsigned int channel)
 {
 	int ret, i;
-	struct iio_dev *indio_dev = platform_get_drvdata(dln2->pdev);
 	u16 conflict;
 	__le16 value;
 	int olen = sizeof(value);
@@ -257,13 +256,9 @@ static int dln2_adc_read(struct dln2_adc *dln2, unsigned int channel)
 		.chan = channel,
 	};

-	ret = iio_device_claim_direct_mode(indio_dev);
-	if (ret < 0)
-		return ret;
-
 	ret = dln2_adc_set_chan_enabled(dln2, channel, true);
 	if (ret < 0)
-		goto release_direct;
+		return ret;

 	ret = dln2_adc_set_port_enabled(dln2, true, &conflict);
 	if (ret < 0) {
@@ -300,8 +295,6 @@ disable_port:
 	dln2_adc_set_port_enabled(dln2, false, NULL);
 disable_chan:
 	dln2_adc_set_chan_enabled(dln2, channel, false);
-release_direct:
-	iio_device_release_direct_mode(indio_dev);

 	return ret;
 }
@@ -337,10 +330,16 @@ static int dln2_adc_read_raw(struct iio_dev *indio_dev,
|
||||
|
||||
switch (mask) {
|
||||
case IIO_CHAN_INFO_RAW:
|
||||
ret = iio_device_claim_direct_mode(indio_dev);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
mutex_lock(&dln2->mutex);
|
||||
ret = dln2_adc_read(dln2, chan->channel);
|
||||
mutex_unlock(&dln2->mutex);
|
||||
|
||||
iio_device_release_direct_mode(indio_dev);
|
||||
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
@@ -666,7 +665,11 @@ static int dln2_adc_probe(struct platform_device *pdev)
|
||||
return -ENOMEM;
|
||||
}
|
||||
iio_trigger_set_drvdata(dln2->trig, dln2);
|
||||
devm_iio_trigger_register(dev, dln2->trig);
|
||||
ret = devm_iio_trigger_register(dev, dln2->trig);
|
||||
if (ret) {
|
||||
dev_err(dev, "failed to register trigger: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
iio_trigger_set_immutable(indio_dev, dln2->trig);
|
||||
|
||||
ret = devm_iio_triggered_buffer_setup(dev, indio_dev, NULL,
|
||||
|
||||
@@ -61,9 +61,9 @@ static irqreturn_t itg3200_trigger_handler(int irq, void *p)

 	iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);

+error_ret:
 	iio_trigger_notify_done(indio_dev->trig);

-error_ret:
 	return IRQ_HANDLED;
 }

@@ -549,7 +549,6 @@ static struct iio_trigger *viio_trigger_alloc(const char *fmt, va_list vargs)
 		irq_modify_status(trig->subirq_base + i,
 				  IRQ_NOREQUEST | IRQ_NOAUTOEN, IRQ_NOPROBE);
 	}
-	get_device(&trig->dev);

 	return trig;

@@ -1272,7 +1272,7 @@ static irqreturn_t ltr501_trigger_handler(int irq, void *p)
 		ret = regmap_bulk_read(data->regmap, LTR501_ALS_DATA1,
 				       (u8 *)als_buf, sizeof(als_buf));
 		if (ret < 0)
-			return ret;
+			goto done;
 		if (test_bit(0, indio_dev->active_scan_mask))
 			scan.channels[j++] = le16_to_cpu(als_buf[1]);
 		if (test_bit(1, indio_dev->active_scan_mask))

@@ -544,9 +544,8 @@ static irqreturn_t stk3310_irq_event_handler(int irq, void *private)
 	mutex_lock(&data->lock);
 	ret = regmap_field_read(data->reg_flag_nf, &dir);
 	if (ret < 0) {
-		dev_err(&data->client->dev, "register read failed\n");
-		mutex_unlock(&data->lock);
-		return ret;
+		dev_err(&data->client->dev, "register read failed: %d\n", ret);
+		goto out;
 	}
 	event = IIO_UNMOD_EVENT_CODE(IIO_PROXIMITY, 1,
 				     IIO_EV_TYPE_THRESH,
@@ -558,6 +557,7 @@ static irqreturn_t stk3310_irq_event_handler(int irq, void *private)
 	ret = regmap_field_write(data->reg_flag_psint, 0);
 	if (ret < 0)
 		dev_err(&data->client->dev, "failed to reset interrupts\n");
+out:
 	mutex_unlock(&data->lock);

 	return IRQ_HANDLED;

@@ -800,6 +800,6 @@ static struct platform_driver stm32_timer_trigger_driver = {
 };
 module_platform_driver(stm32_timer_trigger_driver);

-MODULE_ALIAS("platform: stm32-timer-trigger");
+MODULE_ALIAS("platform:stm32-timer-trigger");
 MODULE_DESCRIPTION("STMicroelectronics STM32 Timer Trigger driver");
 MODULE_LICENSE("GPL v2");

@@ -1175,7 +1175,7 @@ void hfi1_free_ctxtdata(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd)
 	rcd->egrbufs.rcvtids = NULL;

 	for (e = 0; e < rcd->egrbufs.alloced; e++) {
-		if (rcd->egrbufs.buffers[e].dma)
+		if (rcd->egrbufs.buffers[e].addr)
 			dma_free_coherent(&dd->pcidev->dev,
 					  rcd->egrbufs.buffers[e].len,
 					  rcd->egrbufs.buffers[e].addr,

@@ -232,17 +232,13 @@ static int armada_370_xp_msi_alloc(struct irq_domain *domain, unsigned int virq,
 	int hwirq, i;

 	mutex_lock(&msi_used_lock);

-	hwirq = bitmap_find_next_zero_area(msi_used, PCI_MSI_DOORBELL_NR,
-					   0, nr_irqs, 0);
-	if (hwirq >= PCI_MSI_DOORBELL_NR) {
-		mutex_unlock(&msi_used_lock);
-		return -ENOSPC;
-	}
-
-	bitmap_set(msi_used, hwirq, nr_irqs);
+	hwirq = bitmap_find_free_region(msi_used, PCI_MSI_DOORBELL_NR,
+					order_base_2(nr_irqs));
 	mutex_unlock(&msi_used_lock);

+	if (hwirq < 0)
+		return -ENOSPC;
+
 	for (i = 0; i < nr_irqs; i++) {
 		irq_domain_set_info(domain, virq + i, hwirq + i,
 				    &armada_370_xp_msi_bottom_irq_chip,
@@ -250,7 +246,7 @@ static int armada_370_xp_msi_alloc(struct irq_domain *domain, unsigned int virq,
 				    NULL, NULL);
 	}

-	return hwirq;
+	return 0;
 }

 static void armada_370_xp_msi_free(struct irq_domain *domain,
@@ -259,7 +255,7 @@ static void armada_370_xp_msi_free(struct irq_domain *domain,
 	struct irq_data *d = irq_domain_get_irq_data(domain, virq);

 	mutex_lock(&msi_used_lock);
-	bitmap_clear(msi_used, d->hwirq, nr_irqs);
+	bitmap_release_region(msi_used, d->hwirq, order_base_2(nr_irqs));
 	mutex_unlock(&msi_used_lock);
 }

@@ -574,7 +574,7 @@ static struct its_collection *its_build_invall_cmd(struct its_node *its,

 	its_fixup_cmd(cmd);

-	return NULL;
+	return desc->its_invall_cmd.col;
 }

 static struct its_vpe *its_build_vinvall_cmd(struct its_node *its,

@@ -26,7 +26,7 @@

 #define NVIC_ISER		0x000
 #define NVIC_ICER		0x080
-#define NVIC_IPR		0x300
+#define NVIC_IPR		0x400

 #define NVIC_MAX_BANKS		16
 /*

@@ -693,16 +693,18 @@ static int fastrpc_get_meta_size(struct fastrpc_invoke_ctx *ctx)
 static u64 fastrpc_get_payload_size(struct fastrpc_invoke_ctx *ctx, int metalen)
 {
 	u64 size = 0;
-	int i;
+	int oix;

 	size = ALIGN(metalen, FASTRPC_ALIGN);
-	for (i = 0; i < ctx->nscalars; i++) {
+	for (oix = 0; oix < ctx->nbufs; oix++) {
+		int i = ctx->olaps[oix].raix;
+
 		if (ctx->args[i].fd == 0 || ctx->args[i].fd == -1) {

-			if (ctx->olaps[i].offset == 0)
+			if (ctx->olaps[oix].offset == 0)
 				size = ALIGN(size, FASTRPC_ALIGN);

-			size += (ctx->olaps[i].mend - ctx->olaps[i].mstart);
+			size += (ctx->olaps[oix].mend - ctx->olaps[oix].mstart);
 		}
 	}

@@ -15,6 +15,7 @@

 #include <linux/clk.h>
 #include <linux/completion.h>
+#include <linux/delay.h>
 #include <linux/dmaengine.h>
 #include <linux/dma-direction.h>
 #include <linux/dma-mapping.h>
@@ -93,6 +94,14 @@

 #define FSMC_BUSY_WAIT_TIMEOUT	(1 * HZ)

+/*
+ * According to SPEAr300 Reference Manual (RM0082)
+ *    TOUDEL = 7ns (Output delay from the flip-flops to the board)
+ *    TINDEL = 5ns (Input delay from the board to the flipflop)
+ */
+#define TOUTDEL	7000
+#define TINDEL	5000
+
 struct fsmc_nand_timings {
 	u8 tclr;
 	u8 tar;
@@ -277,7 +286,7 @@ static int fsmc_calc_timings(struct fsmc_nand_data *host,
 {
 	unsigned long hclk = clk_get_rate(host->clk);
 	unsigned long hclkn = NSEC_PER_SEC / hclk;
-	u32 thiz, thold, twait, tset;
+	u32 thiz, thold, twait, tset, twait_min;

 	if (sdrt->tRC_min < 30000)
 		return -EOPNOTSUPP;
@@ -309,13 +318,6 @@ static int fsmc_calc_timings(struct fsmc_nand_data *host,
 	else if (tims->thold > FSMC_THOLD_MASK)
 		tims->thold = FSMC_THOLD_MASK;

-	twait = max(sdrt->tRP_min, sdrt->tWP_min);
-	tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1;
-	if (tims->twait == 0)
-		tims->twait = 1;
-	else if (tims->twait > FSMC_TWAIT_MASK)
-		tims->twait = FSMC_TWAIT_MASK;
-
 	tset = max(sdrt->tCS_min - sdrt->tWP_min,
 		   sdrt->tCEA_max - sdrt->tREA_max);
 	tims->tset = DIV_ROUND_UP(tset / 1000, hclkn) - 1;
@@ -324,6 +326,21 @@ static int fsmc_calc_timings(struct fsmc_nand_data *host,
 	else if (tims->tset > FSMC_TSET_MASK)
 		tims->tset = FSMC_TSET_MASK;

+	/*
+	 * According to SPEAr300 Reference Manual (RM0082) which gives more
+	 * information related to FSMSC timings than the SPEAr600 one (RM0305),
+	 *    twait >= tCEA - (tset * TCLK) + TOUTDEL + TINDEL
+	 */
+	twait_min = sdrt->tCEA_max - ((tims->tset + 1) * hclkn * 1000)
+		    + TOUTDEL + TINDEL;
+	twait = max3(sdrt->tRP_min, sdrt->tWP_min, twait_min);
+
+	tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1;
+	if (tims->twait == 0)
+		tims->twait = 1;
+	else if (tims->twait > FSMC_TWAIT_MASK)
+		tims->twait = FSMC_TWAIT_MASK;
+
 	return 0;
 }

@@ -650,6 +667,9 @@ static int fsmc_exec_op(struct nand_chip *chip, const struct nand_operation *op,
 					    instr->ctx.waitrdy.timeout_ms);
 			break;
 		}
+
+		if (instr->delay_ns)
+			ndelay(instr->delay_ns);
 	}

 	return ret;

@@ -1514,14 +1514,14 @@ void bond_alb_monitor(struct work_struct *work)
 	struct slave *slave;

 	if (!bond_has_slaves(bond)) {
-		bond_info->tx_rebalance_counter = 0;
+		atomic_set(&bond_info->tx_rebalance_counter, 0);
 		bond_info->lp_counter = 0;
 		goto re_arm;
 	}

 	rcu_read_lock();

-	bond_info->tx_rebalance_counter++;
+	atomic_inc(&bond_info->tx_rebalance_counter);
 	bond_info->lp_counter++;

 	/* send learning packets */
@@ -1543,7 +1543,7 @@ void bond_alb_monitor(struct work_struct *work)
 	}

 	/* rebalance tx traffic */
-	if (bond_info->tx_rebalance_counter >= BOND_TLB_REBALANCE_TICKS) {
+	if (atomic_read(&bond_info->tx_rebalance_counter) >= BOND_TLB_REBALANCE_TICKS) {
 		bond_for_each_slave_rcu(bond, slave, iter) {
 			tlb_clear_slave(bond, slave, 1);
 			if (slave == rcu_access_pointer(bond->curr_active_slave)) {
@@ -1553,7 +1553,7 @@ void bond_alb_monitor(struct work_struct *work)
 				bond_info->unbalanced_load = 0;
 			}
 		}
-		bond_info->tx_rebalance_counter = 0;
+		atomic_set(&bond_info->tx_rebalance_counter, 0);
 	}

 	if (bond_info->rlb_enabled) {
@@ -1623,7 +1623,8 @@ int bond_alb_init_slave(struct bonding *bond, struct slave *slave)
 	tlb_init_slave(slave);

 	/* order a rebalance ASAP */
-	bond->alb_info.tx_rebalance_counter = BOND_TLB_REBALANCE_TICKS;
+	atomic_set(&bond->alb_info.tx_rebalance_counter,
+		   BOND_TLB_REBALANCE_TICKS);

 	if (bond->alb_info.rlb_enabled)
 		bond->alb_info.rlb_rebalance = 1;
@@ -1660,7 +1661,8 @@ void bond_alb_handle_link_change(struct bonding *bond, struct slave *slave, char
 		rlb_clear_slave(bond, slave);
 	} else if (link == BOND_LINK_UP) {
 		/* order a rebalance ASAP */
-		bond_info->tx_rebalance_counter = BOND_TLB_REBALANCE_TICKS;
+		atomic_set(&bond_info->tx_rebalance_counter,
+			   BOND_TLB_REBALANCE_TICKS);
 		if (bond->alb_info.rlb_enabled) {
 			bond->alb_info.rlb_rebalance = 1;
 			/* If the updelay module parameter is smaller than the

@@ -248,6 +248,9 @@ MODULE_DESCRIPTION("CAN driver for Kvaser CAN/PCIe devices");
 #define KVASER_PCIEFD_SPACK_EWLR BIT(23)
 #define KVASER_PCIEFD_SPACK_EPLR BIT(24)

+/* Kvaser KCAN_EPACK second word */
+#define KVASER_PCIEFD_EPACK_DIR_TX BIT(0)
+
 struct kvaser_pciefd;

 struct kvaser_pciefd_can {
@@ -1283,7 +1286,10 @@ static int kvaser_pciefd_rx_error_frame(struct kvaser_pciefd_can *can,

 	can->err_rep_cnt++;
 	can->can.can_stats.bus_error++;
-	stats->rx_errors++;
+	if (p->header[1] & KVASER_PCIEFD_EPACK_DIR_TX)
+		stats->tx_errors++;
+	else
+		stats->rx_errors++;

 	can->bec.txerr = bec.txerr;
 	can->bec.rxerr = bec.rxerr;

@@ -206,15 +206,15 @@ enum m_can_reg {

 /* Interrupts for version 3.0.x */
 #define IR_ERR_LEC_30X	(IR_STE | IR_FOE | IR_ACKE | IR_BE | IR_CRCE)
-#define IR_ERR_BUS_30X	(IR_ERR_LEC_30X | IR_WDI | IR_ELO | IR_BEU | \
-			 IR_BEC | IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | \
-			 IR_RF1L | IR_RF0L)
+#define IR_ERR_BUS_30X	(IR_ERR_LEC_30X | IR_WDI | IR_BEU | IR_BEC | \
+			 IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | IR_RF1L | \
+			 IR_RF0L)
 #define IR_ERR_ALL_30X	(IR_ERR_STATE | IR_ERR_BUS_30X)
 /* Interrupts for version >= 3.1.x */
 #define IR_ERR_LEC_31X	(IR_PED | IR_PEA)
-#define IR_ERR_BUS_31X	(IR_ERR_LEC_31X | IR_WDI | IR_ELO | IR_BEU | \
-			 IR_BEC | IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | \
-			 IR_RF1L | IR_RF0L)
+#define IR_ERR_BUS_31X	(IR_ERR_LEC_31X | IR_WDI | IR_BEU | IR_BEC | \
+			 IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | IR_RF1L | \
+			 IR_RF0L)
 #define IR_ERR_ALL_31X	(IR_ERR_STATE | IR_ERR_BUS_31X)

 /* Interrupt Line Select (ILS) */
@@ -751,8 +751,6 @@ static void m_can_handle_other_err(struct net_device *dev, u32 irqstatus)
 {
 	if (irqstatus & IR_WDI)
 		netdev_err(dev, "Message RAM Watchdog event due to missing READY\n");
-	if (irqstatus & IR_ELO)
-		netdev_err(dev, "Error Logging Overflow\n");
 	if (irqstatus & IR_BEU)
 		netdev_err(dev, "Bit Error Uncorrected\n");
 	if (irqstatus & IR_BEC)

@@ -692,11 +692,11 @@ static int pch_can_rx_normal(struct net_device *ndev, u32 obj_num, int quota)
 			cf->data[i + 1] = data_reg >> 8;
 		}

-		netif_receive_skb(skb);
 		rcv_pkts++;
 		stats->rx_packets++;
 		quota--;
 		stats->rx_bytes += cf->can_dlc;
+		netif_receive_skb(skb);

 		pch_fifo_thresh(priv, obj_num);
 		obj_num++;

@@ -235,7 +235,12 @@ static int ems_pcmcia_add_card(struct pcmcia_device *pdev, unsigned long base)
 			free_sja1000dev(dev);
 	}

-	err = request_irq(dev->irq, &ems_pcmcia_interrupt, IRQF_SHARED,
+	if (!card->channels) {
+		err = -ENODEV;
+		goto failure_cleanup;
+	}
+
+	err = request_irq(pdev->irq, &ems_pcmcia_interrupt, IRQF_SHARED,
 			  DRV_NAME, card);
 	if (!err)
 		return 0;

@@ -28,10 +28,6 @@

 #include "kvaser_usb.h"

-/* Forward declaration */
-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg;
-
-#define CAN_USB_CLOCK			8000000
 #define MAX_USBCAN_NET_DEVICES		2

 /* Command header size */
@@ -80,6 +76,12 @@

 #define CMD_LEAF_LOG_MESSAGE		106

+/* Leaf frequency options */
+#define KVASER_USB_LEAF_SWOPTION_FREQ_MASK 0x60
+#define KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK 0
+#define KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK BIT(5)
+#define KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK BIT(6)
+
 /* error factors */
 #define M16C_EF_ACKE			BIT(0)
 #define M16C_EF_CRCE			BIT(1)
@@ -340,6 +342,50 @@ struct kvaser_usb_err_summary {
 	};
 };

+static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = {
+	.name = "kvaser_usb",
+	.tseg1_min = KVASER_USB_TSEG1_MIN,
+	.tseg1_max = KVASER_USB_TSEG1_MAX,
+	.tseg2_min = KVASER_USB_TSEG2_MIN,
+	.tseg2_max = KVASER_USB_TSEG2_MAX,
+	.sjw_max = KVASER_USB_SJW_MAX,
+	.brp_min = KVASER_USB_BRP_MIN,
+	.brp_max = KVASER_USB_BRP_MAX,
+	.brp_inc = KVASER_USB_BRP_INC,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_8mhz = {
+	.clock = {
+		.freq = 8000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_16mhz = {
+	.clock = {
+		.freq = 16000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_24mhz = {
+	.clock = {
+		.freq = 24000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_32mhz = {
+	.clock = {
+		.freq = 32000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
 static void *
 kvaser_usb_leaf_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
			      const struct sk_buff *skb, int *frame_len,
@@ -471,6 +517,27 @@ static int kvaser_usb_leaf_send_simple_cmd(const struct kvaser_usb *dev,
 	return rc;
 }

+static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev,
+						   const struct leaf_cmd_softinfo *softinfo)
+{
+	u32 sw_options = le32_to_cpu(softinfo->sw_options);
+
+	dev->fw_version = le32_to_cpu(softinfo->fw_version);
+	dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx);
+
+	switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) {
+	case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK:
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz;
+		break;
+	case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK:
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz;
+		break;
+	case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK:
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz;
+		break;
+	}
+}
+
 static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev)
 {
 	struct kvaser_cmd cmd;
@@ -486,14 +553,13 @@ static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev)

 	switch (dev->card_data.leaf.family) {
 	case KVASER_LEAF:
-		dev->fw_version = le32_to_cpu(cmd.u.leaf.softinfo.fw_version);
-		dev->max_tx_urbs =
-			le16_to_cpu(cmd.u.leaf.softinfo.max_outstanding_tx);
+		kvaser_usb_leaf_get_software_info_leaf(dev, &cmd.u.leaf.softinfo);
 		break;
 	case KVASER_USBCAN:
 		dev->fw_version = le32_to_cpu(cmd.u.usbcan.softinfo.fw_version);
 		dev->max_tx_urbs =
 			le16_to_cpu(cmd.u.usbcan.softinfo.max_outstanding_tx);
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_8mhz;
 		break;
 	}

@@ -1225,24 +1291,11 @@ static int kvaser_usb_leaf_init_card(struct kvaser_usb *dev)
 {
 	struct kvaser_usb_dev_card_data *card_data = &dev->card_data;

-	dev->cfg = &kvaser_usb_leaf_dev_cfg;
 	card_data->ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;

 	return 0;
 }

-static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = {
-	.name = "kvaser_usb",
-	.tseg1_min = KVASER_USB_TSEG1_MIN,
-	.tseg1_max = KVASER_USB_TSEG1_MAX,
-	.tseg2_min = KVASER_USB_TSEG2_MIN,
-	.tseg2_max = KVASER_USB_TSEG2_MAX,
-	.sjw_max = KVASER_USB_SJW_MAX,
-	.brp_min = KVASER_USB_BRP_MIN,
-	.brp_max = KVASER_USB_BRP_MAX,
-	.brp_inc = KVASER_USB_BRP_INC,
-};
-
 static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
 {
 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
@@ -1348,11 +1401,3 @@ const struct kvaser_usb_dev_ops kvaser_usb_leaf_dev_ops = {
 	.dev_read_bulk_callback = kvaser_usb_leaf_read_bulk_callback,
 	.dev_frame_to_cmd = kvaser_usb_leaf_frame_to_cmd,
 };
-
-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg = {
-	.clock = {
-		.freq = CAN_USB_CLOCK,
-	},
-	.timestamp_freq = 1,
-	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
-};

@@ -1431,16 +1431,19 @@ static int altera_tse_probe(struct platform_device *pdev)
 		priv->rxdescmem_busaddr = dma_res->start;

 	} else {
+		ret = -ENODEV;
 		goto err_free_netdev;
 	}

-	if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask)))
+	if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask))) {
 		dma_set_coherent_mask(priv->device,
 				      DMA_BIT_MASK(priv->dmaops->dmamask));
-	else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32)))
+	} else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32))) {
 		dma_set_coherent_mask(priv->device, DMA_BIT_MASK(32));
-	else
+	} else {
+		ret = -EIO;
 		goto err_free_netdev;
+	}

 	/* MAC address space */
 	ret = request_and_map(pdev, "control_port", &control_port,

@@ -373,6 +373,9 @@ struct bufdesc_ex {
 #define FEC_ENET_WAKEUP	((uint)0x00020000)	/* Wakeup request */
 #define FEC_ENET_TXF	(FEC_ENET_TXF_0 | FEC_ENET_TXF_1 | FEC_ENET_TXF_2)
 #define FEC_ENET_RXF	(FEC_ENET_RXF_0 | FEC_ENET_RXF_1 | FEC_ENET_RXF_2)
+#define FEC_ENET_RXF_GET(X)	(((X) == 0) ? FEC_ENET_RXF_0 :	\
+				(((X) == 1) ? FEC_ENET_RXF_1 :	\
+				FEC_ENET_RXF_2))
 #define FEC_ENET_TS_AVAIL	((uint)0x00010000)
 #define FEC_ENET_TS_TIMER	((uint)0x00008000)

@@ -1444,7 +1444,7 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
 			break;
 		pkt_received++;

-		writel(FEC_ENET_RXF, fep->hwp + FEC_IEVENT);
+		writel(FEC_ENET_RXF_GET(queue_id), fep->hwp + FEC_IEVENT);

 		/* Check for errors. */
 		status ^= BD_ENET_RX_LAST;

@@ -1804,6 +1804,32 @@ static int i40e_vc_send_resp_to_vf(struct i40e_vf *vf,
 	return i40e_vc_send_msg_to_vf(vf, opcode, retval, NULL, 0);
 }

+/**
+ * i40e_sync_vf_state
+ * @vf: pointer to the VF info
+ * @state: VF state
+ *
+ * Called from a VF message to synchronize the service with a potential
+ * VF reset state
+ **/
+static bool i40e_sync_vf_state(struct i40e_vf *vf, enum i40e_vf_states state)
+{
+	int i;
+
+	/* When handling some messages, it needs VF state to be set.
+	 * It is possible that this flag is cleared during VF reset,
+	 * so there is a need to wait until the end of the reset to
+	 * handle the request message correctly.
+	 */
+	for (i = 0; i < I40E_VF_STATE_WAIT_COUNT; i++) {
+		if (test_bit(state, &vf->vf_states))
+			return true;
+		usleep_range(10000, 20000);
+	}
+
+	return test_bit(state, &vf->vf_states);
+}
+
 /**
  * i40e_vc_get_version_msg
  * @vf: pointer to the VF info
@@ -1864,7 +1890,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
 	size_t len = 0;
 	int ret;

-	if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_INIT)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
@@ -2019,7 +2045,7 @@ static int i40e_vc_config_promiscuous_mode_msg(struct i40e_vf *vf, u8 *msg)
 	bool allmulti = false;
 	bool alluni = false;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err_out;
 	}
@@ -2107,7 +2133,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
 	struct i40e_vsi *vsi;
 	u16 num_qps_all = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
@@ -2255,7 +2281,7 @@ static int i40e_vc_config_irq_map_msg(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
@@ -2427,7 +2453,7 @@ static int i40e_vc_disable_queues_msg(struct i40e_vf *vf, u8 *msg)
 	struct i40e_pf *pf = vf->pf;
 	i40e_status aq_ret = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
@@ -2477,7 +2503,7 @@ static int i40e_vc_request_queues_msg(struct i40e_vf *vf, u8 *msg)
 	u8 cur_pairs = vf->num_queue_pairs;
 	struct i40e_pf *pf = vf->pf;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states))
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE))
 		return -EINVAL;

 	if (req_pairs > I40E_MAX_VF_QUEUES) {
@@ -2523,7 +2549,7 @@ static int i40e_vc_get_stats_msg(struct i40e_vf *vf, u8 *msg)

 	memset(&stats, 0, sizeof(struct i40e_eth_stats));

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
@@ -2632,7 +2658,7 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
 	i40e_status ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
 		ret = I40E_ERR_PARAM;
 		goto error_param;
@@ -2701,7 +2727,7 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
 	i40e_status ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
 		ret = I40E_ERR_PARAM;
 		goto error_param;
@@ -2840,7 +2866,7 @@ static int i40e_vc_remove_vlan_msg(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vfl->vsi_id)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
@@ -2960,9 +2986,9 @@ static int i40e_vc_config_rss_key(struct i40e_vf *vf, u8 *msg)
 	struct i40e_vsi *vsi = NULL;
 	i40e_status aq_ret = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vrk->vsi_id) ||
-	    (vrk->key_len != I40E_HKEY_ARRAY_SIZE)) {
+	    vrk->key_len != I40E_HKEY_ARRAY_SIZE) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
@@ -2991,9 +3017,9 @@ static int i40e_vc_config_rss_lut(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	u16 i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vrl->vsi_id) ||
-	    (vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE)) {
+	    vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
@@ -3026,7 +3052,7 @@ static int i40e_vc_get_rss_hena(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	int len = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
@@ -3062,7 +3088,7 @@ static int i40e_vc_set_rss_hena(struct i40e_vf *vf, u8 *msg)
 	struct i40e_hw *hw = &pf->hw;
 	i40e_status aq_ret = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
@@ -3087,7 +3113,7 @@ static int i40e_vc_enable_vlan_stripping(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	struct i40e_vsi *vsi;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
@@ -3113,7 +3139,7 @@ static int i40e_vc_disable_vlan_stripping(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	struct i40e_vsi *vsi;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
@@ -3340,7 +3366,7 @@ static int i40e_vc_del_cloud_filter(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	int i, ret;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
@@ -3471,7 +3497,7 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	int i, ret;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err_out;
 	}
@@ -3580,7 +3606,7 @@ static int i40e_vc_add_qch_msg(struct i40e_vf *vf, u8 *msg)
 	i40e_status aq_ret = 0;
 	u64 speed = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
@@ -3687,11 +3713,6 @@ static int i40e_vc_add_qch_msg(struct i40e_vf *vf, u8 *msg)

 	/* set this flag only after making sure all inputs are sane */
 	vf->adq_enabled = true;
-	/* num_req_queues is set when user changes number of queues via ethtool
-	 * and this causes issue for default VSI(which depends on this variable)
-	 * when ADq is enabled, hence reset it.
-	 */
-	vf->num_req_queues = 0;

 	/* reset the VF in order to allocate resources */
 	i40e_vc_notify_vf_reset(vf);
@@ -3715,7 +3736,7 @@ static int i40e_vc_del_qch_msg(struct i40e_vf *vf, u8 *msg)
 	struct i40e_pf *pf = vf->pf;
 	i40e_status aq_ret = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}

@@ -19,6 +19,8 @@

 #define I40E_MAX_VF_PROMISC_FLAGS	3

+#define I40E_VF_STATE_WAIT_COUNT	20
+
 /* Various queue ctrls */
 enum i40e_queue_ctrl {
 	I40E_QUEUE_CTRL_UNKNOWN = 0,

@@ -612,23 +612,44 @@ static int iavf_set_ringparam(struct net_device *netdev,
 	if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
 		return -EINVAL;

-	new_tx_count = clamp_t(u32, ring->tx_pending,
-			       IAVF_MIN_TXD,
-			       IAVF_MAX_TXD);
-	new_tx_count = ALIGN(new_tx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (ring->tx_pending > IAVF_MAX_TXD ||
+	    ring->tx_pending < IAVF_MIN_TXD ||
+	    ring->rx_pending > IAVF_MAX_RXD ||
+	    ring->rx_pending < IAVF_MIN_RXD) {
+		netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d] (increment %d)\n",
+			   ring->tx_pending, ring->rx_pending, IAVF_MIN_TXD,
+			   IAVF_MAX_RXD, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+		return -EINVAL;
+	}

-	new_rx_count = clamp_t(u32, ring->rx_pending,
-			       IAVF_MIN_RXD,
-			       IAVF_MAX_RXD);
-	new_rx_count = ALIGN(new_rx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	new_tx_count = ALIGN(ring->tx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (new_tx_count != ring->tx_pending)
+		netdev_info(netdev, "Requested Tx descriptor count rounded up to %d\n",
+			    new_tx_count);
+
+	new_rx_count = ALIGN(ring->rx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (new_rx_count != ring->rx_pending)
+		netdev_info(netdev, "Requested Rx descriptor count rounded up to %d\n",
+			    new_rx_count);

 	/* if nothing to do return success */
 	if ((new_tx_count == adapter->tx_desc_count) &&
-	    (new_rx_count == adapter->rx_desc_count))
+	    (new_rx_count == adapter->rx_desc_count)) {
+		netdev_dbg(netdev, "Nothing to change, descriptor count is same as requested\n");
 		return 0;
+	}

-	adapter->tx_desc_count = new_tx_count;
-	adapter->rx_desc_count = new_rx_count;
+	if (new_tx_count != adapter->tx_desc_count) {
+		netdev_dbg(netdev, "Changing Tx descriptor count from %d to %d\n",
+			   adapter->tx_desc_count, new_tx_count);
+		adapter->tx_desc_count = new_tx_count;
+	}
+
+	if (new_rx_count != adapter->rx_desc_count) {
+		netdev_dbg(netdev, "Changing Rx descriptor count from %d to %d\n",
+			   adapter->rx_desc_count, new_rx_count);
+		adapter->rx_desc_count = new_rx_count;
+	}

 	if (netif_running(netdev)) {
 		adapter->flags |= IAVF_FLAG_RESET_NEEDED;

@@ -2151,6 +2151,7 @@ static void iavf_reset_task(struct work_struct *work)
|
||||
}
|
||||
|
||||
pci_set_master(adapter->pdev);
|
||||
pci_restore_msi_state(adapter->pdev);
|
||||
|
||||
if (i == IAVF_RESET_WAIT_COUNT) {
|
||||
dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",
|
||||
|
||||
@@ -3561,6 +3561,9 @@ static int ice_up_complete(struct ice_vsi *vsi)
netif_carrier_on(vsi->netdev);
}

/* clear this now, and the first stats read will be used as baseline */
vsi->stat_offsets_loaded = false;

ice_service_task_schedule(pf);

return 0;
@@ -803,8 +803,10 @@ int nfp_cpp_area_cache_add(struct nfp_cpp *cpp, size_t size)
return -ENOMEM;

cache = kzalloc(sizeof(*cache), GFP_KERNEL);
if (!cache)
if (!cache) {
nfp_cpp_area_free(area);
return -ENOMEM;
}

cache->id = 0;
cache->addr = 0;
@@ -1597,6 +1597,13 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb, struct net_device *ndev)
data_split = true;
}
} else {
if (unlikely(skb->len > ETH_TX_MAX_NON_LSO_PKT_LEN)) {
DP_ERR(edev, "Unexpected non LSO skb length = 0x%x\n", skb->len);
qede_free_failed_tx_pkt(txq, first_bd, 0, false);
qede_update_tx_producer(txq);
return NETDEV_TX_OK;
}

val |= ((skb->len & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK) <<
ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT);
}
@@ -3495,20 +3495,19 @@ static int ql_adapter_up(struct ql3_adapter *qdev)

spin_lock_irqsave(&qdev->hw_lock, hw_flags);

err = ql_wait_for_drvr_lock(qdev);
if (err) {
err = ql_adapter_initialize(qdev);
if (err) {
netdev_err(ndev, "Unable to initialize adapter\n");
goto err_init;
}
netdev_err(ndev, "Releasing driver lock\n");
ql_sem_unlock(qdev, QL_DRVR_SEM_MASK);
} else {
if (!ql_wait_for_drvr_lock(qdev)) {
netdev_err(ndev, "Could not acquire driver lock\n");
err = -ENODEV;
goto err_lock;
}

err = ql_adapter_initialize(qdev);
if (err) {
netdev_err(ndev, "Unable to initialize adapter\n");
goto err_init;
}
ql_sem_unlock(qdev, QL_DRVR_SEM_MASK);

spin_unlock_irqrestore(&qdev->hw_lock, hw_flags);

set_bit(QL_ADAPTER_UP, &qdev->flags);
@@ -177,6 +177,8 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
/* clamp new_tx to sane values */
min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth16);
max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
if (max == 0)
max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */

/* some devices set dwNtbOutMaxSize too low for the above default */
min = min(min, max);
@@ -495,8 +495,6 @@ static struct sk_buff *vrf_ip6_out_direct(struct net_device *vrf_dev,

skb->dev = vrf_dev;

vrf_nf_set_untracked(skb);

err = nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk,
skb, NULL, vrf_dev, vrf_ip6_out_direct_finish);

@@ -517,6 +515,8 @@ static struct sk_buff *vrf_ip6_out(struct net_device *vrf_dev,
if (rt6_need_strict(&ipv6_hdr(skb)->daddr))
return skb;

vrf_nf_set_untracked(skb);

if (qdisc_tx_is_default(vrf_dev) ||
IP6CB(skb)->flags & IP6SKB_XFRM_TRANSFORMED)
return vrf_ip6_out_direct(vrf_dev, sk, skb);
@@ -732,8 +732,6 @@ static struct sk_buff *vrf_ip_out_direct(struct net_device *vrf_dev,

skb->dev = vrf_dev;

vrf_nf_set_untracked(skb);

err = nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT, net, sk,
skb, NULL, vrf_dev, vrf_ip_out_direct_finish);

@@ -755,6 +753,8 @@ static struct sk_buff *vrf_ip_out(struct net_device *vrf_dev,
ipv4_is_lbcast(ip_hdr(skb)->daddr))
return skb;

vrf_nf_set_untracked(skb);

if (qdisc_tx_is_default(vrf_dev) ||
IPCB(skb)->flags & IPSKB_XFRM_TRANSFORMED)
return vrf_ip_out_direct(vrf_dev, sk, skb);
@@ -1494,7 +1494,7 @@ static struct tegra_uart_chip_data tegra20_uart_chip_data = {
.fifo_mode_enable_status = false,
.uart_max_port = 5,
.max_dma_burst_bytes = 4,
.error_tolerance_low_range = 0,
.error_tolerance_low_range = -4,
.error_tolerance_high_range = 4,
};

@@ -1505,7 +1505,7 @@ static struct tegra_uart_chip_data tegra30_uart_chip_data = {
.fifo_mode_enable_status = false,
.uart_max_port = 5,
.max_dma_burst_bytes = 4,
.error_tolerance_low_range = 0,
.error_tolerance_low_range = -4,
.error_tolerance_high_range = 4,
};
@@ -409,7 +409,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
* the USB-2 spec requires such endpoints to have wMaxPacketSize = 0
* (see the end of section 5.6.3), so don't warn about them.
*/
maxp = usb_endpoint_maxp(&endpoint->desc);
maxp = le16_to_cpu(endpoint->desc.wMaxPacketSize);
if (maxp == 0 && !(usb_endpoint_xfer_isoc(d) && asnum == 0)) {
dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid wMaxPacketSize 0\n",
cfgno, inum, asnum, d->bEndpointAddress);
@@ -425,9 +425,9 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
maxpacket_maxes = full_speed_maxpacket_maxes;
break;
case USB_SPEED_HIGH:
/* Bits 12..11 are allowed only for HS periodic endpoints */
/* Multiple-transactions bits are allowed only for HS periodic endpoints */
if (usb_endpoint_xfer_int(d) || usb_endpoint_xfer_isoc(d)) {
i = maxp & (BIT(12) | BIT(11));
i = maxp & USB_EP_MAXP_MULT_MASK;
maxp &= ~i;
}
/* fallthrough */
@@ -1648,6 +1648,18 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
struct usb_function *f = NULL;
u8 endp;

if (w_length > USB_COMP_EP0_BUFSIZ) {
if (ctrl->bRequestType == USB_DIR_OUT) {
goto done;
} else {
/* Cast away the const, we are going to overwrite on purpose. */
__le16 *temp = (__le16 *)&ctrl->wLength;

*temp = cpu_to_le16(USB_COMP_EP0_BUFSIZ);
w_length = USB_COMP_EP0_BUFSIZ;
}
}

/* partial re-init of the response message; the function or the
* gadget might need to intercept e.g. a control-OUT completion
* when we delegate to it.
@@ -2161,7 +2173,7 @@ int composite_dev_prepare(struct usb_composite_driver *composite,
if (!cdev->req)
return -ENOMEM;

cdev->req->buf = kmalloc(USB_COMP_EP0_BUFSIZ, GFP_KERNEL);
cdev->req->buf = kzalloc(USB_COMP_EP0_BUFSIZ, GFP_KERNEL);
if (!cdev->req->buf)
goto fail;
@@ -137,7 +137,7 @@ static int dbgp_enable_ep_req(struct usb_ep *ep)
goto fail_1;
}

req->buf = kmalloc(DBGP_REQ_LEN, GFP_KERNEL);
req->buf = kzalloc(DBGP_REQ_LEN, GFP_KERNEL);
if (!req->buf) {
err = -ENOMEM;
stp = 2;
@@ -345,6 +345,19 @@ static int dbgp_setup(struct usb_gadget *gadget,
void *data = NULL;
u16 len = 0;

if (length > DBGP_REQ_LEN) {
if (ctrl->bRequestType == USB_DIR_OUT) {
return err;
} else {
/* Cast away the const, we are going to overwrite on purpose. */
__le16 *temp = (__le16 *)&ctrl->wLength;

*temp = cpu_to_le16(DBGP_REQ_LEN);
length = DBGP_REQ_LEN;
}
}

if (request == USB_REQ_GET_DESCRIPTOR) {
switch (value>>8) {
case USB_DT_DEVICE:
@@ -110,6 +110,8 @@ enum ep0_state {
/* enough for the whole queue: most events invalidate others */
#define N_EVENT 5

#define RBUF_SIZE 256

struct dev_data {
spinlock_t lock;
refcount_t count;
@@ -144,7 +146,7 @@ struct dev_data {
struct dentry *dentry;

/* except this scratch i/o buffer for ep0 */
u8 rbuf [256];
u8 rbuf[RBUF_SIZE];
};

static inline void get_dev (struct dev_data *data)
@@ -1333,6 +1335,18 @@ gadgetfs_setup (struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
u16 w_value = le16_to_cpu(ctrl->wValue);
u16 w_length = le16_to_cpu(ctrl->wLength);

if (w_length > RBUF_SIZE) {
if (ctrl->bRequestType == USB_DIR_OUT) {
return value;
} else {
/* Cast away the const, we are going to overwrite on purpose. */
__le16 *temp = (__le16 *)&ctrl->wLength;

*temp = cpu_to_le16(RBUF_SIZE);
w_length = RBUF_SIZE;
}
}

spin_lock (&dev->lock);
dev->setup_abort = 0;
if (dev->state == STATE_DEV_UNCONNECTED) {
@@ -629,6 +629,7 @@ static int xhci_enter_test_mode(struct xhci_hcd *xhci,
continue;

retval = xhci_disable_slot(xhci, i);
xhci_free_virt_device(xhci, i);
if (retval)
xhci_err(xhci, "Failed to disable slot %d, %d. Enter test mode anyway\n",
i, retval);

@@ -1265,7 +1265,6 @@ static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id)
if (xhci->quirks & XHCI_EP_LIMIT_QUIRK)
/* Delete default control endpoint resources */
xhci_free_device_endpoint_resources(xhci, virt_dev, true);
xhci_free_virt_device(xhci, slot_id);
}

static void xhci_handle_cmd_config_ep(struct xhci_hcd *xhci, int slot_id,

@@ -3889,7 +3889,6 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
struct xhci_slot_ctx *slot_ctx;
int i, ret;

#ifndef CONFIG_USB_DEFAULT_PERSIST
/*
* We called pm_runtime_get_noresume when the device was attached.
* Decrement the counter here to allow controller to runtime suspend
@@ -3897,7 +3896,6 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
*/
if (xhci->quirks & XHCI_RESET_ON_RESUME)
pm_runtime_put_noidle(hcd->self.controller);
#endif

ret = xhci_check_args(hcd, udev, NULL, 0, true, __func__);
/* If the host is halted due to driver unload, we still need to free the
@@ -3916,9 +3914,8 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
del_timer_sync(&virt_dev->eps[i].stop_cmd_timer);
}
virt_dev->udev = NULL;
ret = xhci_disable_slot(xhci, udev->slot_id);
if (ret)
xhci_free_virt_device(xhci, udev->slot_id);
xhci_disable_slot(xhci, udev->slot_id);
xhci_free_virt_device(xhci, udev->slot_id);
}

int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
@@ -3928,7 +3925,7 @@ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
u32 state;
int ret = 0;

command = xhci_alloc_command(xhci, false, GFP_KERNEL);
command = xhci_alloc_command(xhci, true, GFP_KERNEL);
if (!command)
return -ENOMEM;

@@ -3953,6 +3950,15 @@ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
}
xhci_ring_cmd_db(xhci);
spin_unlock_irqrestore(&xhci->lock, flags);

wait_for_completion(command->completion);

if (command->status != COMP_SUCCESS)
xhci_warn(xhci, "Unsuccessful disable slot %u command, status %d\n",
slot_id, command->status);

xhci_free_command(xhci, command);

return ret;
}

@@ -4049,23 +4055,20 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)

xhci_debugfs_create_slot(xhci, slot_id);

#ifndef CONFIG_USB_DEFAULT_PERSIST
/*
* If resetting upon resume, we can't put the controller into runtime
* suspend if there is a device attached.
*/
if (xhci->quirks & XHCI_RESET_ON_RESUME)
pm_runtime_get_noresume(hcd->self.controller);
#endif

/* Is this a LS or FS device under a HS hub? */
/* Hub or peripherial? */
return 1;

disable_slot:
ret = xhci_disable_slot(xhci, udev->slot_id);
if (ret)
xhci_free_virt_device(xhci, udev->slot_id);
xhci_disable_slot(xhci, udev->slot_id);
xhci_free_virt_device(xhci, udev->slot_id);

return 0;
}
@@ -4195,6 +4198,7 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,

mutex_unlock(&xhci->mutex);
ret = xhci_disable_slot(xhci, udev->slot_id);
xhci_free_virt_device(xhci, udev->slot_id);
if (!ret)
xhci_alloc_dev(hcd, udev);
kfree(command->completion);
fs/aio.c
@@ -183,8 +183,9 @@ struct poll_iocb {
struct file *file;
struct wait_queue_head *head;
__poll_t events;
bool done;
bool cancelled;
bool work_scheduled;
bool work_need_resched;
struct wait_queue_entry wait;
struct work_struct work;
};
@@ -1626,6 +1627,51 @@ static void aio_poll_put_work(struct work_struct *work)
iocb_put(iocb);
}

/*
* Safely lock the waitqueue which the request is on, synchronizing with the
* case where the ->poll() provider decides to free its waitqueue early.
*
* Returns true on success, meaning that req->head->lock was locked, req->wait
* is on req->head, and an RCU read lock was taken. Returns false if the
* request was already removed from its waitqueue (which might no longer exist).
*/
static bool poll_iocb_lock_wq(struct poll_iocb *req)
{
wait_queue_head_t *head;

/*
* While we hold the waitqueue lock and the waitqueue is nonempty,
* wake_up_pollfree() will wait for us. However, taking the waitqueue
* lock in the first place can race with the waitqueue being freed.
*
* We solve this as eventpoll does: by taking advantage of the fact that
* all users of wake_up_pollfree() will RCU-delay the actual free. If
* we enter rcu_read_lock() and see that the pointer to the queue is
* non-NULL, we can then lock it without the memory being freed out from
* under us, then check whether the request is still on the queue.
*
* Keep holding rcu_read_lock() as long as we hold the queue lock, in
* case the caller deletes the entry from the queue, leaving it empty.
* In that case, only RCU prevents the queue memory from being freed.
*/
rcu_read_lock();
head = smp_load_acquire(&req->head);
if (head) {
spin_lock(&head->lock);
if (!list_empty(&req->wait.entry))
return true;
spin_unlock(&head->lock);
}
rcu_read_unlock();
return false;
}

static void poll_iocb_unlock_wq(struct poll_iocb *req)
{
spin_unlock(&req->head->lock);
rcu_read_unlock();
}

static void aio_poll_complete_work(struct work_struct *work)
{
struct poll_iocb *req = container_of(work, struct poll_iocb, work);
@@ -1645,14 +1691,27 @@ static void aio_poll_complete_work(struct work_struct *work)
* avoid further branches in the fast path.
*/
spin_lock_irq(&ctx->ctx_lock);
if (!mask && !READ_ONCE(req->cancelled)) {
add_wait_queue(req->head, &req->wait);
spin_unlock_irq(&ctx->ctx_lock);
return;
}
if (poll_iocb_lock_wq(req)) {
if (!mask && !READ_ONCE(req->cancelled)) {
/*
* The request isn't actually ready to be completed yet.
* Reschedule completion if another wakeup came in.
*/
if (req->work_need_resched) {
schedule_work(&req->work);
req->work_need_resched = false;
} else {
req->work_scheduled = false;
}
poll_iocb_unlock_wq(req);
spin_unlock_irq(&ctx->ctx_lock);
return;
}
list_del_init(&req->wait.entry);
poll_iocb_unlock_wq(req);
} /* else, POLLFREE has freed the waitqueue, so we must complete */
list_del_init(&iocb->ki_list);
iocb->ki_res.res = mangle_poll(mask);
req->done = true;
spin_unlock_irq(&ctx->ctx_lock);

iocb_put(iocb);
@@ -1664,13 +1723,14 @@ static int aio_poll_cancel(struct kiocb *iocb)
struct aio_kiocb *aiocb = container_of(iocb, struct aio_kiocb, rw);
struct poll_iocb *req = &aiocb->poll;

spin_lock(&req->head->lock);
WRITE_ONCE(req->cancelled, true);
if (!list_empty(&req->wait.entry)) {
list_del_init(&req->wait.entry);
schedule_work(&aiocb->poll.work);
}
spin_unlock(&req->head->lock);
if (poll_iocb_lock_wq(req)) {
WRITE_ONCE(req->cancelled, true);
if (!req->work_scheduled) {
schedule_work(&aiocb->poll.work);
req->work_scheduled = true;
}
poll_iocb_unlock_wq(req);
} /* else, the request was force-cancelled by POLLFREE already */

return 0;
}
@@ -1687,20 +1747,26 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
if (mask && !(mask & req->events))
return 0;

list_del_init(&req->wait.entry);

if (mask && spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
/*
* Complete the request inline if possible. This requires that three
* conditions be met:
* 1. An event mask must have been passed. If a plain wakeup was done
* instead, then mask == 0 and we have to call vfs_poll() to get
* the events, so inline completion isn't possible.
* 2. The completion work must not have already been scheduled.
* 3. ctx_lock must not be busy. We have to use trylock because we
* already hold the waitqueue lock, so this inverts the normal
* locking order. Use irqsave/irqrestore because not all
* filesystems (e.g. fuse) call this function with IRQs disabled,
* yet IRQs have to be disabled before ctx_lock is obtained.
*/
if (mask && !req->work_scheduled &&
spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
struct kioctx *ctx = iocb->ki_ctx;

/*
* Try to complete the iocb inline if we can. Use
* irqsave/irqrestore because not all filesystems (e.g. fuse)
* call this function with IRQs disabled and because IRQs
* have to be disabled before ctx_lock is obtained.
*/
list_del_init(&req->wait.entry);
list_del(&iocb->ki_list);
iocb->ki_res.res = mangle_poll(mask);
req->done = true;
if (iocb->ki_eventfd && eventfd_signal_count()) {
iocb = NULL;
INIT_WORK(&req->work, aio_poll_put_work);
@@ -1710,7 +1776,43 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
if (iocb)
iocb_put(iocb);
} else {
schedule_work(&req->work);
/*
* Schedule the completion work if needed. If it was already
* scheduled, record that another wakeup came in.
*
* Don't remove the request from the waitqueue here, as it might
* not actually be complete yet (we won't know until vfs_poll()
* is called), and we must not miss any wakeups. POLLFREE is an
* exception to this; see below.
*/
if (req->work_scheduled) {
req->work_need_resched = true;
} else {
schedule_work(&req->work);
req->work_scheduled = true;
}

/*
* If the waitqueue is being freed early but we can't complete
* the request inline, we have to tear down the request as best
* we can. That means immediately removing the request from its
* waitqueue and preventing all further accesses to the
* waitqueue via the request. We also need to schedule the
* completion work (done above). Also mark the request as
* cancelled, to potentially skip an unneeded call to ->poll().
*/
if (mask & POLLFREE) {
WRITE_ONCE(req->cancelled, true);
list_del_init(&req->wait.entry);

/*
* Careful: this *must* be the last step, since as soon
* as req->head is NULL'ed out, the request can be
* completed and freed, since aio_poll_complete_work()
* will no longer need to take the waitqueue lock.
*/
smp_store_release(&req->head, NULL);
}
}
return 1;
}
@@ -1718,6 +1820,7 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
struct aio_poll_table {
struct poll_table_struct pt;
struct aio_kiocb *iocb;
bool queued;
int error;
};

@@ -1728,11 +1831,12 @@ aio_poll_queue_proc(struct file *file, struct wait_queue_head *head,
struct aio_poll_table *pt = container_of(p, struct aio_poll_table, pt);

/* multiple wait queues per file are not supported */
if (unlikely(pt->iocb->poll.head)) {
if (unlikely(pt->queued)) {
pt->error = -EINVAL;
return;
}

pt->queued = true;
pt->error = 0;
pt->iocb->poll.head = head;
add_wait_queue(head, &pt->iocb->poll.wait);
@@ -1757,12 +1861,14 @@ static int aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP;

req->head = NULL;
req->done = false;
req->cancelled = false;
req->work_scheduled = false;
req->work_need_resched = false;

apt.pt._qproc = aio_poll_queue_proc;
apt.pt._key = req->events;
apt.iocb = aiocb;
apt.queued = false;
apt.error = -EINVAL; /* same as no support for IOCB_CMD_POLL */

/* initialized the list so that we can do list_empty checks */
@@ -1771,23 +1877,35 @@ static int aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)

mask = vfs_poll(req->file, &apt.pt) & req->events;
spin_lock_irq(&ctx->ctx_lock);
if (likely(req->head)) {
spin_lock(&req->head->lock);
if (unlikely(list_empty(&req->wait.entry))) {
if (apt.error)
if (likely(apt.queued)) {
bool on_queue = poll_iocb_lock_wq(req);

if (!on_queue || req->work_scheduled) {
/*
* aio_poll_wake() already either scheduled the async
* completion work, or completed the request inline.
*/
if (apt.error) /* unsupported case: multiple queues */
cancel = true;
apt.error = 0;
mask = 0;
}
if (mask || apt.error) {
/* Steal to complete synchronously. */
list_del_init(&req->wait.entry);
} else if (cancel) {
/* Cancel if possible (may be too late though). */
WRITE_ONCE(req->cancelled, true);
} else if (!req->done) { /* actually waiting for an event */
} else if (on_queue) {
/*
* Actually waiting for an event, so add the request to
* active_reqs so that it can be cancelled if needed.
*/
list_add_tail(&aiocb->ki_list, &ctx->active_reqs);
aiocb->ki_cancel = aio_poll_cancel;
}
spin_unlock(&req->head->lock);
if (on_queue)
poll_iocb_unlock_wq(req);
}
if (mask) { /* no async, we'd stolen it */
aiocb->ki_res.res = mangle_poll(mask);
@@ -3754,6 +3754,12 @@ static void set_btree_ioerr(struct page *page)
if (test_and_set_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags))
return;

/*
* A read may stumble upon this buffer later, make sure that it gets an
* error and knows there was an error.
*/
clear_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags);

/*
* If we error out, we should add back the dirty_metadata_bytes
* to make it consistent.
@@ -371,7 +371,8 @@ int btrfs_del_root_ref(struct btrfs_trans_handle *trans, u64 root_id,
key.offset = ref_id;
again:
ret = btrfs_search_slot(trans, tree_root, &key, path, -1, 1);
BUG_ON(ret < 0);
if (ret < 0)
goto out;
if (ret == 0) {
leaf = path->nodes[0];
ref = btrfs_item_ptr(leaf, path->slots[0],
@@ -2177,6 +2177,7 @@ static struct notifier_block nfsd4_cld_block = {
int
register_cld_notifier(void)
{
WARN_ON(!nfsd_net_id);
return rpc_pipefs_notifier_register(&nfsd4_cld_block);
}

@@ -1526,12 +1526,9 @@ static int __init init_nfsd(void)
int retval;
printk(KERN_INFO "Installing knfsd (copyright (C) 1996 okir@monad.swb.de).\n");

retval = register_cld_notifier();
if (retval)
return retval;
retval = nfsd4_init_slabs();
if (retval)
goto out_unregister_notifier;
return retval;
retval = nfsd4_init_pnfs();
if (retval)
goto out_free_slabs;
@@ -1549,9 +1546,14 @@ static int __init init_nfsd(void)
goto out_free_exports;
retval = register_pernet_subsys(&nfsd_net_ops);
if (retval < 0)
goto out_free_filesystem;
retval = register_cld_notifier();
if (retval)
goto out_free_all;
return 0;
out_free_all:
unregister_pernet_subsys(&nfsd_net_ops);
out_free_filesystem:
unregister_filesystem(&nfsd_fs_type);
out_free_exports:
remove_proc_entry("fs/nfs/exports", NULL);
@@ -1565,13 +1567,12 @@ out_free_stat:
nfsd4_exit_pnfs();
out_free_slabs:
nfsd4_free_slabs();
out_unregister_notifier:
unregister_cld_notifier();
return retval;
}

static void __exit exit_nfsd(void)
{
unregister_cld_notifier();
unregister_pernet_subsys(&nfsd_net_ops);
nfsd_drc_slab_free();
remove_proc_entry("fs/nfs/exports", NULL);
@@ -1582,7 +1583,6 @@ static void __exit exit_nfsd(void)
nfsd4_exit_pnfs();
nfsd_fault_inject_cleanup();
unregister_filesystem(&nfsd_fs_type);
unregister_cld_notifier();
}

MODULE_AUTHOR("Olaf Kirch <okir@monad.swb.de>");
@@ -1503,7 +1503,7 @@ static int ntfs_dir_fsync(struct file *filp, loff_t start, loff_t end,
|
||||
na.type = AT_BITMAP;
|
||||
na.name = I30;
|
||||
na.name_len = 4;
|
||||
bmp_vi = ilookup5(vi->i_sb, vi->i_ino, (test_t)ntfs_test_inode, &na);
|
||||
bmp_vi = ilookup5(vi->i_sb, vi->i_ino, ntfs_test_inode, &na);
|
||||
if (bmp_vi) {
|
||||
write_inode_now(bmp_vi, !datasync);
|
||||
iput(bmp_vi);
|
||||
|
||||
@@ -30,10 +30,10 @@
|
||||
/**
|
||||
* ntfs_test_inode - compare two (possibly fake) inodes for equality
|
||||
* @vi: vfs inode which to test
|
||||
* @na: ntfs attribute which is being tested with
|
||||
* @data: data which is being tested with
|
||||
*
|
||||
* Compare the ntfs attribute embedded in the ntfs specific part of the vfs
|
||||
* inode @vi for equality with the ntfs attribute @na.
|
||||
* inode @vi for equality with the ntfs attribute @data.
|
||||
*
|
||||
* If searching for the normal file/directory inode, set @na->type to AT_UNUSED.
|
||||
* @na->name and @na->name_len are then ignored.
|
||||
@@ -43,8 +43,9 @@
|
||||
* NOTE: This function runs with the inode_hash_lock spin lock held so it is not
|
||||
* allowed to sleep.
|
||||
*/
|
||||
int ntfs_test_inode(struct inode *vi, ntfs_attr *na)
|
||||
int ntfs_test_inode(struct inode *vi, void *data)
|
||||
{
|
||||
ntfs_attr *na = (ntfs_attr *)data;
|
||||
ntfs_inode *ni;
|
||||
|
||||
if (vi->i_ino != na->mft_no)
|
||||
@@ -72,9 +73,9 @@ int ntfs_test_inode(struct inode *vi, ntfs_attr *na)
|
||||
/**
|
||||
* ntfs_init_locked_inode - initialize an inode
|
||||
* @vi: vfs inode to initialize
|
||||
* @na: ntfs attribute which to initialize @vi to
|
||||
* @data: data which to initialize @vi to
|
||||
*
|
||||
* Initialize the vfs inode @vi with the values from the ntfs attribute @na in
|
||||
* Initialize the vfs inode @vi with the values from the ntfs attribute @data in
|
||||
* order to enable ntfs_test_inode() to do its work.
|
||||
*
|
||||
* If initializing the normal file/directory inode, set @na->type to AT_UNUSED.
|
||||
@@ -87,8 +88,9 @@ int ntfs_test_inode(struct inode *vi, ntfs_attr *na)
  * NOTE: This function runs with the inode->i_lock spin lock held so it is not
  * allowed to sleep. (Hence the GFP_ATOMIC allocation.)
  */
-static int ntfs_init_locked_inode(struct inode *vi, ntfs_attr *na)
+static int ntfs_init_locked_inode(struct inode *vi, void *data)
 {
+	ntfs_attr *na = (ntfs_attr *)data;
 	ntfs_inode *ni = NTFS_I(vi);
 
 	vi->i_ino = na->mft_no;
@@ -131,7 +133,6 @@ static int ntfs_init_locked_inode(struct inode *vi, ntfs_attr *na)
 	return 0;
 }
 
-typedef int (*set_t)(struct inode *, void *);
 static int ntfs_read_locked_inode(struct inode *vi);
 static int ntfs_read_locked_attr_inode(struct inode *base_vi, struct inode *vi);
 static int ntfs_read_locked_index_inode(struct inode *base_vi,
@@ -164,8 +165,8 @@ struct inode *ntfs_iget(struct super_block *sb, unsigned long mft_no)
 	na.name = NULL;
 	na.name_len = 0;
 
-	vi = iget5_locked(sb, mft_no, (test_t)ntfs_test_inode,
-			(set_t)ntfs_init_locked_inode, &na);
+	vi = iget5_locked(sb, mft_no, ntfs_test_inode,
+			ntfs_init_locked_inode, &na);
 	if (unlikely(!vi))
 		return ERR_PTR(-ENOMEM);
 
@@ -225,8 +226,8 @@ struct inode *ntfs_attr_iget(struct inode *base_vi, ATTR_TYPE type,
 	na.name = name;
 	na.name_len = name_len;
 
-	vi = iget5_locked(base_vi->i_sb, na.mft_no, (test_t)ntfs_test_inode,
-			(set_t)ntfs_init_locked_inode, &na);
+	vi = iget5_locked(base_vi->i_sb, na.mft_no, ntfs_test_inode,
+			ntfs_init_locked_inode, &na);
 	if (unlikely(!vi))
 		return ERR_PTR(-ENOMEM);
 
@@ -280,8 +281,8 @@ struct inode *ntfs_index_iget(struct inode *base_vi, ntfschar *name,
 	na.name = name;
 	na.name_len = name_len;
 
-	vi = iget5_locked(base_vi->i_sb, na.mft_no, (test_t)ntfs_test_inode,
-			(set_t)ntfs_init_locked_inode, &na);
+	vi = iget5_locked(base_vi->i_sb, na.mft_no, ntfs_test_inode,
+			ntfs_init_locked_inode, &na);
 	if (unlikely(!vi))
 		return ERR_PTR(-ENOMEM);
 
@@ -253,9 +253,7 @@ typedef struct {
 	ATTR_TYPE type;
 } ntfs_attr;
 
-typedef int (*test_t)(struct inode *, void *);
-
-extern int ntfs_test_inode(struct inode *vi, ntfs_attr *na);
+extern int ntfs_test_inode(struct inode *vi, void *data);
 
 extern struct inode *ntfs_iget(struct super_block *sb, unsigned long mft_no);
 extern struct inode *ntfs_attr_iget(struct inode *base_vi, ATTR_TYPE type,
@@ -958,7 +958,7 @@ bool ntfs_may_write_mft_record(ntfs_volume *vol, const unsigned long mft_no,
 		 * dirty code path of the inode dirty code path when writing
 		 * $MFT occurs.
 		 */
-		vi = ilookup5_nowait(sb, mft_no, (test_t)ntfs_test_inode, &na);
+		vi = ilookup5_nowait(sb, mft_no, ntfs_test_inode, &na);
 	}
 	if (vi) {
 		ntfs_debug("Base inode 0x%lx is in icache.", mft_no);
@@ -1019,7 +1019,7 @@ bool ntfs_may_write_mft_record(ntfs_volume *vol, const unsigned long mft_no,
 		vi = igrab(mft_vi);
 		BUG_ON(vi != mft_vi);
 	} else
-		vi = ilookup5_nowait(sb, na.mft_no, (test_t)ntfs_test_inode,
+		vi = ilookup5_nowait(sb, na.mft_no, ntfs_test_inode,
 				&na);
 	if (!vi) {
 		/*
@@ -35,17 +35,7 @@
 
 void signalfd_cleanup(struct sighand_struct *sighand)
 {
-	wait_queue_head_t *wqh = &sighand->signalfd_wqh;
-	/*
-	 * The lockless check can race with remove_wait_queue() in progress,
-	 * but in this case its caller should run under rcu_read_lock() and
-	 * sighand_cachep is SLAB_TYPESAFE_BY_RCU, we can safely return.
-	 */
-	if (likely(!waitqueue_active(wqh)))
-		return;
-
-	/* wait_queue_entry_t->func(POLLFREE) should do remove_wait_queue() */
-	wake_up_poll(wqh, EPOLLHUP | POLLFREE);
+	wake_up_pollfree(&sighand->signalfd_wqh);
 }
 
 struct signalfd_ctx {
@@ -159,6 +159,77 @@ struct tracefs_fs_info {
 	struct tracefs_mount_opts mount_opts;
 };
 
+static void change_gid(struct dentry *dentry, kgid_t gid)
+{
+	if (!dentry->d_inode)
+		return;
+	dentry->d_inode->i_gid = gid;
+}
+
+/*
+ * Taken from d_walk, but without he need for handling renames.
+ * Nothing can be renamed while walking the list, as tracefs
+ * does not support renames. This is only called when mounting
+ * or remounting the file system, to set all the files to
+ * the given gid.
+ */
+static void set_gid(struct dentry *parent, kgid_t gid)
+{
+	struct dentry *this_parent;
+	struct list_head *next;
+
+	this_parent = parent;
+	spin_lock(&this_parent->d_lock);
+
+	change_gid(this_parent, gid);
+repeat:
+	next = this_parent->d_subdirs.next;
+resume:
+	while (next != &this_parent->d_subdirs) {
+		struct list_head *tmp = next;
+		struct dentry *dentry = list_entry(tmp, struct dentry, d_child);
+		next = tmp->next;
+
+		spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
+
+		change_gid(dentry, gid);
+
+		if (!list_empty(&dentry->d_subdirs)) {
+			spin_unlock(&this_parent->d_lock);
+			spin_release(&dentry->d_lock.dep_map, 1, _RET_IP_);
+			this_parent = dentry;
+			spin_acquire(&this_parent->d_lock.dep_map, 0, 1, _RET_IP_);
+			goto repeat;
+		}
+		spin_unlock(&dentry->d_lock);
+	}
+	/*
+	 * All done at this level ... ascend and resume the search.
+	 */
+	rcu_read_lock();
+ascend:
+	if (this_parent != parent) {
+		struct dentry *child = this_parent;
+		this_parent = child->d_parent;
+
+		spin_unlock(&child->d_lock);
+		spin_lock(&this_parent->d_lock);
+
+		/* go into the first sibling still alive */
+		do {
+			next = child->d_child.next;
+			if (next == &this_parent->d_subdirs)
+				goto ascend;
+			child = list_entry(next, struct dentry, d_child);
+		} while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED));
+		rcu_read_unlock();
+		goto resume;
+	}
+	rcu_read_unlock();
+	spin_unlock(&this_parent->d_lock);
+	return;
+}
+
 static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)
 {
 	substring_t args[MAX_OPT_ARGS];
@@ -191,6 +262,7 @@ static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)
 			if (!gid_valid(gid))
 				return -EINVAL;
 			opts->gid = gid;
+			set_gid(tracefs_mount->mnt_root, gid);
 			break;
 		case Opt_mode:
 			if (match_octal(&args[0], &option))
@@ -409,6 +481,8 @@ struct dentry *tracefs_create_file(const char *name, umode_t mode,
 	inode->i_mode = mode;
 	inode->i_fop = fops ? fops : &tracefs_file_operations;
 	inode->i_private = data;
+	inode->i_uid = d_inode(dentry->d_parent)->i_uid;
+	inode->i_gid = d_inode(dentry->d_parent)->i_gid;
 	d_instantiate(dentry, inode);
 	fsnotify_create(dentry->d_parent->d_inode, dentry);
 	return end_creating(dentry);
@@ -431,6 +505,8 @@ static struct dentry *__create_dir(const char *name, struct dentry *parent,
 	inode->i_mode = S_IFDIR | S_IRWXU | S_IRUSR| S_IRGRP | S_IXUSR | S_IXGRP;
 	inode->i_op = ops;
 	inode->i_fop = &simple_dir_operations;
+	inode->i_uid = d_inode(dentry->d_parent)->i_uid;
+	inode->i_gid = d_inode(dentry->d_parent)->i_gid;
 
 	/* directory inodes start off with i_nlink == 2 (for "." entry) */
 	inc_nlink(inode);
@@ -837,6 +837,11 @@ static inline bool hid_is_using_ll_driver(struct hid_device *hdev,
 	return hdev->ll_driver == driver;
 }
 
+static inline bool hid_is_usb(struct hid_device *hdev)
+{
+	return hid_is_using_ll_driver(hdev, &usb_hid_driver);
+}
+
 #define	PM_HINT_FULLON	1<<5
 #define PM_HINT_NORMAL	1<<1
 
@@ -204,6 +204,7 @@ void __wake_up_locked_key_bookmark(struct wait_queue_head *wq_head,
 void __wake_up_sync_key(struct wait_queue_head *wq_head, unsigned int mode, int nr, void *key);
 void __wake_up_locked(struct wait_queue_head *wq_head, unsigned int mode, int nr);
 void __wake_up_sync(struct wait_queue_head *wq_head, unsigned int mode, int nr);
+void __wake_up_pollfree(struct wait_queue_head *wq_head);
 
 #define wake_up(x)			__wake_up(x, TASK_NORMAL, 1, NULL)
 #define wake_up_nr(x, nr)		__wake_up(x, TASK_NORMAL, nr, NULL)
@@ -230,6 +231,31 @@ void __wake_up_sync(struct wait_queue_head *wq_head, unsigned int mode, int nr);
 #define wake_up_interruptible_sync_poll(x, m)					\
 	__wake_up_sync_key((x), TASK_INTERRUPTIBLE, 1, poll_to_key(m))
 
+/**
+ * wake_up_pollfree - signal that a polled waitqueue is going away
+ * @wq_head: the wait queue head
+ *
+ * In the very rare cases where a ->poll() implementation uses a waitqueue whose
+ * lifetime is tied to a task rather than to the 'struct file' being polled,
+ * this function must be called before the waitqueue is freed so that
+ * non-blocking polls (e.g. epoll) are notified that the queue is going away.
+ *
+ * The caller must also RCU-delay the freeing of the wait_queue_head, e.g. via
+ * an explicit synchronize_rcu() or call_rcu(), or via SLAB_TYPESAFE_BY_RCU.
+ */
+static inline void wake_up_pollfree(struct wait_queue_head *wq_head)
+{
+	/*
+	 * For performance reasons, we don't always take the queue lock here.
+	 * Therefore, we might race with someone removing the last entry from
+	 * the queue, and proceed while they still hold the queue lock.
+	 * However, rcu_read_lock() is required to be held in such cases, so we
+	 * can safely proceed with an RCU-delayed free.
+	 */
+	if (waitqueue_active(wq_head))
+		__wake_up_pollfree(wq_head);
+}
+
 #define ___wait_cond_timeout(condition)						\
 ({										\
 	bool __cond = (condition);						\
 
@@ -126,7 +126,7 @@ struct tlb_slave_info {
 struct alb_bond_info {
 	struct tlb_client_info	*tx_hashtbl; /* Dynamically allocated */
 	u32			unbalanced_load;
-	int			tx_rebalance_counter;
+	atomic_t		tx_rebalance_counter;
 	int			lp_counter;
 	/* -------- rlb parameters -------- */
 	int rlb_enabled;
 
Some files were not shown because too many files have changed in this diff.