Merge 4.9.217 into android-4.9-q
Changes in 4.9.217
    NFS: Remove superfluous kmap in nfs_readdir_xdr_to_array
    phy: Revert toggling reset changes.
    net: phy: Avoid multiple suspends
    cgroup, netclassid: periodically release file_lock on classid updating
    gre: fix uninit-value in __iptunnel_pull_header
    ipv6/addrconf: call ipv6_mc_up() for non-Ethernet interface
    net: macsec: update SCI upon MAC address change.
    net: nfc: fix bounds checking bugs on "pipe"
    r8152: check disconnect status after long sleep
    bnxt_en: reinitialize IRQs when MTU is modified
    fib: add missing attribute validation for tun_id
    nl802154: add missing attribute validation
    nl802154: add missing attribute validation for dev_type
    macsec: add missing attribute validation for port
    net: fq: add missing attribute validation for orphan mask
    team: add missing attribute validation for port ifindex
    team: add missing attribute validation for array index
    nfc: add missing attribute validation for SE API
    nfc: add missing attribute validation for vendor subcommand
    ipvlan: add cond_resched_rcu() while processing muticast backlog
    ipvlan: do not add hardware address of master to its unicast filter list
    ipvlan: egress mcast packets are not exceptional
    ipvlan: do not use cond_resched_rcu() in ipvlan_process_multicast()
    ipvlan: don't deref eth hdr before checking it's set
    macvlan: add cond_resched() during multicast processing
    net: fec: validate the new settings in fec_enet_set_coalesce()
    slip: make slhc_compress() more robust against malicious packets
    bonding/alb: make sure arp header is pulled before accessing it
    cgroup: memcg: net: do not associate sock with unrelated cgroup
    net: phy: fix MDIO bus PM PHY resuming
    virtio-blk: fix hw_queue stopped on arbitrary error
    iommu/vt-d: quirk_ioat_snb_local_iommu: replace WARN_TAINT with pr_warn + add_taint
    workqueue: don't use wq_select_unbound_cpu() for bound works
    drm/amd/display: remove duplicated assignment to grph_obj_type
    cifs_atomic_open(): fix double-put on late allocation failure
    gfs2_atomic_open(): fix O_EXCL|O_CREAT handling on cold dcache
    KVM: x86: clear stale x86_emulate_ctxt->intercept value
    ARC: define __ALIGN_STR and __ALIGN symbols for ARC
    efi: Fix a race and a buffer overflow while reading efivars via sysfs
    iommu/vt-d: dmar: replace WARN_TAINT with pr_warn + add_taint
    iommu/vt-d: Fix a bug in intel_iommu_iova_to_phys() for huge page
    nl80211: add missing attribute validation for critical protocol indication
    nl80211: add missing attribute validation for beacon report scanning
    nl80211: add missing attribute validation for channel switch
    netfilter: cthelper: add missing attribute validation for cthelper
    mwifiex: Fix heap overflow in mmwifiex_process_tdls_action_frame()
    iommu/vt-d: Fix the wrong printing in RHSA parsing
    iommu/vt-d: Ignore devices with out-of-spec domain number
    ipv6: restrict IPV6_ADDRFORM operation
    efi: Add a sanity check to efivar_store_raw()
    batman-adv: Fix double free during fragment merge error
    batman-adv: Fix transmission of final, 16th fragment
    batman-adv: Initialize gw sel_class via batadv_algo
    batman-adv: Fix rx packet/bytes stats on local ARP reply
    batman-adv: Use default throughput value on cfg80211 error
    batman-adv: Accept only filled wifi station info
    batman-adv: fix TT sync flag inconsistencies
    batman-adv: Avoid spurious warnings from bat_v neigh_cmp implementation
    batman-adv: Always initialize fragment header priority
    batman-adv: Fix check of retrieved orig_gw in batadv_v_gw_is_eligible
    batman-adv: Fix lock for ogm cnt access in batadv_iv_ogm_calc_tq
    batman-adv: Fix internal interface indices types
    batman-adv: Avoid race in TT TVLV allocator helper
    batman-adv: Fix TT sync flags for intermediate TT responses
    batman-adv: prevent TT request storms by not sending inconsistent TT TLVLs
    batman-adv: Fix debugfs path for renamed hardif
    batman-adv: Fix debugfs path for renamed softif
    batman-adv: Avoid storing non-TT-sync flags on singular entries too
    batman-adv: Fix multicast TT issues with bogus ROAM flags
    batman-adv: Prevent duplicated gateway_node entry
    batman-adv: Fix duplicated OGMs on NETDEV_UP
    batman-adv: Avoid free/alloc race when handling OGM2 buffer
    batman-adv: Avoid free/alloc race when handling OGM buffer
    batman-adv: Don't schedule OGM for disabled interface
    batman-adv: update data pointers after skb_cow()
    batman-adv: Avoid probe ELP information leak
    batman-adv: Use explicit tvlv padding for ELP packets
    perf/amd/uncore: Replace manual sampling check with CAP_NO_INTERRUPT flag
    ACPI: watchdog: Allow disabling WDAT at boot
    HID: apple: Add support for recent firmware on Magic Keyboards
    HID: i2c-hid: add Trekstor Surfbook E11B to descriptor override
    cfg80211: check reg_rule for NULL in handle_channel_custom()
    net: ks8851-ml: Fix IRQ handling and locking
    mac80211: rx: avoid RCU list traversal under mutex
    signal: avoid double atomic counter increments for user accounting
    jbd2: fix data races at struct journal_head
    ARM: 8957/1: VDSO: Match ARMv8 timer in cntvct_functional()
    ARM: 8958/1: rename missed uaccess .fixup section
    mm: slub: add missing TID bump in kmem_cache_alloc_bulk()
    ipv4: ensure rcu_read_lock() in cipso_v4_error()
    Linux 4.9.217

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ia7aeed273cd7548dc8d0dfaaad8b96bedfe499b1
@@ -596,3 +596,10 @@ in your dentry operations instead.
 [mandatory]
 	->rename() has an added flags argument. Any flags not handled by the
 	filesystem should result in EINVAL being returned.
+--
+[mandatory]
+
+	[should've been added in 2016] stale comment in finish_open()
+	nonwithstanding, failure exits in ->atomic_open() instances should
+	*NOT* fput() the file, no matter what. Everything is handled by the
+	caller.
@@ -336,6 +336,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			dynamic table installation which will install SSDT
 			tables to /sys/firmware/acpi/tables/dynamic.
 
+	acpi_no_watchdog	[HW,ACPI,WDT]
+			Ignore the ACPI-based watchdog interface (WDAT) and let
+			a native driver control the watchdog device instead.
+
 	acpi_rsdp=	[ACPI,EFI,KEXEC]
 			Pass the RSDP address to the kernel, mostly used
 			on machines running EFI runtime service to boot the
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 9
-SUBLEVEL = 216
+SUBLEVEL = 217
 EXTRAVERSION =
 NAME = Roaring Lionus
 
@@ -14,6 +14,8 @@
 #ifdef __ASSEMBLY__
 
 #define ASM_NL		 `	/* use '`' to mark new line in macro */
+#define __ALIGN		.align 4
+#define __ALIGN_STR	__stringify(__ALIGN)
 
 /* annotation for data we want in DCCM - if enabled in .config */
 .macro ARCFP_DATA nm
@@ -85,6 +85,8 @@ static bool __init cntvct_functional(void)
 	 * this.
 	 */
 	np = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
+	if (!np)
+		np = of_find_compatible_node(NULL, NULL, "arm,armv8-timer");
 	if (!np)
 		goto out_put;
 
@@ -100,7 +100,7 @@ ENTRY(arm_copy_from_user)
 
 ENDPROC(arm_copy_from_user)
 
-	.pushsection .fixup,"ax"
+	.pushsection .text.fixup,"ax"
 	.align 0
 	copy_abort_preamble
 	ldmfd	sp!, {r1, r2, r3}
@@ -185,20 +185,18 @@ static int amd_uncore_event_init(struct perf_event *event)
 
 	/*
 	 * NB and Last level cache counters (MSRs) are shared across all cores
-	 * that share the same NB / Last level cache. Interrupts can be directed
-	 * to a single target core, however, event counts generated by processes
-	 * running on other cores cannot be masked out. So we do not support
-	 * sampling and per-thread events.
+	 * that share the same NB / Last level cache. On family 16h and below,
+	 * Interrupts can be directed to a single target core, however, event
+	 * counts generated by processes running on other cores cannot be masked
+	 * out. So we do not support sampling and per-thread events via
+	 * CAP_NO_INTERRUPT, and we do not enable counter overflow interrupts:
 	 */
-	if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK)
-		return -EINVAL;
-
 	/* NB and Last level cache counters do not have usr/os/guest/host bits */
 	if (event->attr.exclude_user || event->attr.exclude_kernel ||
 	    event->attr.exclude_host || event->attr.exclude_guest)
 		return -EINVAL;
 
-	/* and we do not enable counter overflow interrupts */
 	hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB;
 	hwc->idx = -1;
 
@@ -275,6 +273,7 @@ static struct pmu amd_nb_pmu = {
 	.start		= amd_uncore_start,
 	.stop		= amd_uncore_stop,
 	.read		= amd_uncore_read,
+	.capabilities	= PERF_PMU_CAP_NO_INTERRUPT,
 };
 
 static struct pmu amd_llc_pmu = {
@@ -287,6 +286,7 @@ static struct pmu amd_llc_pmu = {
 	.start		= amd_uncore_start,
 	.stop		= amd_uncore_stop,
 	.read		= amd_uncore_read,
+	.capabilities	= PERF_PMU_CAP_NO_INTERRUPT,
 };
 
 static struct amd_uncore *amd_uncore_alloc(unsigned int cpu)
@@ -5022,6 +5022,7 @@ int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len)
 	ctxt->fetch.ptr = ctxt->fetch.data;
 	ctxt->fetch.end = ctxt->fetch.data + insn_len;
 	ctxt->opcode_len = 1;
+	ctxt->intercept = x86_intercept_none;
 	if (insn_len > 0)
 		memcpy(ctxt->fetch.data, insn, insn_len);
 	else {
@@ -58,12 +58,14 @@ static bool acpi_watchdog_uses_rtc(const struct acpi_table_wdat *wdat)
 }
 #endif
 
+static bool acpi_no_watchdog;
+
 static const struct acpi_table_wdat *acpi_watchdog_get_wdat(void)
 {
 	const struct acpi_table_wdat *wdat = NULL;
 	acpi_status status;
 
-	if (acpi_disabled)
+	if (acpi_disabled || acpi_no_watchdog)
 		return NULL;
 
 	status = acpi_get_table(ACPI_SIG_WDAT, 0,
@@ -91,6 +93,14 @@ bool acpi_has_watchdog(void)
 }
 EXPORT_SYMBOL_GPL(acpi_has_watchdog);
 
+/* ACPI watchdog can be disabled on boot command line */
+static int __init disable_acpi_watchdog(char *str)
+{
+	acpi_no_watchdog = true;
+	return 1;
+}
+__setup("acpi_no_watchdog", disable_acpi_watchdog);
+
 void __init acpi_watchdog_init(void)
 {
 	const struct acpi_wdat_entry *entries;
@@ -215,10 +215,12 @@ static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 	err = __virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
 	if (err) {
 		virtqueue_kick(vblk->vqs[qid].vq);
-		blk_mq_stop_hw_queue(hctx);
+		/* Don't stop the queue if -ENOMEM: we may have failed to
+		 * bounce the buffer due to global resource outage.
+		 */
+		if (err == -ENOSPC)
+			blk_mq_stop_hw_queue(hctx);
 		spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
-		/* Out of mem doesn't actually happen, since we fall back
-		 * to direct descriptors */
 		if (err == -ENOMEM || err == -ENOSPC)
 			return BLK_MQ_RQ_QUEUE_BUSY;
 		return BLK_MQ_RQ_QUEUE_ERROR;
@@ -139,13 +139,16 @@ static ssize_t
 efivar_attr_read(struct efivar_entry *entry, char *buf)
 {
 	struct efi_variable *var = &entry->var;
+	unsigned long size = sizeof(var->Data);
 	char *str = buf;
+	int ret;
 
 	if (!entry || !buf)
 		return -EINVAL;
 
-	var->DataSize = 1024;
-	if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data))
+	ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data);
+	var->DataSize = size;
+	if (ret)
 		return -EIO;
 
 	if (var->Attributes & EFI_VARIABLE_NON_VOLATILE)
@@ -172,13 +175,16 @@ static ssize_t
 efivar_size_read(struct efivar_entry *entry, char *buf)
 {
 	struct efi_variable *var = &entry->var;
+	unsigned long size = sizeof(var->Data);
 	char *str = buf;
+	int ret;
 
 	if (!entry || !buf)
 		return -EINVAL;
 
-	var->DataSize = 1024;
-	if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data))
+	ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data);
+	var->DataSize = size;
+	if (ret)
 		return -EIO;
 
 	str += sprintf(str, "0x%lx\n", var->DataSize);
@@ -189,12 +195,15 @@ static ssize_t
 efivar_data_read(struct efivar_entry *entry, char *buf)
 {
 	struct efi_variable *var = &entry->var;
+	unsigned long size = sizeof(var->Data);
+	int ret;
 
 	if (!entry || !buf)
 		return -EINVAL;
 
-	var->DataSize = 1024;
-	if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data))
+	ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data);
+	var->DataSize = size;
+	if (ret)
 		return -EIO;
 
 	memcpy(buf, var->Data, var->DataSize);
@@ -263,6 +272,9 @@ efivar_store_raw(struct efivar_entry *entry, const char *buf, size_t count)
 	u8 *data;
 	int err;
 
+	if (!entry || !buf)
+		return -EINVAL;
+
 	if (is_compat()) {
 		struct compat_efi_variable *compat;
 
@@ -314,14 +326,16 @@ efivar_show_raw(struct efivar_entry *entry, char *buf)
 {
 	struct efi_variable *var = &entry->var;
 	struct compat_efi_variable *compat;
+	unsigned long datasize = sizeof(var->Data);
 	size_t size;
+	int ret;
 
 	if (!entry || !buf)
 		return 0;
 
-	var->DataSize = 1024;
-	if (efivar_entry_get(entry, &entry->var.Attributes,
-			     &entry->var.DataSize, entry->var.Data))
+	ret = efivar_entry_get(entry, &var->Attributes, &datasize, var->Data);
+	var->DataSize = datasize;
+	if (ret)
 		return -EIO;
 
 	if (is_compat()) {
@@ -363,8 +363,7 @@ bool amdgpu_atombios_get_connector_info_from_object_table(struct amdgpu_device *
 			router.ddc_valid = false;
 			router.cd_valid = false;
 			for (j = 0; j < ((le16_to_cpu(path->usSize) - 8) / 2); j++) {
-				uint8_t grph_obj_type=
-				grph_obj_type =
+				uint8_t grph_obj_type =
 				(le16_to_cpu(path->usGraphicObjIds[j]) &
 				OBJECT_TYPE_MASK) >> OBJECT_TYPE_SHIFT;
 
@@ -341,7 +341,8 @@ static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi,
 		unsigned long **bit, int *max)
 {
 	if (usage->hid == (HID_UP_CUSTOM | 0x0003) ||
-	    usage->hid == (HID_UP_MSVENDOR | 0x0003)) {
+	    usage->hid == (HID_UP_MSVENDOR | 0x0003) ||
+	    usage->hid == (HID_UP_HPVENDOR2 | 0x0003)) {
 		/* The fn key on Apple USB keyboards */
 		set_bit(EV_REP, hi->input->evbit);
 		hid_map_usage_clear(hi, usage, bit, max, EV_KEY, KEY_FN);
@@ -341,6 +341,14 @@ static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
 		},
 		.driver_data = (void *)&sipodev_desc
 	},
+	{
+		.ident = "Trekstor SURFBOOK E11B",
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TREKSTOR"),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SURFBOOK E11B"),
+		},
+		.driver_data = (void *)&sipodev_desc
+	},
 	{
 		.ident = "Direkt-Tek DTLAPY116-2",
 		.matches = {
@@ -39,6 +39,7 @@
 #include <linux/dmi.h>
 #include <linux/slab.h>
 #include <linux/iommu.h>
+#include <linux/limits.h>
 #include <asm/irq_remapping.h>
 #include <asm/iommu_table.h>
 
@@ -138,6 +139,13 @@ dmar_alloc_pci_notify_info(struct pci_dev *dev, unsigned long event)
 
 	BUG_ON(dev->is_virtfn);
 
+	/*
+	 * Ignore devices that have a domain number higher than what can
+	 * be looked up in DMAR, e.g. VMD subdevices with domain 0x10000
+	 */
+	if (pci_domain_nr(dev->bus) > U16_MAX)
+		return NULL;
+
 	/* Only generate path[] for device addition event */
 	if (event == BUS_NOTIFY_ADD_DEVICE)
 		for (tmp = dev; tmp; tmp = tmp->bus->self)
@@ -450,12 +458,13 @@ static int __init dmar_parse_one_andd(struct acpi_dmar_header *header,
 
 	/* Check for NUL termination within the designated length */
 	if (strnlen(andd->device_name, header->length - 8) == header->length - 8) {
-		WARN_TAINT(1, TAINT_FIRMWARE_WORKAROUND,
+		pr_warn(FW_BUG
 			"Your BIOS is broken; ANDD object name is not NUL-terminated\n"
 			"BIOS vendor: %s; Ver: %s; Product Version: %s\n",
 			dmi_get_system_info(DMI_BIOS_VENDOR),
 			dmi_get_system_info(DMI_BIOS_VERSION),
 			dmi_get_system_info(DMI_PRODUCT_VERSION));
+		add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
 		return -EINVAL;
 	}
 	pr_info("ANDD device: %x name: %s\n", andd->device_number,
@@ -481,14 +490,14 @@ static int dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
 			return 0;
 		}
 	}
-	WARN_TAINT(
-		1, TAINT_FIRMWARE_WORKAROUND,
+	pr_warn(FW_BUG
 		"Your BIOS is broken; RHSA refers to non-existent DMAR unit at %llx\n"
 		"BIOS vendor: %s; Ver: %s; Product Version: %s\n",
-		drhd->reg_base_addr,
+		rhsa->base_address,
 		dmi_get_system_info(DMI_BIOS_VENDOR),
 		dmi_get_system_info(DMI_BIOS_VERSION),
 		dmi_get_system_info(DMI_PRODUCT_VERSION));
+	add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
 
 	return 0;
 }
@@ -834,14 +843,14 @@ int __init dmar_table_init(void)
 
 static void warn_invalid_dmar(u64 addr, const char *message)
 {
-	WARN_TAINT_ONCE(
-		1, TAINT_FIRMWARE_WORKAROUND,
+	pr_warn_once(FW_BUG
 		"Your BIOS is broken; DMAR reported at address %llx%s!\n"
 		"BIOS vendor: %s; Ver: %s; Product Version: %s\n",
 		addr, message,
 		dmi_get_system_info(DMI_BIOS_VENDOR),
 		dmi_get_system_info(DMI_BIOS_VERSION),
 		dmi_get_system_info(DMI_PRODUCT_VERSION));
+	add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
 }
 
 static int __ref
@@ -4085,10 +4085,11 @@ static void quirk_ioat_snb_local_iommu(struct pci_dev *pdev)
 
 	/* we know that the this iommu should be at offset 0xa000 from vtbar */
 	drhd = dmar_find_matched_drhd_unit(pdev);
-	if (WARN_TAINT_ONCE(!drhd || drhd->reg_base_addr - vtbar != 0xa000,
-			    TAINT_FIRMWARE_WORKAROUND,
-			    "BIOS assigned incorrect VT-d unit for Intel(R) QuickData Technology device\n"))
+	if (!drhd || drhd->reg_base_addr - vtbar != 0xa000) {
+		pr_warn_once(FW_BUG "BIOS assigned incorrect VT-d unit for Intel(R) QuickData Technology device\n");
+		add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
 		pdev->dev.archdata.iommu = DUMMY_DEVICE_DOMAIN_INFO;
+	}
 }
 DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB, quirk_ioat_snb_local_iommu);
 
@@ -5192,8 +5193,10 @@ static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
 	u64 phys = 0;
 
 	pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level);
-	if (pte)
-		phys = dma_pte_addr(pte);
+	if (pte && dma_pte_present(pte))
+		phys = dma_pte_addr(pte) +
+			(iova & (BIT_MASK(level_to_offset_bits(level) +
+				VTD_PAGE_SHIFT) - 1));
 
 	return phys;
 }
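The intel_iommu_iova_to_phys() fix above adds back the in-page offset that a huge-page PTE does not translate. The standalone sketch below mirrors only that offset arithmetic; it is not the kernel code, and it assumes VT-d's 4 KiB base pages with 9 address bits per paging level (level 1 = 4 KiB, level 2 = 2 MiB, level 3 = 1 GiB).

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mirror of the fixed translation: combine the PTE's
 * (aligned) base address with the IOVA bits below the level's
 * translation granularity, instead of returning the bare base. */
static uint64_t iova_to_phys_sketch(uint64_t pte_base, uint64_t iova, int level)
{
	int offset_bits = 12 + 9 * (level - 1);		/* bits the PTE leaves untranslated */
	uint64_t mask = (1ULL << offset_bits) - 1;

	return (pte_base & ~mask) + (iova & mask);
}
```

With a 2 MiB mapping (level 2), an IOVA pointing 0x1ff123 bytes into the page resolves to the PTE base plus that offset, which the pre-fix code dropped.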
@@ -71,11 +71,6 @@ struct arp_pkt {
 };
 #pragma pack()
 
-static inline struct arp_pkt *arp_pkt(const struct sk_buff *skb)
-{
-	return (struct arp_pkt *)skb_network_header(skb);
-}
-
 /* Forward declaration */
 static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[],
 				      bool strict_match);
@@ -574,10 +569,11 @@ static void rlb_req_update_subnet_clients(struct bonding *bond, __be32 src_ip)
 	spin_unlock(&bond->mode_lock);
 }
 
-static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bond)
+static struct slave *rlb_choose_channel(struct sk_buff *skb,
+					struct bonding *bond,
+					const struct arp_pkt *arp)
 {
 	struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
-	struct arp_pkt *arp = arp_pkt(skb);
 	struct slave *assigned_slave, *curr_active_slave;
 	struct rlb_client_info *client_info;
 	u32 hash_index = 0;
@@ -674,8 +670,12 @@ static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bon
  */
 static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
 {
-	struct arp_pkt *arp = arp_pkt(skb);
 	struct slave *tx_slave = NULL;
+	struct arp_pkt *arp;
+
+	if (!pskb_network_may_pull(skb, sizeof(*arp)))
+		return NULL;
+	arp = (struct arp_pkt *)skb_network_header(skb);
 
 	/* Don't modify or load balance ARPs that do not originate locally
 	 * (e.g.,arrive via a bridge).
@@ -685,7 +685,7 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
 
 	if (arp->op_code == htons(ARPOP_REPLY)) {
 		/* the arp must be sent on the selected rx channel */
-		tx_slave = rlb_choose_channel(skb, bond);
+		tx_slave = rlb_choose_channel(skb, bond, arp);
 		if (tx_slave)
 			ether_addr_copy(arp->mac_src, tx_slave->dev->dev_addr);
 		netdev_dbg(bond->dev, "Server sent ARP Reply packet\n");
@@ -695,7 +695,7 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
 		 * When the arp reply is received the entry will be updated
 		 * with the correct unicast address of the client.
 		 */
-		rlb_choose_channel(skb, bond);
+		rlb_choose_channel(skb, bond, arp);
 
 		/* The ARP reply packets must be delayed so that
 		 * they can cancel out the influence of the ARP request.
@@ -6439,13 +6439,13 @@ static int bnxt_change_mtu(struct net_device *dev, int new_mtu)
 		return -EINVAL;
 
 	if (netif_running(dev))
-		bnxt_close_nic(bp, false, false);
+		bnxt_close_nic(bp, true, false);
 
 	dev->mtu = new_mtu;
 	bnxt_set_ring_params(bp);
 
 	if (netif_running(dev))
-		return bnxt_open_nic(bp, false, false);
+		return bnxt_open_nic(bp, true, false);
 
 	return 0;
 }
@@ -2470,15 +2470,15 @@ fec_enet_set_coalesce(struct net_device *ndev, struct ethtool_coalesce *ec)
 		return -EINVAL;
 	}
 
-	cycle = fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr);
+	cycle = fec_enet_us_to_itr_clock(ndev, ec->rx_coalesce_usecs);
 	if (cycle > 0xFFFF) {
 		pr_err("Rx coalesced usec exceed hardware limitation\n");
 		return -EINVAL;
 	}
 
-	cycle = fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr);
+	cycle = fec_enet_us_to_itr_clock(ndev, ec->tx_coalesce_usecs);
 	if (cycle > 0xFFFF) {
-		pr_err("Rx coalesced usec exceed hardware limitation\n");
+		pr_err("Tx coalesced usec exceed hardware limitation\n");
 		return -EINVAL;
 	}
 
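The fec_enet_set_coalesce() hunk above moves the range check onto the value the user is requesting (ec->rx_coalesce_usecs / ec->tx_coalesce_usecs) instead of the previously stored one, so an out-of-range request can no longer slip past validation. The sketch below is a hypothetical standalone version of that limit check only; the 66 MHz ITR clock and the /64 divider are assumptions modeled on the driver's microseconds-to-cycles conversion, not its actual code.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mirror of the check: convert the *requested* usecs to
 * hardware cycles and reject anything that overflows the 16-bit
 * interrupt-throttle field. */
static int coalesce_usecs_valid(uint32_t requested_usecs)
{
	/* assumed 66 MHz ITR clock, prescaled by 64 */
	uint64_t cycles = (uint64_t)requested_usecs * (66000000ULL / 64) / 1000000;

	return cycles <= 0xFFFF;
}
```

The point of the fix is which value gets converted: validating the stale stored setting (as the old code did) says nothing about the new request.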
@@ -831,14 +831,17 @@ static irqreturn_t ks_irq(int irq, void *pw)
 {
 	struct net_device *netdev = pw;
 	struct ks_net *ks = netdev_priv(netdev);
+	unsigned long flags;
 	u16 status;
 
+	spin_lock_irqsave(&ks->statelock, flags);
 	/*this should be the first in IRQ handler */
 	ks_save_cmd_reg(ks);
 
 	status = ks_rdreg16(ks, KS_ISR);
 	if (unlikely(!status)) {
 		ks_restore_cmd_reg(ks);
+		spin_unlock_irqrestore(&ks->statelock, flags);
 		return IRQ_NONE;
 	}
 
@@ -864,6 +867,7 @@ static irqreturn_t ks_irq(int irq, void *pw)
 		ks->netdev->stats.rx_over_errors++;
 
 	/* this should be the last in IRQ handler*/
 	ks_restore_cmd_reg(ks);
+	spin_unlock_irqrestore(&ks->statelock, flags);
 	return IRQ_HANDLED;
 }
@@ -933,6 +937,7 @@ static int ks_net_stop(struct net_device *netdev)
 
 	/* shutdown RX/TX QMU */
 	ks_disable_qmu(ks);
+	ks_disable_int(ks);
 
 	/* set powermode to soft power down to save power */
 	ks_set_powermode(ks, PMECR_PM_SOFTDOWN);
@@ -989,10 +994,9 @@ static netdev_tx_t ks_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 {
 	netdev_tx_t retv = NETDEV_TX_OK;
 	struct ks_net *ks = netdev_priv(netdev);
+	unsigned long flags;
 
-	disable_irq(netdev->irq);
-	ks_disable_int(ks);
-	spin_lock(&ks->statelock);
+	spin_lock_irqsave(&ks->statelock, flags);
 
 	/* Extra space are required:
 	 * 4 byte for alignment, 4 for status/length, 4 for CRC
@@ -1006,9 +1010,7 @@ static netdev_tx_t ks_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 		dev_kfree_skb(skb);
 	} else
 		retv = NETDEV_TX_BUSY;
-	spin_unlock(&ks->statelock);
-	ks_enable_int(ks);
-	enable_irq(netdev->irq);
+	spin_unlock_irqrestore(&ks->statelock, flags);
 	return retv;
 }
 
@@ -251,6 +251,7 @@ acct:
 		} else {
 			kfree_skb(skb);
 		}
+		cond_resched();
 	}
 }
 
@@ -443,19 +444,21 @@ static int ipvlan_process_outbound(struct sk_buff *skb)
 {
-	struct ethhdr *ethh = eth_hdr(skb);
 	int ret = NET_XMIT_DROP;
 
-	/* In this mode we dont care about multicast and broadcast traffic */
-	if (is_multicast_ether_addr(ethh->h_dest)) {
-		pr_warn_ratelimited("Dropped {multi|broad}cast of type= [%x]\n",
-				    ntohs(skb->protocol));
-		kfree_skb(skb);
-		goto out;
-	}
-
 	/* The ipvlan is a pseudo-L2 device, so the packets that we receive
 	 * will have L2; which need to discarded and processed further
 	 * in the net-ns of the main-device.
 	 */
 	if (skb_mac_header_was_set(skb)) {
+		/* In this mode we dont care about
+		 * multicast and broadcast traffic */
+		struct ethhdr *ethh = eth_hdr(skb);
+
+		if (is_multicast_ether_addr(ethh->h_dest)) {
+			pr_debug_ratelimited(
+				"Dropped {multi|broad}cast of type=[%x]\n",
+				ntohs(skb->protocol));
+			kfree_skb(skb);
+			goto out;
+		}
+
 		skb_pull(skb, sizeof(*ethh));
 		skb->mac_header = (typeof(skb->mac_header))~0U;
 		skb_reset_network_header(skb);
@@ -217,7 +217,6 @@ static void ipvlan_uninit(struct net_device *dev)
 static int ipvlan_open(struct net_device *dev)
 {
 	struct ipvl_dev *ipvlan = netdev_priv(dev);
-	struct net_device *phy_dev = ipvlan->phy_dev;
 	struct ipvl_addr *addr;
 
 	if (ipvlan->port->mode == IPVLAN_MODE_L3 ||
@@ -229,7 +228,7 @@ static int ipvlan_open(struct net_device *dev)
 	list_for_each_entry(addr, &ipvlan->addrs, anode)
 		ipvlan_ht_addr_add(ipvlan, addr);
 
-	return dev_uc_add(phy_dev, phy_dev->dev_addr);
+	return 0;
 }
 
 static int ipvlan_stop(struct net_device *dev)
@@ -241,8 +240,6 @@ static int ipvlan_stop(struct net_device *dev)
 	dev_uc_unsync(phy_dev, dev);
 	dev_mc_unsync(phy_dev, dev);
 
-	dev_uc_del(phy_dev, phy_dev->dev_addr);
-
 	list_for_each_entry(addr, &ipvlan->addrs, anode)
 		ipvlan_ht_addr_del(addr);
 
@@ -2871,6 +2871,11 @@ static void macsec_dev_set_rx_mode(struct net_device *dev)
 	dev_uc_sync(real_dev, dev);
 }
 
+static sci_t dev_to_sci(struct net_device *dev, __be16 port)
+{
+	return make_sci(dev->dev_addr, port);
+}
+
 static int macsec_set_mac_address(struct net_device *dev, void *p)
 {
 	struct macsec_dev *macsec = macsec_priv(dev);
@@ -2892,6 +2897,7 @@ static int macsec_set_mac_address(struct net_device *dev, void *p)
 
 out:
 	ether_addr_copy(dev->dev_addr, addr->sa_data);
+	macsec->secy.sci = dev_to_sci(dev, MACSEC_PORT_ES);
 	return 0;
 }
 
@@ -2976,6 +2982,7 @@ static const struct device_type macsec_type = {
 
 static const struct nla_policy macsec_rtnl_policy[IFLA_MACSEC_MAX + 1] = {
 	[IFLA_MACSEC_SCI] = { .type = NLA_U64 },
+	[IFLA_MACSEC_PORT] = { .type = NLA_U16 },
 	[IFLA_MACSEC_ICV_LEN] = { .type = NLA_U8 },
 	[IFLA_MACSEC_CIPHER_SUITE] = { .type = NLA_U64 },
 	[IFLA_MACSEC_WINDOW] = { .type = NLA_U32 },
@@ -3160,11 +3167,6 @@ static bool sci_exists(struct net_device *dev, sci_t sci)
 	return false;
 }
 
-static sci_t dev_to_sci(struct net_device *dev, __be16 port)
-{
-	return make_sci(dev->dev_addr, port);
-}
-
 static int macsec_add_dev(struct net_device *dev, sci_t sci, u8 icv_len)
 {
 	struct macsec_dev *macsec = macsec_priv(dev);
@@ -309,6 +309,8 @@ static void macvlan_process_broadcast(struct work_struct *w)
 		if (src)
 			dev_put(src->dev);
 		kfree_skb(skb);
+
+		cond_resched();
 	}
 }
 
@@ -80,7 +80,7 @@ static LIST_HEAD(phy_fixup_list);
 static DEFINE_MUTEX(phy_fixup_lock);
 
 #ifdef CONFIG_PM
-static bool mdio_bus_phy_may_suspend(struct phy_device *phydev, bool suspend)
+static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
 {
 	struct device_driver *drv = phydev->mdio.dev.driver;
 	struct phy_driver *phydrv = to_phy_driver(drv);
@@ -92,11 +92,10 @@ static bool mdio_bus_phy_may_suspend(struct phy_device *phydev, bool suspend)
 	/* PHY not attached? May suspend if the PHY has not already been
 	 * suspended as part of a prior call to phy_disconnect() ->
 	 * phy_detach() -> phy_suspend() because the parent netdev might be the
-	 * MDIO bus driver and clock gated at this point. Also may resume if
-	 * PHY is not attached.
+	 * MDIO bus driver and clock gated at this point.
 	 */
 	if (!netdev)
-		return suspend ? !phydev->suspended : phydev->suspended;
+		goto out;
 
 	/* Don't suspend PHY if the attached netdev parent may wakeup.
 	 * The parent may point to a PCI device, as in tg3 driver.
@@ -111,7 +110,8 @@ static bool mdio_bus_phy_may_suspend(struct phy_device *phydev, bool suspend)
 	if (device_may_wakeup(&netdev->dev))
 		return false;
 
 	return true;
+out:
+	return !phydev->suspended;
 }
 
 static int mdio_bus_phy_suspend(struct device *dev)
@@ -126,9 +126,11 @@ static int mdio_bus_phy_suspend(struct device *dev)
 	if (phydev->attached_dev && phydev->adjust_link)
 		phy_stop_machine(phydev);
 
-	if (!mdio_bus_phy_may_suspend(phydev, true))
+	if (!mdio_bus_phy_may_suspend(phydev))
 		return 0;
 
+	phydev->suspended_by_mdio_bus = true;
+
 	return phy_suspend(phydev);
 }
 
|
||||
@@ -137,9 +139,11 @@ static int mdio_bus_phy_resume(struct device *dev)
|
||||
struct phy_device *phydev = to_phy_device(dev);
|
||||
int ret;
|
||||
|
||||
if (!mdio_bus_phy_may_suspend(phydev, false))
|
||||
if (!phydev->suspended_by_mdio_bus)
|
||||
goto no_resume;
|
||||
|
||||
phydev->suspended_by_mdio_bus = false;
|
||||
|
||||
ret = phy_resume(phydev);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
@@ -232,7 +232,7 @@ slhc_compress(struct slcompress *comp, unsigned char *icp, int isize,
 	register struct cstate *cs = lcs->next;
 	register unsigned long deltaS, deltaA;
 	register short changes = 0;
-	int hlen;
+	int nlen, hlen;
 	unsigned char new_seq[16];
 	register unsigned char *cp = new_seq;
 	struct iphdr *ip;
@@ -248,6 +248,8 @@ slhc_compress(struct slcompress *comp, unsigned char *icp, int isize,
|
||||
return isize;
|
||||
|
||||
ip = (struct iphdr *) icp;
|
||||
if (ip->version != 4 || ip->ihl < 5)
|
||||
return isize;
|
||||
|
||||
/* Bail if this packet isn't TCP, or is an IP fragment */
|
||||
if (ip->protocol != IPPROTO_TCP || (ntohs(ip->frag_off) & 0x3fff)) {
|
||||
@@ -258,10 +260,14 @@ slhc_compress(struct slcompress *comp, unsigned char *icp, int isize,
|
||||
comp->sls_o_tcp++;
|
||||
return isize;
|
||||
}
|
||||
/* Extract TCP header */
|
||||
nlen = ip->ihl * 4;
|
||||
if (isize < nlen + sizeof(*th))
|
||||
return isize;
|
||||
|
||||
th = (struct tcphdr *)(((unsigned char *)ip) + ip->ihl*4);
|
||||
hlen = ip->ihl*4 + th->doff*4;
|
||||
th = (struct tcphdr *)(icp + nlen);
|
||||
if (th->doff < sizeof(struct tcphdr) / 4)
|
||||
return isize;
|
||||
hlen = nlen + th->doff * 4;
|
||||
|
||||
/* Bail if the TCP packet isn't `compressible' (i.e., ACK isn't set or
|
||||
* some other control bit is set). Also uncompressible if
|
||||

@@ -2216,6 +2216,8 @@ team_nl_option_policy[TEAM_ATTR_OPTION_MAX + 1] = {
[TEAM_ATTR_OPTION_CHANGED] = { .type = NLA_FLAG },
[TEAM_ATTR_OPTION_TYPE] = { .type = NLA_U8 },
[TEAM_ATTR_OPTION_DATA] = { .type = NLA_BINARY },
[TEAM_ATTR_OPTION_PORT_IFINDEX] = { .type = NLA_U32 },
[TEAM_ATTR_OPTION_ARRAY_INDEX] = { .type = NLA_U32 },
};

static int team_nl_cmd_noop(struct sk_buff *skb, struct genl_info *info)

@@ -3423,7 +3423,10 @@ static void r8153_init(struct r8152 *tp)
if (ocp_read_word(tp, MCU_TYPE_PLA, PLA_BOOT_CTRL) &
AUTOLOAD_DONE)
break;

msleep(20);
if (test_bit(RTL8152_UNPLUG, &tp->flags))
break;
}

for (i = 0; i < 500; i++) {
@@ -3447,7 +3450,10 @@ static void r8153_init(struct r8152 *tp)
ocp_data = ocp_reg_read(tp, OCP_PHY_STATUS) & PHY_STAT_MASK;
if (ocp_data == PHY_STAT_LAN_ON)
break;

msleep(20);
if (test_bit(RTL8152_UNPLUG, &tp->flags))
break;
}

usb_disable_lpm(tp->udev);

@@ -917,59 +917,117 @@ void mwifiex_process_tdls_action_frame(struct mwifiex_private *priv,

switch (*pos) {
case WLAN_EID_SUPP_RATES:
if (pos[1] > 32)
return;
sta_ptr->tdls_cap.rates_len = pos[1];
for (i = 0; i < pos[1]; i++)
sta_ptr->tdls_cap.rates[i] = pos[i + 2];
break;

case WLAN_EID_EXT_SUPP_RATES:
if (pos[1] > 32)
return;
basic = sta_ptr->tdls_cap.rates_len;
if (pos[1] > 32 - basic)
return;
for (i = 0; i < pos[1]; i++)
sta_ptr->tdls_cap.rates[basic + i] = pos[i + 2];
sta_ptr->tdls_cap.rates_len += pos[1];
break;
case WLAN_EID_HT_CAPABILITY:
memcpy((u8 *)&sta_ptr->tdls_cap.ht_capb, pos,
if (pos > end - sizeof(struct ieee80211_ht_cap) - 2)
return;
if (pos[1] != sizeof(struct ieee80211_ht_cap))
return;
/* copy the ie's value into ht_capb*/
memcpy((u8 *)&sta_ptr->tdls_cap.ht_capb, pos + 2,
sizeof(struct ieee80211_ht_cap));
sta_ptr->is_11n_enabled = 1;
break;
case WLAN_EID_HT_OPERATION:
memcpy(&sta_ptr->tdls_cap.ht_oper, pos,
if (pos > end -
sizeof(struct ieee80211_ht_operation) - 2)
return;
if (pos[1] != sizeof(struct ieee80211_ht_operation))
return;
/* copy the ie's value into ht_oper*/
memcpy(&sta_ptr->tdls_cap.ht_oper, pos + 2,
sizeof(struct ieee80211_ht_operation));
break;
case WLAN_EID_BSS_COEX_2040:
if (pos > end - 3)
return;
if (pos[1] != 1)
return;
sta_ptr->tdls_cap.coex_2040 = pos[2];
break;
case WLAN_EID_EXT_CAPABILITY:
if (pos > end - sizeof(struct ieee_types_header))
return;
if (pos[1] < sizeof(struct ieee_types_header))
return;
if (pos[1] > 8)
return;
memcpy((u8 *)&sta_ptr->tdls_cap.extcap, pos,
sizeof(struct ieee_types_header) +
min_t(u8, pos[1], 8));
break;
case WLAN_EID_RSN:
if (pos > end - sizeof(struct ieee_types_header))
return;
if (pos[1] < sizeof(struct ieee_types_header))
return;
if (pos[1] > IEEE_MAX_IE_SIZE -
sizeof(struct ieee_types_header))
return;
memcpy((u8 *)&sta_ptr->tdls_cap.rsn_ie, pos,
sizeof(struct ieee_types_header) +
min_t(u8, pos[1], IEEE_MAX_IE_SIZE -
sizeof(struct ieee_types_header)));
break;
case WLAN_EID_QOS_CAPA:
if (pos > end - 3)
return;
if (pos[1] != 1)
return;
sta_ptr->tdls_cap.qos_info = pos[2];
break;
case WLAN_EID_VHT_OPERATION:
if (priv->adapter->is_hw_11ac_capable)
memcpy(&sta_ptr->tdls_cap.vhtoper, pos,
if (priv->adapter->is_hw_11ac_capable) {
if (pos > end -
sizeof(struct ieee80211_vht_operation) - 2)
return;
if (pos[1] !=
sizeof(struct ieee80211_vht_operation))
return;
/* copy the ie's value into vhtoper*/
memcpy(&sta_ptr->tdls_cap.vhtoper, pos + 2,
sizeof(struct ieee80211_vht_operation));
}
break;
case WLAN_EID_VHT_CAPABILITY:
if (priv->adapter->is_hw_11ac_capable) {
memcpy((u8 *)&sta_ptr->tdls_cap.vhtcap, pos,
if (pos > end -
sizeof(struct ieee80211_vht_cap) - 2)
return;
if (pos[1] != sizeof(struct ieee80211_vht_cap))
return;
/* copy the ie's value into vhtcap*/
memcpy((u8 *)&sta_ptr->tdls_cap.vhtcap, pos + 2,
sizeof(struct ieee80211_vht_cap));
sta_ptr->is_11ac_enabled = 1;
}
break;
case WLAN_EID_AID:
if (priv->adapter->is_hw_11ac_capable)
if (priv->adapter->is_hw_11ac_capable) {
if (pos > end - 4)
return;
if (pos[1] != 2)
return;
sta_ptr->tdls_cap.aid =
le16_to_cpu(*(__le16 *)(pos + 2));
}
break;
default:
break;
}

@@ -551,7 +551,6 @@ cifs_atomic_open(struct inode *inode, struct dentry *direntry,
if (server->ops->close)
server->ops->close(xid, tcon, &fid);
cifs_del_pending_open(&open);
fput(file);
rc = -ENOMEM;
}


@@ -1248,7 +1248,7 @@ static int gfs2_atomic_open(struct inode *dir, struct dentry *dentry,
if (!(*opened & FILE_OPENED))
return finish_no_open(file, d);
dput(d);
return 0;
return excl && (flags & O_CREAT) ? -EEXIST : 0;
}

BUG_ON(d != NULL);

@@ -1037,8 +1037,8 @@ static bool jbd2_write_access_granted(handle_t *handle, struct buffer_head *bh,
/* For undo access buffer must have data copied */
if (undo && !jh->b_committed_data)
goto out;
if (jh->b_transaction != handle->h_transaction &&
jh->b_next_transaction != handle->h_transaction)
if (READ_ONCE(jh->b_transaction) != handle->h_transaction &&
READ_ONCE(jh->b_next_transaction) != handle->h_transaction)
goto out;
/*
* There are two reasons for the barrier here:
@@ -2448,8 +2448,8 @@ void __jbd2_journal_refile_buffer(struct journal_head *jh)
* our jh reference and thus __jbd2_journal_file_buffer() must not
* take a new one.
*/
jh->b_transaction = jh->b_next_transaction;
jh->b_next_transaction = NULL;
WRITE_ONCE(jh->b_transaction, jh->b_next_transaction);
WRITE_ONCE(jh->b_next_transaction, NULL);
if (buffer_freed(bh))
jlist = BJ_Forget;
else if (jh->b_modified)
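The jbd2 hunks above pair an unlocked reader (`jbd2_write_access_granted()`) with a locked writer (`__jbd2_journal_refile_buffer()`) via READ_ONCE/WRITE_ONCE, so each field is accessed exactly once and never torn or re-read. A toy user-space model of that pattern, with the macros defined the usual volatile-cast way (this is a sketch, not the kernel's definitions):

```c
#include <assert.h>

/* Minimal stand-ins for the kernel's READ_ONCE/WRITE_ONCE: force a
 * single access through a volatile-qualified lvalue, so a lockless
 * reader observes either the old or the new pointer value, never a
 * compiler-invented mixture of the two. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

struct journal_head {
    void *b_transaction;
    void *b_next_transaction;
};

/* Writer side, normally called under the journal-head lock: move the
 * buffer to its next transaction, publishing each store exactly once. */
static void refile(struct journal_head *jh)
{
    WRITE_ONCE(jh->b_transaction, jh->b_next_transaction);
    WRITE_ONCE(jh->b_next_transaction, (void *)0);
}
```

The macros do not add ordering between the two stores; the patch relies on the reader tolerating any interleaving of the two single-copy-atomic updates.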

@@ -678,8 +678,6 @@ int nfs_readdir_xdr_to_array(nfs_readdir_descriptor_t *desc, struct page *page,
goto out_label_free;
}

array = kmap(page);

status = nfs_readdir_alloc_pages(pages, array_size);
if (status < 0)
goto out_release_array;

@@ -837,9 +837,6 @@ cleanup_file:
* the return value of d_splice_alias(), then the caller needs to perform dput()
* on it after finish_open().
*
* On successful return @file is a fully instantiated open file. After this, if
* an error occurs in ->atomic_open(), it needs to clean up with fput().
*
* Returns zero on success or -errno if the open failed.
*/
int finish_open(struct file *file, struct dentry *dentry,

@@ -333,6 +333,7 @@ struct phy_c45_device_ids {
* is_pseudo_fixed_link: Set to true if this phy is an Ethernet switch, etc.
* has_fixups: Set to true if this phy has fixups/quirks.
* suspended: Set to true if this phy has been suspended successfully.
* suspended_by_mdio_bus: Set to true if this phy was suspended by MDIO bus.
* state: state of the PHY for management purposes
* dev_flags: Device-specific flags used by the PHY driver.
* link_timeout: The number of timer firings to wait before the
@@ -369,6 +370,7 @@ struct phy_device {
bool is_pseudo_fixed_link;
bool has_fixups;
bool suspended;
bool suspended_by_mdio_bus;

enum phy_state state;


@@ -93,6 +93,7 @@ struct fib_rules_ops {
[FRA_OIFNAME] = { .type = NLA_STRING, .len = IFNAMSIZ - 1 }, \
[FRA_PRIORITY] = { .type = NLA_U32 }, \
[FRA_FWMARK] = { .type = NLA_U32 }, \
[FRA_TUN_ID] = { .type = NLA_U64 }, \
[FRA_FWMASK] = { .type = NLA_U32 }, \
[FRA_TABLE] = { .type = NLA_U32 }, \
[FRA_SUPPRESS_PREFIXLEN] = { .type = NLA_U32 }, \

@@ -6468,6 +6468,10 @@ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
return;
}

/* Don't associate the sock with unrelated interrupted task's cgroup. */
if (in_interrupt())
return;

rcu_read_lock();

while (true) {

@@ -375,27 +375,32 @@ __sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimi
{
struct sigqueue *q = NULL;
struct user_struct *user;
int sigpending;

/*
* Protect access to @t credentials. This can go away when all
* callers hold rcu read lock.
*
* NOTE! A pending signal will hold on to the user refcount,
* and we get/put the refcount only when the sigpending count
* changes from/to zero.
*/
rcu_read_lock();
user = get_uid(__task_cred(t)->user);
atomic_inc(&user->sigpending);
user = __task_cred(t)->user;
sigpending = atomic_inc_return(&user->sigpending);
if (sigpending == 1)
get_uid(user);
rcu_read_unlock();

if (override_rlimit ||
atomic_read(&user->sigpending) <=
task_rlimit(t, RLIMIT_SIGPENDING)) {
if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
q = kmem_cache_alloc(sigqueue_cachep, flags);
} else {
print_dropped_signal(sig);
}

if (unlikely(q == NULL)) {
atomic_dec(&user->sigpending);
free_uid(user);
if (atomic_dec_and_test(&user->sigpending))
free_uid(user);
} else {
INIT_LIST_HEAD(&q->list);
q->flags = 0;
@@ -409,8 +414,8 @@ static void __sigqueue_free(struct sigqueue *q)
{
if (q->flags & SIGQUEUE_PREALLOC)
return;
atomic_dec(&q->user->sigpending);
free_uid(q->user);
if (atomic_dec_and_test(&q->user->sigpending))
free_uid(q->user);
kmem_cache_free(sigqueue_cachep, q);
}
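The signal.c hunks above change the uid refcounting from one get/put per queued signal to a reference taken only on the 0→1 transition of the sigpending count and dropped only on 1→0. A non-atomic toy model of that scheme (plain ints instead of the kernel's atomic_t and kref-style uid refcount; this is an illustration, not kernel code):

```c
#include <assert.h>

/* Toy model of the refcount scheme the __sigqueue_alloc()/
 * __sigqueue_free() hunks adopt: the user struct is pinned only while
 * at least one signal is pending, rather than once per signal. */
struct sig_user {
    int refs;        /* models the uid refcount; starts at 1 (owner) */
    int sigpending;  /* models atomic sigpending counter */
};

static void sig_queue(struct sig_user *u)
{
    if (++u->sigpending == 1)
        u->refs++;                /* get_uid() on first pending signal */
}

static void sig_free(struct sig_user *u)
{
    if (--u->sigpending == 0)
        u->refs--;                /* free_uid() on last pending signal */
}
```

Queueing many signals therefore costs a single reference, which is what lets the real code drop the unconditional get_uid()/free_uid() pair from the hot path.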


@@ -1406,14 +1406,16 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
WARN_ON_ONCE(!is_chained_work(wq)))
return;
retry:
if (req_cpu == WORK_CPU_UNBOUND)
cpu = wq_select_unbound_cpu(raw_smp_processor_id());

/* pwq which will be used unless @work is executing elsewhere */
if (!(wq->flags & WQ_UNBOUND))
pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
else
if (wq->flags & WQ_UNBOUND) {
if (req_cpu == WORK_CPU_UNBOUND)
cpu = wq_select_unbound_cpu(raw_smp_processor_id());
pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
} else {
if (req_cpu == WORK_CPU_UNBOUND)
cpu = raw_smp_processor_id();
pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
}
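The __queue_work() hunk above moves the wq_select_unbound_cpu() call inside the WQ_UNBOUND branch, so bound workqueues with WORK_CPU_UNBOUND fall back to the local CPU instead of being round-robined onto an arbitrary one. The decision logic can be sketched as a pure function (a toy extraction, not the kernel code; the round-robin helper is stubbed to a fixed CPU):

```c
#include <assert.h>

#define WORK_CPU_UNBOUND (-1)

/* Stub for the kernel's round-robin unbound-CPU selector. */
static int wq_select_unbound_cpu(int local_cpu)
{
    (void)local_cpu;
    return 3;          /* arbitrary fixed CPU for illustration */
}

/* Sketch of the corrected CPU choice: only unbound workqueues may
 * redirect WORK_CPU_UNBOUND through the round-robin helper; bound
 * workqueues must stay on the local CPU so the per-CPU worker-pool
 * assumptions the fix protects continue to hold. */
static int pick_cpu(int req_cpu, int local_cpu, int wq_unbound)
{
    if (req_cpu != WORK_CPU_UNBOUND)
        return req_cpu;                       /* caller pinned the work */
    if (wq_unbound)
        return wq_select_unbound_cpu(local_cpu);
    return local_cpu;                         /* bound wq: local CPU */
}
```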

/*
* If @work was previously on a different pool, it might still be

@@ -5726,6 +5726,10 @@ void mem_cgroup_sk_alloc(struct sock *sk)
return;
}

/* Do not associate the sock with unrelated interrupted task's memcg. */
if (in_interrupt())
return;

rcu_read_lock();
memcg = mem_cgroup_from_task(current);
if (memcg == root_mem_cgroup)

@@ -3114,6 +3114,15 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
void *object = c->freelist;

if (unlikely(!object)) {
/*
* We may have removed an object from c->freelist using
* the fastpath in the previous iteration; in that case,
* c->tid has not been bumped yet.
* Since ___slab_alloc() may reenable interrupts while
* allocating memory, we should bump c->tid now.
*/
c->tid = next_tid(c->tid);

/*
* Invoking slow path likely have side-effect
* of re-populating per CPU c->freelist

@@ -34,6 +34,7 @@
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/netdevice.h>
#include <linux/netlink.h>
#include <linux/pkt_sched.h>
@@ -149,7 +150,7 @@ static void batadv_iv_ogm_orig_free(struct batadv_orig_node *orig_node)
* Return: 0 on success, a negative error code otherwise.
*/
static int batadv_iv_ogm_orig_add_if(struct batadv_orig_node *orig_node,
int max_if_num)
unsigned int max_if_num)
{
void *data_ptr;
size_t old_size;
@@ -193,7 +194,8 @@ unlock:
*/
static void
batadv_iv_ogm_drop_bcast_own_entry(struct batadv_orig_node *orig_node,
int max_if_num, int del_if_num)
unsigned int max_if_num,
unsigned int del_if_num)
{
size_t chunk_size;
size_t if_offset;
@@ -231,7 +233,8 @@ batadv_iv_ogm_drop_bcast_own_entry(struct batadv_orig_node *orig_node,
*/
static void
batadv_iv_ogm_drop_bcast_own_sum_entry(struct batadv_orig_node *orig_node,
int max_if_num, int del_if_num)
unsigned int max_if_num,
unsigned int del_if_num)
{
size_t if_offset;
void *data_ptr;
@@ -268,7 +271,8 @@ batadv_iv_ogm_drop_bcast_own_sum_entry(struct batadv_orig_node *orig_node,
* Return: 0 on success, a negative error code otherwise.
*/
static int batadv_iv_ogm_orig_del_if(struct batadv_orig_node *orig_node,
int max_if_num, int del_if_num)
unsigned int max_if_num,
unsigned int del_if_num)
{
spin_lock_bh(&orig_node->bat_iv.ogm_cnt_lock);

@@ -302,7 +306,8 @@ static struct batadv_orig_node *
batadv_iv_ogm_orig_get(struct batadv_priv *bat_priv, const u8 *addr)
{
struct batadv_orig_node *orig_node;
int size, hash_added;
int hash_added;
size_t size;

orig_node = batadv_orig_hash_find(bat_priv, addr);
if (orig_node)
@@ -366,14 +371,18 @@ static int batadv_iv_ogm_iface_enable(struct batadv_hard_iface *hard_iface)
unsigned char *ogm_buff;
u32 random_seqno;

mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);

/* randomize initial seqno to avoid collision */
get_random_bytes(&random_seqno, sizeof(random_seqno));
atomic_set(&hard_iface->bat_iv.ogm_seqno, random_seqno);

hard_iface->bat_iv.ogm_buff_len = BATADV_OGM_HLEN;
ogm_buff = kmalloc(hard_iface->bat_iv.ogm_buff_len, GFP_ATOMIC);
if (!ogm_buff)
if (!ogm_buff) {
mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
return -ENOMEM;
}

hard_iface->bat_iv.ogm_buff = ogm_buff;

@@ -385,35 +394,59 @@ static int batadv_iv_ogm_iface_enable(struct batadv_hard_iface *hard_iface)
batadv_ogm_packet->reserved = 0;
batadv_ogm_packet->tq = BATADV_TQ_MAX_VALUE;

mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);

return 0;
}

static void batadv_iv_ogm_iface_disable(struct batadv_hard_iface *hard_iface)
{
mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);

kfree(hard_iface->bat_iv.ogm_buff);
hard_iface->bat_iv.ogm_buff = NULL;

mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
}

static void batadv_iv_ogm_iface_update_mac(struct batadv_hard_iface *hard_iface)
{
struct batadv_ogm_packet *batadv_ogm_packet;
unsigned char *ogm_buff = hard_iface->bat_iv.ogm_buff;
void *ogm_buff;

batadv_ogm_packet = (struct batadv_ogm_packet *)ogm_buff;
mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);

ogm_buff = hard_iface->bat_iv.ogm_buff;
if (!ogm_buff)
goto unlock;

batadv_ogm_packet = ogm_buff;
ether_addr_copy(batadv_ogm_packet->orig,
hard_iface->net_dev->dev_addr);
ether_addr_copy(batadv_ogm_packet->prev_sender,
hard_iface->net_dev->dev_addr);

unlock:
mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
}

static void
batadv_iv_ogm_primary_iface_set(struct batadv_hard_iface *hard_iface)
{
struct batadv_ogm_packet *batadv_ogm_packet;
unsigned char *ogm_buff = hard_iface->bat_iv.ogm_buff;
void *ogm_buff;

batadv_ogm_packet = (struct batadv_ogm_packet *)ogm_buff;
mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);

ogm_buff = hard_iface->bat_iv.ogm_buff;
if (!ogm_buff)
goto unlock;

batadv_ogm_packet = ogm_buff;
batadv_ogm_packet->ttl = BATADV_TTL;

unlock:
mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
}

/* when do we schedule our own ogm to be sent */
@@ -898,7 +931,7 @@ batadv_iv_ogm_slide_own_bcast_window(struct batadv_hard_iface *hard_iface)
u32 i;
size_t word_index;
u8 *w;
int if_num;
unsigned int if_num;

for (i = 0; i < hash->size; i++) {
head = &hash->table[i];
@@ -919,7 +952,11 @@ batadv_iv_ogm_slide_own_bcast_window(struct batadv_hard_iface *hard_iface)
}
}

static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface)
/**
* batadv_iv_ogm_schedule_buff() - schedule submission of hardif ogm buffer
* @hard_iface: interface whose ogm buffer should be transmitted
*/
static void batadv_iv_ogm_schedule_buff(struct batadv_hard_iface *hard_iface)
{
struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
unsigned char **ogm_buff = &hard_iface->bat_iv.ogm_buff;
@@ -930,8 +967,10 @@ static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface)
u16 tvlv_len = 0;
unsigned long send_time;

if ((hard_iface->if_status == BATADV_IF_NOT_IN_USE) ||
(hard_iface->if_status == BATADV_IF_TO_BE_REMOVED))
lockdep_assert_held(&hard_iface->bat_iv.ogm_buff_mutex);

/* interface already disabled by batadv_iv_ogm_iface_disable */
if (!*ogm_buff)
return;

/* the interface gets activated here to avoid race conditions between
@@ -1000,6 +1039,17 @@ out:
batadv_hardif_put(primary_if);
}

static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface)
{
if (hard_iface->if_status == BATADV_IF_NOT_IN_USE ||
hard_iface->if_status == BATADV_IF_TO_BE_REMOVED)
return;

mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);
batadv_iv_ogm_schedule_buff(hard_iface);
mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
}

/**
* batadv_iv_ogm_orig_update - use OGM to update corresponding data in an
* originator
@@ -1028,7 +1078,7 @@ batadv_iv_ogm_orig_update(struct batadv_priv *bat_priv,
struct batadv_neigh_node *tmp_neigh_node = NULL;
struct batadv_neigh_node *router = NULL;
struct batadv_orig_node *orig_node_tmp;
int if_num;
unsigned int if_num;
u8 sum_orig, sum_neigh;
u8 *neigh_addr;
u8 tq_avg;
@@ -1186,7 +1236,7 @@ static bool batadv_iv_ogm_calc_tq(struct batadv_orig_node *orig_node,
u8 total_count;
u8 orig_eq_count, neigh_rq_count, neigh_rq_inv, tq_own;
unsigned int neigh_rq_inv_cube, neigh_rq_max_cube;
int if_num;
unsigned int if_num;
unsigned int tq_asym_penalty, inv_asym_penalty;
unsigned int combined_tq;
unsigned int tq_iface_penalty;
@@ -1227,7 +1277,7 @@ static bool batadv_iv_ogm_calc_tq(struct batadv_orig_node *orig_node,
orig_node->last_seen = jiffies;

/* find packet count of corresponding one hop neighbor */
spin_lock_bh(&orig_node->bat_iv.ogm_cnt_lock);
spin_lock_bh(&orig_neigh_node->bat_iv.ogm_cnt_lock);
if_num = if_incoming->if_num;
orig_eq_count = orig_neigh_node->bat_iv.bcast_own_sum[if_num];
neigh_ifinfo = batadv_neigh_ifinfo_new(neigh_node, if_outgoing);
@@ -1237,7 +1287,7 @@ static bool batadv_iv_ogm_calc_tq(struct batadv_orig_node *orig_node,
} else {
neigh_rq_count = 0;
}
spin_unlock_bh(&orig_node->bat_iv.ogm_cnt_lock);
spin_unlock_bh(&orig_neigh_node->bat_iv.ogm_cnt_lock);

/* pay attention to not get a value bigger than 100 % */
if (orig_eq_count > neigh_rq_count)
@@ -1705,9 +1755,9 @@ static void batadv_iv_ogm_process(const struct sk_buff *skb, int ogm_offset,

if (is_my_orig) {
unsigned long *word;
int offset;
size_t offset;
s32 bit_pos;
s16 if_num;
unsigned int if_num;
u8 *weight;

orig_neigh_node = batadv_iv_ogm_orig_get(bat_priv,
@@ -2473,12 +2523,22 @@ batadv_iv_ogm_neigh_is_sob(struct batadv_neigh_node *neigh1,
return ret;
}

static void batadv_iv_iface_activate(struct batadv_hard_iface *hard_iface)
static void batadv_iv_iface_enabled(struct batadv_hard_iface *hard_iface)
{
/* begin scheduling originator messages on that interface */
batadv_iv_ogm_schedule(hard_iface);
}

/**
* batadv_iv_init_sel_class - initialize GW selection class
* @bat_priv: the bat priv with all the soft interface information
*/
static void batadv_iv_init_sel_class(struct batadv_priv *bat_priv)
{
/* set default TQ difference threshold to 20 */
atomic_set(&bat_priv->gw.sel_class, 20);
}

static struct batadv_gw_node *
batadv_iv_gw_get_best_gw_node(struct batadv_priv *bat_priv)
{
@@ -2803,8 +2863,8 @@ unlock:
static struct batadv_algo_ops batadv_batman_iv __read_mostly = {
.name = "BATMAN_IV",
.iface = {
.activate = batadv_iv_iface_activate,
.enable = batadv_iv_ogm_iface_enable,
.enabled = batadv_iv_iface_enabled,
.disable = batadv_iv_ogm_iface_disable,
.update_mac = batadv_iv_ogm_iface_update_mac,
.primary_set = batadv_iv_ogm_primary_iface_set,
@@ -2827,6 +2887,7 @@ static struct batadv_algo_ops batadv_batman_iv __read_mostly = {
.del_if = batadv_iv_ogm_orig_del_if,
},
.gw = {
.init_sel_class = batadv_iv_init_sel_class,
.get_best_gw_node = batadv_iv_gw_get_best_gw_node,
.is_eligible = batadv_iv_gw_is_eligible,
#ifdef CONFIG_BATMAN_ADV_DEBUGFS

@@ -19,7 +19,6 @@
#include "main.h"

#include <linux/atomic.h>
#include <linux/bug.h>
#include <linux/cache.h>
#include <linux/errno.h>
#include <linux/if_ether.h>
@@ -623,11 +622,11 @@ static int batadv_v_neigh_cmp(struct batadv_neigh_node *neigh1,
int ret = 0;

ifinfo1 = batadv_neigh_ifinfo_get(neigh1, if_outgoing1);
if (WARN_ON(!ifinfo1))
if (!ifinfo1)
goto err_ifinfo1;

ifinfo2 = batadv_neigh_ifinfo_get(neigh2, if_outgoing2);
if (WARN_ON(!ifinfo2))
if (!ifinfo2)
goto err_ifinfo2;

ret = ifinfo1->bat_v.throughput - ifinfo2->bat_v.throughput;
@@ -649,11 +648,11 @@ static bool batadv_v_neigh_is_sob(struct batadv_neigh_node *neigh1,
bool ret = false;

ifinfo1 = batadv_neigh_ifinfo_get(neigh1, if_outgoing1);
if (WARN_ON(!ifinfo1))
if (!ifinfo1)
goto err_ifinfo1;

ifinfo2 = batadv_neigh_ifinfo_get(neigh2, if_outgoing2);
if (WARN_ON(!ifinfo2))
if (!ifinfo2)
goto err_ifinfo2;

threshold = ifinfo1->bat_v.throughput / 4;
@@ -668,6 +667,16 @@ err_ifinfo1:
return ret;
}

/**
* batadv_v_init_sel_class - initialize GW selection class
* @bat_priv: the bat priv with all the soft interface information
*/
static void batadv_v_init_sel_class(struct batadv_priv *bat_priv)
{
/* set default throughput difference threshold to 5Mbps */
atomic_set(&bat_priv->gw.sel_class, 50);
}

static ssize_t batadv_v_store_sel_class(struct batadv_priv *bat_priv,
char *buff, size_t count)
{
@@ -805,7 +814,7 @@ static bool batadv_v_gw_is_eligible(struct batadv_priv *bat_priv,
}

orig_gw = batadv_gw_node_get(bat_priv, orig_node);
if (!orig_node)
if (!orig_gw)
goto out;

if (batadv_v_gw_throughput_get(orig_gw, &orig_throughput) < 0)
@@ -1054,6 +1063,7 @@ static struct batadv_algo_ops batadv_batman_v __read_mostly = {
.dump = batadv_v_orig_dump,
},
.gw = {
.init_sel_class = batadv_v_init_sel_class,
.store_sel_class = batadv_v_store_sel_class,
.show_sel_class = batadv_v_show_sel_class,
.get_best_gw_node = batadv_v_gw_get_best_gw_node,
@@ -1094,9 +1104,6 @@ int batadv_v_mesh_init(struct batadv_priv *bat_priv)
if (ret < 0)
return ret;

/* set default throughput difference threshold to 5Mbps */
atomic_set(&bat_priv->gw.sel_class, 50);

return 0;
}


@@ -19,6 +19,7 @@
|
||||
#include "main.h"
|
||||
|
||||
#include <linux/atomic.h>
|
||||
#include <linux/bitops.h>
|
||||
#include <linux/byteorder/generic.h>
|
||||
#include <linux/errno.h>
|
||||
#include <linux/etherdevice.h>
|
||||
@@ -29,6 +30,7 @@
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/kref.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include <linux/nl80211.h>
|
||||
#include <linux/random.h>
|
||||
#include <linux/rculist.h>
|
||||
#include <linux/rcupdate.h>
|
||||
@@ -100,8 +102,12 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
|
||||
*/
|
||||
return 0;
|
||||
}
|
||||
if (!ret)
|
||||
return sinfo.expected_throughput / 100;
|
||||
if (ret)
|
||||
goto default_throughput;
|
||||
if (!(sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT)))
|
||||
goto default_throughput;
|
||||
|
||||
return sinfo.expected_throughput / 100;
|
||||
}
|
||||
|
||||
/* unsupported WiFi driver version */
|
||||
@@ -185,6 +191,7 @@ batadv_v_elp_wifi_neigh_probe(struct batadv_hardif_neigh_node *neigh)
|
||||
struct sk_buff *skb;
|
||||
int probe_len, i;
|
||||
int elp_skb_len;
|
||||
void *tmp;
|
||||
|
||||
/* this probing routine is for Wifi neighbours only */
|
||||
if (!batadv_is_wifi_netdev(hard_iface->net_dev))
|
||||
@@ -216,7 +223,8 @@ batadv_v_elp_wifi_neigh_probe(struct batadv_hardif_neigh_node *neigh)
|
||||
* the packet to be exactly of that size to make the link
|
||||
* throughput estimation effective.
|
||||
*/
|
||||
skb_put(skb, probe_len - hard_iface->bat_v.elp_skb->len);
|
||||
tmp = skb_put(skb, probe_len - hard_iface->bat_v.elp_skb->len);
|
||||
memset(tmp, 0, probe_len - hard_iface->bat_v.elp_skb->len);
|
||||
|
||||
batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
|
||||
"Sending unicast (probe) ELP packet on interface %s to %pM\n",
|
||||
@@ -327,21 +335,23 @@ out:
|
||||
*/
|
||||
int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface)
|
||||
{
|
||||
static const size_t tvlv_padding = sizeof(__be32);
|
||||
struct batadv_elp_packet *elp_packet;
|
||||
unsigned char *elp_buff;
|
||||
u32 random_seqno;
|
||||
size_t size;
|
||||
int res = -ENOMEM;
|
||||
|
||||
size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN;
|
||||
size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN + tvlv_padding;
|
||||
hard_iface->bat_v.elp_skb = dev_alloc_skb(size);
|
||||
if (!hard_iface->bat_v.elp_skb)
|
||||
goto out;
|
||||
|
||||
skb_reserve(hard_iface->bat_v.elp_skb, ETH_HLEN + NET_IP_ALIGN);
|
||||
elp_buff = skb_put(hard_iface->bat_v.elp_skb, BATADV_ELP_HLEN);
|
||||
elp_buff = skb_put(hard_iface->bat_v.elp_skb,
|
||||
BATADV_ELP_HLEN + tvlv_padding);
|
||||
elp_packet = (struct batadv_elp_packet *)elp_buff;
|
||||
memset(elp_packet, 0, BATADV_ELP_HLEN);
|
||||
memset(elp_packet, 0, BATADV_ELP_HLEN + tvlv_padding);
|
||||
|
||||
elp_packet->packet_type = BATADV_ELP;
|
||||
elp_packet->version = BATADV_COMPAT_VERSION;
|
||||
|
||||
@@ -28,6 +28,8 @@
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/netdevice.h>
#include <linux/random.h>
#include <linux/rculist.h>
@@ -127,22 +129,19 @@ static void batadv_v_ogm_send_to_if(struct sk_buff *skb,
}

/**
* batadv_v_ogm_send - periodic worker broadcasting the own OGM
* @work: work queue item
* batadv_v_ogm_send_softif() - periodic worker broadcasting the own OGM
* @bat_priv: the bat priv with all the soft interface information
*/
static void batadv_v_ogm_send(struct work_struct *work)
static void batadv_v_ogm_send_softif(struct batadv_priv *bat_priv)
{
struct batadv_hard_iface *hard_iface;
struct batadv_priv_bat_v *bat_v;
struct batadv_priv *bat_priv;
struct batadv_ogm2_packet *ogm_packet;
struct sk_buff *skb, *skb_tmp;
unsigned char *ogm_buff, *pkt_buff;
int ogm_buff_len;
u16 tvlv_len = 0;

bat_v = container_of(work, struct batadv_priv_bat_v, ogm_wq.work);
bat_priv = container_of(bat_v, struct batadv_priv, bat_v);
lockdep_assert_held(&bat_priv->bat_v.ogm_buff_mutex);

if (atomic_read(&bat_priv->mesh_state) == BATADV_MESH_DEACTIVATING)
goto out;
@@ -209,6 +208,23 @@ out:
return;
}

/**
* batadv_v_ogm_send() - periodic worker broadcasting the own OGM
* @work: work queue item
*/
static void batadv_v_ogm_send(struct work_struct *work)
{
struct batadv_priv_bat_v *bat_v;
struct batadv_priv *bat_priv;

bat_v = container_of(work, struct batadv_priv_bat_v, ogm_wq.work);
bat_priv = container_of(bat_v, struct batadv_priv, bat_v);

mutex_lock(&bat_priv->bat_v.ogm_buff_mutex);
batadv_v_ogm_send_softif(bat_priv);
mutex_unlock(&bat_priv->bat_v.ogm_buff_mutex);
}

/**
* batadv_v_ogm_iface_enable - prepare an interface for B.A.T.M.A.N. V
* @hard_iface: the interface to prepare
@@ -235,11 +251,15 @@ void batadv_v_ogm_primary_iface_set(struct batadv_hard_iface *primary_iface)
struct batadv_priv *bat_priv = netdev_priv(primary_iface->soft_iface);
struct batadv_ogm2_packet *ogm_packet;

mutex_lock(&bat_priv->bat_v.ogm_buff_mutex);
if (!bat_priv->bat_v.ogm_buff)
return;
goto unlock;

ogm_packet = (struct batadv_ogm2_packet *)bat_priv->bat_v.ogm_buff;
ether_addr_copy(ogm_packet->orig, primary_iface->net_dev->dev_addr);

unlock:
mutex_unlock(&bat_priv->bat_v.ogm_buff_mutex);
}

/**
@@ -827,6 +847,8 @@ int batadv_v_ogm_init(struct batadv_priv *bat_priv)
atomic_set(&bat_priv->bat_v.ogm_seqno, random_seqno);
INIT_DELAYED_WORK(&bat_priv->bat_v.ogm_wq, batadv_v_ogm_send);

mutex_init(&bat_priv->bat_v.ogm_buff_mutex);

return 0;
}

@@ -838,7 +860,11 @@ void batadv_v_ogm_free(struct batadv_priv *bat_priv)
{
cancel_delayed_work_sync(&bat_priv->bat_v.ogm_wq);

mutex_lock(&bat_priv->bat_v.ogm_buff_mutex);

kfree(bat_priv->bat_v.ogm_buff);
bat_priv->bat_v.ogm_buff = NULL;
bat_priv->bat_v.ogm_buff_len = 0;

mutex_unlock(&bat_priv->bat_v.ogm_buff_mutex);
}
@@ -18,6 +18,7 @@
#include "debugfs.h"
#include "main.h"

#include <linux/dcache.h>
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/errno.h>
@@ -339,6 +340,25 @@ out:
return -ENOMEM;
}

/**
* batadv_debugfs_rename_hardif() - Fix debugfs path for renamed hardif
* @hard_iface: hard interface which was renamed
*/
void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface)
{
const char *name = hard_iface->net_dev->name;
struct dentry *dir;
struct dentry *d;

dir = hard_iface->debug_dir;
if (!dir)
return;

d = debugfs_rename(dir->d_parent, dir, dir->d_parent, name);
if (!d)
pr_err("Can't rename debugfs dir to %s\n", name);
}

/**
* batadv_debugfs_del_hardif - delete the base directory for a hard interface
* in debugfs.
@@ -403,6 +423,26 @@ out:
return -ENOMEM;
}

/**
* batadv_debugfs_rename_meshif() - Fix debugfs path for renamed softif
* @dev: net_device which was renamed
*/
void batadv_debugfs_rename_meshif(struct net_device *dev)
{
struct batadv_priv *bat_priv = netdev_priv(dev);
const char *name = dev->name;
struct dentry *dir;
struct dentry *d;

dir = bat_priv->debug_dir;
if (!dir)
return;

d = debugfs_rename(dir->d_parent, dir, dir->d_parent, name);
if (!d)
pr_err("Can't rename debugfs dir to %s\n", name);
}

void batadv_debugfs_del_meshif(struct net_device *dev)
{
struct batadv_priv *bat_priv = netdev_priv(dev);

@@ -29,8 +29,10 @@ struct net_device;
void batadv_debugfs_init(void);
void batadv_debugfs_destroy(void);
int batadv_debugfs_add_meshif(struct net_device *dev);
void batadv_debugfs_rename_meshif(struct net_device *dev);
void batadv_debugfs_del_meshif(struct net_device *dev);
int batadv_debugfs_add_hardif(struct batadv_hard_iface *hard_iface);
void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface);
void batadv_debugfs_del_hardif(struct batadv_hard_iface *hard_iface);

#else
@@ -48,6 +50,10 @@ static inline int batadv_debugfs_add_meshif(struct net_device *dev)
return 0;
}

static inline void batadv_debugfs_rename_meshif(struct net_device *dev)
{
}

static inline void batadv_debugfs_del_meshif(struct net_device *dev)
{
}
@@ -58,6 +64,11 @@ int batadv_debugfs_add_hardif(struct batadv_hard_iface *hard_iface)
return 0;
}

static inline
void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface)
{
}

static inline
void batadv_debugfs_del_hardif(struct batadv_hard_iface *hard_iface)
{
@@ -1025,8 +1025,9 @@ bool batadv_dat_snoop_outgoing_arp_request(struct batadv_priv *bat_priv,
skb_reset_mac_header(skb_new);
skb_new->protocol = eth_type_trans(skb_new,
bat_priv->soft_iface);
bat_priv->stats.rx_packets++;
bat_priv->stats.rx_bytes += skb->len + ETH_HLEN + hdr_size;
batadv_inc_counter(bat_priv, BATADV_CNT_RX);
batadv_add_counter(bat_priv, BATADV_CNT_RX_BYTES,
skb->len + ETH_HLEN + hdr_size);
bat_priv->soft_iface->last_rx = jiffies;

netif_rx(skb_new);

@@ -232,8 +232,10 @@ err_unlock:
spin_unlock_bh(&chain->lock);

err:
if (!ret)
if (!ret) {
kfree(frag_entry_new);
kfree_skb(skb);
}

return ret;
}
@@ -305,7 +307,7 @@ free:
*
* There are three possible outcomes: 1) Packet is merged: Return true and
* set *skb to merged packet; 2) Packet is buffered: Return true and set *skb
* to NULL; 3) Error: Return false and leave skb as is.
* to NULL; 3) Error: Return false and free skb.
*
* Return: true when packet is merged or buffered, false when skb is not not
* used.
@@ -330,9 +332,9 @@ bool batadv_frag_skb_buffer(struct sk_buff **skb,
goto out_err;

out:
*skb = skb_out;
ret = true;
out_err:
*skb = skb_out;
return ret;
}

@@ -482,12 +484,20 @@ int batadv_frag_send_packet(struct sk_buff *skb,
*/
if (skb->priority >= 256 && skb->priority <= 263)
frag_header.priority = skb->priority - 256;
else
frag_header.priority = 0;

ether_addr_copy(frag_header.orig, primary_if->net_dev->dev_addr);
ether_addr_copy(frag_header.dest, orig_node->orig);

/* Eat and send fragments from the tail of skb */
while (skb->len > max_fragment_size) {
/* The initial check in this function should cover this case */
if (frag_header.no == BATADV_FRAG_MAX_FRAGMENTS - 1) {
ret = -1;
goto out;
}

skb_fragment = batadv_frag_create(skb, &frag_header, mtu);
if (!skb_fragment)
goto out;
@@ -505,12 +515,6 @@ int batadv_frag_send_packet(struct sk_buff *skb,
}

frag_header.no++;

/* The initial check in this function should cover this case */
if (frag_header.no == BATADV_FRAG_MAX_FRAGMENTS - 1) {
ret = -1;
goto out;
}
}

/* Make room for the fragment header. */
@@ -31,6 +31,7 @@
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/lockdep.h>
#include <linux/netdevice.h>
#include <linux/netlink.h>
#include <linux/rculist.h>
@@ -325,6 +326,9 @@ out:
* @bat_priv: the bat priv with all the soft interface information
* @orig_node: originator announcing gateway capabilities
* @gateway: announced bandwidth information
*
* Has to be called with the appropriate locks being acquired
* (gw.list_lock).
*/
static void batadv_gw_node_add(struct batadv_priv *bat_priv,
struct batadv_orig_node *orig_node,
@@ -332,6 +336,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
{
struct batadv_gw_node *gw_node;

lockdep_assert_held(&bat_priv->gw.list_lock);

if (gateway->bandwidth_down == 0)
return;

@@ -346,10 +352,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
gw_node->bandwidth_down = ntohl(gateway->bandwidth_down);
gw_node->bandwidth_up = ntohl(gateway->bandwidth_up);

spin_lock_bh(&bat_priv->gw.list_lock);
kref_get(&gw_node->refcount);
hlist_add_head_rcu(&gw_node->list, &bat_priv->gw.list);
spin_unlock_bh(&bat_priv->gw.list_lock);

batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
"Found new gateway %pM -> gw bandwidth: %u.%u/%u.%u MBit\n",
@@ -404,11 +408,14 @@ void batadv_gw_node_update(struct batadv_priv *bat_priv,
{
struct batadv_gw_node *gw_node, *curr_gw = NULL;

spin_lock_bh(&bat_priv->gw.list_lock);
gw_node = batadv_gw_node_get(bat_priv, orig_node);
if (!gw_node) {
batadv_gw_node_add(bat_priv, orig_node, gateway);
spin_unlock_bh(&bat_priv->gw.list_lock);
goto out;
}
spin_unlock_bh(&bat_priv->gw.list_lock);

if ((gw_node->bandwidth_down == ntohl(gateway->bandwidth_down)) &&
(gw_node->bandwidth_up == ntohl(gateway->bandwidth_up)))

@@ -253,6 +253,11 @@ static void batadv_gw_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv,
*/
void batadv_gw_init(struct batadv_priv *bat_priv)
{
if (bat_priv->algo_ops->gw.init_sel_class)
bat_priv->algo_ops->gw.init_sel_class(bat_priv);
else
atomic_set(&bat_priv->gw.sel_class, 1);

batadv_tvlv_handler_register(bat_priv, batadv_gw_tvlv_ogm_handler_v1,
NULL, BATADV_TVLV_GW, 1,
BATADV_TVLV_HANDLER_OGM_CIFNOTFND);
@@ -28,6 +28,7 @@
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/netdevice.h>
#include <linux/printk.h>
#include <linux/rculist.h>
@@ -539,6 +540,11 @@ int batadv_hardif_enable_interface(struct batadv_hard_iface *hard_iface,
hard_iface->soft_iface = soft_iface;
bat_priv = netdev_priv(hard_iface->soft_iface);

if (bat_priv->num_ifaces >= UINT_MAX) {
ret = -ENOSPC;
goto err_dev;
}

ret = netdev_master_upper_dev_link(hard_iface->net_dev,
soft_iface, NULL, NULL);
if (ret)
@@ -591,6 +597,9 @@ int batadv_hardif_enable_interface(struct batadv_hard_iface *hard_iface,

batadv_hardif_recalc_extra_skbroom(soft_iface);

if (bat_priv->algo_ops->iface.enabled)
bat_priv->algo_ops->iface.enabled(hard_iface);

out:
return 0;

@@ -646,7 +655,7 @@ void batadv_hardif_disable_interface(struct batadv_hard_iface *hard_iface,
batadv_hardif_recalc_extra_skbroom(hard_iface->soft_iface);

/* nobody uses this interface anymore */
if (!bat_priv->num_ifaces) {
if (bat_priv->num_ifaces == 0) {
batadv_gw_check_client_stop(bat_priv);

if (autodel == BATADV_IF_CLEANUP_AUTO)
@@ -682,7 +691,7 @@ batadv_hardif_add_interface(struct net_device *net_dev)
if (ret)
goto free_if;

hard_iface->if_num = -1;
hard_iface->if_num = 0;
hard_iface->net_dev = net_dev;
hard_iface->soft_iface = NULL;
hard_iface->if_status = BATADV_IF_NOT_IN_USE;
@@ -694,6 +703,7 @@ batadv_hardif_add_interface(struct net_device *net_dev)
INIT_LIST_HEAD(&hard_iface->list);
INIT_HLIST_HEAD(&hard_iface->neigh_list);

mutex_init(&hard_iface->bat_iv.ogm_buff_mutex);
spin_lock_init(&hard_iface->neigh_list_lock);
kref_init(&hard_iface->refcount);

@@ -750,6 +760,32 @@ void batadv_hardif_remove_interfaces(void)
rtnl_unlock();
}

/**
* batadv_hard_if_event_softif() - Handle events for soft interfaces
* @event: NETDEV_* event to handle
* @net_dev: net_device which generated an event
*
* Return: NOTIFY_* result
*/
static int batadv_hard_if_event_softif(unsigned long event,
struct net_device *net_dev)
{
struct batadv_priv *bat_priv;

switch (event) {
case NETDEV_REGISTER:
batadv_sysfs_add_meshif(net_dev);
bat_priv = netdev_priv(net_dev);
batadv_softif_create_vlan(bat_priv, BATADV_NO_FLAGS);
break;
case NETDEV_CHANGENAME:
batadv_debugfs_rename_meshif(net_dev);
break;
}

return NOTIFY_DONE;
}

static int batadv_hard_if_event(struct notifier_block *this,
unsigned long event, void *ptr)
{
@@ -758,12 +794,8 @@ static int batadv_hard_if_event(struct notifier_block *this,
struct batadv_hard_iface *primary_if = NULL;
struct batadv_priv *bat_priv;

if (batadv_softif_is_valid(net_dev) && event == NETDEV_REGISTER) {
batadv_sysfs_add_meshif(net_dev);
bat_priv = netdev_priv(net_dev);
batadv_softif_create_vlan(bat_priv, BATADV_NO_FLAGS);
return NOTIFY_DONE;
}
if (batadv_softif_is_valid(net_dev))
return batadv_hard_if_event_softif(event, net_dev);

hard_iface = batadv_hardif_get_by_netdev(net_dev);
if (!hard_iface && (event == NETDEV_REGISTER ||
@@ -807,6 +839,9 @@ static int batadv_hard_if_event(struct notifier_block *this,
if (hard_iface == primary_if)
batadv_primary_if_update_addr(bat_priv, NULL);
break;
case NETDEV_CHANGENAME:
batadv_debugfs_rename_hardif(hard_iface);
break;
default:
break;
}
@@ -1495,7 +1495,7 @@ int batadv_orig_dump(struct sk_buff *msg, struct netlink_callback *cb)
}

int batadv_orig_hash_add_if(struct batadv_hard_iface *hard_iface,
int max_if_num)
unsigned int max_if_num)
{
struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
struct batadv_algo_ops *bao = bat_priv->algo_ops;
@@ -1530,7 +1530,7 @@ err:
}

int batadv_orig_hash_del_if(struct batadv_hard_iface *hard_iface,
int max_if_num)
unsigned int max_if_num)
{
struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
struct batadv_hashtable *hash = bat_priv->orig_hash;

@@ -78,9 +78,9 @@ int batadv_orig_seq_print_text(struct seq_file *seq, void *offset);
int batadv_orig_dump(struct sk_buff *msg, struct netlink_callback *cb);
int batadv_orig_hardif_seq_print_text(struct seq_file *seq, void *offset);
int batadv_orig_hash_add_if(struct batadv_hard_iface *hard_iface,
int max_if_num);
unsigned int max_if_num);
int batadv_orig_hash_del_if(struct batadv_hard_iface *hard_iface,
int max_if_num);
unsigned int max_if_num);
struct batadv_orig_node_vlan *
batadv_orig_node_vlan_new(struct batadv_orig_node *orig_node,
unsigned short vid);
@@ -930,7 +930,6 @@ int batadv_recv_unicast_packet(struct sk_buff *skb,
bool is4addr;

unicast_packet = (struct batadv_unicast_packet *)skb->data;
unicast_4addr_packet = (struct batadv_unicast_4addr_packet *)skb->data;

is4addr = unicast_packet->packet_type == BATADV_UNICAST_4ADDR;
/* the caller function should have already pulled 2 bytes */
@@ -951,9 +950,13 @@ int batadv_recv_unicast_packet(struct sk_buff *skb,
if (!batadv_check_unicast_ttvn(bat_priv, skb, hdr_size))
return NET_RX_DROP;

unicast_packet = (struct batadv_unicast_packet *)skb->data;

/* packet for me */
if (batadv_is_my_mac(bat_priv, unicast_packet->dest)) {
if (is4addr) {
unicast_4addr_packet =
(struct batadv_unicast_4addr_packet *)skb->data;
subtype = unicast_4addr_packet->subtype;
batadv_dat_inc_counter(bat_priv, subtype);

@@ -1080,6 +1083,12 @@ int batadv_recv_frag_packet(struct sk_buff *skb,
batadv_inc_counter(bat_priv, BATADV_CNT_FRAG_RX);
batadv_add_counter(bat_priv, BATADV_CNT_FRAG_RX_BYTES, skb->len);

/* batadv_frag_skb_buffer will always consume the skb and
* the caller should therefore never try to free the
* skb after this point
*/
ret = NET_RX_SUCCESS;

/* Add fragment to buffer and merge if possible. */
if (!batadv_frag_skb_buffer(&skb, orig_node_src))
goto out;
@@ -808,7 +808,6 @@ static int batadv_softif_init_late(struct net_device *dev)
atomic_set(&bat_priv->mcast.num_want_all_ipv6, 0);
#endif
atomic_set(&bat_priv->gw.mode, BATADV_GW_MODE_OFF);
atomic_set(&bat_priv->gw.sel_class, 20);
atomic_set(&bat_priv->gw.bandwidth_down, 100);
atomic_set(&bat_priv->gw.bandwidth_up, 20);
atomic_set(&bat_priv->orig_interval, 1000);
@@ -867,7 +867,7 @@ batadv_tt_prepare_tvlv_global_data(struct batadv_orig_node *orig_node,
struct batadv_orig_node_vlan *vlan;
u8 *tt_change_ptr;

rcu_read_lock();
spin_lock_bh(&orig_node->vlan_list_lock);
hlist_for_each_entry_rcu(vlan, &orig_node->vlan_list, list) {
num_vlan++;
num_entries += atomic_read(&vlan->tt.num_entries);
@@ -905,7 +905,7 @@ batadv_tt_prepare_tvlv_global_data(struct batadv_orig_node *orig_node,
*tt_change = (struct batadv_tvlv_tt_change *)tt_change_ptr;

out:
rcu_read_unlock();
spin_unlock_bh(&orig_node->vlan_list_lock);
return tvlv_len;
}

@@ -936,15 +936,20 @@ batadv_tt_prepare_tvlv_local_data(struct batadv_priv *bat_priv,
struct batadv_tvlv_tt_vlan_data *tt_vlan;
struct batadv_softif_vlan *vlan;
u16 num_vlan = 0;
u16 num_entries = 0;
u16 vlan_entries = 0;
u16 total_entries = 0;
u16 tvlv_len;
u8 *tt_change_ptr;
int change_offset;

rcu_read_lock();
spin_lock_bh(&bat_priv->softif_vlan_list_lock);
hlist_for_each_entry_rcu(vlan, &bat_priv->softif_vlan_list, list) {
vlan_entries = atomic_read(&vlan->tt.num_entries);
if (vlan_entries < 1)
continue;

num_vlan++;
num_entries += atomic_read(&vlan->tt.num_entries);
total_entries += vlan_entries;
}

change_offset = sizeof(**tt_data);
@@ -952,7 +957,7 @@ batadv_tt_prepare_tvlv_local_data(struct batadv_priv *bat_priv,

/* if tt_len is negative, allocate the space needed by the full table */
if (*tt_len < 0)
*tt_len = batadv_tt_len(num_entries);
*tt_len = batadv_tt_len(total_entries);

tvlv_len = *tt_len;
tvlv_len += change_offset;
@@ -969,6 +974,10 @@ batadv_tt_prepare_tvlv_local_data(struct batadv_priv *bat_priv,

tt_vlan = (struct batadv_tvlv_tt_vlan_data *)(*tt_data + 1);
hlist_for_each_entry_rcu(vlan, &bat_priv->softif_vlan_list, list) {
vlan_entries = atomic_read(&vlan->tt.num_entries);
if (vlan_entries < 1)
continue;

tt_vlan->vid = htons(vlan->vid);
tt_vlan->crc = htonl(vlan->tt.crc);

@@ -979,7 +988,7 @@ batadv_tt_prepare_tvlv_local_data(struct batadv_priv *bat_priv,
*tt_change = (struct batadv_tvlv_tt_change *)tt_change_ptr;

out:
rcu_read_unlock();
spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
return tvlv_len;
}

@@ -1539,6 +1548,8 @@ batadv_tt_global_orig_entry_find(const struct batadv_tt_global_entry *entry,
* by a given originator
* @entry: the TT global entry to check
* @orig_node: the originator to search in the list
* @flags: a pointer to store TT flags for the given @entry received
* from @orig_node
*
* find out if an orig_node is already in the list of a tt_global_entry.
*
@@ -1546,7 +1557,8 @@ batadv_tt_global_orig_entry_find(const struct batadv_tt_global_entry *entry,
*/
static bool
batadv_tt_global_entry_has_orig(const struct batadv_tt_global_entry *entry,
const struct batadv_orig_node *orig_node)
const struct batadv_orig_node *orig_node,
u8 *flags)
{
struct batadv_tt_orig_list_entry *orig_entry;
bool found = false;
@@ -1554,15 +1566,51 @@ batadv_tt_global_entry_has_orig(const struct batadv_tt_global_entry *entry,
orig_entry = batadv_tt_global_orig_entry_find(entry, orig_node);
if (orig_entry) {
found = true;

if (flags)
*flags = orig_entry->flags;

batadv_tt_orig_list_entry_put(orig_entry);
}

return found;
}

/**
* batadv_tt_global_sync_flags - update TT sync flags
* @tt_global: the TT global entry to update sync flags in
*
* Updates the sync flag bits in the tt_global flag attribute with a logical
* OR of all sync flags from any of its TT orig entries.
*/
static void
batadv_tt_global_sync_flags(struct batadv_tt_global_entry *tt_global)
{
struct batadv_tt_orig_list_entry *orig_entry;
const struct hlist_head *head;
u16 flags = BATADV_NO_FLAGS;

rcu_read_lock();
head = &tt_global->orig_list;
hlist_for_each_entry_rcu(orig_entry, head, list)
flags |= orig_entry->flags;
rcu_read_unlock();

flags |= tt_global->common.flags & (~BATADV_TT_SYNC_MASK);
tt_global->common.flags = flags;
}

/**
* batadv_tt_global_orig_entry_add - add or update a TT orig entry
* @tt_global: the TT global entry to add an orig entry in
* @orig_node: the originator to add an orig entry for
* @ttvn: translation table version number of this changeset
* @flags: TT sync flags
*/
static void
batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
struct batadv_orig_node *orig_node, int ttvn)
struct batadv_orig_node *orig_node, int ttvn,
u8 flags)
{
struct batadv_tt_orig_list_entry *orig_entry;

@@ -1574,7 +1622,8 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
* was added during a "temporary client detection"
*/
orig_entry->ttvn = ttvn;
goto out;
orig_entry->flags = flags;
goto sync_flags;
}

orig_entry = kmem_cache_zalloc(batadv_tt_orig_cache, GFP_ATOMIC);
@@ -1586,6 +1635,7 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
batadv_tt_global_size_inc(orig_node, tt_global->common.vid);
orig_entry->orig_node = orig_node;
orig_entry->ttvn = ttvn;
orig_entry->flags = flags;
kref_init(&orig_entry->refcount);

kref_get(&orig_entry->refcount);
@@ -1593,6 +1643,8 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
&tt_global->orig_list);
atomic_inc(&tt_global->orig_list_count);

sync_flags:
batadv_tt_global_sync_flags(tt_global);
out:
if (orig_entry)
batadv_tt_orig_list_entry_put(orig_entry);
@@ -1656,7 +1708,9 @@ static bool batadv_tt_global_add(struct batadv_priv *bat_priv,
ether_addr_copy(common->addr, tt_addr);
common->vid = vid;

common->flags = flags;
if (!is_multicast_ether_addr(common->addr))
common->flags = flags & (~BATADV_TT_SYNC_MASK);

tt_global_entry->roam_at = 0;
/* node must store current time in case of roaming. This is
* needed to purge this entry out on timeout (if nobody claims
@@ -1698,7 +1752,7 @@ static bool batadv_tt_global_add(struct batadv_priv *bat_priv,
if (!(common->flags & BATADV_TT_CLIENT_TEMP))
goto out;
if (batadv_tt_global_entry_has_orig(tt_global_entry,
orig_node))
orig_node, NULL))
goto out_remove;
batadv_tt_global_del_orig_list(tt_global_entry);
goto add_orig_entry;
@@ -1716,10 +1770,11 @@ static bool batadv_tt_global_add(struct batadv_priv *bat_priv,
}

/* the change can carry possible "attribute" flags like the
* TT_CLIENT_WIFI, therefore they have to be copied in the
* TT_CLIENT_TEMP, therefore they have to be copied in the
* client entry
*/
common->flags |= flags;
if (!is_multicast_ether_addr(common->addr))
common->flags |= flags & (~BATADV_TT_SYNC_MASK);

/* If there is the BATADV_TT_CLIENT_ROAM flag set, there is only
* one originator left in the list and we previously received a
@@ -1736,7 +1791,8 @@ static bool batadv_tt_global_add(struct batadv_priv *bat_priv,
}
add_orig_entry:
/* add the new orig_entry (if needed) or update it */
batadv_tt_global_orig_entry_add(tt_global_entry, orig_node, ttvn);
batadv_tt_global_orig_entry_add(tt_global_entry, orig_node, ttvn,
flags & BATADV_TT_SYNC_MASK);

batadv_dbg(BATADV_DBG_TT, bat_priv,
"Creating new global tt entry: %pM (vid: %d, via %pM)\n",
@@ -1959,6 +2015,7 @@ batadv_tt_global_dump_subentry(struct sk_buff *msg, u32 portid, u32 seq,
struct batadv_tt_orig_list_entry *orig,
bool best)
{
u16 flags = (common->flags & (~BATADV_TT_SYNC_MASK)) | orig->flags;
void *hdr;
struct batadv_orig_node_vlan *vlan;
u8 last_ttvn;
@@ -1988,7 +2045,7 @@ batadv_tt_global_dump_subentry(struct sk_buff *msg, u32 portid, u32 seq,
nla_put_u8(msg, BATADV_ATTR_TT_LAST_TTVN, last_ttvn) ||
nla_put_u32(msg, BATADV_ATTR_TT_CRC32, crc) ||
nla_put_u16(msg, BATADV_ATTR_TT_VID, common->vid) ||
nla_put_u32(msg, BATADV_ATTR_TT_FLAGS, common->flags))
nla_put_u32(msg, BATADV_ATTR_TT_FLAGS, flags))
goto nla_put_failure;

if (best && nla_put_flag(msg, BATADV_ATTR_FLAG_BEST))
@@ -2602,6 +2659,7 @@ static u32 batadv_tt_global_crc(struct batadv_priv *bat_priv,
unsigned short vid)
{
struct batadv_hashtable *hash = bat_priv->tt.global_hash;
struct batadv_tt_orig_list_entry *tt_orig;
struct batadv_tt_common_entry *tt_common;
struct batadv_tt_global_entry *tt_global;
struct hlist_head *head;
@@ -2640,8 +2698,9 @@ static u32 batadv_tt_global_crc(struct batadv_priv *bat_priv,
/* find out if this global entry is announced by this
* originator
*/
if (!batadv_tt_global_entry_has_orig(tt_global,
orig_node))
tt_orig = batadv_tt_global_orig_entry_find(tt_global,
orig_node);
if (!tt_orig)
continue;

/* use network order to read the VID: this ensures that
@@ -2653,10 +2712,12 @@ static u32 batadv_tt_global_crc(struct batadv_priv *bat_priv,
/* compute the CRC on flags that have to be kept in sync
* among nodes
*/
flags = tt_common->flags & BATADV_TT_SYNC_MASK;
flags = tt_orig->flags;
crc_tmp = crc32c(crc_tmp, &flags, sizeof(flags));

crc ^= crc32c(crc_tmp, tt_common->addr, ETH_ALEN);

batadv_tt_orig_list_entry_put(tt_orig);
}
rcu_read_unlock();
}
@@ -2834,23 +2895,46 @@ unlock:
}

/**
* batadv_tt_local_valid - verify that given tt entry is a valid one
* batadv_tt_local_valid() - verify local tt entry and get flags
* @entry_ptr: to be checked local tt entry
* @data_ptr: not used but definition required to satisfy the callback prototype
* @flags: a pointer to store TT flags for this client to
*
* Checks the validity of the given local TT entry. If it is, then the provided
* flags pointer is updated.
*
* Return: true if the entry is a valid, false otherwise.
*/
static bool batadv_tt_local_valid(const void *entry_ptr, const void *data_ptr)
static bool batadv_tt_local_valid(const void *entry_ptr,
const void *data_ptr,
u8 *flags)
{
const struct batadv_tt_common_entry *tt_common_entry = entry_ptr;

if (tt_common_entry->flags & BATADV_TT_CLIENT_NEW)
return false;

if (flags)
*flags = tt_common_entry->flags;

return true;
}

/**
* batadv_tt_global_valid() - verify global tt entry and get flags
* @entry_ptr: to be checked global tt entry
* @data_ptr: an orig_node object (may be NULL)
* @flags: a pointer to store TT flags for this client to
*
* Checks the validity of the given global TT entry. If it is, then the provided
* flags pointer is updated either with the common (summed) TT flags if data_ptr
* is NULL or the specific, per originator TT flags otherwise.
*
* Return: true if the entry is a valid, false otherwise.
*/
static bool batadv_tt_global_valid(const void *entry_ptr,
const void *data_ptr)
const void *data_ptr,
u8 *flags)
{
const struct batadv_tt_common_entry *tt_common_entry = entry_ptr;
const struct batadv_tt_global_entry *tt_global_entry;
@@ -2864,7 +2948,8 @@ static bool batadv_tt_global_valid(const void *entry_ptr,
struct batadv_tt_global_entry,
common);

return batadv_tt_global_entry_has_orig(tt_global_entry, orig_node);
return batadv_tt_global_entry_has_orig(tt_global_entry, orig_node,
flags);
}

/**
@@ -2874,25 +2959,34 @@ static bool batadv_tt_global_valid(const void *entry_ptr,
* @hash: hash table containing the tt entries
* @tt_len: expected tvlv tt data buffer length in number of bytes
* @tvlv_buff: pointer to the buffer to fill with the TT data
* @valid_cb: function to filter tt change entries
* @valid_cb: function to filter tt change entries and to return TT flags
* @cb_data: data passed to the filter function as argument
*
* Fills the tvlv buff with the tt entries from the specified hash. If valid_cb
* is not provided then this becomes a no-op.
*/
static void batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
struct batadv_hashtable *hash,
void *tvlv_buff, u16 tt_len,
bool (*valid_cb)(const void *,
const void *),
const void *,
u8 *flags),
void *cb_data)
{
struct batadv_tt_common_entry *tt_common_entry;
struct batadv_tvlv_tt_change *tt_change;
struct hlist_head *head;
u16 tt_tot, tt_num_entries = 0;
u8 flags;
bool ret;
u32 i;

tt_tot = batadv_tt_entries(tt_len);
tt_change = (struct batadv_tvlv_tt_change *)tvlv_buff;

if (!valid_cb)
return;

rcu_read_lock();
for (i = 0; i < hash->size; i++) {
head = &hash->table[i];
@@ -2902,11 +2996,12 @@ static void batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
if (tt_tot == tt_num_entries)
break;

if ((valid_cb) && (!valid_cb(tt_common_entry, cb_data)))
ret = valid_cb(tt_common_entry, cb_data, &flags);
if (!ret)
continue;

ether_addr_copy(tt_change->addr, tt_common_entry->addr);
tt_change->flags = tt_common_entry->flags;
tt_change->flags = flags;
tt_change->vid = htons(tt_common_entry->vid);
memset(tt_change->reserved, 0,
sizeof(tt_change->reserved));
@@ -27,6 +27,7 @@
 #include <linux/compiler.h>
 #include <linux/if_ether.h>
 #include <linux/kref.h>
+#include <linux/mutex.h>
 #include <linux/netdevice.h>
 #include <linux/netlink.h>
 #include <linux/sched.h> /* for linux/wait.h */
 
@@ -81,11 +82,13 @@ enum batadv_dhcp_recipient {
  * @ogm_buff: buffer holding the OGM packet
  * @ogm_buff_len: length of the OGM packet buffer
  * @ogm_seqno: OGM sequence number - used to identify each OGM
+ * @ogm_buff_mutex: lock protecting ogm_buff and ogm_buff_len
  */
 struct batadv_hard_iface_bat_iv {
 	unsigned char *ogm_buff;
 	int ogm_buff_len;
 	atomic_t ogm_seqno;
+	struct mutex ogm_buff_mutex;
 };
 
 /**
 
@@ -139,7 +142,7 @@ struct batadv_hard_iface_bat_v {
  */
 struct batadv_hard_iface {
 	struct list_head list;
-	s16 if_num;
+	unsigned int if_num;
 	char if_status;
 	struct net_device *net_dev;
 	u8 num_bcasts;
 
@@ -966,12 +969,14 @@ struct batadv_softif_vlan {
  * @ogm_buff: buffer holding the OGM packet
  * @ogm_buff_len: length of the OGM packet buffer
  * @ogm_seqno: OGM sequence number - used to identify each OGM
+ * @ogm_buff_mutex: lock protecting ogm_buff and ogm_buff_len
  * @ogm_wq: workqueue used to schedule OGM transmissions
  */
 struct batadv_priv_bat_v {
 	unsigned char *ogm_buff;
 	int ogm_buff_len;
 	atomic_t ogm_seqno;
+	struct mutex ogm_buff_mutex;
 	struct delayed_work ogm_wq;
 };
 
@@ -1060,7 +1065,7 @@ struct batadv_priv {
 	atomic_t bcast_seqno;
 	atomic_t bcast_queue_left;
 	atomic_t batman_queue_left;
-	char num_ifaces;
+	unsigned int num_ifaces;
 	struct kobject *mesh_obj;
 	struct dentry *debug_dir;
 	struct hlist_head forw_bat_list;
 
@@ -1241,6 +1246,7 @@ struct batadv_tt_global_entry {
  * struct batadv_tt_orig_list_entry - orig node announcing a non-mesh client
  * @orig_node: pointer to orig node announcing this non-mesh client
  * @ttvn: translation table version number which added the non-mesh client
+ * @flags: per orig entry TT sync flags
  * @list: list node for batadv_tt_global_entry::orig_list
  * @refcount: number of contexts the object is used
  * @rcu: struct used for freeing in an RCU-safe manner
 
@@ -1248,6 +1254,7 @@ struct batadv_tt_global_entry {
 struct batadv_tt_orig_list_entry {
 	struct batadv_orig_node *orig_node;
 	u8 ttvn;
+	u8 flags;
 	struct hlist_node list;
 	struct kref refcount;
 	struct rcu_head rcu;
 
@@ -1397,6 +1404,7 @@ struct batadv_forw_packet {
  * @activate: start routing mechanisms when hard-interface is brought up
  *  (optional)
  * @enable: init routing info when hard-interface is enabled
+ * @enabled: notification when hard-interface was enabled (optional)
  * @disable: de-init routing info when hard-interface is disabled
  * @update_mac: (re-)init mac addresses of the protocol information
  *  belonging to this hard-interface
 
@@ -1405,6 +1413,7 @@ struct batadv_forw_packet {
 struct batadv_algo_iface_ops {
 	void (*activate)(struct batadv_hard_iface *hard_iface);
 	int (*enable)(struct batadv_hard_iface *hard_iface);
+	void (*enabled)(struct batadv_hard_iface *hard_iface);
 	void (*disable)(struct batadv_hard_iface *hard_iface);
 	void (*update_mac)(struct batadv_hard_iface *hard_iface);
 	void (*primary_set)(struct batadv_hard_iface *hard_iface);
 
@@ -1452,9 +1461,10 @@ struct batadv_algo_neigh_ops {
  */
 struct batadv_algo_orig_ops {
 	void (*free)(struct batadv_orig_node *orig_node);
-	int (*add_if)(struct batadv_orig_node *orig_node, int max_if_num);
-	int (*del_if)(struct batadv_orig_node *orig_node, int max_if_num,
-		      int del_if_num);
+	int (*add_if)(struct batadv_orig_node *orig_node,
+		      unsigned int max_if_num);
+	int (*del_if)(struct batadv_orig_node *orig_node,
+		      unsigned int max_if_num, unsigned int del_if_num);
 #ifdef CONFIG_BATMAN_ADV_DEBUGFS
 	void (*print)(struct batadv_priv *priv, struct seq_file *seq,
 		      struct batadv_hard_iface *hard_iface);
 
@@ -1466,6 +1476,7 @@ struct batadv_algo_orig_ops {
 
 /**
  * struct batadv_algo_gw_ops - mesh algorithm callbacks (GW specific)
+ * @init_sel_class: initialize GW selection class (optional)
  * @store_sel_class: parse and stores a new GW selection class (optional)
  * @show_sel_class: prints the current GW selection class (optional)
  * @get_best_gw_node: select the best GW from the list of available nodes
 
@@ -1476,6 +1487,7 @@ struct batadv_algo_orig_ops {
  * @dump: dump gateways to a netlink socket (optional)
  */
 struct batadv_algo_gw_ops {
+	void (*init_sel_class)(struct batadv_priv *bat_priv);
 	ssize_t (*store_sel_class)(struct batadv_priv *bat_priv, char *buff,
 				   size_t count);
 	ssize_t (*show_sel_class)(struct batadv_priv *bat_priv, char *buff);
 
@@ -55,30 +55,60 @@ static void cgrp_css_free(struct cgroup_subsys_state *css)
 	kfree(css_cls_state(css));
 }
 
+/*
+ * To avoid freezing of sockets creation for tasks with big number of threads
+ * and opened sockets lets release file_lock every 1000 iterated descriptors.
+ * New sockets will already have been created with new classid.
+ */
+
+struct update_classid_context {
+	u32 classid;
+	unsigned int batch;
+};
+
+#define UPDATE_CLASSID_BATCH 1000
+
 static int update_classid_sock(const void *v, struct file *file, unsigned n)
 {
 	int err;
+	struct update_classid_context *ctx = (void *)v;
 	struct socket *sock = sock_from_file(file, &err);
 
 	if (sock) {
 		spin_lock(&cgroup_sk_update_lock);
-		sock_cgroup_set_classid(&sock->sk->sk_cgrp_data,
-					(unsigned long)v);
+		sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, ctx->classid);
 		spin_unlock(&cgroup_sk_update_lock);
 	}
+	if (--ctx->batch == 0) {
+		ctx->batch = UPDATE_CLASSID_BATCH;
+		return n + 1;
+	}
 	return 0;
 }
 
+static void update_classid_task(struct task_struct *p, u32 classid)
+{
+	struct update_classid_context ctx = {
+		.classid = classid,
+		.batch = UPDATE_CLASSID_BATCH
+	};
+	unsigned int fd = 0;
+
+	do {
+		task_lock(p);
+		fd = iterate_fd(p->files, fd, update_classid_sock, &ctx);
+		task_unlock(p);
+		cond_resched();
+	} while (fd);
+}
+
 static void cgrp_attach(struct cgroup_taskset *tset)
 {
 	struct cgroup_subsys_state *css;
 	struct task_struct *p;
 
 	cgroup_taskset_for_each(p, css, tset) {
-		task_lock(p);
-		iterate_fd(p->files, 0, update_classid_sock,
-			   (void *)(unsigned long)css_cls_state(css)->classid);
-		task_unlock(p);
+		update_classid_task(p, css_cls_state(css)->classid);
 	}
 }
 
@@ -100,10 +130,7 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
 
 	css_task_iter_start(css, &it);
 	while ((p = css_task_iter_next(&it))) {
-		task_lock(p);
-		iterate_fd(p->files, 0, update_classid_sock,
-			   (void *)(unsigned long)cs->classid);
-		task_unlock(p);
+		update_classid_task(p, cs->classid);
 		cond_resched();
 	}
 	css_task_iter_end(&it);
 
@@ -30,7 +30,13 @@ const struct nla_policy ieee802154_policy[IEEE802154_ATTR_MAX + 1] = {
 	[IEEE802154_ATTR_HW_ADDR] = { .type = NLA_HW_ADDR, },
 	[IEEE802154_ATTR_PAN_ID] = { .type = NLA_U16, },
 	[IEEE802154_ATTR_CHANNEL] = { .type = NLA_U8, },
+	[IEEE802154_ATTR_BCN_ORD] = { .type = NLA_U8, },
+	[IEEE802154_ATTR_SF_ORD] = { .type = NLA_U8, },
+	[IEEE802154_ATTR_PAN_COORD] = { .type = NLA_U8, },
+	[IEEE802154_ATTR_BAT_EXT] = { .type = NLA_U8, },
+	[IEEE802154_ATTR_COORD_REALIGN] = { .type = NLA_U8, },
+	[IEEE802154_ATTR_PAGE] = { .type = NLA_U8, },
 	[IEEE802154_ATTR_DEV_TYPE] = { .type = NLA_U8, },
 	[IEEE802154_ATTR_COORD_SHORT_ADDR] = { .type = NLA_U16, },
 	[IEEE802154_ATTR_COORD_HW_ADDR] = { .type = NLA_HW_ADDR, },
 	[IEEE802154_ATTR_COORD_PAN_ID] = { .type = NLA_U16, },
 
@@ -1738,6 +1738,7 @@ void cipso_v4_error(struct sk_buff *skb, int error, u32 gateway)
 {
 	unsigned char optbuf[sizeof(struct ip_options) + 40];
 	struct ip_options *opt = (struct ip_options *)optbuf;
+	int res;
 
 	if (ip_hdr(skb)->protocol == IPPROTO_ICMP || error != -EACCES)
 		return;
 
@@ -1749,7 +1750,11 @@ void cipso_v4_error(struct sk_buff *skb, int error, u32 gateway)
 
 	memset(opt, 0, sizeof(struct ip_options));
 	opt->optlen = ip_hdr(skb)->ihl*4 - sizeof(struct iphdr);
-	if (__ip_options_compile(dev_net(skb->dev), opt, skb, NULL))
+	rcu_read_lock();
+	res = __ip_options_compile(dev_net(skb->dev), opt, skb, NULL);
+	rcu_read_unlock();
+
+	if (res)
 		return;
 
 	if (gateway)
 
@@ -60,7 +60,9 @@ int gre_del_protocol(const struct gre_protocol *proto, u8 version)
 }
 EXPORT_SYMBOL_GPL(gre_del_protocol);
 
-/* Fills in tpi and returns header length to be pulled. */
+/* Fills in tpi and returns header length to be pulled.
+ * Note that caller must use pskb_may_pull() before pulling GRE header.
+ */
 int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
 		     bool *csum_err, __be16 proto, int nhs)
 {
 
@@ -114,8 +116,14 @@ int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
 	 * - When dealing with WCCPv2, Skip extra 4 bytes in GRE header
 	 */
 	if (greh->flags == 0 && tpi->proto == htons(ETH_P_WCCP)) {
+		u8 _val, *val;
+
+		val = skb_header_pointer(skb, nhs + hdr_len,
+					 sizeof(_val), &_val);
+		if (!val)
+			return -EINVAL;
 		tpi->proto = proto;
-		if ((*(u8 *)options & 0xF0) != 0x40)
+		if ((*val & 0xF0) != 0x40)
 			hdr_len += 4;
 	}
 	tpi->hdr_len = hdr_len;
 
@@ -3220,6 +3220,10 @@ static void addrconf_dev_config(struct net_device *dev)
 	    (dev->type != ARPHRD_6LOWPAN) &&
 	    (dev->type != ARPHRD_NONE)) {
 		/* Alas, we support only Ethernet autoconfiguration. */
+		idev = __in6_dev_get(dev);
+		if (!IS_ERR_OR_NULL(idev) && dev->flags & IFF_UP &&
+		    dev->flags & IFF_MULTICAST)
+			ipv6_mc_up(idev);
 		return;
 	}
 
@@ -184,9 +184,15 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
 					retv = -EBUSY;
 					break;
 				}
-			} else if (sk->sk_protocol != IPPROTO_TCP)
+			} else if (sk->sk_protocol == IPPROTO_TCP) {
+				if (sk->sk_prot != &tcpv6_prot) {
+					retv = -EBUSY;
+					break;
+				}
 				break;
-
+			} else {
+				break;
+			}
 			if (sk->sk_state != TCP_ESTABLISHED) {
 				retv = -ENOTCONN;
 				break;
 
@@ -3841,7 +3841,7 @@ void __ieee80211_check_fast_rx_iface(struct ieee80211_sub_if_data *sdata)
 
 	lockdep_assert_held(&local->sta_mtx);
 
-	list_for_each_entry_rcu(sta, &local->sta_list, list) {
+	list_for_each_entry(sta, &local->sta_list, list) {
 		if (sdata != sta->sdata &&
 		    (!sta->sdata->bss || sta->sdata->bss != sdata->bss))
 			continue;
 
@@ -711,6 +711,8 @@ static const struct nla_policy nfnl_cthelper_policy[NFCTH_MAX+1] = {
 	[NFCTH_NAME] = { .type = NLA_NUL_STRING,
 			 .len = NF_CT_HELPER_NAME_LEN-1 },
 	[NFCTH_QUEUE_NUM] = { .type = NLA_U32, },
+	[NFCTH_PRIV_DATA_LEN] = { .type = NLA_U32, },
+	[NFCTH_STATUS] = { .type = NLA_U32, },
 };
 
 static const struct nfnl_callback nfnl_cthelper_cb[NFNL_MSG_CTHELPER_MAX] = {
 
@@ -193,13 +193,20 @@ exit:
 void nfc_hci_cmd_received(struct nfc_hci_dev *hdev, u8 pipe, u8 cmd,
 			  struct sk_buff *skb)
 {
-	u8 gate = hdev->pipes[pipe].gate;
 	u8 status = NFC_HCI_ANY_OK;
 	struct hci_create_pipe_resp *create_info;
 	struct hci_delete_pipe_noti *delete_info;
 	struct hci_all_pipe_cleared_noti *cleared_info;
+	u8 gate;
 
-	pr_debug("from gate %x pipe %x cmd %x\n", gate, pipe, cmd);
+	pr_debug("from pipe %x cmd %x\n", pipe, cmd);
 
+	if (pipe >= NFC_HCI_MAX_PIPES) {
+		status = NFC_HCI_ANY_E_NOK;
+		goto exit;
+	}
+
+	gate = hdev->pipes[pipe].gate;
+
 	switch (cmd) {
 	case NFC_HCI_ADM_NOTIFY_PIPE_CREATED:
 
@@ -387,8 +394,14 @@ void nfc_hci_event_received(struct nfc_hci_dev *hdev, u8 pipe, u8 event,
 			    struct sk_buff *skb)
 {
 	int r = 0;
-	u8 gate = hdev->pipes[pipe].gate;
+	u8 gate;
+
+	if (pipe >= NFC_HCI_MAX_PIPES) {
+		pr_err("Discarded event %x to invalid pipe %x\n", event, pipe);
+		goto exit;
+	}
 
+	gate = hdev->pipes[pipe].gate;
 	if (gate == NFC_HCI_INVALID_GATE) {
 		pr_err("Discarded event %x to unopened pipe %x\n", event, pipe);
 		goto exit;
 
@@ -62,7 +62,10 @@ static const struct nla_policy nfc_genl_policy[NFC_ATTR_MAX + 1] = {
 	[NFC_ATTR_LLC_SDP] = { .type = NLA_NESTED },
 	[NFC_ATTR_FIRMWARE_NAME] = { .type = NLA_STRING,
 				     .len = NFC_FIRMWARE_NAME_MAXSIZE },
+	[NFC_ATTR_SE_INDEX] = { .type = NLA_U32 },
 	[NFC_ATTR_SE_APDU] = { .type = NLA_BINARY },
+	[NFC_ATTR_VENDOR_ID] = { .type = NLA_U32 },
+	[NFC_ATTR_VENDOR_SUBCMD] = { .type = NLA_U32 },
 	[NFC_ATTR_VENDOR_DATA] = { .type = NLA_BINARY },
 
 };
 
@@ -697,6 +697,7 @@ static const struct nla_policy fq_policy[TCA_FQ_MAX + 1] = {
 	[TCA_FQ_FLOW_MAX_RATE] = { .type = NLA_U32 },
 	[TCA_FQ_BUCKETS_LOG] = { .type = NLA_U32 },
 	[TCA_FQ_FLOW_REFILL_DELAY] = { .type = NLA_U32 },
+	[TCA_FQ_ORPHAN_MASK] = { .type = NLA_U32 },
 	[TCA_FQ_LOW_RATE_THRESHOLD] = { .type = NLA_U32 },
 };
 
@@ -359,6 +359,8 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
 	[NL80211_ATTR_KEY_DEFAULT_TYPES] = { .type = NLA_NESTED },
 	[NL80211_ATTR_WOWLAN_TRIGGERS] = { .type = NLA_NESTED },
 	[NL80211_ATTR_STA_PLINK_STATE] = { .type = NLA_U8 },
+	[NL80211_ATTR_MEASUREMENT_DURATION] = { .type = NLA_U16 },
+	[NL80211_ATTR_MEASUREMENT_DURATION_MANDATORY] = { .type = NLA_FLAG },
 	[NL80211_ATTR_SCHED_SCAN_INTERVAL] = { .type = NLA_U32 },
 	[NL80211_ATTR_REKEY_DATA] = { .type = NLA_NESTED },
 	[NL80211_ATTR_SCAN_SUPP_RATES] = { .type = NLA_NESTED },
 
@@ -407,6 +409,8 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
 	[NL80211_ATTR_MDID] = { .type = NLA_U16 },
 	[NL80211_ATTR_IE_RIC] = { .type = NLA_BINARY,
 				  .len = IEEE80211_MAX_DATA_LEN },
+	[NL80211_ATTR_CRIT_PROT_ID] = { .type = NLA_U16 },
+	[NL80211_ATTR_MAX_CRIT_PROT_DURATION] = { .type = NLA_U16 },
 	[NL80211_ATTR_PEER_AID] = { .type = NLA_U16 },
 	[NL80211_ATTR_CH_SWITCH_COUNT] = { .type = NLA_U32 },
 	[NL80211_ATTR_CH_SWITCH_BLOCK_TX] = { .type = NLA_FLAG },
 
@@ -432,6 +436,7 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
 	[NL80211_ATTR_USER_PRIO] = { .type = NLA_U8 },
 	[NL80211_ATTR_ADMITTED_TIME] = { .type = NLA_U16 },
 	[NL80211_ATTR_SMPS_MODE] = { .type = NLA_U8 },
+	[NL80211_ATTR_OPER_CLASS] = { .type = NLA_U8 },
 	[NL80211_ATTR_MAC_MASK] = { .len = ETH_ALEN },
 	[NL80211_ATTR_WIPHY_SELF_MANAGED_REG] = { .type = NLA_FLAG },
 	[NL80211_ATTR_NETNS_FD] = { .type = NLA_U32 },
 
@@ -1730,7 +1730,7 @@ static void handle_channel_custom(struct wiphy *wiphy,
 		break;
 	}
 
-	if (IS_ERR(reg_rule)) {
+	if (IS_ERR_OR_NULL(reg_rule)) {
 		pr_debug("Disabling freq %d MHz as custom regd has no rule that fits it\n",
 			 chan->center_freq);
 		if (wiphy->regulatory_flags & REGULATORY_WIPHY_SELF_MANAGED) {