Merge 4.9.326 into android-4.9-q

Changes in 4.9.326
	Bluetooth: L2CAP: Fix use-after-free caused by l2cap_chan_put
	ntfs: fix use-after-free in ntfs_ucsncmp()
	scsi: ufs: host: Hold reference returned by of_parse_phandle()
	net: ping6: Fix memleak in ipv6_renew_options().
	net: sungem_phy: Add of_node_put() for reference returned by of_get_parent()
	netfilter: nf_queue: do not allow packet truncation below transport header offset
	ARM: crypto: comment out gcc warning that breaks clang builds
	mt7601u: add USB device ID for some versions of XiaoDu WiFi Dongle.
	ion: Make user_ion_handle_put_nolock() a void function
	selinux: Minor cleanups
	proc: Pass file mode to proc_pid_make_inode
	selinux: Clean up initialization of isec->sclass
	selinux: Convert isec->lock into a spinlock
	selinux: fix error initialization in inode_doinit_with_dentry()
	selinux: fix inode_doinit_with_dentry() LABEL_INVALID error handling
	include/uapi/linux/swab.h: fix userspace breakage, use __BITS_PER_LONG for swap
	init/main: Fix double "the" in comment
	init/main: properly align the multi-line comment
	init: move stack canary initialization after setup_arch
	init/main.c: extract early boot entropy from the passed cmdline
	ACPI: video: Force backlight native for some TongFang devices
	ACPI: video: Shortening quirk list by identifying Clevo by board_name only
	random: only call boot_init_stack_canary() once
	macintosh/adb: fix oob read in do_adb_query() function
	ALSA: bcd2000: Fix a UAF bug on the error path of probing
	add barriers to buffer_uptodate and set_buffer_uptodate
	KVM: SVM: Don't BUG if userspace injects an interrupt with GIF=0
	KVM: x86: Mark TSS busy during LTR emulation _after_ all fault checks
	ALSA: hda/conexant: Add quirk for LENOVO 20149 Notebook model
	ALSA: hda/cirrus - support for iMac 12,1 model
	vfs: Check the truncate maximum size in inode_newsize_ok()
	usbnet: Fix linkwatch use-after-free on disconnect
	parisc: Fix device names in /proc/iomem
	drm/nouveau: fix another off-by-one in nvbios_addr
	bpf: fix overflow in prog accounting
	fuse: limit nsec
	md-raid10: fix KASAN warning
	ia64, processor: fix -Wincompatible-pointer-types in ia64_get_irr()
	PCI: Add defines for normal and subtractive PCI bridges
	powerpc/fsl-pci: Fix Class Code of PCIe Root Port
	powerpc/powernv: Avoid crashing if rng is NULL
	MIPS: cpuinfo: Fix a warning for CONFIG_CPUMASK_OFFSTACK
	USB: HCD: Fix URB giveback issue in tasklet function
	netfilter: nf_tables: fix null deref due to zeroed list head
	scsi: zfcp: Fix missing auto port scan and thus missing target ports
	x86/olpc: fix 'logical not is only applied to the left hand side'
	spmi: trace: fix stack-out-of-bound access in SPMI tracing functions
	ext4: add EXT4_INODE_HAS_XATTR_SPACE macro in xattr.h
	ext4: make sure ext4_append() always allocates new block
	ext4: fix use-after-free in ext4_xattr_set_entry
	ext4: update s_overhead_clusters in the superblock during an on-line resize
	ext4: fix extent status tree race in writeback error recovery path
	ext4: correct max_inline_xattr_value_size computing
	dm raid: fix address sanitizer warning in raid_status
	net_sched: cls_route: remove from list when handle is 0
	btrfs: reject log replay if there is unsupported RO compat flag
	tcp: fix over estimation in sk_forced_mem_schedule()
	scsi: sg: Allow waiting for commands to complete on removed device
	Revert "net: usb: ax88179_178a needs FLAG_SEND_ZLP"
	Bluetooth: L2CAP: Fix l2cap_global_chan_by_psm regression
	nios2: time: Read timer in get_cycles only if initialized
	net/9p: Initialize the iounit field during fid creation
	net_sched: cls_route: disallow handle of 0
	ALSA: info: Fix llseek return value when using callback
	rds: add missing barrier to release_refill
	ata: libata-eh: Add missing command name
	btrfs: fix lost error handling when looking up extended ref on log replay
	can: ems_usb: fix clang's -Wunaligned-access warning
	NFSv4.1: RECLAIM_COMPLETE must handle EACCES
	SUNRPC: Reinitialise the backchannel request buffers before reuse
	pinctrl: nomadik: Fix refcount leak in nmk_pinctrl_dt_subnode_to_map
	pinctrl: qcom: msm8916: Allow CAMSS GP clocks to be muxed
	vsock: Fix memory leak in vsock_connect()
	xen/xenbus: fix return type in xenbus_file_read()
	atm: idt77252: fix use-after-free bugs caused by tst_timer
	nios2: page fault et.al. are *not* restartable syscalls...
	nios2: don't leave NULLs in sys_call_table[]
	nios2: traced syscall does need to check the syscall number
	nios2: fix syscall restart checks
	nios2: restarts apply only to the first sigframe we build...
	nios2: add force_successful_syscall_return()
	netfilter: nf_tables: really skip inactive sets when allocating name
	fec: Fix timer capture timing in `fec_ptp_enable_pps()`
	irqchip/tegra: Fix overflow implicit truncation warnings
	usb: host: ohci-ppc-of: Fix refcount leak bug
	gadgetfs: ep_io - wait until IRQ finishes
	cxl: Fix a memory leak in an error handling path
	drivers:md:fix a potential use-after-free bug
	ext4: avoid remove directory when directory is corrupted
	ext4: avoid resizing to a partial cluster size
	tty: serial: Fix refcount leak bug in ucc_uart.c
	vfio: Clear the caps->buf to NULL after free
	mips: cavium-octeon: Fix missing of_node_put() in octeon2_usb_clocks_start
	ALSA: core: Add async signal helpers
	ALSA: timer: Use deferred fasync helper
	powerpc/64: Init jump labels before parse_early_param()
	video: fbdev: i740fb: Check the argument of i740_calc_vclk()
	MIPS: tlbex: Explicitly compare _PAGE_NO_EXEC against 0
	Linux 4.9.326

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I3ca17af58cd0c61bd81028c496849592cfd22f0f
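Many of the fixes in this batch (sungem_phy, scsi: ufs, mips/cavium-octeon, pinctrl-nomadik, ucc_uart, ohci-ppc-of) plug the same class of leak: OF helpers such as of_parse_phandle() and of_get_parent() return a device-tree node with its refcount raised, and every exit path must drop that reference with of_node_put(). A minimal userspace sketch of the get/put discipline, using a hypothetical `node` type rather than the kernel's `struct device_node`:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct device_node's embedded kref. */
struct node {
    int refcount;
};

/* Mimics of_node_get(): take a counted reference. */
static struct node *node_get(struct node *n)
{
    if (n)
        n->refcount++;
    return n;
}

/* Mimics of_node_put(): drop a reference taken by node_get(). */
static void node_put(struct node *n)
{
    if (n)
        n->refcount--;
}

/* A lookup that returns a counted reference, like of_parse_phandle(). */
static struct node *lookup(struct node *n, int ok)
{
    return ok ? node_get(n) : NULL;
}

/* Correct pattern: every path that acquired a reference also drops it,
 * including the error paths the patches above fix. */
static int use_node(struct node *n, int ok)
{
    struct node *ref = lookup(n, ok);

    if (!ref)
        return -1;      /* nothing acquired, nothing to release */
    /* ... use ref ... */
    node_put(ref);
    return 0;
}
```

The bugs being fixed are exactly the paths where the `node_put()` call above was missing, so the counter never returns to its starting value.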
 Makefile | 2 +-
@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 9
-SUBLEVEL = 325
+SUBLEVEL = 326
 EXTRAVERSION =
 NAME = Roaring Lionus
@@ -29,8 +29,9 @@ MODULE_LICENSE("GPL");
  * While older versions of GCC do not generate incorrect code, they fail to
  * recognize the parallel nature of these functions, and emit plain ARM code,
  * which is known to be slower than the optimized ARM code in asm-arm/xor.h.
+ *
+ * #warning This code requires at least version 4.6 of GCC
  */
-#warning This code requires at least version 4.6 of GCC
 #endif
 
 #pragma GCC diagnostic ignored "-Wunused-variable"
@@ -554,7 +554,7 @@ ia64_get_irr(unsigned int vector)
 {
 	unsigned int reg = vector / 64;
 	unsigned int bit = vector % 64;
-	u64 irr;
+	unsigned long irr;
 
 	switch (reg) {
 	case 0: irr = ia64_getreg(_IA64_REG_CR_IRR0); break;
@@ -130,11 +130,12 @@ static void octeon2_usb_clocks_start(struct device *dev)
 					 "refclk-frequency", &clock_rate);
 		if (i) {
 			dev_err(dev, "No UCTL \"refclk-frequency\"\n");
+			of_node_put(uctl_node);
 			goto exit;
 		}
 		i = of_property_read_string(uctl_node,
 					    "refclk-type", &clock_type);
-
+		of_node_put(uctl_node);
 		if (!i && strcmp("crystal", clock_type) == 0)
 			is_crystal_clock = true;
 	}
@@ -162,7 +162,7 @@ static void *c_start(struct seq_file *m, loff_t *pos)
 {
 	unsigned long i = *pos;
 
-	return i < NR_CPUS ? (void *) (i + 1) : NULL;
+	return i < nr_cpu_ids ? (void *) (i + 1) : NULL;
 }
 
 static void *c_next(struct seq_file *m, void *v, loff_t *pos)
@@ -637,7 +637,7 @@ static __maybe_unused void build_convert_pte_to_entrylo(u32 **p,
 		return;
 	}
 
-	if (cpu_has_rixi && !!_PAGE_NO_EXEC) {
+	if (cpu_has_rixi && _PAGE_NO_EXEC != 0) {
 		if (fill_includes_sw_bits) {
 			UASM_i_ROTR(p, reg, reg, ilog2(_PAGE_GLOBAL));
 		} else {
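The two tlbex changes replace `!!_PAGE_NO_EXEC` and `!_PAGE_NO_EXEC` with explicit comparisons against 0. Both spellings are equivalent for integer operands; the explicit form merely avoids a clang warning about applying boolean negation to a constant expression. A small demonstration of the equivalence (plain C, not kernel code):

```c
#include <assert.h>

/* !!x normalizes any non-zero value to 1; x != 0 says the same thing
 * without tripping clang's warning on constant operands. */
static int as_bool_bangbang(long x) { return !!x; }
static int as_bool_compare(long x)  { return x != 0; }
```

Because the results are identical for every input, the change is purely a warning fix with no behavioral difference.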
@@ -2518,7 +2518,7 @@ static void check_pabits(void)
 	unsigned long entry;
 	unsigned pabits, fillbits;
 
-	if (!cpu_has_rixi || !_PAGE_NO_EXEC) {
+	if (!cpu_has_rixi || _PAGE_NO_EXEC == 0) {
 		/*
 		 * We'll only be making use of the fact that we can rotate bits
 		 * into the fill if the CPU supports RIXI, so don't bother
@@ -50,7 +50,8 @@
 	stw	r13, PT_R13(sp)
 	stw	r14, PT_R14(sp)
 	stw	r15, PT_R15(sp)
-	stw	r2, PT_ORIG_R2(sp)
+	movi	r24, -1
+	stw	r24, PT_ORIG_R2(sp)
 	stw	r7, PT_ORIG_R7(sp)
 
 	stw	ra, PT_RA(sp)
@@ -74,6 +74,8 @@ extern void show_regs(struct pt_regs *);
 	((struct pt_regs *)((unsigned long)current_thread_info() + THREAD_SIZE)\
 		- 1)
 
+#define force_successful_syscall_return() (current_pt_regs()->orig_r2 = -1)
+
 int do_syscall_trace_enter(void);
 void do_syscall_trace_exit(void);
 #endif /* __ASSEMBLY__ */
@@ -185,6 +185,7 @@ ENTRY(handle_system_call)
 	ldw	r5, PT_R5(sp)
 
 local_restart:
+	stw	r2, PT_ORIG_R2(sp)
 	/* Check that the requested system call is within limits */
 	movui	r1, __NR_syscalls
 	bgeu	r2, r1, ret_invsyscall
@@ -192,7 +193,6 @@ local_restart:
 	movhi	r11, %hiadj(sys_call_table)
 	add	r1, r1, r11
 	ldw	r1, %lo(sys_call_table)(r1)
-	beq	r1, r0, ret_invsyscall
 
 	/* Check if we are being traced */
 	GET_THREAD_INFO r11
@@ -213,6 +213,9 @@ local_restart:
 translate_rc_and_ret:
 	movi	r1, 0
 	bge	r2, zero, 3f
+	ldw	r1, PT_ORIG_R2(sp)
+	addi	r1, r1, 1
+	beq	r1, zero, 3f
 	sub	r2, zero, r2
 	movi	r1, 1
 3:
@@ -255,9 +258,9 @@ traced_system_call:
 	ldw	r6, PT_R6(sp)
 	ldw	r7, PT_R7(sp)
 
-	/* Fetch the syscall function, we don't need to check the boundaries
-	 * since this is already done.
-	 */
+	/* Fetch the syscall function. */
+	movui	r1, __NR_syscalls
+	bgeu	r2, r1, traced_invsyscall
 	slli	r1, r2, 2
 	movhi	r11,%hiadj(sys_call_table)
 	add	r1, r1, r11
@@ -276,6 +279,9 @@ traced_system_call:
 translate_rc_and_ret2:
 	movi	r1, 0
 	bge	r2, zero, 4f
+	ldw	r1, PT_ORIG_R2(sp)
+	addi	r1, r1, 1
+	beq	r1, zero, 4f
 	sub	r2, zero, r2
 	movi	r1, 1
 4:
@@ -287,6 +293,11 @@ end_translate_rc_and_ret2:
 	RESTORE_SWITCH_STACK
 	br	ret_from_exception
 
+	/* If the syscall number was invalid return ENOSYS */
+traced_invsyscall:
+	movi	r2, -ENOSYS
+	br	translate_rc_and_ret2
+
 Luser_return:
 	GET_THREAD_INFO	r11			/* get thread_info pointer */
 	ldw	r10, TI_FLAGS(r11)		/* get thread_info->flags */
@@ -336,9 +347,6 @@ external_interrupt:
 	/* skip if no interrupt is pending */
 	beq	r12, r0, ret_from_interrupt
 
-	movi	r24, -1
-	stw	r24, PT_ORIG_R2(sp)
-
 	/*
 	 * Process an external hardware interrupt.
 	 */
@@ -240,7 +240,7 @@ static int do_signal(struct pt_regs *regs)
 	/*
 	 * If we were from a system call, check for system call restarting...
 	 */
-	if (regs->orig_r2 >= 0) {
+	if (regs->orig_r2 >= 0 && regs->r1) {
 		continue_addr = regs->ea;
 		restart_addr = continue_addr - 4;
 		retval = regs->r2;
@@ -261,6 +261,7 @@ static int do_signal(struct pt_regs *regs)
 			regs->ea = restart_addr;
 			break;
 		}
+		regs->orig_r2 = -1;
 	}
 
 	if (get_signal(&ksig)) {
@@ -25,5 +25,6 @@
 #define __SYSCALL(nr, call) [nr] = (call),
 
 void *sys_call_table[__NR_syscalls] = {
+	[0 ... __NR_syscalls-1] = sys_ni_syscall,
 #include <asm/unistd.h>
 };
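The sys_call_table change relies on GCC's designated range initializer extension: `[0 ... N-1] = default` fills every slot first, and the later `#include`-generated `[nr] = (call)` designators override individual entries, so unimplemented syscall numbers resolve to sys_ni_syscall instead of NULL. A compact illustration of the same idiom, with a hypothetical table in place of the real syscall list (GNU C extension, accepted by gcc and clang):

```c
#include <assert.h>

#define N 8

static int deflt(void) { return -1; }  /* stand-in for sys_ni_syscall */
static int real3(void) { return 3; }

/* The range designator fills all N slots; the later designator for
 * slot 3 wins, exactly as the per-syscall entries do in the kernel. */
static int (*table[N])(void) = {
    [0 ... N - 1] = deflt,
    [3] = real3,
};
```

With the default fill in place, dispatching through any in-range index is safe even when no handler was registered for it.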
@@ -107,7 +107,10 @@ static struct nios2_clocksource nios2_cs = {
 
 cycles_t get_cycles(void)
 {
-	return nios2_timer_read(&nios2_cs.cs);
+	/* Only read timer if it has been initialized */
+	if (nios2_cs.timer.base)
+		return nios2_timer_read(&nios2_cs.cs);
+	return 0;
 }
 EXPORT_SYMBOL(get_cycles);
 
@@ -504,7 +504,6 @@ alloc_pa_dev(unsigned long hpa, struct hardware_path *mod_path)
 	dev->id.hversion_rev = iodc_data[1] & 0x0f;
 	dev->id.sversion = ((iodc_data[4] & 0x0f) << 16) |
 		(iodc_data[5] << 8) | iodc_data[6];
-	dev->hpa.name = parisc_pathname(dev);
 	dev->hpa.start = hpa;
 	/* This is awkward.  The STI spec says that gfx devices may occupy
 	 * 32MB or 64MB.  Unfortunately, we don't know how to tell whether
@@ -518,10 +517,10 @@ alloc_pa_dev(unsigned long hpa, struct hardware_path *mod_path)
 		dev->hpa.end = hpa + 0xfff;
 	}
 	dev->hpa.flags = IORESOURCE_MEM;
-	name = parisc_hardware_description(&dev->id);
-	if (name) {
-		strlcpy(dev->name, name, sizeof(dev->name));
-	}
+	dev->hpa.name = dev->name;
+	name = parisc_hardware_description(&dev->id) ? : "unknown";
+	snprintf(dev->name, sizeof(dev->name), "%s [%s]",
+		 name, parisc_pathname(dev));
 
 	/* Silently fail things like mouse ports which are subsumed within
 	 * the keyboard controller
@@ -682,6 +682,13 @@ void __init early_init_devtree(void *params)
 	of_scan_flat_dt(early_init_dt_scan_root, NULL);
 	of_scan_flat_dt(early_init_dt_scan_memory_ppc, NULL);
 
+	/*
+	 * As generic code authors expect to be able to use static keys
+	 * in early_param() handlers, we initialize the static keys just
+	 * before parsing early params (it's fine to call jump_label_init()
+	 * more than once).
+	 */
+	jump_label_init();
 	parse_early_param();
 
 	/* make sure we've parsed cmdline for mem= before this */
@@ -67,6 +67,8 @@ int powernv_get_random_real_mode(unsigned long *v)
 	struct powernv_rng *rng;
 
 	rng = raw_cpu_read(powernv_rng);
+	if (!rng)
+		return 0;
 
 	*v = rng_whiten(rng, in_rm64(rng->regs_real));
 
@@ -524,6 +524,7 @@ int fsl_add_bridge(struct platform_device *pdev, int is_primary)
 	struct resource rsrc;
 	const int *bus_range;
 	u8 hdr_type, progif;
+	u32 class_code;
 	struct device_node *dev;
 	struct ccsr_pci __iomem *pci;
 	u16 temp;
@@ -597,6 +598,13 @@ int fsl_add_bridge(struct platform_device *pdev, int is_primary)
 				PPC_INDIRECT_TYPE_SURPRESS_PRIMARY_BUS;
 		if (fsl_pcie_check_link(hose))
 			hose->indirect_type |= PPC_INDIRECT_TYPE_NO_PCIE_LINK;
+		/* Fix Class Code to PCI_CLASS_BRIDGE_PCI_NORMAL for pre-3.0 controller */
+		if (in_be32(&pci->block_rev1) < PCIE_IP_REV_3_0) {
+			early_read_config_dword(hose, 0, 0, PCIE_FSL_CSR_CLASSCODE, &class_code);
+			class_code &= 0xff;
+			class_code |= PCI_CLASS_BRIDGE_PCI_NORMAL << 8;
+			early_write_config_dword(hose, 0, 0, PCIE_FSL_CSR_CLASSCODE, class_code);
+		}
 	} else {
 		/*
 		 * Set PBFR(PCI Bus Function Register)[10] = 1 to
@@ -23,6 +23,7 @@ struct platform_device;
 
 #define PCIE_LTSSM	0x0404		/* PCIE Link Training and Status */
 #define PCIE_LTSSM_L0	0x16		/* L0 state */
+#define PCIE_FSL_CSR_CLASSCODE	0x474	/* FSL GPEX CSR */
 #define PCIE_IP_REV_2_2	0x02080202	/* PCIE IP block version Rev2.2 */
 #define PCIE_IP_REV_3_0	0x02080300	/* PCIE IP block version Rev3.0 */
 #define PIWAR_EN	0x80000000	/* Enable */
@@ -1713,16 +1713,6 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
 	case VCPU_SREG_TR:
 		if (seg_desc.s || (seg_desc.type != 1 && seg_desc.type != 9))
 			goto exception;
-		if (!seg_desc.p) {
-			err_vec = NP_VECTOR;
-			goto exception;
-		}
-		old_desc = seg_desc;
-		seg_desc.type |= 2; /* busy */
-		ret = ctxt->ops->cmpxchg_emulated(ctxt, desc_addr, &old_desc, &seg_desc,
-						  sizeof(seg_desc), &ctxt->exception);
-		if (ret != X86EMUL_CONTINUE)
-			return ret;
 		break;
 	case VCPU_SREG_LDTR:
 		if (seg_desc.s || seg_desc.type != 2)
@@ -1763,6 +1753,15 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
 				((u64)base3 << 32)))
 			return emulate_gp(ctxt, 0);
 	}
+
+	if (seg == VCPU_SREG_TR) {
+		old_desc = seg_desc;
+		seg_desc.type |= 2; /* busy */
+		ret = ctxt->ops->cmpxchg_emulated(ctxt, desc_addr, &old_desc, &seg_desc,
+						  sizeof(seg_desc), &ctxt->exception);
+		if (ret != X86EMUL_CONTINUE)
+			return ret;
+	}
 load:
 	ctxt->ops->set_segment(ctxt, selector, &seg_desc, base3, seg);
 	if (desc)
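The LTR emulation fix above moves the side effect (marking the TSS descriptor busy via cmpxchg) to after every check that can still fault, so a rejected load no longer leaves guest state modified. The general validate-then-commit shape can be sketched as follows (illustrative only, not the emulator's code):

```c
#include <assert.h>

static int committed;

static int valid(int x) { return x >= 0 && x < 100; }

/* All fault checks first; mutate state only once nothing can fail.
 * Committing before the checks would leave `committed` changed even
 * when the load is rejected - the bug the KVM patch fixes. */
static int load(int x)
{
    if (!valid(x))
        return -1;     /* reject without side effects */
    committed = x;     /* the "mark busy" step, now after the checks */
    return 0;
}
```

A failed call leaves the previous state intact, which is exactly the property the reordered emulator code restores.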
@@ -4492,8 +4492,6 @@ static void svm_set_irq(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	BUG_ON(!(gif_set(svm)));
-
 	trace_kvm_inj_virq(vcpu->arch.interrupt.nr);
 	++vcpu->stat.irq_injections;
 
@@ -85,7 +85,7 @@ static void send_ebook_state(void)
 		return;
 	}
 
-	if (!!test_bit(SW_TABLET_MODE, ebook_switch_idev->sw) == state)
+	if (test_bit(SW_TABLET_MODE, ebook_switch_idev->sw) == !!state)
 		return; /* Nothing new to report. */
 
 	input_report_switch(ebook_switch_idev, SW_TABLET_MODE, state);
@@ -150,23 +150,6 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
|
||||
.callback = video_detect_force_native,
|
||||
.ident = "Clevo NL5xRU",
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
|
||||
DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
|
||||
},
|
||||
},
|
||||
{
|
||||
.callback = video_detect_force_native,
|
||||
.ident = "Clevo NL5xRU",
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
|
||||
DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
|
||||
},
|
||||
},
|
||||
{
|
||||
.callback = video_detect_force_native,
|
||||
.ident = "Clevo NL5xRU",
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
|
||||
DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
|
||||
},
|
||||
},
|
||||
@@ -189,28 +172,60 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
|
||||
{
|
||||
.callback = video_detect_force_native,
|
||||
.ident = "Clevo NL5xNU",
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
|
||||
},
|
||||
},
|
||||
/*
|
||||
* The TongFang PF5PU1G, PF4NU1F, PF5NU1G, and PF5LUXG/TUXEDO BA15 Gen10,
|
||||
* Pulse 14/15 Gen1, and Pulse 15 Gen2 have the same problem as the Clevo
|
||||
* NL5xRU and NL5xNU/TUXEDO Aura 15 Gen1 and Gen2. See the description
|
||||
* above.
|
||||
*/
|
||||
{
|
||||
.callback = video_detect_force_native,
|
||||
.ident = "TongFang PF5PU1G",
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_BOARD_NAME, "PF5PU1G"),
|
||||
},
|
||||
},
|
||||
{
|
||||
.callback = video_detect_force_native,
|
||||
.ident = "TongFang PF4NU1F",
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_BOARD_NAME, "PF4NU1F"),
|
||||
},
|
||||
},
|
||||
{
|
||||
.callback = video_detect_force_native,
|
||||
.ident = "TongFang PF4NU1F",
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
|
||||
DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
|
||||
DMI_MATCH(DMI_BOARD_NAME, "PULSE1401"),
|
||||
},
|
||||
},
|
||||
{
|
||||
.callback = video_detect_force_native,
|
||||
.ident = "Clevo NL5xNU",
|
||||
.ident = "TongFang PF5NU1G",
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
|
||||
DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
|
||||
DMI_MATCH(DMI_BOARD_NAME, "PF5NU1G"),
|
||||
},
|
||||
},
|
||||
{
|
||||
.callback = video_detect_force_native,
|
||||
.ident = "Clevo NL5xNU",
|
||||
.ident = "TongFang PF5NU1G",
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
|
||||
DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
|
||||
DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
|
||||
DMI_MATCH(DMI_BOARD_NAME, "PULSE1501"),
|
||||
},
|
||||
},
|
||||
{
|
||||
.callback = video_detect_force_native,
|
||||
.ident = "TongFang PF5LUXG",
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_BOARD_NAME, "PF5LUXG"),
|
||||
},
|
||||
},
|
||||
|
||||
/*
|
||||
* These models have a working acpi_video backlight control, and using
|
||||
* native backlight causes a regression where backlight does not work
|
||||
|
||||
@@ -2439,6 +2439,7 @@ const char *ata_get_cmd_descript(u8 command)
 		{ ATA_CMD_WRITE_QUEUED_FUA_EXT,	"WRITE DMA QUEUED FUA EXT" },
 		{ ATA_CMD_FPDMA_READ,		"READ FPDMA QUEUED" },
 		{ ATA_CMD_FPDMA_WRITE,		"WRITE FPDMA QUEUED" },
+		{ ATA_CMD_NCQ_NON_DATA,		"NCQ NON-DATA" },
 		{ ATA_CMD_FPDMA_SEND,		"SEND FPDMA QUEUED" },
 		{ ATA_CMD_FPDMA_RECV,		"RECEIVE FPDMA QUEUED" },
 		{ ATA_CMD_PIO_READ,		"READ SECTOR(S)" },
@@ -3777,6 +3777,7 @@ static void __exit idt77252_exit(void)
 		card = idt77252_chain;
 		dev = card->atmdev;
 		idt77252_chain = card->next;
+		del_timer_sync(&card->tst_timer);
 
 		if (dev->phy->stop)
 			dev->phy->stop(dev);
@@ -33,7 +33,7 @@ nvbios_addr(struct nvkm_bios *bios, u32 *addr, u8 size)
 {
 	u32 p = *addr;
 
-	if (*addr > bios->image0_size && bios->imaged_addr) {
+	if (*addr >= bios->image0_size && bios->imaged_addr) {
 		*addr -= bios->image0_size;
 		*addr += bios->imaged_addr;
 	}
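The nouveau change turns `>` into `>=`: an address exactly equal to image0_size is the first byte *past* image 0, so it already belongs to the second image and must be remapped. The boundary logic can be sketched with a hypothetical remap helper (not the driver's actual code):

```c
#include <assert.h>

#define IMAGE0_SIZE 0x1000u

/* Remap addresses past image 0 into the second image's window.
 * Using >= makes the first byte after image 0 (offset 0x1000)
 * fall into image 1, which ">" would wrongly leave in image 0. */
static unsigned int remap(unsigned int addr, unsigned int image1_base)
{
    if (addr >= IMAGE0_SIZE)
        addr = addr - IMAGE0_SIZE + image1_base;
    return addr;
}
```

Valid offsets within image 0 are 0 through IMAGE0_SIZE - 1 inclusive, which is why the strict comparison was off by one.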
@@ -157,10 +157,10 @@ static int tegra_ictlr_suspend(void)
 		lic->cop_iep[i] = readl_relaxed(ictlr + ICTLR_COP_IEP_CLASS);
 
 		/* Disable COP interrupts */
-		writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
+		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
 
 		/* Disable CPU interrupts */
-		writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
+		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
 
 		/* Enable the wakeup sources of ictlr */
 		writel_relaxed(lic->ictlr_wake_mask[i], ictlr + ICTLR_CPU_IER_SET);
@@ -181,12 +181,12 @@ static void tegra_ictlr_resume(void)
 
 		writel_relaxed(lic->cpu_iep[i],
 			       ictlr + ICTLR_CPU_IEP_CLASS);
-		writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
+		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
 		writel_relaxed(lic->cpu_ier[i],
 			       ictlr + ICTLR_CPU_IER_SET);
 		writel_relaxed(lic->cop_iep[i],
 			       ictlr + ICTLR_COP_IEP_CLASS);
-		writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
+		writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
 		writel_relaxed(lic->cop_ier[i],
 			       ictlr + ICTLR_COP_IER_SET);
 	}
@@ -321,7 +321,7 @@ static int __init tegra_ictlr_init(struct device_node *node,
 		lic->base[i] = base;
 
 		/* Disable all interrupts */
-		writel_relaxed(~0UL, base + ICTLR_CPU_IER_CLR);
+		writel_relaxed(GENMASK(31, 0), base + ICTLR_CPU_IER_CLR);
 		/* All interrupts target IRQ */
 		writel_relaxed(0, base + ICTLR_CPU_IEP_CLASS);
 
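The tegra changes swap `~0ul` for `GENMASK(31, 0)` when writing 32-bit registers: on a 64-bit kernel `~0ul` is a 64-bit all-ones value, and narrowing it into a `u32` register write is what triggers the implicit-truncation warning, while GENMASK names the intended 32-bit mask exactly. The difference is easy to see with a simplified form of the kernel's unsigned-long GENMASK():

```c
#include <assert.h>

/* Simplified version of the kernel's GENMASK() for unsigned long:
 * a mask with bits h..l (inclusive) set. */
#define BITS_PER_LONG (8 * (int)sizeof(unsigned long))
#define GENMASK(h, l) \
    (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
```

On an LP64 target, `~0UL` is 0xffffffffffffffff while `GENMASK(31, 0)` is exactly 0xffffffff, so the register write no longer discards set bits.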
@@ -650,7 +650,7 @@ do_adb_query(struct adb_request *req)
 
 	switch(req->data[1]) {
 	case ADB_QUERY_GETDEVINFO:
-		if (req->nbytes < 3)
+		if (req->nbytes < 3 || req->data[2] >= 16)
 			break;
 		mutex_lock(&adb_handler_mutex);
 		req->reply[0] = adb_handler[req->data[2]].original_address;
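The adb fix adds `req->data[2] >= 16` because that byte is used as an index into the 16-entry adb_handler[] table; without the check a crafted query reads out of bounds. The guard has the following shape (hypothetical table size and lookup, mirroring the structure of the fix rather than the driver itself):

```c
#include <assert.h>

#define NHANDLERS 16

static int handler_addr[NHANDLERS];  /* stand-in for adb_handler[] */

/* Reject short requests and indexes past the table before reading. */
static int query_devinfo(const unsigned char *data, int nbytes)
{
    if (nbytes < 3 || data[2] >= NHANDLERS)
        return -1;                 /* the added bounds check */
    return handler_addr[data[2]];
}
```

Validating the index against the table size before the array access is the whole fix; the rest of the handler is unchanged.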
@@ -3173,7 +3173,7 @@ static void raid_status(struct dm_target *ti, status_type_t type,
 {
 	struct raid_set *rs = ti->private;
 	struct mddev *mddev = &rs->md;
-	struct r5conf *conf = mddev->private;
+	struct r5conf *conf = rs_is_raid456(rs) ? mddev->private : NULL;
 	int i, max_nr_stripes = conf ? conf->max_nr_stripes : 0;
 	bool array_in_sync;
 	unsigned int raid_param_cnt = 1; /* at least 1 for chunksize */
@@ -1785,9 +1785,12 @@ static int raid10_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
 	int err = 0;
 	int number = rdev->raid_disk;
 	struct md_rdev **rdevp;
-	struct raid10_info *p = conf->mirrors + number;
+	struct raid10_info *p;
 
 	print_conf(conf);
+	if (unlikely(number >= mddev->raid_disks))
+		return 0;
+	p = conf->mirrors + number;
 	if (rdev == p->rdev)
 		rdevp = &p->rdev;
 	else if (rdev == p->replacement)
@@ -2513,10 +2513,10 @@ static void raid5_end_write_request(struct bio *bi)
 	if (!test_and_clear_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags))
 		clear_bit(R5_LOCKED, &sh->dev[i].flags);
 	set_bit(STRIPE_HANDLE, &sh->state);
-	raid5_release_stripe(sh);
 
 	if (sh->batch_head && sh != sh->batch_head)
 		raid5_release_stripe(sh->batch_head);
+	raid5_release_stripe(sh);
 }
 
 static void raid5_build_block(struct stripe_head *sh, int i, int previous)
@@ -302,6 +302,7 @@ int afu_allocate_irqs(struct cxl_context *ctx, u32 count)
 
 out:
 	cxl_ops->release_irq_ranges(&ctx->irqs, ctx->afu->adapter);
+	bitmap_free(ctx->irq_bitmap);
 	afu_irq_name_free(ctx);
 	return -ENOMEM;
 }
@@ -206,7 +206,7 @@ struct __packed ems_cpc_msg {
 	__le32 ts_sec;	/* timestamp in seconds */
 	__le32 ts_nsec;	/* timestamp in nano seconds */
 
-	union {
+	union __packed {
 		u8 generic[64];
 		struct cpc_can_msg can_msg;
 		struct cpc_can_params can_params;
@@ -155,11 +155,7 @@ static int fec_ptp_enable_pps(struct fec_enet_private *fep, uint enable)
 	 * NSEC_PER_SEC - ts.tv_nsec. Add the remaining nanoseconds
 	 * to current timer would be next second.
 	 */
-	tempval = readl(fep->hwp + FEC_ATIME_CTRL);
-	tempval |= FEC_T_CTRL_CAPTURE;
-	writel(tempval, fep->hwp + FEC_ATIME_CTRL);
-
-	tempval = readl(fep->hwp + FEC_ATIME);
+	tempval = fep->cc.read(&fep->cc);
 	/* Convert the ptp local counter to 1588 timestamp */
 	ns = timecounter_cyc2time(&fep->tc, tempval);
 	ts = ns_to_timespec64(ns);
@@ -453,6 +453,7 @@ static int bcm5421_init(struct mii_phy* phy)
 		int can_low_power = 1;
 		if (np == NULL || of_get_property(np, "no-autolowpower", NULL))
 			can_low_power = 0;
+		of_node_put(np);
 		if (can_low_power) {
 			/* Enable automatic low-power */
 			sungem_phy_write(phy, 0x1c, 0x9002);
@@ -1703,7 +1703,7 @@ static const struct driver_info ax88179_info = {
 	.link_reset = ax88179_link_reset,
 	.reset = ax88179_reset,
 	.stop = ax88179_stop,
-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
 	.rx_fixup = ax88179_rx_fixup,
 	.tx_fixup = ax88179_tx_fixup,
 };
@@ -1716,7 +1716,7 @@ static const struct driver_info ax88178a_info = {
 	.link_reset = ax88179_link_reset,
 	.reset = ax88179_reset,
 	.stop = ax88179_stop,
-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
 	.rx_fixup = ax88179_rx_fixup,
 	.tx_fixup = ax88179_tx_fixup,
 };
@@ -1729,7 +1729,7 @@ static const struct driver_info cypress_GX3_info = {
 	.link_reset = ax88179_link_reset,
 	.reset = ax88179_reset,
 	.stop = ax88179_stop,
-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
 	.rx_fixup = ax88179_rx_fixup,
 	.tx_fixup = ax88179_tx_fixup,
 };
@@ -1742,7 +1742,7 @@ static const struct driver_info dlink_dub1312_info = {
 	.link_reset = ax88179_link_reset,
 	.reset = ax88179_reset,
 	.stop = ax88179_stop,
-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
 	.rx_fixup = ax88179_rx_fixup,
 	.tx_fixup = ax88179_tx_fixup,
 };
@@ -1755,7 +1755,7 @@ static const struct driver_info sitecom_info = {
 	.link_reset = ax88179_link_reset,
 	.reset = ax88179_reset,
 	.stop = ax88179_stop,
-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
 	.rx_fixup = ax88179_rx_fixup,
 	.tx_fixup = ax88179_tx_fixup,
 };
@@ -1768,7 +1768,7 @@ static const struct driver_info samsung_info = {
 	.link_reset = ax88179_link_reset,
 	.reset = ax88179_reset,
 	.stop = ax88179_stop,
-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
 	.rx_fixup = ax88179_rx_fixup,
 	.tx_fixup = ax88179_tx_fixup,
 };
@@ -1781,7 +1781,7 @@ static const struct driver_info lenovo_info = {
 	.link_reset = ax88179_link_reset,
 	.reset = ax88179_reset,
 	.stop = ax88179_stop,
-	.flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP,
+	.flags = FLAG_ETHER | FLAG_FRAMING_AX,
 	.rx_fixup = ax88179_rx_fixup,
 	.tx_fixup = ax88179_tx_fixup,
 };
@@ -847,13 +847,11 @@ int usbnet_stop (struct net_device *net)
 
 	mpn = !test_and_clear_bit(EVENT_NO_RUNTIME_PM, &dev->flags);
 
-	/* deferred work (task, timer, softirq) must also stop.
-	 * can't flush_scheduled_work() until we drop rtnl (later),
-	 * else workers could deadlock; so make workers a NOP.
-	 */
+	/* deferred work (timer, softirq, task) must also stop */
 	dev->flags = 0;
 	del_timer_sync (&dev->delay);
 	tasklet_kill (&dev->bh);
+	cancel_work_sync(&dev->kevent);
 	if (!pm)
 		usb_autopm_put_interface(dev->intf);
 
@@ -1577,8 +1575,6 @@ void usbnet_disconnect (struct usb_interface *intf)
 	net = dev->net;
 	unregister_netdev (net);
 
-	cancel_work_sync(&dev->kevent);
-
 	usb_scuttle_anchored_urbs(&dev->deferred);
 
 	if (dev->driver_info->unbind)
@@ -34,6 +34,7 @@ static struct usb_device_id mt7601u_device_table[] = {
 	{ USB_DEVICE(0x2717, 0x4106) },
 	{ USB_DEVICE(0x2955, 0x0001) },
+	{ USB_DEVICE(0x2955, 0x1001) },
 	{ USB_DEVICE(0x2955, 0x1003) },
 	{ USB_DEVICE(0x2a5f, 0x1000) },
 	{ USB_DEVICE(0x7392, 0x7710) },
 	{ 0, }
@@ -1455,8 +1455,10 @@ static int nmk_pinctrl_dt_subnode_to_map(struct pinctrl_dev *pctldev,
 
 	has_config = nmk_pinctrl_dt_get_config(np, &configs);
 	np_config = of_parse_phandle(np, "ste,config", 0);
-	if (np_config)
+	if (np_config) {
 		has_config |= nmk_pinctrl_dt_get_config(np_config, &configs);
+		of_node_put(np_config);
+	}
 	if (has_config) {
 		const char *gpio_name;
 		const char *pin;
@@ -852,8 +852,8 @@ static const struct msm_pingroup msm8916_groups[] = {
 	PINGROUP(28, pwr_modem_enabled_a, NA, NA, NA, NA, NA, qdss_tracedata_b, NA, atest_combodac),
 	PINGROUP(29, cci_i2c, NA, NA, NA, NA, NA, qdss_tracedata_b, NA, atest_combodac),
 	PINGROUP(30, cci_i2c, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
-	PINGROUP(31, cci_timer0, NA, NA, NA, NA, NA, NA, NA, NA),
-	PINGROUP(32, cci_timer1, NA, NA, NA, NA, NA, NA, NA, NA),
+	PINGROUP(31, cci_timer0, flash_strobe, NA, NA, NA, NA, NA, NA, NA),
+	PINGROUP(32, cci_timer1, flash_strobe, NA, NA, NA, NA, NA, NA, NA),
 	PINGROUP(33, cci_async, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
 	PINGROUP(34, pwr_nav_enabled_a, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
 	PINGROUP(35, pwr_crypto_enabled_a, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
@@ -144,27 +144,33 @@ void zfcp_fc_enqueue_event(struct zfcp_adapter *adapter,
 
 static int zfcp_fc_wka_port_get(struct zfcp_fc_wka_port *wka_port)
 {
+	int ret = -EIO;
+
 	if (mutex_lock_interruptible(&wka_port->mutex))
 		return -ERESTARTSYS;
 
 	if (wka_port->status == ZFCP_FC_WKA_PORT_OFFLINE ||
 	    wka_port->status == ZFCP_FC_WKA_PORT_CLOSING) {
 		wka_port->status = ZFCP_FC_WKA_PORT_OPENING;
-		if (zfcp_fsf_open_wka_port(wka_port))
+		if (zfcp_fsf_open_wka_port(wka_port)) {
+			/* could not even send request, nothing to wait for */
 			wka_port->status = ZFCP_FC_WKA_PORT_OFFLINE;
+			goto out;
+		}
 	}
 
-	mutex_unlock(&wka_port->mutex);
-
-	wait_event(wka_port->completion_wq,
+	wait_event(wka_port->opened,
 		   wka_port->status == ZFCP_FC_WKA_PORT_ONLINE ||
 		   wka_port->status == ZFCP_FC_WKA_PORT_OFFLINE);
 
 	if (wka_port->status == ZFCP_FC_WKA_PORT_ONLINE) {
 		atomic_inc(&wka_port->refcount);
-		return 0;
+		ret = 0;
+		goto out;
 	}
-	return -EIO;
+out:
+	mutex_unlock(&wka_port->mutex);
+	return ret;
 }
 
 static void zfcp_fc_wka_port_offline(struct work_struct *work)
@@ -180,9 +186,12 @@ static void zfcp_fc_wka_port_offline(struct work_struct *work)
 
 	wka_port->status = ZFCP_FC_WKA_PORT_CLOSING;
 	if (zfcp_fsf_close_wka_port(wka_port)) {
+		/* could not even send request, nothing to wait for */
 		wka_port->status = ZFCP_FC_WKA_PORT_OFFLINE;
-		wake_up(&wka_port->completion_wq);
+		goto out;
 	}
+	wait_event(wka_port->closed,
+		   wka_port->status == ZFCP_FC_WKA_PORT_OFFLINE);
+out:
 	mutex_unlock(&wka_port->mutex);
 }
@@ -192,13 +201,15 @@ static void zfcp_fc_wka_port_put(struct zfcp_fc_wka_port *wka_port)
 	if (atomic_dec_return(&wka_port->refcount) != 0)
 		return;
 	/* wait 10 milliseconds, other reqs might pop in */
-	schedule_delayed_work(&wka_port->work, HZ / 100);
+	queue_delayed_work(wka_port->adapter->work_queue, &wka_port->work,
+			   msecs_to_jiffies(10));
 }
 
 static void zfcp_fc_wka_port_init(struct zfcp_fc_wka_port *wka_port, u32 d_id,
 				  struct zfcp_adapter *adapter)
 {
-	init_waitqueue_head(&wka_port->completion_wq);
+	init_waitqueue_head(&wka_port->opened);
+	init_waitqueue_head(&wka_port->closed);
 
 	wka_port->adapter = adapter;
 	wka_port->d_id = d_id;
@@ -169,7 +169,8 @@ enum zfcp_fc_wka_status {
|
||||
/**
|
||||
* struct zfcp_fc_wka_port - representation of well-known-address (WKA) FC port
|
||||
* @adapter: Pointer to adapter structure this WKA port belongs to
|
||||
* @completion_wq: Wait for completion of open/close command
|
||||
* @opened: Wait for completion of open command
|
||||
* @closed: Wait for completion of close command
|
||||
* @status: Current status of WKA port
|
||||
* @refcount: Reference count to keep port open as long as it is in use
|
||||
* @d_id: FC destination id or well-known-address
|
||||
@@ -179,7 +180,8 @@ enum zfcp_fc_wka_status {
|
||||
*/
|
||||
struct zfcp_fc_wka_port {
|
||||
struct zfcp_adapter *adapter;
|
||||
wait_queue_head_t completion_wq;
|
||||
wait_queue_head_t opened;
|
||||
wait_queue_head_t closed;
|
||||
enum zfcp_fc_wka_status status;
|
||||
atomic_t refcount;
|
||||
u32 d_id;
|
||||
|
||||
@@ -1582,7 +1582,7 @@ static void zfcp_fsf_open_wka_port_handler(struct zfcp_fsf_req *req)
|
||||
wka_port->status = ZFCP_FC_WKA_PORT_ONLINE;
|
||||
}
|
||||
out:
|
||||
wake_up(&wka_port->completion_wq);
|
||||
wake_up(&wka_port->opened);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -1640,7 +1640,7 @@ static void zfcp_fsf_close_wka_port_handler(struct zfcp_fsf_req *req)
|
||||
}
|
||||
|
||||
wka_port->status = ZFCP_FC_WKA_PORT_OFFLINE;
|
||||
wake_up(&wka_port->completion_wq);
|
||||
wake_up(&wka_port->closed);
|
||||
}
|
||||
|
||||
/**
|
||||
|
||||
@@ -196,7 +196,7 @@ static void sg_link_reserve(Sg_fd * sfp, Sg_request * srp, int size);
static void sg_unlink_reserve(Sg_fd * sfp, Sg_request * srp);
static Sg_fd *sg_add_sfp(Sg_device * sdp);
static void sg_remove_sfp(struct kref *);
static Sg_request *sg_get_rq_mark(Sg_fd * sfp, int pack_id);
static Sg_request *sg_get_rq_mark(Sg_fd * sfp, int pack_id, bool *busy);
static Sg_request *sg_add_request(Sg_fd * sfp);
static int sg_remove_request(Sg_fd * sfp, Sg_request * srp);
static Sg_device *sg_get_dev(int dev);
@@ -418,6 +418,7 @@ sg_read(struct file *filp, char __user *buf, size_t count, loff_t * ppos)
Sg_fd *sfp;
Sg_request *srp;
int req_pack_id = -1;
bool busy;
sg_io_hdr_t *hp;
struct sg_header *old_hdr = NULL;
int retval = 0;
@@ -465,25 +466,19 @@ sg_read(struct file *filp, char __user *buf, size_t count, loff_t * ppos)
} else
req_pack_id = old_hdr->pack_id;
}
srp = sg_get_rq_mark(sfp, req_pack_id);
srp = sg_get_rq_mark(sfp, req_pack_id, &busy);
if (!srp) { /* now wait on packet to arrive */
if (atomic_read(&sdp->detaching)) {
retval = -ENODEV;
goto free_old_hdr;
}
if (filp->f_flags & O_NONBLOCK) {
retval = -EAGAIN;
goto free_old_hdr;
}
retval = wait_event_interruptible(sfp->read_wait,
(atomic_read(&sdp->detaching) ||
(srp = sg_get_rq_mark(sfp, req_pack_id))));
if (atomic_read(&sdp->detaching)) {
retval = -ENODEV;
goto free_old_hdr;
}
if (retval) {
/* -ERESTARTSYS as signal hit process */
((srp = sg_get_rq_mark(sfp, req_pack_id, &busy)) ||
(!busy && atomic_read(&sdp->detaching))));
if (!srp) {
/* signal or detaching */
if (!retval)
retval = -ENODEV;
goto free_old_hdr;
}
}
@@ -936,9 +931,7 @@ sg_ioctl(struct file *filp, unsigned int cmd_in, unsigned long arg)
if (result < 0)
return result;
result = wait_event_interruptible(sfp->read_wait,
(srp_done(sfp, srp) || atomic_read(&sdp->detaching)));
if (atomic_read(&sdp->detaching))
return -ENODEV;
srp_done(sfp, srp));
write_lock_irq(&sfp->rq_list_lock);
if (srp->done) {
srp->done = 2;
@@ -2095,19 +2088,28 @@ sg_unlink_reserve(Sg_fd * sfp, Sg_request * srp)
}

static Sg_request *
sg_get_rq_mark(Sg_fd * sfp, int pack_id)
sg_get_rq_mark(Sg_fd * sfp, int pack_id, bool *busy)
{
Sg_request *resp;
unsigned long iflags;

*busy = false;
write_lock_irqsave(&sfp->rq_list_lock, iflags);
list_for_each_entry(resp, &sfp->rq_list, entry) {
/* look for requests that are ready + not SG_IO owned */
if ((1 == resp->done) && (!resp->sg_io_owned) &&
/* look for requests that are not SG_IO owned */
if ((!resp->sg_io_owned) &&
((-1 == pack_id) || (resp->header.pack_id == pack_id))) {
resp->done = 2; /* guard against other readers */
write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
return resp;
switch (resp->done) {
case 0: /* request active */
*busy = true;
break;
case 1: /* request done; response ready to return */
resp->done = 2; /* guard against other readers */
write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
return resp;
case 2: /* response already being returned */
break;
}
}
}
write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
@@ -2161,6 +2163,15 @@ sg_remove_request(Sg_fd * sfp, Sg_request * srp)
res = 1;
}
write_unlock_irqrestore(&sfp->rq_list_lock, iflags);

/*
* If the device is detaching, wakeup any readers in case we just
* removed the last response, which would leave nothing for them to
* return other than -ENODEV.
*/
if (unlikely(atomic_read(&sfp->parentdp->detaching)))
wake_up_interruptible_all(&sfp->read_wait);

return res;
}

@@ -126,9 +126,20 @@ out:
return ret;
}

static bool phandle_exists(const struct device_node *np,
const char *phandle_name, int index)
{
struct device_node *parse_np = of_parse_phandle(np, phandle_name, index);

if (parse_np)
of_node_put(parse_np);

return parse_np != NULL;
}

#define MAX_PROP_SIZE 32
static int ufshcd_populate_vreg(struct device *dev, const char *name,
struct ufs_vreg **out_vreg)
struct ufs_vreg **out_vreg)
{
int ret = 0;
char prop_name[MAX_PROP_SIZE];
@@ -141,7 +152,7 @@ static int ufshcd_populate_vreg(struct device *dev, const char *name,
}

snprintf(prop_name, MAX_PROP_SIZE, "%s-supply", name);
if (!of_parse_phandle(np, prop_name, 0)) {
if (!phandle_exists(np, prop_name, 0)) {
dev_info(dev, "%s: Unable to find %s regulator, assuming enabled\n",
__func__, prop_name);
goto out;

@@ -64,14 +64,10 @@ static struct ion_handle *pass_to_user(struct ion_handle *handle)
}

/* Must hold the client lock */
static int user_ion_handle_put_nolock(struct ion_handle *handle)
static void user_ion_handle_put_nolock(struct ion_handle *handle)
{
int ret;

if (--handle->user_ref_count == 0)
ret = ion_handle_put_nolock(handle);

return ret;
ion_handle_put_nolock(handle);
}

static void user_ion_free_nolock(struct ion_client *client,

@@ -1143,6 +1143,8 @@ static unsigned int soc_info(unsigned int *rev_h, unsigned int *rev_l)
/* No compatible property, so try the name. */
soc_string = np->name;

of_node_put(np);

/* Extract the SOC number from the "PowerPC," string */
if ((sscanf(soc_string, "PowerPC,%u", &soc) != 1) || !soc)
return 0;

@@ -1803,7 +1803,6 @@ static void usb_giveback_urb_bh(unsigned long param)

spin_lock_irq(&bh->lock);
bh->running = true;
restart:
list_replace_init(&bh->head, &local_list);
spin_unlock_irq(&bh->lock);

@@ -1817,10 +1816,17 @@ static void usb_giveback_urb_bh(unsigned long param)
bh->completing_ep = NULL;
}

/* check if there are new URBs to giveback */
/*
* giveback new URBs next time to prevent this function
* from not exiting for a long time.
*/
spin_lock_irq(&bh->lock);
if (!list_empty(&bh->head))
goto restart;
if (!list_empty(&bh->head)) {
if (bh->high_prio)
tasklet_hi_schedule(&bh->bh);
else
tasklet_schedule(&bh->bh);
}
bh->running = false;
spin_unlock_irq(&bh->lock);
}
@@ -1845,7 +1851,7 @@ static void usb_giveback_urb_bh(unsigned long param)
void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)
{
struct giveback_urb_bh *bh;
bool running, high_prio_bh;
bool running;

/* pass status to tasklet via unlinked */
if (likely(!urb->unlinked))
@@ -1856,13 +1862,10 @@ void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)
return;
}

if (usb_pipeisoc(urb->pipe) || usb_pipeint(urb->pipe)) {
if (usb_pipeisoc(urb->pipe) || usb_pipeint(urb->pipe))
bh = &hcd->high_prio_bh;
high_prio_bh = true;
} else {
else
bh = &hcd->low_prio_bh;
high_prio_bh = false;
}

spin_lock(&bh->lock);
list_add_tail(&urb->urb_list, &bh->head);
@@ -1871,7 +1874,7 @@ void usb_hcd_giveback_urb(struct usb_hcd *hcd, struct urb *urb, int status)

if (running)
;
else if (high_prio_bh)
else if (bh->high_prio)
tasklet_hi_schedule(&bh->bh);
else
tasklet_schedule(&bh->bh);
@@ -2880,6 +2883,7 @@ int usb_add_hcd(struct usb_hcd *hcd,

/* initialize tasklets */
init_giveback_urb_bh(&hcd->high_prio_bh);
hcd->high_prio_bh.high_prio = true;
init_giveback_urb_bh(&hcd->low_prio_bh);

/* enable irqs just before we start the controller,

@@ -365,6 +365,7 @@ ep_io (struct ep_data *epdata, void *buf, unsigned len)
spin_unlock_irq (&epdata->dev->lock);

DBG (epdata->dev, "endpoint gone\n");
wait_for_completion(&done);
epdata->status = -ENODEV;
}
}

@@ -168,6 +168,7 @@ static int ohci_hcd_ppc_of_probe(struct platform_device *op)
release_mem_region(res.start, 0x4);
} else
pr_debug("%s: cannot get ehci offset from fdt\n", __FILE__);
of_node_put(np);
}

irq_dispose_mapping(irq);

@@ -1793,6 +1793,7 @@ struct vfio_info_cap_header *vfio_info_cap_add(struct vfio_info_cap *caps,
buf = krealloc(caps->buf, caps->size + size, GFP_KERNEL);
if (!buf) {
kfree(caps->buf);
caps->buf = NULL;
caps->size = 0;
return ERR_PTR(-ENOMEM);
}

@@ -399,7 +399,7 @@ static int i740fb_decode_var(const struct fb_var_screeninfo *var,
u32 xres, right, hslen, left, xtotal;
u32 yres, lower, vslen, upper, ytotal;
u32 vxres, xoffset, vyres, yoffset;
u32 bpp, base, dacspeed24, mem;
u32 bpp, base, dacspeed24, mem, freq;
u8 r7;
int i;

@@ -641,7 +641,12 @@ static int i740fb_decode_var(const struct fb_var_screeninfo *var,
par->atc[VGA_ATC_OVERSCAN] = 0;

/* Calculate VCLK that most closely matches the requested dot clock */
i740_calc_vclk((((u32)1e9) / var->pixclock) * (u32)(1e3), par);
freq = (((u32)1e9) / var->pixclock) * (u32)(1e3);
if (freq < I740_RFREQ_FIX) {
fb_dbg(info, "invalid pixclock\n");
freq = I740_RFREQ_FIX;
}
i740_calc_vclk(freq, par);

/* Since we program the clocks ourselves, always use VCLK2. */
par->misc |= 0x0C;

@@ -122,7 +122,7 @@ static ssize_t xenbus_file_read(struct file *filp,
{
struct xenbus_file_priv *u = filp->private_data;
struct read_buffer *rb;
unsigned i;
ssize_t i;
int ret;

mutex_lock(&u->reply_mutex);
@@ -142,7 +142,7 @@ again:
rb = list_entry(u->read_buffers.next, struct read_buffer, list);
i = 0;
while (i < len) {
unsigned sz = min((unsigned)len - i, rb->len - rb->cons);
size_t sz = min_t(size_t, len - i, rb->len - rb->cons);

ret = copy_to_user(ubuf + i, &rb->msg[rb->cons], sz);

@@ -111,6 +111,8 @@ EXPORT_SYMBOL(setattr_prepare);
*/
int inode_newsize_ok(const struct inode *inode, loff_t offset)
{
if (offset < 0)
return -EINVAL;
if (inode->i_size < offset) {
unsigned long limit;

@@ -2774,6 +2774,20 @@ int open_ctree(struct super_block *sb,
err = -EINVAL;
goto fail_alloc;
}
/*
* We have unsupported RO compat features, although RO mounted, we
* should not cause any metadata write, including log replay.
* Or we could screw up whatever the new feature requires.
*/
if (unlikely(features && btrfs_super_log_root(disk_super) &&
!btrfs_test_opt(fs_info, NOLOGREPLAY))) {
btrfs_err(fs_info,
"cannot replay dirty log with unsupported compat_ro features (0x%llx), try rescue=nologreplay",
features);
err = -EINVAL;
goto fail_alloc;
}

max_active = fs_info->thread_pool_size;

@@ -1074,7 +1074,9 @@ again:
extref = btrfs_lookup_inode_extref(NULL, root, path, name, namelen,
inode_objectid, parent_objectid, 0,
0);
if (!IS_ERR_OR_NULL(extref)) {
if (IS_ERR(extref)) {
return PTR_ERR(extref);
} else if (extref) {
u32 item_size;
u32 cur_offset = 0;
unsigned long base;

@@ -41,6 +41,9 @@ static int get_max_inline_xattr_value_size(struct inode *inode,
struct ext4_inode *raw_inode;
int free, min_offs;

if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
return 0;

min_offs = EXT4_SB(inode->i_sb)->s_inode_size -
EXT4_GOOD_OLD_INODE_SIZE -
EXT4_I(inode)->i_extra_isize -

@@ -1673,7 +1673,14 @@ static void mpage_release_unused_pages(struct mpage_da_data *mpd,
ext4_lblk_t start, last;
start = index << (PAGE_SHIFT - inode->i_blkbits);
last = end << (PAGE_SHIFT - inode->i_blkbits);

/*
* avoid racing with extent status tree scans made by
* ext4_insert_delayed_block()
*/
down_write(&EXT4_I(inode)->i_data_sem);
ext4_es_remove_extent(inode, start, last - start + 1);
up_write(&EXT4_I(inode)->i_data_sem);
}

pagevec_init(&pvec, 0);

@@ -51,6 +51,7 @@ static struct buffer_head *ext4_append(handle_t *handle,
struct inode *inode,
ext4_lblk_t *block)
{
struct ext4_map_blocks map;
struct buffer_head *bh;
int err;

@@ -60,6 +61,21 @@ static struct buffer_head *ext4_append(handle_t *handle,
return ERR_PTR(-ENOSPC);

*block = inode->i_size >> inode->i_sb->s_blocksize_bits;
map.m_lblk = *block;
map.m_len = 1;

/*
* We're appending new directory block. Make sure the block is not
* allocated yet, otherwise we will end up corrupting the
* directory.
*/
err = ext4_map_blocks(NULL, inode, &map, 0);
if (err < 0)
return ERR_PTR(err);
if (err) {
EXT4_ERROR_INODE(inode, "Logical block already allocated");
return ERR_PTR(-EFSCORRUPTED);
}

bh = ext4_bread(handle, inode, *block, EXT4_GET_BLOCKS_CREATE);
if (IS_ERR(bh))
@@ -2743,11 +2759,8 @@ bool ext4_empty_dir(struct inode *inode)
de = (struct ext4_dir_entry_2 *) (bh->b_data +
(offset & (sb->s_blocksize - 1)));
if (ext4_check_dir_entry(inode, NULL, de, bh,
bh->b_data, bh->b_size, offset)) {
offset = (offset | (sb->s_blocksize - 1)) + 1;
continue;
}
if (le32_to_cpu(de->inode)) {
bh->b_data, bh->b_size, offset) ||
le32_to_cpu(de->inode)) {
brelse(bh);
return false;
}

@@ -1446,6 +1446,7 @@ static void ext4_update_super(struct super_block *sb,
* Update the fs overhead information
*/
ext4_calculate_overhead(sb);
es->s_overhead_clusters = cpu_to_le32(sbi->s_overhead);

if (test_opt(sb, DEBUG))
printk(KERN_DEBUG "EXT4-fs: added group %u:"
@@ -1940,6 +1941,16 @@ int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count)
}
brelse(bh);

/*
* For bigalloc, trim the requested size to the nearest cluster
* boundary to avoid creating an unusable filesystem. We do this
* silently, instead of returning an error, to avoid breaking
* callers that blindly resize the filesystem to the full size of
* the underlying block device.
*/
if (ext4_has_feature_bigalloc(sb))
n_blocks_count &= ~((1 << EXT4_CLUSTER_BITS(sb)) - 1);

retry:
o_blocks_count = ext4_blocks_count(es);

@@ -1053,8 +1053,9 @@ int ext4_xattr_ibody_find(struct inode *inode, struct ext4_xattr_info *i,
struct ext4_inode *raw_inode;
int error;

if (EXT4_I(inode)->i_extra_isize == 0)
if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
return 0;

raw_inode = ext4_raw_inode(&is->iloc);
header = IHDR(inode, raw_inode);
is->s.base = is->s.first = IFIRST(header);
@@ -1107,8 +1108,9 @@ static int ext4_xattr_ibody_set(handle_t *handle, struct inode *inode,
struct ext4_xattr_search *s = &is->s;
int error;

if (EXT4_I(inode)->i_extra_isize == 0)
if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
return -ENOSPC;

error = ext4_xattr_set_entry(i, s, inode);
if (error)
return error;

@@ -76,6 +76,19 @@ struct ext4_xattr_entry {

#define EXT4_ZERO_XATTR_VALUE ((void *)-1)

/*
* If we want to add an xattr to the inode, we should make sure that
* i_extra_isize is not 0 and that the inode size is not less than
* EXT4_GOOD_OLD_INODE_SIZE + extra_isize + pad.
* EXT4_GOOD_OLD_INODE_SIZE extra_isize header entry pad data
* |--------------------------|------------|------|---------|---|-------|
*/
#define EXT4_INODE_HAS_XATTR_SPACE(inode) \
((EXT4_I(inode)->i_extra_isize != 0) && \
(EXT4_GOOD_OLD_INODE_SIZE + EXT4_I(inode)->i_extra_isize + \
sizeof(struct ext4_xattr_ibody_header) + EXT4_XATTR_PAD <= \
EXT4_INODE_SIZE((inode)->i_sb)))

struct ext4_xattr_info {
int name_index;
const char *name;

@@ -173,6 +173,12 @@ void fuse_change_attributes_common(struct inode *inode, struct fuse_attr *attr,
inode->i_uid = make_kuid(&init_user_ns, attr->uid);
inode->i_gid = make_kgid(&init_user_ns, attr->gid);
inode->i_blocks = attr->blocks;

/* Sanitize nsecs */
attr->atimensec = min_t(u32, attr->atimensec, NSEC_PER_SEC - 1);
attr->mtimensec = min_t(u32, attr->mtimensec, NSEC_PER_SEC - 1);
attr->ctimensec = min_t(u32, attr->ctimensec, NSEC_PER_SEC - 1);

inode->i_atime.tv_sec = attr->atime;
inode->i_atime.tv_nsec = attr->atimensec;
/* mtime from server may be stale due to local buffered write */

@@ -8229,6 +8229,9 @@ static int nfs41_reclaim_complete_handle_errors(struct rpc_task *task, struct nf
rpc_delay(task, NFS4_POLL_RETRY_MAX);
/* fall through */
case -NFS4ERR_RETRY_UNCACHED_REP:
case -EACCES:
dprintk("%s: failed to reclaim complete error %d for server %s, retrying\n",
__func__, task->tk_status, clp->cl_hostname);
return -EAGAIN;
case -NFS4ERR_BADSESSION:
case -NFS4ERR_DEADSESSION:

@@ -606,8 +606,12 @@ static int ntfs_attr_find(const ATTR_TYPE type, const ntfschar *name,
a = (ATTR_RECORD*)((u8*)ctx->attr +
le32_to_cpu(ctx->attr->length));
for (;; a = (ATTR_RECORD*)((u8*)a + le32_to_cpu(a->length))) {
if ((u8*)a < (u8*)ctx->mrec || (u8*)a > (u8*)ctx->mrec +
le32_to_cpu(ctx->mrec->bytes_allocated))
u8 *mrec_end = (u8 *)ctx->mrec +
le32_to_cpu(ctx->mrec->bytes_allocated);
u8 *name_end = (u8 *)a + le16_to_cpu(a->name_offset) +
a->name_length * sizeof(ntfschar);
if ((u8*)a < (u8*)ctx->mrec || (u8*)a > mrec_end ||
name_end > mrec_end)
break;
ctx->attr = a;
if (unlikely(le32_to_cpu(a->type) > le32_to_cpu(type) ||

@@ -1677,7 +1677,8 @@ const struct inode_operations proc_pid_link_inode_operations = {

/* building an inode */

struct inode *proc_pid_make_inode(struct super_block * sb, struct task_struct *task)
struct inode *proc_pid_make_inode(struct super_block * sb,
struct task_struct *task, umode_t mode)
{
struct inode * inode;
struct proc_inode *ei;
@@ -1691,6 +1692,7 @@ struct inode *proc_pid_make_inode(struct super_block * sb, struct task_struct *t

/* Common stuff */
ei = PROC_I(inode);
inode->i_mode = mode;
inode->i_ino = get_next_ino();
inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
inode->i_op = &proc_def_inode_operations;
@@ -2042,7 +2044,9 @@ proc_map_files_instantiate(struct inode *dir, struct dentry *dentry,
struct proc_inode *ei;
struct inode *inode;

inode = proc_pid_make_inode(dir->i_sb, task);
inode = proc_pid_make_inode(dir->i_sb, task, S_IFLNK |
((mode & FMODE_READ ) ? S_IRUSR : 0) |
((mode & FMODE_WRITE) ? S_IWUSR : 0));
if (!inode)
return -ENOENT;

@@ -2051,12 +2055,6 @@ proc_map_files_instantiate(struct inode *dir, struct dentry *dentry,

inode->i_op = &proc_map_files_link_inode_operations;
inode->i_size = 64;
inode->i_mode = S_IFLNK;

if (mode & FMODE_READ)
inode->i_mode |= S_IRUSR;
if (mode & FMODE_WRITE)
inode->i_mode |= S_IWUSR;

d_set_d_op(dentry, &tid_map_files_dentry_operations);
d_add(dentry, inode);
@@ -2410,12 +2408,11 @@ static int proc_pident_instantiate(struct inode *dir,
struct inode *inode;
struct proc_inode *ei;

inode = proc_pid_make_inode(dir->i_sb, task);
inode = proc_pid_make_inode(dir->i_sb, task, p->mode);
if (!inode)
goto out;

ei = PROC_I(inode);
inode->i_mode = p->mode;
if (S_ISDIR(inode->i_mode))
set_nlink(inode, 2); /* Use getattr to fix if necessary */
if (p->iop)
@@ -3123,11 +3120,10 @@ static int proc_pid_instantiate(struct inode *dir,
{
struct inode *inode;

inode = proc_pid_make_inode(dir->i_sb, task);
inode = proc_pid_make_inode(dir->i_sb, task, S_IFDIR | S_IRUGO | S_IXUGO);
if (!inode)
goto out;

inode->i_mode = S_IFDIR|S_IRUGO|S_IXUGO;
inode->i_op = &proc_tgid_base_inode_operations;
inode->i_fop = &proc_tgid_base_operations;
inode->i_flags|=S_IMMUTABLE;
@@ -3422,11 +3418,10 @@ static int proc_task_instantiate(struct inode *dir,
struct dentry *dentry, struct task_struct *task, const void *ptr)
{
struct inode *inode;
inode = proc_pid_make_inode(dir->i_sb, task);
inode = proc_pid_make_inode(dir->i_sb, task, S_IFDIR | S_IRUGO | S_IXUGO);

if (!inode)
goto out;
inode->i_mode = S_IFDIR|S_IRUGO|S_IXUGO;
inode->i_op = &proc_tid_base_inode_operations;
inode->i_fop = &proc_tid_base_operations;
inode->i_flags|=S_IMMUTABLE;

@@ -183,14 +183,13 @@ proc_fd_instantiate(struct inode *dir, struct dentry *dentry,
struct proc_inode *ei;
struct inode *inode;

inode = proc_pid_make_inode(dir->i_sb, task);
inode = proc_pid_make_inode(dir->i_sb, task, S_IFLNK);
if (!inode)
goto out;

ei = PROC_I(inode);
ei->fd = fd;

inode->i_mode = S_IFLNK;
inode->i_op = &proc_pid_link_inode_operations;
inode->i_size = 64;

@@ -322,14 +321,13 @@ proc_fdinfo_instantiate(struct inode *dir, struct dentry *dentry,
struct proc_inode *ei;
struct inode *inode;

inode = proc_pid_make_inode(dir->i_sb, task);
inode = proc_pid_make_inode(dir->i_sb, task, S_IFREG | S_IRUSR);
if (!inode)
goto out;

ei = PROC_I(inode);
ei->fd = fd;

inode->i_mode = S_IFREG | S_IRUSR;
inode->i_fop = &proc_fdinfo_file_operations;

d_set_d_op(dentry, &tid_fd_dentry_operations);

@@ -163,7 +163,7 @@ extern int proc_pid_statm(struct seq_file *, struct pid_namespace *,
extern const struct dentry_operations pid_dentry_operations;
extern int pid_getattr(struct vfsmount *, struct dentry *, struct kstat *);
extern int proc_setattr(struct dentry *, struct iattr *);
extern struct inode *proc_pid_make_inode(struct super_block *, struct task_struct *);
extern struct inode *proc_pid_make_inode(struct super_block *, struct task_struct *, umode_t);
extern int pid_revalidate(struct dentry *, unsigned int);
extern int pid_delete_dentry(const struct dentry *);
extern int proc_pid_readdir(struct file *, struct dir_context *);

@@ -92,12 +92,11 @@ static int proc_ns_instantiate(struct inode *dir,
struct inode *inode;
struct proc_inode *ei;

inode = proc_pid_make_inode(dir->i_sb, task);
inode = proc_pid_make_inode(dir->i_sb, task, S_IFLNK | S_IRWXUGO);
if (!inode)
goto out;

ei = PROC_I(inode);
inode->i_mode = S_IFLNK|S_IRWXUGO;
inode->i_op = &proc_ns_link_inode_operations;
ei->ns_ops = ns_ops;

@@ -307,6 +307,8 @@ struct bpf_prog *bpf_prog_get_type(u32 ufd, enum bpf_prog_type type);
struct bpf_prog *bpf_prog_add(struct bpf_prog *prog, int i);
struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog);
void bpf_prog_put(struct bpf_prog *prog);
int __bpf_prog_charge(struct user_struct *user, u32 pages);
void __bpf_prog_uncharge(struct user_struct *user, u32 pages);

struct bpf_map *bpf_map_get_with_uref(u32 ufd);
struct bpf_map *__bpf_map_get(struct fd f);
@@ -403,6 +405,15 @@ static inline struct bpf_prog *bpf_prog_get_type_path(const char *name,
return ERR_PTR(-EOPNOTSUPP);
}

static inline int __bpf_prog_charge(struct user_struct *user, u32 pages)
{
return 0;
}

static inline void __bpf_prog_uncharge(struct user_struct *user, u32 pages)
{
}

static inline bool unprivileged_ebpf_enabled(void)
{
return false;

@@ -113,7 +113,6 @@ static __always_inline int test_clear_buffer_##name(struct buffer_head *bh) \
* of the form "mark_buffer_foo()". These are higher-level functions which
* do something in addition to setting a b_state bit.
*/
BUFFER_FNS(Uptodate, uptodate)
BUFFER_FNS(Dirty, dirty)
TAS_BUFFER_FNS(Dirty, dirty)
BUFFER_FNS(Lock, locked)
@@ -131,6 +130,30 @@ BUFFER_FNS(Meta, meta)
BUFFER_FNS(Prio, prio)
BUFFER_FNS(Defer_Completion, defer_completion)

static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
{
/*
* make it consistent with folio_mark_uptodate
* pairs with smp_load_acquire in buffer_uptodate
*/
smp_mb__before_atomic();
set_bit(BH_Uptodate, &bh->b_state);
}

static __always_inline void clear_buffer_uptodate(struct buffer_head *bh)
{
clear_bit(BH_Uptodate, &bh->b_state);
}

static __always_inline int buffer_uptodate(const struct buffer_head *bh)
{
/*
* make it consistent with folio_test_uptodate
* pairs with smp_mb__before_atomic in set_buffer_uptodate
*/
return (smp_load_acquire(&bh->b_state) & (1UL << BH_Uptodate)) != 0;
}

#define bh_offset(bh) ((unsigned long)(bh)->b_data & ~PAGE_MASK)

/* If we *know* page->private refers to buffer_heads */

@@ -55,6 +55,8 @@
#define PCI_CLASS_BRIDGE_EISA 0x0602
#define PCI_CLASS_BRIDGE_MC 0x0603
#define PCI_CLASS_BRIDGE_PCI 0x0604
#define PCI_CLASS_BRIDGE_PCI_NORMAL 0x060400
#define PCI_CLASS_BRIDGE_PCI_SUBTRACTIVE 0x060401
#define PCI_CLASS_BRIDGE_PCMCIA 0x0605
#define PCI_CLASS_BRIDGE_NUBUS 0x0606
#define PCI_CLASS_BRIDGE_CARDBUS 0x0607

@@ -65,6 +65,7 @@

struct giveback_urb_bh {
bool running;
bool high_prio;
spinlock_t lock;
struct list_head head;
struct tasklet_struct bh;

@@ -798,6 +798,7 @@ enum {
};

void l2cap_chan_hold(struct l2cap_chan *c);
struct l2cap_chan *l2cap_chan_hold_unless_zero(struct l2cap_chan *c);
void l2cap_chan_put(struct l2cap_chan *c);

static inline void l2cap_chan_lock(struct l2cap_chan *chan)

@@ -457,4 +457,12 @@ snd_pci_quirk_lookup_id(u16 vendor, u16 device,
}
#endif

/* async signal helpers */
struct snd_fasync;

int snd_fasync_helper(int fd, struct file *file, int on,
struct snd_fasync **fasyncp);
void snd_kill_fasync(struct snd_fasync *fasync, int signal, int poll);
void snd_fasync_free(struct snd_fasync *fasync);

#endif /* __SOUND_CORE_H */

@@ -20,15 +20,15 @@ TRACE_EVENT(spmi_write_begin,
 		__field		( u8, sid )
 		__field		( u16, addr )
 		__field		( u8, len )
-		__dynamic_array	( u8, buf, len + 1 )
+		__dynamic_array	( u8, buf, len )
 	),
 
 	TP_fast_assign(
 		__entry->opcode = opcode;
 		__entry->sid    = sid;
 		__entry->addr   = addr;
-		__entry->len    = len + 1;
-		memcpy(__get_dynamic_array(buf), buf, len + 1);
+		__entry->len    = len;
+		memcpy(__get_dynamic_array(buf), buf, len);
 	),
 
 	TP_printk("opc=%d sid=%02d addr=0x%04x len=%d buf=0x[%*phD]",
@@ -91,7 +91,7 @@ TRACE_EVENT(spmi_read_end,
 		__field		( u16, addr )
 		__field		( int, ret )
 		__field		( u8, len )
-		__dynamic_array	( u8, buf, len + 1 )
+		__dynamic_array	( u8, buf, len )
 	),
 
 	TP_fast_assign(
@@ -99,8 +99,8 @@ TRACE_EVENT(spmi_read_end,
 		__entry->sid    = sid;
 		__entry->addr   = addr;
 		__entry->ret    = ret;
-		__entry->len    = len + 1;
-		memcpy(__get_dynamic_array(buf), buf, len + 1);
+		__entry->len    = len;
+		memcpy(__get_dynamic_array(buf), buf, len);
 	),
 
 	TP_printk("opc=%d sid=%02d addr=0x%04x ret=%d len=%02d buf=0x[%*phD]",
@@ -134,9 +134,9 @@ static inline __attribute_const__ __u32 __fswahb32(__u32 val)
 
 static __always_inline unsigned long __swab(const unsigned long y)
 {
-#if BITS_PER_LONG == 64
+#if __BITS_PER_LONG == 64
 	return __swab64(y);
-#else /* BITS_PER_LONG == 32 */
+#else /* __BITS_PER_LONG == 32 */
 	return __swab32(y);
 #endif
 }
init/main.c
@@ -487,21 +487,15 @@ asmlinkage __visible void __init start_kernel(void)
 	smp_setup_processor_id();
 	debug_objects_early_init();
 
-	/*
-	 * Set up the the initial canary ASAP:
-	 */
-	add_latent_entropy();
-	boot_init_stack_canary();
-
 	cgroup_init_early();
 
 	local_irq_disable();
 	early_boot_irqs_disabled = true;
 
-	/*
-	 * Interrupts are still disabled. Do necessary setups, then
-	 * enable them
-	 */
+	/*
+	 * Interrupts are still disabled. Do necessary setups, then
+	 * enable them.
+	 */
 	boot_cpu_init();
 	page_address_init();
 	pr_notice("%s", linux_banner);
@@ -107,19 +107,29 @@ struct bpf_prog *bpf_prog_realloc(struct bpf_prog *fp_old, unsigned int size,
 	gfp_t gfp_flags = GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO |
 			  gfp_extra_flags;
 	struct bpf_prog *fp;
+	u32 pages, delta;
+	int ret;
 
 	BUG_ON(fp_old == NULL);
 
 	size = round_up(size, PAGE_SIZE);
-	if (size <= fp_old->pages * PAGE_SIZE)
+	pages = size / PAGE_SIZE;
+	if (pages <= fp_old->pages)
 		return fp_old;
 
+	delta = pages - fp_old->pages;
+	ret = __bpf_prog_charge(fp_old->aux->user, delta);
+	if (ret)
+		return NULL;
+
 	fp = __vmalloc(size, gfp_flags, PAGE_KERNEL);
-	if (fp != NULL) {
+	if (fp == NULL) {
+		__bpf_prog_uncharge(fp_old->aux->user, delta);
+	} else {
 		kmemcheck_annotate_bitfield(fp, meta);
 
 		memcpy(fp, fp_old, fp_old->pages * PAGE_SIZE);
-		fp->pages = size / PAGE_SIZE;
+		fp->pages = pages;
 		fp->aux->prog = fp;
 
 		/* We keep fp->aux from fp_old around in the new
@@ -652,19 +652,39 @@ static void free_used_maps(struct bpf_prog_aux *aux)
 	kfree(aux->used_maps);
 }
 
+int __bpf_prog_charge(struct user_struct *user, u32 pages)
+{
+	unsigned long memlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+	unsigned long user_bufs;
+
+	if (user) {
+		user_bufs = atomic_long_add_return(pages, &user->locked_vm);
+		if (user_bufs > memlock_limit) {
+			atomic_long_sub(pages, &user->locked_vm);
+			return -EPERM;
+		}
+	}
+
+	return 0;
+}
+
+void __bpf_prog_uncharge(struct user_struct *user, u32 pages)
+{
+	if (user)
+		atomic_long_sub(pages, &user->locked_vm);
+}
+
 static int bpf_prog_charge_memlock(struct bpf_prog *prog)
 {
 	struct user_struct *user = get_current_user();
-	unsigned long memlock_limit;
+	int ret;
 
-	memlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-
-	atomic_long_add(prog->pages, &user->locked_vm);
-	if (atomic_long_read(&user->locked_vm) > memlock_limit) {
-		atomic_long_sub(prog->pages, &user->locked_vm);
+	ret = __bpf_prog_charge(user, prog->pages);
+	if (ret) {
 		free_uid(user);
-		return -EPERM;
+		return ret;
 	}
 
 	prog->aux->user = user;
 	return 0;
 }
@@ -673,7 +693,7 @@ static void bpf_prog_uncharge_memlock(struct bpf_prog *prog)
 {
 	struct user_struct *user = prog->aux->user;
 
-	atomic_long_sub(prog->pages, &user->locked_vm);
+	__bpf_prog_uncharge(user, prog->pages);
 	free_uid(user);
 }
@@ -891,7 +891,7 @@ static struct p9_fid *p9_fid_create(struct p9_client *clnt)
 	unsigned long flags;
 
 	p9_debug(P9_DEBUG_FID, "clnt %p\n", clnt);
-	fid = kmalloc(sizeof(struct p9_fid), GFP_KERNEL);
+	fid = kzalloc(sizeof(struct p9_fid), GFP_KERNEL);
 	if (!fid)
 		return ERR_PTR(-ENOMEM);
 
@@ -902,11 +902,9 @@ static struct p9_fid *p9_fid_create(struct p9_client *clnt)
 	}
 	fid->fid = ret;
 
-	memset(&fid->qid, 0, sizeof(struct p9_qid));
 	fid->mode = -1;
 	fid->uid = current_fsuid();
 	fid->clnt = clnt;
-	fid->rdir = NULL;
 	spin_lock_irqsave(&clnt->lock, flags);
 	list_add(&fid->flist, &clnt->fidlist);
 	spin_unlock_irqrestore(&clnt->lock, flags);
@@ -113,7 +113,8 @@ static struct l2cap_chan *__l2cap_get_chan_by_scid(struct l2cap_conn *conn,
 }
 
 /* Find channel with given SCID.
- * Returns locked channel. */
+ * Returns a reference locked channel.
+ */
 static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn,
 						 u16 cid)
 {
@@ -121,15 +122,19 @@ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn,
 
 	mutex_lock(&conn->chan_lock);
 	c = __l2cap_get_chan_by_scid(conn, cid);
-	if (c)
-		l2cap_chan_lock(c);
+	if (c) {
+		/* Only lock if chan reference is not 0 */
+		c = l2cap_chan_hold_unless_zero(c);
+		if (c)
+			l2cap_chan_lock(c);
+	}
 	mutex_unlock(&conn->chan_lock);
 
 	return c;
 }
 
 /* Find channel with given DCID.
- * Returns locked channel.
+ * Returns a reference locked channel.
  */
 static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn,
 						 u16 cid)
@@ -138,8 +143,12 @@ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn,
 
 	mutex_lock(&conn->chan_lock);
 	c = __l2cap_get_chan_by_dcid(conn, cid);
-	if (c)
-		l2cap_chan_lock(c);
+	if (c) {
+		/* Only lock if chan reference is not 0 */
+		c = l2cap_chan_hold_unless_zero(c);
+		if (c)
+			l2cap_chan_lock(c);
+	}
 	mutex_unlock(&conn->chan_lock);
 
 	return c;
@@ -164,8 +173,12 @@ static struct l2cap_chan *l2cap_get_chan_by_ident(struct l2cap_conn *conn,
 
 	mutex_lock(&conn->chan_lock);
 	c = __l2cap_get_chan_by_ident(conn, ident);
-	if (c)
-		l2cap_chan_lock(c);
+	if (c) {
+		/* Only lock if chan reference is not 0 */
+		c = l2cap_chan_hold_unless_zero(c);
+		if (c)
+			l2cap_chan_lock(c);
+	}
 	mutex_unlock(&conn->chan_lock);
 
 	return c;
@@ -491,6 +504,16 @@ void l2cap_chan_hold(struct l2cap_chan *c)
 	kref_get(&c->kref);
 }
 
+struct l2cap_chan *l2cap_chan_hold_unless_zero(struct l2cap_chan *c)
+{
+	BT_DBG("chan %p orig refcnt %u", c, kref_read(&c->kref));
+
+	if (!kref_get_unless_zero(&c->kref))
+		return NULL;
+
+	return c;
+}
+
 void l2cap_chan_put(struct l2cap_chan *c)
 {
 	BT_DBG("chan %p orig refcnt %d", c, atomic_read(&c->kref.refcount));
@@ -1781,11 +1804,11 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
 						   bdaddr_t *dst,
 						   u8 link_type)
 {
-	struct l2cap_chan *c, *c1 = NULL;
+	struct l2cap_chan *c, *tmp, *c1 = NULL;
 
 	read_lock(&chan_list_lock);
 
-	list_for_each_entry(c, &chan_list, global_l) {
+	list_for_each_entry_safe(c, tmp, &chan_list, global_l) {
 		if (state && c->state != state)
 			continue;
 
@@ -1803,9 +1826,11 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
 		src_match = !bacmp(&c->src, src);
 		dst_match = !bacmp(&c->dst, dst);
 		if (src_match && dst_match) {
-			l2cap_chan_hold(c);
-			read_unlock(&chan_list_lock);
-			return c;
+			c = l2cap_chan_hold_unless_zero(c);
+			if (c) {
+				read_unlock(&chan_list_lock);
+				return c;
+			}
 		}
 
 		/* Closest match */
@@ -1818,7 +1843,7 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm,
 	}
 
 	if (c1)
-		l2cap_chan_hold(c1);
+		c1 = l2cap_chan_hold_unless_zero(c1);
 
 	read_unlock(&chan_list_lock);
 
@@ -4194,6 +4219,7 @@ static inline int l2cap_config_req(struct l2cap_conn *conn,
 
 unlock:
 	l2cap_chan_unlock(chan);
+	l2cap_chan_put(chan);
 	return err;
 }
 
@@ -4306,6 +4332,7 @@ static inline int l2cap_config_rsp(struct l2cap_conn *conn,
 
 done:
 	l2cap_chan_unlock(chan);
+	l2cap_chan_put(chan);
 	return err;
 }
 
@@ -5034,6 +5061,7 @@ send_move_response:
 	l2cap_send_move_chan_rsp(chan, result);
 
 	l2cap_chan_unlock(chan);
+	l2cap_chan_put(chan);
 
 	return 0;
 }
@@ -5126,6 +5154,7 @@ static void l2cap_move_continue(struct l2cap_conn *conn, u16 icid, u16 result)
 	}
 
 	l2cap_chan_unlock(chan);
+	l2cap_chan_put(chan);
 }
 
 static void l2cap_move_fail(struct l2cap_conn *conn, u8 ident, u16 icid,
@@ -5155,6 +5184,7 @@ static void l2cap_move_fail(struct l2cap_conn *conn, u8 ident, u16 icid,
 	l2cap_send_move_chan_cfm(chan, L2CAP_MC_UNCONFIRMED);
 
 	l2cap_chan_unlock(chan);
+	l2cap_chan_put(chan);
 }
 
 static int l2cap_move_channel_rsp(struct l2cap_conn *conn,
@@ -5218,6 +5248,7 @@ static int l2cap_move_channel_confirm(struct l2cap_conn *conn,
 	l2cap_send_move_chan_cfm_rsp(conn, cmd->ident, icid);
 
 	l2cap_chan_unlock(chan);
+	l2cap_chan_put(chan);
 
 	return 0;
 }
@@ -5253,6 +5284,7 @@ static inline int l2cap_move_channel_confirm_rsp(struct l2cap_conn *conn,
 	}
 
 	l2cap_chan_unlock(chan);
+	l2cap_chan_put(chan);
 
 	return 0;
 }
@@ -5625,12 +5657,11 @@ static inline int l2cap_le_credits(struct l2cap_conn *conn,
 	if (credits > max_credits) {
 		BT_ERR("LE credits overflow");
 		l2cap_send_disconn_req(chan, ECONNRESET);
-		l2cap_chan_unlock(chan);
 
 		/* Return 0 so that we don't trigger an unnecessary
 		 * command reject packet.
 		 */
-		return 0;
+		goto unlock;
 	}
 
 	chan->tx_credits += credits;
@@ -5643,7 +5674,9 @@ static inline int l2cap_le_credits(struct l2cap_conn *conn,
 	if (chan->tx_credits)
 		chan->ops->resume(chan);
 
+unlock:
 	l2cap_chan_unlock(chan);
+	l2cap_chan_put(chan);
 
 	return 0;
 }
@@ -6941,6 +6974,7 @@ drop:
 
 done:
 	l2cap_chan_unlock(chan);
+	l2cap_chan_put(chan);
 }
 
 static void l2cap_conless_channel(struct l2cap_conn *conn, __le16 psm,
@@ -7345,7 +7379,7 @@ static struct l2cap_chan *l2cap_global_fixed_chan(struct l2cap_chan *c,
 		if (src_type != c->src_type)
 			continue;
 
-		l2cap_chan_hold(c);
+		c = l2cap_chan_hold_unless_zero(c);
 		read_unlock(&chan_list_lock);
 		return c;
 	}
@@ -2986,11 +2986,12 @@ begin_fwd:
  */
 void sk_forced_mem_schedule(struct sock *sk, int size)
 {
-	int amt;
+	int delta, amt;
 
-	if (size <= sk->sk_forward_alloc)
+	delta = size - sk->sk_forward_alloc;
+	if (delta <= 0)
 		return;
-	amt = sk_mem_pages(size);
+	amt = sk_mem_pages(delta);
 	sk->sk_forward_alloc += amt * SK_MEM_QUANTUM;
 	sk_memory_allocated_add(sk, amt);
@@ -26,6 +26,11 @@
 #include <net/transp_v6.h>
 #include <net/ping.h>
 
+static void ping_v6_destroy(struct sock *sk)
+{
+	inet6_destroy_sock(sk);
+}
+
 /* Compatibility glue so we can support IPv6 when it's compiled as a module */
 static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len,
 				 int *addr_len)
@@ -180,6 +185,7 @@ struct proto pingv6_prot = {
 	.owner =	THIS_MODULE,
 	.init =		ping_init_sock,
 	.close =	ping_close,
+	.destroy =	ping_v6_destroy,
 	.connect =	ip6_datagram_connect_v6_only,
 	.disconnect =	__udp_disconnect,
 	.setsockopt =	ipv6_setsockopt,
@@ -119,6 +119,7 @@ static struct nft_trans *nft_trans_alloc(struct nft_ctx *ctx, int msg_type,
 	if (trans == NULL)
 		return NULL;
 
+	INIT_LIST_HEAD(&trans->list);
 	trans->msg_type = msg_type;
 	trans->ctx	= *ctx;
 
@@ -2514,7 +2515,7 @@ cont:
 	list_for_each_entry(i, &ctx->table->sets, list) {
 		int tmp;
 
-		if (!nft_is_active_next(ctx->net, set))
+		if (!nft_is_active_next(ctx->net, i))
 			continue;
 		if (!sscanf(i->name, name, &tmp))
 			continue;
@@ -807,11 +807,16 @@ nfqnl_enqueue_packet(struct nf_queue_entry *entry, unsigned int queuenum)
 }
 
 static int
-nfqnl_mangle(void *data, int data_len, struct nf_queue_entry *e, int diff)
+nfqnl_mangle(void *data, unsigned int data_len, struct nf_queue_entry *e, int diff)
 {
 	struct sk_buff *nskb;
 
 	if (diff < 0) {
+		unsigned int min_len = skb_transport_offset(e->skb);
+
+		if (data_len < min_len)
+			return -EINVAL;
+
 		if (pskb_trim(e->skb, data_len))
 			return -ENOMEM;
 	} else if (diff > 0) {
@@ -356,6 +356,7 @@ static int acquire_refill(struct rds_connection *conn)
 static void release_refill(struct rds_connection *conn)
 {
 	clear_bit(RDS_RECV_REFILL, &conn->c_flags);
+	smp_mb__after_atomic();
 
 	/* We don't use wait_on_bit()/wake_up_bit() because our waking is in a
 	 * hot path and finding waiters is very rare.  We don't want to walk
@@ -427,6 +427,9 @@ static int route4_set_parms(struct net *net, struct tcf_proto *tp,
 			goto errout;
 	}
 
+	if (!nhandle)
+		return -EINVAL;
+
 	h1 = to_hash(nhandle);
 	b = rtnl_dereference(head->table[h1]);
 	if (!b) {
@@ -486,6 +489,9 @@ static int route4_change(struct net *net, struct sk_buff *in_skb,
 	int err;
 	bool new = true;
 
+	if (!handle)
+		return -EINVAL;
+
 	if (opt == NULL)
 		return handle ? -EINVAL : 0;
 
@@ -534,7 +540,7 @@ static int route4_change(struct net *net, struct sk_buff *in_skb,
 	rcu_assign_pointer(f->next, f1);
 	rcu_assign_pointer(*fp, f);
 
-	if (fold && fold->handle && f->handle != fold->handle) {
+	if (fold) {
 		th = to_hash(fold->handle);
 		h = from_hash(fold->handle >> 16);
 		b = rtnl_dereference(head->table[th]);
@@ -69,6 +69,17 @@ static void xprt_free_allocation(struct rpc_rqst *req)
 	kfree(req);
 }
 
+static void xprt_bc_reinit_xdr_buf(struct xdr_buf *buf)
+{
+	buf->head[0].iov_len = PAGE_SIZE;
+	buf->tail[0].iov_len = 0;
+	buf->pages = NULL;
+	buf->page_len = 0;
+	buf->flags = 0;
+	buf->len = 0;
+	buf->buflen = PAGE_SIZE;
+}
+
 static int xprt_alloc_xdr_buf(struct xdr_buf *buf, gfp_t gfp_flags)
 {
 	struct page *page;
@@ -291,6 +302,9 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
 	 */
 	spin_lock_bh(&xprt->bc_pa_lock);
 	if (xprt_need_to_requeue(xprt)) {
+		xprt_bc_reinit_xdr_buf(&req->rq_snd_buf);
+		xprt_bc_reinit_xdr_buf(&req->rq_rcv_buf);
+		req->rq_rcv_buf.len = PAGE_SIZE;
 		list_add_tail(&req->rq_bc_pa_list, &xprt->bc_pa_list);
 		xprt->bc_alloc_count++;
 		req = NULL;
@@ -1205,7 +1205,14 @@ static int vsock_stream_connect(struct socket *sock, struct sockaddr *addr,
 		 * timeout fires.
 		 */
 		sock_hold(sk);
-		schedule_delayed_work(&vsk->connect_work, timeout);
+
+		/* If the timeout function is already scheduled,
+		 * reschedule it, then ungrab the socket refcount to
+		 * keep it balanced.
+		 */
+		if (mod_delayed_work(system_wq, &vsk->connect_work,
+				     timeout))
+			sock_put(sk);
 
 		/* Skip ahead to preserve error code set above. */
 		goto out_wait;
@@ -232,12 +232,13 @@ static int inode_alloc_security(struct inode *inode)
 	if (!isec)
 		return -ENOMEM;
 
-	mutex_init(&isec->lock);
+	spin_lock_init(&isec->lock);
 	INIT_LIST_HEAD(&isec->list);
 	isec->inode = inode;
 	isec->sid = SECINITSID_UNLABELED;
 	isec->sclass = SECCLASS_FILE;
 	isec->task_sid = sid;
+	isec->initialized = LABEL_INVALID;
 	inode->i_security = isec;
 
 	return 0;
@@ -248,7 +249,7 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
 /*
  * Try reloading inode security labels that have been marked as invalid.  The
  * @may_sleep parameter indicates when sleeping and thus reloading labels is
- * allowed; when set to false, returns ERR_PTR(-ECHILD) when the label is
+ * allowed; when set to false, returns -ECHILD when the label is
  * invalid.  The @opt_dentry parameter should be set to a dentry of the inode;
  * when no dentry is available, set it to NULL instead.
  */
@@ -1390,7 +1391,8 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
 {
 	struct superblock_security_struct *sbsec = NULL;
 	struct inode_security_struct *isec = inode->i_security;
-	u32 sid;
+	u32 task_sid, sid = 0;
+	u16 sclass;
 	struct dentry *dentry;
 #define INITCONTEXTLEN 255
 	char *context = NULL;
@@ -1398,12 +1400,15 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
 	int rc = 0;
 
 	if (isec->initialized == LABEL_INITIALIZED)
-		goto out;
+		return 0;
 
-	mutex_lock(&isec->lock);
+	spin_lock(&isec->lock);
 	if (isec->initialized == LABEL_INITIALIZED)
 		goto out_unlock;
 
+	if (isec->sclass == SECCLASS_FILE)
+		isec->sclass = inode_mode_to_security_class(inode->i_mode);
+
 	sbsec = inode->i_sb->s_security;
 	if (!(sbsec->flags & SE_SBINITIALIZED)) {
 		/* Defer initialization until selinux_complete_init,
@@ -1416,12 +1421,18 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
 		goto out_unlock;
 	}
 
+	sclass = isec->sclass;
+	task_sid = isec->task_sid;
+	sid = isec->sid;
+	isec->initialized = LABEL_PENDING;
+	spin_unlock(&isec->lock);
+
 	switch (sbsec->behavior) {
 	case SECURITY_FS_USE_NATIVE:
 		break;
 	case SECURITY_FS_USE_XATTR:
 		if (!(inode->i_opflags & IOP_XATTR)) {
-			isec->sid = sbsec->def_sid;
+			sid = sbsec->def_sid;
 			break;
 		}
 		/* Need a dentry, since the xattr API requires one.
@@ -1443,7 +1454,7 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
 			 * inode_doinit with a dentry, before these inodes could
 			 * be used again by userspace.
 			 */
-			goto out_unlock;
+			goto out_invalid;
 		}
 
 		len = INITCONTEXTLEN;
@@ -1451,7 +1462,7 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
 		if (!context) {
 			rc = -ENOMEM;
 			dput(dentry);
-			goto out_unlock;
+			goto out;
 		}
 		context[len] = '\0';
 		rc = __vfs_getxattr(dentry, inode, XATTR_NAME_SELINUX, context, len);
@@ -1462,14 +1473,14 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
 			rc = __vfs_getxattr(dentry, inode, XATTR_NAME_SELINUX, NULL, 0);
 			if (rc < 0) {
 				dput(dentry);
-				goto out_unlock;
+				goto out;
 			}
 			len = rc;
 			context = kmalloc(len+1, GFP_NOFS);
 			if (!context) {
 				rc = -ENOMEM;
 				dput(dentry);
-				goto out_unlock;
+				goto out;
 			}
 			context[len] = '\0';
 			rc = __vfs_getxattr(dentry, inode, XATTR_NAME_SELINUX, context, len);
@@ -1481,7 +1492,7 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
 			       "%d for dev=%s ino=%ld\n", __func__,
 			       -rc, inode->i_sb->s_id, inode->i_ino);
 			kfree(context);
-			goto out_unlock;
+			goto out;
 		}
 		/* Map ENODATA to the default file SID */
 		sid = sbsec->def_sid;
@@ -1511,29 +1522,25 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
 			}
 		}
 		kfree(context);
-		isec->sid = sid;
 		break;
 	case SECURITY_FS_USE_TASK:
-		isec->sid = isec->task_sid;
+		sid = task_sid;
 		break;
 	case SECURITY_FS_USE_TRANS:
 		/* Default to the fs SID. */
-		isec->sid = sbsec->sid;
+		sid = sbsec->sid;
 
 		/* Try to obtain a transition SID. */
-		isec->sclass = inode_mode_to_security_class(inode->i_mode);
-		rc = security_transition_sid(isec->task_sid, sbsec->sid,
-					     isec->sclass, NULL, &sid);
+		rc = security_transition_sid(task_sid, sid, sclass, NULL, &sid);
 		if (rc)
-			goto out_unlock;
-		isec->sid = sid;
+			goto out;
 		break;
 	case SECURITY_FS_USE_MNTPOINT:
-		isec->sid = sbsec->mntpoint_sid;
+		sid = sbsec->mntpoint_sid;
 		break;
 	default:
 		/* Default to the fs superblock SID. */
-		isec->sid = sbsec->sid;
+		sid = sbsec->sid;
 
 		if ((sbsec->flags & SE_SBGENFS) && !S_ISLNK(inode->i_mode)) {
 			/* We must have a dentry to determine the label on
@@ -1556,26 +1563,39 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent
 			 * could be used again by userspace.
 			 */
 			if (!dentry)
-				goto out_unlock;
-			isec->sclass = inode_mode_to_security_class(inode->i_mode);
-			rc = selinux_genfs_get_sid(dentry, isec->sclass,
+				goto out_invalid;
+			rc = selinux_genfs_get_sid(dentry, sclass,
 						   sbsec->flags, &sid);
 			dput(dentry);
 			if (rc)
-				goto out_unlock;
-			isec->sid = sid;
+				goto out;
 		}
 		break;
 	}
 
-	isec->initialized = LABEL_INITIALIZED;
+out:
+	spin_lock(&isec->lock);
+	if (isec->initialized == LABEL_PENDING) {
+		if (rc) {
+			isec->initialized = LABEL_INVALID;
+			goto out_unlock;
+		}
+		isec->initialized = LABEL_INITIALIZED;
+		isec->sid = sid;
+	}
 
 out_unlock:
-	mutex_unlock(&isec->lock);
-out:
-	if (isec->sclass == SECCLASS_FILE)
-		isec->sclass = inode_mode_to_security_class(inode->i_mode);
+	spin_unlock(&isec->lock);
 	return rc;
+
+out_invalid:
+	spin_lock(&isec->lock);
+	if (isec->initialized == LABEL_PENDING) {
+		isec->initialized = LABEL_INVALID;
+		isec->sid = sid;
+	}
+	spin_unlock(&isec->lock);
+	return 0;
 }
 /* Convert a Linux signal to an access vector. */
 
@@ -3220,9 +3240,11 @@ static void selinux_inode_post_setxattr(struct dentry *dentry, const char *name,
 	}
 
 	isec = backing_inode_security(dentry);
+	spin_lock(&isec->lock);
 	isec->sclass = inode_mode_to_security_class(inode->i_mode);
 	isec->sid = newsid;
 	isec->initialized = LABEL_INITIALIZED;
+	spin_unlock(&isec->lock);
 
 	return;
 }
@@ -3319,9 +3341,11 @@ static int selinux_inode_setsecurity(struct inode *inode, const char *name,
 	if (rc)
 		return rc;
 
+	spin_lock(&isec->lock);
 	isec->sclass = inode_mode_to_security_class(inode->i_mode);
 	isec->sid = newsid;
 	isec->initialized = LABEL_INITIALIZED;
+	spin_unlock(&isec->lock);
 	return 0;
 }
 
@@ -3977,8 +4001,11 @@ static void selinux_task_to_inode(struct task_struct *p,
 	struct inode_security_struct *isec = inode->i_security;
 	u32 sid = task_sid(p);
 
+	spin_lock(&isec->lock);
+	isec->sclass = inode_mode_to_security_class(inode->i_mode);
 	isec->sid = sid;
 	isec->initialized = LABEL_INITIALIZED;
+	spin_unlock(&isec->lock);
 }
 
 /* Returns error only if unable to parse addresses */
@@ -4297,24 +4324,24 @@ static int selinux_socket_post_create(struct socket *sock, int family,
 	const struct task_security_struct *tsec = current_security();
 	struct inode_security_struct *isec = inode_security_novalidate(SOCK_INODE(sock));
 	struct sk_security_struct *sksec;
+	u16 sclass = socket_type_to_security_class(family, type, protocol);
+	u32 sid = SECINITSID_KERNEL;
 	int err = 0;
 
-	isec->sclass = socket_type_to_security_class(family, type, protocol);
-
-	if (kern)
-		isec->sid = SECINITSID_KERNEL;
-	else {
-		err = socket_sockcreate_sid(tsec, isec->sclass, &(isec->sid));
+	if (!kern) {
+		err = socket_sockcreate_sid(tsec, sclass, &sid);
 		if (err)
 			return err;
 	}
 
+	isec->sclass = sclass;
+	isec->sid = sid;
+	isec->initialized = LABEL_INITIALIZED;
+
 	if (sock->sk) {
 		sksec = sock->sk->sk_security;
-		sksec->sid = isec->sid;
-		sksec->sclass = isec->sclass;
+		sksec->sclass = sclass;
+		sksec->sid = sid;
 		err = selinux_netlbl_socket_post_create(sock->sk, family);
 	}
 
@@ -4498,16 +4525,22 @@ static int selinux_socket_accept(struct socket *sock, struct socket *newsock)
 	int err;
 	struct inode_security_struct *isec;
 	struct inode_security_struct *newisec;
+	u16 sclass;
+	u32 sid;
 
 	err = sock_has_perm(current, sock->sk, SOCKET__ACCEPT);
 	if (err)
 		return err;
 
-	newisec = inode_security_novalidate(SOCK_INODE(newsock));
-
 	isec = inode_security_novalidate(SOCK_INODE(sock));
-	newisec->sclass = isec->sclass;
-	newisec->sid = isec->sid;
+	spin_lock(&isec->lock);
+	sclass = isec->sclass;
+	sid = isec->sid;
+	spin_unlock(&isec->lock);
+
+	newisec = inode_security_novalidate(SOCK_INODE(newsock));
+	newisec->sclass = sclass;
+	newisec->sid = sid;
 	newisec->initialized = LABEL_INITIALIZED;
 
 	return 0;
@@ -6030,9 +6063,9 @@ static void selinux_inode_invalidate_secctx(struct inode *inode)
 {
 	struct inode_security_struct *isec = inode->i_security;
 
-	mutex_lock(&isec->lock);
+	spin_lock(&isec->lock);
 	isec->initialized = LABEL_INVALID;
-	mutex_unlock(&isec->lock);
+	spin_unlock(&isec->lock);
 }
 
 /*
@@ -39,7 +39,8 @@ struct task_security_struct {
 
 enum label_initialized {
 	LABEL_INVALID,		/* invalid or not initialized */
-	LABEL_INITIALIZED	/* initialized */
+	LABEL_INITIALIZED,	/* initialized */
+	LABEL_PENDING
 };
 
 struct inode_security_struct {
@@ -52,7 +53,7 @@ struct inode_security_struct {
 	u32 sid;		/* SID of this object */
 	u16 sclass;		/* security class of this object */
 	unsigned char initialized;	/* initialization flag */
-	struct mutex lock;
+	spinlock_t lock;
 };
 
 struct file_security_struct {
@@ -1301,7 +1301,7 @@ static int sel_make_bools(void)
 		goto out;
 
 	isec->sid = sid;
-	isec->initialized = 1;
+	isec->initialized = LABEL_INITIALIZED;
 	inode->i_fop = &sel_bool_ops;
 	inode->i_ino = i|SEL_BOOL_INO_OFFSET;
 	d_add(dentry, inode);
@@ -1835,7 +1835,7 @@ static int sel_fill_super(struct super_block *sb, void *data, int silent)
 	isec = (struct inode_security_struct *)inode->i_security;
 	isec->sid = SECINITSID_DEVNULL;
 	isec->sclass = SECCLASS_CHR_FILE;
-	isec->initialized = 1;
+	isec->initialized = LABEL_INITIALIZED;
 
 	init_special_inode(inode, S_IFCHR | S_IRUGO | S_IWUGO, MKDEV(MEM_MAJOR, 3));
 	d_add(dentry, inode);
@@ -127,9 +127,9 @@ static loff_t snd_info_entry_llseek(struct file *file, loff_t offset, int orig)
 	entry = data->entry;
 	mutex_lock(&entry->access);
 	if (entry->c.ops->llseek) {
-		offset = entry->c.ops->llseek(entry,
-					      data->file_private_data,
-					      file, offset, orig);
+		ret = entry->c.ops->llseek(entry,
+					   data->file_private_data,
+					   file, offset, orig);
 		goto out;
 	}
@@ -25,6 +25,7 @@
 #include <linux/time.h>
 #include <linux/slab.h>
 #include <linux/ioport.h>
+#include <linux/fs.h>
 #include <sound/core.h>
 
 #ifdef CONFIG_SND_DEBUG
@@ -153,3 +154,96 @@ snd_pci_quirk_lookup(struct pci_dev *pci, const struct snd_pci_quirk *list)
 }
 EXPORT_SYMBOL(snd_pci_quirk_lookup);
 #endif
+
+/*
+ * Deferred async signal helpers
+ *
+ * Below are a few helper functions to wrap the async signal handling
+ * in the deferred work.  The main purpose is to avoid the messy deadlock
+ * around tasklist_lock and co at the kill_fasync() invocation.
+ * fasync_helper() and kill_fasync() are replaced with snd_fasync_helper()
+ * and snd_kill_fasync(), respectively.  In addition, snd_fasync_free() has
+ * to be called at releasing the relevant file object.
+ */
+struct snd_fasync {
+	struct fasync_struct *fasync;
+	int signal;
+	int poll;
+	int on;
+	struct list_head list;
+};
+
+static DEFINE_SPINLOCK(snd_fasync_lock);
+static LIST_HEAD(snd_fasync_list);
+
+static void snd_fasync_work_fn(struct work_struct *work)
+{
+	struct snd_fasync *fasync;
+
+	spin_lock_irq(&snd_fasync_lock);
+	while (!list_empty(&snd_fasync_list)) {
+		fasync = list_first_entry(&snd_fasync_list, struct snd_fasync, list);
+		list_del_init(&fasync->list);
+		spin_unlock_irq(&snd_fasync_lock);
+		if (fasync->on)
+			kill_fasync(&fasync->fasync, fasync->signal, fasync->poll);
+		spin_lock_irq(&snd_fasync_lock);
+	}
+	spin_unlock_irq(&snd_fasync_lock);
+}
+
+static DECLARE_WORK(snd_fasync_work, snd_fasync_work_fn);
+
+int snd_fasync_helper(int fd, struct file *file, int on,
+		      struct snd_fasync **fasyncp)
+{
+	struct snd_fasync *fasync = NULL;
+
+	if (on) {
+		fasync = kzalloc(sizeof(*fasync), GFP_KERNEL);
+		if (!fasync)
+			return -ENOMEM;
+		INIT_LIST_HEAD(&fasync->list);
+	}
+
+	spin_lock_irq(&snd_fasync_lock);
+	if (*fasyncp) {
+		kfree(fasync);
+		fasync = *fasyncp;
+	} else {
+		if (!fasync) {
+			spin_unlock_irq(&snd_fasync_lock);
+			return 0;
+		}
+		*fasyncp = fasync;
+	}
+	fasync->on = on;
+	spin_unlock_irq(&snd_fasync_lock);
+	return fasync_helper(fd, file, on, &fasync->fasync);
+}
+EXPORT_SYMBOL_GPL(snd_fasync_helper);
+
+void snd_kill_fasync(struct snd_fasync *fasync, int signal, int poll)
+{
+	unsigned long flags;
+
+	if (!fasync || !fasync->on)
+		return;
+	spin_lock_irqsave(&snd_fasync_lock, flags);
+	fasync->signal = signal;
+	fasync->poll = poll;
+	list_move(&fasync->list, &snd_fasync_list);
+	schedule_work(&snd_fasync_work);
+	spin_unlock_irqrestore(&snd_fasync_lock, flags);
+}
+EXPORT_SYMBOL_GPL(snd_kill_fasync);
+
+void snd_fasync_free(struct snd_fasync *fasync)
+{
+	if (!fasync)
+		return;
+	fasync->on = 0;
+	flush_work(&snd_fasync_work);
+	kfree(fasync);
+}
+EXPORT_SYMBOL_GPL(snd_fasync_free);
@@ -74,7 +74,7 @@ struct snd_timer_user {
 	unsigned int filter;
 	struct timespec tstamp;		/* trigger tstamp */
 	wait_queue_head_t qchange_sleep;
-	struct fasync_struct *fasync;
+	struct snd_fasync *fasync;
 	struct mutex ioctl_lock;
 };

@@ -1293,7 +1293,7 @@ static void snd_timer_user_interrupt(struct snd_timer_instance *timeri,
 	}
      __wake:
 	spin_unlock(&tu->qlock);
-	kill_fasync(&tu->fasync, SIGIO, POLL_IN);
+	snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
 	wake_up(&tu->qchange_sleep);
 }

@@ -1330,7 +1330,7 @@ static void snd_timer_user_ccallback(struct snd_timer_instance *timeri,
 	spin_lock_irqsave(&tu->qlock, flags);
 	snd_timer_user_append_to_tqueue(tu, &r1);
 	spin_unlock_irqrestore(&tu->qlock, flags);
-	kill_fasync(&tu->fasync, SIGIO, POLL_IN);
+	snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
 	wake_up(&tu->qchange_sleep);
 }

@@ -1397,7 +1397,7 @@ static void snd_timer_user_tinterrupt(struct snd_timer_instance *timeri,
 	spin_unlock(&tu->qlock);
 	if (append == 0)
 		return;
-	kill_fasync(&tu->fasync, SIGIO, POLL_IN);
+	snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
 	wake_up(&tu->qchange_sleep);
 }

@@ -1439,6 +1439,7 @@ static int snd_timer_user_release(struct inode *inode, struct file *file)
 		if (tu->timeri)
 			snd_timer_close(tu->timeri);
 		mutex_unlock(&tu->ioctl_lock);
+		snd_fasync_free(tu->fasync);
 		kfree(tu->queue);
 		kfree(tu->tqueue);
 		kfree(tu);

@@ -2026,7 +2027,7 @@ static int snd_timer_user_fasync(int fd, struct file * file, int on)
 	struct snd_timer_user *tu;

 	tu = file->private_data;
-	return fasync_helper(fd, file, on, &tu->fasync);
+	return snd_fasync_helper(fd, file, on, &tu->fasync);
 }

 static ssize_t snd_timer_user_read(struct file *file, char __user *buffer,

@@ -409,6 +409,7 @@ static const struct snd_pci_quirk cs420x_fixup_tbl[] = {

 	/* codec SSID */
 	SND_PCI_QUIRK(0x106b, 0x0600, "iMac 14,1", CS420X_IMAC27_122),
+	SND_PCI_QUIRK(0x106b, 0x0900, "iMac 12,1", CS420X_IMAC27_122),
 	SND_PCI_QUIRK(0x106b, 0x1c00, "MacBookPro 8,1", CS420X_MBP81),
 	SND_PCI_QUIRK(0x106b, 0x2000, "iMac 12,2", CS420X_IMAC27_122),
 	SND_PCI_QUIRK(0x106b, 0x2800, "MacBookPro 10,1", CS420X_MBP101),

@@ -238,6 +238,7 @@ enum {
 	CXT_PINCFG_LEMOTE_A1205,
 	CXT_PINCFG_COMPAQ_CQ60,
 	CXT_FIXUP_STEREO_DMIC,
+	CXT_PINCFG_LENOVO_NOTEBOOK,
 	CXT_FIXUP_INC_MIC_BOOST,
 	CXT_FIXUP_HEADPHONE_MIC_PIN,
 	CXT_FIXUP_HEADPHONE_MIC,

@@ -698,6 +699,14 @@ static const struct hda_fixup cxt_fixups[] = {
 		.type = HDA_FIXUP_FUNC,
 		.v.func = cxt_fixup_stereo_dmic,
 	},
+	[CXT_PINCFG_LENOVO_NOTEBOOK] = {
+		.type = HDA_FIXUP_PINS,
+		.v.pins = (const struct hda_pintbl[]) {
+			{ 0x1a, 0x05d71030 },
+			{ }
+		},
+		.chain_id = CXT_FIXUP_STEREO_DMIC,
+	},
 	[CXT_FIXUP_INC_MIC_BOOST] = {
 		.type = HDA_FIXUP_FUNC,
 		.v.func = cxt5066_increase_mic_boost,

@@ -860,7 +869,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
 	SND_PCI_QUIRK(0x17aa, 0x3905, "Lenovo G50-30", CXT_FIXUP_STEREO_DMIC),
 	SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC),
 	SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
-	SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
+	SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_PINCFG_LENOVO_NOTEBOOK),
 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo G50-70", CXT_FIXUP_STEREO_DMIC),
 	SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC),
 	SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),

@@ -350,7 +350,8 @@ static int bcd2000_init_midi(struct bcd2000 *bcd2k)
 static void bcd2000_free_usb_related_resources(struct bcd2000 *bcd2k,
 						struct usb_interface *interface)
 {
-	/* usb_kill_urb not necessary, urb is aborted automatically */
+	usb_kill_urb(bcd2k->midi_out_urb);
+	usb_kill_urb(bcd2k->midi_in_urb);

 	usb_free_urb(bcd2k->midi_out_urb);
 	usb_free_urb(bcd2k->midi_in_urb);