Merge android-4.9.96 (320d53a) into msm-4.9
* refs/heads/tmp-320d53a:
  Linux 4.9.96
  block/mq: fix potential deadlock during cpu hotplug
  writeback: safer lock nesting
  fanotify: fix logic of events on child
  mm/filemap.c: fix NULL pointer in page_cache_tree_insert()
  autofs: mount point create should honour passed in mode
  Don't leak MNT_INTERNAL away from internal mounts
  rpc_pipefs: fix double-dput()
  orangefs_kill_sb(): deal with allocation failures
  hypfs_kill_super(): deal with failed allocations
  jffs2_kill_sb(): deal with failed allocations
  udf: Fix leak of UTF-16 surrogates into encoded strings
  powerpc/lib: Fix off-by-one in alternate feature patching
  powerpc/eeh: Fix enabling bridge MMIO windows
  MIPS: memset.S: Fix clobber of v1 in last_fixup
  MIPS: memset.S: Fix return of __clear_user from Lpartial_fixup
  MIPS: memset.S: EVA & fault support for small_memset
  MIPS: uaccess: Add micromips clobbers to bzero invocation
  HID: hidraw: Fix crash on HIDIOCGFEATURE with a destroyed device
  random: add new ioctl RNDRESEEDCRNG
  random: crng_reseed() should lock the crng instance that it is modifying
  random: fix crng_ready() test
  ALSA: hda - New VIA controller suppor no-snoop path
  ALSA: rawmidi: Fix missing input substream checks in compat ioctls
  ALSA: line6: Use correct endpoint type for midi output
  drm/radeon: Fix PCIe lane width calculation
  drm/rockchip: Clear all interrupts before requesting the IRQ
  drm/amdgpu: Fix PCIe lane width calculation
  drm/amdgpu: Fix always_valid bos multiple LRU insertions.
  drm/amdgpu: Add an ATPX quirk for hybrid laptop
  ext4: don't allow r/w mounts if metadata blocks overlap the superblock
  ALSA: pcm: Fix endless loop for XRUN recovery in OSS emulation
  ALSA: pcm: Fix mutex unbalance in OSS emulation ioctls
  ALSA: pcm: Return -EBUSY for OSS ioctls changing busy streams
  ALSA: pcm: Avoid potential races between OSS ioctls and read/write
  ALSA: pcm: Use ERESTARTSYS instead of EINTR in OSS emulation
  vfio/pci: Virtualize Maximum Read Request Size
  watchdog: f71808e_wdt: Fix WD_EN register read
  dt-bindings: clock: mediatek: add binding for fixed-factor clock axisel_d4
  thermal: imx: Fix race condition in imx_thermal_probe()
  pwm: rcar: Fix a condition to prevent mismatch value setting to duty
  clk: bcm2835: De-assert/assert PLL reset signal when appropriate
  clk: fix false-positive Wmaybe-uninitialized warning
  clk: mvebu: armada-38x: add support for missing clocks
  clk: mvebu: armada-38x: add support for 1866MHz variants
  mmc: jz4740: Fix race condition in IRQ mask update
  iommu/vt-d: Fix a potential memory leak
  um: Use POSIX ucontext_t instead of struct ucontext
  um: Compile with modern headers
  nfit, address-range-scrub: fix scrub in-progress reporting
  libnvdimm, namespace: use a safe lookup for dimm device name
  dmaengine: at_xdmac: fix rare residue corruption
  IB/srp: Fix completion vector assignment algorithm
  IB/srp: Fix srp_abort()
  ALSA: pcm: Fix UAF at PCM release via PCM timer access
  RDMA/rxe: Fix an out-of-bounds read
  RDMA/ucma: Don't allow setting RDMA_OPTION_IB_PATH without an RDMA device
  ext4: fail ext4_iget for root directory if unallocated
  ext4: protect i_disksize update by i_data_sem in direct write path
  ext4: don't update checksum of new initialized bitmaps
  jbd2: if the journal is aborted then don't allow update of the log tail
  random: use a tighter cap in credit_entropy_bits_safe()
  irqchip/gic: Take lock when updating irq type
  thunderbolt: Resume control channel after hibernation image is created
  ASoC: ssm2602: Replace reg_default_raw with reg_default
  HID: core: Fix size as type u32
  HID: Fix hid_report_len usage
  powerpc/powernv: Fix OPAL NVRAM driver OPAL_BUSY loops
  powerpc/powernv: define a standard delay for OPAL_BUSY type retry loops
  powerpc/64: Fix smp_wmb barrier definition use use lwsync consistently
  powerpc/powernv: Handle unknown OPAL errors in opal_nvram_write()
  HID: i2c-hid: fix size check and type usage
  smb3: Fix root directory when server returns inode number of zero
  usb: dwc3: pci: Properly cleanup resource
  USB:fix USB3 devices behind USB3 hubs not resuming at hibernate thaw
  USB: gadget: f_midi: fixing a possible double-free in f_midi
  ACPI / hotplug / PCI: Check presence of slot itself in get_slot_status()
  ACPI / video: Add quirk to force acpi-video backlight on Samsung 670Z5E
  regmap: Fix reversed bounds check in regmap_raw_write()
  xen-netfront: Fix hang on device removal
  spi: Fix scatterlist elements size in spi_map_buf
  ARM: dts: at91: sama5d4: fix pinctrl compatible string
  ARM: dts: exynos: Fix IOMMU support for GScaler devices on Exynos5250
  ARM: dts: at91: at91sam9g25: fix mux-mask pinctrl property
  usb: gadget: udc: core: update usb_ep_queue() documentation
  usb: musb: gadget: misplaced out of bounds check
  mm, slab: reschedule cache_reap() on the same CPU
  ipc/shm: fix use-after-free of shm file via remap_file_pages()
  resource: fix integer overflow at reallocation
  fs/reiserfs/journal.c: add missing resierfs_warning() arg
  ubi: Reject MLC NAND
  ubi: Fix error for write access
  ubi: fastmap: Don't flush fastmap work on detach
  ubifs: Check ubifs_wbuf_sync() return code
  tty: make n_tty_read() always abort if hangup is in progress
  f2fs: check cap_resource only for data blocks
  Revert "f2fs: introduce f2fs_set_page_dirty_nobuffer"
  f2fs: clear PageError on writepage
  BACKPORT: dm verity: add 'check_at_most_once' option to only validate hashes once
  f2fs: call unlock_new_inode() before d_instantiate()
  f2fs: refactor read path to allow multiple postprocessing steps
  fscrypt: allow synchronous bio decryption
  ANDROID: arm-smccc: fix clang build

Change-Id: I016ee22b2aecb696dcab53f636786c676102295c
Signed-off-by: Blagovest Kolenichev <bkolenichev@codeaurora.org>
@@ -109,6 +109,17 @@ fec_start <offset>
This is the offset, in <data_block_size> blocks, from the start of the
FEC device to the beginning of the encoding data.

check_at_most_once
Verify data blocks only the first time they are read from the data device,
rather than every time. This reduces the overhead of dm-verity so that it
can be used on systems that are memory and/or CPU constrained. However, it
provides a reduced level of security because only offline tampering of the
data device's content will be detected, not online tampering.

Hash blocks are still verified each time they are read from the hash device,
since verification of hash blocks is less performance critical than data
blocks, and a hash block will not be verified any more after all the data
blocks it covers have been verified anyway.

Theory of operation
===================
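For illustration, a minimal sketch of a verity table line that enables the
new option (modelled on the document's existing example style; the device
paths, sizes, digest, and salt below are placeholders, not values taken from
this patch). The optional-argument count ("1") and the option name follow the
mandatory parameters:

    dmsetup create vroot --readonly --table \
      "0 2097152 verity 1 /dev/sda1 /dev/sda2 4096 4096 262144 1 sha256 "\
      "4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076 "\
      "1234000000000000000000000000000000000000000000000000000000000000 "\
      "1 check_at_most_once"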
Makefile
@@ -1,6 +1,6 @@
VERSION = 4
PATCHLEVEL = 9
SUBLEVEL = 95
SUBLEVEL = 96
EXTRAVERSION =
NAME = Roaring Lionus

@@ -21,7 +21,7 @@
atmel,mux-mask = <
/* A B C */
0xffffffff 0xffe0399f 0xc000001c /* pioA */
0x0007ffff 0x8000fe3f 0x00000000 /* pioB */
0x0007ffff 0x00047e3f 0x00000000 /* pioB */
0x80000000 0x07c0ffff 0xb83fffff /* pioC */
0x003fffff 0x003f8000 0x00000000 /* pioD */
>;

@@ -640,7 +640,7 @@
power-domains = <&pd_gsc>;
clocks = <&clock CLK_GSCL0>;
clock-names = "gscl";
iommu = <&sysmmu_gsc0>;
iommus = <&sysmmu_gsc0>;
};

gsc_1: gsc@13e10000 {
@@ -650,7 +650,7 @@
power-domains = <&pd_gsc>;
clocks = <&clock CLK_GSCL1>;
clock-names = "gscl";
iommu = <&sysmmu_gsc1>;
iommus = <&sysmmu_gsc1>;
};

gsc_2: gsc@13e20000 {
@@ -660,7 +660,7 @@
power-domains = <&pd_gsc>;
clocks = <&clock CLK_GSCL2>;
clock-names = "gscl";
iommu = <&sysmmu_gsc2>;
iommus = <&sysmmu_gsc2>;
};

gsc_3: gsc@13e30000 {
@@ -670,7 +670,7 @@
power-domains = <&pd_gsc>;
clocks = <&clock CLK_GSCL3>;
clock-names = "gscl";
iommu = <&sysmmu_gsc3>;
iommus = <&sysmmu_gsc3>;
};

hdmi: hdmi@14530000 {

@@ -1362,7 +1362,7 @@
pinctrl@fc06a000 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "atmel,at91sam9x5-pinctrl", "atmel,at91rm9200-pinctrl", "simple-bus";
compatible = "atmel,sama5d3-pinctrl", "atmel,at91sam9x5-pinctrl", "simple-bus";
ranges = <0xfc068000 0xfc068000 0x100
0xfc06a000 0xfc06a000 0x4000>;
/* WARNING: revisit as pin spec has changed */

@@ -1257,6 +1257,13 @@ __clear_user(void __user *addr, __kernel_size_t size)
{
__kernel_size_t res;

#ifdef CONFIG_CPU_MICROMIPS
/* micromips memset / bzero also clobbers t7 & t8 */
#define bzero_clobbers "$4", "$5", "$6", __UA_t0, __UA_t1, "$15", "$24", "$31"
#else
#define bzero_clobbers "$4", "$5", "$6", __UA_t0, __UA_t1, "$31"
#endif /* CONFIG_CPU_MICROMIPS */

if (eva_kernel_access()) {
__asm__ __volatile__(
"move\t$4, %1\n\t"
@@ -1266,7 +1273,7 @@ __clear_user(void __user *addr, __kernel_size_t size)
"move\t%0, $6"
: "=r" (res)
: "r" (addr), "r" (size)
: "$4", "$5", "$6", __UA_t0, __UA_t1, "$31");
: bzero_clobbers);
} else {
might_fault();
__asm__ __volatile__(
@@ -1277,7 +1284,7 @@ __clear_user(void __user *addr, __kernel_size_t size)
"move\t%0, $6"
: "=r" (res)
: "r" (addr), "r" (size)
: "$4", "$5", "$6", __UA_t0, __UA_t1, "$31");
: bzero_clobbers);
}

return res;

@@ -218,7 +218,7 @@
1: PTR_ADDIU a0, 1 /* fill bytewise */
R10KCBARRIER(0(ra))
bne t1, a0, 1b
sb a1, -1(a0)
EX(sb, a1, -1(a0), .Lsmall_fixup\@)

2: jr ra /* done */
move a2, zero
@@ -251,13 +251,18 @@
PTR_L t0, TI_TASK($28)
andi a2, STORMASK
LONG_L t0, THREAD_BUADDR(t0)
LONG_ADDU a2, t1
LONG_ADDU a2, a0
jr ra
LONG_SUBU a2, t0

.Llast_fixup\@:
jr ra
andi v1, a2, STORMASK
nop

.Lsmall_fixup\@:
PTR_SUBU a2, t1, a0
jr ra
PTR_ADDIU a2, 1

.endm

@@ -34,7 +34,8 @@
#define rmb() __asm__ __volatile__ ("sync" : : : "memory")
#define wmb() __asm__ __volatile__ ("sync" : : : "memory")

#ifdef __SUBARCH_HAS_LWSYNC
/* The sub-arch has lwsync */
#if defined(__powerpc64__) || defined(CONFIG_PPC_E500MC)
# define SMPWMB LWSYNC
#else
# define SMPWMB eieio

@@ -21,6 +21,9 @@
/* We calculate number of sg entries based on PAGE_SIZE */
#define SG_ENTRIES_PER_NODE ((PAGE_SIZE - 16) / sizeof(struct opal_sg_entry))

/* Default time to sleep or delay between OPAL_BUSY/OPAL_BUSY_EVENT loops */
#define OPAL_BUSY_DELAY_MS 10

/* /sys/firmware/opal */
extern struct kobject *opal_kobj;

@@ -5,10 +5,6 @@
#include <linux/stringify.h>
#include <asm/feature-fixups.h>

#if defined(__powerpc64__) || defined(CONFIG_PPC_E500MC)
#define __SUBARCH_HAS_LWSYNC
#endif

#ifndef __ASSEMBLY__
extern unsigned int __start___lwsync_fixup, __stop___lwsync_fixup;
extern void do_lwsync_fixups(unsigned long value, void *fixup_start,

@@ -795,7 +795,8 @@ static void eeh_restore_bridge_bars(struct eeh_dev *edev)
eeh_ops->write_config(pdn, 15*4, 4, edev->config_space[15]);

/* PCI Command: 0x4 */
eeh_ops->write_config(pdn, PCI_COMMAND, 4, edev->config_space[1]);
eeh_ops->write_config(pdn, PCI_COMMAND, 4, edev->config_space[1] |
PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);

/* Check the PCIe link is ready */
eeh_bridge_check_link(edev);

@@ -55,7 +55,7 @@ static int patch_alt_instruction(unsigned int *src, unsigned int *dest,
unsigned int *target = (unsigned int *)branch_target(src);

/* Branch within the section doesn't need translating */
if (target < alt_start || target >= alt_end) {
if (target < alt_start || target > alt_end) {
instr = translate_branch(dest, src);
if (!instr)
return 1;

@@ -11,6 +11,7 @@

#define DEBUG

#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/of.h>
@@ -56,9 +57,17 @@ static ssize_t opal_nvram_write(char *buf, size_t count, loff_t *index)

while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
rc = opal_write_nvram(__pa(buf), count, off);
if (rc == OPAL_BUSY_EVENT)
if (rc == OPAL_BUSY_EVENT) {
msleep(OPAL_BUSY_DELAY_MS);
opal_poll_events(NULL);
} else if (rc == OPAL_BUSY) {
msleep(OPAL_BUSY_DELAY_MS);
}
}

if (rc)
return -EIO;

*index += count;
return count;
}

@@ -318,7 +318,7 @@ static void hypfs_kill_super(struct super_block *sb)

if (sb->s_root)
hypfs_delete_tree(sb->s_root);
if (sb_info->update_file)
if (sb_info && sb_info->update_file)
hypfs_remove(sb_info->update_file);
kfree(sb->s_fs_info);
sb->s_fs_info = NULL;

@@ -12,6 +12,7 @@
#include <sys/mount.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <sys/un.h>
#include <sys/types.h>
#include <os.h>

@@ -16,6 +16,7 @@
#include <os.h>
#include <sysdep/mcontext.h>
#include <um_malloc.h>
#include <sys/ucontext.h>

void (*sig_info[NSIG])(int, struct siginfo *, struct uml_pt_regs *) = {
[SIGTRAP] = relay_signal,
@@ -159,7 +160,7 @@ static void (*handlers[_NSIG])(int sig, struct siginfo *si, mcontext_t *mc) = {

static void hard_handler(int sig, siginfo_t *si, void *p)
{
struct ucontext *uc = p;
ucontext_t *uc = p;
mcontext_t *mc = &uc->uc_mcontext;
unsigned long pending = 1UL << sig;

@@ -6,11 +6,12 @@
#include <sysdep/stub.h>
#include <sysdep/faultinfo.h>
#include <sysdep/mcontext.h>
#include <sys/ucontext.h>

void __attribute__ ((__section__ (".__syscall_stub")))
stub_segv_handler(int sig, siginfo_t *info, void *p)
{
struct ucontext *uc = p;
ucontext_t *uc = p;

GET_FAULTINFO_FROM_MC(*((struct faultinfo *) STUB_DATA),
&uc->uc_mcontext);

@@ -2019,15 +2019,15 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,

blk_mq_init_cpu_queues(q, set->nr_hw_queues);

mutex_lock(&all_q_mutex);
get_online_cpus();
mutex_lock(&all_q_mutex);

list_add_tail(&q->all_q_node, &all_q_list);
blk_mq_add_queue_tag_set(set, q);
blk_mq_map_swqueue(q, cpu_online_mask);

put_online_cpus();
mutex_unlock(&all_q_mutex);
put_online_cpus();

return q;

@@ -967,8 +967,11 @@ static ssize_t scrub_show(struct device *dev,
if (nd_desc) {
struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);

mutex_lock(&acpi_desc->init_mutex);
rc = sprintf(buf, "%d%s", acpi_desc->scrub_count,
(work_busy(&acpi_desc->work)) ? "+\n" : "\n");
work_busy(&acpi_desc->work)
&& !acpi_desc->cancel ? "+\n" : "\n");
mutex_unlock(&acpi_desc->init_mutex);
}
device_unlock(dev);
return rc;

@@ -213,6 +213,15 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
"3570R/370R/470R/450R/510R/4450RV"),
},
},
{
/* https://bugzilla.redhat.com/show_bug.cgi?id=1557060 */
.callback = video_detect_force_video,
.ident = "SAMSUNG 670Z5E",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
DMI_MATCH(DMI_PRODUCT_NAME, "670Z5E"),
},
},
{
/* https://bugzilla.redhat.com/show_bug.cgi?id=1094948 */
.callback = video_detect_force_video,

@@ -1736,7 +1736,7 @@ int regmap_raw_write(struct regmap *map, unsigned int reg,
return -EINVAL;
if (val_len % map->format.val_bytes)
return -EINVAL;
if (map->max_raw_write && map->max_raw_write > val_len)
if (map->max_raw_write && map->max_raw_write < val_len)
return -E2BIG;

map->lock(map->lock_arg);

@@ -434,8 +434,9 @@ struct crng_state primary_crng = {
* its value (from 0->1->2).
*/
static int crng_init = 0;
#define crng_ready() (likely(crng_init > 0))
#define crng_ready() (likely(crng_init > 1))
static int crng_init_cnt = 0;
static unsigned long crng_global_init_time = 0;
#define CRNG_INIT_CNT_THRESH (2*CHACHA20_KEY_SIZE)
static void _extract_crng(struct crng_state *crng,
__u8 out[CHACHA20_BLOCK_SIZE]);
@@ -741,7 +742,7 @@ retry:

static int credit_entropy_bits_safe(struct entropy_store *r, int nbits)
{
const int nbits_max = (int)(~0U >> (ENTROPY_SHIFT + 1));
const int nbits_max = r->poolinfo->poolwords * 32;

if (nbits < 0)
return -EINVAL;
@@ -800,7 +801,7 @@ static int crng_fast_load(const char *cp, size_t len)

if (!spin_trylock_irqsave(&primary_crng.lock, flags))
return 0;
if (crng_ready()) {
if (crng_init != 0) {
spin_unlock_irqrestore(&primary_crng.lock, flags);
return 0;
}
@@ -836,7 +837,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
_crng_backtrack_protect(&primary_crng, buf.block,
CHACHA20_KEY_SIZE);
}
spin_lock_irqsave(&primary_crng.lock, flags);
spin_lock_irqsave(&crng->lock, flags);
for (i = 0; i < 8; i++) {
unsigned long rv;
if (!arch_get_random_seed_long(&rv) &&
@@ -852,7 +853,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
wake_up_interruptible(&crng_init_wait);
pr_notice("random: crng init done\n");
}
spin_unlock_irqrestore(&primary_crng.lock, flags);
spin_unlock_irqrestore(&crng->lock, flags);
}

static inline void maybe_reseed_primary_crng(void)
@@ -872,8 +873,9 @@ static void _extract_crng(struct crng_state *crng,
{
unsigned long v, flags;

if (crng_init > 1 &&
time_after(jiffies, crng->init_time + CRNG_RESEED_INTERVAL))
if (crng_ready() &&
(time_after(crng_global_init_time, crng->init_time) ||
time_after(jiffies, crng->init_time + CRNG_RESEED_INTERVAL)))
crng_reseed(crng, crng == &primary_crng ? &input_pool : NULL);
spin_lock_irqsave(&crng->lock, flags);
if (arch_get_random_long(&v))
@@ -1153,7 +1155,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
fast_mix(fast_pool);
add_interrupt_bench(cycles);

if (!crng_ready()) {
if (unlikely(crng_init == 0)) {
if ((fast_pool->count >= 64) &&
crng_fast_load((char *) fast_pool->pool,
sizeof(fast_pool->pool))) {
@@ -1668,6 +1670,7 @@ static int rand_initialize(void)
init_std_data(&input_pool);
init_std_data(&blocking_pool);
crng_initialize(&primary_crng);
crng_global_init_time = jiffies;

#ifdef CONFIG_NUMA
pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL|__GFP_NOFAIL);
@@ -1854,6 +1857,14 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
input_pool.entropy_count = 0;
blocking_pool.entropy_count = 0;
return 0;
case RNDRESEEDCRNG:
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
if (crng_init < 2)
return -ENODATA;
crng_reseed(&primary_crng, NULL);
crng_global_init_time = jiffies - 1;
return 0;
default:
return -EINVAL;
}
@@ -2147,7 +2158,7 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
{
struct entropy_store *poolp = &input_pool;

if (!crng_ready()) {
if (unlikely(crng_init == 0)) {
crng_fast_load(buffer, count);
return;
}

@@ -545,9 +545,7 @@ static void bcm2835_pll_off(struct clk_hw *hw)
const struct bcm2835_pll_data *data = pll->data;

spin_lock(&cprman->regs_lock);
cprman_write(cprman, data->cm_ctrl_reg,
cprman_read(cprman, data->cm_ctrl_reg) |
CM_PLL_ANARST);
cprman_write(cprman, data->cm_ctrl_reg, CM_PLL_ANARST);
cprman_write(cprman, data->a2w_ctrl_reg,
cprman_read(cprman, data->a2w_ctrl_reg) |
A2W_PLL_CTRL_PWRDN);
@@ -583,6 +581,10 @@ static int bcm2835_pll_on(struct clk_hw *hw)
cpu_relax();
}

cprman_write(cprman, data->a2w_ctrl_reg,
cprman_read(cprman, data->a2w_ctrl_reg) |
A2W_PLL_CTRL_PRST_DISABLE);

return 0;
}

@@ -46,10 +46,11 @@ static u32 __init armada_38x_get_tclk_freq(void __iomem *sar)
}

static const u32 armada_38x_cpu_frequencies[] __initconst = {
0, 0, 0, 0,
1066 * 1000 * 1000, 0, 0, 0,
666 * 1000 * 1000, 0, 800 * 1000 * 1000, 0,
1066 * 1000 * 1000, 0, 1200 * 1000 * 1000, 0,
1332 * 1000 * 1000, 0, 0, 0,
1600 * 1000 * 1000,
1600 * 1000 * 1000, 0, 0, 0,
1866 * 1000 * 1000, 0, 0, 2000 * 1000 * 1000,
};

static u32 __init armada_38x_get_cpu_freq(void __iomem *sar)
@@ -75,11 +76,11 @@ static const struct coreclk_ratio armada_38x_coreclk_ratios[] __initconst = {
};

static const int armada_38x_cpu_l2_ratios[32][2] __initconst = {
{0, 1}, {0, 1}, {0, 1}, {0, 1},
{1, 2}, {0, 1}, {1, 2}, {0, 1},
{1, 2}, {0, 1}, {1, 2}, {0, 1},
{1, 2}, {0, 1}, {0, 1}, {0, 1},
{1, 2}, {0, 1}, {0, 1}, {0, 1},
{1, 2}, {0, 1}, {0, 1}, {0, 1},
{0, 1}, {0, 1}, {0, 1}, {0, 1},
{1, 2}, {0, 1}, {0, 1}, {1, 2},
{0, 1}, {0, 1}, {0, 1}, {0, 1},
{0, 1}, {0, 1}, {0, 1}, {0, 1},
{0, 1}, {0, 1}, {0, 1}, {0, 1},
@@ -90,7 +91,7 @@ static const int armada_38x_cpu_ddr_ratios[32][2] __initconst = {
{1, 2}, {0, 1}, {0, 1}, {0, 1},
{1, 2}, {0, 1}, {0, 1}, {0, 1},
{1, 2}, {0, 1}, {0, 1}, {0, 1},
{0, 1}, {0, 1}, {0, 1}, {0, 1},
{1, 2}, {0, 1}, {0, 1}, {7, 15},
{0, 1}, {0, 1}, {0, 1}, {0, 1},
{0, 1}, {0, 1}, {0, 1}, {0, 1},
{0, 1}, {0, 1}, {0, 1}, {0, 1},

@@ -46,7 +46,7 @@ struct div4_clk {
unsigned int shift;
};

static struct div4_clk div4_clks[] = {
static const struct div4_clk div4_clks[] = {
{ "zg", "pll0", CPG_FRQCRA, 16 },
{ "m3", "pll1", CPG_FRQCRA, 12 },
{ "b", "pll1", CPG_FRQCRA, 8 },
@@ -79,7 +79,7 @@ sh73a0_cpg_register_clock(struct device_node *np, struct sh73a0_cpg *cpg,
{
const struct clk_div_table *table = NULL;
unsigned int shift, reg, width;
const char *parent_name;
const char *parent_name = NULL;
unsigned int mult = 1;
unsigned int div = 1;

@@ -135,7 +135,7 @@ sh73a0_cpg_register_clock(struct device_node *np, struct sh73a0_cpg *cpg,
shift = 24;
width = 5;
} else {
struct div4_clk *c;
const struct div4_clk *c;

for (c = div4_clks; c->name; c++) {
if (!strcmp(name, c->name)) {

@@ -1473,10 +1473,10 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
for (retry = 0; retry < AT_XDMAC_RESIDUE_MAX_RETRIES; retry++) {
check_nda = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA) & 0xfffffffc;
rmb();
initd = !!(at_xdmac_chan_read(atchan, AT_XDMAC_CC) & AT_XDMAC_CC_INITD);
rmb();
cur_ubc = at_xdmac_chan_read(atchan, AT_XDMAC_CUBC);
rmb();
initd = !!(at_xdmac_chan_read(atchan, AT_XDMAC_CC) & AT_XDMAC_CC_INITD);
rmb();
cur_nda = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA) & 0xfffffffc;
rmb();

@@ -569,6 +569,7 @@ static const struct amdgpu_px_quirk amdgpu_px_quirk_list[] = {
{ 0x1002, 0x6900, 0x1002, 0x0124, AMDGPU_PX_QUIRK_FORCE_ATPX },
{ 0x1002, 0x6900, 0x1028, 0x0812, AMDGPU_PX_QUIRK_FORCE_ATPX },
{ 0x1002, 0x6900, 0x1028, 0x0813, AMDGPU_PX_QUIRK_FORCE_ATPX },
{ 0x1002, 0x67DF, 0x1028, 0x0774, AMDGPU_PX_QUIRK_FORCE_ATPX },
{ 0, 0, 0, 0, 0 },
};

@@ -201,8 +201,10 @@ void amdgpu_bo_list_get_list(struct amdgpu_bo_list *list,
for (i = 0; i < list->num_entries; i++) {
unsigned priority = list->array[i].priority;

list_add_tail(&list->array[i].tv.head,
&bucket[priority]);
if (!list->array[i].robj->parent)
list_add_tail(&list->array[i].tv.head,
&bucket[priority]);

list->array[i].user_pages = NULL;
}

@@ -519,7 +519,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
INIT_LIST_HEAD(&duplicates);
amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd);

if (p->uf_entry.robj)
if (p->uf_entry.robj && !p->uf_entry.robj->parent)
list_add(&p->uf_entry.tv.head, &p->validated);

if (need_mmap_lock)

@@ -6449,9 +6449,9 @@ static void si_set_pcie_lane_width_in_smc(struct amdgpu_device *adev,
{
u32 lane_width;
u32 new_lane_width =
(amdgpu_new_state->caps & ATOM_PPLIB_PCIE_LINK_WIDTH_MASK) >> ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT;
((amdgpu_new_state->caps & ATOM_PPLIB_PCIE_LINK_WIDTH_MASK) >> ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT) + 1;
u32 current_lane_width =
(amdgpu_current_state->caps & ATOM_PPLIB_PCIE_LINK_WIDTH_MASK) >> ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT;
((amdgpu_current_state->caps & ATOM_PPLIB_PCIE_LINK_WIDTH_MASK) >> ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT) + 1;

if (new_lane_width != current_lane_width) {
amdgpu_set_pcie_lanes(adev, new_lane_width);

@@ -5969,9 +5969,9 @@ static void si_set_pcie_lane_width_in_smc(struct radeon_device *rdev,
{
u32 lane_width;
u32 new_lane_width =
(radeon_new_state->caps & ATOM_PPLIB_PCIE_LINK_WIDTH_MASK) >> ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT;
((radeon_new_state->caps & ATOM_PPLIB_PCIE_LINK_WIDTH_MASK) >> ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT) + 1;
u32 current_lane_width =
(radeon_current_state->caps & ATOM_PPLIB_PCIE_LINK_WIDTH_MASK) >> ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT;
((radeon_current_state->caps & ATOM_PPLIB_PCIE_LINK_WIDTH_MASK) >> ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT) + 1;

if (new_lane_width != current_lane_width) {
radeon_set_pcie_lanes(rdev, new_lane_width);

@@ -1386,6 +1386,9 @@ static int vop_initial(struct vop *vop)
usleep_range(10, 20);
reset_control_deassert(ahb_rst);

VOP_INTR_SET_TYPE(vop, clear, INTR_MASK, 1);
VOP_INTR_SET_TYPE(vop, enable, INTR_MASK, 0);

memcpy(vop->regsbak, vop->regs, vop->len);

for (i = 0; i < vop_data->table_size; i++)
@@ -1541,17 +1544,9 @@ static int vop_bind(struct device *dev, struct device *master, void *data)

mutex_init(&vop->vsync_mutex);

ret = devm_request_irq(dev, vop->irq, vop_isr,
IRQF_SHARED, dev_name(dev), vop);
if (ret)
return ret;

/* IRQ is initially disabled; it gets enabled in power_on */
disable_irq(vop->irq);

ret = vop_create_crtc(vop);
if (ret)
goto err_enable_irq;
return ret;

pm_runtime_enable(&pdev->dev);

@@ -1561,13 +1556,19 @@ static int vop_bind(struct device *dev, struct device *master, void *data)
goto err_disable_pm_runtime;
}

ret = devm_request_irq(dev, vop->irq, vop_isr,
IRQF_SHARED, dev_name(dev), vop);
if (ret)
goto err_disable_pm_runtime;

/* IRQ is initially disabled; it gets enabled in power_on */
disable_irq(vop->irq);

return 0;

err_disable_pm_runtime:
pm_runtime_disable(&pdev->dev);
vop_destroy_crtc(vop);
err_enable_irq:
enable_irq(vop->irq); /* To balance out the disable_irq above */
return ret;
}

@@ -1370,7 +1370,7 @@ u8 *hid_alloc_report_buf(struct hid_report *report, gfp_t flags)
* of implement() working on 8 byte chunks
*/

int len = hid_report_len(report) + 7;
u32 len = hid_report_len(report) + 7;

return kmalloc(len, flags);
}
@@ -1435,7 +1435,7 @@ void __hid_request(struct hid_device *hid, struct hid_report *report,
{
char *buf;
int ret;
int len;
u32 len;

buf = hid_alloc_report_buf(report, GFP_KERNEL);
if (!buf)
@@ -1461,14 +1461,14 @@ out:
}
EXPORT_SYMBOL_GPL(__hid_request);

int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, int size,
int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
int interrupt)
{
struct hid_report_enum *report_enum = hid->report_enum + type;
struct hid_report *report;
struct hid_driver *hdrv;
unsigned int a;
int rsize, csize = size;
u32 rsize, csize = size;
u8 *cdata = data;
int ret = 0;

@@ -1526,7 +1526,7 @@ EXPORT_SYMBOL_GPL(hid_report_raw_event);
*
* This is data entry for lower layers.
*/
int hid_input_report(struct hid_device *hid, int type, u8 *data, int size, int interrupt)
int hid_input_report(struct hid_device *hid, int type, u8 *data, u32 size, int interrupt)
{
struct hid_report_enum *report_enum;
struct hid_driver *hdrv;

@@ -1279,7 +1279,8 @@ static void hidinput_led_worker(struct work_struct *work)
led_work);
struct hid_field *field;
struct hid_report *report;
int len, ret;
int ret;
u32 len;
__u8 *buf;

field = hidinput_get_led_field(hid);

@@ -315,7 +315,8 @@ static struct attribute_group mt_attribute_group = {
static void mt_get_feature(struct hid_device *hdev, struct hid_report *report)
{
struct mt_device *td = hid_get_drvdata(hdev);
int ret, size = hid_report_len(report);
int ret;
u32 size = hid_report_len(report);
u8 *buf;

/*
@@ -919,7 +920,7 @@ static void mt_set_input_mode(struct hid_device *hdev)
struct hid_report_enum *re;
struct mt_class *cls = &td->mtclass;
char *buf;
int report_len;
u32 report_len;

if (td->inputmode < 0)
return;

@@ -110,8 +110,8 @@ struct rmi_data {
u8 *writeReport;
u8 *readReport;

int input_report_size;
int output_report_size;
u32 input_report_size;
u32 output_report_size;

unsigned long flags;

@@ -192,6 +192,11 @@ static ssize_t hidraw_get_report(struct file *file, char __user *buffer, size_t
int ret = 0, len;
unsigned char report_number;

if (!hidraw_table[minor] || !hidraw_table[minor]->exist) {
ret = -ENODEV;
goto out;
}

dev = hidraw_table[minor]->hid;

if (!dev->ll_driver->raw_request) {

@@ -142,10 +142,10 @@ struct i2c_hid {
* register of the HID
* descriptor. */
unsigned int bufsize; /* i2c buffer size */
char *inbuf; /* Input buffer */
char *rawbuf; /* Raw Input buffer */
char *cmdbuf; /* Command buffer */
char *argsbuf; /* Command arguments buffer */
u8 *inbuf; /* Input buffer */
u8 *rawbuf; /* Raw Input buffer */
u8 *cmdbuf; /* Command buffer */
u8 *argsbuf; /* Command arguments buffer */

unsigned long flags; /* device flags */
unsigned long quirks; /* Various quirks */
@@ -451,7 +451,8 @@ out_unlock:

static void i2c_hid_get_input(struct i2c_hid *ihid)
{
int ret, ret_size;
int ret;
u32 ret_size;
int size = le16_to_cpu(ihid->hdesc.wMaxInputLength);

if (size > ihid->bufsize)
@@ -476,7 +477,7 @@ static void i2c_hid_get_input(struct i2c_hid *ihid)
return;
}

if (ret_size > size) {
if ((ret_size > size) || (ret_size <= 2)) {
dev_err(&ihid->client->dev, "%s: incomplete report (%d/%d)\n",
__func__, size, ret_size);
return;

@@ -351,7 +351,7 @@ static int wacom_set_device_mode(struct hid_device *hdev,
u8 *rep_data;
struct hid_report *r;
struct hid_report_enum *re;
int length;
u32 length;
int error = -ENOMEM, limit = 0;

if (wacom_wac->mode_report < 0)

@@ -1231,6 +1231,9 @@ static int ucma_set_ib_path(struct ucma_context *ctx,
if (!optlen)
return -EINVAL;

if (!ctx->cm_id->device)
return -EINVAL;

memset(&sa_path, 0, sizeof(sa_path));

ib_sa_unpack_path(path_data->path_rec, &sa_path);

@@ -747,9 +747,8 @@ static int init_send_wqe(struct rxe_qp *qp, struct ib_send_wr *ibwr,
memcpy(wqe->dma.sge, ibwr->sg_list,
num_sge * sizeof(struct ib_sge));

wqe->iova = (mask & WR_ATOMIC_MASK) ?
atomic_wr(ibwr)->remote_addr :
rdma_wr(ibwr)->remote_addr;
wqe->iova = mask & WR_ATOMIC_MASK ? atomic_wr(ibwr)->remote_addr :
mask & WR_READ_OR_WRITE_MASK ? rdma_wr(ibwr)->remote_addr : 0;
wqe->mask = mask;
wqe->dma.length = length;
wqe->dma.resid = length;

@@ -2626,9 +2626,11 @@ static int srp_abort(struct scsi_cmnd *scmnd)
ret = FAST_IO_FAIL;
else
ret = FAILED;
srp_free_req(ch, req, scmnd, 0);
scmnd->result = DID_ABORT << 16;
scmnd->scsi_done(scmnd);
if (ret == SUCCESS) {
srp_free_req(ch, req, scmnd, 0);
scmnd->result = DID_ABORT << 16;
scmnd->scsi_done(scmnd);
}

return ret;
}
@@ -3395,12 +3397,10 @@ static ssize_t srp_create_target(struct device *dev,
num_online_nodes());
const int ch_end = ((node_idx + 1) * target->ch_count /
num_online_nodes());
const int cv_start = (node_idx * ibdev->num_comp_vectors /
num_online_nodes() + target->comp_vector)
% ibdev->num_comp_vectors;
const int cv_end = ((node_idx + 1) * ibdev->num_comp_vectors /
num_online_nodes() + target->comp_vector)
% ibdev->num_comp_vectors;
const int cv_start = node_idx * ibdev->num_comp_vectors /
num_online_nodes();
const int cv_end = (node_idx + 1) * ibdev->num_comp_vectors /
num_online_nodes();
int cpu_idx = 0;

for_each_online_cpu(cpu) {

@@ -389,6 +389,7 @@ int intel_svm_bind_mm(struct device *dev, int *pasid, int flags, struct svm_dev_
pasid_max - 1, GFP_KERNEL);
if (ret < 0) {
kfree(svm);
kfree(sdev);
goto out;
}
svm->pasid = ret;

@@ -21,6 +21,8 @@

#include "irq-gic-common.h"

static DEFINE_RAW_SPINLOCK(irq_controller_lock);

static const struct gic_kvm_info *gic_kvm_info;

const struct gic_kvm_info *gic_get_kvm_info(void)
@@ -52,11 +54,13 @@ int gic_configure_irq(unsigned int irq, unsigned int type,
u32 confoff = (irq / 16) * 4;
u32 val, oldval;
int ret = 0;
unsigned long flags;

/*
* Read current configuration register, and insert the config
* for "irq", depending on "type".
*/
raw_spin_lock_irqsave(&irq_controller_lock, flags);
val = oldval = readl_relaxed(base + GIC_DIST_CONFIG + confoff);
if (type & IRQ_TYPE_LEVEL_MASK)
val &= ~confmask;
@@ -64,8 +68,10 @@ int gic_configure_irq(unsigned int irq, unsigned int type,
val |= confmask;

/* If the current configuration is the same, then we are done */
if (val == oldval)
if (val == oldval) {
raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
return 0;
}

/*
* Write back the new configuration, and possibly re-enable
@@ -83,6 +89,7 @@ int gic_configure_irq(unsigned int irq, unsigned int type,
pr_warn("GIC: PPI%d is secure or misconfigured\n",
irq - 16);
}
raw_spin_unlock_irqrestore(&irq_controller_lock, flags);

if (sync_access)
sync_access();

@@ -19,6 +19,7 @@

#include <linux/module.h>
#include <linux/reboot.h>
#include <linux/vmalloc.h>

#define DM_MSG_PREFIX "verity"

@@ -32,6 +33,7 @@
#define DM_VERITY_OPT_LOGGING "ignore_corruption"
#define DM_VERITY_OPT_RESTART "restart_on_corruption"
#define DM_VERITY_OPT_IGN_ZEROES "ignore_zero_blocks"
#define DM_VERITY_OPT_AT_MOST_ONCE "check_at_most_once"

#define DM_VERITY_OPTS_MAX (2 + DM_VERITY_OPTS_FEC)

@@ -394,6 +396,18 @@ static int verity_bv_zero(struct dm_verity *v, struct dm_verity_io *io,
return 0;
}

/*
* Moves the bio iter one data block forward.
*/
static inline void verity_bv_skip_block(struct dm_verity *v,
struct dm_verity_io *io,
struct bvec_iter *iter)
{
struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);

bio_advance_iter(bio, iter, 1 << v->data_dev_block_bits);
}

/*
* Verify one "dm_verity_io" structure.
*/
@@ -406,9 +420,16 @@ static int verity_verify_io(struct dm_verity_io *io)

for (b = 0; b < io->n_blocks; b++) {
int r;
sector_t cur_block = io->block + b;
struct shash_desc *desc = verity_io_hash_desc(v, io);

r = verity_hash_for_block(v, io, io->block + b,
if (v->validated_blocks &&
likely(test_bit(cur_block, v->validated_blocks))) {
verity_bv_skip_block(v, io, &io->iter);
continue;
}

r = verity_hash_for_block(v, io, cur_block,
verity_io_want_digest(v, io),
&is_zero);
if (unlikely(r < 0))
@@ -441,13 +462,16 @@ static int verity_verify_io(struct dm_verity_io *io)
return r;

if (likely(memcmp(verity_io_real_digest(v, io),
verity_io_want_digest(v, io), v->digest_size) == 0))
verity_io_want_digest(v, io), v->digest_size) == 0)) {
if (v->validated_blocks)
set_bit(cur_block, v->validated_blocks);
continue;
}
else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA,
io->block + b, NULL, &start) == 0)
cur_block, NULL, &start) == 0)
continue;
else if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA,
io->block + b))
cur_block))
return -EIO;
}

@@ -641,6 +665,8 @@ void verity_status(struct dm_target *ti, status_type_t type,
args += DM_VERITY_OPTS_FEC;
if (v->zero_digest)
args++;
if (v->validated_blocks)
args++;
if (!args)
return;
DMEMIT(" %u", args);
@@ -659,6 +685,8 @@ void verity_status(struct dm_target *ti, status_type_t type,
}
if (v->zero_digest)
DMEMIT(" " DM_VERITY_OPT_IGN_ZEROES);
if (v->validated_blocks)
DMEMIT(" " DM_VERITY_OPT_AT_MOST_ONCE);
sz = verity_fec_status_table(v, sz, result, maxlen);
break;
}
@@ -712,6 +740,7 @@ void verity_dtr(struct dm_target *ti)
if (v->bufio)
dm_bufio_client_destroy(v->bufio);

vfree(v->validated_blocks);
kfree(v->salt);
kfree(v->root_digest);
kfree(v->zero_digest);
@@ -733,6 +762,26 @@ void verity_dtr(struct dm_target *ti)
}
EXPORT_SYMBOL_GPL(verity_dtr);

static int verity_alloc_most_once(struct dm_verity *v)
{
struct dm_target *ti = v->ti;

/* the bitset can only handle INT_MAX blocks */
if (v->data_blocks > INT_MAX) {
ti->error = "device too large to use check_at_most_once";
return -E2BIG;
}

v->validated_blocks = vzalloc(BITS_TO_LONGS(v->data_blocks) *
sizeof(unsigned long));
if (!v->validated_blocks) {
ti->error = "failed to allocate bitset for check_at_most_once";
return -ENOMEM;
}

return 0;
}

static int verity_alloc_zero_digest(struct dm_verity *v)
{
int r = -ENOMEM;
@@ -802,6 +851,12 @@ static int verity_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v)
}
continue;

} else if (!strcasecmp(arg_name, DM_VERITY_OPT_AT_MOST_ONCE)) {
r = verity_alloc_most_once(v);
if (r)
return r;
continue;

} else if (verity_is_fec_opt_arg(arg_name)) {
r = verity_fec_parse_opt_args(as, v, &argc, arg_name);
if (r)
@@ -1070,7 +1125,7 @@ EXPORT_SYMBOL_GPL(verity_ctr);

static struct target_type verity_target = {
.name = "verity",
.version = {1, 3, 0},
.version = {1, 4, 0},
.module = THIS_MODULE,
.ctr = verity_ctr,
.dtr = verity_dtr,

@@ -63,6 +63,7 @@ struct dm_verity {
sector_t hash_level_block[DM_VERITY_MAX_LEVELS];

struct dm_verity_fec *fec; /* forward error correction */
unsigned long *validated_blocks; /* bitset blocks validated */
};

struct dm_verity_io {

@@ -368,9 +368,9 @@ static void jz4740_mmc_set_irq_enabled(struct jz4740_mmc_host *host,
host->irq_mask &= ~irq;
else
host->irq_mask |= irq;
spin_unlock_irqrestore(&host->lock, flags);

writew(host->irq_mask, host->base + JZ_REG_MMC_IMASK);
spin_unlock_irqrestore(&host->lock, flags);
}

static void jz4740_mmc_clock_enable(struct jz4740_mmc_host *host,

@@ -244,7 +244,7 @@ static int ubiblock_open(struct block_device *bdev, fmode_t mode)
* in any case.
*/
if (mode & FMODE_WRITE) {
ret = -EPERM;
ret = -EROFS;
goto out_unlock;
}

@@ -894,6 +894,17 @@ int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
return -EINVAL;
}

/*
* Both UBI and UBIFS have been designed for SLC NAND and NOR flashes.
* MLC NAND is different and needs special care, otherwise UBI or UBIFS
* will die soon and you will lose all your data.
*/
if (mtd->type == MTD_MLCNANDFLASH) {
pr_err("ubi: refuse attaching mtd%d - MLC NAND is not supported\n",
mtd->index);
return -EINVAL;
}

if (ubi_num == UBI_DEV_NUM_AUTO) {
/* Search for an empty slot in the @ubi_devices array */
for (ubi_num = 0; ubi_num < UBI_MAX_DEVICES; ubi_num++)

@@ -362,7 +362,6 @@ static void ubi_fastmap_close(struct ubi_device *ubi)
{
int i;

flush_work(&ubi->fm_work);
return_unused_pool_pebs(ubi, &ubi->fm_pool);
return_unused_pool_pebs(ubi, &ubi->fm_wl_pool);

@@ -2038,7 +2038,10 @@ static void netback_changed(struct xenbus_device *dev,
case XenbusStateInitialised:
case XenbusStateReconfiguring:
case XenbusStateReconfigured:
break;

case XenbusStateUnknown:
wake_up_all(&module_unload_q);
break;

case XenbusStateInitWait:
@@ -2169,7 +2172,9 @@ static int xennet_remove(struct xenbus_device *dev)
xenbus_switch_state(dev, XenbusStateClosing);
wait_event(module_unload_q,
xenbus_read_driver_state(dev->otherend) ==
XenbusStateClosing);
XenbusStateClosing ||
xenbus_read_driver_state(dev->otherend) ==
XenbusStateUnknown);

xenbus_switch_state(dev, XenbusStateClosed);
wait_event(module_unload_q,

@@ -1747,7 +1747,7 @@ struct device *create_namespace_pmem(struct nd_region *nd_region,
}

if (i < nd_region->ndr_mappings) {
struct nvdimm_drvdata *ndd = to_ndd(&nd_region->mapping[i]);
struct nvdimm *nvdimm = nd_region->mapping[i].nvdimm;

/*
* Give up if we don't find an instance of a uuid at each
@@ -1755,7 +1755,7 @@ struct device *create_namespace_pmem(struct nd_region *nd_region,
* find a dimm with two instances of the same uuid.
*/
dev_err(&nd_region->dev, "%s missing label for %pUb\n",
dev_name(ndd->dev), nd_label->uuid);
nvdimm_name(nvdimm), nd_label->uuid);
rc = -EINVAL;
goto err;
}

@@ -587,6 +587,7 @@ static unsigned int get_slot_status(struct acpiphp_slot *slot)
{
unsigned long long sta = 0;
struct acpiphp_func *func;
u32 dvid;

list_for_each_entry(func, &slot->funcs, sibling) {
if (func->flags & FUNC_HAS_STA) {
@@ -597,19 +598,27 @@ static unsigned int get_slot_status(struct acpiphp_slot *slot)
if (ACPI_SUCCESS(status) && sta)
break;
} else {
u32 dvid;

pci_bus_read_config_dword(slot->bus,
PCI_DEVFN(slot->device,
func->function),
PCI_VENDOR_ID, &dvid);
if (dvid != 0xffffffff) {
if (pci_bus_read_dev_vendor_id(slot->bus,
PCI_DEVFN(slot->device, func->function),
&dvid, 0)) {
sta = ACPI_STA_ALL;
break;
}
}
}

if (!sta) {
/*
* Check for the slot itself since it may be that the
* ACPI slot is a device below PCIe upstream port so in
* that case it may not even be reachable yet.
*/
if (pci_bus_read_dev_vendor_id(slot->bus,
PCI_DEVFN(slot->device, 0), &dvid, 0)) {
sta = ACPI_STA_ALL;
}
}

return (unsigned int)sta;
}

@@ -156,8 +156,12 @@ static int rcar_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
if (div < 0)
return div;

/* Let the core driver set pwm->period if disabled and duty_ns == 0 */
if (!pwm_is_enabled(pwm) && !duty_ns)
/*
* Let the core driver set pwm->period if disabled and duty_ns == 0.
* But, this driver should prevent to set the new duty_ns if current
* duty_cycle is not set
*/
if (!pwm_is_enabled(pwm) && !duty_ns && !pwm->state.duty_cycle)
return 0;

rcar_pwm_update(rp, RCAR_PWMCR_SYNC, RCAR_PWMCR_SYNC, RCAR_PWMCR);

@@ -743,8 +743,14 @@ static int spi_map_buf(struct spi_master *master, struct device *dev,
for (i = 0; i < sgs; i++) {

if (vmalloced_buf || kmap_buf) {
min = min_t(size_t,
len, desc_len - offset_in_page(buf));
/*
* Next scatterlist entry size is the minimum between
* the desc_len and the remaining buffer length that
* fits in a page.
*/
min = min_t(size_t, desc_len,
min_t(size_t, len,
PAGE_SIZE - offset_in_page(buf)));
if (vmalloced_buf)
vm_page = vmalloc_to_page(buf);
else

@@ -587,6 +587,9 @@ static int imx_thermal_probe(struct platform_device *pdev)
regmap_write(map, TEMPSENSE0 + REG_CLR, TEMPSENSE0_POWER_DOWN);
regmap_write(map, TEMPSENSE0 + REG_SET, TEMPSENSE0_MEASURE_TEMP);

data->irq_enabled = true;
data->mode = THERMAL_DEVICE_ENABLED;

ret = devm_request_threaded_irq(&pdev->dev, data->irq,
imx_thermal_alarm_irq, imx_thermal_alarm_irq_thread,
0, "imx_thermal", data);
@@ -598,9 +601,6 @@ static int imx_thermal_probe(struct platform_device *pdev)
return ret;
}

data->irq_enabled = true;
data->mode = THERMAL_DEVICE_ENABLED;

return 0;
}

@@ -628,6 +628,7 @@ static const struct dev_pm_ops nhi_pm_ops = {
* we just disable hotplug, the
* pci-tunnels stay alive.
*/
.thaw_noirq = nhi_resume_noirq,
.restore_noirq = nhi_resume_noirq,
};

@@ -2182,6 +2182,12 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
}
if (tty_hung_up_p(file))
break;
/*
* Abort readers for ttys which never actually
* get hung up. See __tty_hangup().
*/
if (test_bit(TTY_HUPPING, &tty->flags))
break;
if (!timeout)
break;
if (file->f_flags & O_NONBLOCK) {

@@ -709,6 +709,14 @@ static void __tty_hangup(struct tty_struct *tty, int exit_session)
return;
}

/*
* Some console devices aren't actually hung up for technical and
* historical reasons, which can lead to indefinite interruptible
* sleep in n_tty_read(). The following explicitly tells
* n_tty_read() to abort readers.
*/
set_bit(TTY_HUPPING, &tty->flags);

/* inuse_filps is protected by the single tty lock,
this really needs to change if we want to flush the
workqueue with the lock held */
@@ -763,6 +771,7 @@ static void __tty_hangup(struct tty_struct *tty, int exit_session)
* from the ldisc side, which is now guaranteed.
*/
set_bit(TTY_HUPPED, &tty->flags);
clear_bit(TTY_HUPPING, &tty->flags);
tty_unlock(tty);

if (f)

@@ -240,8 +240,13 @@ static int generic_suspend(struct usb_device *udev, pm_message_t msg)
if (!udev->parent)
rc = hcd_bus_suspend(udev, msg);

/* Non-root devices don't need to do anything for FREEZE or PRETHAW */
else if (msg.event == PM_EVENT_FREEZE || msg.event == PM_EVENT_PRETHAW)
/*
* Non-root USB2 devices don't need to do anything for FREEZE
* or PRETHAW. USB3 devices don't support global suspend and
* needs to be selectively suspended.
*/
else if ((msg.event == PM_EVENT_FREEZE || msg.event == PM_EVENT_PRETHAW)
&& (udev->speed < USB_SPEED_SUPER))
rc = 0;
else
rc = usb_port_suspend(udev, msg);

@@ -173,7 +173,7 @@ static int dwc3_pci_probe(struct pci_dev *pci,
ret = platform_device_add_resources(dwc3, res, ARRAY_SIZE(res));
if (ret) {
dev_err(dev, "couldn't add resources to dwc3 device\n");
return ret;
goto err;
}

dwc3->dev.parent = dev;

@@ -413,7 +413,8 @@ static int f_midi_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
if (err) {
ERROR(midi, "%s: couldn't enqueue request: %d\n",
midi->out_ep->name, err);
free_ep_req(midi->out_ep, req);
if (req->buf != NULL)
free_ep_req(midi->out_ep, req);
return err;
}
}

@@ -64,7 +64,9 @@ struct usb_request *alloc_ep_req(struct usb_ep *ep, size_t len);
/* Frees a usb_request previously allocated by alloc_ep_req() */
static inline void free_ep_req(struct usb_ep *ep, struct usb_request *req)
{
WARN_ON(req->buf == NULL);
kfree(req->buf);
req->buf = NULL;
usb_ep_free_request(ep, req);
}

@@ -248,6 +248,9 @@ EXPORT_SYMBOL_GPL(usb_ep_free_request);
* arranges to poll once per interval, and the gadget driver usually will
* have queued some data to transfer at that time.
*
* Note that @req's ->complete() callback must never be called from
* within usb_ep_queue() as that can create deadlock situations.
*
* Returns zero, or a negative error code. Endpoints that are not enabled
* report errors; errors will also be
* reported when the usb peripheral is disconnected.

@@ -114,15 +114,19 @@ static int service_tx_status_request(
}

is_in = epnum & USB_DIR_IN;
if (is_in) {
epnum &= 0x0f;
ep = &musb->endpoints[epnum].ep_in;
} else {
ep = &musb->endpoints[epnum].ep_out;
epnum &= 0x0f;
if (epnum >= MUSB_C_NUM_EPS) {
handled = -EINVAL;
break;
}

if (is_in)
ep = &musb->endpoints[epnum].ep_in;
else
ep = &musb->endpoints[epnum].ep_out;
regs = musb->endpoints[epnum].regs;

if (epnum >= MUSB_C_NUM_EPS || !ep->desc) {
if (!ep->desc) {
handled = -EINVAL;
break;
}

@@ -810,6 +810,7 @@ static int vfio_exp_config_write(struct vfio_pci_device *vdev, int pos,
{
__le16 *ctrl = (__le16 *)(vdev->vconfig + pos -
offset + PCI_EXP_DEVCTL);
int readrq = le16_to_cpu(*ctrl) & PCI_EXP_DEVCTL_READRQ;

count = vfio_default_config_write(vdev, pos, count, perm, offset, val);
if (count < 0)
@@ -835,6 +836,27 @@ static int vfio_exp_config_write(struct vfio_pci_device *vdev, int pos,
pci_try_reset_function(vdev->pdev);
}

/*
* MPS is virtualized to the user, writes do not change the physical
* register since determining a proper MPS value requires a system wide
* device view. The MRRS is largely independent of MPS, but since the
* user does not have that system-wide view, they might set a safe, but
* inefficiently low value. Here we allow writes through to hardware,
* but we set the floor to the physical device MPS setting, so that
* we can at least use full TLPs, as defined by the MPS value.
*
* NB, if any devices actually depend on an artificially low MRRS
* setting, this will need to be revisited, perhaps with a quirk
* though pcie_set_readrq().
*/
if (readrq != (le16_to_cpu(*ctrl) & PCI_EXP_DEVCTL_READRQ)) {
readrq = 128 <<
((le16_to_cpu(*ctrl) & PCI_EXP_DEVCTL_READRQ) >> 12);
readrq = max(readrq, pcie_get_mps(vdev->pdev));

pcie_set_readrq(vdev->pdev, readrq);
}

return count;
}

@@ -853,11 +875,12 @@ static int __init init_pci_cap_exp_perm(struct perm_bits *perm)
* Allow writes to device control fields, except devctl_phantom,
* which could confuse IOMMU, MPS, which can break communication
* with other physical devices, and the ARI bit in devctl2, which
* is set at probe time. FLR gets virtualized via our writefn.
* is set at probe time. FLR and MRRS get virtualized via our
* writefn.
*/
p_setw(perm, PCI_EXP_DEVCTL,
PCI_EXP_DEVCTL_BCR_FLR | PCI_EXP_DEVCTL_PAYLOAD,
~PCI_EXP_DEVCTL_PHANTOM);
PCI_EXP_DEVCTL_BCR_FLR | PCI_EXP_DEVCTL_PAYLOAD |
PCI_EXP_DEVCTL_READRQ, ~PCI_EXP_DEVCTL_PHANTOM);
p_setw(perm, PCI_EXP_DEVCTL2, NO_VIRT, ~PCI_EXP_DEVCTL2_ARI);
return 0;
}

@@ -496,7 +496,7 @@ static bool watchdog_is_running(void)

is_running = (superio_inb(watchdog.sioaddr, SIO_REG_ENABLE) & BIT(0))
&& (superio_inb(watchdog.sioaddr, F71808FG_REG_WDT_CONF)
& F71808FG_FLAG_WD_EN);
& BIT(F71808FG_FLAG_WD_EN));

superio_exit(watchdog.sioaddr);

@@ -746,7 +746,7 @@ static int autofs4_dir_mkdir(struct inode *dir,

autofs4_del_active(dentry);

inode = autofs4_get_inode(dir->i_sb, S_IFDIR | 0555);
inode = autofs4_get_inode(dir->i_sb, S_IFDIR | mode);
if (!inode)
return -ENOMEM;
d_add(dentry, inode);

@@ -1412,6 +1412,7 @@ struct dfs_info3_param {
#define CIFS_FATTR_NEED_REVAL 0x4
#define CIFS_FATTR_INO_COLLISION 0x8
#define CIFS_FATTR_UNKNOWN_NLINK 0x10
#define CIFS_FATTR_FAKE_ROOT_INO 0x20

struct cifs_fattr {
u32 cf_flags;

@@ -701,6 +701,18 @@ cgfi_exit:
|
||||
return rc;
|
||||
}
|
||||
|
||||
/* Simple function to return a 64 bit hash of string. Rarely called */
|
||||
static __u64 simple_hashstr(const char *str)
|
||||
{
|
||||
const __u64 hash_mult = 1125899906842597L; /* a big enough prime */
|
||||
__u64 hash = 0;
|
||||
|
||||
while (*str)
|
||||
hash = (hash + (__u64) *str++) * hash_mult;
|
||||
|
||||
return hash;
|
||||
}
|
||||
|
||||
int
|
||||
cifs_get_inode_info(struct inode **inode, const char *full_path,
|
||||
FILE_ALL_INFO *data, struct super_block *sb, int xid,
|
||||
@@ -810,6 +822,14 @@ cifs_get_inode_info(struct inode **inode, const char *full_path,
|
||||
tmprc);
|
||||
fattr.cf_uniqueid = iunique(sb, ROOT_I);
|
||||
cifs_autodisable_serverino(cifs_sb);
|
||||
} else if ((fattr.cf_uniqueid == 0) &&
|
||||
strlen(full_path) == 0) {
|
||||
/* some servers ret bad root ino ie 0 */
|
||||
cifs_dbg(FYI, "Invalid (0) inodenum\n");
|
||||
fattr.cf_flags |=
|
||||
CIFS_FATTR_FAKE_ROOT_INO;
|
||||
fattr.cf_uniqueid =
|
||||
simple_hashstr(tcon->treeName);
|
||||
}
|
||||
}
|
||||
} else
|
||||
@@ -826,6 +846,16 @@ cifs_get_inode_info(struct inode **inode, const char *full_path,
|
||||
&fattr.cf_uniqueid, data);
|
||||
if (tmprc)
|
||||
fattr.cf_uniqueid = CIFS_I(*inode)->uniqueid;
|
||||
else if ((fattr.cf_uniqueid == 0) &&
|
||||
strlen(full_path) == 0) {
|
||||
/*
|
||||
* Reuse existing root inode num since
|
||||
* inum zero for root causes ls of . and .. to
|
||||
* not be returned
|
||||
*/
|
||||
cifs_dbg(FYI, "Srv ret 0 inode num for root\n");
|
||||
fattr.cf_uniqueid = CIFS_I(*inode)->uniqueid;
|
||||
}
|
||||
} else
|
||||
fattr.cf_uniqueid = CIFS_I(*inode)->uniqueid;
|
||||
}
|
||||
@@ -887,6 +917,9 @@ cifs_get_inode_info(struct inode **inode, const char *full_path,
|
||||
}
|
||||
|
||||
cgii_exit:
|
||||
if ((*inode) && ((*inode)->i_ino == 0))
|
||||
cifs_dbg(FYI, "inode number of zero returned\n");
|
||||
|
||||
kfree(buf);
|
||||
cifs_put_tlink(tlink);
|
||||
return rc;
|
||||
|
||||
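The fake root inode number above comes from hashing the share's tree name, so a server that reports inode 0 for the root still yields a stable, repeatable id per share with no extra round trip. Illustrative use, with made-up share names (simple_hashstr is the helper from the hunk):

	/* distinct shares hash to distinct, repeatable 64-bit inode numbers */
	__u64 ino1 = simple_hashstr("\\\\srv\\docs");
	__u64 ino2 = simple_hashstr("\\\\srv\\media");
	/* each character folds in as: hash = (hash + c) * 1125899906842597 */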
@@ -25,15 +25,8 @@
#include <linux/namei.h>
#include "fscrypt_private.h"

/*
 * Call fscrypt_decrypt_page on every single page, reusing the encryption
 * context.
 */
static void completion_pages(struct work_struct *work)
static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
{
	struct fscrypt_ctx *ctx =
		container_of(work, struct fscrypt_ctx, r.work);
	struct bio *bio = ctx->r.bio;
	struct bio_vec *bv;
	int i;

@@ -45,22 +38,38 @@ static void completion_pages(struct work_struct *work)
		if (ret) {
			WARN_ON_ONCE(1);
			SetPageError(page);
		} else {
		} else if (done) {
			SetPageUptodate(page);
		}
		unlock_page(page);
		if (done)
			unlock_page(page);
	}
}

void fscrypt_decrypt_bio(struct bio *bio)
{
	__fscrypt_decrypt_bio(bio, false);
}
EXPORT_SYMBOL(fscrypt_decrypt_bio);

static void completion_pages(struct work_struct *work)
{
	struct fscrypt_ctx *ctx =
		container_of(work, struct fscrypt_ctx, r.work);
	struct bio *bio = ctx->r.bio;

	__fscrypt_decrypt_bio(bio, true);
	fscrypt_release_ctx(ctx);
	bio_put(bio);
}

void fscrypt_decrypt_bio_pages(struct fscrypt_ctx *ctx, struct bio *bio)
void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx, struct bio *bio)
{
	INIT_WORK(&ctx->r.work, completion_pages);
	ctx->r.bio = bio;
	queue_work(fscrypt_read_workqueue, &ctx->r.work);
	fscrypt_enqueue_decrypt_work(&ctx->r.work);
}
EXPORT_SYMBOL(fscrypt_decrypt_bio_pages);
EXPORT_SYMBOL(fscrypt_enqueue_decrypt_bio);

void fscrypt_pullback_bio_page(struct page **page, bool restore)
{
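The split above leaves two entry points: fscrypt_decrypt_bio() decrypts synchronously in the caller's context, while fscrypt_enqueue_decrypt_bio() punts the whole bio to the crypto workqueue and completes the pages there. A sketch of the caller pattern in a read completion handler (it mirrors the ext4 mpage_end_io() hunk later in this diff; error handling trimmed, my_read_end_io is illustrative):

	static void my_read_end_io(struct bio *bio)
	{
		if (bio->bi_error) {
			/* decryption never ran; just drop the ctx */
			fscrypt_release_ctx(bio->bi_private);
		} else {
			/* decrypt + unlock the pages from the fscrypt workqueue */
			fscrypt_enqueue_decrypt_bio(bio->bi_private, bio);
			return;
		}
		/* ... fail the pages and bio_put() here ... */
	}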
@@ -45,12 +45,18 @@ static mempool_t *fscrypt_bounce_page_pool = NULL;
static LIST_HEAD(fscrypt_free_ctxs);
static DEFINE_SPINLOCK(fscrypt_ctx_lock);

struct workqueue_struct *fscrypt_read_workqueue;
static struct workqueue_struct *fscrypt_read_workqueue;
static DEFINE_MUTEX(fscrypt_init_mutex);

static struct kmem_cache *fscrypt_ctx_cachep;
struct kmem_cache *fscrypt_info_cachep;

void fscrypt_enqueue_decrypt_work(struct work_struct *work)
{
	queue_work(fscrypt_read_workqueue, work);
}
EXPORT_SYMBOL(fscrypt_enqueue_decrypt_work);

/**
 * fscrypt_release_ctx() - Releases an encryption context
 * @ctx: The encryption context to release.

@@ -96,7 +96,6 @@ static inline bool fscrypt_valid_enc_modes(u32 contents_mode,
/* crypto.c */
extern struct kmem_cache *fscrypt_info_cachep;
extern int fscrypt_initialize(unsigned int cop_flags);
extern struct workqueue_struct *fscrypt_read_workqueue;
extern int fscrypt_do_page_crypto(const struct inode *inode,
				  fscrypt_direction_t rw, u64 lblk_num,
				  struct page *src_page,
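With fscrypt_read_workqueue now static, a filesystem that wants its own post-read step on that queue embeds a work_struct and hands it over through the new wrapper; the f2fs hunks later in this diff do exactly this. A condensed sketch of that pattern (my_post_read_ctx and my_decrypt_work are illustrative names):

	struct my_post_read_ctx {
		struct bio *bio;
		struct work_struct work;
	};

	static void my_decrypt_work(struct work_struct *work)
	{
		struct my_post_read_ctx *ctx =
			container_of(work, struct my_post_read_ctx, work);

		fscrypt_decrypt_bio(ctx->bio);	/* sync variant, process context */
	}

	/* from the bio end_io path: */
	INIT_WORK(&ctx->work, my_decrypt_work);
	fscrypt_enqueue_decrypt_work(&ctx->work);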
@@ -242,8 +242,6 @@ static int ext4_init_block_bitmap(struct super_block *sb,
	 */
	ext4_mark_bitmap_end(num_clusters_in_group(sb, block_group),
			     sb->s_blocksize * 8, bh->b_data);
	ext4_block_bitmap_csum_set(sb, block_group, gdp, bh);
	ext4_group_desc_csum_set(sb, block_group, gdp);
	return 0;
}

@@ -447,6 +445,7 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group)
		err = ext4_init_block_bitmap(sb, bh, block_group, desc);
		set_bitmap_uptodate(bh);
		set_buffer_uptodate(bh);
		set_buffer_verified(bh);
		ext4_unlock_group(sb, block_group);
		unlock_buffer(bh);
		if (err) {

@@ -63,44 +63,6 @@ void ext4_mark_bitmap_end(int start_bit, int end_bit, char *bitmap)
		memset(bitmap + (i >> 3), 0xff, (end_bit - i) >> 3);
}

/* Initializes an uninitialized inode bitmap */
static int ext4_init_inode_bitmap(struct super_block *sb,
				  struct buffer_head *bh,
				  ext4_group_t block_group,
				  struct ext4_group_desc *gdp)
{
	struct ext4_group_info *grp;
	struct ext4_sb_info *sbi = EXT4_SB(sb);
	J_ASSERT_BH(bh, buffer_locked(bh));

	/* If checksum is bad mark all blocks and inodes use to prevent
	 * allocation, essentially implementing a per-group read-only flag. */
	if (!ext4_group_desc_csum_verify(sb, block_group, gdp)) {
		grp = ext4_get_group_info(sb, block_group);
		if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp))
			percpu_counter_sub(&sbi->s_freeclusters_counter,
					   grp->bb_free);
		set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state);
		if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) {
			int count;
			count = ext4_free_inodes_count(sb, gdp);
			percpu_counter_sub(&sbi->s_freeinodes_counter,
					   count);
		}
		set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state);
		return -EFSBADCRC;
	}

	memset(bh->b_data, 0, (EXT4_INODES_PER_GROUP(sb) + 7) / 8);
	ext4_mark_bitmap_end(EXT4_INODES_PER_GROUP(sb), sb->s_blocksize * 8,
			     bh->b_data);
	ext4_inode_bitmap_csum_set(sb, block_group, gdp, bh,
				   EXT4_INODES_PER_GROUP(sb) / 8);
	ext4_group_desc_csum_set(sb, block_group, gdp);

	return 0;
}

void ext4_end_bitmap_read(struct buffer_head *bh, int uptodate)
{
	if (uptodate) {
@@ -184,17 +146,14 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)

	ext4_lock_group(sb, block_group);
	if (desc->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT)) {
		err = ext4_init_inode_bitmap(sb, bh, block_group, desc);
		memset(bh->b_data, 0, (EXT4_INODES_PER_GROUP(sb) + 7) / 8);
		ext4_mark_bitmap_end(EXT4_INODES_PER_GROUP(sb),
				     sb->s_blocksize * 8, bh->b_data);
		set_bitmap_uptodate(bh);
		set_buffer_uptodate(bh);
		set_buffer_verified(bh);
		ext4_unlock_group(sb, block_group);
		unlock_buffer(bh);
		if (err) {
			ext4_error(sb, "Failed to init inode bitmap for group "
				   "%u: %d", block_group, err);
			goto out;
		}
		return bh;
	}
	ext4_unlock_group(sb, block_group);
@@ -3421,7 +3421,6 @@ static ssize_t ext4_direct_IO_write(struct kiocb *iocb, struct iov_iter *iter)
{
	struct file *file = iocb->ki_filp;
	struct inode *inode = file->f_mapping->host;
	struct ext4_inode_info *ei = EXT4_I(inode);
	ssize_t ret;
	loff_t offset = iocb->ki_pos;
	size_t count = iov_iter_count(iter);
@@ -3445,7 +3444,7 @@ static ssize_t ext4_direct_IO_write(struct kiocb *iocb, struct iov_iter *iter)
			goto out;
		}
		orphan = 1;
		ei->i_disksize = inode->i_size;
		ext4_update_i_disksize(inode, inode->i_size);
		ext4_journal_stop(handle);
	}

@@ -3573,7 +3572,7 @@ static ssize_t ext4_direct_IO_write(struct kiocb *iocb, struct iov_iter *iter)
	if (ret > 0) {
		loff_t end = offset + ret;
		if (end > inode->i_size) {
			ei->i_disksize = end;
			ext4_update_i_disksize(inode, end);
			i_size_write(inode, end);
			/*
			 * We're going to return a positive `ret'
@@ -4561,6 +4560,12 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
		goto bad_inode;
	raw_inode = ext4_raw_inode(&iloc);

	if ((ino == EXT4_ROOT_INO) && (raw_inode->i_links_count == 0)) {
		EXT4_ERROR_INODE(inode, "root inode unallocated");
		ret = -EFSCORRUPTED;
		goto bad_inode;
	}

	if (EXT4_INODE_SIZE(inode->i_sb) > EXT4_GOOD_OLD_INODE_SIZE) {
		ei->i_extra_isize = le16_to_cpu(raw_inode->i_extra_isize);
		if (EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize >
@@ -91,7 +91,7 @@ static void mpage_end_io(struct bio *bio)
		if (bio->bi_error) {
			fscrypt_release_ctx(bio->bi_private);
		} else {
			fscrypt_decrypt_bio_pages(bio->bi_private, bio);
			fscrypt_enqueue_decrypt_bio(bio->bi_private, bio);
			return;
		}
	}
@@ -2270,6 +2270,8 @@ static int ext4_check_descriptors(struct super_block *sb,
			ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
				 "Block bitmap for group %u overlaps "
				 "superblock", i);
			if (!(sb->s_flags & MS_RDONLY))
				return 0;
		}
		if (block_bitmap < first_block || block_bitmap > last_block) {
			ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
@@ -2282,6 +2284,8 @@ static int ext4_check_descriptors(struct super_block *sb,
			ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
				 "Inode bitmap for group %u overlaps "
				 "superblock", i);
			if (!(sb->s_flags & MS_RDONLY))
				return 0;
		}
		if (inode_bitmap < first_block || inode_bitmap > last_block) {
			ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
@@ -2294,6 +2298,8 @@ static int ext4_check_descriptors(struct super_block *sb,
			ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: "
				 "Inode table for group %u overlaps "
				 "superblock", i);
			if (!(sb->s_flags & MS_RDONLY))
				return 0;
		}
		if (inode_table < first_block ||
		    inode_table + sbi->s_itb_per_group - 1 > last_block) {
@@ -385,7 +385,7 @@ static int f2fs_set_meta_page_dirty(struct page *page)
	if (!PageUptodate(page))
		SetPageUptodate(page);
	if (!PageDirty(page)) {
		f2fs_set_page_dirty_nobuffers(page);
		__set_page_dirty_nobuffers(page);
		inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_META);
		SetPagePrivate(page);
		f2fs_trace_pid(page);
fs/f2fs/data.c
@@ -19,8 +19,6 @@
#include <linux/bio.h>
#include <linux/prefetch.h>
#include <linux/uio.h>
#include <linux/mm.h>
#include <linux/memcontrol.h>
#include <linux/cleancache.h>

#include "f2fs.h"
@@ -30,6 +28,11 @@
#include <trace/events/f2fs.h>
#include <trace/events/android_fs.h>

#define NUM_PREALLOC_POST_READ_CTXS	128

static struct kmem_cache *bio_post_read_ctx_cache;
static mempool_t *bio_post_read_ctx_pool;

static bool __is_cp_guaranteed(struct page *page)
{
	struct address_space *mapping = page->mapping;
@@ -50,11 +53,77 @@ static bool __is_cp_guaranteed(struct page *page)
	return false;
}

static void f2fs_read_end_io(struct bio *bio)
/* postprocessing steps for read bios */
enum bio_post_read_step {
	STEP_INITIAL = 0,
	STEP_DECRYPT,
};

struct bio_post_read_ctx {
	struct bio *bio;
	struct work_struct work;
	unsigned int cur_step;
	unsigned int enabled_steps;
};

static void __read_end_io(struct bio *bio)
{
	struct bio_vec *bvec;
	struct page *page;
	struct bio_vec *bv;
	int i;

	bio_for_each_segment_all(bv, bio, i) {
		page = bv->bv_page;

		/* PG_error was set if any post_read step failed */
		if (bio->bi_error || PageError(page)) {
			ClearPageUptodate(page);
			SetPageError(page);
		} else {
			SetPageUptodate(page);
		}
		unlock_page(page);
	}
	if (bio->bi_private)
		mempool_free(bio->bi_private, bio_post_read_ctx_pool);
	bio_put(bio);
}

static void bio_post_read_processing(struct bio_post_read_ctx *ctx);

static void decrypt_work(struct work_struct *work)
{
	struct bio_post_read_ctx *ctx =
		container_of(work, struct bio_post_read_ctx, work);

	fscrypt_decrypt_bio(ctx->bio);

	bio_post_read_processing(ctx);
}

static void bio_post_read_processing(struct bio_post_read_ctx *ctx)
{
	switch (++ctx->cur_step) {
	case STEP_DECRYPT:
		if (ctx->enabled_steps & (1 << STEP_DECRYPT)) {
			INIT_WORK(&ctx->work, decrypt_work);
			fscrypt_enqueue_decrypt_work(&ctx->work);
			return;
		}
		ctx->cur_step++;
		/* fall-through */
	default:
		__read_end_io(ctx->bio);
	}
}

static bool f2fs_bio_post_read_required(struct bio *bio)
{
	return bio->bi_private && !bio->bi_error;
}
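Tracing the state machine above for an encrypted read (enabled_steps == 1 << STEP_DECRYPT):

	/* bio end_io, IRQ context */
	ctx->cur_step = STEP_INITIAL;
	bio_post_read_processing(ctx);	/* ++cur_step hits STEP_DECRYPT, queues decrypt_work */

	/* workqueue context, decrypt_work() */
	fscrypt_decrypt_bio(ctx->bio);
	bio_post_read_processing(ctx);	/* ++cur_step is past the last step: __read_end_io() */

For a bio with no enabled steps the first call falls straight through to __read_end_io(), which matches the unencrypted fast path.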
static void f2fs_read_end_io(struct bio *bio)
{
#ifdef CONFIG_F2FS_FAULT_INJECTION
	if (time_to_inject(F2FS_P_SB(bio->bi_io_vec->bv_page), FAULT_IO)) {
		f2fs_show_injection_info(FAULT_IO);
@@ -62,28 +131,15 @@ static void f2fs_read_end_io(struct bio *bio)
	}
#endif

	if (f2fs_bio_encrypted(bio)) {
		if (bio->bi_error) {
			fscrypt_release_ctx(bio->bi_private);
		} else {
			fscrypt_decrypt_bio_pages(bio->bi_private, bio);
			return;
		}
	if (f2fs_bio_post_read_required(bio)) {
		struct bio_post_read_ctx *ctx = bio->bi_private;

		ctx->cur_step = STEP_INITIAL;
		bio_post_read_processing(ctx);
		return;
	}

	bio_for_each_segment_all(bvec, bio, i) {
		struct page *page = bvec->bv_page;

		if (!bio->bi_error) {
			if (!PageUptodate(page))
				SetPageUptodate(page);
		} else {
			ClearPageUptodate(page);
			SetPageError(page);
		}
		unlock_page(page);
	}
	bio_put(bio);
	__read_end_io(bio);
}
static void f2fs_write_end_io(struct bio *bio)
@@ -480,29 +536,33 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
				      unsigned nr_pages)
{
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	struct fscrypt_ctx *ctx = NULL;
	struct bio *bio;
	struct bio_post_read_ctx *ctx;
	unsigned int post_read_steps = 0;

	if (f2fs_encrypted_file(inode)) {
		ctx = fscrypt_get_ctx(inode, GFP_NOFS);
		if (IS_ERR(ctx))
			return ERR_CAST(ctx);
	bio = f2fs_bio_alloc(sbi, min_t(int, nr_pages, BIO_MAX_PAGES), false);
	if (!bio)
		return ERR_PTR(-ENOMEM);
	f2fs_target_device(sbi, blkaddr, bio);
	bio->bi_end_io = f2fs_read_end_io;
	bio_set_op_attrs(bio, REQ_OP_READ, 0);

	if (f2fs_encrypted_file(inode))
		post_read_steps |= 1 << STEP_DECRYPT;
	if (post_read_steps) {
		ctx = mempool_alloc(bio_post_read_ctx_pool, GFP_NOFS);
		if (!ctx) {
			bio_put(bio);
			return ERR_PTR(-ENOMEM);
		}
		ctx->bio = bio;
		ctx->enabled_steps = post_read_steps;
		bio->bi_private = ctx;

		/* wait the page to be moved by cleaning */
		f2fs_wait_on_block_writeback(sbi, blkaddr);
	}

	bio = f2fs_bio_alloc(sbi, min_t(int, nr_pages, BIO_MAX_PAGES), false);
	if (!bio) {
		if (ctx)
			fscrypt_release_ctx(ctx);
		return ERR_PTR(-ENOMEM);
	}
	f2fs_target_device(sbi, blkaddr, bio);
	bio->bi_end_io = f2fs_read_end_io;
	bio->bi_private = ctx;
	bio_set_op_attrs(bio, REQ_OP_READ, 0);

	return bio;
}
@@ -1524,7 +1584,7 @@ static int encrypt_one_page(struct f2fs_io_info *fio)
	if (!f2fs_encrypted_file(inode))
		return 0;

	/* wait for GCed encrypted page writeback */
	/* wait for GCed page writeback via META_MAPPING */
	f2fs_wait_on_block_writeback(fio->sbi, fio->old_blkaddr);

retry_encrypt:
@@ -1674,6 +1734,7 @@ got_it:
		goto out_writepage;

	set_page_writeback(page);
	ClearPageError(page);
	f2fs_put_dnode(&dn);
	if (fio->need_lock == LOCK_REQ)
		f2fs_unlock_op(fio->sbi);
@@ -1696,6 +1757,7 @@ got_it:
		goto out_writepage;

	set_page_writeback(page);
	ClearPageError(page);

	/* LFS mode write path */
	write_data_page(&dn, fio);
@@ -2236,8 +2298,8 @@ repeat:

	f2fs_wait_on_page_writeback(page, DATA, false);

	/* wait for GCed encrypted page writeback */
	if (f2fs_encrypted_file(inode))
	/* wait for GCed page writeback via META_MAPPING */
	if (f2fs_post_read_required(inode))
		f2fs_wait_on_block_writeback(sbi, blkaddr);

	if (len == PAGE_SIZE || PageUptodate(page))
@@ -2450,35 +2512,6 @@ int f2fs_release_page(struct page *page, gfp_t wait)
	return 1;
}
/*
 * This was copied from __set_page_dirty_buffers which gives higher performance
 * in very high speed storages. (e.g., pmem)
 */
void f2fs_set_page_dirty_nobuffers(struct page *page)
{
	struct address_space *mapping = page->mapping;
	unsigned long flags;

	if (unlikely(!mapping))
		return;

	spin_lock(&mapping->private_lock);
	lock_page_memcg(page);
	SetPageDirty(page);
	spin_unlock(&mapping->private_lock);

	spin_lock_irqsave(&mapping->tree_lock, flags);
	WARN_ON_ONCE(!PageUptodate(page));
	account_page_dirtied(page, mapping);
	radix_tree_tag_set(&mapping->page_tree,
			page_index(page), PAGECACHE_TAG_DIRTY);
	spin_unlock_irqrestore(&mapping->tree_lock, flags);
	unlock_page_memcg(page);

	__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
	return;
}

static int f2fs_set_data_page_dirty(struct page *page)
{
	struct address_space *mapping = page->mapping;
@@ -2502,7 +2535,7 @@ static int f2fs_set_data_page_dirty(struct page *page)
	}

	if (!PageDirty(page)) {
		f2fs_set_page_dirty_nobuffers(page);
		__set_page_dirty_nobuffers(page);
		update_dirty_page(inode, page);
		return 1;
	}
@@ -2595,3 +2628,27 @@ const struct address_space_operations f2fs_dblock_aops = {
	.migratepage	= f2fs_migrate_page,
#endif
};
int __init f2fs_init_post_read_processing(void)
{
	bio_post_read_ctx_cache = KMEM_CACHE(bio_post_read_ctx, 0);
	if (!bio_post_read_ctx_cache)
		goto fail;
	bio_post_read_ctx_pool =
		mempool_create_slab_pool(NUM_PREALLOC_POST_READ_CTXS,
					 bio_post_read_ctx_cache);
	if (!bio_post_read_ctx_pool)
		goto fail_free_cache;
	return 0;

fail_free_cache:
	kmem_cache_destroy(bio_post_read_ctx_cache);
fail:
	return -ENOMEM;
}

void __exit f2fs_destroy_post_read_processing(void)
{
	mempool_destroy(bio_post_read_ctx_pool);
	kmem_cache_destroy(bio_post_read_ctx_cache);
}
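The mempool above is what lets the read path treat context allocation as nearly unfailable under memory pressure: mempool_alloc() tries the slab first and then falls back to one of the NUM_PREALLOC_POST_READ_CTXS preallocated objects, and mempool_free() refills that reserve before returning memory to the slab. Sketch of the round trip (illustrative only):

	struct bio_post_read_ctx *ctx;

	ctx = mempool_alloc(bio_post_read_ctx_pool, GFP_NOFS);	/* slab, else reserve */
	/* ... attach to bio->bi_private, run the post-read steps ... */
	mempool_free(ctx, bio_post_read_ctx_pool);	/* tops up the reserve first */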
@@ -1612,7 +1612,7 @@ static inline bool f2fs_has_xattr_block(unsigned int ofs)
}

static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
					struct inode *inode)
					struct inode *inode, bool cap)
{
	if (!inode)
		return true;
@@ -1625,7 +1625,7 @@ static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
	if (!gid_eq(F2FS_OPTION(sbi).s_resgid, GLOBAL_ROOT_GID) &&
			in_group_p(F2FS_OPTION(sbi).s_resgid))
		return true;
	if (capable(CAP_SYS_RESOURCE))
	if (cap && capable(CAP_SYS_RESOURCE))
		return true;
	return false;
}
@@ -1660,7 +1660,7 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
	avail_user_block_count = sbi->user_block_count -
					sbi->current_reserved_blocks;

	if (!__allow_reserved_blocks(sbi, inode))
	if (!__allow_reserved_blocks(sbi, inode, true))
		avail_user_block_count -= F2FS_OPTION(sbi).root_reserved_blocks;

	if (unlikely(sbi->total_valid_block_count > avail_user_block_count)) {
@@ -1867,7 +1867,7 @@ static inline int inc_valid_node_count(struct f2fs_sb_info *sbi,
	valid_block_count = sbi->total_valid_block_count +
					sbi->current_reserved_blocks + 1;

	if (!__allow_reserved_blocks(sbi, inode))
	if (!__allow_reserved_blocks(sbi, inode, false))
		valid_block_count += F2FS_OPTION(sbi).root_reserved_blocks;

	if (unlikely(valid_block_count > sbi->user_block_count)) {
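The new cap flag splits the reserve policy by allocation type: inc_valid_block_count() passes true, so CAP_SYS_RESOURCE still unlocks root_reserved_blocks for data blocks, while inc_valid_node_count() passes false, so node allocations by a merely capable process stay confined. Condensed from the two hunks above:

	/* data blocks: root, resuid/resgid, or CAP_SYS_RESOURCE may dip into the reserve */
	if (!__allow_reserved_blocks(sbi, inode, true))
		avail_user_block_count -= F2FS_OPTION(sbi).root_reserved_blocks;

	/* node blocks: CAP_SYS_RESOURCE alone no longer qualifies */
	if (!__allow_reserved_blocks(sbi, inode, false))
		valid_block_count += F2FS_OPTION(sbi).root_reserved_blocks;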
@@ -2886,6 +2886,8 @@ void destroy_checkpoint_caches(void);
/*
 * data.c
 */
int f2fs_init_post_read_processing(void);
void f2fs_destroy_post_read_processing(void);
void f2fs_submit_merged_write(struct f2fs_sb_info *sbi, enum page_type type);
void f2fs_submit_merged_write_cond(struct f2fs_sb_info *sbi,
			struct inode *inode, nid_t ino, pgoff_t idx,
@@ -2917,7 +2919,6 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
			u64 start, u64 len);
bool should_update_inplace(struct inode *inode, struct f2fs_io_info *fio);
bool should_update_outplace(struct inode *inode, struct f2fs_io_info *fio);
void f2fs_set_page_dirty_nobuffers(struct page *page);
int __f2fs_write_data_pages(struct address_space *mapping,
			struct writeback_control *wbc,
			enum iostat_type io_type);
@@ -3246,9 +3247,13 @@ static inline void f2fs_set_encrypted_inode(struct inode *inode)
#endif
}

static inline bool f2fs_bio_encrypted(struct bio *bio)
/*
 * Returns true if the reads of the inode's data need to undergo some
 * postprocessing step, like decryption or authenticity verification.
 */
static inline bool f2fs_post_read_required(struct inode *inode)
{
	return bio->bi_private != NULL;
	return f2fs_encrypted_file(inode);
}

#define F2FS_FEATURE_FUNCS(name, flagname) \
@@ -3316,7 +3321,7 @@ static inline bool f2fs_may_encrypt(struct inode *inode)

static inline bool f2fs_force_buffered_io(struct inode *inode, int rw)
{
	return (f2fs_encrypted_file(inode) ||
	return (f2fs_post_read_required(inode) ||
			(rw == WRITE && test_opt(F2FS_I_SB(inode), LFS)) ||
			F2FS_I_SB(inode)->s_ndevs);
}
@@ -112,8 +112,8 @@ mapped:
	/* fill the page */
	f2fs_wait_on_page_writeback(page, DATA, false);

	/* wait for GCed encrypted page writeback */
	if (f2fs_encrypted_file(inode))
	/* wait for GCed page writeback via META_MAPPING */
	if (f2fs_post_read_required(inode))
		f2fs_wait_on_block_writeback(sbi, dn.data_blkaddr);

out_sem:

@@ -850,8 +850,8 @@ next_step:
		if (IS_ERR(inode) || is_bad_inode(inode))
			continue;

		/* if encrypted inode, let's go phase 3 */
		if (f2fs_encrypted_file(inode)) {
		/* if inode uses special I/O path, let's go phase 3 */
		if (f2fs_post_read_required(inode)) {
			add_gc_inode(gc_list, inode);
			continue;
		}
@@ -899,7 +899,7 @@ next_step:

			start_bidx = start_bidx_of_node(nofs, inode)
								+ ofs_in_node;
			if (f2fs_encrypted_file(inode))
			if (f2fs_post_read_required(inode))
				move_data_block(inode, start_bidx, segno, off);
			else
				move_data_page(inode, start_bidx, gc_type,

@@ -26,7 +26,7 @@ bool f2fs_may_inline_data(struct inode *inode)
	if (i_size_read(inode) > MAX_INLINE_DATA(inode))
		return false;

	if (f2fs_encrypted_file(inode))
	if (f2fs_post_read_required(inode))
		return false;

	return true;

@@ -294,8 +294,8 @@ static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,

	alloc_nid_done(sbi, ino);

	d_instantiate(dentry, inode);
	unlock_new_inode(inode);
	d_instantiate(dentry, inode);

	if (IS_DIRSYNC(dir))
		f2fs_sync_fs(sbi->sb, 1);
@@ -597,8 +597,8 @@ static int f2fs_symlink(struct inode *dir, struct dentry *dentry,
	err = page_symlink(inode, disk_link.name, disk_link.len);

err_out:
	d_instantiate(dentry, inode);
	unlock_new_inode(inode);
	d_instantiate(dentry, inode);

	/*
	 * Let's flush symlink data in order to avoid broken symlink as much as
@@ -661,8 +661,8 @@ static int f2fs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)

	alloc_nid_done(sbi, inode->i_ino);

	d_instantiate(dentry, inode);
	unlock_new_inode(inode);
	d_instantiate(dentry, inode);

	if (IS_DIRSYNC(dir))
		f2fs_sync_fs(sbi->sb, 1);
@@ -713,8 +713,8 @@ static int f2fs_mknod(struct inode *dir, struct dentry *dentry,

	alloc_nid_done(sbi, inode->i_ino);

	d_instantiate(dentry, inode);
	unlock_new_inode(inode);
	d_instantiate(dentry, inode);

	if (IS_DIRSYNC(dir))
		f2fs_sync_fs(sbi->sb, 1);

@@ -1772,7 +1772,7 @@ static int f2fs_set_node_page_dirty(struct page *page)
	if (!PageUptodate(page))
		SetPageUptodate(page);
	if (!PageDirty(page)) {
		f2fs_set_page_dirty_nobuffers(page);
		__set_page_dirty_nobuffers(page);
		inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_NODES);
		SetPagePrivate(page);
		f2fs_trace_pid(page);

@@ -3096,8 +3096,13 @@ static int __init init_f2fs_fs(void)
	err = f2fs_create_root_stats();
	if (err)
		goto free_filesystem;
	err = f2fs_init_post_read_processing();
	if (err)
		goto free_root_stats;
	return 0;

free_root_stats:
	f2fs_destroy_root_stats();
free_filesystem:
	unregister_filesystem(&f2fs_fs_type);
free_shrinker:
@@ -3120,6 +3125,7 @@ fail:

static void __exit exit_f2fs_fs(void)
{
	f2fs_destroy_post_read_processing();
	f2fs_destroy_root_stats();
	unregister_filesystem(&f2fs_fs_type);
	unregister_shrinker(&f2fs_shrinker_info);
@@ -745,11 +745,12 @@ int inode_congested(struct inode *inode, int cong_bits)
	 */
	if (inode && inode_to_wb_is_valid(inode)) {
		struct bdi_writeback *wb;
		bool locked, congested;
		struct wb_lock_cookie lock_cookie = {};
		bool congested;

		wb = unlocked_inode_to_wb_begin(inode, &locked);
		wb = unlocked_inode_to_wb_begin(inode, &lock_cookie);
		congested = wb_congested(wb, cong_bits);
		unlocked_inode_to_wb_end(inode, locked);
		unlocked_inode_to_wb_end(inode, &lock_cookie);
		return congested;
	}
@@ -951,7 +951,7 @@ out:
}

/*
 * This is a variaon of __jbd2_update_log_tail which checks for validity of
 * This is a variation of __jbd2_update_log_tail which checks for validity of
 * provided log tail and locks j_checkpoint_mutex. So it is safe against races
 * with other threads updating log tail.
 */
@@ -1394,6 +1394,9 @@ int jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
	journal_superblock_t *sb = journal->j_superblock;
	int ret;

	if (is_journal_aborted(journal))
		return -EIO;

	BUG_ON(!mutex_is_locked(&journal->j_checkpoint_mutex));
	jbd_debug(1, "JBD2: updating superblock (start %lu, seq %u)\n",
		  tail_block, tail_tid);
@@ -342,7 +342,7 @@ static void jffs2_put_super (struct super_block *sb)
static void jffs2_kill_sb(struct super_block *sb)
{
	struct jffs2_sb_info *c = JFFS2_SB_INFO(sb);
	if (!(sb->s_flags & MS_RDONLY))
	if (c && !(sb->s_flags & MS_RDONLY))
		jffs2_stop_garbage_collect_thread(c);
	kill_mtd_super(sb);
	kfree(c);
@@ -1051,7 +1051,8 @@ static struct mount *clone_mnt(struct mount *old, struct dentry *root,
		goto out_free;
	}

	mnt->mnt.mnt_flags = old->mnt.mnt_flags & ~(MNT_WRITE_HOLD|MNT_MARKED);
	mnt->mnt.mnt_flags = old->mnt.mnt_flags;
	mnt->mnt.mnt_flags &= ~(MNT_WRITE_HOLD|MNT_MARKED|MNT_INTERNAL);
	/* Don't allow unprivileged users to change mount flags */
	if (flag & CL_UNPRIVILEGED) {
		mnt->mnt.mnt_flags |= MNT_LOCK_ATIME;
@@ -92,7 +92,7 @@ static bool fanotify_should_send_event(struct fsnotify_mark *inode_mark,
				       u32 event_mask,
				       void *data, int data_type)
{
	__u32 marks_mask, marks_ignored_mask;
	__u32 marks_mask = 0, marks_ignored_mask = 0;
	struct path *path = data;

	pr_debug("%s: inode_mark=%p vfsmnt_mark=%p mask=%x data=%p"
@@ -108,24 +108,20 @@ static bool fanotify_should_send_event(struct fsnotify_mark *inode_mark,
	    !d_can_lookup(path->dentry))
		return false;

	if (inode_mark && vfsmnt_mark) {
		marks_mask = (vfsmnt_mark->mask | inode_mark->mask);
		marks_ignored_mask = (vfsmnt_mark->ignored_mask | inode_mark->ignored_mask);
	} else if (inode_mark) {
		/*
		 * if the event is for a child and this inode doesn't care about
		 * events on the child, don't send it!
		 */
		if ((event_mask & FS_EVENT_ON_CHILD) &&
		    !(inode_mark->mask & FS_EVENT_ON_CHILD))
			return false;
		marks_mask = inode_mark->mask;
		marks_ignored_mask = inode_mark->ignored_mask;
	} else if (vfsmnt_mark) {
		marks_mask = vfsmnt_mark->mask;
		marks_ignored_mask = vfsmnt_mark->ignored_mask;
	} else {
		BUG();
	/*
	 * if the event is for a child and this inode doesn't care about
	 * events on the child, don't send it!
	 */
	if (inode_mark &&
	    (!(event_mask & FS_EVENT_ON_CHILD) ||
	     (inode_mark->mask & FS_EVENT_ON_CHILD))) {
		marks_mask |= inode_mark->mask;
		marks_ignored_mask |= inode_mark->ignored_mask;
	}

	if (vfsmnt_mark) {
		marks_mask |= vfsmnt_mark->mask;
		marks_ignored_mask |= vfsmnt_mark->ignored_mask;
	}

	if (d_is_dir(path->dentry) &&
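A case the rewrite above changes: an event on a child while both an inode mark (on the parent, without FS_EVENT_ON_CHILD) and a vfsmount mark are present. Assuming the function goes on to test event_mask against marks_mask with marks_ignored_mask subtracted, a worked example:

	/* event_mask  = FAN_OPEN | FS_EVENT_ON_CHILD
	 * inode mark  = FAN_OPEN            (did not opt into child events)
	 * vfsmnt mark = 0
	 *
	 * old: the combined branch ORed both masks, so FAN_OPEN was reported anyway
	 * new: the inode mark is skipped, marks_mask stays 0, event not reported
	 */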
@@ -559,6 +559,11 @@ void orangefs_kill_sb(struct super_block *sb)
	/* provided sb cleanup */
	kill_anon_super(sb);

	if (!ORANGEFS_SB(sb)) {
		mutex_lock(&orangefs_request_mutex);
		mutex_unlock(&orangefs_request_mutex);
		return;
	}
	/*
	 * issue the unmount to userspace to tell it to remove the
	 * dynamic mount info it has for this superblock
@@ -2640,7 +2640,7 @@ static int journal_init_dev(struct super_block *super,
	if (IS_ERR(journal->j_dev_bd)) {
		result = PTR_ERR(journal->j_dev_bd);
		journal->j_dev_bd = NULL;
		reiserfs_warning(super,
		reiserfs_warning(super, "sh-457",
				 "journal_init_dev: Cannot open '%s': %i",
				 jdev_name, result);
		return result;
@@ -1728,8 +1728,11 @@ static void ubifs_remount_ro(struct ubifs_info *c)

	dbg_save_space_info(c);

	for (i = 0; i < c->jhead_cnt; i++)
		ubifs_wbuf_sync(&c->jheads[i].wbuf);
	for (i = 0; i < c->jhead_cnt; i++) {
		err = ubifs_wbuf_sync(&c->jheads[i].wbuf);
		if (err)
			ubifs_ro_mode(c, err);
	}

	c->mst_node->flags &= ~cpu_to_le32(UBIFS_MST_DIRTY);
	c->mst_node->flags |= cpu_to_le32(UBIFS_MST_NO_ORPHS);
@@ -1795,8 +1798,11 @@ static void ubifs_put_super(struct super_block *sb)
	int err;

	/* Synchronize write-buffers */
	for (i = 0; i < c->jhead_cnt; i++)
		ubifs_wbuf_sync(&c->jheads[i].wbuf);
	for (i = 0; i < c->jhead_cnt; i++) {
		err = ubifs_wbuf_sync(&c->jheads[i].wbuf);
		if (err)
			ubifs_ro_mode(c, err);
	}

	/*
	 * We are being cleanly unmounted which means the
@@ -28,6 +28,9 @@

#include "udf_sb.h"

#define SURROGATE_MASK 0xfffff800
#define SURROGATE_PAIR 0x0000d800

static int udf_uni2char_utf8(wchar_t uni,
			     unsigned char *out,
			     int boundlen)
@@ -37,6 +40,9 @@ static int udf_uni2char_utf8(wchar_t uni,
	if (boundlen <= 0)
		return -ENAMETOOLONG;

	if ((uni & SURROGATE_MASK) == SURROGATE_PAIR)
		return -EINVAL;

	if (uni < 0x80) {
		out[u_len++] = (unsigned char)uni;
	} else if (uni < 0x800) {
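The new check rejects UTF-16 surrogate code points, which are not valid characters on their own: every value in 0xD800-0xDFFF satisfies (uni & 0xFFFFF800) == 0xD800 and now fails with -EINVAL instead of leaking into the UTF-8 output. Worked values:

	/* 0xD83D & 0xFFFFF800 == 0xD800 -> surrogate, -EINVAL            */
	/* 0x00E9 & 0xFFFFF800 == 0x0000 -> U+00E9, encoded as two bytes  */
	/* 0xE000 & 0xFFFFF800 == 0xE000 -> not a surrogate, encoded      */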
@@ -176,7 +176,8 @@
#define CLK_TOP_AUD_EXT1		156
#define CLK_TOP_AUD_EXT2		157
#define CLK_TOP_NFI1X_PAD		158
#define CLK_TOP_NR			159
#define CLK_TOP_AXISEL_D4		159
#define CLK_TOP_NR			160

/* APMIXEDSYS */
@@ -155,6 +155,7 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,

#define SMCCC_SMC_INST	"smc #0"
#define SMCCC_HVC_INST	"hvc #0"
#define SMCCC_REG(n)	asm("x" # n)

#elif defined(CONFIG_ARM)
#include <asm/opcodes-sec.h>
@@ -162,6 +163,7 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,

#define SMCCC_SMC_INST	__SMC(0)
#define SMCCC_HVC_INST	__HVC(0)
#define SMCCC_REG(n)	asm("r" # n)

#endif

@@ -194,47 +196,47 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,

#define __declare_arg_0(a0, res) \
	struct arm_smccc_res *___res = res; \
	register u32 r0 asm("r0") = a0; \
	register unsigned long r1 asm("r1"); \
	register unsigned long r2 asm("r2"); \
	register unsigned long r3 asm("r3")
	register u32 r0 SMCCC_REG(0) = a0; \
	register unsigned long r1 SMCCC_REG(1); \
	register unsigned long r2 SMCCC_REG(2); \
	register unsigned long r3 SMCCC_REG(3)

#define __declare_arg_1(a0, a1, res) \
	struct arm_smccc_res *___res = res; \
	register u32 r0 asm("r0") = a0; \
	register typeof(a1) r1 asm("r1") = a1; \
	register unsigned long r2 asm("r2"); \
	register unsigned long r3 asm("r3")
	register u32 r0 SMCCC_REG(0) = a0; \
	register typeof(a1) r1 SMCCC_REG(1) = a1; \
	register unsigned long r2 SMCCC_REG(2); \
	register unsigned long r3 SMCCC_REG(3)

#define __declare_arg_2(a0, a1, a2, res) \
	struct arm_smccc_res *___res = res; \
	register u32 r0 asm("r0") = a0; \
	register typeof(a1) r1 asm("r1") = a1; \
	register typeof(a2) r2 asm("r2") = a2; \
	register unsigned long r3 asm("r3")
	register u32 r0 SMCCC_REG(0) = a0; \
	register typeof(a1) r1 SMCCC_REG(1) = a1; \
	register typeof(a2) r2 SMCCC_REG(2) = a2; \
	register unsigned long r3 SMCCC_REG(3)

#define __declare_arg_3(a0, a1, a2, a3, res) \
	struct arm_smccc_res *___res = res; \
	register u32 r0 asm("r0") = a0; \
	register typeof(a1) r1 asm("r1") = a1; \
	register typeof(a2) r2 asm("r2") = a2; \
	register typeof(a3) r3 asm("r3") = a3
	register u32 r0 SMCCC_REG(0) = a0; \
	register typeof(a1) r1 SMCCC_REG(1) = a1; \
	register typeof(a2) r2 SMCCC_REG(2) = a2; \
	register typeof(a3) r3 SMCCC_REG(3) = a3

#define __declare_arg_4(a0, a1, a2, a3, a4, res) \
	__declare_arg_3(a0, a1, a2, a3, res); \
	register typeof(a4) r4 asm("r4") = a4
	register typeof(a4) r4 SMCCC_REG(4) = a4

#define __declare_arg_5(a0, a1, a2, a3, a4, a5, res) \
	__declare_arg_4(a0, a1, a2, a3, a4, res); \
	register typeof(a5) r5 asm("r5") = a5
	register typeof(a5) r5 SMCCC_REG(5) = a5

#define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res) \
	__declare_arg_5(a0, a1, a2, a3, a4, a5, res); \
	register typeof(a6) r6 asm("r6") = a6
	register typeof(a6) r6 SMCCC_REG(6) = a6

#define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res) \
	__declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res); \
	register typeof(a7) r7 asm("r7") = a7
	register typeof(a7) r7 SMCCC_REG(7) = a7

#define ___declare_args(count, ...) __declare_arg_ ## count(__VA_ARGS__)
#define __declare_args(count, ...) ___declare_args(count, __VA_ARGS__)
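SMCCC_REG(n) parameterizes only the register-constraint string, so the single set of __declare_arg_* macros now expands correctly for both ISAs. After preprocessing, for example:

	register u32 r0 SMCCC_REG(0) = a0;
	/* arm64: register u32 r0 asm("x0") = a0; */
	/* arm:   register u32 r0 asm("r0") = a0; */

The C variable keeps the name r0 either way; only the hardware register it is pinned to differs.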
@@ -195,6 +195,11 @@ static inline void set_bdi_congested(struct backing_dev_info *bdi, int sync)
	set_wb_congested(bdi->wb.congested, sync);
}

struct wb_lock_cookie {
	bool locked;
	unsigned long flags;
};

#ifdef CONFIG_CGROUP_WRITEBACK

/**

@@ -374,7 +374,7 @@ static inline struct bdi_writeback *inode_to_wb(struct inode *inode)
/**
 * unlocked_inode_to_wb_begin - begin unlocked inode wb access transaction
 * @inode: target inode
 * @lockedp: temp bool output param, to be passed to the end function
 * @cookie: output param, to be passed to the end function
 *
 * The caller wants to access the wb associated with @inode but isn't
 * holding inode->i_lock, mapping->tree_lock or wb->list_lock. This
@@ -382,12 +382,12 @@ static inline struct bdi_writeback *inode_to_wb(struct inode *inode)
 * association doesn't change until the transaction is finished with
 * unlocked_inode_to_wb_end().
 *
 * The caller must call unlocked_inode_to_wb_end() with *@lockedp
 * afterwards and can't sleep during transaction. IRQ may or may not be
 * disabled on return.
 * The caller must call unlocked_inode_to_wb_end() with *@cookie afterwards and
 * can't sleep during the transaction. IRQs may or may not be disabled on
 * return.
 */
static inline struct bdi_writeback *
unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
{
	rcu_read_lock();

@@ -395,10 +395,10 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
	 * Paired with store_release in inode_switch_wb_work_fn() and
	 * ensures that we see the new wb if we see cleared I_WB_SWITCH.
	 */
	*lockedp = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
	cookie->locked = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;

	if (unlikely(*lockedp))
		spin_lock_irq(&inode->i_mapping->tree_lock);
	if (unlikely(cookie->locked))
		spin_lock_irqsave(&inode->i_mapping->tree_lock, cookie->flags);

	/*
	 * Protected by either !I_WB_SWITCH + rcu_read_lock() or tree_lock.
@@ -410,12 +410,13 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
/**
 * unlocked_inode_to_wb_end - end inode wb access transaction
 * @inode: target inode
 * @locked: *@lockedp from unlocked_inode_to_wb_begin()
 * @cookie: @cookie from unlocked_inode_to_wb_begin()
 */
static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
static inline void unlocked_inode_to_wb_end(struct inode *inode,
					    struct wb_lock_cookie *cookie)
{
	if (unlikely(locked))
		spin_unlock_irq(&inode->i_mapping->tree_lock);
	if (unlikely(cookie->locked))
		spin_unlock_irqrestore(&inode->i_mapping->tree_lock, cookie->flags);

	rcu_read_unlock();
}
@@ -462,12 +463,13 @@ static inline struct bdi_writeback *inode_to_wb(struct inode *inode)
}

static inline struct bdi_writeback *
unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
{
	return inode_to_wb(inode);
}

static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
static inline void unlocked_inode_to_wb_end(struct inode *inode,
					    struct wb_lock_cookie *cookie)
{
}
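Caller-side shape of the new API, matching the inode_congested() hunk earlier in this diff: the cookie is a small on-stack struct, and because the saved IRQ flags travel inside it, the begin/end pair no longer has to assume IRQs were enabled on entry:

	struct wb_lock_cookie cookie = {};
	struct bdi_writeback *wb;

	wb = unlocked_inode_to_wb_begin(inode, &cookie);
	/* ... use wb; no sleeping inside the transaction ... */
	unlocked_inode_to_wb_end(inode, &cookie);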