* refs/heads/tmp-859e0a8:
Linux 4.9.105
Revert "vti4: Don't override MTU passed on link creation via IFLA_MTU"
Linux 4.9.104
kdb: make "mdr" command repeat
pinctrl: msm: Use dynamic GPIO numbering
regulator: of: Add a missing 'of_node_put()' in an error handling path of 'of_regulator_match()'
ARM: dts: porter: Fix HDMI output routing
ARM: dts: imx7d: cl-som-imx7: fix pinctrl_enet
regmap: Correct comparison in regmap_cached
netlabel: If PF_INET6, check sk_buff ip header version
selftests/net: fixes psock_fanout eBPF test case
perf report: Fix memory corruption in --branch-history mode
perf tests: Use arch__compare_symbol_names to compare symbols
x86/apic: Set up through-local-APIC mode on the boot CPU if 'noapic' specified
drm/rockchip: Respect page offset for PRIME mmap calls
MIPS: Octeon: Fix logging messages with spurious periods after newlines
pinctrl: sh-pfc: r8a7796: Fix MOD_SEL register pin assignment for SSI pins group
rcu: Call touch_nmi_watchdog() while printing stall warnings
audit: return on memory error to avoid null pointer dereference
ARM: dts: bcm283x: Fix probing of bcm2835-i2s
udf: Provide saner default for invalid uid / gid
PCI: Add function 1 DMA alias quirk for Marvell 88SE9220
cpufreq: Reorder cpufreq_online() error code path
net: stmmac: ensure that the MSS desc is the last desc to set the own bit
net: stmmac: ensure that the device has released ownership before reading data
dmaengine: qcom: bam_dma: get num-channels and num-ees from dt
hwrng: stm32 - add reset during probe
enic: enable rq before updating rq descriptors
dmaengine: rcar-dmac: Check the done lists in rcar_dmac_chan_get_residue()
dmaengine: pl330: fix a race condition in case of threaded irqs
ALSA: vmaster: Propagate slave error
x86/devicetree: Fix device IRQ settings in DT
x86/devicetree: Initialize device tree before using it
gfs2: Fix fallocate chunk size
soc: qcom: wcnss_ctrl: Fix increment in NV upload
arm64: dts: qcom: Fix SPI5 config on MSM8996
perf/x86/intel: Fix event update for auto-reload
perf/x86/intel: Fix large period handling on Broadwell CPUs
cdrom: do not call check_disk_change() inside cdrom_open()
perf/x86/intel: Properly save/restore the PMU state in the NMI handler
hwmon: (pmbus/adm1275) Accept negative page register values
hwmon: (pmbus/max8688) Accept negative page register values
drm/panel: simple: Fix the bus format for the Ontat panel
perf/core: Fix perf_output_read_group()
f2fs: fix to check extent cache in f2fs_drop_extent_tree
powerpc: Add missing prototype for arch_irq_work_raise()
ipmi_ssif: Fix kernel panic at msg_done_handler
PCI: Restore config space on runtime resume despite being unbound
MIPS: ath79: Fix AR724X_PLL_REG_PCIE_CONFIG offset
spi: bcm-qspi: fix some error handling paths
regulator: gpio: Fix some error handling paths in 'gpio_regulator_probe()'
IB/core: Honor port_num while resolving GID for IB link layer
perf stat: Fix core dump when flag T is used
perf top: Fix top.call-graph config option reading
KVM: lapic: stop advertising DIRECTED_EOI when in-kernel IOAPIC is in use
i2c: mv64xxx: Apply errata delay only in standard mode
cxgb4: Fix queue free path of ULD drivers
ACPICA: acpi: acpica: fix acpi operand cache leak in nseval.c
ACPICA: Events: add a return on failure from acpi_hw_register_read
bcache: quit dc->writeback_thread when BCACHE_DEV_DETACHING is set
zorro: Set up z->dev.dma_mask for the DMA API
cpufreq: cppc_cpufreq: Fix cppc_cpufreq_init() failure path
arm: dts: socfpga: fix GIC PPI warning
virtio-net: Fix operstate for virtio when no VIRTIO_NET_F_STATUS
ima: Fallback to the builtin hash algorithm
cxgb4: Setup FW queues before registering netdev
ath10k: Fix kernel panic while using worker (ath10k_sta_rc_update_wk)
net/mlx5: Protect from command bit overflow
selftests: Print the test we're running to /dev/kmsg
tools/thermal: tmon: fix for segfault
powerpc/perf: Fix kernel address leak via sampling registers
powerpc/perf: Prevent kernel address leak to userspace via BHRB buffer
hwmon: (nct6775) Fix writing pwmX_mode
parisc/pci: Switch LBA PCI bus from Hard Fail to Soft Fail mode
m68k: set dma and coherent masks for platform FEC ethernets
powerpc/mpic: Check if cpu_possible() in mpic_physmask()
ACPI: acpi_pad: Fix memory leak in power saving threads
drivers: macintosh: rack-meter: really fix bogus memsets
xen/acpi: off by one in read_acpi_id()
rxrpc: Don't treat call aborts as conn aborts
rxrpc: Fix Tx ring annotation after initial Tx failure
btrfs: fix lockdep splat in btrfs_alloc_subvolume_writers
Btrfs: fix copy_items() return value when logging an inode
btrfs: tests/qgroup: Fix wrong tree backref level
net: bgmac: Fix endian access in bgmac_dma_tx_ring_free()
sparc64: Make atomic_xchg() an inline function rather than a macro.
fscache: Fix hanging wait on page discarded by writeback
KVM: VMX: raise internal error for exception during invalid protected mode state
sched/rt: Fix rq->clock_update_flags < RQCF_ACT_SKIP warning
ocfs2/dlm: don't handle migrate lockres if already in shutdown
btrfs: Fix possible softlock on single core machines
Btrfs: fix NULL pointer dereference in log_dir_items
Btrfs: bail out on error during replay_dir_deletes
mm: fix races between address_space dereference and free in page_evicatable
mm/ksm: fix interaction with THP
dp83640: Ensure against premature access to PHY registers after reset
cpufreq: CPPC: Initialize shared perf capabilities of CPUs
Force log to disk before reading the AGF during a fstrim
sr: get/drop reference to device in revalidate and check_events
swap: divide-by-zero when zero length swap file on ssd
fs/proc/proc_sysctl.c: fix potential page fault while unregistering sysctl table
x86/mm: Do not forbid _PAGE_RW before init for __ro_after_init
x86/pgtable: Don't set huge PUD/PMD on non-leaf entries
nvme: don't send keep-alives to the discovery controller
sh: fix debug trap failure to process signals before return to user
net: mvneta: fix enable of all initialized RXQs
net: Fix untag for vlan packets without ethernet header
mm/kmemleak.c: wait for scan completion before disabling free
builddeb: Fix header package regarding dtc source links
llc: properly handle dev_queue_xmit() return value
perf/x86/intel: Fix linear IP of PEBS real_ip on Haswell and later CPUs
net: qmi_wwan: add BroadMobi BM806U 2020:2033
ARM: 8748/1: mm: Define vdso_start, vdso_end as array
batman-adv: fix packet loss for broadcasted DHCP packets to a server
batman-adv: fix multicast-via-unicast transmission with AP isolation
selftests: ftrace: Add a testcase for probepoint
selftests: ftrace: Add a testcase for string type with kprobe_event
selftests: ftrace: Add probe event argument syntax testcase
mm, thp: do not cause memcg oom for thp
mm/mempolicy.c: avoid use uninitialized preferred_node
RDMA/qedr: Fix rc initialization on CNQ allocation failure
RDMA/qedr: fix QP's ack timeout configuration
RDMA/ucma: Correct option size check using optlen
kbuild: make scripts/adjust_autoksyms.sh robust against timestamp races
brcmfmac: Fix check for ISO3166 code
perf/cgroup: Fix child event counting bug
vti4: Don't override MTU passed on link creation via IFLA_MTU
vti4: Don't count header length twice on tunnel setup
batman-adv: Fix skbuff rcsum on packet reroute
batman-adv: fix header size check in batadv_dbg_arp()
net: Fix vlan untag for bridge and vlan_dev with reorder_hdr off
drm/imx: move arming of the vblank event to atomic_flush
sunvnet: does not support GSO for sctp
ipv4: lock mtu in fnhe when received PMTU < net.ipv4.route.min_pmtu
workqueue: use put_device() instead of kfree()
bnxt_en: Check valid VNIC ID in bnxt_hwrm_vnic_set_tpa().
netfilter: ebtables: fix erroneous reject of last rule
dmaengine: mv_xor_v2: Fix clock resource by adding a register clock
arm64: Relax ARM_SMCCC_ARCH_WORKAROUND_1 discovery
xen: xenbus: use put_device() instead of kfree()
IB/core: Fix possible crash to access NULL netdev
net: smsc911x: Fix unload crash when link is up
net: qcom/emac: Use proper free methods during TX
fsl/fman: avoid sleeping in atomic context while adding an address
fbdev: Fixing arbitrary kernel leak in case FBIOGETCMAP_SPARC in sbusfb_ioctl_helper().
IB/mlx5: Fix an error code in __mlx5_ib_modify_qp()
IB/mlx4: Include GID type when deleting GIDs from HW table under RoCE
IB/mlx4: Fix corruption of RoCEv2 IPv4 GIDs
RDMA/qedr: Fix iWARP write and send with immediate
RDMA/qedr: Fix kernel panic when running fio over NFSoRDMA
ia64/err-inject: Use get_user_pages_fast()
e1000e: allocate ring descriptors with dma_zalloc_coherent
e1000e: Fix check_for_link return value with autoneg off
batman-adv: Fix multicast packet loss with a single WANT_ALL_IPV4/6 flag
watchdog: sbsa: use 32-bit read for WCV
watchdog: f71808e_wdt: Fix magic close handling
iwlwifi: mvm: fix TX of CCMP 256
KVM: PPC: Book3S HV: Fix VRMA initialization with 2MB or 1GB memory backing
selftests/powerpc: Skip the subpage_prot tests if the syscall is unavailable
Btrfs: send, fix issuing write op when processing hole in no data mode
drm/sun4i: Fix dclk_set_phase
xen/pirq: fix error path cleanup when binding MSIs
nvmet: fix PSDT field check in command format
net/tcp/illinois: replace broken algorithm reference link
gianfar: Fix Rx byte accounting for ndev stats
powerpc/boot: Fix random libfdt related build errors
ARM: dts: NSP: Fix amount of RAM on BCM958625HR
sit: fix IFLA_MTU ignored on NEWLINK
ip6_tunnel: fix IFLA_MTU ignored on NEWLINK
bcache: fix kcrashes with fio in RAID5 backend dev
dmaengine: rcar-dmac: fix max_chunk_size for R-Car Gen3
virtio-gpu: fix ioctl and expose the fixed status to userspace.
r8152: fix tx packets accounting
qrtr: add MODULE_ALIAS macro to smd
ARM: orion5x: Revert commit 4904dbda41.
ceph: fix dentry leak when failing to init debugfs
clocksource/drivers/fsl_ftm_timer: Fix error return checking
nvme-pci: Fix nvme queue cleanup if IRQ setup fails
batman-adv: Fix netlink dumping of BLA backbones
batman-adv: Fix netlink dumping of BLA claims
batman-adv: Ignore invalid batadv_v_gw during netlink send
batman-adv: Ignore invalid batadv_iv_gw during netlink send
netfilter: ebtables: convert BUG_ONs to WARN_ONs
batman-adv: invalidate checksum on fragment reassembly
batman-adv: fix packet checksum in receive path
md/raid1: fix NULL pointer dereference
md: fix a potential deadlock of raid5/raid10 reshape
fs: dcache: Use READ_ONCE when accessing i_dir_seq
fs: dcache: Avoid livelock between d_alloc_parallel and __d_add
kvm: fix warning for CONFIG_HAVE_KVM_EVENTFD builds
macvlan: fix use-after-free in macvlan_common_newlink()
arm64: fix unwind_frame() for filtered out fn for function graph tracing
mac80211: drop frames with unexpected DS bits from fast-rx to slow path
x86/topology: Update the 'cpu cores' field in /proc/cpuinfo correctly across CPU hotplug operations
locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs
integrity/security: fix digsig.c build error with header file
regulatory: add NUL to request alpha2
smsc75xx: fix smsc75xx_set_features()
ARM: OMAP: Fix dmtimer init for omap1
PKCS#7: fix direct verification of SignerInfo signature
s390/cio: clear timer when terminating driver I/O
s390/cio: fix return code after missing interrupt
s390/cio: fix ccw_device_start_timeout API
powerpc/bpf/jit: Fix 32-bit JIT for seccomp_data access
kernel/relay.c: limit kmalloc size to KMALLOC_MAX_SIZE
md: raid5: avoid string overflow warning
locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()
drm/exynos: fix comparison to bitshift when dealing with a mask
drm/exynos: g2d: use monotonic timestamps
md raid10: fix NULL deference in handle_write_completed()
mac80211: Do not disconnect on invalid operating class
mac80211: fix calling sleeping function in atomic context
mac80211: fix a possible leak of station stats
mac80211: round IEEE80211_TX_STATUS_HEADROOM up to multiple of 4
rxrpc: Work around usercopy check
NFC: llcp: Limit size of SDP URI
iwlwifi: mvm: always init rs with 20mhz bandwidth rates
iwlwifi: mvm: fix security bug in PN checking
ibmvnic: Free RX socket buffer in case of adapter error
ARM: OMAP1: clock: Fix debugfs_create_*() usage
ARM: OMAP3: Fix prm wake interrupt for resume
ARM: OMAP2+: timer: fix a kmemleak caused in omap_get_timer_dt
selftests: memfd: add config fragment for fuse
selftests: pstore: Adding config fragment CONFIG_PSTORE_RAM=m
libata: Fix compile warning with ATA_DEBUG enabled
ptr_ring: prevent integer overflow when calculating size
ARC: Fix malformed ARC_EMUL_UNALIGNED default
irqchip/gic-v3: Change pr_debug message to pr_devel
cpumask: Make for_each_cpu_wrap() available on UP as well
irqchip/gic-v3: Ignore disabled ITS nodes
locking/qspinlock: Ensure node->count is updated before initialising node
vfs/proc/kcore, x86/mm/kcore: Fix SMAP fault when dumping vsyscall user page
bpf: fix rlimit in reuseport net selftest
tools/libbpf: handle issues with bpf ELF objects containing .eh_frames
bcache: return attach error when no cache set exist
bcache: fix for data collapse after re-attaching an attached device
bcache: fix for allocator and register thread race
bcache: properly set task state in bch_writeback_thread()
cifs: silence compiler warnings showing up with gcc-8.0.0
proc: fix /proc/*/map_files lookup
arm64: spinlock: Fix theoretical trylock() A-B-A with LSE atomics
RDS: IB: Fix null pointer issue
xen/grant-table: Use put_page instead of free_page
xen-netfront: Fix race between device setup and open
MIPS: TXx9: use IS_BUILTIN() for CONFIG_LEDS_CLASS
MIPS: generic: Fix machine compatible matching
bpf: fix selftests/bpf test_kmod.sh failure when CONFIG_BPF_JIT_ALWAYS_ON=y
ACPI / scan: Use acpi_bus_get_status() to initialize ACPI_TYPE_DEVICE devs
ACPI: processor_perflib: Do not send _PPC change notification if not ready
firmware: dmi_scan: Fix handling of empty DMI strings
x86/power: Fix swsusp_arch_resume prototype
netfilter: ipv6: nf_defrag: Kill frag queue on RFC2460 failure
drm/nouveau/pmu/fuc: don't use movw directly anymore
IB/ipoib: Fix for potential no-carrier state
openvswitch: Remove padding from packet before L3+ conntrack processing
mm/fadvise: discard partial page if endbyte is also EOF
mm: pin address_space before dereferencing it while isolating an LRU page
mm: thp: use down_read_trylock() in khugepaged to avoid long block
sparc64: update pmdp_invalidate() to return old pmd value
asm-generic: provide generic_pmdp_establish()
mm/mempolicy: add nodes_empty check in SYSC_migrate_pages
mm/mempolicy: fix the check of nodemask from user
ocfs2: return error when we attempt to access a dirty bh in jbd2
ocfs2/acl: use 'ip_xattr_sem' to protect getting extended attribute
ocfs2: return -EROFS to mount.ocfs2 if inode block is invalid
kvm: Map PFN-type memory regions as writable (if possible)
tcp_nv: fix potential integer overflow in tcpnv_acked
gianfar: prevent integer wrapping in the rx handler
ntb_transport: Fix bug with max_mw_size parameter
RDMA/mlx5: Avoid memory leak in case of XRCD dealloc failure
powerpc/numa: Ensure nodes initialized for hotplug
powerpc/numa: Use ibm,max-associativity-domains to discover possible nodes
jffs2: Fix use-after-free bug in jffs2_iget()'s error handling path
device property: Define type of PROPERTY_ENRTY_*() macros
fm10k: fix "failed to kill vid" message for VF
HID: roccat: prevent an out of bounds read in kovaplus_profile_activated()
btrfs: fail mount when sb flag is not in BTRFS_SUPER_FLAG_SUPP
Btrfs: fix scrub to repair raid6 corruption
btrfs: Fix out of bounds access in btrfs_search_slot
Btrfs: set plug for fsync
ipmi/powernv: Fix error return code in ipmi_powernv_probe()
mac80211_hwsim: fix possible memory leak in hwsim_new_radio_nl()
kconfig: Fix expr_free() E_NOT leak
kconfig: Fix automatic menu creation mem leak
kconfig: Don't leak main menus during parsing
watchdog: sp5100_tco: Fix watchdog disable bit
nfs: Do not convert nfs_idmap_cache_timeout to jiffies
net: stmmac: dwmac-meson8b: propagate rate changes to the parent clock
net: stmmac: dwmac-meson8b: fix setting the RGMII TX clock on Meson8b
dm thin: fix documentation relative to low water mark threshold
iommu/vt-d: Use domain instead of cache fetching
perf record: Fix failed memory allocation for get_cpuid_str
tools lib traceevent: Fix get_field_str() for dynamic strings
perf callchain: Fix attr.sample_max_stack setting
tools lib traceevent: Simplify pointer print logic and fix %pF
i40iw: Zero-out consumer key on allocate stag for FMR
Input: psmouse - fix Synaptics detection when protocol is disabled
PCI: Add function 1 DMA alias quirk for Marvell 9128
tracing/hrtimer: Fix tracing bugs by taking all clock bases and modes into account
netfilter: ipv6: nf_defrag: Pass on packets to stack per RFC2460
kvm: x86: fix KVM_XEN_HVM_CONFIG ioctl
ALSA: hda - Use IS_REACHABLE() for dependency on input
NFSv4: always set NFS_LOCK_LOST when a lock is lost.
x86/tsc: Allow TSC calibration without PIT
firewire-ohci: work around oversized DMA reads on JMicron controllers
kvm: x86: IA32_ARCH_CAPABILITIES is always supported
KVM: x86: Update cpuid properly when CR4.OSXAVE or CR4.PKE is changed
KVM: s390: vsie: fix < 8k check for the itdba
KVM/VMX: Expose SSBD properly to guests
kernel/signal.c: avoid undefined behaviour in kill_something_info
kernel/sys.c: fix potential Spectre v1 issue
kasan: fix memory hotplug during boot
ipc/shm: fix shmat() nil address after round-down when remapping
Revert "ipc/shm: Fix shmat mmap nil-page protection"
IB/hfi1: Use after free race condition in send context error path
drm/vmwgfx: Fix 32-bit VMW_PORT_HB_[IN|OUT] macros
xen-swiotlb: fix the check condition for xen_swiotlb_free_coherent
libata: blacklist Micron 500IT SSD with MU01 firmware
libata: Blacklist some Sandisk SSDs for NCQ
mmc: sdhci-iproc: fix 32bit writes for TRANSFER_MODE register
mmc: sdhci-iproc: remove hard coded mmc cap 1.8v
do d_instantiate/unlock_new_inode combinations safely
ALSA: timer: Fix pause event notification
aio: fix io_destroy(2) vs. lookup_ioctx() race
affs_lookup(): close a race with affs_remove_link()
KVM: Fix spelling mistake: "cop_unsuable" -> "cop_unusable"
MIPS: Fix ptrace(2) PTRACE_PEEKUSR and PTRACE_POKEUSR accesses to o32 FGRs
MIPS: ptrace: Expose FIR register through FP regset
MIPS: c-r4k: Fix data corruption related to cache coherence
UPSTREAM: sched/fair: Consider RT/IRQ pressure in capacity_spare_wake
BACKPORT, FROMLIST: fscrypt: add Speck128/256 support
Conflicts:
fs/crypto/fscrypt_private.h
fs/crypto/keyinfo.c
include/uapi/linux/fs.h
Change-Id: I47c7c5551b887b4a109cceb04811fc6ccf98b5bc
Signed-off-by: Blagovest Kolenichev <bkolenichev@codeaurora.org>
#ifndef __LINUX_CPUMASK_H
#define __LINUX_CPUMASK_H

/*
 * Cpumasks provide a bitmap suitable for representing the
 * set of CPUs in a system, one bit position per CPU number. In general,
 * only nr_cpu_ids (<= NR_CPUS) bits are valid.
 */
#include <linux/kernel.h>
#include <linux/threads.h>
#include <linux/bitmap.h>
#include <linux/bug.h>

/* Don't assign or return these: may not be this big! */
typedef struct cpumask { DECLARE_BITMAP(bits, NR_CPUS); } cpumask_t;

/**
 * cpumask_bits - get the bits in a cpumask
 * @maskp: the struct cpumask *
 *
 * You should only assume nr_cpu_ids bits of this mask are valid. This is
 * a macro so it's const-correct.
 */
#define cpumask_bits(maskp) ((maskp)->bits)

/**
 * cpumask_pr_args - printf args to output a cpumask
 * @maskp: cpumask to be printed
 *
 * Can be used to provide arguments for '%*pb[l]' when printing a cpumask.
 */
#define cpumask_pr_args(maskp)		nr_cpu_ids, cpumask_bits(maskp)

#if NR_CPUS == 1
#define nr_cpu_ids		1
#else
extern int nr_cpu_ids;
#endif

#ifdef CONFIG_CPUMASK_OFFSTACK
/* Assuming NR_CPUS is huge, a runtime limit is more efficient. Also,
 * not all bits may be allocated. */
#define nr_cpumask_bits	nr_cpu_ids
#else
#define nr_cpumask_bits	NR_CPUS
#endif
/*
 * The following particular system cpumasks and operations manage
 * possible, present, active and online cpus.
 *
 *     cpu_possible_mask- has bit 'cpu' set iff cpu is populatable
 *     cpu_present_mask - has bit 'cpu' set iff cpu is populated
 *     cpu_online_mask  - has bit 'cpu' set iff cpu available to scheduler
 *     cpu_active_mask  - has bit 'cpu' set iff cpu available to migration
 *     cpu_isolated_mask- has bit 'cpu' set iff cpu isolated
 *
 *  If !CONFIG_HOTPLUG_CPU, present == possible, and active == online.
 *
 *  The cpu_possible_mask is fixed at boot time, as the set of CPU ids
 *  that it is possible might ever be plugged in at anytime during the
 *  life of that system boot. The cpu_present_mask is dynamic(*),
 *  representing which CPUs are currently plugged in. And
 *  cpu_online_mask is the dynamic subset of cpu_present_mask,
 *  indicating those CPUs available for scheduling.
 *
 *  If HOTPLUG is enabled, then cpu_possible_mask is forced to have
 *  all NR_CPUS bits set, otherwise it is just the set of CPUs that
 *  ACPI reports present at boot.
 *
 *  If HOTPLUG is enabled, then cpu_present_mask varies dynamically,
 *  depending on what ACPI reports as currently plugged in, otherwise
 *  cpu_present_mask is just a copy of cpu_possible_mask.
 *
 *  (*) Well, cpu_present_mask is dynamic in the hotplug case.  If not
 *      hotplug, it's a copy of cpu_possible_mask, hence fixed at boot.
 *
 * Subtleties:
 * 1) UP archs (NR_CPUS == 1, CONFIG_SMP not defined) hardcode the
 *    assumption that their single CPU is online.  The UP
 *    cpu_{online,possible,present}_masks are placebos.  Changing them
 *    will have no useful effect on the following num_*_cpus()
 *    and cpu_*() macros in the UP case.  This ugliness is a UP
 *    optimization - don't waste any instructions or memory references
 *    asking if you're online or how many CPUs there are if there is
 *    only one CPU.
 */

extern struct cpumask __cpu_possible_mask;
extern struct cpumask __cpu_online_mask;
extern struct cpumask __cpu_present_mask;
extern struct cpumask __cpu_active_mask;
extern struct cpumask __cpu_isolated_mask;
#define cpu_possible_mask ((const struct cpumask *)&__cpu_possible_mask)
#define cpu_online_mask   ((const struct cpumask *)&__cpu_online_mask)
#define cpu_present_mask  ((const struct cpumask *)&__cpu_present_mask)
#define cpu_active_mask   ((const struct cpumask *)&__cpu_active_mask)
#define cpu_isolated_mask ((const struct cpumask *)&__cpu_isolated_mask)

#if NR_CPUS > 1
#define num_online_cpus()	cpumask_weight(cpu_online_mask)
#define num_possible_cpus()	cpumask_weight(cpu_possible_mask)
#define num_present_cpus()	cpumask_weight(cpu_present_mask)
#define num_active_cpus()	cpumask_weight(cpu_active_mask)
#define num_isolated_cpus()	cpumask_weight(cpu_isolated_mask)
#define num_online_uniso_cpus()						\
({									\
	cpumask_t mask;							\
									\
	cpumask_andnot(&mask, cpu_online_mask, cpu_isolated_mask);	\
	cpumask_weight(&mask);						\
})
#define cpu_online(cpu)		cpumask_test_cpu((cpu), cpu_online_mask)
#define cpu_possible(cpu)	cpumask_test_cpu((cpu), cpu_possible_mask)
#define cpu_present(cpu)	cpumask_test_cpu((cpu), cpu_present_mask)
#define cpu_active(cpu)		cpumask_test_cpu((cpu), cpu_active_mask)
#define cpu_isolated(cpu)	cpumask_test_cpu((cpu), cpu_isolated_mask)
#else
#define num_online_cpus()	1U
#define num_possible_cpus()	1U
#define num_present_cpus()	1U
#define num_active_cpus()	1U
#define num_isolated_cpus()	0U
#define num_online_uniso_cpus()	1U
#define cpu_online(cpu)		((cpu) == 0)
#define cpu_possible(cpu)	((cpu) == 0)
#define cpu_present(cpu)	((cpu) == 0)
#define cpu_active(cpu)		((cpu) == 0)
#define cpu_isolated(cpu)	((cpu) != 0)
#endif
/* verify cpu argument to cpumask_* operators */
static inline unsigned int cpumask_check(unsigned int cpu)
{
#ifdef CONFIG_DEBUG_PER_CPU_MAPS
	WARN_ON_ONCE(cpu >= nr_cpumask_bits);
#endif /* CONFIG_DEBUG_PER_CPU_MAPS */
	return cpu;
}
#if NR_CPUS == 1
/* Uniprocessor.  Assume all masks are "1". */
static inline unsigned int cpumask_first(const struct cpumask *srcp)
{
	return 0;
}

/* Valid inputs for n are -1 and 0. */
static inline unsigned int cpumask_next(int n, const struct cpumask *srcp)
{
	return n+1;
}

static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
{
	return n+1;
}

static inline unsigned int cpumask_next_and(int n,
					    const struct cpumask *srcp,
					    const struct cpumask *andp)
{
	return n+1;
}

/* cpu must be a valid cpu, ie 0, so there's no other choice. */
static inline unsigned int cpumask_any_but(const struct cpumask *mask,
					   unsigned int cpu)
{
	return 1;
}

static inline unsigned int cpumask_local_spread(unsigned int i, int node)
{
	return 0;
}

#define for_each_cpu(cpu, mask)			\
	for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
#define for_each_cpu_not(cpu, mask)		\
	for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
#define for_each_cpu_wrap(cpu, mask, start)	\
	for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask, (void)(start))
#define for_each_cpu_and(cpu, mask, and)	\
	for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask, (void)and)
#else
/**
 * cpumask_first - get the first cpu in a cpumask
 * @srcp: the cpumask pointer
 *
 * Returns >= nr_cpu_ids if no cpus set.
 */
static inline unsigned int cpumask_first(const struct cpumask *srcp)
{
	return find_first_bit(cpumask_bits(srcp), nr_cpumask_bits);
}

/**
 * cpumask_next - get the next cpu in a cpumask
 * @n: the cpu prior to the place to search (ie. return will be > @n)
 * @srcp: the cpumask pointer
 *
 * Returns >= nr_cpu_ids if no further cpus set.
 */
static inline unsigned int cpumask_next(int n, const struct cpumask *srcp)
{
	/* -1 is a legal arg here. */
	if (n != -1)
		cpumask_check(n);
	return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
}

/**
 * cpumask_next_zero - get the next unset cpu in a cpumask
 * @n: the cpu prior to the place to search (ie. return will be > @n)
 * @srcp: the cpumask pointer
 *
 * Returns >= nr_cpu_ids if no further cpus unset.
 */
static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
{
	/* -1 is a legal arg here. */
	if (n != -1)
		cpumask_check(n);
	return find_next_zero_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
}

int cpumask_next_and(int n, const struct cpumask *, const struct cpumask *);
int cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
unsigned int cpumask_local_spread(unsigned int i, int node);

/**
 * for_each_cpu - iterate over every cpu in a mask
 * @cpu: the (optionally unsigned) integer iterator
 * @mask: the cpumask pointer
 *
 * After the loop, cpu is >= nr_cpu_ids.
 */
#define for_each_cpu(cpu, mask)				\
	for ((cpu) = -1;				\
		(cpu) = cpumask_next((cpu), (mask)),	\
		(cpu) < nr_cpu_ids;)

/**
 * for_each_cpu_not - iterate over every cpu in a complemented mask
 * @cpu: the (optionally unsigned) integer iterator
 * @mask: the cpumask pointer
 *
 * After the loop, cpu is >= nr_cpu_ids.
 */
#define for_each_cpu_not(cpu, mask)				\
	for ((cpu) = -1;					\
		(cpu) = cpumask_next_zero((cpu), (mask)),	\
		(cpu) < nr_cpu_ids;)

extern int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap);

/**
 * for_each_cpu_wrap - iterate over every cpu in a mask, starting at a specified location
 * @cpu: the (optionally unsigned) integer iterator
 * @mask: the cpumask pointer
 * @start: the start location
 *
 * The implementation does not assume any bit in @mask is set (including @start).
 *
 * After the loop, cpu is >= nr_cpu_ids.
 */
#define for_each_cpu_wrap(cpu, mask, start)					\
	for ((cpu) = cpumask_next_wrap((start)-1, (mask), (start), false);	\
	     (cpu) < nr_cpumask_bits;						\
	     (cpu) = cpumask_next_wrap((cpu), (mask), (start), true))

/**
 * for_each_cpu_and - iterate over every cpu in both masks
 * @cpu: the (optionally unsigned) integer iterator
 * @mask: the first cpumask pointer
 * @and: the second cpumask pointer
 *
 * This saves a temporary CPU mask in many places.  It is equivalent to:
 *	struct cpumask tmp;
 *	cpumask_and(&tmp, &mask, &and);
 *	for_each_cpu(cpu, &tmp)
 *		...
 *
 * After the loop, cpu is >= nr_cpu_ids.
 */
#define for_each_cpu_and(cpu, mask, and)				\
	for ((cpu) = -1;						\
		(cpu) = cpumask_next_and((cpu), (mask), (and)),		\
		(cpu) < nr_cpu_ids;)
#endif /* SMP */
#define CPU_BITS_NONE						\
{								\
	[0 ... BITS_TO_LONGS(NR_CPUS)-1] = 0UL			\
}

#define CPU_BITS_CPU0						\
{								\
	[0] =  1UL						\
}

/**
 * cpumask_set_cpu - set a cpu in a cpumask
 * @cpu: cpu number (< nr_cpu_ids)
 * @dstp: the cpumask pointer
 */
static inline void cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)
{
	set_bit(cpumask_check(cpu), cpumask_bits(dstp));
}

/**
 * cpumask_clear_cpu - clear a cpu in a cpumask
 * @cpu: cpu number (< nr_cpu_ids)
 * @dstp: the cpumask pointer
 */
static inline void cpumask_clear_cpu(int cpu, struct cpumask *dstp)
{
	clear_bit(cpumask_check(cpu), cpumask_bits(dstp));
}

/**
 * cpumask_test_cpu - test for a cpu in a cpumask
 * @cpu: cpu number (< nr_cpu_ids)
 * @cpumask: the cpumask pointer
 *
 * Returns 1 if @cpu is set in @cpumask, else returns 0
 */
static inline int cpumask_test_cpu(int cpu, const struct cpumask *cpumask)
{
	return test_bit(cpumask_check(cpu), cpumask_bits((cpumask)));
}

/**
 * cpumask_test_and_set_cpu - atomically test and set a cpu in a cpumask
 * @cpu: cpu number (< nr_cpu_ids)
 * @cpumask: the cpumask pointer
 *
 * Returns 1 if @cpu is set in old bitmap of @cpumask, else returns 0
 *
 * test_and_set_bit wrapper for cpumasks.
 */
static inline int cpumask_test_and_set_cpu(int cpu, struct cpumask *cpumask)
{
	return test_and_set_bit(cpumask_check(cpu), cpumask_bits(cpumask));
}

/**
 * cpumask_test_and_clear_cpu - atomically test and clear a cpu in a cpumask
 * @cpu: cpu number (< nr_cpu_ids)
 * @cpumask: the cpumask pointer
 *
 * Returns 1 if @cpu is set in old bitmap of @cpumask, else returns 0
 *
 * test_and_clear_bit wrapper for cpumasks.
 */
static inline int cpumask_test_and_clear_cpu(int cpu, struct cpumask *cpumask)
{
	return test_and_clear_bit(cpumask_check(cpu), cpumask_bits(cpumask));
}

/**
 * cpumask_setall - set all cpus (< nr_cpu_ids) in a cpumask
 * @dstp: the cpumask pointer
 */
static inline void cpumask_setall(struct cpumask *dstp)
{
	bitmap_fill(cpumask_bits(dstp), nr_cpumask_bits);
}

/**
 * cpumask_clear - clear all cpus (< nr_cpu_ids) in a cpumask
 * @dstp: the cpumask pointer
 */
static inline void cpumask_clear(struct cpumask *dstp)
{
	bitmap_zero(cpumask_bits(dstp), nr_cpumask_bits);
}
/**
 * cpumask_and - *dstp = *src1p & *src2p
 * @dstp: the cpumask result
 * @src1p: the first input
 * @src2p: the second input
 *
 * If *@dstp is empty, returns 0, else returns 1
 */
static inline int cpumask_and(struct cpumask *dstp,
			      const struct cpumask *src1p,
			      const struct cpumask *src2p)
{
	return bitmap_and(cpumask_bits(dstp), cpumask_bits(src1p),
			  cpumask_bits(src2p), nr_cpumask_bits);
}

/**
 * cpumask_or - *dstp = *src1p | *src2p
 * @dstp: the cpumask result
 * @src1p: the first input
 * @src2p: the second input
 */
static inline void cpumask_or(struct cpumask *dstp, const struct cpumask *src1p,
			      const struct cpumask *src2p)
{
	bitmap_or(cpumask_bits(dstp), cpumask_bits(src1p),
		  cpumask_bits(src2p), nr_cpumask_bits);
}

/**
 * cpumask_xor - *dstp = *src1p ^ *src2p
 * @dstp: the cpumask result
 * @src1p: the first input
 * @src2p: the second input
 */
static inline void cpumask_xor(struct cpumask *dstp,
			       const struct cpumask *src1p,
			       const struct cpumask *src2p)
{
	bitmap_xor(cpumask_bits(dstp), cpumask_bits(src1p),
		   cpumask_bits(src2p), nr_cpumask_bits);
}

/**
 * cpumask_andnot - *dstp = *src1p & ~*src2p
 * @dstp: the cpumask result
 * @src1p: the first input
 * @src2p: the second input
 *
 * If *@dstp is empty, returns 0, else returns 1
 */
static inline int cpumask_andnot(struct cpumask *dstp,
				 const struct cpumask *src1p,
				 const struct cpumask *src2p)
{
	return bitmap_andnot(cpumask_bits(dstp), cpumask_bits(src1p),
			     cpumask_bits(src2p), nr_cpumask_bits);
}

/**
 * cpumask_complement - *dstp = ~*srcp
 * @dstp: the cpumask result
 * @srcp: the input to invert
 */
static inline void cpumask_complement(struct cpumask *dstp,
				      const struct cpumask *srcp)
{
	bitmap_complement(cpumask_bits(dstp), cpumask_bits(srcp),
			  nr_cpumask_bits);
}

/**
 * cpumask_equal - *src1p == *src2p
 * @src1p: the first input
 * @src2p: the second input
 */
static inline bool cpumask_equal(const struct cpumask *src1p,
				 const struct cpumask *src2p)
{
	return bitmap_equal(cpumask_bits(src1p), cpumask_bits(src2p),
			    nr_cpumask_bits);
}

/**
 * cpumask_intersects - (*src1p & *src2p) != 0
 * @src1p: the first input
|
|
* @src2p: the second input
|
|
*/
|
|
static inline bool cpumask_intersects(const struct cpumask *src1p,
|
|
const struct cpumask *src2p)
|
|
{
|
|
return bitmap_intersects(cpumask_bits(src1p), cpumask_bits(src2p),
|
|
nr_cpumask_bits);
|
|
}
|
|
|
|
/**
|
|
* cpumask_subset - (*src1p & ~*src2p) == 0
|
|
* @src1p: the first input
|
|
* @src2p: the second input
|
|
*
|
|
* Returns 1 if *@src1p is a subset of *@src2p, else returns 0
|
|
*/
|
|
static inline int cpumask_subset(const struct cpumask *src1p,
|
|
const struct cpumask *src2p)
|
|
{
|
|
return bitmap_subset(cpumask_bits(src1p), cpumask_bits(src2p),
|
|
nr_cpumask_bits);
|
|
}
|
|
|
|
/**
|
|
* cpumask_empty - *srcp == 0
|
|
* @srcp: the cpumask to that all cpus < nr_cpu_ids are clear.
|
|
*/
|
|
static inline bool cpumask_empty(const struct cpumask *srcp)
|
|
{
|
|
return bitmap_empty(cpumask_bits(srcp), nr_cpumask_bits);
|
|
}
|
|
|
|
/**
|
|
* cpumask_full - *srcp == 0xFFFFFFFF...
|
|
* @srcp: the cpumask to that all cpus < nr_cpu_ids are set.
|
|
*/
|
|
static inline bool cpumask_full(const struct cpumask *srcp)
|
|
{
|
|
return bitmap_full(cpumask_bits(srcp), nr_cpumask_bits);
|
|
}
|
|
|
|
/**
|
|
* cpumask_weight - Count of bits in *srcp
|
|
* @srcp: the cpumask to count bits (< nr_cpu_ids) in.
|
|
*/
|
|
static inline unsigned int cpumask_weight(const struct cpumask *srcp)
|
|
{
|
|
return bitmap_weight(cpumask_bits(srcp), nr_cpumask_bits);
|
|
}
|
|
|
|
/**
|
|
* cpumask_shift_right - *dstp = *srcp >> n
|
|
* @dstp: the cpumask result
|
|
* @srcp: the input to shift
|
|
* @n: the number of bits to shift by
|
|
*/
|
|
static inline void cpumask_shift_right(struct cpumask *dstp,
|
|
const struct cpumask *srcp, int n)
|
|
{
|
|
bitmap_shift_right(cpumask_bits(dstp), cpumask_bits(srcp), n,
|
|
nr_cpumask_bits);
|
|
}
|
|
|
|
/**
|
|
* cpumask_shift_left - *dstp = *srcp << n
|
|
* @dstp: the cpumask result
|
|
* @srcp: the input to shift
|
|
* @n: the number of bits to shift by
|
|
*/
|
|
static inline void cpumask_shift_left(struct cpumask *dstp,
|
|
const struct cpumask *srcp, int n)
|
|
{
|
|
bitmap_shift_left(cpumask_bits(dstp), cpumask_bits(srcp), n,
|
|
nr_cpumask_bits);
|
|
}
|
|
|
|
/**
|
|
* cpumask_copy - *dstp = *srcp
|
|
* @dstp: the result
|
|
* @srcp: the input cpumask
|
|
*/
|
|
static inline void cpumask_copy(struct cpumask *dstp,
|
|
const struct cpumask *srcp)
|
|
{
|
|
bitmap_copy(cpumask_bits(dstp), cpumask_bits(srcp), nr_cpumask_bits);
|
|
}
|
|
|
|
/**
|
|
* cpumask_any - pick a "random" cpu from *srcp
|
|
* @srcp: the input cpumask
|
|
*
|
|
* Returns >= nr_cpu_ids if no cpus set.
|
|
*/
|
|
#define cpumask_any(srcp) cpumask_first(srcp)
|
|
|
|
/**
|
|
* cpumask_first_and - return the first cpu from *srcp1 & *srcp2
|
|
* @src1p: the first input
|
|
* @src2p: the second input
|
|
*
|
|
* Returns >= nr_cpu_ids if no cpus set in both. See also cpumask_next_and().
|
|
*/
|
|
#define cpumask_first_and(src1p, src2p) cpumask_next_and(-1, (src1p), (src2p))
|
|
|
|
/**
|
|
* cpumask_any_and - pick a "random" cpu from *mask1 & *mask2
|
|
* @mask1: the first input cpumask
|
|
* @mask2: the second input cpumask
|
|
*
|
|
* Returns >= nr_cpu_ids if no cpus set.
|
|
*/
|
|
#define cpumask_any_and(mask1, mask2) cpumask_first_and((mask1), (mask2))
|
|
|
|
/**
|
|
* cpumask_of - the cpumask containing just a given cpu
|
|
* @cpu: the cpu (<= nr_cpu_ids)
|
|
*/
|
|
#define cpumask_of(cpu) (get_cpu_mask(cpu))
|
|
|
|
/**
|
|
* cpumask_parse_user - extract a cpumask from a user string
|
|
* @buf: the buffer to extract from
|
|
* @len: the length of the buffer
|
|
* @dstp: the cpumask to set.
|
|
*
|
|
* Returns -errno, or 0 for success.
|
|
*/
|
|
static inline int cpumask_parse_user(const char __user *buf, int len,
|
|
struct cpumask *dstp)
|
|
{
|
|
return bitmap_parse_user(buf, len, cpumask_bits(dstp), nr_cpumask_bits);
|
|
}
|
|
|
|
/**
|
|
* cpumask_parselist_user - extract a cpumask from a user string
|
|
* @buf: the buffer to extract from
|
|
* @len: the length of the buffer
|
|
* @dstp: the cpumask to set.
|
|
*
|
|
* Returns -errno, or 0 for success.
|
|
*/
|
|
static inline int cpumask_parselist_user(const char __user *buf, int len,
|
|
struct cpumask *dstp)
|
|
{
|
|
return bitmap_parselist_user(buf, len, cpumask_bits(dstp),
|
|
nr_cpumask_bits);
|
|
}
|
|
|
|
/**
|
|
* cpumask_parse - extract a cpumask from a string
|
|
* @buf: the buffer to extract from
|
|
* @dstp: the cpumask to set.
|
|
*
|
|
* Returns -errno, or 0 for success.
|
|
*/
|
|
static inline int cpumask_parse(const char *buf, struct cpumask *dstp)
|
|
{
|
|
char *nl = strchr(buf, '\n');
|
|
unsigned int len = nl ? (unsigned int)(nl - buf) : strlen(buf);
|
|
|
|
return bitmap_parse(buf, len, cpumask_bits(dstp), nr_cpumask_bits);
|
|
}
|
|
|
|
/**
|
|
* cpulist_parse - extract a cpumask from a user string of ranges
|
|
* @buf: the buffer to extract from
|
|
* @dstp: the cpumask to set.
|
|
*
|
|
* Returns -errno, or 0 for success.
|
|
*/
|
|
static inline int cpulist_parse(const char *buf, struct cpumask *dstp)
|
|
{
|
|
return bitmap_parselist(buf, cpumask_bits(dstp), nr_cpumask_bits);
|
|
}
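
/*
 * Illustrative sketch (not part of the original header): cpulist_parse()
 * accepts the range syntax familiar from boot parameters such as
 * "isolcpus=", e.g. "0-3,8" sets cpus 0, 1, 2, 3 and 8:
 *
 *	cpumask_var_t mask;
 *
 *	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
 *		return -ENOMEM;
 *	if (cpulist_parse("0-3,8", mask))
 *		pr_warn("invalid cpu list\n");
 *	free_cpumask_var(mask);
 */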

/**
 * cpumask_size - size to allocate for a 'struct cpumask' in bytes
 */
static inline size_t cpumask_size(void)
{
	return BITS_TO_LONGS(nr_cpumask_bits) * sizeof(long);
}
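
/*
 * Illustrative sketch (not part of the original header): cpumask_size()
 * is intended for allocating a bare struct cpumask dynamically, since
 * the structure size depends on the configured number of cpus:
 *
 *	struct cpumask *mask = kmalloc(cpumask_size(), GFP_KERNEL);
 *
 *	if (!mask)
 *		return -ENOMEM;
 *	cpumask_clear(mask);
 *	...
 *	kfree(mask);
 */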

/*
 * cpumask_var_t: struct cpumask for stack usage.
 *
 * Oh, the wicked games we play! In order to make kernel coding a
 * little more difficult, we typedef cpumask_var_t to an array or a
 * pointer: doing &mask on an array is a noop, so it still works.
 *
 * ie.
 *	cpumask_var_t tmpmask;
 *	if (!alloc_cpumask_var(&tmpmask, GFP_KERNEL))
 *		return -ENOMEM;
 *
 *	... use 'tmpmask' like a normal struct cpumask * ...
 *
 *	free_cpumask_var(tmpmask);
 *
 *
 * However, there is one notable exception: alloc_cpumask_var() allocates
 * only nr_cpumask_bits bits (on the other hand, a real cpumask_t always
 * has NR_CPUS bits). Therefore you must not dereference cpumask_var_t:
 *
 *	cpumask_var_t tmpmask;
 *	if (!alloc_cpumask_var(&tmpmask, GFP_KERNEL))
 *		return -ENOMEM;
 *
 *	var = *tmpmask;
 *
 * This code performs an NR_CPUS-length memcpy and can corrupt memory.
 * cpumask_copy() provides safe copy functionality.
 *
 * Note that there is another evil here: If you define a cpumask_var_t
 * as a percpu variable then the way to obtain the address of the cpumask
 * structure differs, which in turn determines which this_cpu_* operation
 * must be used. Please use this_cpu_cpumask_var_t in those cases. The
 * direct use of this_cpu_ptr() or this_cpu_read() will lead to failures
 * when the other type of cpumask_var_t implementation is configured.
 */
#ifdef CONFIG_CPUMASK_OFFSTACK
typedef struct cpumask *cpumask_var_t;

#define this_cpu_cpumask_var_ptr(x) this_cpu_read(x)

bool alloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags, int node);
bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags);
bool zalloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags, int node);
bool zalloc_cpumask_var(cpumask_var_t *mask, gfp_t flags);
void alloc_bootmem_cpumask_var(cpumask_var_t *mask);
void free_cpumask_var(cpumask_var_t mask);
void free_bootmem_cpumask_var(cpumask_var_t mask);

static inline bool cpumask_available(cpumask_var_t mask)
{
	return mask != NULL;
}

#else
typedef struct cpumask cpumask_var_t[1];

#define this_cpu_cpumask_var_ptr(x) this_cpu_ptr(x)

static inline bool alloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
{
	return true;
}

static inline bool alloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags,
					  int node)
{
	return true;
}

static inline bool zalloc_cpumask_var(cpumask_var_t *mask, gfp_t flags)
{
	cpumask_clear(*mask);
	return true;
}

static inline bool zalloc_cpumask_var_node(cpumask_var_t *mask, gfp_t flags,
					   int node)
{
	cpumask_clear(*mask);
	return true;
}

static inline void alloc_bootmem_cpumask_var(cpumask_var_t *mask)
{
}

static inline void free_cpumask_var(cpumask_var_t mask)
{
}

static inline void free_bootmem_cpumask_var(cpumask_var_t mask)
{
}

static inline bool cpumask_available(cpumask_var_t mask)
{
	return true;
}
#endif /* CONFIG_CPUMASK_OFFSTACK */

/* It's common to want to use cpu_all_mask in struct member initializers,
 * so it has to refer to an address rather than a pointer. */
extern const DECLARE_BITMAP(cpu_all_bits, NR_CPUS);
#define cpu_all_mask to_cpumask(cpu_all_bits)

/* First bits of cpu_bit_bitmap are in fact unset. */
#define cpu_none_mask to_cpumask(cpu_bit_bitmap[0])

#define for_each_possible_cpu(cpu) for_each_cpu((cpu), cpu_possible_mask)
#define for_each_online_cpu(cpu) for_each_cpu((cpu), cpu_online_mask)
#define for_each_present_cpu(cpu) for_each_cpu((cpu), cpu_present_mask)
#define for_each_isolated_cpu(cpu) for_each_cpu((cpu), cpu_isolated_mask)
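
/*
 * Illustrative sketch (not part of the original header): each iterator
 * above takes a plain int iteration variable, e.g.:
 *
 *	int cpu;
 *
 *	for_each_online_cpu(cpu)
 *		pr_info("CPU%d is online\n", cpu);
 */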

/* Wrappers for arch boot code to manipulate normally-constant masks */
void init_cpu_present(const struct cpumask *src);
void init_cpu_possible(const struct cpumask *src);
void init_cpu_online(const struct cpumask *src);

static inline void
set_cpu_possible(unsigned int cpu, bool possible)
{
	if (possible)
		cpumask_set_cpu(cpu, &__cpu_possible_mask);
	else
		cpumask_clear_cpu(cpu, &__cpu_possible_mask);
}

static inline void
set_cpu_present(unsigned int cpu, bool present)
{
	if (present)
		cpumask_set_cpu(cpu, &__cpu_present_mask);
	else
		cpumask_clear_cpu(cpu, &__cpu_present_mask);
}

static inline void
set_cpu_online(unsigned int cpu, bool online)
{
	if (online)
		cpumask_set_cpu(cpu, &__cpu_online_mask);
	else
		cpumask_clear_cpu(cpu, &__cpu_online_mask);
}

static inline void
set_cpu_active(unsigned int cpu, bool active)
{
	if (active)
		cpumask_set_cpu(cpu, &__cpu_active_mask);
	else
		cpumask_clear_cpu(cpu, &__cpu_active_mask);
}

static inline void
set_cpu_isolated(unsigned int cpu, bool isolated)
{
	if (isolated)
		cpumask_set_cpu(cpu, &__cpu_isolated_mask);
	else
		cpumask_clear_cpu(cpu, &__cpu_isolated_mask);
}

/**
 * to_cpumask - convert an NR_CPUS bitmap to a struct cpumask *
 * @bitmap: the bitmap
 *
 * There are a few places where cpumask_var_t isn't appropriate and
 * static cpumasks must be used (eg. very early boot), yet we don't
 * expose the definition of 'struct cpumask'.
 *
 * This does the conversion, and can be used as a constant initializer.
 */
#define to_cpumask(bitmap)						\
	((struct cpumask *)(1 ? (bitmap)				\
			    : (void *)sizeof(__check_is_bitmap(bitmap))))

static inline int __check_is_bitmap(const unsigned long *bitmap)
{
	return 1;
}

/*
 * Special-case data structure for "single bit set only" constant CPU masks.
 *
 * We pre-generate all the 64 (or 32) possible bit positions, with enough
 * padding to the left and the right, and return the constant pointer
 * appropriately offset.
 */
extern const unsigned long
	cpu_bit_bitmap[BITS_PER_LONG+1][BITS_TO_LONGS(NR_CPUS)];

static inline const struct cpumask *get_cpu_mask(unsigned int cpu)
{
	const unsigned long *p = cpu_bit_bitmap[1 + cpu % BITS_PER_LONG];
	p -= cpu / BITS_PER_LONG;
	return to_cpumask(p);
}

#define cpu_is_offline(cpu)	unlikely(!cpu_online(cpu))

#if NR_CPUS <= BITS_PER_LONG
#define CPU_BITS_ALL							\
{									\
	[BITS_TO_LONGS(NR_CPUS)-1] = BITMAP_LAST_WORD_MASK(NR_CPUS)	\
}

#else /* NR_CPUS > BITS_PER_LONG */

#define CPU_BITS_ALL							\
{									\
	[0 ... BITS_TO_LONGS(NR_CPUS)-2] = ~0UL,			\
	[BITS_TO_LONGS(NR_CPUS)-1] = BITMAP_LAST_WORD_MASK(NR_CPUS)	\
}
#endif /* NR_CPUS > BITS_PER_LONG */

/**
 * cpumap_print_to_pagebuf - copies the cpumask into the buffer either
 *	as comma-separated list of cpus or hex values of cpumask
 * @list: indicates whether the cpumap must be list
 * @mask: the cpumask to copy
 * @buf: the buffer to copy into
 *
 * Returns the length of the (null-terminated) @buf string, zero if
 * nothing is copied.
 */
static inline ssize_t
cpumap_print_to_pagebuf(bool list, char *buf, const struct cpumask *mask)
{
	return bitmap_print_to_pagebuf(list, buf, cpumask_bits(mask),
				       nr_cpu_ids);
}

#if NR_CPUS <= BITS_PER_LONG
#define CPU_MASK_ALL							\
(cpumask_t) { {								\
	[BITS_TO_LONGS(NR_CPUS)-1] = BITMAP_LAST_WORD_MASK(NR_CPUS)	\
} }
#else
#define CPU_MASK_ALL							\
(cpumask_t) { {								\
	[0 ... BITS_TO_LONGS(NR_CPUS)-2] = ~0UL,			\
	[BITS_TO_LONGS(NR_CPUS)-1] = BITMAP_LAST_WORD_MASK(NR_CPUS)	\
} }
#endif /* NR_CPUS > BITS_PER_LONG */

#define CPU_MASK_NONE							\
(cpumask_t) { {								\
	[0 ... BITS_TO_LONGS(NR_CPUS)-1] = 0UL				\
} }

#define CPU_MASK_CPU0							\
(cpumask_t) { {								\
	[0] = 1UL							\
} }

#endif /* __LINUX_CPUMASK_H */