Merge 5.15.118 into android13-5.15-lts
Changes in 5.15.118
	test_firmware: Use kstrtobool() instead of strtobool()
	test_firmware: prevent race conditions by a correct implementation of locking
	test_firmware: fix a memory leak with reqs buffer
	ksmbd: fix slab-out-of-bounds read in smb2_handle_negotiate
	drm/amdgpu: fix Null pointer dereference error in amdgpu_device_recover_vram
	of: overlay: rename variables to be consistent
	of: overlay: rework overlay apply and remove kfree()s
	of: overlay: Fix missing of_node_put() in error case of init_overlay_changeset()
	power: supply: ab8500: Fix external_power_changed race
	power: supply: sc27xx: Fix external_power_changed race
	power: supply: bq27xxx: Use mod_delayed_work() instead of cancel() + schedule()
	ARM: dts: vexpress: add missing cache properties
	tools: gpio: fix debounce_period_us output of lsgpio
	power: supply: Ratelimit no data debug output
	platform/x86: asus-wmi: Ignore WMI events with codes 0x7B, 0xC0
	regulator: Fix error checking for debugfs_create_dir
	irqchip/gic-v3: Disable pseudo NMIs on Mediatek devices w/ firmware issues
	power: supply: Fix logic checking if system is running from battery
	btrfs: scrub: try harder to mark RAID56 block groups read-only
	btrfs: handle memory allocation failure in btrfs_csum_one_bio
	ASoC: soc-pcm: test if a BE can be prepared
	parisc: Improve cache flushing for PCXL in arch_sync_dma_for_cpu()
	parisc: Flush gatt writes and adjust gatt mask in parisc_agp_mask_memory()
	MIPS: unhide PATA_PLATFORM
	MIPS: Alchemy: fix dbdma2
	mips: Move initrd_start check after initrd address sanitisation.
	ASoC: dwc: move DMA init to snd_soc_dai_driver probe()
	xen/blkfront: Only check REQ_FUA for writes
	drm:amd:amdgpu: Fix missing buffer object unlock in failure path
	NVMe: Add MAXIO 1602 to bogus nid list.
	irqchip/gic: Correctly validate OF quirk descriptors
	io_uring: hold uring mutex around poll removal
	wifi: cfg80211: fix locking in regulatory disconnect
	wifi: cfg80211: fix double lock bug in reg_wdev_chan_valid()
	epoll: ep_autoremove_wake_function should use list_del_init_careful
	ocfs2: fix use-after-free when unmounting read-only filesystem
	ocfs2: check new file size on fallocate call
	nios2: dts: Fix tse_mac "max-frame-size" property
	nilfs2: fix incomplete buffer cleanup in nilfs_btnode_abort_change_key()
	nilfs2: fix possible out-of-bounds segment allocation in resize ioctl
	kexec: support purgatories with .text.hot sections
	x86/purgatory: remove PGO flags
	powerpc/purgatory: remove PGO flags
	ALSA: usb-audio: Add quirk flag for HEM devices to enable native DSD playback
	dm thin metadata: check fail_io before using data_sm
	nouveau: fix client work fence deletion race
	RDMA/uverbs: Restrict usage of privileged QKEYs
	net: usb: qmi_wwan: add support for Compal RXM-G1
	drm/amd/display: edp do not add non-edid timings
	drm/amdgpu: add missing radeon secondary PCI ID
	ALSA: hda/realtek: Add a quirk for Compaq N14JP6
	Remove DECnet support from kernel
	thunderbolt: dma_test: Use correct value for absent rings when creating paths
	thunderbolt: Mask ring interrupt on Intel hardware as well
	USB: serial: option: add Quectel EM061KGL series
	serial: lantiq: add missing interrupt ack
	usb: dwc3: gadget: Reset num TRBs before giving back the request
	RDMA/rtrs: Fix the last iu->buf leak in err path
	RDMA/rtrs: Fix rxe_dealloc_pd warning
	RDMA/rxe: Fix packet length checks
	spi: fsl-dspi: avoid SCK glitches with continuous transfers
	netfilter: nf_tables: integrate pipapo into commit protocol
	netfilter: nfnetlink: skip error delivery on batch in case of ENOMEM
	netfilter: nf_tables: incorrect error path handling with NFT_MSG_NEWRULE
	net: enetc: correct the indexes of highest and 2nd highest TCs
	ping6: Fix send to link-local addresses with VRF.
	net/sched: simplify tcf_pedit_act
	net/sched: act_pedit: remove extra check for key type
	net/sched: act_pedit: Parse L3 Header for L4 offset
	net/sched: cls_u32: Fix reference counter leak leading to overflow
	RDMA/rxe: Remove the unused variable obj
	RDMA/rxe: Removed unused name from rxe_task struct
	RDMA/rxe: Fix the use-before-initialization error of resp_pkts
	iavf: remove mask from iavf_irq_enable_queues()
	octeontx2-af: fixed resource availability check
	octeontx2-af: fix lbk link credits on cn10k
	RDMA/mlx5: Initiate dropless RQ for RAW Ethernet functions
	RDMA/cma: Always set static rate to 0 for RoCE
	IB/uverbs: Fix to consider event queue closing also upon non-blocking mode
	IB/isert: Fix dead lock in ib_isert
	IB/isert: Fix possible list corruption in CMA handler
	IB/isert: Fix incorrect release of isert connection
	net: ethtool: correct MAX attribute value for stats
	ipvlan: fix bound dev checking for IPv6 l3s mode
	sctp: fix an error code in sctp_sf_eat_auth()
	igc: Clean the TX buffer and TX descriptor ring
	igb: fix nvm.ops.read() error handling
	drm/nouveau: don't detect DSM for non-NVIDIA device
	drm/nouveau/dp: check for NULL nv_connector->native_mode
	drm/nouveau: add nv_encoder pointer check for NULL
	cifs: fix lease break oops in xfstest generic/098
	ext4: drop the call to ext4_error() from ext4_get_group_info()
	net/sched: cls_api: Fix lockup on flushing explicitly created chain
	net: lapbether: only support ethernet devices
	dm: don't lock fs when the map is NULL during suspend or resume
	net: tipc: resize nlattr array to correct size
	selftests/ptp: Fix timestamp printf format for PTP_SYS_OFFSET
	afs: Fix vlserver probe RTT handling
	cgroup: always put cset in cgroup_css_set_put_fork
	rcu/kvfree: Avoid freeing new kfree_rcu() memory after old grace period
	neighbour: Remove unused inline function neigh_key_eq16()
	net: Remove unused inline function dst_hold_and_use()
	net: Remove DECnet leftovers from flow.h.
	neighbour: delete neigh_lookup_nodev as not used
	of: overlay: add entry to of_overlay_action_name[]
	mmc: block: ensure error propagation for non-blk
	nilfs2: reject devices with insufficient block count
	Linux 5.15.118

Change-Id: I6c577a46faade097c6e1962115117421e0c14a59
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -920,10 +920,6 @@
 	debugpat	[X86] Enable PAT debugging
 
-	decnet.addr=	[HW,NET]
-			Format: <area>[,<node>]
-			See also Documentation/networking/decnet.rst.
-
 	default_hugepagesz=
 			[HW] The size of the default HugeTLB page. This is
 			the size represented by the legacy /proc/ hugepages
@@ -34,13 +34,14 @@ Table : Subdirectories in /proc/sys/net
 ========= =================== = ========== ==================
 Directory Content             Directory  Content
 ========= =================== = ========== ==================
-core      General parameter   appletalk  Appletalk protocol
-unix      Unix domain sockets netrom     NET/ROM
-802       E802 protocol       ax25       AX25
-ethernet  Ethernet protocol   rose       X.25 PLP layer
-ipv4      IP version 4        x25        X.25 protocol
-bridge    Bridging            decnet     DEC net
-ipv6      IP version 6        tipc       TIPC
+802       E802 protocol       mptcp      Multipath TCP
+appletalk Appletalk protocol  netfilter  Network Filter
+ax25      AX25                netrom     NET/ROM
+bridge    Bridging            rose       X.25 PLP layer
+core      General parameter   tipc       TIPC
+ethernet  Ethernet protocol   unix       Unix domain sockets
+ipv4      IP version 4        x25        X.25 protocol
+ipv6      IP version 6
 ========= =================== = ========== ==================
 
 1. /proc/sys/net/core - Network core options
@@ -119,10 +119,32 @@ Finally, if you need to remove all overlays in one-go, just call
 of_overlay_remove_all() which will remove every single one in the correct
 order.
 
-In addition, there is the option to register notifiers that get called on
+There is the option to register notifiers that get called on
 overlay operations. See of_overlay_notifier_register/unregister and
 enum of_overlay_notify_action for details.
 
-Note that a notifier callback is not supposed to store pointers to a device
-tree node or its content beyond OF_OVERLAY_POST_REMOVE corresponding to the
-respective node it received.
+A notifier callback for OF_OVERLAY_PRE_APPLY, OF_OVERLAY_POST_APPLY, or
+OF_OVERLAY_PRE_REMOVE may store pointers to a device tree node in the overlay
+or its content but these pointers must not persist past the notifier callback
+for OF_OVERLAY_POST_REMOVE. The memory containing the overlay will be
+kfree()ed after OF_OVERLAY_POST_REMOVE notifiers are called. Note that the
+memory will be kfree()ed even if the notifier for OF_OVERLAY_POST_REMOVE
+returns an error.
+
+The changeset notifiers in drivers/of/dynamic.c are a second type of notifier
+that could be triggered by applying or removing an overlay. These notifiers
+are not allowed to store pointers to a device tree node in the overlay
+or its content. The overlay code does not protect against such pointers
+remaining active when the memory containing the overlay is freed as a result
+of removing the overlay.
+
+Any other code that retains a pointer to the overlay nodes or data is
+considered to be a bug because after removing the overlay the pointer
+will refer to freed memory.
+
+Users of overlays must be especially aware of the overall operations that
+occur on the system to ensure that other kernel code does not retain any
+pointers to the overlay nodes or data. Any example of an inadvertent use
+of such pointers is if a driver or subsystem module is loaded after an
+overlay has been applied, and the driver or subsystem scans the entire
+devicetree or a large portion of it, including the overlay nodes.
@@ -1,243 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0
-
-=========================================
-Linux DECnet Networking Layer Information
-=========================================
-
-1. Other documentation....
-==========================
-
-   - Project Home Pages
-     - http://www.chygwyn.com/ - Kernel info
-     - http://linux-decnet.sourceforge.net/ - Userland tools
-     - http://www.sourceforge.net/projects/linux-decnet/ - Status page
-
-2. Configuring the kernel
-=========================
-
-Be sure to turn on the following options:
-
- - CONFIG_DECNET (obviously)
- - CONFIG_PROC_FS (to see what's going on)
- - CONFIG_SYSCTL (for easy configuration)
-
-if you want to try out router support (not properly debugged yet)
-you'll need the following options as well...
-
- - CONFIG_DECNET_ROUTER (to be able to add/delete routes)
- - CONFIG_NETFILTER (will be required for the DECnet routing daemon)
-
-Don't turn on SIOCGIFCONF support for DECnet unless you are really sure
-that you need it, in general you won't and it can cause ifconfig to
-malfunction.
-
-Run time configuration has changed slightly from the 2.4 system. If you
-want to configure an endnode, then the simplified procedure is as follows:
-
- - Set the MAC address on your ethernet card before starting _any_ other
-   network protocols.
-
-As soon as your network card is brought into the UP state, DECnet should
-start working. If you need something more complicated or are unsure how
-to set the MAC address, see the next section. Also all configurations which
-worked with 2.4 will work under 2.5 with no change.
-
-3. Command line options
-=======================
-
-You can set a DECnet address on the kernel command line for compatibility
-with the 2.4 configuration procedure, but in general it's not needed any more.
-If you do st a DECnet address on the command line, it has only one purpose
-which is that its added to the addresses on the loopback device.
-
-With 2.4 kernels, DECnet would only recognise addresses as local if they
-were added to the loopback device. In 2.5, any local interface address
-can be used to loop back to the local machine. Of course this does not
-prevent you adding further addresses to the loopback device if you
-want to.
-
-N.B. Since the address list of an interface determines the addresses for
-which "hello" messages are sent, if you don't set an address on the loopback
-interface then you won't see any entries in /proc/net/neigh for the local
-host until such time as you start a connection. This doesn't affect the
-operation of the local communications in any other way though.
-
-The kernel command line takes options looking like the following::
-
-    decnet.addr=1,2
-
-the two numbers are the node address 1,2 = 1.2 For 2.2.xx kernels
-and early 2.3.xx kernels, you must use a comma when specifying the
-DECnet address like this. For more recent 2.3.xx kernels, you may
-use almost any character except space, although a `.` would be the most
-obvious choice :-)
-
-There used to be a third number specifying the node type. This option
-has gone away in favour of a per interface node type. This is now set
-using /proc/sys/net/decnet/conf/<dev>/forwarding. This file can be
-set with a single digit, 0=EndNode, 1=L1 Router and  2=L2 Router.
-
-There are also equivalent options for modules. The node address can
-also be set through the /proc/sys/net/decnet/ files, as can other system
-parameters.
-
-Currently the only supported devices are ethernet and ip_gre. The
-ethernet address of your ethernet card has to be set according to the DECnet
-address of the node in order for it to be autoconfigured (and then appear in
-/proc/net/decnet_dev). There is a utility available at the above
-FTP sites called dn2ethaddr which can compute the correct ethernet
-address to use. The address can be set by ifconfig either before or
-at the time the device is brought up. If you are using RedHat you can
-add the line::
-
-    MACADDR=AA:00:04:00:03:04
-
-or something similar, to /etc/sysconfig/network-scripts/ifcfg-eth0 or
-wherever your network card's configuration lives. Setting the MAC address
-of your ethernet card to an address starting with "hi-ord" will cause a
-DECnet address which matches to be added to the interface (which you can
-verify with iproute2).
-
-The default device for routing can be set through the /proc filesystem
-by setting /proc/sys/net/decnet/default_device to the
-device you want DECnet to route packets out of when no specific route
-is available. Usually this will be eth0, for example::
-
-    echo -n "eth0" >/proc/sys/net/decnet/default_device
-
-If you don't set the default device, then it will default to the first
-ethernet card which has been autoconfigured as described above. You can
-confirm that by looking in the default_device file of course.
-
-There is a list of what the other files under /proc/sys/net/decnet/ do
-on the kernel patch web site (shown above).
-
-4. Run time kernel configuration
-================================
-
-
-This is either done through the sysctl/proc interface (see the kernel web
-pages for details on what the various options do) or through the iproute2
-package in the same way as IPv4/6 configuration is performed.
-
-Documentation for iproute2 is included with the package, although there is
-as yet no specific section on DECnet, most of the features apply to both
-IP and DECnet, albeit with DECnet addresses instead of IP addresses and
-a reduced functionality.
-
-If you want to configure a DECnet router you'll need the iproute2 package
-since its the _only_ way to add and delete routes currently. Eventually
-there will be a routing daemon to send and receive routing messages for
-each interface and update the kernel routing tables accordingly. The
-routing daemon will use netfilter to listen to routing packets, and
-rtnetlink to update the kernels routing tables.
-
-The DECnet raw socket layer has been removed since it was there purely
-for use by the routing daemon which will now use netfilter (a much cleaner
-and more generic solution) instead.
-
-5. How can I tell if its working?
-=================================
-
-Here is a quick guide of what to look for in order to know if your DECnet
-kernel subsystem is working.
-
- - Is the node address set (see /proc/sys/net/decnet/node_address)
- - Is the node of the correct type
-   (see /proc/sys/net/decnet/conf/<dev>/forwarding)
- - Is the Ethernet MAC address of each Ethernet card set to match
-   the DECnet address. If in doubt use the dn2ethaddr utility available
-   at the ftp archive.
- - If the previous two steps are satisfied, and the Ethernet card is up,
-   you should find that it is listed in /proc/net/decnet_dev and also
-   that it appears as a directory in /proc/sys/net/decnet/conf/. The
-   loopback device (lo) should also appear and is required to communicate
-   within a node.
- - If you have any DECnet routers on your network, they should appear
-   in /proc/net/decnet_neigh, otherwise this file will only contain the
-   entry for the node itself (if it doesn't check to see if lo is up).
- - If you want to send to any node which is not listed in the
-   /proc/net/decnet_neigh file, you'll need to set the default device
-   to point to an Ethernet card with connection to a router. This is
-   again done with the /proc/sys/net/decnet/default_device file.
- - Try starting a simple server and client, like the dnping/dnmirror
-   over the loopback interface. With luck they should communicate.
-   For this step and those after, you'll need the DECnet library
-   which can be obtained from the above ftp sites as well as the
-   actual utilities themselves.
- - If this seems to work, then try talking to a node on your local
-   network, and see if you can obtain the same results.
- - At this point you are on your own... :-)
-
-6. How to send a bug report
-===========================
-
-If you've found a bug and want to report it, then there are several things
-you can do to help me work out exactly what it is that is wrong. Useful
-information (_most_ of which _is_ _essential_) includes:
-
- - What kernel version are you running ?
- - What version of the patch are you running ?
- - How far though the above set of tests can you get ?
- - What is in the /proc/decnet* files and /proc/sys/net/decnet/* files ?
- - Which services are you running ?
- - Which client caused the problem ?
- - How much data was being transferred ?
- - Was the network congested ?
- - How can the problem be reproduced ?
- - Can you use tcpdump to get a trace ? (N.B. Most (all?) versions of
-   tcpdump don't understand how to dump DECnet properly, so including
-   the hex listing of the packet contents is _essential_, usually the -x flag.
-   You may also need to increase the length grabbed with the -s flag. The
-   -e flag also provides very useful information (ethernet MAC addresses))
-
-7. MAC FAQ
-==========
-
-A quick FAQ on ethernet MAC addresses to explain how Linux and DECnet
-interact and how to get the best performance from your hardware.
-
-Ethernet cards are designed to normally only pass received network frames
-to a host computer when they are addressed to it, or to the broadcast address.
-
-Linux has an interface which allows the setting of extra addresses for
-an ethernet card to listen to. If the ethernet card supports it, the
-filtering operation will be done in hardware, if not the extra unwanted packets
-received will be discarded by the host computer. In the latter case,
-significant processor time and bus bandwidth can be used up on a busy
-network (see the NAPI documentation for a longer explanation of these
-effects).
-
-DECnet makes use of this interface to allow running DECnet on an ethernet
-card which has already been configured using TCP/IP (presumably using the
-built in MAC address of the card, as usual) and/or to allow multiple DECnet
-addresses on each physical interface. If you do this, be aware that if your
-ethernet card doesn't support perfect hashing in its MAC address filter
-then your computer will be doing more work than required. Some cards
-will simply set themselves into promiscuous mode in order to receive
-packets from the DECnet specified addresses. So if you have one of these
-cards its better to set the MAC address of the card as described above
-to gain the best efficiency. Better still is to use a card which supports
-NAPI as well.
-
-
-8. Mailing list
-===============
-
-If you are keen to get involved in development, or want to ask questions
-about configuration, or even just report bugs, then there is a mailing
-list that you can join, details are at:
-
-http://sourceforge.net/mail/?group_id=4993
-
-9. Legal Info
-=============
-
-The Linux DECnet project team have placed their code under the GPL. The
-software is provided "as is" and without warranty express or implied.
-DECnet is a trademark of Compaq. This software is not a product of
-Compaq. We acknowledge the help of people at Compaq in providing extra
-documentation above and beyond what was previously publicly available.
-
-Steve Whitehouse <SteveW@ACM.org>
@@ -46,7 +46,6 @@ Contents:
    cdc_mbim
    dccp
    dctcp
-   decnet
    dns_resolver
    driver
    eql
@@ -304,7 +304,6 @@ Code  Seq#    Include File                                           Comments
 0x89  00-06  arch/x86/include/asm/sockios.h
 0x89  0B-DF  linux/sockios.h
 0x89  E0-EF  linux/sockios.h                                         SIOCPROTOPRIVATE range
-0x89  E0-EF  linux/dn.h                                              PROTOPRIVATE range
 0x89  F0-FF  linux/sockios.h                                         SIOCDEVPRIVATE range
 0x8B  all    linux/wireless.h
 0x8C  00-3F                                                          WiNRADiO driver
@@ -5205,13 +5205,6 @@ F: include/linux/tfrc.h
 F:	include/uapi/linux/dccp.h
 F:	net/dccp/
 
-DECnet NETWORK LAYER
-L:	linux-decnet-user@lists.sourceforge.net
-S:	Orphan
-W:	http://linux-decnet.sourceforge.net
-F:	Documentation/networking/decnet.rst
-F:	net/decnet/
-
 DECSTATION PLATFORM SUPPORT
 M:	"Maciej W. Rozycki" <macro@orcam.me.uk>
 L:	linux-mips@vger.kernel.org
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 15
-SUBLEVEL = 117
+SUBLEVEL = 118
 EXTRAVERSION =
 NAME = Trick or Treat
 
@@ -132,6 +132,7 @@
 			reg = <0x2c0f0000 0x1000>;
 			interrupts = <0 84 4>;
 			cache-level = <2>;
+			cache-unified;
 		};
 
 		pmu {
@@ -81,6 +81,7 @@ config MIPS
 	select HAVE_LD_DEAD_CODE_DATA_ELIMINATION
 	select HAVE_MOD_ARCH_SPECIFIC
 	select HAVE_NMI
+	select HAVE_PATA_PLATFORM
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
@@ -30,6 +30,7 @@
  *
  */
 
+#include <linux/dma-map-ops.h> /* for dma_default_coherent */
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
@@ -623,17 +624,18 @@ u32 au1xxx_dbdma_put_source(u32 chanid, dma_addr_t buf, int nbytes, u32 flags)
 		dp->dscr_cmd0 &= ~DSCR_CMD0_IE;
 
 	/*
-	 * There is an errata on the Au1200/Au1550 parts that could result
-	 * in "stale" data being DMA'ed. It has to do with the snoop logic on
-	 * the cache eviction buffer. DMA_NONCOHERENT is on by default for
-	 * these parts. If it is fixed in the future, these dma_cache_inv will
-	 * just be nothing more than empty macros. See io.h.
+	 * There is an erratum on certain Au1200/Au1550 revisions that could
+	 * result in "stale" data being DMA'ed. It has to do with the snoop
+	 * logic on the cache eviction buffer. dma_default_coherent is set
+	 * to false on these parts.
 	 */
-	dma_cache_wback_inv((unsigned long)buf, nbytes);
+	if (!dma_default_coherent)
+		dma_cache_wback_inv(KSEG0ADDR(buf), nbytes);
 	dp->dscr_cmd0 |= DSCR_CMD0_V;	/* Let it rip */
 	wmb(); /* drain writebuffer */
 	dma_cache_wback_inv((unsigned long)dp, sizeof(*dp));
 	ctp->chan_ptr->ddma_dbell = 0;
+	wmb(); /* force doorbell write out to dma engine */
 
 	/* Get next descriptor pointer. */
 	ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr));
@@ -685,17 +687,18 @@ u32 au1xxx_dbdma_put_dest(u32 chanid, dma_addr_t buf, int nbytes, u32 flags)
 	       dp->dscr_source1, dp->dscr_dest0, dp->dscr_dest1);
 #endif
 	/*
-	 * There is an errata on the Au1200/Au1550 parts that could result in
-	 * "stale" data being DMA'ed. It has to do with the snoop logic on the
-	 * cache eviction buffer. DMA_NONCOHERENT is on by default for these
-	 * parts. If it is fixed in the future, these dma_cache_inv will just
-	 * be nothing more than empty macros. See io.h.
+	 * There is an erratum on certain Au1200/Au1550 revisions that could
+	 * result in "stale" data being DMA'ed. It has to do with the snoop
+	 * logic on the cache eviction buffer. dma_default_coherent is set
+	 * to false on these parts.
 	 */
-	dma_cache_inv((unsigned long)buf, nbytes);
+	if (!dma_default_coherent)
+		dma_cache_inv(KSEG0ADDR(buf), nbytes);
 	dp->dscr_cmd0 |= DSCR_CMD0_V;	/* Let it rip */
 	wmb(); /* drain writebuffer */
 	dma_cache_wback_inv((unsigned long)dp, sizeof(*dp));
 	ctp->chan_ptr->ddma_dbell = 0;
+	wmb(); /* force doorbell write out to dma engine */
 
 	/* Get next descriptor pointer. */
 	ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr));
@@ -53,8 +53,6 @@ CONFIG_IPV6_SUBTREES=y
 CONFIG_NETWORK_SECMARK=y
 CONFIG_IP_SCTP=m
 CONFIG_VLAN_8021Q=m
-CONFIG_DECNET=m
-CONFIG_DECNET_ROUTER=y
 # CONFIG_WIRELESS is not set
 # CONFIG_UEVENT_HELPER is not set
 # CONFIG_FW_LOADER is not set
@@ -49,8 +49,6 @@ CONFIG_IPV6_SUBTREES=y
 CONFIG_NETWORK_SECMARK=y
 CONFIG_IP_SCTP=m
 CONFIG_VLAN_8021Q=m
-CONFIG_DECNET=m
-CONFIG_DECNET_ROUTER=y
 # CONFIG_WIRELESS is not set
 # CONFIG_UEVENT_HELPER is not set
 # CONFIG_FW_LOADER is not set
@@ -48,8 +48,6 @@ CONFIG_IPV6_SUBTREES=y
 CONFIG_NETWORK_SECMARK=y
 CONFIG_IP_SCTP=m
 CONFIG_VLAN_8021Q=m
-CONFIG_DECNET=m
-CONFIG_DECNET_ROUTER=y
 # CONFIG_WIRELESS is not set
 # CONFIG_UEVENT_HELPER is not set
 # CONFIG_FW_LOADER is not set
@@ -69,7 +69,6 @@ CONFIG_IP_NF_RAW=m
 CONFIG_IP_NF_ARPTABLES=m
 CONFIG_IP_NF_ARPFILTER=m
 CONFIG_IP_NF_ARP_MANGLE=m
-CONFIG_DECNET_NF_GRABULATOR=m
 CONFIG_BRIDGE_NF_EBTABLES=m
 CONFIG_BRIDGE_EBT_BROUTE=m
 CONFIG_BRIDGE_EBT_T_FILTER=m
@@ -99,7 +98,6 @@ CONFIG_ATM_MPOA=m
|
|||||||
CONFIG_ATM_BR2684=m
|
CONFIG_ATM_BR2684=m
|
||||||
CONFIG_BRIDGE=m
|
CONFIG_BRIDGE=m
|
||||||
CONFIG_VLAN_8021Q=m
|
CONFIG_VLAN_8021Q=m
|
||||||
CONFIG_DECNET=m
|
|
||||||
CONFIG_LLC2=m
|
CONFIG_LLC2=m
|
||||||
CONFIG_ATALK=m
|
CONFIG_ATALK=m
|
||||||
CONFIG_DEV_APPLETALK=m
|
CONFIG_DEV_APPLETALK=m
|
||||||
|
|||||||
@@ -116,7 +116,6 @@ CONFIG_IP6_NF_FILTER=m
 CONFIG_IP6_NF_TARGET_REJECT=m
 CONFIG_IP6_NF_MANGLE=m
 CONFIG_IP6_NF_RAW=m
-CONFIG_DECNET_NF_GRABULATOR=m
 CONFIG_BRIDGE_NF_EBTABLES=m
 CONFIG_BRIDGE_EBT_BROUTE=m
 CONFIG_BRIDGE_EBT_T_FILTER=m
@@ -146,7 +145,6 @@ CONFIG_ATM_MPOA=m
 CONFIG_ATM_BR2684=m
 CONFIG_BRIDGE=m
 CONFIG_VLAN_8021Q=m
-CONFIG_DECNET=m
 CONFIG_LLC2=m
 CONFIG_ATALK=m
 CONFIG_DEV_APPLETALK=m
@@ -200,7 +200,6 @@ CONFIG_IP6_NF_TARGET_REJECT=m
 CONFIG_IP6_NF_MANGLE=m
 CONFIG_IP6_NF_RAW=m
 CONFIG_IP6_NF_SECURITY=m
-CONFIG_DECNET_NF_GRABULATOR=m
 CONFIG_BRIDGE_NF_EBTABLES=m
 CONFIG_BRIDGE_EBT_BROUTE=m
 CONFIG_BRIDGE_EBT_T_FILTER=m
@@ -234,7 +233,6 @@ CONFIG_ATM_BR2684=m
 CONFIG_BRIDGE=m
 CONFIG_VLAN_8021Q=m
 CONFIG_VLAN_8021Q_GVRP=y
-CONFIG_DECNET=m
 CONFIG_LLC2=m
 CONFIG_ATALK=m
 CONFIG_DEV_APPLETALK=m
@@ -198,7 +198,6 @@ CONFIG_IP6_NF_TARGET_REJECT=m
 CONFIG_IP6_NF_MANGLE=m
 CONFIG_IP6_NF_RAW=m
 CONFIG_IP6_NF_SECURITY=m
-CONFIG_DECNET_NF_GRABULATOR=m
 CONFIG_BRIDGE_NF_EBTABLES=m
 CONFIG_BRIDGE_EBT_BROUTE=m
 CONFIG_BRIDGE_EBT_T_FILTER=m
@@ -232,7 +231,6 @@ CONFIG_ATM_BR2684=m
 CONFIG_BRIDGE=m
 CONFIG_VLAN_8021Q=m
 CONFIG_VLAN_8021Q_GVRP=y
-CONFIG_DECNET=m
 CONFIG_LLC2=m
 CONFIG_ATALK=m
 CONFIG_DEV_APPLETALK=m
@@ -116,7 +116,6 @@ CONFIG_IP6_NF_FILTER=m
 CONFIG_IP6_NF_TARGET_REJECT=m
 CONFIG_IP6_NF_MANGLE=m
 CONFIG_IP6_NF_RAW=m
-CONFIG_DECNET_NF_GRABULATOR=m
 CONFIG_BRIDGE_NF_EBTABLES=m
 CONFIG_BRIDGE_EBT_BROUTE=m
 CONFIG_BRIDGE_EBT_T_FILTER=m
@@ -137,7 +136,6 @@ CONFIG_BRIDGE_EBT_REDIRECT=m
 CONFIG_BRIDGE_EBT_SNAT=m
 CONFIG_BRIDGE_EBT_LOG=m
 CONFIG_BRIDGE=m
-CONFIG_DECNET=m
 CONFIG_NET_SCHED=y
 CONFIG_NET_SCH_CBQ=m
 CONFIG_NET_SCH_HTB=m
@@ -156,10 +156,6 @@ static unsigned long __init init_initrd(void)
 		pr_err("initrd start must be page aligned\n");
 		goto disable;
 	}
-	if (initrd_start < PAGE_OFFSET) {
-		pr_err("initrd start < PAGE_OFFSET\n");
-		goto disable;
-	}
 
 	/*
 	 * Sanitize initrd addresses. For example firmware
@@ -172,6 +168,11 @@ static unsigned long __init init_initrd(void)
 	initrd_end = (unsigned long)__va(end);
 	initrd_start = (unsigned long)__va(__pa(initrd_start));
 
+	if (initrd_start < PAGE_OFFSET) {
+		pr_err("initrd start < PAGE_OFFSET\n");
+		goto disable;
+	}
+
 	ROOT_DEV = Root_RAM0;
 	return PFN_UP(end);
 disable:
@@ -97,7 +97,7 @@
 			rx-fifo-depth = <8192>;
 			tx-fifo-depth = <8192>;
 			address-bits = <48>;
-			max-frame-size = <1518>;
+			max-frame-size = <1500>;
 			local-mac-address = [00 00 00 00 00 00];
 			altr,has-supplementary-unicast;
 			altr,enable-sup-addr = <1>;
@@ -106,7 +106,7 @@
 			interrupt-names = "rx_irq", "tx_irq";
 			rx-fifo-depth = <8192>;
 			tx-fifo-depth = <8192>;
-			max-frame-size = <1518>;
+			max-frame-size = <1500>;
 			local-mac-address = [ 00 00 00 00 00 00 ];
 			phy-mode = "rgmii-id";
 			phy-handle = <&phy0>;
@@ -446,11 +446,27 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir)
 {
+	/*
+	 * fdc: The data cache line is written back to memory, if and only if
+	 * it is dirty, and then invalidated from the data cache.
+	 */
 	flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
 }
 
 void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir)
 {
-	flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
+	unsigned long addr = (unsigned long) phys_to_virt(paddr);
+
+	switch (dir) {
+	case DMA_TO_DEVICE:
+	case DMA_BIDIRECTIONAL:
+		flush_kernel_dcache_range(addr, size);
+		return;
+	case DMA_FROM_DEVICE:
+		purge_kernel_dcache_range_asm(addr, addr + size);
+		return;
+	default:
+		BUG();
+	}
 }
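The parisc hunk above makes the CPU-side DMA sync direction-aware: writeback-and-invalidate when the CPU may hold dirty lines, invalidate-only when the device wrote the buffer. A minimal host-side sketch of just that dispatch (the cache operations themselves are stubbed out as strings; the enum is a stand-in for the kernel's `dma_data_direction`):

```c
#include <string.h>

/* Illustrative stand-in for the kernel's enum dma_data_direction. */
enum dma_data_direction { DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE };

/* Which maintenance op the new arch_sync_dma_for_cpu() picks:
 * "flush" = writeback + invalidate (flush_kernel_dcache_range),
 * "purge" = invalidate only (purge_kernel_dcache_range_asm), since
 * CPU copies are stale after the device wrote to memory. */
static const char *sync_for_cpu_op(enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_TO_DEVICE:
	case DMA_BIDIRECTIONAL:
		return "flush";
	case DMA_FROM_DEVICE:
		return "purge";
	}
	return "bug";	/* the kernel code BUG()s here */
}
```

The key win is on `DMA_FROM_DEVICE`: purging instead of flushing avoids writing stale CPU lines back over data the device just DMA'd in.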
@@ -243,8 +243,6 @@ CONFIG_ATM_LANE=m
 CONFIG_ATM_BR2684=m
 CONFIG_BRIDGE=m
 CONFIG_VLAN_8021Q=m
-CONFIG_DECNET=m
-CONFIG_DECNET_ROUTER=y
 CONFIG_ATALK=m
 CONFIG_DEV_APPLETALK=m
 CONFIG_IPDDP=m
@@ -4,6 +4,11 @@ KASAN_SANITIZE := n
 
 targets += trampoline_$(BITS).o purgatory.ro kexec-purgatory.c
 
+# When profile-guided optimization is enabled, llvm emits two different
+# overlapping text sections, which is not supported by kexec. Remove profile
+# optimization flags.
+KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS))
+
 LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined
 
 $(obj)/purgatory.ro: $(obj)/trampoline_$(BITS).o FORCE
@@ -14,6 +14,11 @@ $(obj)/sha256.o: $(srctree)/lib/crypto/sha256.c FORCE
 
 CFLAGS_sha256.o := -D__DISABLE_EXPORTS
 
+# When profile-guided optimization is enabled, llvm emits two different
+# overlapping text sections, which is not supported by kexec. Remove profile
+# optimization flags.
+KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS))
+
 # When linking purgatory.ro with -r unresolved symbols are not checked,
 # also link a purgatory.chk binary without -r to check for unresolved symbols.
 PURGATORY_LDFLAGS := -e purgatory_start -nostdlib -z nodefaultlib
@@ -780,7 +780,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 	ring_req->u.rw.handle = info->handle;
 	ring_req->operation = rq_data_dir(req) ?
 		BLKIF_OP_WRITE : BLKIF_OP_READ;
-	if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
+	if (req_op(req) == REQ_OP_FLUSH ||
+	    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
 		/*
 		 * Ideally we can do an unordered flush-to-disk.
 		 * In case the backend onlysupports barriers, use that.
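The xen-blkfront hunk above narrows the barrier/flush path so that REQ_FUA is only honoured on writes. A minimal sketch of the before/after predicate, with the op codes and flag value invented purely for illustration (the real definitions live in the kernel's blk_types.h):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's request op codes and flags. */
enum req_op { REQ_OP_READ = 0, REQ_OP_WRITE = 1, REQ_OP_FLUSH = 2 };
#define REQ_FUA (1u << 0)

/* Before the fix: a stray REQ_FUA flag on a read also took the
 * flush/barrier path. */
static bool needs_barrier_old(enum req_op op, unsigned int cmd_flags)
{
	return op == REQ_OP_FLUSH || (cmd_flags & REQ_FUA);
}

/* After the fix: FUA is only meaningful for writes. */
static bool needs_barrier_new(enum req_op op, unsigned int cmd_flags)
{
	return op == REQ_OP_FLUSH ||
	       (op == REQ_OP_WRITE && (cmd_flags & REQ_FUA));
}
```

The behavioural difference is exactly one case: a read carrying REQ_FUA no longer gets turned into a flush/barrier request.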
@@ -90,6 +90,9 @@ parisc_agp_tlbflush(struct agp_memory *mem)
 {
 	struct _parisc_agp_info *info = &parisc_agp_info;
 
+	/* force fdc ops to be visible to IOMMU */
+	asm_io_sync();
+
 	writeq(info->gart_base | ilog2(info->gart_size), info->ioc_regs+IOC_PCOM);
 	readq(info->ioc_regs+IOC_PCOM);	/* flush */
 }
@@ -158,6 +161,7 @@ parisc_agp_insert_memory(struct agp_memory *mem, off_t pg_start, int type)
 			info->gatt[j] =
 				parisc_agp_mask_memory(agp_bridge,
 					paddr, type);
+			asm_io_fdc(&info->gatt[j]);
 		}
 	}
 
@@ -191,7 +195,16 @@ static unsigned long
 parisc_agp_mask_memory(struct agp_bridge_data *bridge, dma_addr_t addr,
 		       int type)
 {
-	return SBA_PDIR_VALID_BIT | addr;
+	unsigned ci;			/* coherent index */
+	dma_addr_t pa;
+
+	pa = addr & IOVP_MASK;
+	asm("lci 0(%1), %0" : "=r" (ci) : "r" (phys_to_virt(pa)));
+
+	pa |= (ci >> PAGE_SHIFT) & 0xff;/* move CI (8 bits) into lowest byte */
+	pa |= SBA_PDIR_VALID_BIT;	/* set "valid" bit */
+
+	return cpu_to_le64(pa);
 }
 
 static void
@@ -1557,6 +1557,7 @@ static const u16 amdgpu_unsupported_pciidlist[] = {
 	0x5874,
 	0x5940,
 	0x5941,
+	0x5b70,
 	0x5b72,
 	0x5b73,
 	0x5b74,
@@ -78,9 +78,10 @@ static void amdgpu_bo_user_destroy(struct ttm_buffer_object *tbo)
 static void amdgpu_bo_vm_destroy(struct ttm_buffer_object *tbo)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(tbo->bdev);
-	struct amdgpu_bo *bo = ttm_to_amdgpu_bo(tbo);
+	struct amdgpu_bo *shadow_bo = ttm_to_amdgpu_bo(tbo), *bo;
 	struct amdgpu_bo_vm *vmbo;
 
+	bo = shadow_bo->parent;
 	vmbo = to_amdgpu_bo_vm(bo);
 	/* in case amdgpu_device_recover_vram got NULL of bo->parent */
 	if (!list_empty(&vmbo->shadow_list)) {
@@ -690,7 +691,6 @@ int amdgpu_bo_create_vm(struct amdgpu_device *adev,
 		return r;
 
 	*vmbo_ptr = to_amdgpu_bo_vm(bo_ptr);
-	INIT_LIST_HEAD(&(*vmbo_ptr)->shadow_list);
 	return r;
 }
 
@@ -741,6 +741,8 @@ void amdgpu_bo_add_to_shadow_list(struct amdgpu_bo_vm *vmbo)
 
 	mutex_lock(&adev->shadow_list_lock);
 	list_add_tail(&vmbo->shadow_list, &adev->shadow_list);
+	vmbo->shadow->parent = amdgpu_bo_ref(&vmbo->bo);
+	vmbo->shadow->tbo.destroy = &amdgpu_bo_vm_destroy;
 	mutex_unlock(&adev->shadow_list_lock);
 }
 
@@ -983,7 +983,6 @@ static int amdgpu_vm_pt_create(struct amdgpu_device *adev,
 		return r;
 	}
 
-	(*vmbo)->shadow->parent = amdgpu_bo_ref(bo);
 	amdgpu_bo_add_to_shadow_list(*vmbo);
 
 	return 0;
@@ -7197,8 +7197,10 @@ static int gfx_v10_0_kiq_resume(struct amdgpu_device *adev)
 		return r;
 
 	r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
-	if (unlikely(r != 0))
+	if (unlikely(r != 0)) {
+		amdgpu_bo_unreserve(ring->mqd_obj);
 		return r;
+	}
 
 	gfx_v10_0_kiq_init_queue(ring);
 	amdgpu_bo_kunmap(ring->mqd_obj);
@@ -3871,8 +3871,10 @@ static int gfx_v9_0_kiq_resume(struct amdgpu_device *adev)
 		return r;
 
 	r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
-	if (unlikely(r != 0))
+	if (unlikely(r != 0)) {
+		amdgpu_bo_unreserve(ring->mqd_obj);
 		return r;
+	}
 
 	gfx_v9_0_kiq_init_queue(ring);
 	amdgpu_bo_kunmap(ring->mqd_obj);
@@ -8177,7 +8177,13 @@ static int amdgpu_dm_connector_get_modes(struct drm_connector *connector)
 		drm_add_modes_noedid(connector, 640, 480);
 	} else {
 		amdgpu_dm_connector_ddc_get_modes(connector, edid);
-		amdgpu_dm_connector_add_common_modes(encoder, connector);
+		/* most eDP supports only timings from its edid,
+		 * usually only detailed timings are available
+		 * from eDP edid. timings which are not from edid
+		 * may damage eDP
+		 */
+		if (connector->connector_type != DRM_MODE_CONNECTOR_eDP)
+			amdgpu_dm_connector_add_common_modes(encoder, connector);
 		amdgpu_dm_connector_add_freesync_modes(connector, edid);
 	}
 	amdgpu_dm_fbc_init(connector);
@@ -220,6 +220,9 @@ static void nouveau_dsm_pci_probe(struct pci_dev *pdev, acpi_handle *dhandle_out
 	int optimus_funcs;
 	struct pci_dev *parent_pdev;
 
+	if (pdev->vendor != PCI_VENDOR_ID_NVIDIA)
+		return;
+
 	*has_pr3 = false;
 	parent_pdev = pci_upstream_bridge(pdev);
 	if (parent_pdev) {
@@ -729,7 +729,8 @@ out:
 #endif
 
 	nouveau_connector_set_edid(nv_connector, edid);
-	nouveau_connector_set_encoder(connector, nv_encoder);
+	if (nv_encoder)
+		nouveau_connector_set_encoder(connector, nv_encoder);
 	return status;
 }
 
@@ -965,7 +966,7 @@ nouveau_connector_get_modes(struct drm_connector *connector)
 	/* Determine display colour depth for everything except LVDS now,
 	 * DP requires this before mode_valid() is called.
 	 */
-	if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS)
+	if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode)
 		nouveau_connector_detect_depth(connector);
 
 	/* Find the native mode if this is a digital panel, if we didn't
@@ -986,7 +987,7 @@ nouveau_connector_get_modes(struct drm_connector *connector)
 	 * "native" mode as some VBIOS tables require us to use the
 	 * pixel clock as part of the lookup...
 	 */
-	if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS)
+	if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode)
 		nouveau_connector_detect_depth(connector);
 
 	if (nv_encoder->dcb->type == DCB_OUTPUT_TV)
@@ -126,10 +126,16 @@ nouveau_name(struct drm_device *dev)
 static inline bool
 nouveau_cli_work_ready(struct dma_fence *fence)
 {
-	if (!dma_fence_is_signaled(fence))
-		return false;
-	dma_fence_put(fence);
-	return true;
+	bool ret = true;
+
+	spin_lock_irq(fence->lock);
+	if (!dma_fence_is_signaled_locked(fence))
+		ret = false;
+	spin_unlock_irq(fence->lock);
+
+	if (ret == true)
+		dma_fence_put(fence);
+	return ret;
 }
 
 static void
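The nouveau hunk above closes a race by doing the "is it signaled?" test under the fence lock rather than with the unlocked helper. A toy user-space model of the same shape, with a pthread mutex standing in for `fence->lock` and a plain counter standing in for the fence reference (all names here are invented for the sketch):

```c
#include <pthread.h>
#include <stdbool.h>

/* Toy fence: lock protects the signaled flag, as fence->lock does
 * for the dma_fence signal state in the kernel. */
struct toy_fence {
	pthread_mutex_t lock;
	bool signaled;
	int refcount;
};

/* Mirrors the fixed nouveau_cli_work_ready(): test under the lock,
 * then drop the reference ("dma_fence_put") only on success. */
static bool toy_work_ready(struct toy_fence *f)
{
	bool ret = true;

	pthread_mutex_lock(&f->lock);
	if (!f->signaled)
		ret = false;
	pthread_mutex_unlock(&f->lock);

	if (ret)
		f->refcount--;	/* dma_fence_put() */
	return ret;
}
```

The point of the fix is that the signaled check and the client-work list manipulation that follows it can no longer interleave with the signal-time callback, which previously could free the work item while it was still being examined.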
@@ -3113,7 +3113,7 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
 	route->path_rec->traffic_class = tos;
 	route->path_rec->mtu = iboe_get_mtu(ndev->mtu);
 	route->path_rec->rate_selector = IB_SA_EQ;
-	route->path_rec->rate = iboe_get_rate(ndev);
+	route->path_rec->rate = IB_RATE_PORT_CURRENT;
 	dev_put(ndev);
 	route->path_rec->packet_life_time_selector = IB_SA_EQ;
 	/* In case ACK timeout is set, use this value to calculate
@@ -4770,7 +4770,7 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv,
 	if (!ndev)
 		return -ENODEV;
 
-	ib.rec.rate = iboe_get_rate(ndev);
+	ib.rec.rate = IB_RATE_PORT_CURRENT;
 	ib.rec.hop_limit = 1;
 	ib.rec.mtu = iboe_get_mtu(ndev->mtu);
 
@@ -1851,8 +1851,13 @@ static int modify_qp(struct uverbs_attr_bundle *attrs,
 		attr->path_mtu = cmd->base.path_mtu;
 	if (cmd->base.attr_mask & IB_QP_PATH_MIG_STATE)
 		attr->path_mig_state = cmd->base.path_mig_state;
-	if (cmd->base.attr_mask & IB_QP_QKEY)
+	if (cmd->base.attr_mask & IB_QP_QKEY) {
+		if (cmd->base.qkey & IB_QP_SET_QKEY && !capable(CAP_NET_RAW)) {
+			ret = -EPERM;
+			goto release_qp;
+		}
 		attr->qkey = cmd->base.qkey;
+	}
 	if (cmd->base.attr_mask & IB_QP_RQ_PSN)
 		attr->rq_psn = cmd->base.rq_psn;
 	if (cmd->base.attr_mask & IB_QP_SQ_PSN)
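The uverbs hunk above ("RDMA/uverbs: Restrict usage of privileged QKEYs") rejects setting a privileged Q_Key unless the caller has CAP_NET_RAW. A tiny sketch of the rule: privileged Q_Keys are the ones with the most-significant bit set, and the mask value below is an assumption for illustration (the kernel's own `IB_QP_SET_QKEY` definition is authoritative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed mask for a "controlled"/privileged Q_Key (MSB set). */
#define TOY_PRIVILEGED_QKEY_MASK 0x80000000u

/* Mirrors the shape of the new modify_qp() check: a privileged qkey
 * is only allowed when the caller holds CAP_NET_RAW (passed in here
 * instead of calling capable()). */
static bool qkey_allowed(uint32_t qkey, bool cap_net_raw)
{
	return !(qkey & TOY_PRIVILEGED_QKEY_MASK) || cap_net_raw;
}
```

Unprivileged Q_Keys remain usable by any process; only the MSB-set range now requires the capability, which matches how the GSI QP's well-known Q_Key is protected.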
@@ -222,8 +222,12 @@ static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_queue *ev_queue,
 	spin_lock_irq(&ev_queue->lock);
 
 	while (list_empty(&ev_queue->event_list)) {
-		spin_unlock_irq(&ev_queue->lock);
+		if (ev_queue->is_closed) {
+			spin_unlock_irq(&ev_queue->lock);
+			return -EIO;
+		}
 
+		spin_unlock_irq(&ev_queue->lock);
 		if (filp->f_flags & O_NONBLOCK)
 			return -EAGAIN;
 
@@ -233,12 +237,6 @@ static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_queue *ev_queue,
 			return -ERESTARTSYS;
 
 		spin_lock_irq(&ev_queue->lock);
-
-		/* If device was disassociated and no event exists set an error */
-		if (list_empty(&ev_queue->event_list) && ev_queue->is_closed) {
-			spin_unlock_irq(&ev_queue->lock);
-			return -EIO;
-		}
 	}
 
 	event = list_entry(ev_queue->event_list.next, struct ib_uverbs_event, list);
@@ -4376,6 +4376,9 @@ const struct mlx5_ib_profile raw_eth_profile = {
 	STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
 		     mlx5_ib_stage_post_ib_reg_umr_init,
 		     NULL),
+	STAGE_CREATE(MLX5_IB_STAGE_DELAY_DROP,
+		     mlx5_ib_stage_delay_drop_init,
+		     mlx5_ib_stage_delay_drop_cleanup),
 	STAGE_CREATE(MLX5_IB_STAGE_RESTRACK,
 		     mlx5_ib_restrack_init,
 		     NULL),
@@ -179,6 +179,9 @@ static int rxe_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
 	pkt->mask = RXE_GRH_MASK;
 	pkt->paylen = be16_to_cpu(udph->len) - sizeof(*udph);
 
+	/* remove udp header */
+	skb_pull(skb, sizeof(struct udphdr));
+
 	rxe_rcv(skb);
 
 	return 0;
@@ -419,6 +422,9 @@ static int rxe_loopback(struct sk_buff *skb, struct rxe_pkt_info *pkt)
 		return -EIO;
 	}
 
+	/* remove udp header */
+	skb_pull(skb, sizeof(struct udphdr));
+
 	rxe_rcv(skb);
 
 	return 0;
@@ -203,6 +203,9 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
 	spin_lock_init(&qp->rq.producer_lock);
 	spin_lock_init(&qp->rq.consumer_lock);
 
+	skb_queue_head_init(&qp->req_pkts);
+	skb_queue_head_init(&qp->resp_pkts);
+
 	atomic_set(&qp->ssn, 0);
 	atomic_set(&qp->skb_out, 0);
 }
@@ -263,12 +266,8 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 	qp->req.opcode		= -1;
 	qp->comp.opcode		= -1;
 
-	skb_queue_head_init(&qp->req_pkts);
-
-	rxe_init_task(rxe, &qp->req.task, qp,
-		      rxe_requester, "req");
-	rxe_init_task(rxe, &qp->comp.task, qp,
-		      rxe_completer, "comp");
+	rxe_init_task(&qp->req.task, qp, rxe_requester);
+	rxe_init_task(&qp->comp.task, qp, rxe_completer);
 
 	qp->qp_timeout_jiffies = 0; /* Can't be set for UD/UC in modify_qp */
 	if (init->qp_type == IB_QPT_RC) {
@@ -313,10 +312,7 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
 		}
 	}
 
-	skb_queue_head_init(&qp->resp_pkts);
-
-	rxe_init_task(rxe, &qp->resp.task, qp,
-		      rxe_responder, "resp");
+	rxe_init_task(&qp->resp.task, qp, rxe_responder);
 
 	qp->resp.opcode		= OPCODE_NONE;
 	qp->resp.msn		= 0;
@@ -95,13 +95,10 @@ void rxe_do_task(struct tasklet_struct *t)
 	task->ret = ret;
 }
 
-int rxe_init_task(void *obj, struct rxe_task *task,
-		  void *arg, int (*func)(void *), char *name)
+int rxe_init_task(struct rxe_task *task, void *arg, int (*func)(void *))
 {
-	task->obj	= obj;
 	task->arg	= arg;
 	task->func	= func;
-	snprintf(task->name, sizeof(task->name), "%s", name);
 	task->destroyed	= false;
 
 	tasklet_setup(&task->tasklet, rxe_do_task);
@@ -19,14 +19,12 @@ enum {
  * called again.
  */
 struct rxe_task {
-	void			*obj;
 	struct tasklet_struct	tasklet;
 	int			state;
 	spinlock_t		state_lock; /* spinlock for task state */
 	void			*arg;
 	int			(*func)(void *arg);
 	int			ret;
-	char			name[16];
 	bool			destroyed;
 };
 
@@ -35,8 +33,7 @@ struct rxe_task {
  *	arg  => parameter to pass to fcn
  *	func => function to call until it returns != 0
  */
-int rxe_init_task(void *obj, struct rxe_task *task,
-		  void *arg, int (*func)(void *), char *name);
+int rxe_init_task(struct rxe_task *task, void *arg, int (*func)(void *));
 
 /* cleanup task */
 void rxe_cleanup_task(struct rxe_task *task);
@@ -656,9 +656,13 @@ static int
|
|||||||
isert_connect_error(struct rdma_cm_id *cma_id)
|
isert_connect_error(struct rdma_cm_id *cma_id)
|
||||||
{
|
{
|
||||||
struct isert_conn *isert_conn = cma_id->qp->qp_context;
|
struct isert_conn *isert_conn = cma_id->qp->qp_context;
|
||||||
|
struct isert_np *isert_np = cma_id->context;
|
||||||
|
|
||||||
ib_drain_qp(isert_conn->qp);
|
ib_drain_qp(isert_conn->qp);
|
||||||
|
|
||||||
|
mutex_lock(&isert_np->mutex);
|
||||||
list_del_init(&isert_conn->node);
|
list_del_init(&isert_conn->node);
|
||||||
|
mutex_unlock(&isert_np->mutex);
|
||||||
isert_conn->cm_id = NULL;
|
isert_conn->cm_id = NULL;
|
||||||
isert_put_conn(isert_conn);
|
isert_put_conn(isert_conn);
|
||||||
|
|
||||||
@@ -2431,6 +2435,7 @@ isert_free_np(struct iscsi_np *np)
|
|||||||
{
|
{
|
||||||
struct isert_np *isert_np = np->np_context;
|
struct isert_np *isert_np = np->np_context;
|
||||||
struct isert_conn *isert_conn, *n;
|
struct isert_conn *isert_conn, *n;
|
||||||
|
LIST_HEAD(drop_conn_list);
|
||||||
|
|
||||||
if (isert_np->cm_id)
|
if (isert_np->cm_id)
|
||||||
rdma_destroy_id(isert_np->cm_id);
|
rdma_destroy_id(isert_np->cm_id);
|
||||||
@@ -2450,7 +2455,7 @@ isert_free_np(struct iscsi_np *np)
|
|||||||
node) {
|
node) {
|
||||||
isert_info("cleaning isert_conn %p state (%d)\n",
|
isert_info("cleaning isert_conn %p state (%d)\n",
|
||||||
isert_conn, isert_conn->state);
|
isert_conn, isert_conn->state);
|
||||||
isert_connect_release(isert_conn);
|
list_move_tail(&isert_conn->node, &drop_conn_list);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -2461,11 +2466,16 @@ isert_free_np(struct iscsi_np *np)
|
|||||||
node) {
|
node) {
|
||||||
isert_info("cleaning isert_conn %p state (%d)\n",
|
isert_info("cleaning isert_conn %p state (%d)\n",
|
||||||
isert_conn, isert_conn->state);
|
isert_conn, isert_conn->state);
|
||||||
isert_connect_release(isert_conn);
|
list_move_tail(&isert_conn->node, &drop_conn_list);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
mutex_unlock(&isert_np->mutex);
|
mutex_unlock(&isert_np->mutex);
|
||||||
|
|
||||||
|
list_for_each_entry_safe(isert_conn, n, &drop_conn_list, node) {
|
||||||
|
list_del_init(&isert_conn->node);
|
||||||
|
isert_connect_release(isert_conn);
|
||||||
|
}
|
||||||
|
|
||||||
np->np_context = NULL;
|
np->np_context = NULL;
|
||||||
kfree(isert_np);
|
kfree(isert_np);
|
||||||
}
|
}
|
||||||
@@ -2560,8 +2570,6 @@ static void isert_wait_conn(struct iscsi_conn *conn)
|
|||||||
isert_put_unsol_pending_cmds(conn);
|
isert_put_unsol_pending_cmds(conn);
|
||||||
isert_wait4cmds(conn);
|
isert_wait4cmds(conn);
|
||||||
isert_wait4logout(isert_conn);
|
isert_wait4logout(isert_conn);
|
||||||
|
|
||||||
queue_work(isert_release_wq, &isert_conn->release_work);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static void isert_free_conn(struct iscsi_conn *conn)
|
static void isert_free_conn(struct iscsi_conn *conn)
|
||||||
|
|||||||
@@ -2028,6 +2028,7 @@ static int rtrs_clt_rdma_cm_handler(struct rdma_cm_id *cm_id,
|
|||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/* The caller should do the cleanup in case of error */
|
||||||
static int create_cm(struct rtrs_clt_con *con)
|
static int create_cm(struct rtrs_clt_con *con)
|
||||||
{
|
{
|
||||||
struct rtrs_path *s = con->c.path;
|
struct rtrs_path *s = con->c.path;
|
||||||
@@ -2050,14 +2051,14 @@ static int create_cm(struct rtrs_clt_con *con)
 	err = rdma_set_reuseaddr(cm_id, 1);
 	if (err != 0) {
 		rtrs_err(s, "Set address reuse failed, err: %d\n", err);
-		goto destroy_cm;
+		return err;
 	}
 	err = rdma_resolve_addr(cm_id, (struct sockaddr *)&clt_path->s.src_addr,
 				(struct sockaddr *)&clt_path->s.dst_addr,
 				RTRS_CONNECT_TIMEOUT_MS);
 	if (err) {
 		rtrs_err(s, "Failed to resolve address, err: %d\n", err);
-		goto destroy_cm;
+		return err;
 	}
 	/*
 	 * Combine connection status and session events. This is needed
@@ -2072,29 +2073,15 @@ static int create_cm(struct rtrs_clt_con *con)
 		if (err == 0)
 			err = -ETIMEDOUT;
 		/* Timedout or interrupted */
-		goto errr;
-	}
-	if (con->cm_err < 0) {
-		err = con->cm_err;
-		goto errr;
-	}
-	if (READ_ONCE(clt_path->state) != RTRS_CLT_CONNECTING) {
+		return err;
+	}
+	if (con->cm_err < 0)
+		return con->cm_err;
+	if (READ_ONCE(clt_path->state) != RTRS_CLT_CONNECTING)
 		/* Device removal */
-		err = -ECONNABORTED;
-		goto errr;
-	}
+		return -ECONNABORTED;
 
 	return 0;
-
-errr:
-	stop_cm(con);
-	mutex_lock(&con->con_mutex);
-	destroy_con_cq_qp(con);
-	mutex_unlock(&con->con_mutex);
-destroy_cm:
-	destroy_cm(con);
-
-	return err;
 }
 
 static void rtrs_clt_path_up(struct rtrs_clt_path *clt_path)
@@ -2331,7 +2318,7 @@ static void rtrs_clt_close_work(struct work_struct *work)
 static int init_conns(struct rtrs_clt_path *clt_path)
 {
 	unsigned int cid;
-	int err;
+	int err, i;
 
 	/*
 	 * On every new session connections increase reconnect counter
@@ -2347,10 +2334,8 @@ static int init_conns(struct rtrs_clt_path *clt_path)
 			goto destroy;
 
 		err = create_cm(to_clt_con(clt_path->s.con[cid]));
-		if (err) {
-			destroy_con(to_clt_con(clt_path->s.con[cid]));
+		if (err)
 			goto destroy;
-		}
 	}
 	err = alloc_path_reqs(clt_path);
 	if (err)
@@ -2361,15 +2346,21 @@ static int init_conns(struct rtrs_clt_path *clt_path)
 	return 0;
 
 destroy:
-	while (cid--) {
-		struct rtrs_clt_con *con = to_clt_con(clt_path->s.con[cid]);
+	/* Make sure we do the cleanup in the order they are created */
+	for (i = 0; i <= cid; i++) {
+		struct rtrs_clt_con *con;
 
-		stop_cm(con);
+		if (!clt_path->s.con[i])
+			break;
 
-		mutex_lock(&con->con_mutex);
-		destroy_con_cq_qp(con);
-		mutex_unlock(&con->con_mutex);
-		destroy_cm(con);
+		con = to_clt_con(clt_path->s.con[i]);
+		if (con->c.cm_id) {
+			stop_cm(con);
+			mutex_lock(&con->con_mutex);
+			destroy_con_cq_qp(con);
+			mutex_unlock(&con->con_mutex);
+			destroy_cm(con);
+		}
 		destroy_con(con);
 	}
 	/*
@@ -37,8 +37,10 @@ struct rtrs_iu *rtrs_iu_alloc(u32 iu_num, size_t size, gfp_t gfp_mask,
 			goto err;
 
 		iu->dma_addr = ib_dma_map_single(dma_dev, iu->buf, size, dir);
-		if (ib_dma_mapping_error(dma_dev, iu->dma_addr))
+		if (ib_dma_mapping_error(dma_dev, iu->dma_addr)) {
+			kfree(iu->buf);
 			goto err;
+		}
 
 		iu->cqe.done = done;
 		iu->size = size;
@@ -16,7 +16,13 @@ void gic_enable_of_quirks(const struct device_node *np,
 			  const struct gic_quirk *quirks, void *data)
 {
 	for (; quirks->desc; quirks++) {
-		if (!of_device_is_compatible(np, quirks->compatible))
+		if (!quirks->compatible && !quirks->property)
+			continue;
+		if (quirks->compatible &&
+		    !of_device_is_compatible(np, quirks->compatible))
+			continue;
+		if (quirks->property &&
+		    !of_property_read_bool(np, quirks->property))
 			continue;
 		if (quirks->init(data))
 			pr_info("GIC: enabling workaround for %s\n",
@@ -28,7 +34,7 @@ void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks,
 		       void *data)
 {
 	for (; quirks->desc; quirks++) {
-		if (quirks->compatible)
+		if (quirks->compatible || quirks->property)
 			continue;
 		if (quirks->iidr != (quirks->mask & iidr))
 			continue;
@@ -13,6 +13,7 @@
 struct gic_quirk {
 	const char *desc;
 	const char *compatible;
+	const char *property;
 	bool (*init)(void *data);
 	u32 iidr;
 	u32 mask;
@@ -41,6 +41,7 @@
 
 #define FLAGS_WORKAROUND_GICR_WAKER_MSM8996	(1ULL << 0)
 #define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539	(1ULL << 1)
+#define FLAGS_WORKAROUND_MTK_GICR_SAVE		(1ULL << 2)
 
 #define GIC_IRQ_TYPE_PARTITION	(GIC_IRQ_TYPE_LPI + 1)
 
@@ -1679,6 +1680,15 @@ static bool gic_enable_quirk_msm8996(void *data)
 	return true;
 }
 
+static bool gic_enable_quirk_mtk_gicr(void *data)
+{
+	struct gic_chip_data *d = data;
+
+	d->flags |= FLAGS_WORKAROUND_MTK_GICR_SAVE;
+
+	return true;
+}
+
 static bool gic_enable_quirk_cavium_38539(void *data)
 {
 	struct gic_chip_data *d = data;
@@ -1714,6 +1724,11 @@ static const struct gic_quirk gic_quirks[] = {
 		.compatible = "qcom,msm8996-gic-v3",
 		.init	= gic_enable_quirk_msm8996,
 	},
+	{
+		.desc	= "GICv3: Mediatek Chromebook GICR save problem",
+		.property = "mediatek,broken-save-restore-fw",
+		.init	= gic_enable_quirk_mtk_gicr,
+	},
 	{
 		.desc	= "GICv3: HIP06 erratum 161010803",
 		.iidr	= 0x0204043b,
@@ -1750,6 +1765,11 @@ static void gic_enable_nmi_support(void)
 	if (!gic_prio_masking_enabled())
 		return;
 
+	if (gic_data.flags & FLAGS_WORKAROUND_MTK_GICR_SAVE) {
+		pr_warn("Skipping NMI enable due to firmware issues\n");
+		return;
+	}
+
 	ppi_nmi_refs = kcalloc(gic_data.ppi_nr, sizeof(*ppi_nmi_refs), GFP_KERNEL);
 	if (!ppi_nmi_refs)
 		return;
@@ -1145,13 +1145,10 @@ static int do_resume(struct dm_ioctl *param)
 	/* Do we need to load a new map ? */
 	if (new_map) {
 		sector_t old_size, new_size;
-		int srcu_idx;
 
 		/* Suspend if it isn't already suspended */
-		old_map = dm_get_live_table(md, &srcu_idx);
-		if ((param->flags & DM_SKIP_LOCKFS_FLAG) || !old_map)
+		if (param->flags & DM_SKIP_LOCKFS_FLAG)
 			suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
-		dm_put_live_table(md, srcu_idx);
 		if (param->flags & DM_NOFLUSH_FLAG)
 			suspend_flags |= DM_SUSPEND_NOFLUSH_FLAG;
 		if (!dm_suspended_md(md))
@@ -1778,13 +1778,15 @@ int dm_thin_remove_range(struct dm_thin_device *td,
 
 int dm_pool_block_is_shared(struct dm_pool_metadata *pmd, dm_block_t b, bool *result)
 {
-	int r;
+	int r = -EINVAL;
 	uint32_t ref_count;
 
 	down_read(&pmd->root_lock);
-	r = dm_sm_get_count(pmd->data_sm, b, &ref_count);
-	if (!r)
-		*result = (ref_count > 1);
+	if (!pmd->fail_io) {
+		r = dm_sm_get_count(pmd->data_sm, b, &ref_count);
+		if (!r)
+			*result = (ref_count > 1);
+	}
 	up_read(&pmd->root_lock);
 
 	return r;
@@ -1792,10 +1794,11 @@ int dm_pool_block_is_shared(struct dm_pool_metadata *pmd, dm_block_t b, bool *result)
 
 int dm_pool_inc_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_t e)
 {
-	int r = 0;
+	int r = -EINVAL;
 
 	pmd_write_lock(pmd);
-	r = dm_sm_inc_blocks(pmd->data_sm, b, e);
+	if (!pmd->fail_io)
+		r = dm_sm_inc_blocks(pmd->data_sm, b, e);
 	pmd_write_unlock(pmd);
 
 	return r;
@@ -1803,10 +1806,11 @@ int dm_pool_inc_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_t e)
 
 int dm_pool_dec_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_t e)
 {
-	int r = 0;
+	int r = -EINVAL;
 
 	pmd_write_lock(pmd);
-	r = dm_sm_dec_blocks(pmd->data_sm, b, e);
+	if (!pmd->fail_io)
+		r = dm_sm_dec_blocks(pmd->data_sm, b, e);
 	pmd_write_unlock(pmd);
 
 	return r;
@@ -2526,6 +2526,10 @@ retry:
 	}
 
 	map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
+	if (!map) {
+		/* avoid deadlock with fs/namespace.c:do_mount() */
+		suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
+	}
 
 	r = __dm_suspend(md, map, suspend_flags, TASK_INTERRUPTIBLE, DMF_SUSPENDED);
 	if (r)
@@ -267,6 +267,7 @@ static ssize_t power_ro_lock_store(struct device *dev,
 		goto out_put;
 	}
 	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_BOOT_WP;
+	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
 	blk_execute_rq(NULL, req, 0);
 	ret = req_to_mmc_queue_req(req)->drv_op_result;
 	blk_put_request(req);
@@ -658,6 +659,7 @@ static int mmc_blk_ioctl_cmd(struct mmc_blk_data *md,
 	idatas[0] = idata;
 	req_to_mmc_queue_req(req)->drv_op =
 		rpmb ? MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL;
+	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
 	req_to_mmc_queue_req(req)->drv_op_data = idatas;
 	req_to_mmc_queue_req(req)->ioc_count = 1;
 	blk_execute_rq(NULL, req, 0);
@@ -727,6 +729,7 @@ static int mmc_blk_ioctl_multi_cmd(struct mmc_blk_data *md,
 	}
 	req_to_mmc_queue_req(req)->drv_op =
 		rpmb ? MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL;
+	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
 	req_to_mmc_queue_req(req)->drv_op_data = idata;
 	req_to_mmc_queue_req(req)->ioc_count = num_of_cmds;
 	blk_execute_rq(NULL, req, 0);
@@ -2789,6 +2792,7 @@ static int mmc_dbg_card_status_get(void *data, u64 *val)
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_CARD_STATUS;
+	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
 	blk_execute_rq(NULL, req, 0);
 	ret = req_to_mmc_queue_req(req)->drv_op_result;
 	if (ret >= 0) {
@@ -2827,6 +2831,7 @@ static int mmc_ext_csd_open(struct inode *inode, struct file *filp)
 		goto out_free;
 	}
 	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_EXT_CSD;
+	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
 	req_to_mmc_queue_req(req)->drv_op_data = &ext_csd;
 	blk_execute_rq(NULL, req, 0);
 	err = req_to_mmc_queue_req(req)->drv_op_result;
@@ -197,8 +197,8 @@ int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data)
 	int bw_sum = 0;
 	u8 bw;
 
-	prio_top = netdev_get_prio_tc_map(ndev, tc_nums - 1);
-	prio_next = netdev_get_prio_tc_map(ndev, tc_nums - 2);
+	prio_top = tc_nums - 1;
+	prio_next = tc_nums - 2;
 
 	/* Support highest prio and second prio tc in cbs mode */
 	if (tc != prio_top && tc != prio_next)
@@ -461,7 +461,7 @@
 void iavf_update_stats(struct iavf_adapter *adapter);
 void iavf_reset_interrupt_capability(struct iavf_adapter *adapter);
 int iavf_init_interrupt_scheme(struct iavf_adapter *adapter);
-void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask);
+void iavf_irq_enable_queues(struct iavf_adapter *adapter);
 void iavf_free_all_tx_resources(struct iavf_adapter *adapter);
 void iavf_free_all_rx_resources(struct iavf_adapter *adapter);
 
@@ -253,21 +253,18 @@ static void iavf_irq_disable(struct iavf_adapter *adapter)
 }
 
 /**
- * iavf_irq_enable_queues - Enable interrupt for specified queues
+ * iavf_irq_enable_queues - Enable interrupt for all queues
  * @adapter: board private structure
- * @mask: bitmap of queues to enable
 **/
-void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask)
+void iavf_irq_enable_queues(struct iavf_adapter *adapter)
 {
 	struct iavf_hw *hw = &adapter->hw;
 	int i;
 
 	for (i = 1; i < adapter->num_msix_vectors; i++) {
-		if (mask & BIT(i - 1)) {
-			wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1),
-			     IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
-			     IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK);
-		}
+		wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1),
+		     IAVF_VFINT_DYN_CTLN1_INTENA_MASK |
+		     IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK);
 	}
 }
 
@@ -281,7 +278,7 @@ void iavf_irq_enable(struct iavf_adapter *adapter, bool flush)
 	struct iavf_hw *hw = &adapter->hw;
 
 	iavf_misc_irq_enable(adapter);
-	iavf_irq_enable_queues(adapter, ~0);
+	iavf_irq_enable_queues(adapter);
 
 	if (flush)
 		iavf_flush(hw);
@@ -40,7 +40,7 @@
 #define IAVF_VFINT_DYN_CTL01_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTL01_INTENA_SHIFT)
 #define IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT 3
 #define IAVF_VFINT_DYN_CTL01_ITR_INDX_MASK IAVF_MASK(0x3, IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT)
-#define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
+#define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...63 */ /* Reset: VFR */
 #define IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT 0
 #define IAVF_VFINT_DYN_CTLN1_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT)
 #define IAVF_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2
@@ -822,6 +822,8 @@ static int igb_set_eeprom(struct net_device *netdev,
 		 */
 		ret_val = hw->nvm.ops.read(hw, last_word, 1,
 				   &eeprom_buff[last_word - first_word]);
+		if (ret_val)
+			goto out;
 	}
 
 	/* Device's eeprom is always little-endian, word addressable */
@@ -841,6 +843,7 @@ static int igb_set_eeprom(struct net_device *netdev,
 		hw->nvm.ops.update(hw);
 
 	igb_set_fw_version(adapter);
+out:
 	kfree(eeprom_buff);
 	return ret_val;
 }
@@ -254,6 +254,13 @@ static void igc_clean_tx_ring(struct igc_ring *tx_ring)
 	/* reset BQL for queue */
 	netdev_tx_reset_queue(txring_txq(tx_ring));
 
+	/* Zero out the buffer ring */
+	memset(tx_ring->tx_buffer_info, 0,
+	       sizeof(*tx_ring->tx_buffer_info) * tx_ring->count);
+
+	/* Zero out the descriptor ring */
+	memset(tx_ring->desc, 0, tx_ring->size);
+
 	/* reset next_to_use and next_to_clean */
 	tx_ring->next_to_use = 0;
 	tx_ring->next_to_clean = 0;
@@ -267,7 +274,7 @@ static void igc_clean_tx_ring(struct igc_ring *tx_ring)
  */
 void igc_free_tx_resources(struct igc_ring *tx_ring)
 {
-	igc_clean_tx_ring(tx_ring);
+	igc_disable_tx_ring(tx_ring);
 
 	vfree(tx_ring->tx_buffer_info);
 	tx_ring->tx_buffer_info = NULL;
@@ -1885,7 +1885,8 @@ static int nix_check_txschq_alloc_req(struct rvu *rvu, int lvl, u16 pcifunc,
 		free_cnt = rvu_rsrc_free_count(&txsch->schq);
 	}
 
-	if (free_cnt < req_schq || req_schq > MAX_TXSCHQ_PER_FUNC)
+	if (free_cnt < req_schq || req->schq[lvl] > MAX_TXSCHQ_PER_FUNC ||
+	    req->schq_contig[lvl] > MAX_TXSCHQ_PER_FUNC)
 		return NIX_AF_ERR_TLX_ALLOC_FAIL;
 
 	/* If contiguous queues are needed, check for availability */
@@ -4066,10 +4067,6 @@ int rvu_mbox_handler_nix_set_rx_cfg(struct rvu *rvu, struct nix_rx_cfg *req,
 
 static u64 rvu_get_lbk_link_credits(struct rvu *rvu, u16 lbk_max_frs)
 {
-	/* CN10k supports 72KB FIFO size and max packet size of 64k */
-	if (rvu->hw->lbk_bufsize == 0x12000)
-		return (rvu->hw->lbk_bufsize - lbk_max_frs) / 16;
-
 	return 1600; /* 16 * max LBK datarate = 16 * 100Gbps */
 }
 
@@ -102,6 +102,10 @@ static unsigned int ipvlan_nf_input(void *priv, struct sk_buff *skb,
 
 	skb->dev = addr->master->dev;
 	skb->skb_iif = skb->dev->ifindex;
+#if IS_ENABLED(CONFIG_IPV6)
+	if (addr->atype == IPVL_IPV6)
+		IP6CB(skb)->iif = skb->dev->ifindex;
+#endif
 	len = skb->len + ETH_HLEN;
 	ipvlan_count_rx(addr->master, len, true, false);
 out:
@@ -1217,7 +1217,9 @@ static const struct usb_device_id products[] = {
 	{QMI_FIXED_INTF(0x05c6, 0x9080, 8)},
 	{QMI_FIXED_INTF(0x05c6, 0x9083, 3)},
 	{QMI_FIXED_INTF(0x05c6, 0x9084, 4)},
+	{QMI_QUIRK_SET_DTR(0x05c6, 0x9091, 2)},	/* Compal RXM-G1 */
 	{QMI_FIXED_INTF(0x05c6, 0x90b2, 3)},    /* ublox R410M */
+	{QMI_QUIRK_SET_DTR(0x05c6, 0x90db, 2)},	/* Compal RXM-G1 */
 	{QMI_FIXED_INTF(0x05c6, 0x920d, 0)},
 	{QMI_FIXED_INTF(0x05c6, 0x920d, 5)},
 	{QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)},	/* YUGA CLM920-NC5 */
@@ -384,6 +384,9 @@ static int lapbeth_new_device(struct net_device *dev)
 
 	ASSERT_RTNL();
 
+	if (dev->type != ARPHRD_ETHER)
+		return -EINVAL;
+
 	ndev = alloc_netdev(sizeof(*lapbeth), "lapb%d", NET_NAME_UNKNOWN,
 			    lapbeth_setup);
 	if (!ndev)
@@ -3390,6 +3390,8 @@ static const struct pci_device_id nvme_id_table[] = {
 		.driver_data = NVME_QUIRK_BOGUS_NID, },
 	{ PCI_DEVICE(0x1e4B, 0x1202),   /* MAXIO MAP1202 */
 		.driver_data = NVME_QUIRK_BOGUS_NID, },
+	{ PCI_DEVICE(0x1e4B, 0x1602),   /* MAXIO MAP1602 */
+		.driver_data = NVME_QUIRK_BOGUS_NID, },
 	{ PCI_DEVICE(0x1cc1, 0x5350),   /* ADATA XPG GAMMIX S50 */
 		.driver_data = NVME_QUIRK_BOGUS_NID, },
 	{ PCI_DEVICE(0x1e49, 0x0021),   /* ZHITAI TiPro5000 NVMe SSD */
@@ -57,8 +57,10 @@ struct fragment {
 * struct overlay_changeset
 * @id: changeset identifier
 * @ovcs_list: list on which we are located
-* @fdt: base of memory allocated to hold aligned FDT that was unflattened to create @overlay_tree
-* @overlay_tree: expanded device tree that contains the fragment nodes
+* @new_fdt: Memory allocated to hold unflattened aligned FDT
+* @overlay_mem: the memory chunk that contains @overlay_root
+* @overlay_root: expanded device tree that contains the fragment nodes
+* @notify_state: most recent notify action used on overlay
 * @count: count of fragment structures
 * @fragments: fragment nodes in the overlay expanded device tree
 * @symbols_fragment: last element of @fragments[] is the __symbols__ node
@@ -67,8 +69,10 @@ struct fragment {
 struct overlay_changeset {
 	int id;
 	struct list_head ovcs_list;
-	const void *fdt;
-	struct device_node *overlay_tree;
+	const void *new_fdt;
+	const void *overlay_mem;
+	struct device_node *overlay_root;
+	enum of_overlay_notify_action notify_state;
 	int count;
 	struct fragment *fragments;
 	bool symbols_fragment;
@@ -115,7 +119,6 @@ void of_overlay_mutex_unlock(void)
 	mutex_unlock(&of_overlay_phandle_mutex);
 }
 
-
 static LIST_HEAD(ovcs_list);
 static DEFINE_IDR(ovcs_idr);
 
@@ -149,19 +152,14 @@ int of_overlay_notifier_unregister(struct notifier_block *nb)
 }
 EXPORT_SYMBOL_GPL(of_overlay_notifier_unregister);
 
-static char *of_overlay_action_name[] = {
-	"pre-apply",
-	"post-apply",
-	"pre-remove",
-	"post-remove",
-};
-
 static int overlay_notify(struct overlay_changeset *ovcs,
 			  enum of_overlay_notify_action action)
 {
 	struct of_overlay_notify_data nd;
 	int i, ret;
 
+	ovcs->notify_state = action;
+
 	for (i = 0; i < ovcs->count; i++) {
 		struct fragment *fragment = &ovcs->fragments[i];
 
@@ -173,7 +171,7 @@ static int overlay_notify(struct overlay_changeset *ovcs,
 		if (notifier_to_errno(ret)) {
 			ret = notifier_to_errno(ret);
 			pr_err("overlay changeset %s notifier error %d, target: %pOF\n",
-			       of_overlay_action_name[action], ret, nd.target);
+			       of_overlay_action_name(action), ret, nd.target);
 			return ret;
 		}
 	}
@@ -183,7 +181,7 @@ static int overlay_notify(struct overlay_changeset *ovcs,
 
 /*
  * The values of properties in the "/__symbols__" node are paths in
- * the ovcs->overlay_tree. When duplicating the properties, the paths
+ * the ovcs->overlay_root. When duplicating the properties, the paths
  * need to be adjusted to be the correct path for the live device tree.
  *
  * The paths refer to a node in the subtree of a fragment node's "__overlay__"
@@ -219,7 +217,7 @@ static struct property *dup_and_fixup_symbol_prop(
 
 	if (path_len < 1)
 		return NULL;
-	fragment_node = __of_find_node_by_path(ovcs->overlay_tree, path + 1);
+	fragment_node = __of_find_node_by_path(ovcs->overlay_root, path + 1);
 	overlay_node = __of_find_node_by_path(fragment_node, "__overlay__/");
 	of_node_put(fragment_node);
 	of_node_put(overlay_node);
@@ -716,53 +714,50 @@ static struct device_node *find_target(struct device_node *info_node)
 
 /**
  * init_overlay_changeset() - initialize overlay changeset from overlay tree
  * @ovcs: Overlay changeset to build
- * @fdt: base of memory allocated to hold aligned FDT that was unflattened to create @tree
- * @tree: Contains the overlay fragments and overlay fixup nodes
 *
 * Initialize @ovcs. Populate @ovcs->fragments with node information from
- * the top level of @tree. The relevant top level nodes are the fragment
- * nodes and the __symbols__ node. Any other top level node will be ignored.
+ * the top level of @overlay_root. The relevant top level nodes are the
+ * fragment nodes and the __symbols__ node. Any other top level node will
+ * be ignored. Populate other @ovcs fields.
 *
 * Return: 0 on success, -ENOMEM if memory allocation failure, -EINVAL if error
- * detected in @tree, or -ENOSPC if idr_alloc() error.
+ * detected in @overlay_root. On error return, the caller of
+ * init_overlay_changeset() must call free_overlay_changeset().
 */
-static int init_overlay_changeset(struct overlay_changeset *ovcs,
-		const void *fdt, struct device_node *tree)
+static int init_overlay_changeset(struct overlay_changeset *ovcs)
 {
 	struct device_node *node, *overlay_node;
 	struct fragment *fragment;
 	struct fragment *fragments;
-	int cnt, id, ret;
+	int cnt, ret;
+
+	/*
+	 * None of the resources allocated by this function will be freed in
+	 * the error paths. Instead the caller of this function is required
+	 * to call free_overlay_changeset() (which will free the resources)
+	 * if error return.
+	 */
 
 	/*
 	 * Warn for some issues. Can not return -EINVAL for these until
 	 * of_unittest_apply_overlay() is fixed to pass these checks.
 	 */
-	if (!of_node_check_flag(tree, OF_DYNAMIC))
-		pr_debug("%s() tree is not dynamic\n", __func__);
+	if (!of_node_check_flag(ovcs->overlay_root, OF_DYNAMIC))
+		pr_debug("%s() ovcs->overlay_root is not dynamic\n", __func__);
 
-	if (!of_node_check_flag(tree, OF_DETACHED))
-		pr_debug("%s() tree is not detached\n", __func__);
+	if (!of_node_check_flag(ovcs->overlay_root, OF_DETACHED))
+		pr_debug("%s() ovcs->overlay_root is not detached\n", __func__);
 
-	if (!of_node_is_root(tree))
-		pr_debug("%s() tree is not root\n", __func__);
+	if (!of_node_is_root(ovcs->overlay_root))
+		pr_debug("%s() ovcs->overlay_root is not root\n", __func__);
 
-	ovcs->overlay_tree = tree;
-	ovcs->fdt = fdt;
-
 	INIT_LIST_HEAD(&ovcs->ovcs_list);
|
|
||||||
|
|
||||||
of_changeset_init(&ovcs->cset);
|
of_changeset_init(&ovcs->cset);
|
||||||
|
|
||||||
id = idr_alloc(&ovcs_idr, ovcs, 1, 0, GFP_KERNEL);
|
|
||||||
if (id <= 0)
|
|
||||||
return id;
|
|
||||||
|
|
||||||
cnt = 0;
|
cnt = 0;
|
||||||
|
|
||||||
/* fragment nodes */
|
/* fragment nodes */
|
||||||
for_each_child_of_node(tree, node) {
|
for_each_child_of_node(ovcs->overlay_root, node) {
|
||||||
overlay_node = of_get_child_by_name(node, "__overlay__");
|
overlay_node = of_get_child_by_name(node, "__overlay__");
|
||||||
if (overlay_node) {
|
if (overlay_node) {
|
||||||
cnt++;
|
cnt++;
|
||||||
@@ -770,7 +765,7 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs,
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
node = of_get_child_by_name(tree, "__symbols__");
|
node = of_get_child_by_name(ovcs->overlay_root, "__symbols__");
|
||||||
if (node) {
|
if (node) {
|
||||||
cnt++;
|
cnt++;
|
||||||
of_node_put(node);
|
of_node_put(node);
|
||||||
@@ -779,11 +774,12 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs,
|
|||||||
fragments = kcalloc(cnt, sizeof(*fragments), GFP_KERNEL);
|
fragments = kcalloc(cnt, sizeof(*fragments), GFP_KERNEL);
|
||||||
if (!fragments) {
|
if (!fragments) {
|
||||||
ret = -ENOMEM;
|
ret = -ENOMEM;
|
||||||
goto err_free_idr;
|
goto err_out;
|
||||||
}
|
}
|
||||||
|
ovcs->fragments = fragments;
|
||||||
|
|
||||||
cnt = 0;
|
cnt = 0;
|
||||||
for_each_child_of_node(tree, node) {
|
for_each_child_of_node(ovcs->overlay_root, node) {
|
||||||
overlay_node = of_get_child_by_name(node, "__overlay__");
|
overlay_node = of_get_child_by_name(node, "__overlay__");
|
||||||
if (!overlay_node)
|
if (!overlay_node)
|
||||||
continue;
|
continue;
|
||||||
@@ -795,7 +791,7 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs,
|
|||||||
of_node_put(fragment->overlay);
|
of_node_put(fragment->overlay);
|
||||||
ret = -EINVAL;
|
ret = -EINVAL;
|
||||||
of_node_put(node);
|
of_node_put(node);
|
||||||
goto err_free_fragments;
|
goto err_out;
|
||||||
}
|
}
|
||||||
|
|
||||||
cnt++;
|
cnt++;
|
||||||
@@ -805,7 +801,7 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs,
|
|||||||
* if there is a symbols fragment in ovcs->fragments[i] it is
|
* if there is a symbols fragment in ovcs->fragments[i] it is
|
||||||
* the final element in the array
|
* the final element in the array
|
||||||
*/
|
*/
|
||||||
node = of_get_child_by_name(tree, "__symbols__");
|
node = of_get_child_by_name(ovcs->overlay_root, "__symbols__");
|
||||||
if (node) {
|
if (node) {
|
||||||
ovcs->symbols_fragment = 1;
|
ovcs->symbols_fragment = 1;
|
||||||
fragment = &fragments[cnt];
|
fragment = &fragments[cnt];
|
||||||
@@ -815,7 +811,8 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs,
|
|||||||
if (!fragment->target) {
|
if (!fragment->target) {
|
||||||
pr_err("symbols in overlay, but not in live tree\n");
|
pr_err("symbols in overlay, but not in live tree\n");
|
||||||
ret = -EINVAL;
|
ret = -EINVAL;
|
||||||
goto err_free_fragments;
|
of_node_put(node);
|
||||||
|
goto err_out;
|
||||||
}
|
}
|
||||||
|
|
||||||
cnt++;
|
cnt++;
|
||||||
@@ -824,20 +821,14 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs,
|
|||||||
if (!cnt) {
|
if (!cnt) {
|
||||||
pr_err("no fragments or symbols in overlay\n");
|
pr_err("no fragments or symbols in overlay\n");
|
||||||
ret = -EINVAL;
|
ret = -EINVAL;
|
||||||
goto err_free_fragments;
|
goto err_out;
|
||||||
}
|
}
|
||||||
|
|
||||||
ovcs->id = id;
|
|
||||||
ovcs->count = cnt;
|
ovcs->count = cnt;
|
||||||
ovcs->fragments = fragments;
|
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
|
|
||||||
err_free_fragments:
|
err_out:
|
||||||
kfree(fragments);
|
|
||||||
err_free_idr:
|
|
||||||
idr_remove(&ovcs_idr, id);
|
|
||||||
|
|
||||||
pr_err("%s() failed, ret = %d\n", __func__, ret);
|
pr_err("%s() failed, ret = %d\n", __func__, ret);
|
||||||
|
|
||||||
return ret;
|
return ret;
|
||||||
@@ -850,21 +841,34 @@ static void free_overlay_changeset(struct overlay_changeset *ovcs)
|
|||||||
if (ovcs->cset.entries.next)
|
if (ovcs->cset.entries.next)
|
||||||
of_changeset_destroy(&ovcs->cset);
|
of_changeset_destroy(&ovcs->cset);
|
||||||
|
|
||||||
if (ovcs->id)
|
if (ovcs->id) {
|
||||||
idr_remove(&ovcs_idr, ovcs->id);
|
idr_remove(&ovcs_idr, ovcs->id);
|
||||||
|
list_del(&ovcs->ovcs_list);
|
||||||
|
ovcs->id = 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
for (i = 0; i < ovcs->count; i++) {
|
for (i = 0; i < ovcs->count; i++) {
|
||||||
of_node_put(ovcs->fragments[i].target);
|
of_node_put(ovcs->fragments[i].target);
|
||||||
of_node_put(ovcs->fragments[i].overlay);
|
of_node_put(ovcs->fragments[i].overlay);
|
||||||
}
|
}
|
||||||
kfree(ovcs->fragments);
|
kfree(ovcs->fragments);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* There should be no live pointers into ovcs->overlay_tree and
|
* There should be no live pointers into ovcs->overlay_mem and
|
||||||
* ovcs->fdt due to the policy that overlay notifiers are not allowed
|
* ovcs->new_fdt due to the policy that overlay notifiers are not
|
||||||
* to retain pointers into the overlay devicetree.
|
* allowed to retain pointers into the overlay devicetree other
|
||||||
|
* than during the window from OF_OVERLAY_PRE_APPLY overlay
|
||||||
|
* notifiers until the OF_OVERLAY_POST_REMOVE overlay notifiers.
|
||||||
|
*
|
||||||
|
* A memory leak will occur here if within the window.
|
||||||
*/
|
*/
|
||||||
kfree(ovcs->overlay_tree);
|
|
||||||
kfree(ovcs->fdt);
|
if (ovcs->notify_state == OF_OVERLAY_INIT ||
|
||||||
|
ovcs->notify_state == OF_OVERLAY_POST_REMOVE) {
|
||||||
|
kfree(ovcs->overlay_mem);
|
||||||
|
kfree(ovcs->new_fdt);
|
||||||
|
}
|
||||||
kfree(ovcs);
|
kfree(ovcs);
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -872,28 +876,13 @@ static void free_overlay_changeset(struct overlay_changeset *ovcs)
|
|||||||
* internal documentation
|
* internal documentation
|
||||||
*
|
*
|
||||||
* of_overlay_apply() - Create and apply an overlay changeset
|
* of_overlay_apply() - Create and apply an overlay changeset
|
||||||
* @fdt: base of memory allocated to hold the aligned FDT
|
* @ovcs: overlay changeset
|
||||||
* @tree: Expanded overlay device tree
|
|
||||||
* @ovcs_id: Pointer to overlay changeset id
|
|
||||||
*
|
*
|
||||||
* Creates and applies an overlay changeset.
|
* Creates and applies an overlay changeset.
|
||||||
*
|
*
|
||||||
* If an error occurs in a pre-apply notifier, then no changes are made
|
|
||||||
* to the device tree.
|
|
||||||
*
|
|
||||||
|
|
||||||
* A non-zero return value will not have created the changeset if error is from:
|
|
||||||
* - parameter checks
|
|
||||||
* - building the changeset
|
|
||||||
* - overlay changeset pre-apply notifier
|
|
||||||
*
|
|
||||||
* If an error is returned by an overlay changeset pre-apply notifier
|
* If an error is returned by an overlay changeset pre-apply notifier
|
||||||
* then no further overlay changeset pre-apply notifier will be called.
|
* then no further overlay changeset pre-apply notifier will be called.
|
||||||
*
|
*
|
||||||
* A non-zero return value will have created the changeset if error is from:
|
|
||||||
* - overlay changeset entry notifier
|
|
||||||
* - overlay changeset post-apply notifier
|
|
||||||
*
|
|
||||||
* If an error is returned by an overlay changeset post-apply notifier
|
* If an error is returned by an overlay changeset post-apply notifier
|
||||||
* then no further overlay changeset post-apply notifier will be called.
|
* then no further overlay changeset post-apply notifier will be called.
|
||||||
*
|
*
|
||||||
@@ -907,64 +896,35 @@ static void free_overlay_changeset(struct overlay_changeset *ovcs)
|
|||||||
* following attempt to apply or remove an overlay changeset will be
|
* following attempt to apply or remove an overlay changeset will be
|
||||||
* refused.
|
* refused.
|
||||||
*
|
*
|
||||||
* Returns 0 on success, or a negative error number. Overlay changeset
|
* Returns 0 on success, or a negative error number. On error return,
|
||||||
* id is returned to *ovcs_id.
|
* the caller of of_overlay_apply() must call free_overlay_changeset().
|
||||||
*/
|
*/
|
||||||
|
|
||||||
static int of_overlay_apply(const void *fdt, struct device_node *tree,
|
static int of_overlay_apply(struct overlay_changeset *ovcs)
|
||||||
int *ovcs_id)
|
|
||||||
{
|
{
|
||||||
struct overlay_changeset *ovcs;
|
|
||||||
int ret = 0, ret_revert, ret_tmp;
|
int ret = 0, ret_revert, ret_tmp;
|
||||||
|
|
||||||
/*
|
|
||||||
* As of this point, fdt and tree belong to the overlay changeset.
|
|
||||||
* overlay changeset code is responsible for freeing them.
|
|
||||||
*/
|
|
||||||
|
|
||||||
if (devicetree_corrupt()) {
|
if (devicetree_corrupt()) {
|
||||||
pr_err("devicetree state suspect, refuse to apply overlay\n");
|
pr_err("devicetree state suspect, refuse to apply overlay\n");
|
||||||
kfree(fdt);
|
|
||||||
kfree(tree);
|
|
||||||
ret = -EBUSY;
|
ret = -EBUSY;
|
||||||
goto out;
|
goto out;
|
||||||
}
|
}
|
||||||
|
|
||||||
ovcs = kzalloc(sizeof(*ovcs), GFP_KERNEL);
|
ret = of_resolve_phandles(ovcs->overlay_root);
|
||||||
if (!ovcs) {
|
if (ret)
|
||||||
kfree(fdt);
|
|
||||||
kfree(tree);
|
|
||||||
ret = -ENOMEM;
|
|
||||||
goto out;
|
goto out;
|
||||||
}
|
|
||||||
|
|
||||||
of_overlay_mutex_lock();
|
ret = init_overlay_changeset(ovcs);
|
||||||
mutex_lock(&of_mutex);
|
|
||||||
|
|
||||||
ret = of_resolve_phandles(tree);
|
|
||||||
if (ret)
|
if (ret)
|
||||||
goto err_free_tree;
|
goto out;
|
||||||
|
|
||||||
ret = init_overlay_changeset(ovcs, fdt, tree);
|
|
||||||
if (ret)
|
|
||||||
goto err_free_tree;
|
|
||||||
|
|
||||||
/*
|
|
||||||
* after overlay_notify(), ovcs->overlay_tree related pointers may have
|
|
||||||
* leaked to drivers, so can not kfree() tree, aka ovcs->overlay_tree;
|
|
||||||
* and can not free memory containing aligned fdt. The aligned fdt
|
|
||||||
* is contained within the memory at ovcs->fdt, possibly at an offset
|
|
||||||
* from ovcs->fdt.
|
|
||||||
*/
|
|
||||||
ret = overlay_notify(ovcs, OF_OVERLAY_PRE_APPLY);
|
ret = overlay_notify(ovcs, OF_OVERLAY_PRE_APPLY);
|
||||||
if (ret) {
|
if (ret)
|
||||||
pr_err("overlay changeset pre-apply notify error %d\n", ret);
|
goto out;
|
||||||
goto err_free_overlay_changeset;
|
|
||||||
}
|
|
||||||
|
|
||||||
ret = build_changeset(ovcs);
|
ret = build_changeset(ovcs);
|
||||||
if (ret)
|
if (ret)
|
||||||
goto err_free_overlay_changeset;
|
goto out;
|
||||||
|
|
||||||
ret_revert = 0;
|
ret_revert = 0;
|
||||||
ret = __of_changeset_apply_entries(&ovcs->cset, &ret_revert);
|
ret = __of_changeset_apply_entries(&ovcs->cset, &ret_revert);
|
||||||
@@ -974,7 +934,7 @@ static int of_overlay_apply(const void *fdt, struct device_node *tree,
|
|||||||
ret_revert);
|
ret_revert);
|
||||||
devicetree_state_flags |= DTSF_APPLY_FAIL;
|
devicetree_state_flags |= DTSF_APPLY_FAIL;
|
||||||
}
|
}
|
||||||
goto err_free_overlay_changeset;
|
goto out;
|
||||||
}
|
}
|
||||||
|
|
||||||
ret = __of_changeset_apply_notify(&ovcs->cset);
|
ret = __of_changeset_apply_notify(&ovcs->cset);
|
||||||
@@ -982,29 +942,10 @@ static int of_overlay_apply(const void *fdt, struct device_node *tree,
|
|||||||
pr_err("overlay apply changeset entry notify error %d\n", ret);
|
pr_err("overlay apply changeset entry notify error %d\n", ret);
|
||||||
/* notify failure is not fatal, continue */
|
/* notify failure is not fatal, continue */
|
||||||
|
|
||||||
list_add_tail(&ovcs->ovcs_list, &ovcs_list);
|
|
||||||
*ovcs_id = ovcs->id;
|
|
||||||
|
|
||||||
ret_tmp = overlay_notify(ovcs, OF_OVERLAY_POST_APPLY);
|
ret_tmp = overlay_notify(ovcs, OF_OVERLAY_POST_APPLY);
|
||||||
if (ret_tmp) {
|
if (ret_tmp)
|
||||||
pr_err("overlay changeset post-apply notify error %d\n",
|
|
||||||
ret_tmp);
|
|
||||||
if (!ret)
|
if (!ret)
|
||||||
ret = ret_tmp;
|
ret = ret_tmp;
|
||||||
}
|
|
||||||
|
|
||||||
goto out_unlock;
|
|
||||||
|
|
||||||
err_free_tree:
|
|
||||||
kfree(fdt);
|
|
||||||
kfree(tree);
|
|
||||||
|
|
||||||
err_free_overlay_changeset:
|
|
||||||
free_overlay_changeset(ovcs);
|
|
||||||
|
|
||||||
out_unlock:
|
|
||||||
mutex_unlock(&of_mutex);
|
|
||||||
of_overlay_mutex_unlock();
|
|
||||||
|
|
||||||
out:
|
out:
|
||||||
pr_debug("%s() err=%d\n", __func__, ret);
|
pr_debug("%s() err=%d\n", __func__, ret);
|
||||||
@@ -1013,15 +954,16 @@ out:
|
|||||||
}
|
}
|
||||||
|
|
||||||
int of_overlay_fdt_apply(const void *overlay_fdt, u32 overlay_fdt_size,
|
int of_overlay_fdt_apply(const void *overlay_fdt, u32 overlay_fdt_size,
|
||||||
int *ovcs_id)
|
int *ret_ovcs_id)
|
||||||
{
|
{
|
||||||
void *new_fdt;
|
void *new_fdt;
|
||||||
void *new_fdt_align;
|
void *new_fdt_align;
|
||||||
|
void *overlay_mem;
|
||||||
int ret;
|
int ret;
|
||||||
u32 size;
|
u32 size;
|
||||||
struct device_node *overlay_root = NULL;
|
struct overlay_changeset *ovcs;
|
||||||
|
|
||||||
*ovcs_id = 0;
|
*ret_ovcs_id = 0;
|
||||||
|
|
||||||
if (overlay_fdt_size < sizeof(struct fdt_header) ||
|
if (overlay_fdt_size < sizeof(struct fdt_header) ||
|
||||||
fdt_check_header(overlay_fdt)) {
|
fdt_check_header(overlay_fdt)) {
|
||||||
@@ -1033,41 +975,67 @@ int of_overlay_fdt_apply(const void *overlay_fdt, u32 overlay_fdt_size,
|
|||||||
if (overlay_fdt_size < size)
|
if (overlay_fdt_size < size)
|
||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
|
|
||||||
|
ovcs = kzalloc(sizeof(*ovcs), GFP_KERNEL);
|
||||||
|
if (!ovcs)
|
||||||
|
return -ENOMEM;
|
||||||
|
|
||||||
|
of_overlay_mutex_lock();
|
||||||
|
mutex_lock(&of_mutex);
|
||||||
|
|
||||||
|
/*
|
||||||
|
* ovcs->notify_state must be set to OF_OVERLAY_INIT before allocating
|
||||||
|
* ovcs resources, implicitly set by kzalloc() of ovcs
|
||||||
|
*/
|
||||||
|
|
||||||
|
ovcs->id = idr_alloc(&ovcs_idr, ovcs, 1, 0, GFP_KERNEL);
|
||||||
|
if (ovcs->id <= 0) {
|
||||||
|
ret = ovcs->id;
|
||||||
|
goto err_free_ovcs;
|
||||||
|
}
|
||||||
|
|
||||||
|
INIT_LIST_HEAD(&ovcs->ovcs_list);
|
||||||
|
list_add_tail(&ovcs->ovcs_list, &ovcs_list);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Must create permanent copy of FDT because of_fdt_unflatten_tree()
|
* Must create permanent copy of FDT because of_fdt_unflatten_tree()
|
||||||
* will create pointers to the passed in FDT in the unflattened tree.
|
* will create pointers to the passed in FDT in the unflattened tree.
|
||||||
*/
|
*/
|
||||||
new_fdt = kmalloc(size + FDT_ALIGN_SIZE, GFP_KERNEL);
|
new_fdt = kmalloc(size + FDT_ALIGN_SIZE, GFP_KERNEL);
|
||||||
if (!new_fdt)
|
if (!new_fdt) {
|
||||||
return -ENOMEM;
|
ret = -ENOMEM;
|
||||||
|
goto err_free_ovcs;
|
||||||
|
}
|
||||||
|
ovcs->new_fdt = new_fdt;
|
||||||
|
|
||||||
new_fdt_align = PTR_ALIGN(new_fdt, FDT_ALIGN_SIZE);
|
new_fdt_align = PTR_ALIGN(new_fdt, FDT_ALIGN_SIZE);
|
||||||
memcpy(new_fdt_align, overlay_fdt, size);
|
memcpy(new_fdt_align, overlay_fdt, size);
|
||||||
|
|
||||||
of_fdt_unflatten_tree(new_fdt_align, NULL, &overlay_root);
|
overlay_mem = of_fdt_unflatten_tree(new_fdt_align, NULL,
|
||||||
if (!overlay_root) {
|
&ovcs->overlay_root);
|
||||||
|
if (!overlay_mem) {
|
||||||
pr_err("unable to unflatten overlay_fdt\n");
|
pr_err("unable to unflatten overlay_fdt\n");
|
||||||
ret = -EINVAL;
|
ret = -EINVAL;
|
||||||
goto out_free_new_fdt;
|
goto err_free_ovcs;
|
||||||
}
|
}
|
||||||
|
ovcs->overlay_mem = overlay_mem;
|
||||||
|
|
||||||
ret = of_overlay_apply(new_fdt, overlay_root, ovcs_id);
|
ret = of_overlay_apply(ovcs);
|
||||||
if (ret < 0) {
|
if (ret < 0)
|
||||||
/*
|
goto err_free_ovcs;
|
||||||
* new_fdt and overlay_root now belong to the overlay
|
|
||||||
* changeset.
|
mutex_unlock(&of_mutex);
|
||||||
* overlay changeset code is responsible for freeing them.
|
of_overlay_mutex_unlock();
|
||||||
*/
|
|
||||||
goto out;
|
*ret_ovcs_id = ovcs->id;
|
||||||
}
|
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
|
|
||||||
|
err_free_ovcs:
|
||||||
|
free_overlay_changeset(ovcs);
|
||||||
|
|
||||||
out_free_new_fdt:
|
mutex_unlock(&of_mutex);
|
||||||
kfree(new_fdt);
|
of_overlay_mutex_unlock();
|
||||||
|
|
||||||
out:
|
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
EXPORT_SYMBOL_GPL(of_overlay_fdt_apply);
|
EXPORT_SYMBOL_GPL(of_overlay_fdt_apply);
|
||||||
@@ -1204,28 +1172,24 @@ int of_overlay_remove(int *ovcs_id)
|
|||||||
if (!ovcs) {
|
if (!ovcs) {
|
||||||
ret = -ENODEV;
|
ret = -ENODEV;
|
||||||
pr_err("remove: Could not find overlay #%d\n", *ovcs_id);
|
pr_err("remove: Could not find overlay #%d\n", *ovcs_id);
|
||||||
goto out_unlock;
|
goto err_unlock;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (!overlay_removal_is_ok(ovcs)) {
|
if (!overlay_removal_is_ok(ovcs)) {
|
||||||
ret = -EBUSY;
|
ret = -EBUSY;
|
||||||
goto out_unlock;
|
goto err_unlock;
|
||||||
}
|
}
|
||||||
|
|
||||||
ret = overlay_notify(ovcs, OF_OVERLAY_PRE_REMOVE);
|
ret = overlay_notify(ovcs, OF_OVERLAY_PRE_REMOVE);
|
||||||
if (ret) {
|
if (ret)
|
||||||
pr_err("overlay changeset pre-remove notify error %d\n", ret);
|
goto err_unlock;
|
||||||
goto out_unlock;
|
|
||||||
}
|
|
||||||
|
|
||||||
list_del(&ovcs->ovcs_list);
|
|
||||||
|
|
||||||
ret_apply = 0;
|
ret_apply = 0;
|
||||||
ret = __of_changeset_revert_entries(&ovcs->cset, &ret_apply);
|
ret = __of_changeset_revert_entries(&ovcs->cset, &ret_apply);
|
||||||
if (ret) {
|
if (ret) {
|
||||||
if (ret_apply)
|
if (ret_apply)
|
||||||
devicetree_state_flags |= DTSF_REVERT_FAIL;
|
devicetree_state_flags |= DTSF_REVERT_FAIL;
|
||||||
goto out_unlock;
|
goto err_unlock;
|
||||||
}
|
}
|
||||||
|
|
||||||
ret = __of_changeset_revert_notify(&ovcs->cset);
|
ret = __of_changeset_revert_notify(&ovcs->cset);
|
||||||
@@ -1235,17 +1199,24 @@ int of_overlay_remove(int *ovcs_id)
|
|||||||
|
|
||||||
*ovcs_id = 0;
|
*ovcs_id = 0;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Note that the overlay memory will be kfree()ed by
|
||||||
|
* free_overlay_changeset() even if the notifier for
|
||||||
|
* OF_OVERLAY_POST_REMOVE returns an error.
|
||||||
|
*/
|
||||||
ret_tmp = overlay_notify(ovcs, OF_OVERLAY_POST_REMOVE);
|
ret_tmp = overlay_notify(ovcs, OF_OVERLAY_POST_REMOVE);
|
||||||
if (ret_tmp) {
|
if (ret_tmp)
|
||||||
pr_err("overlay changeset post-remove notify error %d\n",
|
|
||||||
ret_tmp);
|
|
||||||
if (!ret)
|
if (!ret)
|
||||||
ret = ret_tmp;
|
ret = ret_tmp;
|
||||||
}
|
|
||||||
|
|
||||||
free_overlay_changeset(ovcs);
|
free_overlay_changeset(ovcs);
|
||||||
|
|
||||||
out_unlock:
|
err_unlock:
|
||||||
|
/*
|
||||||
|
* If jumped over free_overlay_changeset(), then did not kfree()
|
||||||
|
* overlay related memory. This is a memory leak unless a subsequent
|
||||||
|
* of_overlay_remove() of this overlay is successful.
|
||||||
|
*/
|
||||||
mutex_unlock(&of_mutex);
|
mutex_unlock(&of_mutex);
|
||||||
|
|
||||||
out:
|
out:
|
||||||
|
|||||||
@@ -550,6 +550,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
 	{ KE_KEY, 0x71, { KEY_F13 } }, /* General-purpose button */
 	{ KE_IGNORE, 0x79, }, /* Charger type dectection notification */
 	{ KE_KEY, 0x7a, { KEY_ALS_TOGGLE } }, /* Ambient Light Sensor Toggle */
+	{ KE_IGNORE, 0x7B, }, /* Charger connect/disconnect notification */
 	{ KE_KEY, 0x7c, { KEY_MICMUTE } },
 	{ KE_KEY, 0x7D, { KEY_BLUETOOTH } }, /* Bluetooth Enable */
 	{ KE_KEY, 0x7E, { KEY_BLUETOOTH } }, /* Bluetooth Disable */
@@ -575,6 +576,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
 	{ KE_KEY, 0xA6, { KEY_SWITCHVIDEOMODE } }, /* SDSP CRT + TV + HDMI */
 	{ KE_KEY, 0xA7, { KEY_SWITCHVIDEOMODE } }, /* SDSP LCD + CRT + TV + HDMI */
 	{ KE_KEY, 0xB5, { KEY_CALC } },
+	{ KE_IGNORE, 0xC0, }, /* External display connect/disconnect notification */
 	{ KE_KEY, 0xC4, { KEY_KBDILLUMUP } },
 	{ KE_KEY, 0xC5, { KEY_KBDILLUMDOWN } },
 	{ KE_IGNORE, 0xC6, }, /* Ambient Light Sensor notification */
@@ -902,10 +902,8 @@ static int ab8500_btemp_get_ext_psy_data(struct device *dev, void *data)
  */
 static void ab8500_btemp_external_power_changed(struct power_supply *psy)
 {
-	struct ab8500_btemp *di = power_supply_get_drvdata(psy);
-
-	class_for_each_device(power_supply_class, NULL,
-			      di->btemp_psy, ab8500_btemp_get_ext_psy_data);
+	class_for_each_device(power_supply_class, NULL, psy,
+			      ab8500_btemp_get_ext_psy_data);
 }
 
 /* ab8500 btemp driver interrupts and their respective isr */
@@ -2384,10 +2384,8 @@ out:
  */
 static void ab8500_fg_external_power_changed(struct power_supply *psy)
 {
-	struct ab8500_fg *di = power_supply_get_drvdata(psy);
-
-	class_for_each_device(power_supply_class, NULL,
-			      di->fg_psy, ab8500_fg_get_ext_psy_data);
+	class_for_each_device(power_supply_class, NULL, psy,
+			      ab8500_fg_get_ext_psy_data);
 }
 
 /**
@@ -1083,10 +1083,8 @@ static int poll_interval_param_set(const char *val, const struct kernel_param *k
 		return ret;
 
 	mutex_lock(&bq27xxx_list_lock);
-	list_for_each_entry(di, &bq27xxx_battery_devices, list) {
-		cancel_delayed_work_sync(&di->work);
-		schedule_delayed_work(&di->work, 0);
-	}
+	list_for_each_entry(di, &bq27xxx_battery_devices, list)
+		mod_delayed_work(system_wq, &di->work, 0);
 	mutex_unlock(&bq27xxx_list_lock);
 
 	return ret;
@@ -356,6 +356,10 @@ static int __power_supply_is_system_supplied(struct device *dev, void *data)
 	struct power_supply *psy = dev_get_drvdata(dev);
 	unsigned int *count = data;
 
+	if (!psy->desc->get_property(psy, POWER_SUPPLY_PROP_SCOPE, &ret))
+		if (ret.intval == POWER_SUPPLY_SCOPE_DEVICE)
+			return 0;
+
 	(*count)++;
 	if (psy->desc->type != POWER_SUPPLY_TYPE_BATTERY)
 		if (!psy->desc->get_property(psy, POWER_SUPPLY_PROP_ONLINE,
@@ -374,8 +378,8 @@ int power_supply_is_system_supplied(void)
 					 __power_supply_is_system_supplied);
 
 	/*
-	 * If no power class device was found at all, most probably we are
-	 * running on a desktop system, so assume we are on mains power.
+	 * If no system scope power class device was found at all, most probably we
+	 * are running on a desktop system, so assume we are on mains power.
 	 */
 	if (count == 0)
 		return 1;
@@ -277,7 +277,8 @@ static ssize_t power_supply_show_property(struct device *dev,
 
 	if (ret < 0) {
 		if (ret == -ENODATA)
-			dev_dbg(dev, "driver has no data for `%s' property\n",
+			dev_dbg_ratelimited(dev,
+				"driver has no data for `%s' property\n",
 				attr->attr.name);
 		else if (ret != -ENODEV && ret != -EAGAIN)
 			dev_err_ratelimited(dev,
@@ -733,13 +733,6 @@ static int sc27xx_fgu_set_property(struct power_supply *psy,
 	return ret;
 }
 
-static void sc27xx_fgu_external_power_changed(struct power_supply *psy)
-{
-	struct sc27xx_fgu_data *data = power_supply_get_drvdata(psy);
-
-	power_supply_changed(data->battery);
-}
-
 static int sc27xx_fgu_property_is_writeable(struct power_supply *psy,
 					    enum power_supply_property psp)
 {
@@ -774,7 +767,7 @@ static const struct power_supply_desc sc27xx_fgu_desc = {
 	.num_properties = ARRAY_SIZE(sc27xx_fgu_props),
 	.get_property = sc27xx_fgu_get_property,
 	.set_property = sc27xx_fgu_set_property,
-	.external_power_changed = sc27xx_fgu_external_power_changed,
+	.external_power_changed = power_supply_changed,
 	.property_is_writeable = sc27xx_fgu_property_is_writeable,
 	.no_thermal = true,
 };
@@ -5193,7 +5193,7 @@ static void rdev_init_debugfs(struct regulator_dev *rdev)
 	}
 
 	rdev->debugfs = debugfs_create_dir(rname, debugfs_root);
-	if (!rdev->debugfs) {
+	if (IS_ERR(rdev->debugfs)) {
 		rdev_warn(rdev, "Failed to create debugfs directory\n");
 		return;
 	}
@@ -6103,7 +6103,7 @@ static int __init regulator_init(void)
 	ret = class_register(&regulator_class);
 
 	debugfs_root = debugfs_create_dir("regulator", NULL);
-	if (!debugfs_root)
+	if (IS_ERR(debugfs_root))
 		pr_warn("regulator: Failed to create debugfs directory\n");
 
 #ifdef CONFIG_DEBUG_FS
@@ -975,7 +975,9 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr,
 static int dspi_setup(struct spi_device *spi)
 {
 	struct fsl_dspi *dspi = spi_controller_get_devdata(spi->controller);
+	u32 period_ns = DIV_ROUND_UP(NSEC_PER_SEC, spi->max_speed_hz);
 	unsigned char br = 0, pbr = 0, pcssck = 0, cssck = 0;
+	u32 quarter_period_ns = DIV_ROUND_UP(period_ns, 4);
 	u32 cs_sck_delay = 0, sck_cs_delay = 0;
 	struct fsl_dspi_platform_data *pdata;
 	unsigned char pasc = 0, asc = 0;
@@ -1003,6 +1005,19 @@ static int dspi_setup(struct spi_device *spi)
 		sck_cs_delay = pdata->sck_cs_delay;
 	}
 
+	/* Since tCSC and tASC apply to continuous transfers too, avoid SCK
+	 * glitches of half a cycle by never allowing tCSC + tASC to go below
+	 * half a SCK period.
+	 */
+	if (cs_sck_delay < quarter_period_ns)
+		cs_sck_delay = quarter_period_ns;
+	if (sck_cs_delay < quarter_period_ns)
+		sck_cs_delay = quarter_period_ns;
+
+	dev_dbg(&spi->dev,
+		"DSPI controller timing params: CS-to-SCK delay %u ns, SCK-to-CS delay %u ns\n",
+		cs_sck_delay, sck_cs_delay);
+
 	clkrate = clk_get_rate(dspi->clk);
 	hz_to_spi_baud(&pbr, &br, spi->max_speed_hz, clkrate);
 
@@ -192,9 +192,9 @@ static int dma_test_start_rings(struct dma_test *dt)
 	}
 
 	ret = tb_xdomain_enable_paths(dt->xd, dt->tx_hopid,
-				      dt->tx_ring ? dt->tx_ring->hop : 0,
+				      dt->tx_ring ? dt->tx_ring->hop : -1,
 				      dt->rx_hopid,
-				      dt->rx_ring ? dt->rx_ring->hop : 0);
+				      dt->rx_ring ? dt->rx_ring->hop : -1);
 	if (ret) {
 		dma_test_free_rings(dt);
 		return ret;
@@ -218,9 +218,9 @@ static void dma_test_stop_rings(struct dma_test *dt)
 		tb_ring_stop(dt->tx_ring);
 
 	ret = tb_xdomain_disable_paths(dt->xd, dt->tx_hopid,
-				       dt->tx_ring ? dt->tx_ring->hop : 0,
+				       dt->tx_ring ? dt->tx_ring->hop : -1,
 				       dt->rx_hopid,
-				       dt->rx_ring ? dt->rx_ring->hop : 0);
+				       dt->rx_ring ? dt->rx_ring->hop : -1);
 	if (ret)
 		dev_warn(&dt->svc->dev, "failed to disable DMA paths\n");
 
@@ -53,9 +53,14 @@ static int ring_interrupt_index(const struct tb_ring *ring)
 
 static void nhi_mask_interrupt(struct tb_nhi *nhi, int mask, int ring)
 {
-	if (nhi->quirks & QUIRK_AUTO_CLEAR_INT)
-		return;
-	iowrite32(mask, nhi->iobase + REG_RING_INTERRUPT_MASK_CLEAR_BASE + ring);
+	if (nhi->quirks & QUIRK_AUTO_CLEAR_INT) {
+		u32 val;
+
+		val = ioread32(nhi->iobase + REG_RING_INTERRUPT_BASE + ring);
+		iowrite32(val & ~mask, nhi->iobase + REG_RING_INTERRUPT_BASE + ring);
+	} else {
+		iowrite32(mask, nhi->iobase + REG_RING_INTERRUPT_MASK_CLEAR_BASE + ring);
+	}
 }
 
 static void nhi_clear_interrupt(struct tb_nhi *nhi, int ring)
@@ -274,6 +274,7 @@ lqasc_err_int(int irq, void *_port)
 	struct ltq_uart_port *ltq_port = to_ltq_uart_port(port);

 	spin_lock_irqsave(&ltq_port->lock, flags);
+	__raw_writel(ASC_IRNCR_EIR, port->membase + LTQ_ASC_IRNCR);
 	/* clear any pending interrupts */
 	asc_update_bits(0, ASCWHBSTATE_CLRPE | ASCWHBSTATE_CLRFE |
 		ASCWHBSTATE_CLRROE, port->membase + LTQ_ASC_WHBSTATE);
@@ -198,6 +198,7 @@ static void dwc3_gadget_del_and_unmap_request(struct dwc3_ep *dep,
 	list_del(&req->list);
 	req->remaining = 0;
 	req->needs_extra_trb = false;
+	req->num_trbs = 0;

 	if (req->request.status == -EINPROGRESS)
 		req->request.status = status;
@@ -248,6 +248,8 @@ static void option_instat_callback(struct urb *urb);
 #define QUECTEL_VENDOR_ID			0x2c7c
 /* These Quectel products use Quectel's vendor ID */
 #define QUECTEL_PRODUCT_EC21			0x0121
+#define QUECTEL_PRODUCT_EM061K_LTA		0x0123
+#define QUECTEL_PRODUCT_EM061K_LMS		0x0124
 #define QUECTEL_PRODUCT_EC25			0x0125
 #define QUECTEL_PRODUCT_EG91			0x0191
 #define QUECTEL_PRODUCT_EG95			0x0195
@@ -266,6 +268,8 @@ static void option_instat_callback(struct urb *urb);
 #define QUECTEL_PRODUCT_RM520N			0x0801
 #define QUECTEL_PRODUCT_EC200U			0x0901
 #define QUECTEL_PRODUCT_EC200S_CN		0x6002
+#define QUECTEL_PRODUCT_EM061K_LWW		0x6008
+#define QUECTEL_PRODUCT_EM061K_LCN		0x6009
 #define QUECTEL_PRODUCT_EC200T			0x6026
 #define QUECTEL_PRODUCT_RM500K			0x7001

@@ -1189,6 +1193,18 @@ static const struct usb_device_id option_ids[] = {
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x30) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x30) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0x00, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0xff, 0x30) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0x00, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0xff, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0xff, 0x30) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0x00, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0xff, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0xff, 0x30) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0x00, 0x40) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0xff, 0x40) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
@@ -115,8 +115,8 @@ responded:
 		}
 	}

-	if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) &&
-	    rtt_us < server->probe.rtt) {
+	rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us);
+	if (rtt_us < server->probe.rtt) {
 		server->probe.rtt = rtt_us;
 		server->rtt = rtt_us;
 		alist->preferred = index;
@@ -2576,10 +2576,20 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
 	}

 	ret = inc_block_group_ro(cache, 0);
-	if (!do_chunk_alloc || ret == -ETXTBSY)
-		goto unlock_out;
 	if (!ret)
 		goto out;
+	if (ret == -ETXTBSY)
+		goto unlock_out;
+
+	/*
+	 * Skip chunk allocation if the bg is SYSTEM, this is to avoid system
+	 * chunk allocation storm to exhaust the system chunk array. Otherwise
+	 * we still want to try our best to mark the block group read-only.
+	 */
+	if (!do_chunk_alloc && ret == -ENOSPC &&
+	    (cache->flags & BTRFS_BLOCK_GROUP_SYSTEM))
+		goto unlock_out;
+
 	alloc_flags = btrfs_get_alloc_profile(fs_info, cache->space_info->flags);
 	ret = btrfs_chunk_alloc(trans, alloc_flags, CHUNK_ALLOC_FORCE);
 	if (ret < 0)
@@ -700,7 +700,9 @@ blk_status_t btrfs_csum_one_bio(struct btrfs_inode *inode, struct bio *bio,
 			sums = kvzalloc(btrfs_ordered_sum_size(fs_info,
 					      bytes_left), GFP_KERNEL);
 			memalloc_nofs_restore(nofs_flag);
-			BUG_ON(!sums); /* -ENOMEM */
+			if (!sums)
+				return BLK_STS_RESOURCE;
+
 			sums->len = bytes_left;
 			ordered = btrfs_lookup_ordered_extent(inode,
 							      offset);
@@ -3812,13 +3812,20 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,

 		if (ret == 0) {
 			ro_set = 1;
-		} else if (ret == -ENOSPC && !sctx->is_dev_replace) {
+		} else if (ret == -ENOSPC && !sctx->is_dev_replace &&
+			   !(cache->flags & BTRFS_BLOCK_GROUP_RAID56_MASK)) {
 			/*
 			 * btrfs_inc_block_group_ro return -ENOSPC when it
 			 * failed in creating new chunk for metadata.
 			 * It is not a problem for scrub, because
 			 * metadata are always cowed, and our scrub paused
 			 * commit_transactions.
+			 *
+			 * For RAID56 chunks, we have to mark them read-only
+			 * for scrub, as later we would use our own cache
+			 * out of RAID56 realm.
+			 * Thus we want the RAID56 bg to be marked RO to
+			 * prevent RMW from screwing up our cache.
 			 */
 			ro_set = 0;
 		} else if (ret == -ETXTBSY) {
@@ -4930,9 +4930,13 @@ oplock_break_ack:
 	 * disconnected since oplock already released by the server
 	 */
 	if (!oplock_break_cancelled) {
-		rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid,
-			volatile_fid, net_fid, cinode);
-		cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
+		/* check for server null since can race with kill_sb calling tree disconnect */
+		if (tcon->ses && tcon->ses->server) {
+			rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid,
+				volatile_fid, net_fid, cinode);
+			cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
+		} else
+			pr_warn_once("lease break not sent for unmounted share\n");
 	}

 	cifs_done_oplock_break(cinode);
@@ -1754,7 +1754,11 @@ static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
 {
 	int ret = default_wake_function(wq_entry, mode, sync, key);

-	list_del_init(&wq_entry->entry);
+	/*
+	 * Pairs with list_empty_careful in ep_poll, and ensures future loop
+	 * iterations see the cause of this wakeup.
+	 */
+	list_del_init_careful(&wq_entry->entry);
 	return ret;
 }
@@ -322,17 +322,15 @@ static ext4_fsblk_t ext4_valid_block_bitmap_padding(struct super_block *sb,
 struct ext4_group_info *ext4_get_group_info(struct super_block *sb,
 					    ext4_group_t group)
 {
 	struct ext4_group_info **grp_info;
 	long indexv, indexh;

-	if (unlikely(group >= EXT4_SB(sb)->s_groups_count)) {
-		ext4_error(sb, "invalid group %u", group);
-		return NULL;
-	}
+	if (unlikely(group >= EXT4_SB(sb)->s_groups_count))
+		return NULL;
 	indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb));
 	indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1);
 	grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv);
 	return grp_info[indexh];
 }

 /*
@@ -1083,16 +1083,16 @@ int smb2_handle_negotiate(struct ksmbd_work *work)
 		return rc;
 	}

-	if (req->DialectCount == 0) {
-		pr_err("malformed packet\n");
+	smb2_buf_len = get_rfc1002_len(work->request_buf);
+	smb2_neg_size = offsetof(struct smb2_negotiate_req, Dialects) - 4;
+	if (smb2_neg_size > smb2_buf_len) {
 		rsp->hdr.Status = STATUS_INVALID_PARAMETER;
 		rc = -EINVAL;
 		goto err_out;
 	}

-	smb2_buf_len = get_rfc1002_len(work->request_buf);
-	smb2_neg_size = offsetof(struct smb2_negotiate_req, Dialects) - 4;
-	if (smb2_neg_size > smb2_buf_len) {
+	if (req->DialectCount == 0) {
+		pr_err("malformed packet\n");
 		rsp->hdr.Status = STATUS_INVALID_PARAMETER;
 		rc = -EINVAL;
 		goto err_out;
@@ -285,6 +285,14 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc,
 	if (nbh == NULL) {	/* blocksize == pagesize */
 		xa_erase_irq(&btnc->i_pages, newkey);
 		unlock_page(ctxt->bh->b_page);
-	} else
-		brelse(nbh);
+	} else {
+		/*
+		 * When canceling a buffer that a prepare operation has
+		 * allocated to copy a node block to another location, use
+		 * nilfs_btnode_delete() to initialize and release the buffer
+		 * so that the buffer flags will not be in an inconsistent
+		 * state when it is reallocated.
+		 */
+		nilfs_btnode_delete(nbh);
+	}
 }
@@ -779,6 +779,15 @@ int nilfs_sufile_resize(struct inode *sufile, __u64 newnsegs)
 			goto out_header;

 		sui->ncleansegs -= nsegs - newnsegs;
+
+		/*
+		 * If the sufile is successfully truncated, immediately adjust
+		 * the segment allocation space while locking the semaphore
+		 * "mi_sem" so that nilfs_sufile_alloc() never allocates
+		 * segments in the truncated space.
+		 */
+		sui->allocmax = newnsegs - 1;
+		sui->allocmin = 0;
 	}

 	kaddr = kmap_atomic(header_bh->b_page);
@@ -405,6 +405,18 @@ unsigned long nilfs_nrsvsegs(struct the_nilfs *nilfs, unsigned long nsegs)
 		  100));
 }

+/**
+ * nilfs_max_segment_count - calculate the maximum number of segments
+ * @nilfs: nilfs object
+ */
+static u64 nilfs_max_segment_count(struct the_nilfs *nilfs)
+{
+	u64 max_count = U64_MAX;
+
+	do_div(max_count, nilfs->ns_blocks_per_segment);
+	return min_t(u64, max_count, ULONG_MAX);
+}
+
 void nilfs_set_nsegments(struct the_nilfs *nilfs, unsigned long nsegs)
 {
 	nilfs->ns_nsegments = nsegs;
@@ -414,6 +426,8 @@ void nilfs_set_nsegments(struct the_nilfs *nilfs, unsigned long nsegs)
 static int nilfs_store_disk_layout(struct the_nilfs *nilfs,
 				   struct nilfs_super_block *sbp)
 {
+	u64 nsegments, nblocks;
+
 	if (le32_to_cpu(sbp->s_rev_level) < NILFS_MIN_SUPP_REV) {
 		nilfs_err(nilfs->ns_sb,
 			  "unsupported revision (superblock rev.=%d.%d, current rev.=%d.%d). Please check the version of mkfs.nilfs(2).",
@@ -457,7 +471,35 @@ static int nilfs_store_disk_layout(struct the_nilfs *nilfs,
 		return -EINVAL;
 	}

-	nilfs_set_nsegments(nilfs, le64_to_cpu(sbp->s_nsegments));
+	nsegments = le64_to_cpu(sbp->s_nsegments);
+	if (nsegments > nilfs_max_segment_count(nilfs)) {
+		nilfs_err(nilfs->ns_sb,
+			  "segment count %llu exceeds upper limit (%llu segments)",
+			  (unsigned long long)nsegments,
+			  (unsigned long long)nilfs_max_segment_count(nilfs));
+		return -EINVAL;
+	}
+
+	nblocks = (u64)i_size_read(nilfs->ns_sb->s_bdev->bd_inode) >>
+		nilfs->ns_sb->s_blocksize_bits;
+	if (nblocks) {
+		u64 min_block_count = nsegments * nilfs->ns_blocks_per_segment;
+		/*
+		 * To avoid failing to mount early device images without a
+		 * second superblock, exclude that block count from the
+		 * "min_block_count" calculation.
+		 */
+
+		if (nblocks < min_block_count) {
+			nilfs_err(nilfs->ns_sb,
+				  "total number of segment blocks %llu exceeds device size (%llu blocks)",
+				  (unsigned long long)min_block_count,
+				  (unsigned long long)nblocks);
+			return -EINVAL;
+		}
+	}
+
+	nilfs_set_nsegments(nilfs, nsegments);
 	nilfs->ns_crc_seed = le32_to_cpu(sbp->s_crc_seed);
 	return 0;
 }
@@ -2103,14 +2103,20 @@ static long ocfs2_fallocate(struct file *file, int mode, loff_t offset,
 	struct ocfs2_space_resv sr;
 	int change_size = 1;
 	int cmd = OCFS2_IOC_RESVSP64;
+	int ret = 0;

 	if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
 		return -EOPNOTSUPP;
 	if (!ocfs2_writes_unwritten_extents(osb))
 		return -EOPNOTSUPP;

-	if (mode & FALLOC_FL_KEEP_SIZE)
+	if (mode & FALLOC_FL_KEEP_SIZE) {
 		change_size = 0;
+	} else {
+		ret = inode_newsize_ok(inode, offset + len);
+		if (ret)
+			return ret;
+	}

 	if (mode & FALLOC_FL_PUNCH_HOLE)
 		cmd = OCFS2_IOC_UNRESVSP64;
@@ -954,8 +954,10 @@ static void ocfs2_disable_quotas(struct ocfs2_super *osb)
 	for (type = 0; type < OCFS2_MAXQUOTAS; type++) {
 		if (!sb_has_quota_loaded(sb, type))
 			continue;
-		oinfo = sb_dqinfo(sb, type)->dqi_priv;
-		cancel_delayed_work_sync(&oinfo->dqi_sync_work);
+		if (!sb_has_quota_suspended(sb, type)) {
+			oinfo = sb_dqinfo(sb, type)->dqi_priv;
+			cancel_delayed_work_sync(&oinfo->dqi_sync_work);
+		}
 		inode = igrab(sb->s_dquot.files[type]);
 		/* Turn off quotas. This will remove all dquot structures from
 		 * memory and so they will be automatically synced to global
@@ -1866,7 +1866,6 @@ enum netdev_ml_priv_type {
 *	@tipc_ptr:	TIPC specific data
 *	@atalk_ptr:	AppleTalk link
 *	@ip_ptr:	IPv4 specific data
-*	@dn_ptr:	DECnet specific data
 *	@ip6_ptr:	IPv6 specific data
 *	@ax25_ptr:	AX.25 specific data
 *	@ieee80211_ptr:	IEEE 802.11 specific data, assign before registering
@@ -2149,9 +2148,6 @@ struct net_device {
 	void			*atalk_ptr;
 #endif
 	struct in_device __rcu	*ip_ptr;
-#if IS_ENABLED(CONFIG_DECNET)
-	struct dn_dev __rcu	*dn_ptr;
-#endif
 	struct inet6_dev __rcu	*ip6_ptr;
 #if IS_ENABLED(CONFIG_AX25)
 	void			*ax25_ptr;
@@ -246,11 +246,6 @@ static inline int nf_hook(u_int8_t pf, unsigned int hook, struct net *net,
 		hook_head = rcu_dereference(net->nf.hooks_bridge[hook]);
 #endif
 		break;
-#if IS_ENABLED(CONFIG_DECNET)
-	case NFPROTO_DECNET:
-		hook_head = rcu_dereference(net->nf.hooks_decnet[hook]);
-		break;
-#endif
 	default:
 		WARN_ON_ONCE(1);
 		break;
@@ -7,14 +7,6 @@
 /* in/out/forward only */
 #define NF_ARP_NUMHOOKS 3

-/* max hook is NF_DN_ROUTE (6), also see uapi/linux/netfilter_decnet.h */
-#define NF_DN_NUMHOOKS 7
-
-#if IS_ENABLED(CONFIG_DECNET)
-/* Largest hook number + 1, see uapi/linux/netfilter_decnet.h */
-#define NF_MAX_HOOKS	NF_DN_NUMHOOKS
-#else
 #define NF_MAX_HOOKS	NF_INET_NUMHOOKS
-#endif

 #endif
@@ -1486,12 +1486,26 @@ static inline bool of_device_is_system_power_controller(const struct device_node
 */

 enum of_overlay_notify_action {
-	OF_OVERLAY_PRE_APPLY = 0,
+	OF_OVERLAY_INIT = 0,	/* kzalloc() of ovcs sets this value */
+	OF_OVERLAY_PRE_APPLY,
 	OF_OVERLAY_POST_APPLY,
 	OF_OVERLAY_PRE_REMOVE,
 	OF_OVERLAY_POST_REMOVE,
 };

+static inline char *of_overlay_action_name(enum of_overlay_notify_action action)
+{
+	static char *of_overlay_action_name[] = {
+		"init",
+		"pre-apply",
+		"post-apply",
+		"pre-remove",
+		"post-remove",
+	};
+
+	return of_overlay_action_name[action];
+}
+
 struct of_overlay_notify_data {
 	struct device_node *overlay;
 	struct device_node *target;
--- a/include/net/dn.h	(231 lines removed)
+++ /dev/null
@@ -1,231 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _NET_DN_H
-#define _NET_DN_H
-
-#include <linux/dn.h>
-#include <net/sock.h>
-#include <net/flow.h>
-#include <asm/byteorder.h>
-#include <asm/unaligned.h>
-
-struct dn_scp                                   /* Session Control Port */
-{
-	unsigned char		state;
-#define DN_O	1			/* Open			*/
-#define DN_CR	2			/* Connect Receive	*/
-#define DN_DR	3			/* Disconnect Reject	*/
-#define DN_DRC	4			/* Discon. Rej. Complete*/
-#define DN_CC	5			/* Connect Confirm	*/
-#define DN_CI	6			/* Connect Initiate	*/
-#define DN_NR	7			/* No resources		*/
-#define DN_NC	8			/* No communication	*/
-#define DN_CD	9			/* Connect Delivery	*/
-#define DN_RJ	10			/* Rejected		*/
-#define DN_RUN	11			/* Running		*/
-#define DN_DI	12			/* Disconnect Initiate	*/
-#define DN_DIC	13			/* Disconnect Complete	*/
-#define DN_DN	14			/* Disconnect Notificat	*/
-#define DN_CL	15			/* Closed		*/
-#define DN_CN	16			/* Closed Notification	*/
-
-	__le16		addrloc;
-	__le16		addrrem;
-	__u16		numdat;
-	__u16		numoth;
-	__u16		numoth_rcv;
-	__u16		numdat_rcv;
-	__u16		ackxmt_dat;
-	__u16		ackxmt_oth;
-	__u16		ackrcv_dat;
-	__u16		ackrcv_oth;
-	__u8		flowrem_sw;
-	__u8		flowloc_sw;
-#define DN_SEND		2
-#define DN_DONTSEND	1
-#define DN_NOCHANGE	0
-	__u16		flowrem_dat;
-	__u16		flowrem_oth;
-	__u16		flowloc_dat;
-	__u16		flowloc_oth;
-	__u8		services_rem;
-	__u8		services_loc;
-	__u8		info_rem;
-	__u8		info_loc;
-
-	__u16		segsize_rem;
-	__u16		segsize_loc;
-
-	__u8		nonagle;
-	__u8		multi_ireq;
-	__u8		accept_mode;
-	unsigned long	seg_total;	/* Running total of current segment */
-
-	struct optdata_dn	conndata_in;
-	struct optdata_dn	conndata_out;
-	struct optdata_dn	discdata_in;
-	struct optdata_dn	discdata_out;
-	struct accessdata_dn	accessdata;
-
-	struct sockaddr_dn	addr;	/* Local address */
-	struct sockaddr_dn	peer;	/* Remote address */
-
-	/*
-	 * In this case the RTT estimation is not specified in the
-	 * docs, nor is any back off algorithm. Here we follow well
-	 * known tcp algorithms with a few small variations.
-	 *
-	 * snd_window: Max number of packets we send before we wait for
-	 *             an ack to come back. This will become part of a
-	 *             more complicated scheme when we support flow
-	 *             control.
-	 *
-	 * nsp_srtt: Round-Trip-Time (x8) in jiffies. This is a rolling
-	 *           average.
-	 * nsp_rttvar: Round-Trip-Time-Varience (x4) in jiffies. This is the
-	 *             varience of the smoothed average (but calculated in
-	 *             a simpler way than for normal statistical varience
-	 *             calculations).
-	 *
-	 * nsp_rxtshift: Backoff counter. Value is zero normally, each time
-	 *               a packet is lost is increases by one until an ack
-	 *               is received. Its used to index an array of backoff
-	 *               multipliers.
-	 */
-#define NSP_MIN_WINDOW 1
-#define NSP_MAX_WINDOW (0x07fe)
-	unsigned long	max_window;
-	unsigned long	snd_window;
-#define NSP_INITIAL_SRTT (HZ)
-	unsigned long	nsp_srtt;
-#define NSP_INITIAL_RTTVAR (HZ*3)
-	unsigned long	nsp_rttvar;
-#define NSP_MAXRXTSHIFT 12
-	unsigned long	nsp_rxtshift;
-
-	/*
-	 * Output queues, one for data, one for otherdata/linkservice
-	 */
-	struct sk_buff_head	data_xmit_queue;
-	struct sk_buff_head	other_xmit_queue;
-
-	/*
-	 * Input queue for other data
-	 */
-	struct sk_buff_head	other_receive_queue;
-	int			other_report;
-
-	/*
-	 * Stuff to do with the slow timer
-	 */
-	unsigned long		stamp;	/* time of last transmit */
-	unsigned long		persist;
-	int (*persist_fxn)(struct sock *sk);
-	unsigned long		keepalive;
-	void (*keepalive_fxn)(struct sock *sk);
-
-};
-
-static inline struct dn_scp *DN_SK(struct sock *sk)
-{
-	return (struct dn_scp *)(sk + 1);
-}
-
-/*
- * src,dst : Source and Destination DECnet addresses
- * hops : Number of hops through the network
- * dst_port, src_port : NSP port numbers
- * services, info : Useful data extracted from conninit messages
- * rt_flags : Routing flags byte
- * nsp_flags : NSP layer flags byte
- * segsize : Size of segment
- * segnum : Number, for data, otherdata and linkservice
- * xmit_count : Number of times we've transmitted this skb
- * stamp : Time stamp of most recent transmission, used in RTT calculations
- * iif: Input interface number
- *
- * As a general policy, this structure keeps all addresses in network
- * byte order, and all else in host byte order. Thus dst, src, dst_port
- * and src_port are in network order. All else is in host order.
- *
- */
-#define DN_SKB_CB(skb) ((struct dn_skb_cb *)(skb)->cb)
-struct dn_skb_cb {
-	__le16 dst;
-	__le16 src;
-	__u16 hops;
-	__le16 dst_port;
-	__le16 src_port;
-	__u8 services;
-	__u8 info;
-	__u8 rt_flags;
-	__u8 nsp_flags;
-	__u16 segsize;
-	__u16 segnum;
-	__u16 xmit_count;
-	unsigned long stamp;
-	int iif;
-};
-
-static inline __le16 dn_eth2dn(unsigned char *ethaddr)
-{
-	return get_unaligned((__le16 *)(ethaddr + 4));
-}
-
-static inline __le16 dn_saddr2dn(struct sockaddr_dn *saddr)
-{
-	return *(__le16 *)saddr->sdn_nodeaddr;
-}
-
-static inline void dn_dn2eth(unsigned char *ethaddr, __le16 addr)
-{
-	__u16 a = le16_to_cpu(addr);
-	ethaddr[0] = 0xAA;
-	ethaddr[1] = 0x00;
-	ethaddr[2] = 0x04;
-	ethaddr[3] = 0x00;
-	ethaddr[4] = (__u8)(a & 0xff);
-	ethaddr[5] = (__u8)(a >> 8);
-}
-
-static inline void dn_sk_ports_copy(struct flowidn *fld, struct dn_scp *scp)
-{
-	fld->fld_sport = scp->addrloc;
-	fld->fld_dport = scp->addrrem;
-}
-
-unsigned int dn_mss_from_pmtu(struct net_device *dev, int mtu);
-void dn_register_sysctl(void);
-void dn_unregister_sysctl(void);
-
-#define DN_MENUVER_ACC 0x01
-#define DN_MENUVER_USR 0x02
-#define DN_MENUVER_PRX 0x04
-#define DN_MENUVER_UIC 0x08
-
-struct sock *dn_sklist_find_listener(struct sockaddr_dn *addr);
-struct sock *dn_find_by_skb(struct sk_buff *skb);
-#define DN_ASCBUF_LEN 9
-char *dn_addr2asc(__u16, char *);
-int dn_destroy_timer(struct sock *sk);
-
-int dn_sockaddr2username(struct sockaddr_dn *addr, unsigned char *buf,
-			 unsigned char type);
-int dn_username2sockaddr(unsigned char *data, int len, struct sockaddr_dn *addr,
-			 unsigned char *type);
-
-void dn_start_slow_timer(struct sock *sk);
-void dn_stop_slow_timer(struct sock *sk);
-
-extern __le16 decnet_address;
-extern int decnet_debug_level;
-extern int decnet_time_wait;
-extern int decnet_dn_count;
-extern int decnet_di_count;
-extern int decnet_dr_count;
-extern int decnet_no_fc_max_cwnd;
-
-extern long sysctl_decnet_mem[3];
-extern int sysctl_decnet_wmem[3];
-extern int sysctl_decnet_rmem[3];
-
-#endif /* _NET_DN_H */
@@ -1,199 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _NET_DN_DEV_H
#define _NET_DN_DEV_H


struct dn_dev;

struct dn_ifaddr {
	struct dn_ifaddr __rcu *ifa_next;
	struct dn_dev    *ifa_dev;
	__le16            ifa_local;
	__le16            ifa_address;
	__u32             ifa_flags;
	__u8              ifa_scope;
	char              ifa_label[IFNAMSIZ];
	struct rcu_head   rcu;
};

#define DN_DEV_S_RU 0 /* Run - working normally   */
#define DN_DEV_S_CR 1 /* Circuit Rejected         */
#define DN_DEV_S_DS 2 /* Data Link Start          */
#define DN_DEV_S_RI 3 /* Routing Layer Initialize */
#define DN_DEV_S_RV 4 /* Routing Layer Verify     */
#define DN_DEV_S_RC 5 /* Routing Layer Complete   */
#define DN_DEV_S_OF 6 /* Off                      */
#define DN_DEV_S_HA 7 /* Halt                     */


/*
 * The dn_dev_parms structure contains the set of parameters
 * for each device (hence its inclusion in the dn_dev structure)
 * and an array is used to store the default types of supported
 * device (in dn_dev.c).
 *
 * The type field matches the ARPHRD_ constants and is used in
 * searching the list for supported devices when new devices
 * come up.
 *
 * The mode field is used to find out if a device is broadcast,
 * multipoint, or pointopoint. Please note that DECnet thinks
 * about devices differently from the rest of the kernel, so the
 * normal IFF_xxx flags are invalid here. For devices which can
 * be any combination of the previously mentioned attributes,
 * you can set this on a per device basis by installing an
 * up() routine.
 *
 * The device state field defines the initial state in which the
 * device will come up. In the dn_dev structure, it is the actual
 * state.
 *
 * Things have changed here. I've killed timer1 since it's a user
 * space issue for a routing daemon to sort out. The kernel does
 * not need to be bothered with it.
 *
 * Timers:
 * t2 - Rate limit timer, min time between routing and hello messages
 * t3 - Hello timer, send hello messages when it expires
 *
 * Callbacks:
 * up() - Called to initialize device, return value can veto use of
 *        device with DECnet.
 * down() - Called to turn device off when it goes down
 * timer3() - Called once for each ifaddr when timer 3 goes off
 *
 * sysctl - Hook for sysctl things
 */
struct dn_dev_parms {
	int type;	  /* ARPHRD_xxx                        */
	int mode;	  /* Broadcast, Unicast, Multipoint    */
#define DN_DEV_BCAST  1
#define DN_DEV_UCAST  2
#define DN_DEV_MPOINT 4
	int state;        /* Initial state                     */
	int forwarding;	  /* 0=EndNode, 1=L1Router, 2=L2Router */
	unsigned long t2; /* Default value of t2               */
	unsigned long t3; /* Default value of t3               */
	int priority;     /* Priority to be a router           */
	char *name;       /* Name for sysctl                   */
	int  (*up)(struct net_device *);
	void (*down)(struct net_device *);
	void (*timer3)(struct net_device *, struct dn_ifaddr *ifa);
	void *sysctl;
};

struct dn_dev {
	struct dn_ifaddr __rcu *ifa_list;
	struct net_device *dev;
	struct dn_dev_parms parms;
	char use_long;
	struct timer_list timer;
	unsigned long t3;
	struct neigh_parms *neigh_parms;
	__u8 addr[ETH_ALEN];
	struct neighbour *router; /* Default router on circuit */
	struct neighbour *peer;   /* Peer on pointopoint links */
	unsigned long uptime;     /* Time device went up in jiffies */
};

struct dn_short_packet {
	__u8    msgflg;
	__le16  dstnode;
	__le16  srcnode;
	__u8    forward;
} __packed;

struct dn_long_packet {
	__u8   msgflg;
	__u8   d_area;
	__u8   d_subarea;
	__u8   d_id[6];
	__u8   s_area;
	__u8   s_subarea;
	__u8   s_id[6];
	__u8   nl2;
	__u8   visit_ct;
	__u8   s_class;
	__u8   pt;
} __packed;

/*------------------------- DRP - Routing messages ---------------------*/

struct endnode_hello_message {
	__u8   msgflg;
	__u8   tiver[3];
	__u8   id[6];
	__u8   iinfo;
	__le16 blksize;
	__u8   area;
	__u8   seed[8];
	__u8   neighbor[6];
	__le16 timer;
	__u8   mpd;
	__u8   datalen;
	__u8   data[2];
} __packed;

struct rtnode_hello_message {
	__u8   msgflg;
	__u8   tiver[3];
	__u8   id[6];
	__u8   iinfo;
	__le16 blksize;
	__u8   priority;
	__u8   area;
	__le16 timer;
	__u8   mpd;
} __packed;


void dn_dev_init(void);
void dn_dev_cleanup(void);

int dn_dev_ioctl(unsigned int cmd, void __user *arg);

void dn_dev_devices_off(void);
void dn_dev_devices_on(void);

void dn_dev_init_pkt(struct sk_buff *skb);
void dn_dev_veri_pkt(struct sk_buff *skb);
void dn_dev_hello(struct sk_buff *skb);

void dn_dev_up(struct net_device *);
void dn_dev_down(struct net_device *);

int dn_dev_set_default(struct net_device *dev, int force);
struct net_device *dn_dev_get_default(void);
int dn_dev_bind_default(__le16 *addr);

int register_dnaddr_notifier(struct notifier_block *nb);
int unregister_dnaddr_notifier(struct notifier_block *nb);

static inline int dn_dev_islocal(struct net_device *dev, __le16 addr)
{
	struct dn_dev *dn_db;
	struct dn_ifaddr *ifa;
	int res = 0;

	rcu_read_lock();
	dn_db = rcu_dereference(dev->dn_ptr);
	if (dn_db == NULL) {
		printk(KERN_DEBUG "dn_dev_islocal: Called for non DECnet device\n");
		goto out;
	}

	for (ifa = rcu_dereference(dn_db->ifa_list);
	     ifa != NULL;
	     ifa = rcu_dereference(ifa->ifa_next))
		if ((addr ^ ifa->ifa_local) == 0) {
			res = 1;
			break;
		}
out:
	rcu_read_unlock();
	return res;
}

#endif /* _NET_DN_DEV_H */